---
title: Cloudflare Workers
description: With Cloudflare Workers, you can expect to:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Cloudflare Workers

A serverless platform for building, deploying, and scaling apps across [Cloudflare's global network ↗](https://www.cloudflare.com/network/) with a single command — no infrastructure to manage, no complex configuration.

With Cloudflare Workers, you can expect to:

* Deliver fast performance with high reliability anywhere in the world
* Build full-stack apps with your framework of choice, including [React](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/), [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/), [Svelte](https://developers.cloudflare.com/workers/framework-guides/web-apps/sveltekit/), [Next](https://developers.cloudflare.com/workers/framework-guides/web-apps/nextjs/), [Astro](https://developers.cloudflare.com/workers/framework-guides/web-apps/astro/), [React Router](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/), [and more](https://developers.cloudflare.com/workers/framework-guides/)
* Use your preferred language, including [JavaScript](https://developers.cloudflare.com/workers/languages/javascript/), [TypeScript](https://developers.cloudflare.com/workers/languages/typescript/), [Python](https://developers.cloudflare.com/workers/languages/python/), [Rust](https://developers.cloudflare.com/workers/languages/rust/), [and more](https://developers.cloudflare.com/workers/runtime-apis/webassembly/)
* Gain deep visibility and insight with built-in [observability](https://developers.cloudflare.com/workers/observability/logs/)
* Get started for free and grow with flexible [pricing](https://developers.cloudflare.com/workers/platform/pricing/), affordable at any scale
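The programming model itself is small: a Worker is a module that exports a `fetch` handler, which receives a request and returns a response. As a minimal sketch (not tied to any particular template):

```javascript
// A minimal Worker: an object with a fetch handler.
// Deployed, it responds to every request on your route or workers.dev subdomain.
const worker = {
  async fetch(request) {
    return new Response("Hello from Cloudflare Workers!", {
      headers: { "content-type": "text/plain" },
    });
  },
};

export default worker;
```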

Get started with your first project:

[ Deploy a template ](https://dash.cloudflare.com/?to=/:account/workers-and-pages/templates) [ Deploy with Wrangler CLI ](https://developers.cloudflare.com/workers/get-started/guide/) 

---

## Build with Workers

#### Front-end applications

Deploy [static assets](https://developers.cloudflare.com/workers/static-assets/) to Cloudflare's [CDN & cache](https://developers.cloudflare.com/cache/) for fast rendering

#### Back-end applications

Build APIs and connect to data stores with [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) to optimize latency

#### Serverless AI inference

Run LLMs, generate images, and more with [Workers AI](https://developers.cloudflare.com/workers-ai/)

#### Background jobs

Schedule [cron jobs](https://developers.cloudflare.com/workers/configuration/cron-triggers/), run durable [Workflows](https://developers.cloudflare.com/workflows/), and integrate with [Queues](https://developers.cloudflare.com/queues/)
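A cron-triggered Worker exports a `scheduled` handler alongside (or instead of) `fetch`; the schedule itself is declared in your Wrangler configuration, not in code. A minimal sketch:

```javascript
// Sketch of a cron-triggered Worker. The cron expression (e.g. "*/5 * * * *")
// is configured in your Wrangler config file, not here.
const worker = {
  async scheduled(event, env, ctx) {
    // event.cron holds the expression that fired; event.scheduledTime is epoch ms.
    const message = `cron ${event.cron} fired at ${new Date(event.scheduledTime).toISOString()}`;
    console.log(message);
  },
};

export default worker;
```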

#### Observability & monitoring

Monitor performance, debug issues, and analyze traffic with [real-time logs](https://developers.cloudflare.com/workers/observability/logs/) and [analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/)

---

## Integrate with Workers

Connect to external services like databases, APIs, and storage via [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), enabling functionality with just a few lines of code:

**Storage**

**[Durable Objects](https://developers.cloudflare.com/durable-objects/)** 

Scalable stateful storage for real-time coordination.

**[D1](https://developers.cloudflare.com/d1/)** 

Serverless SQL database built for fast, global queries.

**[KV](https://developers.cloudflare.com/kv/)** 

Low-latency key-value storage for fast, edge-cached reads.

**[Queues](https://developers.cloudflare.com/queues/)** 

Guaranteed delivery with no charges for egress bandwidth.

**[Hyperdrive](https://developers.cloudflare.com/hyperdrive/)** 

Connect to your external database with accelerated queries, cached at the edge.

**Compute**

**[Workers AI](https://developers.cloudflare.com/workers-ai/)** 

Machine learning models powered by serverless GPUs.

**[Workflows](https://developers.cloudflare.com/workflows/)** 

Durable, long-running operations with automatic retries.

**[Vectorize](https://developers.cloudflare.com/vectorize/)** 

Vector database for AI-powered semantic search.

**[R2](https://developers.cloudflare.com/r2/)** 

Zero-egress object storage for cost-efficient data access.

**[Browser Rendering](https://developers.cloudflare.com/browser-rendering/)** 

Programmatic serverless browser instances.

**Media**

**[Cache / CDN](https://developers.cloudflare.com/cache/)** 

Global caching for high-performance, low-latency delivery.

**[Images](https://developers.cloudflare.com/images/)** 

Streamlined image infrastructure from a single API.
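In code, each binding configured for a Worker appears as a property on the `env` object passed to the handler — no client library or connection string required. As an illustrative sketch (the binding name `MY_KV` is hypothetical; it would be declared as a KV namespace binding in your Wrangler configuration):

```javascript
// Hypothetical KV binding named MY_KV, declared in Wrangler configuration.
// The runtime injects it on `env` at invocation time.
const worker = {
  async fetch(request, env) {
    // Read a value from the bound KV namespace; returns null on a miss.
    const value = await env.MY_KV.get("greeting");
    return new Response(value ?? "no greeting set", {
      headers: { "content-type": "text/plain" },
    });
  },
};

export default worker;
```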

---

Want to connect with the Workers community? [Join our Discord ↗](https://discord.cloudflare.com)


---

---
title: Examples
description: Explore the following examples for Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Examples

Explore the following examples for Workers.


* [Single Page App (SPA) shell with bootstrap data](https://developers.cloudflare.com/workers/examples/spa-shell/): Use HTMLRewriter to inject prefetched bootstrap data into an SPA shell, eliminating client-side data fetching on initial load. Works with Workers Static Assets or an externally hosted SPA.
* [Write to Analytics Engine](https://developers.cloudflare.com/workers/examples/analytics-engine/): Write custom analytics events to Workers Analytics Engine for high-cardinality, time-series data.
* [Stream large JSON](https://developers.cloudflare.com/workers/examples/streaming-json/): Parse and transform large JSON request and response bodies using streaming.
* [HTTP Basic Authentication](https://developers.cloudflare.com/workers/examples/basic-auth/): Shows how to restrict access using the HTTP Basic schema.
* [Fetch HTML](https://developers.cloudflare.com/workers/examples/fetch-html/): Send a request to a remote server, read HTML from the response, and serve that HTML.
* [Return small HTML page](https://developers.cloudflare.com/workers/examples/return-html/): Deliver an HTML page from an HTML string directly inside the Worker script.
* [Return JSON](https://developers.cloudflare.com/workers/examples/return-json/): Return JSON directly from a Worker script, useful for building APIs and middleware.
* [Sign requests](https://developers.cloudflare.com/workers/examples/signing-requests/): Verify a signed request using the HMAC and SHA-256 algorithms or return a 403.
* [Stream OpenAI API Responses](https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/): Use the OpenAI v4 SDK to stream responses from OpenAI.
* [Using timingSafeEqual](https://developers.cloudflare.com/workers/examples/protect-against-timing-attacks/): Protect against timing attacks by safely comparing values using timingSafeEqual.
* [Turnstile with Workers](https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/): Inject Turnstile implicitly into HTML elements using the HTMLRewriter runtime API.
* [Custom Domain with Images](https://developers.cloudflare.com/workers/examples/images-workers/): Set up a custom domain for Images using a Worker, or serve images using a prefix path and Cloudflare registered domain.
* [103 Early Hints](https://developers.cloudflare.com/workers/examples/103-early-hints/): Allow a client to request static assets while waiting for the HTML response.
* [Cache Tags using Workers](https://developers.cloudflare.com/workers/examples/cache-tags/): Send additional Cache Tags using Workers.
* [Accessing the Cloudflare Object](https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/): Access custom Cloudflare properties and control how Cloudflare features are applied to every request.
* [Aggregate requests](https://developers.cloudflare.com/workers/examples/aggregate-requests/): Send two GET requests to two URLs and aggregate the responses into one response.
* [Block on TLS](https://developers.cloudflare.com/workers/examples/block-on-tls/): Inspect the incoming request's TLS version and block if under TLSv1.2.
* [Bulk redirects](https://developers.cloudflare.com/workers/examples/bulk-redirects/): Redirect requests to certain URLs based on a mapped object to the request's URL.
* [Cache POST requests](https://developers.cloudflare.com/workers/examples/cache-post-request/): Cache POST requests using the Cache API.
* [Conditional response](https://developers.cloudflare.com/workers/examples/conditional-response/): Return a response based on the incoming request's URL, HTTP method, User Agent, IP address, ASN or device type.
* [Cookie parsing](https://developers.cloudflare.com/workers/examples/extract-cookie-value/): Given the cookie name, get the value of a cookie. You can also use cookies for A/B testing.
* [Fetch JSON](https://developers.cloudflare.com/workers/examples/fetch-json/): Send a GET request and read in JSON from the response. Use to fetch external data.
* [Geolocation: Custom Styling](https://developers.cloudflare.com/workers/examples/geolocation-custom-styling/): Personalize website styling based on localized user time.
* [Geolocation: Hello World](https://developers.cloudflare.com/workers/examples/geolocation-hello-world/): Get all geolocation data fields and display them in HTML.
* [Post JSON](https://developers.cloudflare.com/workers/examples/post-json/): Send a POST request with JSON data. Use to share data with external servers.
* [Redirect](https://developers.cloudflare.com/workers/examples/redirect/): Redirect requests from one URL to another or from one set of URLs to another set.
* [Rewrite links](https://developers.cloudflare.com/workers/examples/rewrite-links/): Rewrite URL links in HTML using the HTMLRewriter. This is useful for JAMstack websites.
* [Set security headers](https://developers.cloudflare.com/workers/examples/security-headers/): Set common security headers (X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, Permissions-Policy, Referrer-Policy, Strict-Transport-Security, Content-Security-Policy).
* [Multiple Cron Triggers](https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/): Set multiple Cron Triggers on three different schedules.
* [Setting Cron Triggers](https://developers.cloudflare.com/workers/examples/cron-trigger/): Set a Cron Trigger for your Worker.
* [Using the WebSockets API](https://developers.cloudflare.com/workers/examples/websockets/): Use the WebSockets API to communicate in real time with your Cloudflare Workers.
* [Geolocation: Weather application](https://developers.cloudflare.com/workers/examples/geolocation-app-weather/): Fetch weather data from an API using the user's geolocation data.
* [A/B testing with same-URL direct access](https://developers.cloudflare.com/workers/examples/ab-testing/): Set up an A/B test by controlling what response is served based on cookies. This version supports passing the request through to test and control on the origin, bypassing random assignment.
* [Alter headers](https://developers.cloudflare.com/workers/examples/alter-headers/): Example of how to add, change, or delete headers sent in a request or returned in a response.
* [Auth with headers](https://developers.cloudflare.com/workers/examples/auth-with-headers/): Allow or deny a request based on a known pre-shared key in a header. This is not meant to replace the WebCrypto API.
* [Bulk origin override](https://developers.cloudflare.com/workers/examples/bulk-origin-proxy/): Resolve requests to your domain to a set of proxy third-party origin URLs.
* [Using the Cache API](https://developers.cloudflare.com/workers/examples/cache-api/): Use the Cache API to store responses in Cloudflare's cache.
* [Cache using fetch](https://developers.cloudflare.com/workers/examples/cache-using-fetch/): Determine how to cache a resource by setting TTLs, custom cache keys, and cache headers in a fetch request.
* [CORS header proxy](https://developers.cloudflare.com/workers/examples/cors-header-proxy/): Add the necessary CORS headers to a third party API response.
* [Country code redirect](https://developers.cloudflare.com/workers/examples/country-code-redirect/): Redirect a response based on the country code in the header of a visitor.
* [Data loss prevention](https://developers.cloudflare.com/workers/examples/data-loss-prevention/): Protect sensitive data to prevent data loss, and send alerts to a webhooks server in the event of a data breach.
* [Debugging logs](https://developers.cloudflare.com/workers/examples/debugging-logs/): Send debugging information in an errored response to a logging service.
* [Hot-link protection](https://developers.cloudflare.com/workers/examples/hot-link-protection/): Block other websites from linking to your content. This is useful for protecting images.
* [Logging headers to console](https://developers.cloudflare.com/workers/examples/logging-headers/): Examine the contents of a Headers object by logging to console with a Map.
* [Modify request property](https://developers.cloudflare.com/workers/examples/modify-request-property/): Create a modified request with edited properties based off of an incoming request.
* [Modify response](https://developers.cloudflare.com/workers/examples/modify-response/): Fetch and modify response properties which are immutable by creating a copy first.
* [Read POST](https://developers.cloudflare.com/workers/examples/read-post/): Serve an HTML form, then read POST requests. Use also to read JSON or POST data from an incoming request.
* [Respond with another site](https://developers.cloudflare.com/workers/examples/respond-with-another-site/): Respond to the Worker request with the response from another website (example.com in this example).


---

---
title: 103 Early Hints
description: Allow a client to request static assets while waiting for the HTML response.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Middleware ](https://developers.cloudflare.com/search/?tags=Middleware)[ Headers ](https://developers.cloudflare.com/search/?tags=Headers)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 


# 103 Early Hints

**Last reviewed:**  over 3 years ago 

Allow a client to request static assets while waiting for the HTML response.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/103-early-hints)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

`103` Early Hints is an HTTP status code designed to speed up content delivery. When enabled, Cloudflare can cache the `Link` headers marked with preload and/or preconnect from HTML pages and serve them in a `103` Early Hints response before reaching the origin server. Browsers can use these hints to fetch linked assets while waiting for the origin’s final response, dramatically improving page load speeds.

To ensure Early Hints are enabled on your zone:

1. In the Cloudflare dashboard, go to the **Speed settings** page.  
[ Go to **Settings** ](https://dash.cloudflare.com/?to=/:account/:zone/speed/optimization)
2. Go to **Content Optimization**.
3. Turn on the **Early Hints** toggle.

You can return `Link` headers from a Worker running on your zone to speed up your page load times.


JavaScript

```js
const CSS = "body { color: red; }";

const HTML = `
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Early Hints test</title>
    <link rel="stylesheet" href="https://developers.cloudflare.com/test.css">
</head>
<body>
    <h1>Early Hints test page</h1>
</body>
</html>
`;

export default {
  async fetch(req) {
    // If the request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML, using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
};
```

TypeScript

```ts
const CSS = "body { color: red; }";

const HTML = `
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Early Hints test</title>
    <link rel="stylesheet" href="https://developers.cloudflare.com/test.css">
</head>
<body>
    <h1>Early Hints test page</h1>
</body>
</html>
`;

export default {
  async fetch(req): Promise<Response> {
    // If the request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML, using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
} satisfies ExportedHandler;
```

Python

```py
import re
from workers import Response, WorkerEntrypoint

CSS = "body { color: red; }"

HTML = """
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Early Hints test</title>
    <link rel="stylesheet" href="https://developers.cloudflare.com/test.css">
</head>
<body>
    <h1>Early Hints test page</h1>
</body>
</html>
"""

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # If the request is for test.css, serve the raw CSS
        if re.search(r"test\.css$", request.url):
            headers = {"content-type": "text/css"}
            return Response(CSS, headers=headers)
        # Serve raw HTML, using Early Hints for the CSS file
        headers = {
            "content-type": "text/html",
            "link": "</test.css>; rel=preload; as=style",
        }
        return Response(HTML, headers=headers)
```

Hono

```ts
import { Hono } from "hono";

const app = new Hono();

const CSS = "body { color: red; }";

const HTML = `
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Early Hints test</title>
    <link rel="stylesheet" href="https://developers.cloudflare.com/test.css">
</head>
<body>
    <h1>Early Hints test page</h1>
</body>
</html>
`;

// Serve CSS file
app.get("/test.css", (c) => {
  return c.body(CSS, {
    headers: {
      "content-type": "text/css",
    },
  });
});

// Serve HTML with early hints
app.get("*", (c) => {
  return c.html(HTML, {
    headers: {
      link: "</test.css>; rel=preload; as=style",
    },
  });
});

export default app;
```


---

---
title: A/B testing with same-URL direct access
description: Set up an A/B test by controlling what response is served based on cookies. This version supports passing the request through to test and control on the origin, bypassing random assignment.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 


# A/B testing with same-URL direct access

**Last reviewed:**  over 5 years ago 

Set up an A/B test by controlling what response is served based on cookies. This version supports passing the request through to test and control on the origin, bypassing random assignment.


JavaScript

```js
const NAME = "myExampleWorkersABTest";

export default {
  async fetch(req) {
    const url = new URL(req.url);

    // Enable passthrough to allow direct access to the control and test routes.
    if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test"))
      return fetch(req);

    // Determine which group this requester is in.
    const cookie = req.headers.get("cookie");

    if (cookie && cookie.includes(`${NAME}=control`)) {
      url.pathname = "/control" + url.pathname;
    } else if (cookie && cookie.includes(`${NAME}=test`)) {
      url.pathname = "/test" + url.pathname;
    } else {
      // If there is no cookie, this is a new client. Choose a group and set the cookie.
      const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split
      if (group === "control") {
        url.pathname = "/control" + url.pathname;
      } else {
        url.pathname = "/test" + url.pathname;
      }
      // Reconstruct the response to avoid immutability
      let res = await fetch(url);
      res = new Response(res.body, res);
      // Set a cookie to enable persistent A/B sessions.
      res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`);
      return res;
    }
    return fetch(url);
  },
};
```

TypeScript

```ts
const NAME = "myExampleWorkersABTest";

export default {
  async fetch(req): Promise<Response> {
    const url = new URL(req.url);

    // Enable passthrough to allow direct access to the control and test routes.
    if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test"))
      return fetch(req);

    // Determine which group this requester is in.
    const cookie = req.headers.get("cookie");

    if (cookie && cookie.includes(`${NAME}=control`)) {
      url.pathname = "/control" + url.pathname;
    } else if (cookie && cookie.includes(`${NAME}=test`)) {
      url.pathname = "/test" + url.pathname;
    } else {
      // If there is no cookie, this is a new client. Choose a group and set the cookie.
      const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split
      if (group === "control") {
        url.pathname = "/control" + url.pathname;
      } else {
        url.pathname = "/test" + url.pathname;
      }
      // Reconstruct the response to avoid immutability
      let res = await fetch(url);
      res = new Response(res.body, res);
      // Set a cookie to enable persistent A/B sessions.
      res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`);
      return res;
    }
    return fetch(url);
  },
} satisfies ExportedHandler;
```

Python

```py
import random
from urllib.parse import urlparse, urlunparse
from workers import Response, fetch, WorkerEntrypoint

NAME = "myExampleWorkersABTest"

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        url = urlparse(request.url)
        # Uncomment below when testing locally
        # url = url._replace(netloc="example.com") if "localhost" in url.netloc else url

        # Enable passthrough to allow direct access to the control and test routes.
        if url.path.startswith("/control") or url.path.startswith("/test"):
            return await fetch(urlunparse(url))

        # Determine which group this requester is in.
        cookie = request.headers.get("cookie")

        if cookie and f"{NAME}=control" in cookie:
            url = url._replace(path="/control" + url.path)
        elif cookie and f"{NAME}=test" in cookie:
            url = url._replace(path="/test" + url.path)
        else:
            # If there is no cookie, this is a new client. Choose a group and set the cookie.
            group = "test" if random.random() < 0.5 else "control"  # 50/50 split
            if group == "control":
                url = url._replace(path="/control" + url.path)
            else:
                url = url._replace(path="/test" + url.path)

            # Reconstruct the response to set a cookie for persistent A/B sessions.
            res = await fetch(urlunparse(url))
            headers = dict(res.headers)
            headers["Set-Cookie"] = f"{NAME}={group}; path=/"
            return Response(res.body, headers=headers)

        return await fetch(urlunparse(url))
```

Hono

```ts
import { Hono } from "hono";
import { getCookie, setCookie } from "hono/cookie";

const app = new Hono();

const NAME = "myExampleWorkersABTest";

// Enable passthrough to allow direct access to the control and test routes
app.all("/control/*", (c) => fetch(c.req.raw));
app.all("/test/*", (c) => fetch(c.req.raw));

// Middleware to handle A/B testing logic
app.use("*", async (c) => {
  const url = new URL(c.req.url);

  // Determine which group this requester is in
  const abTestCookie = getCookie(c, NAME);

  if (abTestCookie === "control") {
    // User is in the control group
    url.pathname = "/control" + c.req.path;
  } else if (abTestCookie === "test") {
    // User is in the test group
    url.pathname = "/test" + c.req.path;
  } else {
    // If there is no cookie, this is a new client.
    // Choose a group and set the cookie (50/50 split).
    const group = Math.random() < 0.5 ? "test" : "control";

    // Update the URL path based on the assigned group
    if (group === "control") {
      url.pathname = "/control" + c.req.path;
    } else {
      url.pathname = "/test" + c.req.path;
    }

    // Set a cookie to enable persistent A/B sessions
    setCookie(c, NAME, group, {
      path: "/",
    });
  }

  const res = await fetch(url);

  return c.body(res.body, res);
});

export default app;
```


---

---
title: Accessing the Cloudflare Object
description: Access custom Cloudflare properties and control how Cloudflare features are applied to every request.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 


# Accessing the Cloudflare Object

**Last reviewed:**  about 4 years ago 

Access custom Cloudflare properties and control how Cloudflare features are applied to every request.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/accessing-the-cloudflare-object)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```js
export default {
  async fetch(req) {
    const data =
      req.cf !== undefined
        ? req.cf
        : { error: "The `cf` object is not available inside the preview." };

    return new Response(JSON.stringify(data, null, 2), {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    });
  },
};
```

TypeScript

```ts
export default {
  async fetch(req): Promise<Response> {
    const data =
      req.cf !== undefined
        ? req.cf
        : { error: "The `cf` object is not available inside the preview." };

    return new Response(JSON.stringify(data, null, 2), {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    });
  },
} satisfies ExportedHandler;
```

Hono

```ts
import { Hono } from "hono";

const app = new Hono();

app.get("*", async (c) => {
  // Access the raw request to get the cf object
  const req = c.req.raw;

  // Check whether the cf object is available
  const data =
    req.cf !== undefined
      ? req.cf
      : { error: "The `cf` object is not available inside the preview." };

  // Return the data as JSON
  return c.json(data);
});

export default app;
```

Python

```py
import json
from workers import Response, WorkerEntrypoint
from js import JSON

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        headers = {"content-type": "application/json;charset=UTF-8"}
        if request.cf is None:
            body = json.dumps(
                {"error": "The `cf` object is not available inside the preview."},
                indent=2,
            )
        else:
            # request.cf is a JS object, so serialize it with the JS JSON API.
            body = JSON.stringify(request.cf, None, 2)
        return Response(body, headers=headers)
```


---

---
title: Aggregate requests
description: Send two GET requests to two URLs and aggregate the responses into one response.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

Was this helpful?

YesNo

[ Edit page ](https://github.com/cloudflare/cloudflare-docs/edit/production/src/content/docs/workers/examples/aggregate-requests.mdx) [ Report issue ](https://github.com/cloudflare/cloudflare-docs/issues/new/choose) 

Copy page

# Aggregate requests

**Last reviewed:**  about 4 years ago 

Send two GET requests to two URLs and aggregate the responses into one response.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/aggregate-requests)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```
export default {
  async fetch(request) {
    // someHost is set up to return JSON responses
    const someHost = "https://jsonplaceholder.typicode.com";
    const url1 = someHost + "/todos/1";
    const url2 = someHost + "/todos/2";

    const responses = await Promise.all([fetch(url1), fetch(url2)]);
    const results = await Promise.all(responses.map((r) => r.json()));

    const options = {
      headers: { "content-type": "application/json;charset=UTF-8" },
    };
    return new Response(JSON.stringify(results), options);
  },
};
```

TypeScript

```
export default {
  async fetch(request) {
    // someHost is set up to return JSON responses
    const someHost = "https://jsonplaceholder.typicode.com";
    const url1 = someHost + "/todos/1";
    const url2 = someHost + "/todos/2";

    const responses = await Promise.all([fetch(url1), fetch(url2)]);
    const results = await Promise.all(responses.map((r) => r.json()));

    const options = {
      headers: { "content-type": "application/json;charset=UTF-8" },
    };
    return new Response(JSON.stringify(results), options);
  },
} satisfies ExportedHandler;
```

TypeScript

```
import { Hono } from "hono";

const app = new Hono();

app.get("*", async (c) => {
  // someHost is set up to return JSON responses
  const someHost = "https://jsonplaceholder.typicode.com";
  const url1 = someHost + "/todos/1";
  const url2 = someHost + "/todos/2";

  // Fetch both URLs concurrently
  const responses = await Promise.all([fetch(url1), fetch(url2)]);

  // Parse JSON responses concurrently
  const results = await Promise.all(responses.map((r) => r.json()));

  // Return aggregated results
  return c.json(results);
});

export default app;
```

Python

```
from workers import Response, fetch, WorkerEntrypoint
import asyncio

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # some_host is set up to return JSON responses
        some_host = "https://jsonplaceholder.typicode.com"
        url1 = some_host + "/todos/1"
        url2 = some_host + "/todos/2"

        responses = await asyncio.gather(fetch(url1), fetch(url2))
        results = await asyncio.gather(*(r.json() for r in responses))

        headers = {"content-type": "application/json;charset=UTF-8"}
        return Response.json(results, headers=headers)
```


---

---
title: Alter headers
description: Example of how to add, change, or delete headers sent in a request or returned in a response.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Alter headers

**Last reviewed:**  over 5 years ago 

Example of how to add, change, or delete headers sent in a request or returned in a response.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/alter-headers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
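The key step in every variant below is cloning the response, because in Workers the headers of a response returned by `fetch()` are immutable. Constructing a new `Response` from the original body and response copies the status and headers into a fresh, mutable response. A minimal sketch of just that pattern, runnable outside Workers too:

```javascript
// Passing the original response as the init object copies status,
// statusText, and headers onto the new Response.
function withMutableHeaders(response) {
  return new Response(response.body, response);
}

// Stand-in for a fetched response (in Workers, a fetch() result's
// headers would be read-only).
const original = new Response("hello", {
  status: 200,
  headers: { "x-header-to-change": "OldValue" },
});

const copy = withMutableHeaders(original);
copy.headers.set("x-header-to-change", "NewValue");

console.log(copy.status); // 200
console.log(copy.headers.get("x-header-to-change")); // "NewValue"
```

The body stream is passed through untouched, so the copy is cheap even for large responses.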


JavaScript

```
export default {
  async fetch(request) {
    const response = await fetch("https://example.com");

    // Clone the response so that it's no longer immutable
    const newResponse = new Response(response.body, response);

    // Add a custom header with a value
    newResponse.headers.append(
      "x-workers-hello",
      "Hello from Cloudflare Workers",
    );

    // Delete headers
    newResponse.headers.delete("x-header-to-delete");
    newResponse.headers.delete("x-header2-to-delete");

    // Adjust the value for an existing header
    newResponse.headers.set("x-header-to-change", "NewValue");

    return newResponse;
  },
};
```

TypeScript

```
export default {
  async fetch(request): Promise<Response> {
    const response = await fetch(request);

    // Clone the response so that it's no longer immutable
    const newResponse = new Response(response.body, response);

    // Add a custom header with a value
    newResponse.headers.append(
      "x-workers-hello",
      "Hello from Cloudflare Workers",
    );

    // Delete headers
    newResponse.headers.delete("x-header-to-delete");
    newResponse.headers.delete("x-header2-to-delete");

    // Adjust the value for an existing header
    newResponse.headers.set("x-header-to-change", "NewValue");

    return newResponse;
  },
} satisfies ExportedHandler;
```

Python

```
from workers import Response, fetch, WorkerEntrypoint

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        response = await fetch("https://example.com")

        # Grab the response headers so they can be modified
        new_headers = response.headers

        # Add a custom header with a value
        new_headers["x-workers-hello"] = "Hello from Cloudflare Workers"

        # Delete headers
        if "x-header-to-delete" in new_headers:
            del new_headers["x-header-to-delete"]
        if "x-header2-to-delete" in new_headers:
            del new_headers["x-header2-to-delete"]

        # Adjust the value for an existing header
        new_headers["x-header-to-change"] = "NewValue"

        return Response(response.body, headers=new_headers)
```

TypeScript

```
import { Hono } from 'hono';

const app = new Hono();

app.use('*', async (c, next) => {
  // Process the request with the next middleware/handler
  await next();

  // After the response is generated, we can modify its headers

  // Add a custom header with a value
  c.res.headers.append(
    "x-workers-hello",
    "Hello from Cloudflare Workers with Hono"
  );

  // Delete headers
  c.res.headers.delete("x-header-to-delete");
  c.res.headers.delete("x-header2-to-delete");

  // Adjust the value for an existing header
  c.res.headers.set("x-header-to-change", "NewValue");
});

app.get('*', async (c) => {
  // Fetch content from example.com
  const response = await fetch("https://example.com");

  // Return the response body with original headers
  // (our middleware will modify the headers before sending)
  return new Response(response.body, {
    headers: response.headers
  });
});

export default app;
```

You can also use the [custom-headers-example template ↗](https://github.com/kristianfreeman/custom-headers-example) to deploy this code to your custom domain.


---

---
title: Write to Analytics Engine
description: Write custom analytics events to Workers Analytics Engine for high-cardinality, time-series data.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Write to Analytics Engine

**Last reviewed:**  3 months ago 

Write custom analytics events to Workers Analytics Engine.

[Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) provides time-series analytics at scale. Use it to track custom metrics, build usage-based billing, or understand service health on a per-customer basis.

Unlike logs, Analytics Engine is designed for aggregated queries over high-cardinality data. Writes are non-blocking and do not impact request latency.

## Configure the binding

Add an Analytics Engine dataset binding to your Wrangler configuration file. The dataset is created automatically when you first write to it.

wrangler.jsonc

```
{
  "analytics_engine_datasets": [
    {
      "binding": "ANALYTICS",
      "dataset": "my_dataset"
    }
  ]
}
```

wrangler.toml

```
[[analytics_engine_datasets]]
binding = "ANALYTICS"
dataset = "my_dataset"
```

## Write data points


JavaScript

```
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    // Write a page view event
    env.ANALYTICS.writeDataPoint({
      blobs: [
        url.pathname,
        request.cf?.country ?? "unknown",
      ],
      doubles: [1], // Count
      indexes: [url.hostname], // Sampling key
    });

    // Write a response timing event
    const start = Date.now();
    const response = await fetch(request);
    const duration = Date.now() - start;

    env.ANALYTICS.writeDataPoint({
      blobs: [url.pathname, response.status.toString()],
      doubles: [duration],
      indexes: [url.hostname],
    });

    // Writes are non-blocking - no need to await or use waitUntil()
    return response;
  },
};
```

TypeScript

```
interface Env {
  ANALYTICS: AnalyticsEngineDataset;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Write a page view event
    env.ANALYTICS.writeDataPoint({
      blobs: [
        url.pathname,
        request.cf?.country ?? "unknown",
      ],
      doubles: [1], // Count
      indexes: [url.hostname], // Sampling key
    });

    // Write a response timing event
    const start = Date.now();
    const response = await fetch(request);
    const duration = Date.now() - start;

    env.ANALYTICS.writeDataPoint({
      blobs: [url.pathname, response.status.toString()],
      doubles: [duration],
      indexes: [url.hostname],
    });

    // Writes are non-blocking - no need to await or use waitUntil()
    return response;
  },
};
```

## Data point structure

Each data point consists of:

* **blobs** (strings) - Dimensions for grouping and filtering. Use for paths, regions, status codes, or customer IDs.
* **doubles** (numbers) - Numeric values to record, such as counts, durations, or sizes.
* **indexes** (strings) - A single string used as the [sampling key](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#sampling). Group related events under the same index.
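A small helper can make the mapping from application events to this structure explicit. The sketch below is illustrative (the `toDataPoint` helper is not part of the Workers API), and the limits it enforces (20 blobs, 20 doubles, one index of at most 96 bytes) should be verified against the current platform limits:

```javascript
// Shape an application event into a writeDataPoint() payload,
// enforcing the assumed per-data-point limits.
function toDataPoint({ dimensions = [], values = [], samplingKey }) {
  if (dimensions.length > 20) throw new Error("too many blobs (max 20)");
  if (values.length > 20) throw new Error("too many doubles (max 20)");
  if (new TextEncoder().encode(samplingKey).byteLength > 96) {
    throw new Error("index must be at most 96 bytes");
  }
  return {
    blobs: dimensions.map(String),
    doubles: values.map(Number),
    indexes: [samplingKey], // exactly one sampling key per data point
  };
}

const point = toDataPoint({
  dimensions: ["/api/users", "GB", "200"],
  values: [1, 42.5], // count, duration in ms
  samplingKey: "customer-1234",
});

console.log(point.blobs); // ["/api/users", "GB", "200"]
console.log(point.indexes); // ["customer-1234"]
```

In a Worker you would then call `env.ANALYTICS.writeDataPoint(toDataPoint(...))`.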

## Query your data

Query your data using the [SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/):

Terminal window

```
curl "https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/analytics_engine/sql" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --data "SELECT blob1 AS path, SUM(_sample_interval) AS views FROM my_dataset WHERE timestamp > NOW() - INTERVAL '1' HOUR GROUP BY path ORDER BY views DESC LIMIT 10"
```
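Note that the query sums `_sample_interval` rather than using `COUNT(*)`: because Analytics Engine adaptively samples at high volume, each stored row represents `_sample_interval` original events. A toy illustration of that estimate over hypothetical sampled rows:

```javascript
// Each stored row stands in for `_sample_interval` original events.
// Summing that column estimates the true event count; COUNT(*) would
// only count the rows that survived sampling.
const sampledRows = [
  { path: "/home", _sample_interval: 1 },  // stored unsampled
  { path: "/home", _sample_interval: 10 }, // represents ~10 events
  { path: "/home", _sample_interval: 10 },
];

const estimatedViews = sampledRows.reduce(
  (sum, row) => sum + row._sample_interval,
  0,
);
const rowCount = sampledRows.length;

console.log(estimatedViews); // 21 (estimated true number of events)
console.log(rowCount); // 3 (rows actually stored)
```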

## Related resources

* [Analytics Engine documentation](https://developers.cloudflare.com/analytics/analytics-engine/) \- Full reference for Workers Analytics Engine.
* [SQL API reference](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/) \- Query syntax and available functions.
* [Grafana integration](https://developers.cloudflare.com/analytics/analytics-engine/grafana/) \- Visualize Analytics Engine data in Grafana.


---

---
title: Auth with headers
description: Allow or deny a request based on a known pre-shared key in a header. This is not meant to replace the WebCrypto API.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Auth with headers

**Last reviewed:**  over 5 years ago 

Allow or deny a request based on a known pre-shared key in a header. This is not meant to replace the WebCrypto API.

Caution when using in production

The example code contains a generic header key and value of `X-Custom-PSK` and `mypresharedkey`. To best protect your resources, change the header key and value in the Workers editor before saving your code.
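Rather than hard-coding the value at all, you can attach it to the Worker as a secret (for example with `wrangler secret put`) and read it from `env`. A minimal sketch, assuming a hypothetical secret binding named `PRESHARED_KEY`:

```javascript
// Hypothetical: expects a secret named PRESHARED_KEY attached to the Worker.
const worker = {
  async fetch(request, env) {
    const psk = request.headers.get("X-Custom-PSK");
    if (psk !== null && psk === env.PRESHARED_KEY) {
      // Correct key supplied: pass the request through to the origin.
      return fetch(request);
    }
    // Missing or incorrect key: reject the request.
    return new Response("Sorry, you have supplied an invalid key.", {
      status: 403,
    });
  },
};

export default worker;
```

This keeps the key out of your source code and lets you rotate it without redeploying.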


JavaScript

```
export default {
  async fetch(request) {
    // Custom header to check for the key, and its expected value
    const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK";
    const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey";
    const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY);

    if (psk === PRESHARED_AUTH_HEADER_VALUE) {
      // Correct preshared header key supplied. Fetch request from origin.
      return fetch(request);
    }

    // Incorrect key supplied. Reject the request.
    return new Response("Sorry, you have supplied an invalid key.", {
      status: 403,
    });
  },
};
```

TypeScript

```
export default {
  async fetch(request): Promise<Response> {
    // Custom header to check for the key, and its expected value
    const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK";
    const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey";
    const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY);

    if (psk === PRESHARED_AUTH_HEADER_VALUE) {
      // Correct preshared header key supplied. Fetch request from origin.
      return fetch(request);
    }

    // Incorrect key supplied. Reject the request.
    return new Response("Sorry, you have supplied an invalid key.", {
      status: 403,
    });
  },
} satisfies ExportedHandler;
```

Python

```
from workers import WorkerEntrypoint, Response, fetch

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"
        PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"

        # Use .get() so a missing header does not raise an exception
        psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY)

        if psk == PRESHARED_AUTH_HEADER_VALUE:
            # Correct preshared header key supplied. Fetch request from origin.
            return fetch(request)

        # Incorrect key supplied. Reject the request.
        return Response("Sorry, you have supplied an invalid key.", status=403)
```

TypeScript

```
import { Hono } from 'hono';

const app = new Hono();

// Add authentication middleware
app.use('*', async (c, next) => {
  // Define authentication constants
  const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK";
  const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey";

  // Get the pre-shared key from the request header
  const psk = c.req.header(PRESHARED_AUTH_HEADER_KEY);

  if (psk === PRESHARED_AUTH_HEADER_VALUE) {
    // Correct preshared header key supplied. Continue to the next handler.
    await next();
  } else {
    // Incorrect key supplied. Reject the request.
    return c.text("Sorry, you have supplied an invalid key.", 403);
  }
});

// Handle all authenticated requests by passing through to origin
app.all('*', async (c) => {
  return fetch(c.req.raw);
});

export default app;
```


---

---
title: HTTP Basic Authentication
description: Shows how to restrict access using the HTTP Basic schema.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# HTTP Basic Authentication

**Last reviewed:**  about 2 years ago 

Shows how to restrict access using the HTTP Basic schema.

Note

This example Worker makes use of the [Node.js Buffer API](https://developers.cloudflare.com/workers/runtime-apis/nodejs/buffer/), which is available as part of the Workers runtime [Node.js compatibility mode](https://developers.cloudflare.com/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).
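Enabling the flag is a one-line change in your Wrangler configuration. A minimal sketch, assuming a `wrangler.toml` file (the compatibility date shown is only an example):

```
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]
```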

Caution when using in production

This code is provided as a sample, and is not suitable for production use. Basic Authentication sends credentials unencrypted, and must be used with an HTTPS connection to be considered secure. For a production-ready authentication system, consider using [Cloudflare Access ↗](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/self-hosted-public-app/).


JavaScript

```
/**
 * Shows how to restrict access using the HTTP Basic schema.
 * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication
 * @see https://tools.ietf.org/html/rfc7617
 */

import { Buffer } from "node:buffer";

const encoder = new TextEncoder();

/**
 * Protect against timing attacks by safely comparing values using `timingSafeEqual`.
 * Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details
 * @param {string} a
 * @param {string} b
 * @returns {boolean}
 */
function timingSafeEqual(a, b) {
  const aBytes = encoder.encode(a);
  const bBytes = encoder.encode(b);

  // Do not return early when lengths differ — that leaks the secret's
  // length through timing. Compare against self and negate instead.
  if (aBytes.byteLength !== bBytes.byteLength) {
    return !crypto.subtle.timingSafeEqual(aBytes, aBytes);
  }

  return crypto.subtle.timingSafeEqual(aBytes, bBytes);
}

export default {
  /**
   * @param {Request} request
   * @param {{PASSWORD: string}} env
   * @returns {Promise<Response>}
   */
  async fetch(request, env) {
    const BASIC_USER = "admin";

    // You will need an admin password. This should be
    // attached to your Worker as an encrypted secret.
    // Refer to https://developers.cloudflare.com/workers/configuration/secrets/
    const BASIC_PASS = env.PASSWORD ?? "password";

    const url = new URL(request.url);

    switch (url.pathname) {
      case "/":
        return new Response("Anyone can access the homepage.");

      case "/logout":
        // Invalidate the "Authorization" header by returning an HTTP 401.
        // We do not send a "WWW-Authenticate" header, as this would trigger
        // a popup in the browser, immediately asking for credentials again.
        return new Response("Logged out.", { status: 401 });

      case "/admin": {
        // The "Authorization" header is sent when authenticated.
        const authorization = request.headers.get("Authorization");
        if (!authorization) {
          return new Response("You need to login.", {
            status: 401,
            headers: {
              // Prompts the user for credentials.
              "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
            },
          });
        }
        const [scheme, encoded] = authorization.split(" ");

        // The Authorization header must start with Basic, followed by a space.
        if (!encoded || scheme !== "Basic") {
          return new Response("Malformed authorization header.", {
            status: 400,
          });
        }

        const credentials = Buffer.from(encoded, "base64").toString();

        // The username and password are split by the first colon.
        //=> example: "username:password"
        const index = credentials.indexOf(":");
        const user = credentials.substring(0, index);
        const pass = credentials.substring(index + 1);

        if (
          !timingSafeEqual(BASIC_USER, user) ||
          !timingSafeEqual(BASIC_PASS, pass)
        ) {
          return new Response("You need to login.", {
            status: 401,
            headers: {
              // Prompts the user for credentials.
              "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
            },
          });
        }

        return new Response("🎉 You have private access!", {
          status: 200,
          headers: {
            "Cache-Control": "no-store",
          },
        });
      }
    }

    return new Response("Not Found.", { status: 404 });
  },
};
```

TypeScript

```
/**
 * Shows how to restrict access using the HTTP Basic schema.
 * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication
 * @see https://tools.ietf.org/html/rfc7617
 */

import { Buffer } from "node:buffer";

const encoder = new TextEncoder();

/**
 * Protect against timing attacks by safely comparing values using `timingSafeEqual`.
 * Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details
 */
function timingSafeEqual(a: string, b: string) {
  const aBytes = encoder.encode(a);
  const bBytes = encoder.encode(b);

  // Do not return early when lengths differ — that leaks the secret's
  // length through timing. Compare against self and negate instead.
  if (aBytes.byteLength !== bBytes.byteLength) {
    return !crypto.subtle.timingSafeEqual(aBytes, aBytes);
  }

  return crypto.subtle.timingSafeEqual(aBytes, bBytes);
}

interface Env {
  PASSWORD: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    const BASIC_USER = "admin";

    // You will need an admin password. This should be
    // attached to your Worker as an encrypted secret.
    // Refer to https://developers.cloudflare.com/workers/configuration/secrets/
    const BASIC_PASS = env.PASSWORD ?? "password";

    const url = new URL(request.url);

    switch (url.pathname) {
      case "/":
        return new Response("Anyone can access the homepage.");

      case "/logout":
        // Invalidate the "Authorization" header by returning an HTTP 401.
        // We do not send a "WWW-Authenticate" header, as this would trigger
        // a popup in the browser, immediately asking for credentials again.
        return new Response("Logged out.", { status: 401 });

      case "/admin": {
        // The "Authorization" header is sent when authenticated.
        const authorization = request.headers.get("Authorization");
        if (!authorization) {
          return new Response("You need to login.", {
            status: 401,
            headers: {
              // Prompts the user for credentials.
              "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
            },
          });
        }
        const [scheme, encoded] = authorization.split(" ");

        // The Authorization header must start with Basic, followed by a space.
        if (!encoded || scheme !== "Basic") {
          return new Response("Malformed authorization header.", {
            status: 400,
          });
        }

        const credentials = Buffer.from(encoded, "base64").toString();

        // The username and password are split by the first colon.
        //=> example: "username:password"
        const index = credentials.indexOf(":");
        const user = credentials.substring(0, index);
        const pass = credentials.substring(index + 1);

        if (
          !timingSafeEqual(BASIC_USER, user) ||
          !timingSafeEqual(BASIC_PASS, pass)
        ) {
          return new Response("You need to login.", {
            status: 401,
            headers: {
              // Prompts the user for credentials.
              "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"',
            },
          });
        }

        return new Response("🎉 You have private access!", {
          status: 200,
          headers: {
            "Cache-Control": "no-store",
          },
        });
      }
    }

    return new Response("Not Found.", { status: 404 });
  },
} satisfies ExportedHandler<Env>;
```

```
use base64::prelude::*;
use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let basic_user = "admin";
    // You will need an admin password. This should be
    // attached to your Worker as an encrypted secret.
    // Refer to https://developers.cloudflare.com/workers/configuration/secrets/
    let basic_pass = match env.secret("PASSWORD") {
        Ok(s) => s.to_string(),
        Err(_) => "password".to_string(),
    };
    let url = req.url()?;

    match url.path() {
        "/" => Response::ok("Anyone can access the homepage."),
        // Invalidate the "Authorization" header by returning an HTTP 401.
        // We do not send a "WWW-Authenticate" header, as this would trigger
        // a popup in the browser, immediately asking for credentials again.
        "/logout" => Response::error("Logged out.", 401),
        "/admin" => {
            // The "Authorization" header is sent when authenticated.
            let authorization = req.headers().get("Authorization")?;
            if authorization.is_none() {
                let mut headers = Headers::new();
                // Prompts the user for credentials.
                headers.set(
                    "WWW-Authenticate",
                    "Basic realm='my scope', charset='UTF-8'",
                )?;
                return Ok(Response::error("You need to login.", 401)?.with_headers(headers));
            }
            let authorization = authorization.unwrap();

            // The Authorization header must start with Basic, followed by a space.
            let (scheme, encoded) = match authorization.split_once(' ') {
                Some(parts) => parts,
                None => return Response::error("Malformed authorization header.", 400),
            };
            if encoded.is_empty() || scheme != "Basic" {
                return Response::error("Malformed authorization header.", 400);
            }

            let buff = match BASE64_STANDARD.decode(encoded) {
                Ok(buff) => buff,
                Err(_) => return Response::error("Malformed authorization header.", 400),
            };
            let credentials = String::from_utf8_lossy(&buff);
            // The username and password are split by the first colon.
            //=> example: "username:password"
            let (user, pass) = match credentials.split_once(':') {
                Some(parts) => parts,
                None => return Response::error("Malformed authorization header.", 400),
            };

            if user != basic_user || pass != basic_pass {
                let mut headers = Headers::new();
                // Prompts the user for credentials.
                headers.set(
                    "WWW-Authenticate",
                    "Basic realm='my scope', charset='UTF-8'",
                )?;
                return Ok(Response::error("You need to login.", 401)?.with_headers(headers));
            }

            let mut headers = Headers::new();
            headers.set("Cache-Control", "no-store")?;
            Ok(Response::ok("🎉 You have private access!")?.with_headers(headers))
        }
        _ => Response::error("Not Found.", 404),
    }
}
```

TypeScript

```

/**

 * Shows how to restrict access using the HTTP Basic schema with Hono.

 * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication

 * @see https://tools.ietf.org/html/rfc7617

 */


import { Hono } from "hono";

import { basicAuth } from "hono/basic-auth";


// Define environment interface

interface Env {

  Bindings: {

    USERNAME: string;

    PASSWORD: string;

  };

}


const app = new Hono<Env>();


// Public homepage - accessible to everyone

app.get("/", (c) => {

  return c.text("Anyone can access the homepage.");

});


// Admin route - protected with Basic Auth

app.get(

  "/admin",

  async (c, next) => {

    const auth = basicAuth({

      username: c.env.USERNAME,

      password: c.env.PASSWORD,

    });


    return await auth(c, next);

  },

  (c) => {

    return c.text("🎉 You have private access!", 200, {

      "Cache-Control": "no-store",

    });

  },

);


export default app;


```
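The `Authorization` header these handlers parse is simply `Basic ` followed by the Base64 encoding of `username:password`. A standalone Python sketch of that round trip (the helper names are illustrative, not part of any Workers API):

```python
import base64

def build_basic_header(user: str, password: str) -> str:
    # Base64-encode "user:password" per RFC 7617
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

def parse_basic_header(header: str) -> tuple[str, str]:
    scheme, _, encoded = header.partition(" ")
    if scheme != "Basic" or not encoded:
        raise ValueError("Malformed authorization header.")
    decoded = base64.b64decode(encoded).decode("utf-8")
    # Split on the first colon only: a password may itself contain colons
    user, _, password = decoded.partition(":")
    return user, password
```

`build_basic_header("Aladdin", "open sesame")` produces `Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==`, the canonical example from RFC 7617.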


---

---
title: Block on TLS
description: Inspects the incoming request's TLS version and blocks requests that use a version older than TLSv1.2.
image: https://developers.cloudflare.com/dev-products-preview.png
---




# Block on TLS

**Last reviewed:**  about 4 years ago 

Inspects the incoming request's TLS version and blocks requests that use a version older than TLSv1.2.


JavaScript

```

export default {

  async fetch(request) {

    try {

      const tlsVersion = request.cf.tlsVersion;

      // Allow only TLS versions 1.2 and 1.3

      if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {

        return new Response("Please use TLS version 1.2 or higher.", {

          status: 403,

        });

      }

      return fetch(request);

    } catch (err) {

      console.error(

        "request.cf does not exist in the previewer, only in production",

      );

      return new Response(`Error in workers script ${err.message}`, {

        status: 500,

      });

    }

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    try {

      const tlsVersion = request.cf.tlsVersion;

      // Allow only TLS versions 1.2 and 1.3

      if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {

        return new Response("Please use TLS version 1.2 or higher.", {

          status: 403,

        });

      }

      return fetch(request);

    } catch (err) {

      console.error(

        "request.cf does not exist in the previewer, only in production",

      );

      return new Response(`Error in workers script ${err.message}`, {

        status: 500,

      });

    }

  },

} satisfies ExportedHandler;


```

TypeScript (Hono)

```

import { Hono } from "hono";


const app = new Hono();


// Middleware to check TLS version

app.use("*", async (c, next) => {

  // Access the raw request to get the cf object with TLS info

  const request = c.req.raw;

  const tlsVersion = request.cf?.tlsVersion;


  // Allow only TLS versions 1.2 and 1.3

  if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {

    return c.text("Please use TLS version 1.2 or higher.", 403);

  }


  await next();


});


app.onError((err, c) => {

    console.error(

      "request.cf does not exist in the previewer, only in production",

    );

    return c.text(`Error in workers script: ${err.message}`, 500);

});


app.get("/", async (c) => {

  return c.text(`TLS Version: ${c.req.raw.cf.tlsVersion}`);

});


export default app;


```

Python

```

from workers import WorkerEntrypoint, Response, fetch


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        tls_version = request.cf.tlsVersion

        if tls_version not in ("TLSv1.2", "TLSv1.3"):

            return Response("Please use TLS version 1.2 or higher.", status=403)

        return fetch(request)


```
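Every variant above performs the same allowlist check; note that `request.cf` is undefined in the local previewer, which is why the JavaScript and TypeScript versions wrap the lookup in a try/catch. A minimal sketch of the decision logic (names are illustrative):

```python
ALLOWED_TLS = {"TLSv1.2", "TLSv1.3"}

def is_blocked(tls_version) -> bool:
    # A missing version (e.g. request.cf absent in the previewer) is treated
    # as blocked rather than silently allowed
    return tls_version not in ALLOWED_TLS
```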


---

---
title: Bulk origin override
description: Proxy requests to your domain to a set of third-party origins.
image: https://developers.cloudflare.com/dev-products-preview.png
---




# Bulk origin override

**Last reviewed:**  over 5 years ago 

Proxy requests to your domain to a set of third-party origins.


JavaScript

```

export default {

  async fetch(request) {

    /**

     * An object with different URLs to fetch

     * @param {Object} ORIGINS

     */

    const ORIGINS = {

      "starwarsapi.yourdomain.com": "swapi.dev",

      "google.yourdomain.com": "www.google.com",

    };


    const url = new URL(request.url);


    // Check if incoming hostname is a key in the ORIGINS object

    if (url.hostname in ORIGINS) {

      const target = ORIGINS[url.hostname];

      url.hostname = target;

      // If it is, proxy request to that third party origin

      return fetch(url.toString(), request);

    }

    // Otherwise, process request as normal

    return fetch(request);

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    /**

     * An object with different URLs to fetch

     * @param {Object} ORIGINS

     */

    const ORIGINS = {

      "starwarsapi.yourdomain.com": "swapi.dev",

      "google.yourdomain.com": "www.google.com",

    };


    const url = new URL(request.url);


    // Check if incoming hostname is a key in the ORIGINS object

    if (url.hostname in ORIGINS) {

      const target = ORIGINS[url.hostname];

      url.hostname = target;

      // If it is, proxy request to that third party origin

      return fetch(url.toString(), request);

    }

    // Otherwise, process request as normal

    return fetch(request);

  },

} satisfies ExportedHandler;


```

TypeScript (Hono)

```

import { Hono } from "hono";

import { proxy } from "hono/proxy";


// An object with different URLs to fetch

const ORIGINS: Record<string, string> = {

  "starwarsapi.yourdomain.com": "swapi.dev",

  "google.yourdomain.com": "www.google.com",

};


const app = new Hono();


app.all("*", async (c) => {

  const url = new URL(c.req.url);


  // Check if incoming hostname is a key in the ORIGINS object

  if (url.hostname in ORIGINS) {

    const target = ORIGINS[url.hostname];

    url.hostname = target;


    // If it is, proxy request to that third party origin

    return proxy(url, c.req.raw);

  }


  // Otherwise, process request as normal

  return proxy(c.req.raw);

});


export default app;


```

Python

```

from workers import WorkerEntrypoint

from js import fetch, URL


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        # A dict with different URLs to fetch

        ORIGINS = {

          "starwarsapi.yourdomain.com": "swapi.dev",

          "google.yourdomain.com": "www.google.com",

        }


        url = URL.new(request.url)


        # Check if incoming hostname is a key in the ORIGINS object

        if url.hostname in ORIGINS:

            url.hostname = ORIGINS[url.hostname]

            # If it is, proxy request to that third party origin

            return fetch(url.toString(), request)


        # Otherwise, process request as normal

        return fetch(request)


```
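The core of each variant is a hostname swap: keep the scheme, path, and query, and replace only the host when it appears in the map. A standalone sketch of that rewrite using Python's `urllib` (the `ORIGINS` entries mirror the example; `rewrite_origin` is an illustrative helper, not a Workers API):

```python
from urllib.parse import urlsplit, urlunsplit

ORIGINS = {
    "starwarsapi.yourdomain.com": "swapi.dev",
    "google.yourdomain.com": "www.google.com",
}

def rewrite_origin(url: str) -> str:
    parts = urlsplit(url)
    target = ORIGINS.get(parts.hostname)
    if target is None:
        # Hostname not in the map: leave the URL untouched
        return url
    netloc = target if parts.port is None else f"{target}:{parts.port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```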


---

---
title: Bulk redirects
description: Redirect requests based on a map from URL paths to destination URLs.
image: https://developers.cloudflare.com/dev-products-preview.png
---




# Bulk redirects

**Last reviewed:**  about 4 years ago 

Redirect requests based on a map from URL paths to destination URLs.


JavaScript

```

export default {

  async fetch(request) {

    const externalHostname = "examples.cloudflareworkers.com";


    const redirectMap = new Map([

      ["/bulk1", "https://" + externalHostname + "/redirect2"],

      ["/bulk2", "https://" + externalHostname + "/redirect3"],

      ["/bulk3", "https://" + externalHostname + "/redirect4"],

      ["/bulk4", "https://google.com"],

    ]);


    const requestURL = new URL(request.url);

    const path = requestURL.pathname;

    const location = redirectMap.get(path);


    if (location) {

      return Response.redirect(location, 301);

    }

    // If request not in map, return the original request

    return fetch(request);

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const externalHostname = "examples.cloudflareworkers.com";


    const redirectMap = new Map([

      ["/bulk1", "https://" + externalHostname + "/redirect2"],

      ["/bulk2", "https://" + externalHostname + "/redirect3"],

      ["/bulk3", "https://" + externalHostname + "/redirect4"],

      ["/bulk4", "https://google.com"],

    ]);


    const requestURL = new URL(request.url);

    const path = requestURL.pathname;

    const location = redirectMap.get(path);


    if (location) {

      return Response.redirect(location, 301);

    }

    // If request not in map, return the original request

    return fetch(request);

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response, fetch

from urllib.parse import urlparse


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        external_hostname = "examples.cloudflareworkers.com"


        redirect_map = {

          "/bulk1": "https://" + external_hostname + "/redirect2",

          "/bulk2": "https://" + external_hostname + "/redirect3",

          "/bulk3": "https://" + external_hostname + "/redirect4",

          "/bulk4": "https://google.com",

          }


        url = urlparse(request.url)

        location = redirect_map.get(url.path, None)


        if location:

            return Response.redirect(location, 301)


        # If request not in map, return the original request

        return fetch(request)


```

TypeScript (Hono)

```

import { Hono } from "hono";


const app = new Hono();


// Configure your redirects

const externalHostname = "examples.cloudflareworkers.com";


const redirectMap = new Map([

  ["/bulk1", `https://${externalHostname}/redirect2`],

  ["/bulk2", `https://${externalHostname}/redirect3`],

  ["/bulk3", `https://${externalHostname}/redirect4`],

  ["/bulk4", "https://google.com"],

]);


// Middleware to handle redirects

app.use("*", async (c, next) => {

  const path = c.req.path;

  const location = redirectMap.get(path);


  if (location) {

    // If path is in our redirect map, perform the redirect

    return c.redirect(location, 301);

  }


  // Otherwise, continue to the next handler

  await next();

});


// Default handler for requests that don't match any redirects

app.all("*", async (c) => {

  // Pass through to origin

  return fetch(c.req.raw);

});


export default app;


```
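Lookups in these examples key on the path only, so query strings and fragments do not participate in matching. A quick standalone check of that behavior, using the same stdlib `urlparse` as the Python variant (`lookup_redirect` is an illustrative helper):

```python
from urllib.parse import urlparse

external_hostname = "examples.cloudflareworkers.com"

redirect_map = {
    "/bulk1": f"https://{external_hostname}/redirect2",
    "/bulk4": "https://google.com",
}

def lookup_redirect(url: str):
    # Only the path participates in the match; "?query" and "#fragment" are ignored
    return redirect_map.get(urlparse(url).path)
```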


---

---
title: Using the Cache API
description: Use the Cache API to store responses in Cloudflare's cache.
image: https://developers.cloudflare.com/dev-products-preview.png
---




# Using the Cache API

**Last reviewed:**  over 5 years ago 

Use the Cache API to store responses in Cloudflare's cache.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-api)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request, env, ctx) {

    const cacheUrl = new URL(request.url);


    // Construct the cache key from the cache URL

    const cacheKey = new Request(cacheUrl.toString(), request);

    const cache = caches.default;


    // Check whether the value is already available in the cache

    // if not, you will need to fetch it from origin, and store it in the cache

    let response = await cache.match(cacheKey);


    if (!response) {

      console.log(

        `Response for request url: ${request.url} not present in cache. Fetching and caching request.`,

      );

      // If not in cache, get it from origin

      response = await fetch(request);


      // Must use Response constructor to inherit all of response's fields

      response = new Response(response.body, response);


      // Cache API respects Cache-Control headers. Setting s-maxage to 10

      // will limit the response to be in cache for 10 seconds max


      // Any changes made to the response here will be reflected in the cached value

      response.headers.append("Cache-Control", "s-maxage=10");


      ctx.waitUntil(cache.put(cacheKey, response.clone()));

    } else {

      console.log(`Cache hit for: ${request.url}.`);

    }

    return response;

  },

};


```

TypeScript

```

interface Env {}

export default {

  async fetch(request, env, ctx): Promise<Response> {

    const cacheUrl = new URL(request.url);


    // Construct the cache key from the cache URL

    const cacheKey = new Request(cacheUrl.toString(), request);

    const cache = caches.default;


    // Check whether the value is already available in the cache

    // if not, you will need to fetch it from origin, and store it in the cache

    let response = await cache.match(cacheKey);


    if (!response) {

      console.log(

        `Response for request url: ${request.url} not present in cache. Fetching and caching request.`,

      );

      // If not in cache, get it from origin

      response = await fetch(request);


      // Must use Response constructor to inherit all of response's fields

      response = new Response(response.body, response);


      // Cache API respects Cache-Control headers. Setting s-maxage to 10

      // will limit the response to be in cache for 10 seconds max


      // Any changes made to the response here will be reflected in the cached value

      response.headers.append("Cache-Control", "s-maxage=10");


      ctx.waitUntil(cache.put(cacheKey, response.clone()));

    } else {

      console.log(`Cache hit for: ${request.url}.`);

    }

    return response;

  },

} satisfies ExportedHandler<Env>;


```

Python

```

from workers import WorkerEntrypoint

from pyodide.ffi import create_proxy

from js import Response, Request, URL, caches, fetch


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        cache_url = request.url


        # Construct the cache key from the cache URL

        cache_key = Request.new(cache_url, request)

        cache = caches.default


        # Check whether the value is already available in the cache

        # if not, you will need to fetch it from origin, and store it in the cache

        response = await cache.match(cache_key)


        if response is None:

            print(f"Response for request url: {request.url} not present in cache. Fetching and caching request.")

            # If not in cache, get it from origin

            response = await fetch(request)

            # Must use Response constructor to inherit all of response's fields

            response = Response.new(response.body, response)


            # Cache API respects Cache-Control headers. Setting s-maxage to 10

            # will limit the response to be in cache for 10 seconds max

            # Any changes made to the response here will be reflected in the cached value

            response.headers.append("Cache-Control", "s-maxage=10")

            self.ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone())))

        else:

            print(f"Cache hit for: {request.url}.")

        return response


```

TypeScript (Hono)

```

import { Hono } from "hono";

import { cache } from "hono/cache";


const app = new Hono();


// We leverage hono built-in cache helper here

app.get(

  "*",

  cache({

    cacheName: "my-cache",

    cacheControl: "max-age=3600", // 1 hour

  }),

);


// Add a route to handle the request if it's not in cache

app.get("*", (c) => {

  return c.text("Hello from Hono!");

});


export default app;


```
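The `s-maxage=10` directive bounds how long a shared (edge) cache may keep the response, and for shared caches it takes precedence over `max-age`. A simplified standalone parser illustrating that precedence (this is a sketch, not a full Cache-Control grammar):

```python
def shared_cache_ttl(cache_control: str):
    # Parse "name=value" and bare directives; real Cache-Control parsing
    # is more involved (quoted strings, etc.)
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().partition("=")
        directives[name.lower()] = value
    if "s-maxage" in directives:
        return int(directives["s-maxage"])  # shared caches prefer s-maxage
    if "max-age" in directives:
        return int(directives["max-age"])
    return None  # no freshness directive given
```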


---

---
title: Cache POST requests
description: Cache POST requests using the Cache API.
image: https://developers.cloudflare.com/dev-products-preview.png
---




# Cache POST requests

**Last reviewed:**  about 4 years ago 

Cache POST requests using the Cache API.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-post-request)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request, env, ctx) {

    async function sha256(message) {

      // encode as UTF-8

      const msgBuffer = new TextEncoder().encode(message);

      // hash the message

      const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer);

      // convert bytes to hex string

      return [...new Uint8Array(hashBuffer)]

        .map((b) => b.toString(16).padStart(2, "0"))

        .join("");

    }

    try {

      if (request.method.toUpperCase() === "POST") {

        const body = await request.clone().text();

        // Hash the request body to use it as a part of the cache key

        const hash = await sha256(body);

        const cacheUrl = new URL(request.url);

        // Store the URL in cache by prepending the body's hash

        cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash;

        // Convert to a GET to be able to cache

        const cacheKey = new Request(cacheUrl.toString(), {

          headers: request.headers,

          method: "GET",

        });


        const cache = caches.default;

        // Find the cache key in the cache

        let response = await cache.match(cacheKey);

        // Otherwise, fetch response to POST request from origin

        if (!response) {

          response = await fetch(request);

          ctx.waitUntil(cache.put(cacheKey, response.clone()));

        }

        return response;

      }

      return fetch(request);

    } catch (e) {

      return new Response("Error thrown " + e.message);

    }

  },

};


```

TypeScript

```

interface Env {}

export default {

  async fetch(request, env, ctx): Promise<Response> {

    async function sha256(message: string) {

      // encode as UTF-8

      const msgBuffer = new TextEncoder().encode(message);

      // hash the message

      const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer);

      // convert bytes to hex string

      return [...new Uint8Array(hashBuffer)]

        .map((b) => b.toString(16).padStart(2, "0"))

        .join("");

    }

    try {

      if (request.method.toUpperCase() === "POST") {

        const body = await request.clone().text();

        // Hash the request body to use it as a part of the cache key

        const hash = await sha256(body);

        const cacheUrl = new URL(request.url);

        // Store the URL in cache by prepending the body's hash

        cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash;

        // Convert to a GET to be able to cache

        const cacheKey = new Request(cacheUrl.toString(), {

          headers: request.headers,

          method: "GET",

        });


        const cache = caches.default;

        // Find the cache key in the cache

        let response = await cache.match(cacheKey);

        // Otherwise, fetch response to POST request from origin

        if (!response) {

          response = await fetch(request);

          ctx.waitUntil(cache.put(cacheKey, response.clone()));

        }

        return response;

      }

      return fetch(request);

    } catch (e) {

      return new Response("Error thrown " + e.message);

    }

  },

} satisfies ExportedHandler<Env>;


```

Python

```

import hashlib

from workers import WorkerEntrypoint

from pyodide.ffi import create_proxy

from js import fetch, URL, Headers, Request, caches


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        if request.method == 'POST':

            # Hash the request body to use it as a part of the cache key

            body = await request.clone().text()

            body_hash = hashlib.sha256(body.encode('UTF-8')).hexdigest()


            # Store the URL in cache by prepending the body's hash

            cache_url = URL.new(request.url)

            cache_url.pathname = "/posts" + cache_url.pathname + body_hash


            # Convert to a GET to be able to cache

            headers = Headers.new(dict(request.headers).items())

            cache_key = Request.new(cache_url.toString(), method='GET', headers=headers)


            # Find the cache key in the cache

            cache = caches.default

            response = await cache.match(cache_key)


            # Otherwise, fetch response to POST request from origin

            if response is None:

                response = await fetch(request)

                self.ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone())))


            return response


        return fetch(request)


```

TypeScript (Hono)

```

import { Hono } from "hono";

import { sha256 } from "hono/utils/crypto";


const app = new Hono();


// Middleware for caching POST requests

app.post("*", async (c) => {

  try {

    // Get the request body

    const body = await c.req.raw.clone().text();


    // Hash the request body to use it as part of the cache key

    const hash = await sha256(body);


    // Create the cache URL

    const cacheUrl = new URL(c.req.url);


    // Store the URL in cache by prepending the body's hash

    cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash;


    // Convert to a GET to be able to cache

    const cacheKey = new Request(cacheUrl.toString(), {

      headers: c.req.raw.headers,

      method: "GET",

    });


    const cache = caches.default;


    // Find the cache key in the cache

    let response = await cache.match(cacheKey);


    // If not in cache, fetch response to POST request from origin

    if (!response) {

      response = await fetch(c.req.raw);

      c.executionCtx.waitUntil(cache.put(cacheKey, response.clone()));

    }


    return response;

  } catch (e) {

    return c.text("Error thrown " + e.message, 500);

  }

});


// Handle all other HTTP methods

app.all("*", (c) => {

  return fetch(c.req.raw);

});


export default app;


```
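All four variants derive the synthetic GET cache key the same way: hash the POST body with SHA-256 and prepend `/posts` plus the hash to the original path, so identical bodies map to identical keys. A standalone sketch of the derivation (`post_cache_key` is an illustrative name):

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def post_cache_key(url: str, body: str) -> str:
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    parts = urlsplit(url)
    # Same body + URL always yields the same key, so repeated POSTs can hit cache;
    # a different body yields a different key
    path = "/posts" + parts.path + digest
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, ""))
```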


---

---
title: Cache Tags using Workers
description: Send additional cache tags using Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---




# Cache Tags using Workers

**Last reviewed:**  almost 4 years ago 

Send additional cache tags using Workers.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-tags)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request) {

    const requestUrl = new URL(request.url);

    const params = requestUrl.searchParams;

    const tags =

      params && params.has("tags") ? params.get("tags").split(",") : [];

    const url = params && params.has("uri") ? params.get("uri") : "";

    if (!url) {

      const errorObject = {

        error: "URL cannot be empty",

      };

      return new Response(JSON.stringify(errorObject), { status: 400 });

    }

    const init = {

      cf: {

        cacheTags: tags,

      },

    };

    return fetch(url, init)

      .then((result) => {

        const cacheStatus = result.headers.get("cf-cache-status");

        const lastModified = result.headers.get("last-modified");

        const response = {

          cache: cacheStatus,

          lastModified: lastModified,

        };

        return new Response(JSON.stringify(response), {

          status: result.status,

        });

      })

      .catch((err) => {

        const errorObject = {

          error: err.message,

        };

        return new Response(JSON.stringify(errorObject), { status: 500 });

      });

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const requestUrl = new URL(request.url);

    const params = requestUrl.searchParams;

    const tags =

      params && params.has("tags") ? params.get("tags").split(",") : [];

    const url = params && params.has("uri") ? params.get("uri") : "";

    if (!url) {

      const errorObject = {

        error: "URL cannot be empty",

      };

      return new Response(JSON.stringify(errorObject), { status: 400 });

    }

    const init = {

      cf: {

        cacheTags: tags,

      },

    };

    return fetch(url, init)

      .then((result) => {

        const cacheStatus = result.headers.get("cf-cache-status");

        const lastModified = result.headers.get("last-modified");

        const response = {

          cache: cacheStatus,

          lastModified: lastModified,

        };

        return new Response(JSON.stringify(response), {

          status: result.status,

        });

      })

      .catch((err) => {

        const errorObject = {

          error: err.message,

        };

        return new Response(JSON.stringify(errorObject), { status: 500 });

      });

  },

} satisfies ExportedHandler;


```

TypeScript (Hono)

```

import { Hono } from "hono";


const app = new Hono();


app.all("*", async (c) => {

  const tags = c.req.query("tags")?.split(",") ?? [];

  const uri = c.req.query("uri") ?? "";


  if (!uri) {

    return c.json({ error: "URL cannot be empty" }, 400);

  }


  const init = {

    cf: {

      cacheTags: tags,

    },

  };


  const result = await fetch(uri, init);

  const cacheStatus = result.headers.get("cf-cache-status");

  const lastModified = result.headers.get("last-modified");


  const response = {

    cache: cacheStatus,

    lastModified: lastModified,

  };


  return c.json(response, result.status);

});


app.onError((err, c) => {

  return c.json({ error: err.message }, 500);

});


export default app;


```

Python

```

from workers import WorkerEntrypoint

from pyodide.ffi import to_js as _to_js

from js import Response, URL, Object, fetch


def to_js(x):

    return _to_js(x, dict_converter=Object.fromEntries)


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        request_url = URL.new(request.url)

        params = request_url.searchParams

        tags = params["tags"].split(",") if "tags" in params else []

        url = params["uri"] or None


        if url is None:

            error = {"error": "URL cannot be empty"}

            return Response.json(to_js(error), status=400)


        options = {"cf": {"cacheTags": tags}}

        result = await fetch(url, to_js(options))


        cache_status = result.headers["cf-cache-status"]

        last_modified = result.headers["last-modified"]

        response = {"cache": cache_status, "lastModified": last_modified}


        return Response.json(to_js(response), status=result.status)


```


---

---
title: Cache using fetch
description: Determine how to cache a resource by setting TTLs, custom cache keys, and cache headers in a fetch request.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Cache using fetch

**Last reviewed:**  over 5 years ago 

Determine how to cache a resource by setting TTLs, custom cache keys, and cache headers in a fetch request.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-using-fetch)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request) {

    const url = new URL(request.url);

    // Only use the path for the cache key, removing query strings

    // and always store using HTTPS, for example, https://www.example.com/file-uri-here

    const someCustomKey = `https://${url.hostname}${url.pathname}`;

    let response = await fetch(request, {

      cf: {

        // Always cache this fetch regardless of content type

        // for a max of 5 seconds before revalidating the resource

        cacheTtl: 5,

        cacheEverything: true,

        //Enterprise only feature, see Cache API for other plans

        cacheKey: someCustomKey,

      },

    });

    // Reconstruct the Response object to make its headers mutable.

    response = new Response(response.body, response);

    // Set cache control headers to cache on browser for 25 minutes

    response.headers.set("Cache-Control", "max-age=1500");

    return response;

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const url = new URL(request.url);

    // Only use the path for the cache key, removing query strings

    // and always store using HTTPS, for example, https://www.example.com/file-uri-here

    const someCustomKey = `https://${url.hostname}${url.pathname}`;

    let response = await fetch(request, {

      cf: {

        // Always cache this fetch regardless of content type

        // for a max of 5 seconds before revalidating the resource

        cacheTtl: 5,

        cacheEverything: true,

        //Enterprise only feature, see Cache API for other plans

        cacheKey: someCustomKey,

      },

    });

    // Reconstruct the Response object to make its headers mutable.

    response = new Response(response.body, response);

    // Set cache control headers to cache on browser for 25 minutes

    response.headers.set("Cache-Control", "max-age=1500");

    return response;

  },

} satisfies ExportedHandler;


```

TypeScript

```

import { Hono } from 'hono';


type Bindings = {};


const app = new Hono<{ Bindings: Bindings }>();


app.all('*', async (c) => {

  const url = new URL(c.req.url);


  // Only use the path for the cache key, removing query strings

  // and always store using HTTPS, for example, https://www.example.com/file-uri-here

  const someCustomKey = `https://${url.hostname}${url.pathname}`;


  // Fetch the request with custom cache settings

  let response = await fetch(c.req.raw, {

    cf: {

      // Always cache this fetch regardless of content type

      // for a max of 5 seconds before revalidating the resource

      cacheTtl: 5,

      cacheEverything: true,

      // Enterprise only feature, see Cache API for other plans

      cacheKey: someCustomKey,

    },

  });


  // Reconstruct the Response object to make its headers mutable

  response = new Response(response.body, response);


  // Set cache control headers to cache on browser for 25 minutes

  response.headers.set("Cache-Control", "max-age=1500");


  return response;

});


export default app;


```

Python

```

from workers import WorkerEntrypoint

from pyodide.ffi import to_js as _to_js

from js import Response, URL, Object, fetch


def to_js(x):

    return _to_js(x, dict_converter=Object.fromEntries)


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        url = URL.new(request.url)


        # Only use the path for the cache key, removing query strings

        # and always store using HTTPS, for example, https://www.example.com/file-uri-here

        some_custom_key = f"https://{url.hostname}{url.pathname}"


        response = await fetch(

            request,

            cf=to_js({

                # Always cache this fetch regardless of content type

                # for a max of 5 seconds before revalidating the resource

                "cacheTtl": 5,

                "cacheEverything": True,

                # Enterprise only feature, see Cache API for other plans

                "cacheKey": some_custom_key,

            }),

        )


        # Reconstruct the Response object to make its headers mutable

        response = Response.new(response.body, response)


        # Set cache control headers to cache on browser for 25 minutes

        response.headers["Cache-Control"] = "max-age=1500"


        return response


```

```

use worker::*;


#[event(fetch)]

async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    let url = req.url()?;


    // Only use the path for the cache key, removing query strings

    // and always store using HTTPS, for example, https://www.example.com/file-uri-here

    let custom_key = format!(

        "https://{host}{path}",

        host = url.host_str().unwrap(),

        path = url.path()

    );


    let request = Request::new_with_init(

        url.as_str(),

        &RequestInit {

            headers: req.headers().clone(),

            method: req.method(),

            cf: CfProperties {

                // Always cache this fetch regardless of content type

                // for a max of 5 seconds before revalidating the resource

                cache_ttl: Some(5),

                cache_everything: Some(true),

                // Enterprise only feature, see Cache API for other plans

                cache_key: Some(custom_key),

                ..CfProperties::default()

            },

            ..RequestInit::default()

        },

    )?;


    let mut response = Fetch::Request(request).send().await?;


    // Set cache control headers to cache on browser for 25 minutes

    let _ = response.headers_mut().set("Cache-Control", "max-age=1500");

    Ok(response)

}


```

## Caching HTML resources

JavaScript

```

// Force Cloudflare to cache an asset

fetch(event.request, { cf: { cacheEverything: true } });


```

Setting the cache level to **Cache Everything** will override the default cacheability of the asset. For time-to-live (TTL), Cloudflare will still rely on headers set by the origin.
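Because the TTL still comes from origin headers in this case, you can pair `cacheEverything` with an explicit `cacheTtl` when you want the Worker to control the TTL as well. The helper below is a minimal sketch (the function name `buildCacheInit` is illustrative, not part of any API):

```javascript
// Build fetch() options that force caching and, optionally, override the
// TTL that the origin's headers would otherwise determine. The `cf` keys
// are the same Workers-specific options shown in the examples above.
function buildCacheInit(ttlSeconds) {
  const cf = { cacheEverything: true };
  if (typeof ttlSeconds === "number") {
    // Without this, Cloudflare still derives the TTL from origin headers.
    cf.cacheTtl = ttlSeconds;
  }
  return { cf };
}

// Usage inside a Worker (assumed): cache this response for 60 seconds.
// fetch(request, buildCacheInit(60));
```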

## Custom cache keys

Note

This feature is available only to Enterprise customers.

A request's cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both. For more about cache keys, refer to the [Create custom cache keys](https://developers.cloudflare.com/cache/how-to/cache-keys/#create-custom-cache-keys) documentation.

JavaScript

```

// Set cache key for this request to "some-string".

fetch(event.request, { cf: { cacheKey: "some-string" } });


```

Normally, Cloudflare computes the cache key for a request based on the request's URL. Sometimes, though, you may want different URLs to be treated as the same for caching purposes. For example, suppose your website content is hosted from both Amazon S3 and Google Cloud Storage: you have the same content in both places, and you use a Worker to randomly balance between the two. However, you do not want to end up caching two copies of your content. You could use custom cache keys to cache based on the original request URL rather than the subrequest URL:


JavaScript

```

export default {

  async fetch(request) {

    let url = new URL(request.url);


    if (Math.random() < 0.5) {

      url.hostname = "example.s3.amazonaws.com";

    } else {

      url.hostname = "example.storage.googleapis.com";

    }


    let newRequest = new Request(url, request);

    return fetch(newRequest, {

      cf: { cacheKey: request.url },

    });

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    let url = new URL(request.url);


    if (Math.random() < 0.5) {

      url.hostname = "example.s3.amazonaws.com";

    } else {

      url.hostname = "example.storage.googleapis.com";

    }


    let newRequest = new Request(url, request);

    return fetch(newRequest, {

      cf: { cacheKey: request.url },

    });

  },

} satisfies ExportedHandler;


```

TypeScript

```

import { Hono } from 'hono';


type Bindings = {};


const app = new Hono<{ Bindings: Bindings }>();


app.all('*', async (c) => {

  const originalUrl = c.req.url;

  const url = new URL(originalUrl);


  // Randomly select a storage backend

  if (Math.random() < 0.5) {

    url.hostname = "example.s3.amazonaws.com";

  } else {

    url.hostname = "example.storage.googleapis.com";

  }


  // Create a new request to the selected backend

  const newRequest = new Request(url, c.req.raw);


  // Fetch using the original URL as the cache key

  return fetch(newRequest, {

    cf: { cacheKey: originalUrl },

  });

});


export default app;


```

Workers operating on behalf of different zones cannot affect each other's cache. You can only override cache keys when making requests within your own zone (in the above example, `request.url` was the key stored) or requests to hosts that are not on Cloudflare. When making a request to another Cloudflare zone (for example, one belonging to a different Cloudflare customer), that zone fully controls how its own content is cached within Cloudflare; you cannot override it.
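One way to respect this constraint is to only attach a custom cache key when the subrequest targets a host you control or a host that is not on Cloudflare. The sketch below illustrates the idea; the `OWN_CONTROLLED_HOSTS` list and the `cacheInitFor` helper are assumptions for the example, not a Cloudflare API:

```javascript
// Hosts you control (or that are not on Cloudflare), where overriding
// the cache key is permitted. Illustrative values only.
const OWN_CONTROLLED_HOSTS = [
  "example.s3.amazonaws.com",
  "example.storage.googleapis.com",
];

// Decide whether to send a custom cacheKey for a subrequest.
function cacheInitFor(targetHostname, originalUrl) {
  if (OWN_CONTROLLED_HOSTS.includes(targetHostname)) {
    // Safe to override: both backends serve content for your own zone.
    return { cf: { cacheKey: originalUrl } };
  }
  // Another party's Cloudflare zone controls its own caching; no override.
  return {};
}

// Usage (assumed): fetch(newRequest, cacheInitFor(url.hostname, request.url));
```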

## Override based on origin response code

JavaScript

```

// Force response to be cached for 86400 seconds for 200 status

// codes, 1 second for 404, and do not cache 500 errors.

fetch(request, {

  cf: { cacheTtlByStatus: { "200-299": 86400, 404: 1, "500-599": 0 } },

});


```

This option is a version of the `cacheTtl` feature which chooses a TTL based on the response's status code and does not automatically set `cacheEverything: true`. If the response to this request has a status code that matches, Cloudflare will cache for the instructed time, and override cache directives sent by the origin. You can review [details on the cacheTtl feature on the Request page](https://developers.cloudflare.com/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties).
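To build intuition for how the status-range keys resolve to a TTL, the sketch below mirrors the matching logic in plain JavaScript. Cloudflare performs this matching internally; the `ttlForStatus` helper is purely illustrative:

```javascript
// Resolve a status code against cacheTtlByStatus-style rules.
// Range keys like "200-299" match inclusively; single keys like 404
// match exactly. Returns undefined when no rule applies, in which case
// the origin's cache directives take effect.
function ttlForStatus(cacheTtlByStatus, status) {
  for (const [range, ttl] of Object.entries(cacheTtlByStatus)) {
    const [lo, hi] = range.includes("-")
      ? range.split("-").map(Number)
      : [Number(range), Number(range)];
    if (status >= lo && status <= hi) return ttl;
  }
  return undefined;
}
```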

## Customize cache behavior based on request file type

Using custom cache keys and overrides based on response code, you can write a Worker that sets the TTL based on the origin's response status code and the request's file type.

The following example demonstrates how you might use this to cache requests for streaming media assets:

* [  Module Worker ](#tab-panel-7241)
* [  Service Worker ](#tab-panel-7242)

index.js

```

export default {

  async fetch(request) {

    // Instantiate new URL to make it mutable

    const newRequest = new URL(request.url);


    const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`;

    const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`;


    // Different asset types usually need different caching strategies. Media
    // content such as audio, video, and images that is not user-generated
    // rarely changes, so a long TTL is usually best. With HLS streaming,
    // however, manifest files are given short TTLs so that playback is not
    // affected, since these files contain the data the player needs next.
    // Defining one entry per asset category in the array below lets you
    // handle complex media-caching needs for your application in one place.


    const cacheAssets = [

      {

        asset: "video",

        key: customCacheKey,

        regex:

          /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/,

        info: 0,

        ok: 31556952,

        redirects: 30,

        clientError: 10,

        serverError: 0,

      },

      {

        asset: "image",

        key: queryCacheKey,

        regex:

          /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/,

        info: 0,

        ok: 3600,

        redirects: 30,

        clientError: 10,

        serverError: 0,

      },

      {

        asset: "frontEnd",

        key: queryCacheKey,

        regex: /^.*\.(css|js)/,

        info: 0,

        ok: 3600,

        redirects: 30,

        clientError: 10,

        serverError: 0,

      },

      {

        asset: "audio",

        key: customCacheKey,

        regex:

          /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/,

        info: 0,

        ok: 31556952,

        redirects: 30,

        clientError: 10,

        serverError: 0,

      },

      {

        asset: "directPlay",

        key: customCacheKey,

        regex: /.*(\/Download)/,

        info: 0,

        ok: 31556952,

        redirects: 30,

        clientError: 10,

        serverError: 0,

      },

      {

        asset: "manifest",

        key: customCacheKey,

        regex: /^.*\.(m3u8|mpd)/,

        info: 0,

        ok: 3,

        redirects: 2,

        clientError: 1,

        serverError: 0,

      },

    ];


    const { asset, regex, ...cache } =

      cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? {};


    const newResponse = await fetch(request, {

      cf: {

        cacheKey: cache.key,

        polish: false,

        cacheEverything: true,

        cacheTtlByStatus: {

          "100-199": cache.info,

          "200-299": cache.ok,

          "300-399": cache.redirects,

          "400-499": cache.clientError,

          "500-599": cache.serverError,

        },

        cacheTags: ["static"],

      },

    });


    const response = new Response(newResponse.body, newResponse);


    // For debugging purposes

    response.headers.set("debug", JSON.stringify(cache));

    return response;

  },

};


```

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

index.js

```

addEventListener("fetch", (event) => {

  event.respondWith(handleRequest(event.request));

});


async function handleRequest(request) {

  // Instantiate new URL to make it mutable

  const newRequest = new URL(request.url);


  // Set `const` to be used in the array later on

  const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`;

  const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`;


  // Set all variables needed to manipulate Cloudflare's cache using the fetch API in the `cf` object. You will be passing these variables in the objects down below.

  const cacheAssets = [

    {

      asset: "video",

      key: customCacheKey,

      regex:

        /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/,

      info: 0,

      ok: 31556952,

      redirects: 30,

      clientError: 10,

      serverError: 0,

    },

    {

      asset: "image",

      key: queryCacheKey,

      regex:

        /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/,

      info: 0,

      ok: 3600,

      redirects: 30,

      clientError: 10,

      serverError: 0,

    },

    {

      asset: "frontEnd",

      key: queryCacheKey,

      regex: /^.*\.(css|js)/,

      info: 0,

      ok: 3600,

      redirects: 30,

      clientError: 10,

      serverError: 0,

    },

    {

      asset: "audio",

      key: customCacheKey,

      regex:

        /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/,

      info: 0,

      ok: 31556952,

      redirects: 30,

      clientError: 10,

      serverError: 0,

    },

    {

      asset: "directPlay",

      key: customCacheKey,

      regex: /.*(\/Download)/,

      info: 0,

      ok: 31556952,

      redirects: 30,

      clientError: 10,

      serverError: 0,

    },

    {

      asset: "manifest",

      key: customCacheKey,

      regex: /^.*\.(m3u8|mpd)/,

      info: 0,

      ok: 3,

      redirects: 2,

      clientError: 1,

      serverError: 0,

    },

  ];


  // `.find` locates the first entry in `cacheAssets` whose `regex` matches
  // the request path (via `.match`), since the array covers many media
  // types. To cache more types, add entries to the array. Refer to
  // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find

  const { asset, regex, ...cache } =

    cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? {};


  const newResponse = await fetch(request, {

    cf: {

      cacheKey: cache.key,

      polish: false,

      cacheEverything: true,

      cacheTtlByStatus: {

        "100-199": cache.info,

        "200-299": cache.ok,

        "300-399": cache.redirects,

        "400-499": cache.clientError,

        "500-599": cache.serverError,

      },

      cacheTags: ["static"],

    },

  });


  const response = new Response(newResponse.body, newResponse);


  // For debugging purposes

  response.headers.set("debug", JSON.stringify(cache));

  return response;

}


```

## Using the HTTP Cache API

The `cache` mode can be set in `fetch` options. Currently, Workers supports only the `no-store` and `no-cache` modes for controlling the cache. With `no-store`, the cache is bypassed on the way to the origin and the response is not stored. With `no-cache`, the cached response is forced to revalidate with the origin before it is served.

JavaScript

```

fetch(request, { cache: "no-store" });

fetch(request, { cache: "no-cache" });


```
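If callers express intent rather than cache modes directly, a small mapping helper keeps the two supported modes in one place. This is a sketch; the `cacheModeFor` name and the intent strings are assumptions for illustration:

```javascript
// Map a caller's intent to the cache mode Workers supports.
// Only "no-store" and "no-cache" are valid cache values here.
function cacheModeFor(intent) {
  switch (intent) {
    case "bypass": // skip the cache entirely; response is not stored
      return { cache: "no-store" };
    case "revalidate": // force revalidation with the origin
      return { cache: "no-cache" };
    default: // let the default caching behavior apply
      return {};
  }
}

// Usage (assumed): fetch(request, cacheModeFor("revalidate"));
```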


---

---
title: Conditional response
description: Return a response based on the incoming request's URL, HTTP method, User Agent, IP address, ASN or device type.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Conditional response

**Last reviewed:**  about 4 years ago 

Return a response based on the incoming request's URL, HTTP method, User Agent, IP address, ASN or device type.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/conditional-response)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request) {

    const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"];

    // Return a new Response based on a URL's hostname

    const url = new URL(request.url);

    if (BLOCKED_HOSTNAMES.includes(url.hostname)) {

      return new Response("Blocked Host", { status: 403 });

    }

    // Block paths ending in .doc or .xml based on the URL's file extension

    const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/);

    if (forbiddenExtRegExp.test(url.pathname)) {

      return new Response("Blocked Extension", { status: 403 });

    }

    // On HTTP method

    if (request.method === "POST") {

      return new Response("Response for POST");

    }

    // On User Agent

    const userAgent = request.headers.get("User-Agent") || "";

    if (userAgent.includes("bot")) {

      return new Response("Block User Agent containing bot", { status: 403 });

    }

    // On Client's IP address

    const clientIP = request.headers.get("CF-Connecting-IP");

    if (clientIP === "1.2.3.4") {

      return new Response("Block the IP 1.2.3.4", { status: 403 });

    }

    // On ASN

    if (request.cf && request.cf.asn == 64512) {

      return new Response("Block the ASN 64512 response");

    }

    // On Device Type

    // Requires Enterprise "CF-Device-Type Header" zone setting or

    // Page Rule with "Cache By Device Type" setting applied.

    const device = request.headers.get("CF-Device-Type");

    if (device === "mobile") {

      return Response.redirect("https://mobile.example.com");

    }

    console.error(

      "Getting Client's IP address, device type, and ASN are not supported in playground. Must test on a live worker",

    );

    return fetch(request);

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"];

    // Return a new Response based on a URL's hostname

    const url = new URL(request.url);

    if (BLOCKED_HOSTNAMES.includes(url.hostname)) {

      return new Response("Blocked Host", { status: 403 });

    }

    // Block paths ending in .doc or .xml based on the URL's file extension

    const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/);

    if (forbiddenExtRegExp.test(url.pathname)) {

      return new Response("Blocked Extension", { status: 403 });

    }

    // On HTTP method

    if (request.method === "POST") {

      return new Response("Response for POST");

    }

    // On User Agent

    const userAgent = request.headers.get("User-Agent") || "";

    if (userAgent.includes("bot")) {

      return new Response("Block User Agent containing bot", { status: 403 });

    }

    // On Client's IP address

    const clientIP = request.headers.get("CF-Connecting-IP");

    if (clientIP === "1.2.3.4") {

      return new Response("Block the IP 1.2.3.4", { status: 403 });

    }

    // On ASN

    if (request.cf && request.cf.asn == 64512) {

      return new Response("Block the ASN 64512 response");

    }

    // On Device Type

    // Requires Enterprise "CF-Device-Type Header" zone setting or

    // Page Rule with "Cache By Device Type" setting applied.

    const device = request.headers.get("CF-Device-Type");

    if (device === "mobile") {

      return Response.redirect("https://mobile.example.com");

    }

    console.error(

      "Getting Client's IP address, device type, and ASN are not supported in playground. Must test on a live worker",

    );

    return fetch(request);

  },

} satisfies ExportedHandler;


```

Python

```

import re

from workers import WorkerEntrypoint, Response, fetch

from urllib.parse import urlparse


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        blocked_hostnames = ["nope.mywebsite.com", "bye.website.com"]

        url = urlparse(request.url)


        # Block on hostname

        if url.hostname in blocked_hostnames:

            return Response("Blocked Host", status=403)


        # On paths ending in .doc or .xml

        if re.search(r'\.(doc|xml)$', url.path):

            return Response("Blocked Extension", status=403)


        # On HTTP method

        if request.method == "POST":

            return Response("Response for POST")


        # On User Agent

        user_agent = request.headers.get("User-Agent") or ""

        if "bot" in user_agent:

            return Response("Block User Agent containing bot", status=403)


        # On Client's IP address

        client_ip = request.headers.get("CF-Connecting-IP")

        if client_ip == "1.2.3.4":

            return Response("Block the IP 1.2.3.4", status=403)


        # On ASN

        if request.cf and request.cf.asn == 64512:

            return Response("Block the ASN 64512 response")


        # On Device Type

        # Requires Enterprise "CF-Device-Type Header" zone setting or

        # Page Rule with "Cache By Device Type" setting applied.

        device = request.headers.get("CF-Device-Type")

        if device == "mobile":

            return Response.redirect("https://mobile.example.com")


        return fetch(request)


```

TypeScript

```

import { Hono } from "hono";

import { HTTPException } from "hono/http-exception";


const app = new Hono();


// Middleware to handle all conditions before reaching the main handler

app.use("*", async (c, next) => {

  const request = c.req.raw;

  const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"];

  const hostname = new URL(c.req.url).hostname;


  // Return a new Response based on a URL's hostname

  if (BLOCKED_HOSTNAMES.includes(hostname)) {

    return c.text("Blocked Host", 403);

  }


  // Block paths ending in .doc or .xml based on the URL's file extension

  const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/);

  if (forbiddenExtRegExp.test(c.req.path)) {

    return c.text("Blocked Extension", 403);

  }


  // On User Agent

  const userAgent = c.req.header("User-Agent") || "";

  if (userAgent.includes("bot")) {

    return c.text("Block User Agent containing bot", 403);

  }


  // On Client's IP address

  const clientIP = c.req.header("CF-Connecting-IP");

  if (clientIP === "1.2.3.4") {

    return c.text("Block the IP 1.2.3.4", 403);

  }


  // On ASN

  if (request.cf && request.cf.asn === 64512) {

    return c.text("Block the ASN 64512 response");

  }


  // On Device Type

  // Requires Enterprise "CF-Device-Type Header" zone setting or

  // Page Rule with "Cache By Device Type" setting applied.

  const device = c.req.header("CF-Device-Type");

  if (device === "mobile") {

    return c.redirect("https://mobile.example.com");

  }


  // Continue to the next handler

  await next();

});


// Handle POST requests differently

app.post("*", (c) => {

  return c.text("Response for POST");

});


// Default handler for other methods

app.get("*", async (c) => {

  console.error(

    "Getting Client's IP address, device type, and ASN are not supported in playground. Must test on a live worker",

  );


  // Fetch the original request

  return fetch(c.req.raw);

});


export default app;


```


---

---
title: CORS header proxy
description: Add the necessary CORS headers to a third party API response.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# CORS header proxy

**Last reviewed:**  over 5 years ago 

Add the necessary CORS headers to a third party API response.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cors-header-proxy)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request) {

    const corsHeaders = {

      "Access-Control-Allow-Origin": "*",

      "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",

      "Access-Control-Max-Age": "86400",

    };


    // The URL for the remote third party API you want to fetch from

    // but does not implement CORS

    const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi";


    // The endpoint you want the CORS reverse proxy to be on

    const PROXY_ENDPOINT = "/corsproxy/";


    // The rest of this snippet for the demo page

    function rawHtmlResponse(html) {

      return new Response(html, {

        headers: {

          "content-type": "text/html;charset=UTF-8",

        },

      });

    }


    const DEMO_PAGE = `

      <!DOCTYPE html>

      <html>

      <body>

        <h1>API GET without CORS Proxy</h1>

        <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a>

        <p id="noproxy-status"/>

        <code id="noproxy">Waiting</code>

        <h1>API GET with CORS Proxy</h1>

        <p id="proxy-status"/>

        <code id="proxy">Waiting</code>

        <h1>API POST with CORS Proxy + Preflight</h1>

        <p id="proxypreflight-status"/>

        <code id="proxypreflight">Waiting</code>

        <script>

        let reqs = {};

        reqs.noproxy = () => {

          return fetch("${API_URL}").then(r => r.json())

        }

        reqs.proxy = async () => {

          let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}"

          return fetch(window.location.origin + href).then(r => r.json())

        }

        reqs.proxypreflight = async () => {

          let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}"

          let response = await fetch(window.location.origin + href, {

            method: "POST",

            headers: {

              "Content-Type": "application/json"

            },

            body: JSON.stringify({

              msg: "Hello world!"

            })

          })

          return response.json()

        }

        (async () => {

        for (const [reqName, req] of Object.entries(reqs)) {

          try {

            let data = await req()

            document.getElementById(reqName).innerHTML = JSON.stringify(data)

          } catch (e) {

            document.getElementById(reqName).innerHTML = e

          }

        }

      })()

        </script>

      </body>

      </html>

    `;


    async function handleRequest(request) {

      const url = new URL(request.url);

      let apiUrl = url.searchParams.get("apiurl");


      if (apiUrl == null) {

        apiUrl = API_URL;

      }


      // Rewrite request to point to API URL. This also makes the request mutable

      // so you can add the correct Origin header to make the API server think

      // that this request is not cross-site.

      request = new Request(apiUrl, request);

      request.headers.set("Origin", new URL(apiUrl).origin);

      let response = await fetch(request);

      // Recreate the response so you can modify the headers


      response = new Response(response.body, response);

      // Set CORS headers


      response.headers.set("Access-Control-Allow-Origin", url.origin);


      // Append to/Add Vary header so browser will cache response correctly

      response.headers.append("Vary", "Origin");


      return response;

    }


    async function handleOptions(request) {

      if (

        request.headers.get("Origin") !== null &&

        request.headers.get("Access-Control-Request-Method") !== null &&

        request.headers.get("Access-Control-Request-Headers") !== null

      ) {

        // Handle CORS preflight requests.

        return new Response(null, {

          headers: {

            ...corsHeaders,

            "Access-Control-Allow-Headers": request.headers.get(

              "Access-Control-Request-Headers",

            ),

          },

        });

      } else {

        // Handle standard OPTIONS request.

        return new Response(null, {

          headers: {

            Allow: "GET, HEAD, POST, OPTIONS",

          },

        });

      }

    }


    const url = new URL(request.url);

    if (url.pathname.startsWith(PROXY_ENDPOINT)) {

      if (request.method === "OPTIONS") {

        // Handle CORS preflight requests

        return handleOptions(request);

      } else if (

        request.method === "GET" ||

        request.method === "HEAD" ||

        request.method === "POST"

      ) {

        // Handle requests to the API server

        return handleRequest(request);

      } else {

        return new Response(null, {

          status: 405,

          statusText: "Method Not Allowed",

        });

      }

    } else {

      return rawHtmlResponse(DEMO_PAGE);

    }

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const corsHeaders = {

      "Access-Control-Allow-Origin": "*",

      "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",

      "Access-Control-Max-Age": "86400",

    };


    // The URL for the remote third party API you want to fetch from

    // but does not implement CORS

    const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi";


    // The endpoint you want the CORS reverse proxy to be on

    const PROXY_ENDPOINT = "/corsproxy/";


    // The rest of this snippet for the demo page

    function rawHtmlResponse(html) {

      return new Response(html, {

        headers: {

          "content-type": "text/html;charset=UTF-8",

        },

      });

    }


    const DEMO_PAGE = `

      <!DOCTYPE html>

      <html>

      <body>

        <h1>API GET without CORS Proxy</h1>

        <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a>

        <p id="noproxy-status"/>

        <code id="noproxy">Waiting</code>

        <h1>API GET with CORS Proxy</h1>

        <p id="proxy-status"/>

        <code id="proxy">Waiting</code>

        <h1>API POST with CORS Proxy + Preflight</h1>

        <p id="proxypreflight-status"/>

        <code id="proxypreflight">Waiting</code>

        <script>

        let reqs = {};

        reqs.noproxy = () => {

          return fetch("${API_URL}").then(r => r.json())

        }

        reqs.proxy = async () => {

          let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}"

          return fetch(window.location.origin + href).then(r => r.json())

        }

        reqs.proxypreflight = async () => {

          let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}"

          let response = await fetch(window.location.origin + href, {

            method: "POST",

            headers: {

              "Content-Type": "application/json"

            },

            body: JSON.stringify({

              msg: "Hello world!"

            })

          })

          return response.json()

        }

        (async () => {

        for (const [reqName, req] of Object.entries(reqs)) {

          try {

            let data = await req()

            document.getElementById(reqName).textContent = JSON.stringify(data)

          } catch (e) {

            document.getElementById(reqName).textContent = e

          }

        }

      })()

        </script>

      </body>

      </html>

    `;


    async function handleRequest(request) {

      const url = new URL(request.url);

      let apiUrl = url.searchParams.get("apiurl");


      if (apiUrl == null) {

        apiUrl = API_URL;

      }


      // Rewrite request to point to API URL. This also makes the request mutable

      // so you can add the correct Origin header to make the API server think

      // that this request is not cross-site.

      request = new Request(apiUrl, request);

      request.headers.set("Origin", new URL(apiUrl).origin);

      let response = await fetch(request);

      // Recreate the response so you can modify the headers


      response = new Response(response.body, response);

      // Set CORS headers


      response.headers.set("Access-Control-Allow-Origin", url.origin);


      // Append to/Add Vary header so browser will cache response correctly

      response.headers.append("Vary", "Origin");


      return response;

    }


    async function handleOptions(request) {

      if (

        request.headers.get("Origin") !== null &&

        request.headers.get("Access-Control-Request-Method") !== null &&

        request.headers.get("Access-Control-Request-Headers") !== null

      ) {

        // Handle CORS preflight requests.

        return new Response(null, {

          headers: {

            ...corsHeaders,

            "Access-Control-Allow-Headers": request.headers.get(

              "Access-Control-Request-Headers",

            ),

          },

        });

      } else {

        // Handle standard OPTIONS request.

        return new Response(null, {

          headers: {

            Allow: "GET, HEAD, POST, OPTIONS",

          },

        });

      }

    }


    const url = new URL(request.url);

    if (url.pathname.startsWith(PROXY_ENDPOINT)) {

      if (request.method === "OPTIONS") {

        // Handle CORS preflight requests

        return handleOptions(request);

      } else if (

        request.method === "GET" ||

        request.method === "HEAD" ||

        request.method === "POST"

      ) {

        // Handle requests to the API server

        return handleRequest(request);

      } else {

        return new Response(null, {

          status: 405,

          statusText: "Method Not Allowed",

        });

      }

    } else {

      return rawHtmlResponse(DEMO_PAGE);

    }

  },

} satisfies ExportedHandler;


```

Hono

```

import { Hono } from "hono";



// The URL for the remote third party API you want to fetch from

// but does not implement CORS

const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi";


// The endpoint you want the CORS reverse proxy to be on

const PROXY_ENDPOINT = "/corsproxy/";


const app = new Hono();


// Demo page handler

app.get("*", async (c, next) => {

  // Only handle non-proxy requests with this handler

  if (c.req.path.startsWith(PROXY_ENDPOINT)) {

    return next();

  }


  // Create the demo page HTML

  const DEMO_PAGE = `

    <!DOCTYPE html>

    <html>

    <body>

      <h1>API GET without CORS Proxy</h1>

      <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a>

      <p id="noproxy-status"/>

      <code id="noproxy">Waiting</code>

      <h1>API GET with CORS Proxy</h1>

      <p id="proxy-status"/>

      <code id="proxy">Waiting</code>

      <h1>API POST with CORS Proxy + Preflight</h1>

      <p id="proxypreflight-status"/>

      <code id="proxypreflight">Waiting</code>

      <script>

      let reqs = {};

      reqs.noproxy = () => {

        return fetch("${API_URL}").then(r => r.json())

      }

      reqs.proxy = async () => {

        let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}"

        return fetch(window.location.origin + href).then(r => r.json())

      }

      reqs.proxypreflight = async () => {

        let href = "${PROXY_ENDPOINT}?apiurl=${API_URL}"

        let response = await fetch(window.location.origin + href, {

          method: "POST",

          headers: {

            "Content-Type": "application/json"

          },

          body: JSON.stringify({

            msg: "Hello world!"

          })

        })

        return response.json()

      }

      (async () => {

      for (const [reqName, req] of Object.entries(reqs)) {

        try {

          let data = await req()

          document.getElementById(reqName).innerHTML = JSON.stringify(data)

        } catch (e) {

          document.getElementById(reqName).innerHTML = e

        }

      }

    })()

      </script>

    </body>

    </html>

  `;


  return c.html(DEMO_PAGE);

});


// CORS proxy routes

app.on(["GET", "HEAD", "POST", "OPTIONS"], PROXY_ENDPOINT + "*", async (c) => {

  const url = new URL(c.req.url);


  // Handle OPTIONS preflight requests

  if (c.req.method === "OPTIONS") {

    const origin = c.req.header("Origin");

    const requestMethod = c.req.header("Access-Control-Request-Method");

    const requestHeaders = c.req.header("Access-Control-Request-Headers");


    if (origin && requestMethod && requestHeaders) {

      // Handle CORS preflight requests

      return new Response(null, {

        headers: {

          "Access-Control-Allow-Origin": "*",

          "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",

          "Access-Control-Max-Age": "86400",

          "Access-Control-Allow-Headers": requestHeaders,

        },

      });

    } else {

      // Handle standard OPTIONS request

      return new Response(null, {

        headers: {

          Allow: "GET, HEAD, POST, OPTIONS",

        },

      });

    }

  }


  // Handle actual requests

  let apiUrl = url.searchParams.get("apiurl") || API_URL;


  // Rewrite request to point to API URL

  const modifiedRequest = new Request(apiUrl, c.req.raw);

  modifiedRequest.headers.set("Origin", new URL(apiUrl).origin);


  let response = await fetch(modifiedRequest);


  // Recreate the response so we can modify the headers

  response = new Response(response.body, response);


  // Set CORS headers

  response.headers.set("Access-Control-Allow-Origin", url.origin);


  // Append to/Add Vary header so browser will cache response correctly

  response.headers.append("Vary", "Origin");


  return response;

});


// Handle method not allowed for proxy endpoint

app.all(PROXY_ENDPOINT + "*", (c) => {

  return new Response(null, {

    status: 405,

    statusText: "Method Not Allowed",

  });

});


export default app;


```

Python

```

from workers import WorkerEntrypoint

from pyodide.ffi import to_js as _to_js

from js import Response, URL, fetch, Object, Request


def to_js(x):

    return _to_js(x, dict_converter=Object.fromEntries)


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        cors_headers = {

            "Access-Control-Allow-Origin": "*",

            "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",

            "Access-Control-Max-Age": "86400",

        }


        api_url = "https://examples.cloudflareworkers.com/demos/demoapi"


        proxy_endpoint = "/corsproxy/"


        def raw_html_response(html):

            return Response.new(html, headers=to_js({"content-type": "text/html;charset=UTF-8"}))


        demo_page = f'''

        <!DOCTYPE html>

        <html>

        <body>

        <h1>API GET without CORS Proxy</h1>

        <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a>

        <p id="noproxy-status"/>

        <code id="noproxy">Waiting</code>

        <h1>API GET with CORS Proxy</h1>

        <p id="proxy-status"/>

        <code id="proxy">Waiting</code>

        <h1>API POST with CORS Proxy + Preflight</h1>

        <p id="proxypreflight-status"/>

        <code id="proxypreflight">Waiting</code>

        <script>

        let reqs = {{}};

        reqs.noproxy = () => {{

            return fetch("{api_url}").then(r => r.json())

        }}

        reqs.proxy = async () => {{

            let href = "{proxy_endpoint}?apiurl={api_url}"

            return fetch(window.location.origin + href).then(r => r.json())

        }}

        reqs.proxypreflight = async () => {{

            let href = "{proxy_endpoint}?apiurl={api_url}"

            let response = await fetch(window.location.origin + href, {{

            method: "POST",

            headers: {{

                "Content-Type": "application/json"

            }},

            body: JSON.stringify({{

                msg: "Hello world!"

            }})

            }})

            return response.json()

        }}

        (async () => {{

        for (const [reqName, req] of Object.entries(reqs)) {{

            try {{

            let data = await req()

            document.getElementById(reqName).innerHTML = JSON.stringify(data)

            }} catch (e) {{

            document.getElementById(reqName).innerHTML = e

            }}

        }}

        }})()

        </script>

        </body>

        </html>

        '''


        async def handle_request(request):

            url = URL.new(request.url)

            api_url2 = url.searchParams["apiurl"]


            if not api_url2:

                api_url2 = api_url


            request = Request.new(api_url2, request)

            request.headers["Origin"] = (URL.new(api_url2)).origin


            response = await fetch(request)

            response = Response.new(response.body, response)

            response.headers["Access-Control-Allow-Origin"] = url.origin

            response.headers["Vary"] = "Origin"

            return response


        async def handle_options(request):

            if "Origin" in request.headers and "Access-Control-Request-Method" in request.headers and "Access-Control-Request-Headers" in request.headers:

                return Response.new(None, headers=to_js({

                **cors_headers,

                "Access-Control-Allow-Headers": request.headers["Access-Control-Request-Headers"]

                }))

            return Response.new(None, headers=to_js({"Allow": "GET, HEAD, POST, OPTIONS"}))


        url = URL.new(request.url)


        if url.pathname.startswith(proxy_endpoint):

            if request.method == "OPTIONS":

                return await handle_options(request)

            if request.method in ("GET", "HEAD", "POST"):

                return await handle_request(request)

            return Response.new(None, status=405, statusText="Method Not Allowed")

        return raw_html_response(demo_page)


```

Rust

```

use std::{borrow::Cow, collections::HashMap};

use worker::*;


fn raw_html_response(html: &str) -> Result<Response> {

    Response::from_html(html)

}

async fn handle_request(req: Request, api_url: &str) -> Result<Response> {

    let url = req.url().unwrap();

    let mut api_url2 = url

        .query_pairs()

        .find(|x| x.0 == Cow::Borrowed("apiurl"))

        .map(|x| x.1.to_string())

        .unwrap_or_default();

    if api_url2.is_empty() {

        api_url2 = api_url.to_string();

    }

    let mut request = req.clone_mut()?;

    *request.path_mut()? = api_url2.clone();

    if let url::Origin::Tuple(origin, _, _) = Url::parse(&api_url2)?.origin() {

        (*request.headers_mut()?).set("Origin", &origin)?;

    }

    let mut response = Fetch::Request(request).send().await?.cloned()?;

    let headers = response.headers_mut();

    if let url::Origin::Tuple(origin, _, _) = url.origin() {

        headers.set("Access-Control-Allow-Origin", &origin)?;

        headers.set("Vary", "Origin")?;

    }


    Ok(response)

}


fn handle_options(req: Request, cors_headers: &HashMap<&str, &str>) -> Result<Response> {

    let headers: Vec<_> = req.headers().keys().collect();

    if [

        "access-control-request-method",

        "access-control-request-headers",

        "origin",

    ]

    .iter()

    .all(|i| headers.contains(&i.to_string()))

    {

        let mut headers = Headers::new();

        for (k, v) in cors_headers.iter() {

            headers.set(k, v)?;

        }

        return Ok(Response::empty()?.with_headers(headers));

    }

    Response::empty()

}


#[event(fetch)]

async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    let cors_headers = HashMap::from([

        ("Access-Control-Allow-Origin", "*"),

        ("Access-Control-Allow-Methods", "GET,HEAD,POST,OPTIONS"),

        ("Access-Control-Max-Age", "86400"),

    ]);

    let api_url = "https://examples.cloudflareworkers.com/demos/demoapi";

    let proxy_endpoint = "/corsproxy/";

    let demo_page = format!(

r#"


<!DOCTYPE html>


<html>

<body>

<h1>API GET without CORS Proxy</h1>

<a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful">Shows TypeError: Failed to fetch since CORS is misconfigured</a>

<p id="noproxy-status"/>

<code id="noproxy">Waiting</code>

<h1>API GET with CORS Proxy</h1>

<p id="proxy-status"/>

<code id="proxy">Waiting</code>

<h1>API POST with CORS Proxy + Preflight</h1>

<p id="proxypreflight-status"/>

<code id="proxypreflight">Waiting</code>

<script>

let reqs = {{}};

reqs.noproxy = () => {{

        return fetch("{api_url}").then(r => r.json())

    }}

reqs.proxy = async () => {{

        let href = "{proxy_endpoint}?apiurl={api_url}"

        return fetch(window.location.origin + href).then(r => r.json())

    }}

reqs.proxypreflight = async () => {{

        let href = "{proxy_endpoint}?apiurl={api_url}"

        let response = await fetch(window.location.origin + href, {{

        method: "POST",

        headers: {{

            "Content-Type": "application/json"

        }},

body: JSON.stringify({{

            msg: "Hello world!"

        }})

}})

return response.json()

}}

(async () => {{

    for (const [reqName, req] of Object.entries(reqs)) {{

        try {{

        let data = await req()

        document.getElementById(reqName).innerHTML = JSON.stringify(data)

        }} catch (e) {{

        document.getElementById(reqName).innerHTML = e

        }}

}}

}})()

</script>

</body>

</html>

"#

    );


    if req.url()?.path().starts_with(proxy_endpoint) {

        match req.method() {

            Method::Options => return handle_options(req, &cors_headers),

            Method::Get | Method::Head | Method::Post => return handle_request(req, api_url).await,

            _ => return Response::error("Method Not Allowed", 405),

        }

    }

    raw_html_response(&demo_page)


}


```


---

---
title: Country code redirect
description: Redirect a response based on the country code in the header of a visitor.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Redirects ](https://developers.cloudflare.com/search/?tags=Redirects)[ Geolocation ](https://developers.cloudflare.com/search/?tags=Geolocation)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 


# Country code redirect

**Last reviewed:**  over 5 years ago 

Redirect a response based on the country code in the header of a visitor.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/country-code-redirect)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request) {

    /**

     * A map of the URLs to redirect to

     * @param {Object} countryMap

     */

    const countryMap = {

      US: "https://example.com/us",

      EU: "https://example.com/eu",

    };


    // Use the cf object to obtain the country of the request

    // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties

    const country = request.cf.country;


    if (country != null && country in countryMap) {

      const url = countryMap[country];

      // Remove this logging statement from your final output.

      console.log(

        `Based on ${country}-based request, your user would go to ${url}.`,

      );

      return Response.redirect(url);

    } else {

      return fetch("https://example.com", request);

    }

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    /**

     * A map of the URLs to redirect to

     * @param {Object} countryMap

     */

    const countryMap = {

      US: "https://example.com/us",

      EU: "https://example.com/eu",

    };


    // Use the cf object to obtain the country of the request

    // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties

    const country = request.cf.country;


    if (country != null && country in countryMap) {

      const url = countryMap[country];

      return Response.redirect(url);

    } else {

      return fetch(request);

    }

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response, fetch


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        countries = {

            "US": "https://example.com/us",

            "EU": "https://example.com/eu",

        }


        # Use the cf object to obtain the country of the request

        # more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties

        country = request.cf.country


        if country and country in countries:

            url = countries[country]

            return Response.redirect(url)


        return fetch("https://example.com", request)


```

Hono

```

import { Hono } from 'hono';


// Define the RequestWithCf interface to add Cloudflare-specific properties

interface RequestWithCf extends Request {

  cf: {

    country: string;

    // Other CF properties can be added as needed

  };

}


const app = new Hono();


app.get('*', async (c) => {

  /**

   * A map of the URLs to redirect to

   */

  const countryMap: Record<string, string> = {

    US: "https://example.com/us",

    EU: "https://example.com/eu",

  };


  // Cast the raw request to include Cloudflare-specific properties

  const request = c.req.raw as RequestWithCf;


  // Use the cf object to obtain the country of the request

  // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties

  const country = request.cf.country;


  if (country != null && country in countryMap) {

    const url = countryMap[country];

    // Redirect using Hono's redirect helper

    return c.redirect(url);

  } else {

    // Default fallback

    return fetch("https://example.com", request);

  }

});


export default app;


```


---

---
title: Setting Cron Triggers
description: Set a Cron Trigger for your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Middleware ](https://developers.cloudflare.com/search/?tags=Middleware)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript) 


# Setting Cron Triggers

**Last reviewed:**  over 4 years ago 

Set a Cron Trigger for your Worker.


JavaScript

```

export default {

  async scheduled(controller, env, ctx) {

    console.log("cron processed");

  },

};


```

TypeScript

```

interface Env {}

export default {

  async scheduled(

    controller: ScheduledController,

    env: Env,

    ctx: ExecutionContext,

  ) {

    console.log("cron processed");

  },

};


```

Python

```

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):

    async def scheduled(self, controller, env, ctx):

        print("cron processed")


```

Hono

```

import { Hono } from "hono";


interface Env {}


// Create Hono app

const app = new Hono<{ Bindings: Env }>();


// Regular routes for normal HTTP requests

app.get("/", (c) => c.text("Hello World!"));


// Export both the app and a scheduled function

export default {

  // The Hono app handles regular HTTP requests

  fetch: app.fetch,


  // The scheduled function handles Cron triggers

  async scheduled(

    controller: ScheduledController,

    env: Env,

    ctx: ExecutionContext,

  ) {

    console.log("cron processed");


    // You could also perform actions like:

    // - Fetching data from external APIs

    // - Updating KV or Durable Object storage

    // - Running maintenance tasks

    // - Sending notifications

  },

};


```

## Set Cron Triggers in Wrangler

Refer to [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) for more information on how to add a Cron Trigger.

If you are deploying with Wrangler, set the cron expression (once per hour in the example below) by adding a `triggers` section to your Wrangler file:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker",
  // ...
  "triggers": {
    "crons": [
      "0 * * * *"
    ]
  }
}
```

wrangler.toml

```toml
#:schema ./node_modules/wrangler/config-schema.json
name = "worker"

[triggers]
crons = [ "0 * * * *" ]
```

You can also set a different Cron Trigger for each [environment](https://developers.cloudflare.com/workers/wrangler/environments/) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Put the `triggers` table under your chosen environment. For example:

wrangler.jsonc

```jsonc
{
  "env": {
    "dev": {
      "triggers": {
        "crons": [
          "0 * * * *"
        ]
      }
    }
  }
}
```

wrangler.toml

```toml
[env.dev.triggers]
crons = [ "0 * * * *" ]
```
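When more than one cron expression is configured, the `scheduled` handler receives the matching expression on `controller.cron`, so a single Worker can dispatch different jobs per schedule. A minimal sketch (the two cron patterns are illustrative, and the mock invocation at the bottom only simulates what the runtime does):

```js
// Sketch: branch on controller.cron when several crons are configured.
// The patterns "0 * * * *" and "*/5 * * * *" are illustrative examples.
const worker = {
  async scheduled(controller, env, ctx) {
    switch (controller.cron) {
      case "0 * * * *":
        return "hourly job";
      case "*/5 * * * *":
        return "five-minute job";
      default:
        return "unknown cron: " + controller.cron;
    }
  },
};

// Outside the Workers runtime, simulate an invocation with a mock controller:
worker.scheduled({ cron: "0 * * * *" }, {}, {}).then(console.log); // "hourly job"
```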

## Test Cron Triggers using Wrangler

The recommended way of testing Cron Triggers is using Wrangler.

Test Cron Triggers by passing the `--test-scheduled` flag to [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev). This exposes a `/__scheduled` route (`/cdn-cgi/handler/scheduled` for Python Workers) that you can hit with an HTTP request. To simulate different cron patterns, pass a `cron` query parameter.

Terminal window

```sh
npx wrangler dev --test-scheduled

curl "http://localhost:8787/__scheduled?cron=0+*+*+*+*"

curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" # Python Workers
```


---

---
title: Data loss prevention
description: Protect sensitive data to prevent data loss, and send alerts to a webhooks server in the event of a data breach.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Security ](https://developers.cloudflare.com/search/?tags=Security)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Data loss prevention

**Last reviewed:**  over 5 years ago 

Protect sensitive data to prevent data loss, and send alerts to a webhooks server in the event of a data breach.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/data-loss-prevention)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```js
export default {
  async fetch(request) {
    const DEBUG = true;
    const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook";

    /**
     * Alert a data breach by posting to a webhook server
     */
    async function postDataBreach(request) {
      return await fetch(SOME_HOOK_SERVER, {
        method: "POST",
        headers: {
          "content-type": "application/json;charset=UTF-8",
        },
        body: JSON.stringify({
          ip: request.headers.get("cf-connecting-ip"),
          time: Date.now(),
          request: request,
        }),
      });
    }

    /**
     * Define personal data with regular expressions.
     * Respond with a block if credit card data is found, and strip
     * emails and phone numbers from the response.
     * Execution is limited to MIME type "text/*".
     */
    const response = await fetch(request);

    // Return the origin response if it was not text
    const contentType = response.headers.get("content-type") || "";
    if (!contentType.toLowerCase().includes("text/")) {
      return response;
    }

    let text = await response.text();

    // When debugging, replace the response
    // from the origin with an email
    text = DEBUG
      ? text.replace("You may use this", "me@example.com may use this")
      : text;
    const sensitiveRegexsMap = {
      creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`,
      email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`,
      phone: String.raw`\b07\d{9}\b`,
    };

    for (const kind in sensitiveRegexsMap) {
      const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig");
      // RegExp#test is synchronous, so no await is needed here
      const match = sensitiveRegex.test(text);
      if (match) {
        // Alert a data breach
        await postDataBreach(request);
        // Respond with a block if credit card,
        // otherwise replace sensitive text with `*`s
        return kind === "creditCard"
          ? new Response(kind + " found\nForbidden\n", {
              status: 403,
              statusText: "Forbidden",
            })
          : new Response(text.replace(sensitiveRegex, "**********"), response);
      }
    }
    return new Response(text, response);
  },
};
```

TypeScript

```ts
export default {
  async fetch(request): Promise<Response> {
    const DEBUG = true;
    const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook";

    /**
     * Alert a data breach by posting to a webhook server
     */
    async function postDataBreach(request: Request) {
      return await fetch(SOME_HOOK_SERVER, {
        method: "POST",
        headers: {
          "content-type": "application/json;charset=UTF-8",
        },
        body: JSON.stringify({
          ip: request.headers.get("cf-connecting-ip"),
          time: Date.now(),
          request: request,
        }),
      });
    }

    /**
     * Define personal data with regular expressions.
     * Respond with a block if credit card data is found, and strip
     * emails and phone numbers from the response.
     * Execution is limited to MIME type "text/*".
     */
    const response = await fetch(request);

    // Return the origin response if it was not text
    const contentType = response.headers.get("content-type") || "";
    if (!contentType.toLowerCase().includes("text/")) {
      return response;
    }

    let text = await response.text();

    // When debugging, replace the response
    // from the origin with an email
    text = DEBUG
      ? text.replace("You may use this", "me@example.com may use this")
      : text;
    const sensitiveRegexsMap: Record<string, string> = {
      creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`,
      email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`,
      phone: String.raw`\b07\d{9}\b`,
    };

    for (const kind in sensitiveRegexsMap) {
      const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig");
      // RegExp#test is synchronous, so no await is needed here
      const match = sensitiveRegex.test(text);
      if (match) {
        // Alert a data breach
        await postDataBreach(request);
        // Respond with a block if credit card,
        // otherwise replace sensitive text with `*`s
        return kind === "creditCard"
          ? new Response(kind + " found\nForbidden\n", {
              status: 403,
              statusText: "Forbidden",
            })
          : new Response(text.replace(sensitiveRegex, "**********"), response);
      }
    }
    return new Response(text, response);
  },
} satisfies ExportedHandler;
```

Python

```py
import re
from datetime import datetime

from workers import WorkerEntrypoint
from js import Response, fetch, JSON, Headers


# Alert a data breach by posting to a webhook server
async def post_data_breach(request):
    some_hook_server = "https://webhook.flow-wolf.io/hook"
    headers = Headers.new({"content-type": "application/json"}.items())
    body = JSON.stringify({
        "ip": request.headers["cf-connecting-ip"],
        # Serialize the timestamp so JSON.stringify can handle it
        "time": datetime.now().isoformat(),
        "request": request,
    })
    return await fetch(some_hook_server, method="POST", headers=headers, body=body)


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        debug = True

        # Define personal data with regular expressions.
        # Respond with a block if credit card data is found, and strip
        # emails and phone numbers from the response.
        # Execution is limited to MIME type "text/*".
        response = await fetch(request)

        # Return the origin response if it was not text
        content_type = response.headers["content-type"] or ""
        if "text" not in content_type:
            return response

        text = await response.text()
        # When debugging, replace the response from the origin with an email
        text = text.replace("You may use this", "me@example.com may use this") if debug else text

        sensitive_regex = [
            ("credit_card",
             r'\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b'),
            ("email", r'\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b'),
            ("phone", r'\b07\d{9}\b'),
        ]
        for kind, regex in sensitive_regex:
            match = re.search(regex, text, flags=re.IGNORECASE)
            if match:
                # Alert a data breach
                await post_data_breach(request)
                # Respond with a block if credit card, otherwise replace sensitive text with `*`s
                card_resp = Response.new(kind + " found\nForbidden\n", status=403, statusText="Forbidden")
                sensitive_resp = Response.new(re.sub(regex, "*" * 10, text, flags=re.IGNORECASE), response)
                return card_resp if kind == "credit_card" else sensitive_resp

        return Response.new(text, response)
```

Hono

```ts
import { Hono } from 'hono';

const app = new Hono();

// Configuration
const DEBUG = true;
const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook";

// Define sensitive data patterns
const sensitiveRegexsMap: Record<string, string> = {
  creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`,
  email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`,
  phone: String.raw`\b07\d{9}\b`,
};

/**
 * Alert a data breach by posting to a webhook server
 */
async function postDataBreach(request: Request) {
  return await fetch(SOME_HOOK_SERVER, {
    method: "POST",
    headers: {
      "content-type": "application/json;charset=UTF-8",
    },
    body: JSON.stringify({
      ip: request.headers.get("cf-connecting-ip"),
      time: Date.now(),
      request: request,
    }),
  });
}

// Main middleware to handle data loss prevention
app.use('*', async (c) => {
  // Fetch the origin response
  const response = await fetch(c.req.raw);

  // Return the origin response if it was not text
  const contentType = response.headers.get("content-type") || "";
  if (!contentType.toLowerCase().includes("text/")) {
    return response;
  }

  // Get the response text
  let text = await response.text();

  // When debugging, replace the response from the origin with an email
  text = DEBUG
    ? text.replace("You may use this", "me@example.com may use this")
    : text;

  // Check for sensitive data
  for (const kind in sensitiveRegexsMap) {
    const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig");
    const match = sensitiveRegex.test(text);

    if (match) {
      // Alert a data breach
      await postDataBreach(c.req.raw);

      // Respond with a block if credit card, otherwise replace sensitive text with `*`s
      if (kind === "creditCard") {
        return c.text(`${kind} found\nForbidden\n`, 403);
      } else {
        return new Response(text.replace(sensitiveRegex, "**********"), {
          status: response.status,
          statusText: response.statusText,
          headers: response.headers,
        });
      }
    }
  }

  // Return the modified response
  return new Response(text, {
    status: response.status,
    statusText: response.statusText,
    headers: response.headers,
  });
});

export default app;
```
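The redaction step itself is plain `String.prototype.replace` with a global, case-insensitive regex. A standalone sketch of the email mask used above (no Workers APIs involved; the sample input string is ours):

```js
// Standalone illustration of the masking step from the example above.
const emailPattern = String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`;
const re = new RegExp(emailPattern, "ig");

// Replace every email match with a fixed-width mask
const input = "Contact me@example.com for details.";
const masked = input.replace(re, "**********");
console.log(masked); // "Contact ********** for details."
```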


---

---
title: Debugging logs
description: Send debugging information in an errored response to a logging service.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Debugging ](https://developers.cloudflare.com/search/?tags=Debugging)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Debugging logs

**Last reviewed:**  over 5 years ago 

Send debugging information in an errored response to a logging service.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/debugging-logs)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    // Service configured to receive logs
    const LOG_URL = "https://log-service.example.com/";

    async function postLog(data) {
      return await fetch(LOG_URL, {
        method: "POST",
        body: data,
      });
    }

    let response;

    try {
      response = await fetch(request);
      if (!response.ok && !response.redirected) {
        const body = await response.text();
        throw new Error(
          "Bad response at origin. Status: " +
            response.status +
            " Body: " +
            // Ensure the string is small enough to be a header
            body.trim().substring(0, 10),
        );
      }
    } catch (err) {
      // Without ctx.waitUntil(), your fetch() to the
      // logging service may or may not complete
      ctx.waitUntil(postLog(err.toString()));
      const stack = JSON.stringify(err.stack) || err;
      // Copy the response and initialize body to the stack trace
      response = new Response(stack, response);
      // Add the error stack into a header to find out what happened
      response.headers.set("X-Debug-stack", stack);
      response.headers.set("X-Debug-err", err);
    }
    return response;
  },
};
```

TypeScript

```ts
interface Env {}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Service configured to receive logs
    const LOG_URL = "https://log-service.example.com/";

    async function postLog(data: string) {
      return await fetch(LOG_URL, {
        method: "POST",
        body: data,
      });
    }

    let response;

    try {
      response = await fetch(request);
      if (!response.ok && !response.redirected) {
        const body = await response.text();
        throw new Error(
          "Bad response at origin. Status: " +
            response.status +
            " Body: " +
            // Ensure the string is small enough to be a header
            body.trim().substring(0, 10),
        );
      }
    } catch (err) {
      // Without ctx.waitUntil(), your fetch() to the
      // logging service may or may not complete
      ctx.waitUntil(postLog(err.toString()));
      const stack = JSON.stringify(err.stack) || err;
      // Copy the response and initialize body to the stack trace
      response = new Response(stack, response);
      // Add the error stack into a header to find out what happened
      response.headers.set("X-Debug-stack", stack);
      response.headers.set("X-Debug-err", err);
    }
    return response;
  },
} satisfies ExportedHandler<Env>;
```

Python

```py
from workers import WorkerEntrypoint
from pyodide.ffi import create_proxy
from js import Response, fetch


async def post_log(data):
    # Service configured to receive logs
    log_url = "https://log-service.example.com/"
    await fetch(log_url, method="POST", body=data)


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        response = await fetch(request)

        try:
            if not response.ok and not response.redirected:
                body = await response.text()
                # Ensure the string is small enough to be a header
                raise Exception(f"Bad response at origin. Status:{response.status} Body:{body.strip()[:10]}")
        except Exception as e:
            # Without ctx.waitUntil(), your fetch() to the
            # logging service may or may not complete
            self.ctx.waitUntil(create_proxy(post_log(str(e))))
            # Copy the response, set the body to the error, and add a debug header
            response = Response.new(str(e), response)
            response.headers["X-Debug-err"] = str(e)

        return response
```

Hono

```ts
import { Hono } from 'hono';

// Define the environment with appropriate types
interface Env {}

const app = new Hono<{ Bindings: Env }>();

// Service configured to receive logs
const LOG_URL = "https://log-service.example.com/";

// Function to post logs to an external service
async function postLog(data: string) {
  return await fetch(LOG_URL, {
    method: "POST",
    body: data,
  });
}

// Middleware to handle error logging
app.use('*', async (c, next) => {
  try {
    // Process the request with the next handler
    await next();

    // After processing, check if the response indicates an error
    if (c.res && (!c.res.ok && !c.res.redirected)) {
      const body = await c.res.clone().text();
      throw new Error(
        "Bad response at origin. Status: " +
        c.res.status +
        " Body: " +
        // Ensure the string is small enough to be a header
        body.trim().substring(0, 10)
      );
    }
  } catch (err) {
    // Without waitUntil, the fetch to the logging service may not complete
    c.executionCtx.waitUntil(
      postLog(err.toString())
    );

    // Get the error stack or error itself
    const stack = JSON.stringify(err.stack) || err.toString();

    // Create a new response with the error information
    const response = c.res ?
      new Response(stack, {
        status: c.res.status,
        headers: c.res.headers
      }) :
      new Response(stack, { status: 500 });

    // Add debug headers
    response.headers.set("X-Debug-stack", stack);
    response.headers.set("X-Debug-err", err.toString());

    // Set the modified response
    c.res = response;
  }
});

// Default route handler that passes requests through
app.all('*', async (c) => {
  return fetch(c.req.raw);
});

export default app;
```
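The `body.trim().substring(0, 10)` step above exists because the error text is later copied into the `X-Debug-stack` header, and header values should stay short. As a standalone sketch (the `headerSafe` helper name is ours, not part of the example):

```js
// Truncate a response body so it fits safely into an error message / header.
function headerSafe(body, limit = 10) {
  return body.trim().substring(0, limit);
}

console.log(headerSafe("  Internal Server Error  ")); // "Internal S"
```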


---

---
title: Cookie parsing
description: Given the cookie name, get the value of a cookie. You can also use cookies for A/B testing.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Headers ](https://developers.cloudflare.com/search/?tags=Headers)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Cookie parsing

**Last reviewed:**  about 4 years ago 

Given the cookie name, get the value of a cookie. You can also use cookies for A/B testing.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/extract-cookie-value)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```js
import { parse } from "cookie";

export default {
  async fetch(request) {
    // The name of the cookie
    const COOKIE_NAME = "__uid";
    const cookie = parse(request.headers.get("Cookie") || "");
    if (cookie[COOKIE_NAME] != null) {
      // Respond with the cookie value
      return new Response(cookie[COOKIE_NAME]);
    }
    return new Response("No cookie with name: " + COOKIE_NAME);
  },
};
```

TypeScript

```ts
import { parse } from "cookie";

export default {
  async fetch(request): Promise<Response> {
    // The name of the cookie
    const COOKIE_NAME = "__uid";
    const cookie = parse(request.headers.get("Cookie") || "");
    if (cookie[COOKIE_NAME] != null) {
      // Respond with the cookie value
      return new Response(cookie[COOKIE_NAME]);
    }
    return new Response("No cookie with name: " + COOKIE_NAME);
  },
} satisfies ExportedHandler;
```

Python

```py
from http.cookies import SimpleCookie

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Name of the cookie
        cookie_name = "__uid"

        cookies = SimpleCookie(request.headers["Cookie"] or "")

        if cookie_name in cookies:
            # Respond with the cookie value
            return Response(cookies[cookie_name].value)

        return Response("No cookie with name: " + cookie_name)
```

Hono

```ts
import { Hono } from 'hono';
import { getCookie } from 'hono/cookie';

const app = new Hono();

app.get('*', (c) => {
  // The name of the cookie
  const COOKIE_NAME = "__uid";

  // Get the specific cookie value using Hono's cookie helper
  const cookieValue = getCookie(c, COOKIE_NAME);

  if (cookieValue) {
    // Respond with the cookie value
    return c.text(cookieValue);
  }

  return c.text("No cookie with name: " + COOKIE_NAME);
});

export default app;
```

External dependencies

This example requires the npm package [cookie ↗](https://www.npmjs.com/package/cookie) to be installed in your JavaScript project.

The Hono example uses the built-in cookie utilities provided by Hono, so no external dependencies are needed for that implementation.
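The description above mentions A/B testing: the usual pattern is to read the visitor's group from a cookie when present, and assign one at random otherwise. A minimal sketch of that decision (the `ab-test` cookie name and the group names are hypothetical, and a hand-rolled regex stands in for the `cookie` package to keep it dependency-free):

```js
// Decide a visitor's A/B group from the Cookie header.
// Returning visitors keep their group; new visitors get a random one.
function pickGroup(cookieHeader) {
  const match = /(?:^|;\s*)ab-test=([^;]+)/.exec(cookieHeader || "");
  if (match) return match[1]; // returning visitor: keep their group
  return Math.random() < 0.5 ? "control" : "test"; // new visitor
}

console.log(pickGroup("ab-test=control")); // "control"
```

A Worker using this would also set the cookie on the response (`Set-Cookie: ab-test=...`) so the assignment sticks.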


---

---
title: Fetch HTML
description: Send a request to a remote server, read HTML from the response, and serve that HTML.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Fetch HTML

**Last reviewed:**  about 2 years ago 

Send a request to a remote server, read HTML from the response, and serve that HTML.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/fetch-html)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```js
export default {
  async fetch(request) {
    /**
     * Replace `remote` with the host you wish to send requests to
     */
    const remote = "https://example.com";

    return await fetch(remote, request);
  },
};
```

[Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwBWAIwBmSaIBswgOzS5ALhYs2wDnC40+AkRKmyFcgLAAoAMLoqEAKY3sAESgBnGOhdRo1pSXV4CYhIqOGBbBgAiKBpbAA8AOgArFwjSVCgwe1DwqJiE5IjzKxt7CGwAFToYW184GBgwPgIoa2REuAA3OBdeBFgIAGpgdFxwW3NzOPckElxbVDhwCBIAbzMSEm66Kl4-WwheAAsACgRbAEcQWxcIAEpV9Y2SZAAqF8enl5IAJVsGuF4thIAAMzsM7MCSAB3LyHEgQQ5Aw4eZZ0SjQ1xwiDoEguey4EhnS7XCAueHoD4bF7ISm8aw3Qm2cFAhgkCKHCAQGAuJTIZBxUINWzxOnAVJmSlnCAgBBUTZQuBePYHE5g9B2AA0jOJN1uREeAF8NWYDURzKpmOpNNoePwhGJJOIZPJFEVrHYHM43B4vC0qL5-JpSCEwpEwoRNKk-BksqGImQwOgyIVLO7ShUqjVNvVGrxmq1ktYJmYVhFgIqqAB9YajTIRJS5Ob5FIG80Wq2BG26e0GJ1GRTMcxAA)

TypeScript

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    /**
     * Replace `remote` with the host you wish to send requests to
     */
    const remote = "https://example.com";

    return await fetch(remote, request);
  },
};
```

Python

```py
from workers import WorkerEntrypoint
from js import fetch


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Replace `remote` with the host you wish to send requests to
        remote = "https://example.com"
        return await fetch(remote, request)
```

Hono

```ts
import { Hono } from "hono";

const app = new Hono();

app.all("*", async (c) => {
  /**
   * Replace `remote` with the host you wish to send requests to
   */
  const remote = "https://example.com";

  // Forward the request to the remote server
  return await fetch(remote, c.req.raw);
});

export default app;
```
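Passing the incoming request as the second argument to `fetch(remote, request)` re-uses its method, headers, and body against the new host. Outside the Workers runtime, the effect is roughly the following sketch (Node 18+ WHATWG `Request`; the URLs and header name are illustrative):

```js
// Simulated incoming request, as a Worker might receive it
const incoming = new Request("https://worker.example/path", {
  headers: { "x-demo": "1" },
});

// What fetch(remote, request) roughly does: a new request against the
// remote host that carries over the original method and headers.
const outgoing = new Request("https://example.com", {
  method: incoming.method,
  headers: incoming.headers,
});

console.log(outgoing.url); // "https://example.com/"
console.log(outgoing.headers.get("x-demo")); // "1"
```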


---

---
title: Fetch JSON
description: Send a GET request and read in JSON from the response. Use to fetch external data.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ JSON ](https://developers.cloudflare.com/search/?tags=JSON)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Fetch JSON

**Last reviewed:**  about 4 years ago 

Send a GET request and read in JSON from the response. Use to fetch external data.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/fetch-json)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    const url = "https://jsonplaceholder.typicode.com/todos/1";

    // gatherResponse returns both content-type & response body as a string
    async function gatherResponse(response) {
      const { headers } = response;
      const contentType = headers.get("content-type") || "";
      if (contentType.includes("application/json")) {
        return { contentType, result: JSON.stringify(await response.json()) };
      }
      return { contentType, result: await response.text() };
    }

    const response = await fetch(url);
    const { contentType, result } = await gatherResponse(response);

    const options = { headers: { "content-type": contentType } };
    return new Response(result, options);
  },
};
```

TypeScript

```

interface Env {}

export default {

  async fetch(request, env, ctx): Promise<Response> {

    const url = "https://jsonplaceholder.typicode.com/todos/1";


    // gatherResponse returns both content-type & response body as a string

    async function gatherResponse(response: Response) {

      const { headers } = response;

      const contentType = headers.get("content-type") || "";

      if (contentType.includes("application/json")) {

        return { contentType, result: JSON.stringify(await response.json()) };

      }

      return { contentType, result: await response.text() };

    }


    const response = await fetch(url);

    const { contentType, result } = await gatherResponse(response);


    const options = { headers: { "content-type": contentType } };

    return new Response(result, options);

  },

} satisfies ExportedHandler<Env>;


```

Python

```python

from workers import WorkerEntrypoint, Response, fetch

import json


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        url = "https://jsonplaceholder.typicode.com/todos/1"


        # gather_response returns both content-type & response body as a string

        async def gather_response(response):

            headers = response.headers

            content_type = headers["content-type"] or ""


            if "application/json" in content_type:

                return (content_type, json.dumps(await response.json()))

            return (content_type, await response.text())


        response = await fetch(url)

        content_type, result = await gather_response(response)


        headers = {"content-type": content_type}

        return Response(result, headers=headers)


```

Hono

```ts

import { Hono } from 'hono';


type Env = {};


const app = new Hono<{ Bindings: Env }>();


app.get('*', async (c) => {

  const url = "https://jsonplaceholder.typicode.com/todos/1";


  // gatherResponse returns both content-type & response body as a string

  async function gatherResponse(response: Response) {

    const { headers } = response;

    const contentType = headers.get("content-type") || "";


    if (contentType.includes("application/json")) {

      return { contentType, result: JSON.stringify(await response.json()) };

    }


    return { contentType, result: await response.text() };

  }


  const response = await fetch(url);

  const { contentType, result } = await gatherResponse(response);


  return new Response(result, {

    headers: { "content-type": contentType }

  });

});


export default app;


```
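The `gatherResponse` helper is identical across the variants above. It is a pure function of the `Response` object, so it can be exercised on its own under Node 18+, which ships the same fetch-API `Response` class the Workers runtime provides:

```javascript
// Standalone sketch of the gatherResponse helper from the examples above,
// exercised against in-memory Response objects instead of a live fetch.
async function gatherResponse(response) {
  const contentType = response.headers.get("content-type") || "";
  if (contentType.includes("application/json")) {
    // Re-serialize so the caller always receives a string body.
    return { contentType, result: JSON.stringify(await response.json()) };
  }
  return { contentType, result: await response.text() };
}

const jsonResponse = new Response('{"id":1}', {
  headers: { "content-type": "application/json" },
});
const textResponse = new Response("hello", {
  headers: { "content-type": "text/plain" },
});

gatherResponse(jsonResponse).then(({ result }) => console.log(result)); // {"id":1}
gatherResponse(textResponse).then(({ result }) => console.log(result)); // hello
```

Because the JSON branch re-serializes with `JSON.stringify`, the returned `result` is always a string, which is what `new Response(result, options)` expects.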


---

---
title: "Geolocation: Weather application"
description: Fetch weather data from an API using the user's geolocation data.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Geolocation: Weather application

**Last reviewed:**  almost 5 years ago 

Fetch weather data from an API using the user's geolocation data.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-app-weather)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```js

export default {

  async fetch(request) {

    let endpoint = "https://api.waqi.info/feed/geo:";

    const token = ""; //Use a token from https://aqicn.org/api/

    let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`;


    let html_content = "<h1>Weather 🌦</h1>";


    const latitude = request.cf.latitude;

    const longitude = request.cf.longitude;

    endpoint += `${latitude};${longitude}/?token=${token}`;

    const init = {

      headers: {

        "content-type": "application/json;charset=UTF-8",

      },

    };


    const response = await fetch(endpoint, init);

    const content = await response.json();


    html_content += `<p>This is a demo using Workers geolocation data. </p>`;

    html_content += `<p>You are located at: ${latitude},${longitude}.</p>`;

    html_content += `<p>Based off sensor data from <a href="${content.data.city.url}">${content.data.city.name}</a>:</p>`;

    html_content += `<p>The AQI level is: ${content.data.aqi}.</p>`;

    html_content += `<p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>`;

    html_content += `<p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>`;

    html_content += `<p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>`;


    let html = `

      <!DOCTYPE html>

      <head>

        <title>Geolocation: Weather</title>

      </head>

      <body>

        <style>${html_style}</style>

        <div id="container">

        ${html_content}

        </div>

      </body>`;


    return new Response(html, {

      headers: {

        "content-type": "text/html;charset=UTF-8",

      },

    });

  },

};


```

TypeScript

```ts

export default {

  async fetch(request): Promise<Response> {

    let endpoint = "https://api.waqi.info/feed/geo:";

    const token = ""; //Use a token from https://aqicn.org/api/

    let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`;


    let html_content = "<h1>Weather 🌦</h1>";


    const latitude = request.cf.latitude;

    const longitude = request.cf.longitude;

    endpoint += `${latitude};${longitude}/?token=${token}`;

    const init = {

      headers: {

        "content-type": "application/json;charset=UTF-8",

      },

    };


    const response = await fetch(endpoint, init);

    const content: any = await response.json();


    html_content += `<p>This is a demo using Workers geolocation data. </p>`;

    html_content += `<p>You are located at: ${latitude},${longitude}.</p>`;

    html_content += `<p>Based off sensor data from <a href="${content.data.city.url}">${content.data.city.name}</a>:</p>`;

    html_content += `<p>The AQI level is: ${content.data.aqi}.</p>`;

    html_content += `<p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>`;

    html_content += `<p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>`;

    html_content += `<p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>`;


    let html = `

      <!DOCTYPE html>

      <head>

        <title>Geolocation: Weather</title>

      </head>

      <body>

        <style>${html_style}</style>

        <div id="container">

        ${html_content}

        </div>

      </body>`;


    return new Response(html, {

      headers: {

        "content-type": "text/html;charset=UTF-8",

      },

    });

  },

} satisfies ExportedHandler;


```

Hono

```ts

import { Hono } from 'hono';

import { html } from 'hono/html';


type Bindings = {};


interface WeatherApiResponse {

  data: {

    aqi: number;

    city: {

      name: string;

      url: string;

    };

    iaqi: {

      no2?: { v: number };

      o3?: { v: number };

      t?: { v: number };

    };

  };

}


const app = new Hono<{ Bindings: Bindings }>();


app.get('*', async (c) => {

  // Get API endpoint

  let endpoint = "https://api.waqi.info/feed/geo:";

  const token = ""; // Use a token from https://aqicn.org/api/


  // Define styles

  const html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`;


  // Get geolocation from Cloudflare request

  const req = c.req.raw;

  const latitude = req.cf?.latitude;

  const longitude = req.cf?.longitude;


  // Create complete API endpoint with coordinates

  endpoint += `${latitude};${longitude}/?token=${token}`;


  // Fetch weather data

  const init = {

    headers: {

      "content-type": "application/json;charset=UTF-8",

    },

  };

  const response = await fetch(endpoint, init);

  const content = await response.json() as WeatherApiResponse;


  // Build HTML content

  const weatherContent = html`

    <h1>Weather 🌦</h1>

    <p>This is a demo using Workers geolocation data.</p>

    <p>You are located at: ${latitude},${longitude}.</p>

    <p>Based off sensor data from <a href="${content.data.city.url}">${content.data.city.name}</a>:</p>

    <p>The AQI level is: ${content.data.aqi}.</p>

    <p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>

    <p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>

    <p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>

  `;


  // Complete HTML document

  const htmlDocument = html`

    <!DOCTYPE html>

    <head>

      <title>Geolocation: Weather</title>

    </head>

    <body>

      <style>${html_style}</style>

      <div id="container">

        ${weatherContent}

      </div>

    </body>

  `;


  // Return HTML response

  return c.html(htmlDocument);

});


export default app;


```

Python

```python

from workers import WorkerEntrypoint, Response, fetch


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        endpoint = "https://api.waqi.info/feed/geo:"

        token = "" # Use a token from https://aqicn.org/api/

        html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}"

        html_content = "<h1>Weather 🌦</h1>"


        latitude = request.cf.latitude

        longitude = request.cf.longitude


        endpoint += f"{latitude};{longitude}/?token={token}"

        response = await fetch(endpoint)

        content = await response.json()


        html_content += "<p>This is a demo using Workers geolocation data. </p>"

        html_content += f"<p>You are located at: {latitude},{longitude}.</p>"

        html_content += f"<p>Based off sensor data from <a href='{content['data']['city']['url']}'>{content['data']['city']['name']}</a>:</p>"

        html_content += f"<p>The AQI level is: {content['data']['aqi']}.</p>"

        html_content += f"<p>The NO2 level is: {content['data']['iaqi']['no2']['v']}.</p>"

        html_content += f"<p>The O3 level is: {content['data']['iaqi']['o3']['v']}.</p>"

        html_content += f"<p>The temperature is: {content['data']['iaqi']['t']['v']}°C.</p>"


        html = f"""

        <!DOCTYPE html>

          <head>

            <title>Geolocation: Weather</title>

          </head>

          <body>

            <style>{html_style}</style>

            <div id="container">

            {html_content}

            </div>

          </body>

        """


        headers = {"content-type": "text/html;charset=UTF-8"}

        return Response(html, headers=headers)


```
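Each variant assembles the API URL by appending the geolocation fields to a base endpoint. The same construction, factored into a helper with illustrative coordinates and a placeholder token:

```javascript
// Sketch of the api.waqi.info endpoint construction used above.
// The coordinates and token here are placeholders; in a Worker the
// coordinates come from request.cf and the token from aqicn.org/api.
function buildWeatherEndpoint(latitude, longitude, token) {
  // The feed path expects "geo:<lat>;<lon>" plus a token query parameter.
  return `https://api.waqi.info/feed/geo:${latitude};${longitude}/?token=${token}`;
}

console.log(buildWeatherEndpoint("51.5", "-0.12", "demo"));
// https://api.waqi.info/feed/geo:51.5;-0.12/?token=demo
```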


---

---
title: "Geolocation: Custom Styling"
description: Personalize website styling based on localized user time.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Geolocation: Custom Styling

**Last reviewed:**  about 4 years ago 

Personalize website styling based on localized user time.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-custom-styling)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```js

export default {

  async fetch(request) {

    let grads = [

      [

        { color: "00000c", position: 0 },

        { color: "00000c", position: 0 },

      ],

      [

        { color: "020111", position: 85 },

        { color: "191621", position: 100 },

      ],

      [

        { color: "020111", position: 60 },

        { color: "20202c", position: 100 },

      ],

      [

        { color: "020111", position: 10 },

        { color: "3a3a52", position: 100 },

      ],

      [

        { color: "20202c", position: 0 },

        { color: "515175", position: 100 },

      ],

      [

        { color: "40405c", position: 0 },

        { color: "6f71aa", position: 80 },

        { color: "8a76ab", position: 100 },

      ],

      [

        { color: "4a4969", position: 0 },

        { color: "7072ab", position: 50 },

        { color: "cd82a0", position: 100 },

      ],

      [

        { color: "757abf", position: 0 },

        { color: "8583be", position: 60 },

        { color: "eab0d1", position: 100 },

      ],

      [

        { color: "82addb", position: 0 },

        { color: "ebb2b1", position: 100 },

      ],

      [

        { color: "94c5f8", position: 1 },

        { color: "a6e6ff", position: 70 },

        { color: "b1b5ea", position: 100 },

      ],

      [

        { color: "b7eaff", position: 0 },

        { color: "94dfff", position: 100 },

      ],

      [

        { color: "9be2fe", position: 0 },

        { color: "67d1fb", position: 100 },

      ],

      [

        { color: "90dffe", position: 0 },

        { color: "38a3d1", position: 100 },

      ],

      [

        { color: "57c1eb", position: 0 },

        { color: "246fa8", position: 100 },

      ],

      [

        { color: "2d91c2", position: 0 },

        { color: "1e528e", position: 100 },

      ],

      [

        { color: "2473ab", position: 0 },

        { color: "1e528e", position: 70 },

        { color: "5b7983", position: 100 },

      ],

      [

        { color: "1e528e", position: 0 },

        { color: "265889", position: 50 },

        { color: "9da671", position: 100 },

      ],

      [

        { color: "1e528e", position: 0 },

        { color: "728a7c", position: 50 },

        { color: "e9ce5d", position: 100 },

      ],

      [

        { color: "154277", position: 0 },

        { color: "576e71", position: 30 },

        { color: "e1c45e", position: 70 },

        { color: "b26339", position: 100 },

      ],

      [

        { color: "163C52", position: 0 },

        { color: "4F4F47", position: 30 },

        { color: "C5752D", position: 60 },

        { color: "B7490F", position: 80 },

        { color: "2F1107", position: 100 },

      ],

      [

        { color: "071B26", position: 0 },

        { color: "071B26", position: 30 },

        { color: "8A3B12", position: 80 },

        { color: "240E03", position: 100 },

      ],

      [

        { color: "010A10", position: 30 },

        { color: "59230B", position: 80 },

        { color: "2F1107", position: 100 },

      ],

      [

        { color: "090401", position: 50 },

        { color: "4B1D06", position: 100 },

      ],

      [

        { color: "00000c", position: 80 },

        { color: "150800", position: 100 },

      ],

    ];

    async function toCSSGradient(hour) {

      let css = "linear-gradient(to bottom,";

      const data = grads[hour];

      const len = data.length;

      for (let i = 0; i < len; i++) {

        const item = data[i];

        css += ` #${item.color} ${item.position}%`;

        if (i < len - 1) css += ",";

      }

      return css + ")";

    }

    let html_content = "";

    let html_style = `

      html{width:100vw; height:100vh;}

      body{padding:0; margin:0 !important;height:100%;}

      #container {

        display: flex;

        flex-direction:column;

        align-items: center;

        justify-content: center;

        height: 100%;

        color:white;

        font-family:sans-serif;

      }`;

    const timezone = request.cf.timezone;

    console.log(timezone);

    let localized_date = new Date(

      new Date().toLocaleString("en-US", { timeZone: timezone }),

    );

    let hour = localized_date.getHours();

    let minutes = localized_date.getMinutes();

    html_content += "<h1>" + hour + ":" + minutes + "</h1>";

    html_content += "<p>" + timezone + "<br/></p>";

    html_style += "body{background:" + (await toCSSGradient(hour)) + ";}";

    let html = `

      <!DOCTYPE html>

      <head>

        <title>Geolocation: Customized Design</title>

      </head>

      <body>

        <style> ${html_style}</style>

        <div id="container">

          ${html_content}

        </div>

      </body>`;

    return new Response(html, {

      headers: { "content-type": "text/html;charset=UTF-8" },

    });

  },

};


```
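The localized-hour trick above (format the current instant in the target zone with `toLocaleString`, then re-parse the result) can be verified on its own under Node, which bundles full ICU timezone data:

```javascript
// Sketch of the timezone localization used above: render a Date in the
// target zone, then re-parse it so getHours() reflects that zone.
// The zones below are illustrative stand-ins for request.cf.timezone.
function localizedHour(timeZone, date = new Date()) {
  const localized = new Date(date.toLocaleString("en-US", { timeZone }));
  return localized.getHours();
}

const now = new Date();
const utcHour = localizedHour("UTC", now);
const tokyoHour = localizedHour("Asia/Tokyo", now);
console.log((tokyoHour - utcHour + 24) % 24); // 9 (Tokyo is UTC+9 year-round)
```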

TypeScript

```ts

export default {

  async fetch(request): Promise<Response> {

    let grads = [

      [

        { color: "00000c", position: 0 },

        { color: "00000c", position: 0 },

      ],

      [

        { color: "020111", position: 85 },

        { color: "191621", position: 100 },

      ],

      [

        { color: "020111", position: 60 },

        { color: "20202c", position: 100 },

      ],

      [

        { color: "020111", position: 10 },

        { color: "3a3a52", position: 100 },

      ],

      [

        { color: "20202c", position: 0 },

        { color: "515175", position: 100 },

      ],

      [

        { color: "40405c", position: 0 },

        { color: "6f71aa", position: 80 },

        { color: "8a76ab", position: 100 },

      ],

      [

        { color: "4a4969", position: 0 },

        { color: "7072ab", position: 50 },

        { color: "cd82a0", position: 100 },

      ],

      [

        { color: "757abf", position: 0 },

        { color: "8583be", position: 60 },

        { color: "eab0d1", position: 100 },

      ],

      [

        { color: "82addb", position: 0 },

        { color: "ebb2b1", position: 100 },

      ],

      [

        { color: "94c5f8", position: 1 },

        { color: "a6e6ff", position: 70 },

        { color: "b1b5ea", position: 100 },

      ],

      [

        { color: "b7eaff", position: 0 },

        { color: "94dfff", position: 100 },

      ],

      [

        { color: "9be2fe", position: 0 },

        { color: "67d1fb", position: 100 },

      ],

      [

        { color: "90dffe", position: 0 },

        { color: "38a3d1", position: 100 },

      ],

      [

        { color: "57c1eb", position: 0 },

        { color: "246fa8", position: 100 },

      ],

      [

        { color: "2d91c2", position: 0 },

        { color: "1e528e", position: 100 },

      ],

      [

        { color: "2473ab", position: 0 },

        { color: "1e528e", position: 70 },

        { color: "5b7983", position: 100 },

      ],

      [

        { color: "1e528e", position: 0 },

        { color: "265889", position: 50 },

        { color: "9da671", position: 100 },

      ],

      [

        { color: "1e528e", position: 0 },

        { color: "728a7c", position: 50 },

        { color: "e9ce5d", position: 100 },

      ],

      [

        { color: "154277", position: 0 },

        { color: "576e71", position: 30 },

        { color: "e1c45e", position: 70 },

        { color: "b26339", position: 100 },

      ],

      [

        { color: "163C52", position: 0 },

        { color: "4F4F47", position: 30 },

        { color: "C5752D", position: 60 },

        { color: "B7490F", position: 80 },

        { color: "2F1107", position: 100 },

      ],

      [

        { color: "071B26", position: 0 },

        { color: "071B26", position: 30 },

        { color: "8A3B12", position: 80 },

        { color: "240E03", position: 100 },

      ],

      [

        { color: "010A10", position: 30 },

        { color: "59230B", position: 80 },

        { color: "2F1107", position: 100 },

      ],

      [

        { color: "090401", position: 50 },

        { color: "4B1D06", position: 100 },

      ],

      [

        { color: "00000c", position: 80 },

        { color: "150800", position: 100 },

      ],

    ];

    async function toCSSGradient(hour: number): Promise<string> {

      let css = "linear-gradient(to bottom,";

      const data = grads[hour];

      const len = data.length;

      for (let i = 0; i < len; i++) {

        const item = data[i];

        css += ` #${item.color} ${item.position}%`;

        if (i < len - 1) css += ",";

      }

      return css + ")";

    }

    let html_content = "";

    let html_style = `

      html{width:100vw; height:100vh;}

      body{padding:0; margin:0 !important;height:100%;}

      #container {

        display: flex;

        flex-direction:column;

        align-items: center;

        justify-content: center;

        height: 100%;

        color:white;

        font-family:sans-serif;

      }`;

    const timezone = request.cf.timezone;

    console.log(timezone);

    let localized_date = new Date(

      new Date().toLocaleString("en-US", { timeZone: timezone }),

    );

    let hour = localized_date.getHours();

    let minutes = localized_date.getMinutes();

    html_content += "<h1>" + hour + ":" + minutes + "</h1>";

    html_content += "<p>" + timezone + "<br/></p>";

    html_style += "body{background:" + (await toCSSGradient(hour)) + ";}";

    let html = `

      <!DOCTYPE html>

      <head>

        <title>Geolocation: Customized Design</title>

      </head>

      <body>

        <style> ${html_style}</style>

        <div id="container">

          ${html_content}

        </div>

      </body>`;

    return new Response(html, {

      headers: { "content-type": "text/html;charset=UTF-8" },

    });

  },

} satisfies ExportedHandler;


```

Hono

```ts

import { Hono } from 'hono';


type Bindings = {};

type ColorStop = { color: string; position: number };


const app = new Hono<{ Bindings: Bindings }>();


// Gradient configurations for each hour of the day (0-23)

const grads: ColorStop[][] = [

  [

    { color: "00000c", position: 0 },

    { color: "00000c", position: 0 },

  ],

  [

    { color: "020111", position: 85 },

    { color: "191621", position: 100 },

  ],

  [

    { color: "020111", position: 60 },

    { color: "20202c", position: 100 },

  ],

  [

    { color: "020111", position: 10 },

    { color: "3a3a52", position: 100 },

  ],

  [

    { color: "20202c", position: 0 },

    { color: "515175", position: 100 },

  ],

  [

    { color: "40405c", position: 0 },

    { color: "6f71aa", position: 80 },

    { color: "8a76ab", position: 100 },

  ],

  [

    { color: "4a4969", position: 0 },

    { color: "7072ab", position: 50 },

    { color: "cd82a0", position: 100 },

  ],

  [

    { color: "757abf", position: 0 },

    { color: "8583be", position: 60 },

    { color: "eab0d1", position: 100 },

  ],

  [

    { color: "82addb", position: 0 },

    { color: "ebb2b1", position: 100 },

  ],

  [

    { color: "94c5f8", position: 1 },

    { color: "a6e6ff", position: 70 },

    { color: "b1b5ea", position: 100 },

  ],

  [

    { color: "b7eaff", position: 0 },

    { color: "94dfff", position: 100 },

  ],

  [

    { color: "9be2fe", position: 0 },

    { color: "67d1fb", position: 100 },

  ],

  [

    { color: "90dffe", position: 0 },

    { color: "38a3d1", position: 100 },

  ],

  [

    { color: "57c1eb", position: 0 },

    { color: "246fa8", position: 100 },

  ],

  [

    { color: "2d91c2", position: 0 },

    { color: "1e528e", position: 100 },

  ],

  [

    { color: "2473ab", position: 0 },

    { color: "1e528e", position: 70 },

    { color: "5b7983", position: 100 },

  ],

  [

    { color: "1e528e", position: 0 },

    { color: "265889", position: 50 },

    { color: "9da671", position: 100 },

  ],

  [

    { color: "1e528e", position: 0 },

    { color: "728a7c", position: 50 },

    { color: "e9ce5d", position: 100 },

  ],

  [

    { color: "154277", position: 0 },

    { color: "576e71", position: 30 },

    { color: "e1c45e", position: 70 },

    { color: "b26339", position: 100 },

  ],

  [

    { color: "163C52", position: 0 },

    { color: "4F4F47", position: 30 },

    { color: "C5752D", position: 60 },

    { color: "B7490F", position: 80 },

    { color: "2F1107", position: 100 },

  ],

  [

    { color: "071B26", position: 0 },

    { color: "071B26", position: 30 },

    { color: "8A3B12", position: 80 },

    { color: "240E03", position: 100 },

  ],

  [

    { color: "010A10", position: 30 },

    { color: "59230B", position: 80 },

    { color: "2F1107", position: 100 },

  ],

  [

    { color: "090401", position: 50 },

    { color: "4B1D06", position: 100 },

  ],

  [

    { color: "00000c", position: 80 },

    { color: "150800", position: 100 },

  ],

];


// Convert hour to CSS gradient

async function toCSSGradient(hour: number): Promise<string> {

  let css = "linear-gradient(to bottom,";

  const data = grads[hour];

  const len = data.length;


  for (let i = 0; i < len; i++) {

    const item = data[i];

    css += ` #${item.color} ${item.position}%`;

    if (i < len - 1) css += ",";

  }


  return css + ")";

}


app.get('*', async (c) => {

  const request = c.req.raw;


  // Base HTML style

  let html_style = `

    html{width:100vw; height:100vh;}

    body{padding:0; margin:0 !important;height:100%;}

    #container {

      display: flex;

      flex-direction:column;

      align-items: center;

      justify-content: center;

      height: 100%;

      color:white;

      font-family:sans-serif;

    }`;


  // Get timezone from Cloudflare request

  const timezone = request.cf?.timezone || 'UTC';

  console.log(timezone);


  // Get localized time

  let localized_date = new Date(

    new Date().toLocaleString("en-US", { timeZone: timezone })

  );


  let hour = localized_date.getHours();

  let minutes = localized_date.getMinutes();


  // Generate HTML content

  let html_content = `<h1>${hour}:${minutes}</h1>`;

  html_content += `<p>${timezone}<br/></p>`;


  // Add background gradient based on hour

  html_style += `body{background:${await toCSSGradient(hour)};}`;


  // Complete HTML document

  let html = `

    <!DOCTYPE html>

    <head>

      <title>Geolocation: Customized Design</title>

    </head>

    <body>

      <style>${html_style}</style>

      <div id="container">

        ${html_content}

      </div>

    </body>`;


  return c.html(html);

});


export default app;


```
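The gradient builder is pure string manipulation, so it can be checked in isolation. A compact sketch with a one-entry gradient table (the examples above define one entry per hour, 0-23), producing the same output as the loop version:

```javascript
// Refactored sketch of toCSSGradient using map/join; behavior matches
// the loop above. The table is trimmed to a single entry for brevity.
const grads = [
  [
    { color: "00000c", position: 0 },
    { color: "150800", position: 100 },
  ],
];

function toCSSGradient(hour) {
  const stops = grads[hour]
    .map((stop) => ` #${stop.color} ${stop.position}%`)
    .join(",");
  return `linear-gradient(to bottom,${stops})`;
}

console.log(toCSSGradient(0));
// linear-gradient(to bottom, #00000c 0%, #150800 100%)
```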


---

---
title: "Geolocation: Hello World"
description: Get all geolocation data fields and display them in HTML.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Geolocation: Hello World

**Last reviewed:**  about 4 years ago 

Get all geolocation data fields and display them in HTML.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-hello-world)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```js

export default {

  async fetch(request) {

    let html_content = "";

    let html_style =

      "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";


    html_content += "<p> Colo: " + request.cf.colo + "</p>";

    html_content += "<p> Country: " + request.cf.country + "</p>";

    html_content += "<p> City: " + request.cf.city + "</p>";

    html_content += "<p> Continent: " + request.cf.continent + "</p>";

    html_content += "<p> Latitude: " + request.cf.latitude + "</p>";

    html_content += "<p> Longitude: " + request.cf.longitude + "</p>";

    html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>";

    html_content += "<p> MetroCode: " + request.cf.metroCode + "</p>";

    html_content += "<p> Region: " + request.cf.region + "</p>";

    html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>";

    html_content += "<p> Timezone: " + request.cf.timezone + "</p>";


    let html = `<!DOCTYPE html>

      <head>

        <title> Geolocation: Hello World </title>

        <style> ${html_style} </style>

      </head>

      <body>

        <h1>Geolocation: Hello World!</h1>

        <p>You now have access to geolocation data about where your user is visiting from.</p>

        ${html_content}

      </body>`;


    return new Response(html, {

      headers: {

        "content-type": "text/html;charset=UTF-8",

      },

    });

  },

};


```

TypeScript

```ts

export default {
  async fetch(request): Promise<Response> {
    let html_content = "";
    let html_style =
      "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

    html_content += "<p> Colo: " + request.cf.colo + "</p>";
    html_content += "<p> Country: " + request.cf.country + "</p>";
    html_content += "<p> City: " + request.cf.city + "</p>";
    html_content += "<p> Continent: " + request.cf.continent + "</p>";
    html_content += "<p> Latitude: " + request.cf.latitude + "</p>";
    html_content += "<p> Longitude: " + request.cf.longitude + "</p>";
    html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>";
    html_content += "<p> MetroCode: " + request.cf.metroCode + "</p>";
    html_content += "<p> Region: " + request.cf.region + "</p>";
    html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>";
    html_content += "<p> Timezone: " + request.cf.timezone + "</p>";

    let html = `<!DOCTYPE html>
      <head>
        <title> Geolocation: Hello World </title>
        <style> ${html_style} </style>
      </head>
      <body>
        <h1>Geolocation: Hello World!</h1>
        <p>You now have access to geolocation data about where your user is visiting from.</p>
        ${html_content}
      </body>`;

    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
      },
    });
  },
} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        html_content = ""
        html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}"

        html_content += "<p> Colo: " + request.cf.colo + "</p>"
        html_content += "<p> Country: " + request.cf.country + "</p>"
        html_content += "<p> City: " + request.cf.city + "</p>"
        html_content += "<p> Continent: " + request.cf.continent + "</p>"
        html_content += "<p> Latitude: " + request.cf.latitude + "</p>"
        html_content += "<p> Longitude: " + request.cf.longitude + "</p>"
        html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>"
        html_content += "<p> Region: " + request.cf.region + "</p>"
        html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>"
        html_content += "<p> Timezone: " + request.cf.timezone + "</p>"

        html = f"""
        <!DOCTYPE html>
          <head>
            <title> Geolocation: Hello World </title>
            <style> {html_style} </style>
          </head>
          <body>
            <h1>Geolocation: Hello World!</h1>
            <p>You now have access to geolocation data about where your user is visiting from.</p>
            {html_content}
          </body>
        """

        headers = {"content-type": "text/html;charset=UTF-8"}
        return Response(html, headers=headers)


```

Hono

```

import { Hono } from "hono";
import { html } from "hono/html";

// Define the RequestWithCf interface to add Cloudflare-specific properties
interface RequestWithCf extends Request {
  cf: {
    // Cloudflare-specific properties for geolocation
    colo: string;
    country: string;
    city: string;
    continent: string;
    latitude: string;
    longitude: string;
    postalCode: string;
    metroCode: string;
    region: string;
    regionCode: string;
    timezone: string;
    // Add other CF properties as needed
  };
}

const app = new Hono();

app.get("*", (c) => {
  // Cast the raw request to include Cloudflare-specific properties
  const request = c.req.raw as RequestWithCf;

  // Define styles
  const html_style =
    "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

  // Create content with geolocation data
  let html_content = html` <p>Colo: ${request.cf.colo}</p>
    <p>Country: ${request.cf.country}</p>
    <p>City: ${request.cf.city}</p>
    <p>Continent: ${request.cf.continent}</p>
    <p>Latitude: ${request.cf.latitude}</p>
    <p>Longitude: ${request.cf.longitude}</p>
    <p>PostalCode: ${request.cf.postalCode}</p>
    <p>MetroCode: ${request.cf.metroCode}</p>
    <p>Region: ${request.cf.region}</p>
    <p>RegionCode: ${request.cf.regionCode}</p>
    <p>Timezone: ${request.cf.timezone}</p>`;

  // Compose the full HTML
  const htmlContent = html`<!DOCTYPE html>
    <head>
      <title>Geolocation: Hello World</title>
      <style>
        ${html_style}
      </style>
    </head>
    <body>
      <h1>Geolocation: Hello World!</h1>
      <p>
        You now have access to geolocation data about where your user is
        visiting from.
      </p>
      ${html_content}
    </body> `;

  // Return the HTML response
  return c.html(htmlContent);
});

export default app;


```


---

---
title: Hot-link protection
description: Block other websites from linking to your content. This is useful for protecting images.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Security ](https://developers.cloudflare.com/search/?tags=Security)[ Headers ](https://developers.cloudflare.com/search/?tags=Headers)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Hot-link protection

**Last reviewed:**  over 5 years ago 

Block other websites from linking to your content. This is useful for protecting images.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/hot-link-protection)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {
  async fetch(request) {
    const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
    const PROTECTED_TYPE = "image/";

    // Fetch the original request
    const response = await fetch(request);

    // If it's an image, engage hotlink protection based on the
    // Referer header.
    const referer = request.headers.get("Referer");
    const contentType = response.headers.get("Content-Type") || "";

    if (referer && contentType.startsWith(PROTECTED_TYPE)) {
      // If the hostnames don't match, it's a hotlink
      if (new URL(referer).hostname !== new URL(request.url).hostname) {
        // Redirect the user to your website
        return Response.redirect(HOMEPAGE_URL, 302);
      }
    }

    // Everything is fine, return the response normally.
    return response;
  },
};


```

TypeScript

```

export default {
  async fetch(request): Promise<Response> {
    const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
    const PROTECTED_TYPE = "image/";

    // Fetch the original request
    const response = await fetch(request);

    // If it's an image, engage hotlink protection based on the
    // Referer header.
    const referer = request.headers.get("Referer");
    const contentType = response.headers.get("Content-Type") || "";

    if (referer && contentType.startsWith(PROTECTED_TYPE)) {
      // If the hostnames don't match, it's a hotlink
      if (new URL(referer).hostname !== new URL(request.url).hostname) {
        // Redirect the user to your website
        return Response.redirect(HOMEPAGE_URL, 302);
      }
    }

    // Everything is fine, return the response normally.
    return response;
  },
} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response, fetch
from urllib.parse import urlparse


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        homepage_url = "https://tutorial.cloudflareworkers.com/"
        protected_type = "image/"

        # Fetch the original request
        response = await fetch(request)

        # If it's an image, engage hotlink protection based on the referer header.
        # Use .get() so a missing header yields None instead of raising.
        referer = request.headers.get("Referer")
        content_type = response.headers.get("Content-Type") or ""

        if referer and content_type.startswith(protected_type):
            # If the hostnames don't match, it's a hotlink
            if urlparse(referer).hostname != urlparse(request.url).hostname:
                # Redirect the user to your website
                return Response.redirect(homepage_url, 302)

        # Everything is fine, return the response normally
        return response


```

Hono

```

import { Hono } from 'hono';

const app = new Hono();

// Middleware for hot-link protection
app.use('*', async (c, next) => {
  const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
  const PROTECTED_TYPE = "image/";

  // Continue to the next handler to get the response
  await next();

  // If we have a response, check for hotlinking
  if (c.res) {
    // If it's an image, engage hotlink protection based on the Referer header
    const referer = c.req.header("Referer");
    const contentType = c.res.headers.get("Content-Type") || "";

    if (referer && contentType.startsWith(PROTECTED_TYPE)) {
      // If the hostnames don't match, it's a hotlink
      if (new URL(referer).hostname !== new URL(c.req.url).hostname) {
        // Redirect the user to your website
        c.res = c.redirect(HOMEPAGE_URL, 302);
      }
    }
  }
});

// Default route handler that passes through the request to the origin
app.all('*', async (c) => {
  // Fetch the original request
  return fetch(c.req.raw);
});

export default app;


```
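The decision each variant above makes can be reduced to a small pure function. This is a sketch for clarity; the `isHotlink` name is ours and not part of the example repository:

```javascript
// Decide whether a request for a protected asset is a hotlink:
// the Referer's hostname must match the hostname being requested.
function isHotlink(referer, requestUrl) {
  // Requests without a Referer (for example, direct visits) are allowed.
  if (!referer) return false;
  // If the hostnames differ, another site is embedding the asset.
  return new URL(referer).hostname !== new URL(requestUrl).hostname;
}
```

Keeping the check free of `fetch` and `Response` makes the redirect-vs-passthrough logic easy to unit test outside the Workers runtime.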


---

---
title: Custom Domain with Images
description: Set up custom domain for Images using a Worker or serve images using a prefix path and Cloudflare registered domain.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Custom Domain with Images

**Last reviewed:**  over 3 years ago 

Set up a custom domain for Images using a Worker, or serve images using a prefix path and a Cloudflare-registered domain.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/images-workers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

To serve images from a custom domain:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select **Create application** \> **Workers** \> **Create Worker** and create your Worker.
3. In your Worker, select **Quick edit** and paste the following code.

JavaScript

```

export default {
  async fetch(request) {
    // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
    const accountHash = "";

    const { pathname } = new URL(request.url);

    // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
    // will fetch "https://imagedelivery.net/<accountHash>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
    return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
  },
};


```

TypeScript

```

export default {
  async fetch(request): Promise<Response> {
    // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
    const accountHash = "";

    const { pathname } = new URL(request.url);

    // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
    // will fetch "https://imagedelivery.net/<accountHash>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
    return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
  },
} satisfies ExportedHandler;


```

Hono

```

import { Hono } from 'hono';

interface Env {
  // You can store your account hash as a binding variable
  ACCOUNT_HASH?: string;
}

const app = new Hono<{ Bindings: Env }>();

app.get('*', async (c) => {
  // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
  // Either get it from the environment or hardcode it here
  const accountHash = c.env.ACCOUNT_HASH || "";

  const url = new URL(c.req.url);

  // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
  // will fetch "https://imagedelivery.net/<accountHash>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
  return fetch(`https://imagedelivery.net/${accountHash}${url.pathname}`);
});

export default app;


```

Python

```

from workers import WorkerEntrypoint
from js import URL, fetch


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
        account_hash = ""
        url = URL.new(request.url)

        # A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
        # will fetch "https://imagedelivery.net/<accountHash>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
        return await fetch(f'https://imagedelivery.net/{account_hash}{url.pathname}')

```
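The URL rewrite that each variant above performs can be sketched as a standalone function; the `imageDeliveryUrl` name here is hypothetical, used only for illustration:

```javascript
// Map an incoming request URL onto the imagedelivery.net URL for a
// given account hash: the pathname is carried over unchanged.
function imageDeliveryUrl(accountHash, requestUrl) {
  const { pathname } = new URL(requestUrl);
  return `https://imagedelivery.net/${accountHash}${pathname}`;
}

// A request to cdn.example.com/<IMAGE_ID>/<VARIANT_NAME> becomes
// https://imagedelivery.net/<ACCOUNT_HASH>/<IMAGE_ID>/<VARIANT_NAME>
```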

Another way to serve images from a custom domain is to use the `cdn-cgi/imagedelivery` prefix path, which triggers the `cdn-cgi` image proxy.

Below is an example where the hostname is a Cloudflare-proxied domain under the same account as the image, followed by the prefix path and the image's `<ACCOUNT_HASH>`, `<IMAGE_ID>`, and `<VARIANT_NAME>`, all of which can be found under **Images** in the Cloudflare dashboard.

JavaScript

```

https://example.com/cdn-cgi/imagedelivery/<ACCOUNT_HASH>/<IMAGE_ID>/<VARIANT_NAME>


```


---

---
title: Logging headers to console
description: Examine the contents of a Headers object by logging to console with a Map.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Debugging ](https://developers.cloudflare.com/search/?tags=Debugging)[ Headers ](https://developers.cloudflare.com/search/?tags=Headers)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ Rust ](https://developers.cloudflare.com/search/?tags=Rust)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Logging headers to console

**Last reviewed:**  over 5 years ago 

Examine the contents of a Headers object by logging to console with a Map.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/logging-headers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {
  async fetch(request) {
    console.log(new Map(request.headers));
    return new Response("Hello world");
  },
};


```

TypeScript

```

export default {
  async fetch(request): Promise<Response> {
    console.log(new Map(request.headers));
    return new Response("Hello world");
  },
} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        print(dict(request.headers))
        return Response('Hello world')


```

Rust

```

use worker::*;

#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<Response> {
    console_log!("{:?}", req.headers());
    Response::ok("hello world")
}


```

Hono

```

import { Hono } from 'hono';

const app = new Hono();

app.get('*', (c) => {
  // Different ways to log headers in Hono:

  // 1. Using Map to display headers in console
  console.log('Headers as Map:', new Map(c.req.raw.headers));

  // 2. Using spread operator to log headers
  console.log('Headers spread:', [...c.req.raw.headers]);

  // 3. Using Object.fromEntries to convert to an object
  console.log('Headers as Object:', Object.fromEntries(c.req.raw.headers));

  // 4. Hono's built-in header accessor (for individual headers)
  console.log('User-Agent:', c.req.header('User-Agent'));

  // 5. Calling c.req.header() with no arguments to get all headers
  console.log('All headers from Hono context:', c.req.header());

  return c.text('Hello world');
});

export default app;


```

---

## Console-logging headers

Use a `Map` if you need to log a `Headers` object to the console:

JavaScript

```

console.log(new Map(request.headers));


```

Use the spread operator if you need to quickly stringify a `Headers` object:

JavaScript

```

let requestHeaders = JSON.stringify([...request.headers]);


```

Use `Object.fromEntries` to convert the headers to an object:

JavaScript

```

let requestHeaders = Object.fromEntries(request.headers);


```

### The problem

When debugging Workers, you will often want to examine the headers on a request or response. A common mistake is to try to log them to the developer console with code like this:

JavaScript

```

console.log(request.headers);


```

Or this:

JavaScript

```

console.log(`Request headers: ${JSON.stringify(request.headers)}`);


```

Both attempts result in what appears to be an empty object (the string `"{}"`), even though calling `request.headers.has("Your-Header-Name")` might return `true`. This matches the behavior browsers implement.

This happens because [Headers ↗](https://developer.mozilla.org/en-US/docs/Web/API/Headers) objects do not store headers in enumerable JavaScript properties, so the developer console and JSON stringifier cannot read the names and values of the headers. The object is not actually empty, but opaque.
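You can observe this in any runtime that implements the Fetch API, such as Workers or Node.js 18+:

```javascript
const headers = new Headers({ "X-Example": "bar" });

// The header is really there...
console.log(headers.has("X-Example")); // true

// ...but stringifying shows an "empty" object, because the entries
// are not stored as enumerable properties.
console.log(JSON.stringify(headers)); // "{}"
```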

`Headers` objects are iterable, which you can take advantage of to develop a couple of quick one-liners for debug-printing headers.

### Pass headers through a Map

The first common idiom for making Headers `console.log()`\-friendly is to construct a `Map` object from the `Headers` object and log the `Map` object.

JavaScript

```

console.log(new Map(request.headers));


```

This works because:

* `Map` objects can be constructed from iterables, like `Headers`.
* The `Map` object does store its entries in enumerable JavaScript properties, so the developer console can see into it.

### Spread headers into an array

The `Map` approach works for calls to `console.log()`. If you need to stringify your headers, you will discover that stringifying a `Map` yields nothing more than `[object Map]`.

Even though a `Map` stores its data in enumerable properties, those properties are [Symbol ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Symbol)\-keyed. Because of this, `JSON.stringify()` will [ignore Symbol-keyed properties ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Symbol#symbols%5Fand%5Fjson.stringify) and you will receive an empty `{}`.
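Both failure modes are easy to verify in any modern JavaScript runtime:

```javascript
const map = new Map([["accept", "text/html"]]);

// String conversion falls back to Object.prototype.toString:
console.log(String(map)); // "[object Map]"

// JSON.stringify cannot serialize the Map's entries:
console.log(JSON.stringify(map)); // "{}"
```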

Instead, you can take advantage of the iterability of the `Headers` object in a new way by applying the [spread operator ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread%5Fsyntax) (`...`) to it.

JavaScript

```

let requestHeaders = JSON.stringify([...request.headers], null, 2);
console.log(`Request headers: ${requestHeaders}`);


```

### Convert headers into an object with Object.fromEntries (ES2019)

ES2019 provides [Object.fromEntries ↗](https://github.com/tc39/proposal-object-from-entries), which converts the headers into an object:

JavaScript

```

let headersObject = Object.fromEntries(request.headers);
let requestHeaders = JSON.stringify(headersObject, null, 2);
console.log(`Request headers: ${requestHeaders}`);


```

This results in something like:

JavaScript

```

Request headers: {
  "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
  "accept-encoding": "gzip",
  "accept-language": "en-US,en;q=0.9",
  "cf-ipcountry": "US",
  // ...
}


```


---

---
title: Modify request property
description: Create a modified request with edited properties based off of an incoming request.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Middleware ](https://developers.cloudflare.com/search/?tags=Middleware)[ Headers ](https://developers.cloudflare.com/search/?tags=Headers)[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ Python ](https://developers.cloudflare.com/search/?tags=Python) 

# Modify request property

**Last reviewed:**  over 5 years ago 

Create a modified request with edited properties based on an incoming request.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/modify-request-property)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {
  async fetch(request) {
    /**
     * Example: someHost is set up to return raw JSON.
     * someUrl is the URL to send the request to; since we also set the
     * hostname below, only its path is applied.
     * someHost is the host the request will resolve to.
     */
    const someHost = "example.com";
    const someUrl = "https://foo.example.com/api.js";

    /**
     * The best practice is to only assign new RequestInit properties
     * on the request object using either a method or the constructor
     */
    const newRequestInit = {
      // Change method
      method: "POST",
      // Change body
      body: JSON.stringify({ bar: "foo" }),
      // Change the redirect mode.
      redirect: "follow",
      // Change headers, note this method will erase existing headers
      headers: {
        "Content-Type": "application/json",
      },
      // Change a Cloudflare feature on the outbound request
      cf: { apps: false },
    };

    // Change just the host
    const url = new URL(someUrl);
    url.hostname = someHost;

    // Best practice is to always use the original request to construct the new request
    // to clone all the attributes. Applying the URL also requires a constructor
    // since once a Request has been constructed, its URL is immutable.
    const newRequest = new Request(
      url.toString(),
      new Request(request, newRequestInit),
    );

    // Set headers using method
    newRequest.headers.set("X-Example", "bar");
    newRequest.headers.set("Content-Type", "application/json");

    try {
      return await fetch(newRequest);
    } catch (e) {
      return new Response(JSON.stringify({ error: e.message }), {
        status: 500,
      });
    }
  },
};


```

TypeScript

```

export default {
  async fetch(request): Promise<Response> {
    /**
     * Example: someHost is set up to return raw JSON.
     * someUrl is the URL to send the request to; since we also set the
     * hostname below, only its path is applied.
     * someHost is the host the request will resolve to.
     */
    const someHost = "example.com";
    const someUrl = "https://foo.example.com/api.js";

    /**
     * The best practice is to only assign new RequestInit properties
     * on the request object using either a method or the constructor
     */
    const newRequestInit = {
      // Change method
      method: "POST",
      // Change body
      body: JSON.stringify({ bar: "foo" }),
      // Change the redirect mode.
      redirect: "follow",
      // Change headers, note this method will erase existing headers
      headers: {
        "Content-Type": "application/json",
      },
      // Change a Cloudflare feature on the outbound request
      cf: { apps: false },
    };

    // Change just the host
    const url = new URL(someUrl);
    url.hostname = someHost;

    // Best practice is to always use the original request to construct the new request
    // to clone all the attributes. Applying the URL also requires a constructor
    // since once a Request has been constructed, its URL is immutable.
    const newRequest = new Request(
      url.toString(),
      new Request(request, newRequestInit),
    );

    // Set headers using method
    newRequest.headers.set("X-Example", "bar");
    newRequest.headers.set("Content-Type", "application/json");

    try {
      return await fetch(newRequest);
    } catch (e) {
      return new Response(JSON.stringify({ error: e.message }), {
        status: 500,
      });
    }
  },
} satisfies ExportedHandler;


```

Python

```

import json

from workers import WorkerEntrypoint
from pyodide.ffi import to_js as _to_js
from js import Object, URL, Request, fetch, Response


def to_js(obj):
    return _to_js(obj, dict_converter=Object.fromEntries)


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        some_host = "example.com"
        some_url = "https://foo.example.com/api.js"

        # The best practice is to only assign new_request_init properties
        # on the request object using either a method or the constructor
        new_request_init = {
            "method": "POST",  # Change method
            "body": json.dumps({"bar": "foo"}),  # Change body
            "redirect": "follow",  # Change the redirect mode
            # Change headers, note this method will erase existing headers
            "headers": {
                "Content-Type": "application/json",
            },
            # Change a Cloudflare feature on the outbound request
            "cf": {"apps": False},
        }

        # Change just the host
        url = URL.new(some_url)
        url.hostname = some_host

        # Best practice is to always use the original request to construct the new request
        # to clone all the attributes. Applying the URL also requires a constructor
        # since once a Request has been constructed, its URL is immutable.
        # Convert the Python dict to a JS object before passing it to Request.
        org_request = Request.new(request, to_js(new_request_init))
        new_request = Request.new(url.toString(), org_request)

        new_request.headers.set("X-Example", "bar")
        new_request.headers.set("Content-Type", "application/json")

        try:
            return await fetch(new_request)
        except Exception as e:
            return Response.new(json.dumps({"error": str(e)}), status=500)


```

Hono

```

import { Hono } from "hono";
import { HTTPException } from "hono/http-exception";

const app = new Hono();

app.all("*", async (c) => {
  /**
   * Example: someHost is set up to return raw JSON.
   */
  const someHost = "example.com";
  const someUrl = "https://foo.example.com/api.js";

  // Create a URL object to modify the hostname
  const url = new URL(someUrl);
  url.hostname = someHost;

  // Create a new request
  // First create a clone of the original request with the new properties
  const requestClone = new Request(c.req.raw, {
    // Change method
    method: "POST",
    // Change body
    body: JSON.stringify({ bar: "foo" }),
    // Change the redirect mode
    redirect: "follow" as RequestRedirect,
    // Change headers, note this method will erase existing headers
    headers: {
      "Content-Type": "application/json",
      "X-Example": "bar",
    },
    // Change a Cloudflare feature on the outbound request
    cf: { apps: false },
  });

  // Then create a new request with the modified URL
  const newRequest = new Request(url.toString(), requestClone);

  // Send the modified request
  const response = await fetch(newRequest);

  // Return the response
  return response;
});

// Handle errors
app.onError((err, c) => {
  // Only HTTPException carries a prepared response
  if (err instanceof HTTPException) {
    return err.getResponse();
  }
  return c.text("Internal Server Error", 500);
});

export default app;


```
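The comment about URL immutability in the examples above can be demonstrated standalone. This is a sketch: the `reroute` name is hypothetical, and it copies `method` and `headers` explicitly so it runs outside Workers, whereas on Workers you can pass the original `Request` itself as the init argument:

```javascript
// A Request's URL is read-only, so the only way to "change" it is to
// construct a new Request with the modified URL string.
function reroute(original, newHost) {
  const url = new URL(original.url);
  url.hostname = newHost;
  return new Request(url.toString(), {
    method: original.method,
    headers: original.headers,
  });
}
```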


---

---
title: Modify response
description: Fetch and modify response properties which are immutable by creating a copy first.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Modify response

**Last reviewed:**  over 5 years ago 

Fetch and modify response properties which are immutable by creating a copy first.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/modify-response)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request) {

    /**

     * @param {string} headerNameSrc Header to get the new value from

     * @param {string} headerNameDst Header to set based off of value in src

     */

    const headerNameSrc = "foo"; //"Orig-Header"

    const headerNameDst = "Last-Modified";


    /**

     * Response properties are immutable. To change them, construct a new

     * Response and pass modified status or statusText in the ResponseInit

     * object. Response headers can be modified through the headers `set` method.

     */

    const originalResponse = await fetch(request);


    // Change status and statusText, but preserve body and headers

    let response = new Response(originalResponse.body, {

      status: 500,

      statusText: "some message",

      headers: originalResponse.headers,

    });


    // Change response body by adding the foo prop

    const originalBody = await originalResponse.json();

    const body = JSON.stringify({ foo: "bar", ...originalBody });

    response = new Response(body, response);


    // Add a header using set method

    response.headers.set("foo", "bar");


    // Set destination header to the value of the source header

    const src = response.headers.get(headerNameSrc);


    if (src != null) {

      response.headers.set(headerNameDst, src);

      console.log(

        `Response header "${headerNameDst}" was set to "${response.headers.get(

          headerNameDst,

        )}"`,

      );

    }

    return response;

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    /**

     * @param {string} headerNameSrc Header to get the new value from

     * @param {string} headerNameDst Header to set based off of value in src

     */

    const headerNameSrc = "foo"; //"Orig-Header"

    const headerNameDst = "Last-Modified";


    /**

     * Response properties are immutable. To change them, construct a new

     * Response and pass modified status or statusText in the ResponseInit

     * object. Response headers can be modified through the headers `set` method.

     */

    const originalResponse = await fetch(request);


    // Change status and statusText, but preserve body and headers

    let response = new Response(originalResponse.body, {

      status: 500,

      statusText: "some message",

      headers: originalResponse.headers,

    });


    // Change response body by adding the foo prop

    const originalBody = await originalResponse.json();

    const body = JSON.stringify({ foo: "bar", ...originalBody });

    response = new Response(body, response);


    // Add a header using set method

    response.headers.set("foo", "bar");


    // Set destination header to the value of the source header

    const src = response.headers.get(headerNameSrc);


    if (src != null) {

      response.headers.set(headerNameDst, src);

      console.log(

        `Response header "${headerNameDst}" was set to "${response.headers.get(

          headerNameDst,

        )}"`,

      );

    }

    return response;

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response, fetch

import json


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        header_name_src = "foo" # Header to get the new value from

        header_name_dst = "Last-Modified" # Header to set based off of value in src


        # Response properties are immutable. To change them, construct a new response

        original_response = await fetch(request)


        # Change status and statusText, but preserve body and headers

        response = Response(original_response.body, status=500, status_text="some message", headers=original_response.headers)


        # Change response body by adding the foo prop

        new_body = await original_response.json()

        new_body["foo"] = "bar"

        response.replace_body(json.dumps(new_body))


        # Add a new header

        response.headers["foo"] = "bar"


        # Set destination header to the value of the source header

        src = response.headers[header_name_src]


        if src is not None:

            response.headers[header_name_dst] = src

            print(f'Response header {header_name_dst} was set to {response.headers[header_name_dst]}')


        return response


```

Hono

```

import { Hono } from 'hono';


const app = new Hono();


app.get('*', async (c) => {

  /**

   * Header configuration

   */

  const headerNameSrc = "foo"; // Header to get the new value from

  const headerNameDst = "Last-Modified"; // Header to set based off of value in src


  /**

   * Response properties are immutable. With Hono, we can modify the response

   * by creating custom response objects.

   */

  const originalResponse = await fetch(c.req.raw);


  // Get the JSON body from the original response

  const originalBody = await originalResponse.json();


  // Modify the body by adding a new property

  const modifiedBody = {

    foo: "bar",

    ...originalBody

  };


  // Create a new custom response with modified status, headers, and body

  const response = new Response(JSON.stringify(modifiedBody), {

    status: 500,

    statusText: "some message",

    headers: originalResponse.headers,

  });


  // Add a header using set method

  response.headers.set("foo", "bar");


  // Set destination header to the value of the source header

  const src = response.headers.get(headerNameSrc);

  if (src != null) {

    response.headers.set(headerNameDst, src);

    console.log(

      `Response header "${headerNameDst}" was set to "${response.headers.get(headerNameDst)}"`

    );

  }


  return response;

});


export default app;


```


---

---
title: Multiple Cron Triggers
description: Set multiple Cron Triggers on three different schedules.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Multiple Cron Triggers

**Last reviewed:**  over 4 years ago 

Set multiple Cron Triggers on three different schedules.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/multiple-cron-triggers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async scheduled(event, env, ctx) {

    // Write code for updating your API

    switch (event.cron) {

      case "*/3 * * * *":

        // Every three minutes

        await updateAPI();

        break;

      case "*/10 * * * *":

        // Every ten minutes

        await updateAPI2();

        break;

      case "*/45 * * * *":

        // At minutes 0 and 45 of each hour

        await updateAPI3();

        break;

    }

    console.log("cron processed");

  },

};


```

TypeScript

```

interface Env {}

export default {

  async scheduled(

    controller: ScheduledController,

    env: Env,

    ctx: ExecutionContext,

  ) {

    // Write code for updating your API

    switch (controller.cron) {

      case "*/3 * * * *":

        // Every three minutes

        await updateAPI();

        break;

      case "*/10 * * * *":

        // Every ten minutes

        await updateAPI2();

        break;

      case "*/45 * * * *":

        // At minutes 0 and 45 of each hour

        await updateAPI3();

        break;

    }

    console.log("cron processed");

  },

};


```

Hono

```

import { Hono } from "hono";


interface Env {}


// Create Hono app

const app = new Hono<{ Bindings: Env }>();


// Regular routes for normal HTTP requests

app.get("/", (c) => c.text("Multiple Cron Trigger Example"));


// Export both the app and a scheduled function

export default {

  // The Hono app handles regular HTTP requests

  fetch: app.fetch,


  // The scheduled function handles Cron triggers

  async scheduled(

    controller: ScheduledController,

    env: Env,

    ctx: ExecutionContext,

  ) {

    // Check which cron schedule triggered this execution

    switch (controller.cron) {

      case "*/3 * * * *":

        // Every three minutes

        await updateAPI();

        break;

      case "*/10 * * * *":

        // Every ten minutes

        await updateAPI2();

        break;

      case "*/45 * * * *":

        // At minutes 0 and 45 of each hour

        await updateAPI3();

        break;

    }

    console.log("cron processed");

  },

};


```
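
The schedules matched in the `switch` statements above must also be registered in your Worker's configuration. A minimal sketch for a `wrangler.toml` file (file layout is illustrative; the cron expressions match the handlers above):

```toml
# Register the three Cron Triggers handled by the scheduled() handler
[triggers]
crons = ["*/3 * * * *", "*/10 * * * *", "*/45 * * * *"]
```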

## Test Cron Triggers using Wrangler

The recommended way of testing Cron Triggers is using Wrangler.

Cron Triggers can be tested using Wrangler by passing the `--test-scheduled` flag to [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev). This exposes a `/__scheduled` route (or `/cdn-cgi/handler/scheduled` for Python Workers) that triggers the scheduled handler when requested over HTTP. To simulate different cron patterns, pass a `cron` query parameter.

Terminal window

```

npx wrangler dev --test-scheduled


curl "http://localhost:8787/__scheduled?cron=*%2F3+*+*+*+*"


curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" # Python Workers


```


---

---
title: Stream OpenAI API Responses
description: Use the OpenAI v4 SDK to stream responses from OpenAI.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Stream OpenAI API Responses

**Last reviewed:**  over 2 years ago 

Use the OpenAI v4 SDK to stream responses from OpenAI.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/openai-sdk-streaming)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

In order to run this code, you must install the OpenAI SDK by running `npm i openai`.

Note

For analytics, caching, rate limiting, and more, you can also send requests like this through Cloudflare's [AI Gateway](https://developers.cloudflare.com/ai-gateway/usage/providers/openai/).
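
To route the same SDK calls through AI Gateway, point the client's `baseURL` at your gateway's OpenAI endpoint. A sketch of the URL shape — `<ACCOUNT_ID>` and `<GATEWAY_ID>` are placeholders for your own values, and the helper function is purely illustrative:

```typescript
// Illustrative helper: builds the AI Gateway endpoint for the OpenAI provider.
// Replace <ACCOUNT_ID> and <GATEWAY_ID> with your own account and gateway IDs.
function gatewayBaseURL(accountId: string, gatewayId: string): string {
  return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}/openai`;
}

// Pass the result as baseURL when constructing the client, e.g.:
//   const openai = new OpenAI({ apiKey: env.OPENAI_API_KEY, baseURL: gatewayBaseURL(...) });
console.log(gatewayBaseURL("<ACCOUNT_ID>", "<GATEWAY_ID>"));
```

The rest of the streaming code is unchanged; only the endpoint the SDK talks to differs.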


TypeScript

```

import OpenAI from "openai";


export default {

  async fetch(request, env, ctx): Promise<Response> {

    const openai = new OpenAI({

      apiKey: env.OPENAI_API_KEY,

    });


    // Create a TransformStream to handle streaming data

    let { readable, writable } = new TransformStream();

    let writer = writable.getWriter();

    const textEncoder = new TextEncoder();


    ctx.waitUntil(

      (async () => {

        const stream = await openai.chat.completions.create({

          model: "gpt-4o-mini",

          messages: [{ role: "user", content: "Tell me a story" }],

          stream: true,

        });


        // loop over the data as it is streamed and write to the writeable

        for await (const part of stream) {

          writer.write(

            textEncoder.encode(part.choices[0]?.delta?.content || ""),

          );

        }

        writer.close();

      })(),

    );


    // Send the readable back to the browser

    return new Response(readable);

  },

} satisfies ExportedHandler<Env>;


```

Hono

```

import { Hono } from "hono";

import { streamText } from "hono/streaming";

import OpenAI from "openai";


interface Env {

  OPENAI_API_KEY: string;

}


const app = new Hono<{ Bindings: Env }>();


app.get("*", async (c) => {

  const openai = new OpenAI({

    apiKey: c.env.OPENAI_API_KEY,

  });


  const chatStream = await openai.chat.completions.create({

    model: "gpt-4o-mini",

    messages: [{ role: "user", content: "Tell me a story" }],

    stream: true,

  });


  return streamText(c, async (stream) => {

    for await (const message of chatStream) {

      await stream.write(message.choices[0].delta.content || "");

    }

    stream.close();

  });

});


export default app;


```


---

---
title: Post JSON
description: Send a POST request with JSON data. Use to share data with external servers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Post JSON

**Last reviewed:**  about 4 years ago 

Send a POST request with JSON data. Use to share data with external servers.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/post-json)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.


JavaScript

```

export default {

  async fetch(request) {

    /**

     * Example someHost is set up to take in a JSON request

     * Replace url with the host you wish to send requests to

     * @param {string} url the URL to send the request to

     * @param {BodyInit} body the JSON data to send in the request

     */

    const someHost = "https://examples.cloudflareworkers.com/demos";

    const url = someHost + "/requests/json";

    const body = {

      results: ["default data to send"],

      errors: null,

      msg: "I sent this to the fetch",

    };


    /**

     * gatherResponse awaits and returns a response body as a string.

     * Use await gatherResponse(..) in an async function to get the response body

     * @param {Response} response

     */

    async function gatherResponse(response) {

      const { headers } = response;

      const contentType = headers.get("content-type") || "";

      if (contentType.includes("application/json")) {

        return JSON.stringify(await response.json());

      } else if (contentType.includes("application/text")) {

        return response.text();

      } else if (contentType.includes("text/html")) {

        return response.text();

      } else {

        return response.text();

      }

    }


    const init = {

      body: JSON.stringify(body),

      method: "POST",

      headers: {

        "content-type": "application/json;charset=UTF-8",

      },

    };

    const response = await fetch(url, init);

    const results = await gatherResponse(response);

    return new Response(results, init);

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    /**

     * Example someHost is set up to take in a JSON request

     * Replace url with the host you wish to send requests to

     * @param {string} url the URL to send the request to

     * @param {BodyInit} body the JSON data to send in the request

     */

    const someHost = "https://examples.cloudflareworkers.com/demos";

    const url = someHost + "/requests/json";

    const body = {

      results: ["default data to send"],

      errors: null,

      msg: "I sent this to the fetch",

    };


    /**

     * gatherResponse awaits and returns a response body as a string.

     * Use await gatherResponse(..) in an async function to get the response body

     * @param {Response} response

     */

    async function gatherResponse(response: Response) {

      const { headers } = response;

      const contentType = headers.get("content-type") || "";

      if (contentType.includes("application/json")) {

        return JSON.stringify(await response.json());

      } else if (contentType.includes("application/text")) {

        return response.text();

      } else if (contentType.includes("text/html")) {

        return response.text();

      } else {

        return response.text();

      }

    }


    const init = {

      body: JSON.stringify(body),

      method: "POST",

      headers: {

        "content-type": "application/json;charset=UTF-8",

      },

    };

    const response = await fetch(url, init);

    const results = await gatherResponse(response);

    return new Response(results, init);

  },

} satisfies ExportedHandler;


```

Python

```

import json

from workers import WorkerEntrypoint

from pyodide.ffi import to_js as _to_js

from js import Object, fetch, Response, Headers


def to_js(obj):

    return _to_js(obj, dict_converter=Object.fromEntries)


# gather_response returns both content-type & response body as a string

async def gather_response(response):

    headers = response.headers

    content_type = headers["content-type"] or ""


    if "application/json" in content_type:

        return (content_type, json.dumps(dict(await response.json())))

    return (content_type, await response.text())


class Default(WorkerEntrypoint):

    async def fetch(self, _request):

        url = "https://jsonplaceholder.typicode.com/todos/1"


        body = {

            "results": ["default data to send"],

            "errors": None,

            "msg": "I sent this to the fetch",

        }


        options = {

            "body": json.dumps(body),

            "method": "POST",

            "headers": {

                "content-type": "application/json;charset=UTF-8",

            },

        }


        response = await fetch(url, to_js(options))

        content_type, result = await gather_response(response)


        headers = Headers.new({"content-type": content_type}.items())

        return Response.new(result, headers=headers)


```

Hono

```

import { Hono } from 'hono';


const app = new Hono();


app.get('*', async (c) => {

  /**

   * Example someHost is set up to take in a JSON request

   * Replace url with the host you wish to send requests to

   */

  const someHost = "https://examples.cloudflareworkers.com/demos";

  const url = someHost + "/requests/json";

  const body = {

    results: ["default data to send"],

    errors: null,

    msg: "I sent this to the fetch",

  };


  /**

   * gatherResponse awaits and returns a response body as a string.

   * Use await gatherResponse(..) in an async function to get the response body

   */

  async function gatherResponse(response: Response) {

    const { headers } = response;

    const contentType = headers.get("content-type") || "";


    if (contentType.includes("application/json")) {

      return { contentType, result: JSON.stringify(await response.json()) };

    } else if (contentType.includes("application/text")) {

      return { contentType, result: await response.text() };

    } else if (contentType.includes("text/html")) {

      return { contentType, result: await response.text() };

    } else {

      return { contentType, result: await response.text() };

    }

  }


  const init = {

    body: JSON.stringify(body),

    method: "POST",

    headers: {

      "content-type": "application/json;charset=UTF-8",

    },

  };


  const response = await fetch(url, init);

  const { contentType, result } = await gatherResponse(response);


  return new Response(result, {

    headers: {

      "content-type": contentType,

    },

  });

});


export default app;


```


---

---
title: Using timingSafeEqual
description: Protect against timing attacks by safely comparing values using `timingSafeEqual`.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Using timingSafeEqual

**Last reviewed:**  over 2 years ago 

Protect against timing attacks by safely comparing values using `timingSafeEqual`.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/protect-against-timing-attacks)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

The [crypto.subtle.timingSafeEqual](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal) function compares two values using a constant-time algorithm. The time taken is independent of the contents of the values.

When strings are compared using the equality operator (`==` or `===`), the comparison ends at the first mismatched character. With `timingSafeEqual`, an attacker cannot use timing to discover at which point the two strings differ.

The `timingSafeEqual` function takes two `ArrayBuffer` or `TypedArray` values to compare. These buffers must be of equal length; otherwise, an exception is thrown. Note that this function is not constant-time with respect to the length of the parameters, and it does not guarantee constant time for the surrounding code. Handle secrets with care to avoid introducing timing side channels.
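
Conceptually, a constant-time comparison examines every byte regardless of where the first mismatch occurs. A simplified sketch of the idea (illustrative only — not the actual `timingSafeEqual` implementation):

```typescript
// Illustrative only: accumulate XOR differences across all bytes so the
// running time does not depend on the position of the first mismatch.
// Like timingSafeEqual, this requires equal-length inputs.
function constantTimeEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.byteLength !== b.byteLength) {
    throw new Error("inputs must be the same length");
  }
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i]; // non-zero if any byte differs; loop never exits early
  }
  return diff === 0;
}

const enc = new TextEncoder();
console.log(constantTimeEqual(enc.encode("secret"), enc.encode("secret"))); // true
console.log(constantTimeEqual(enc.encode("secret"), enc.encode("secreX"))); // false
```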

Warning

Do not return early when the input and secret have different lengths. An early return leaks the length of the secret through response timing. Instead, always perform a constant-time comparison as shown in the examples below — when lengths differ, compare the user input against itself and negate the result so the check still fails but takes the same amount of time.

In order to compare two strings, you must use the [TextEncoder](https://developers.cloudflare.com/workers/runtime-apis/encoding/#textencoder) API.


TypeScript

```

interface Environment {

  MY_SECRET_VALUE?: string;

}


export default {

  async fetch(req: Request, env: Environment) {

    if (!env.MY_SECRET_VALUE) {

      return new Response("Missing secret binding", { status: 500 });

    }


    const authToken = req.headers.get("Authorization") || "";


    const encoder = new TextEncoder();


    const userValue = encoder.encode(authToken);

    const secretValue = encoder.encode(env.MY_SECRET_VALUE);


    // Do not return early when lengths differ — that leaks the secret's

    // length through timing.  Instead, always perform a constant-time

    // comparison: when the lengths match compare directly; otherwise

    // compare the user input against itself (always true) and negate.

    const lengthsMatch = userValue.byteLength === secretValue.byteLength;

    const isEqual = lengthsMatch

      ? crypto.subtle.timingSafeEqual(userValue, secretValue)

      : !crypto.subtle.timingSafeEqual(userValue, userValue);


    if (!isEqual) {

      return new Response("Unauthorized", { status: 401 });

    }


    return new Response("Welcome!");

  },

};


```

Python

```

from workers import WorkerEntrypoint, Response

from js import TextEncoder, crypto


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        auth_token = request.headers["Authorization"] or ""

        secret = self.env.MY_SECRET_VALUE


        if secret is None:

            return Response("Missing secret binding", status=500)


        encoder = TextEncoder.new()

        user_value = encoder.encode(auth_token)

        secret_value = encoder.encode(secret)


        # Do not return early when lengths differ — that leaks the secret's

        # length through timing.  Always perform a constant-time comparison.

        if user_value.byteLength == secret_value.byteLength:

            is_equal = crypto.subtle.timingSafeEqual(user_value, secret_value)

        else:

            is_equal = not crypto.subtle.timingSafeEqual(user_value, user_value)


        if not is_equal:

            return Response("Unauthorized", status=401)


        return Response("Welcome!")


```

Hono

```

import { Hono } from 'hono';


interface Environment {

  Bindings: {

    MY_SECRET_VALUE?: string;

  }

}


const app = new Hono<Environment>();


// Middleware to handle authentication with timing-safe comparison

app.use('*', async (c, next) => {

  const secret = c.env.MY_SECRET_VALUE;


  if (!secret) {

    return c.text("Missing secret binding", 500);

  }


  const authToken = c.req.header("Authorization") || "";


  const encoder = new TextEncoder();


  const userValue = encoder.encode(authToken);

  const secretValue = encoder.encode(secret);


  // Do not return early when lengths differ — that leaks the secret's

  // length through timing.  Instead, always perform a constant-time

  // comparison: when the lengths match compare directly; otherwise

  // compare the user input against itself (always true) and negate.

  const lengthsMatch = userValue.byteLength === secretValue.byteLength;

  const isEqual = lengthsMatch

    ? crypto.subtle.timingSafeEqual(userValue, secretValue)

    : !crypto.subtle.timingSafeEqual(userValue, userValue);


  if (!isEqual) {

    return c.text("Unauthorized", 401);

  }


  // If we got here, the auth token is valid

  await next();

});


// Protected route

app.get('*', (c) => {

  return c.text("Welcome!");

});


export default app;


```


---

---
title: Read POST
description: Serve an HTML form, then read POST requests. Use also to read JSON or POST data from an incoming request.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Read POST

**Last reviewed:**  over 5 years ago 

Serve an HTML form, then read POST requests. Use also to read JSON or POST data from an incoming request.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/read-post)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {

  async fetch(request) {

    /**

     * rawHtmlResponse returns HTML inputted directly

     * into the worker script

     * @param {string} html

     */

    function rawHtmlResponse(html) {

      return new Response(html, {

        headers: {

          "content-type": "text/html;charset=UTF-8",

        },

      });

    }


    /**

     * readRequestBody reads in the incoming request body

     * Use await readRequestBody(..) in an async function to get the string

     * @param {Request} request the incoming request to read from

     */

    async function readRequestBody(request) {

      const contentType = request.headers.get("content-type") || "";

      if (contentType.includes("application/json")) {

        return JSON.stringify(await request.json());

      } else if (contentType.includes("text/plain")) {

        return request.text();

      } else if (contentType.includes("text/html")) {

        return request.text();

      } else if (contentType.includes("form")) {

        const formData = await request.formData();

        const body = {};

        for (const entry of formData.entries()) {

          body[entry[0]] = entry[1];

        }

        return JSON.stringify(body);

      } else {

        // Perhaps some other type of data was submitted in the form

        // like an image, or some other binary data.

        return "a file";

      }

    }


    // A minimal HTML form served for URLs containing "form"
    const someForm = `<!DOCTYPE html>
      <form action="/" method="post">
        <label for="message">Message:</label>
        <input id="message" name="message" type="text" />
        <button>Submit</button>
      </form>`;


    const { url } = request;

    if (url.includes("form")) {

      return rawHtmlResponse(someForm);

    }

    if (request.method === "POST") {

      const reqBody = await readRequestBody(request);

      const retBody = `The request body sent in was ${reqBody}`;

      return new Response(retBody);

    } else if (request.method === "GET") {

      return new Response("The request was a GET");

    }

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    /**

     * rawHtmlResponse returns HTML inputted directly

     * into the worker script

     * @param {string} html

     */

    function rawHtmlResponse(html) {

      return new Response(html, {

        headers: {

          "content-type": "text/html;charset=UTF-8",

        },

      });

    }


    /**

     * readRequestBody reads in the incoming request body

     * Use await readRequestBody(..) in an async function to get the string

     * @param {Request} request the incoming request to read from

     */

    async function readRequestBody(request: Request) {

      const contentType = request.headers.get("content-type") || "";

      if (contentType.includes("application/json")) {

        return JSON.stringify(await request.json());

      } else if (contentType.includes("text/plain")) {

        return request.text();

      } else if (contentType.includes("text/html")) {

        return request.text();

      } else if (contentType.includes("form")) {

        const formData = await request.formData();

        const body = {};

        for (const entry of formData.entries()) {

          body[entry[0]] = entry[1];

        }

        return JSON.stringify(body);

      } else {

        // Perhaps some other type of data was submitted in the form

        // like an image, or some other binary data.

        return "a file";

      }

    }


    // A minimal HTML form served for URLs containing "form"
    const someForm = `<!DOCTYPE html>
      <form action="/" method="post">
        <label for="message">Message:</label>
        <input id="message" name="message" type="text" />
        <button>Submit</button>
      </form>`;


    const { url } = request;

    if (url.includes("form")) {

      return rawHtmlResponse(someForm);

    }

    if (request.method === "POST") {

      const reqBody = await readRequestBody(request);

      const retBody = `The request body sent in was ${reqBody}`;

      return new Response(retBody);

    } else if (request.method === "GET") {

      return new Response("The request was a GET");

    }

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint

from js import Object, Response, Headers, JSON


async def read_request_body(request):

    headers = request.headers

    content_type = headers["content-type"] or ""


    if "application/json" in content_type:

        return JSON.stringify(await request.json())

    if "form" in content_type:

        form = await request.formData()

        data = Object.fromEntries(form.entries())

        return JSON.stringify(data)

    return await request.text()


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        def raw_html_response(html):

            headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items())

            return Response.new(html, headers=headers)


        if "form" in request.url:

            return raw_html_response("")


        if request.method == "POST":

            req_body = await read_request_body(request)

            ret_body = f"The request body sent in was {req_body}"

            return Response.new(ret_body)


        return Response.new("The request was not POST")


```

Rust

```

use serde::{Deserialize, Serialize};

use worker::*;


fn raw_html_response(html: &str) -> Result<Response> {

    Response::from_html(html)

}


#[derive(Deserialize, Serialize, Debug)]

struct Payload {

    msg: String,

}


async fn read_request_body(mut req: Request) -> String {

    let ctype = req.headers().get("content-type").unwrap().unwrap();

    match ctype.as_str() {

        "application/json" => format!("{:?}", req.json::<Payload>().await.unwrap()),

        "text/html" => req.text().await.unwrap(),

        "multipart/form-data" => format!("{:?}", req.form_data().await.unwrap()),

        _ => String::from("a file"),

    }

}


#[event(fetch)]

async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    if String::from(req.url()?).contains("form") {

        return raw_html_response("some html form");

    }


    match req.method() {

        Method::Post => {

            let req_body = read_request_body(req).await;

            Response::ok(format!("The request body sent in was {}", req_body))

        }

        _ => Response::ok(format!("The request was a {:?}", req.method())),

    }

}


```

Hono

```

import { Hono } from "hono";

import { html } from "hono/html";


const app = new Hono();


/**

 * readRequestBody reads in the incoming request body

 * @param {Request} request the incoming request to read from

 */

async function readRequestBody(request: Request): Promise<string> {

  const contentType = request.headers.get("content-type") || "";


  if (contentType.includes("application/json")) {

    const body = await request.json();

    return JSON.stringify(body);

  } else if (contentType.includes("text/plain")) {

    return request.text();

  } else if (contentType.includes("text/html")) {

    return request.text();

  } else if (contentType.includes("form")) {

    const formData = await request.formData();

    const body: Record<string, string> = {};

    for (const [key, value] of formData.entries()) {

      body[key] = value.toString();

    }

    return JSON.stringify(body);

  } else {

    // Perhaps some other type of data was submitted in the form

    // like an image, or some other binary data.

    return "a file";

  }

}


const someForm = html`<!DOCTYPE html>

  <html>

    <body>

      <form action="/" method="post">

        <div>

          <label for="message">Message:</label>

          <input id="message" name="message" type="text" />

        </div>

        <div>

          <button>Submit</button>

        </div>

      </form>

    </body>

  </html>`;


app.get("*", async (c) => {

  const url = c.req.url;


  if (url.includes("form")) {

    return c.html(someForm);

  }


  return c.text("The request was a GET");

});


app.post("*", async (c) => {

  const reqBody = await readRequestBody(c.req.raw);

  const retBody = `The request body sent in was ${reqBody}`;

  return c.text(retBody);

});


export default app;


```

Prevent potential errors when accessing request.body

The body of a [Request ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request) can only be accessed once. If you previously used `request.formData()` in the same request, you may encounter a TypeError when attempting to access `request.body`.

To avoid errors, create a clone of the Request object with `request.clone()` for each subsequent attempt to access a Request's body. Keep in mind that Workers have a [memory limit of 128 MB per Worker](https://developers.cloudflare.com/workers/platform/limits/#memory) and loading particularly large files into a Worker's memory multiple times may reach this limit. To ensure memory usage does not reach this limit, consider using [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).
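A minimal sketch of that pattern (the `handle` helper and its two reads are illustrative, not part of the example above):

```javascript
// Sketch: reading one request body twice via clone().
// Clone BEFORE the first read -- a body can only be consumed once.
async function handle(request) {
  const copy = request.clone(); // duplicate the body stream
  const logged = await copy.text(); // consumes the clone's body
  const body = await request.text(); // the original body is still readable
  return { logged, body };
}
```

Without the `clone()`, the second `text()` call would throw a `TypeError` because the first read already consumed the body stream.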


---

---
title: Redirect
description: Redirect requests from one URL to another or from one set of URLs to another set.
image: https://developers.cloudflare.com/dev-products-preview.png
---



# Redirect

**Last reviewed:**  about 4 years ago 

Redirect requests from one URL to another or from one set of URLs to another set.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/redirect)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

## Redirect all requests to one URL

JavaScript

```

export default {

  async fetch(request) {

    const destinationURL = "https://example.com";

    const statusCode = 301;

    return Response.redirect(destinationURL, statusCode);

  },

};


```

[Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwA2AKwBOAEzDhAZlkAOAIziAXCxZtgHOFxp8BIidLmKVAWABQAYXRUIAU3vYAIlADOMdO6jQ7qki08AmISKjhgBwYAIigaBwAPADoAK3do0lQoMCcIqNj45LToq1t7JwhsABU6GAcAuBgYMD4CKDtkFLgANzh3XgRYCABqYHRccAcrK0SvJBJcB1Q4cAgSAG9LEhI+uipeQIcIXgALAAoEBwBHEAd3CABKDa3tkl47e4W76HC-KgBVABKABkSAwSNEThAIDB3KpkMhEhFmg4ku9gBkXtt3lRPvcCCB3LZFmCSLJBEoiFiSJcICAEFQSIC7l5cajLjxLrwIGdFvc4m07EDgQAaEj4ulE8YOB5U7YAXxFlnlRCsGmYWh0eh4-CEYikMnkynEpTsjmcbk83l87SoASCOlI4UiMUihB0GUC2VyLuiZDA6DIJRsZoq1Vq9R2TRavEFVE67js00s62iwDgcQA+mMJjloqoCosiul5Wr1ZqQtqDHrjIazOJmFYgA)

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const destinationURL = "https://example.com";

    const statusCode = 301;

    return Response.redirect(destinationURL, statusCode);

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):

    def fetch(self, request):

        destinationURL = "https://example.com"

        statusCode = 301

        return Response.redirect(destinationURL, statusCode)


```

Rust

```

use worker::*;


#[event(fetch)]

async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    let destination_url = Url::parse("https://example.com")?;

    let status_code = 301;

    Response::redirect_with_status(destination_url, status_code)

}


```

Hono

```

import { Hono } from "hono";


const app = new Hono();


app.all("*", (c) => {

  const destinationURL = "https://example.com";

  const statusCode = 301;

  return c.redirect(destinationURL, statusCode);

});


export default app;


```

## Redirect requests from one domain to another

JavaScript

```

export default {

  async fetch(request) {

    const base = "https://example.com";

    const statusCode = 301;


    const url = new URL(request.url);

    const { pathname, search } = url;


    const destinationURL = `${base}${pathname}${search}`;

    console.log(destinationURL);


    return Response.redirect(destinationURL, statusCode);

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const base = "https://example.com";

    const statusCode = 301;


    const url = new URL(request.url);

    const { pathname, search } = url;


    const destinationURL = `${base}${pathname}${search}`;

    console.log(destinationURL);


    return Response.redirect(destinationURL, statusCode);

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response

from urllib.parse import urlparse


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        base = "https://example.com"

        statusCode = 301


        url = urlparse(request.url)


        destinationURL = f'{base}{url.path}'

        if url.query:

            destinationURL += f'?{url.query}'

        print(destinationURL)


        return Response.redirect(destinationURL, statusCode)


```

Rust

```

use worker::*;


#[event(fetch)]

async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    let mut base = Url::parse("https://example.com")?;

    let status_code = 301;


    let url = req.url()?;


    base.set_path(url.path());

    base.set_query(url.query());


    console_log!("{:?}", base.to_string());


    Response::redirect_with_status(base, status_code)

}


```

Hono

```

import { Hono } from "hono";


const app = new Hono();


app.all("*", (c) => {

  const base = "https://example.com";

  const statusCode = 301;


  const { pathname, search } = new URL(c.req.url);


  const destinationURL = `${base}${pathname}${search}`;

  console.log(destinationURL);


  return c.redirect(destinationURL, statusCode);

});


export default app;


```
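
The page description also mentions redirecting one set of URLs to another set. A map-based variant, sketched here with hypothetical paths and destination URLs, handles that case in the same style as the JavaScript examples above:

```javascript
// Sketch: redirect a fixed set of paths to new destinations.
// The paths and destination URLs below are hypothetical.
const worker = {
  async fetch(request) {
    const redirectMap = new Map([
      ["/about", "https://example.com/about-us"],
      ["/blog", "https://blog.example.com/"],
    ]);

    const { pathname } = new URL(request.url);
    const destination = redirectMap.get(pathname);

    if (destination) {
      return Response.redirect(destination, 301);
    }

    // No mapping matched: respond with a 404 instead of redirecting
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Requests to a mapped path receive a `301` with a `Location` header pointing at the mapped URL; unmapped paths fall through to the `404`.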


---

---
title: Respond with another site
description: Respond to the Worker request with the response from another website (example.com in this example).
image: https://developers.cloudflare.com/dev-products-preview.png
---



# Respond with another site

**Last reviewed:**  over 5 years ago 

Respond to the Worker request with the response from another website (example.com in this example).

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/respond-with-another-site)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {

  async fetch(request) {

    function MethodNotAllowed(request) {

      return new Response(`Method ${request.method} not allowed.`, {

        status: 405,

        headers: {

          Allow: "GET",

        },

      });

    }

    // Only GET requests work with this proxy.

    if (request.method !== "GET") return MethodNotAllowed(request);

    return fetch(`https://example.com`);

  },

};


```

[Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwA2AEwBOYVPEBGABwBWQQC4WLNsA5wuNPgJESpw2YsEBYAFABhdFQgBTO9gAiUAM4x0bqNFvKSGngExCRUcMD2DABEUDT2AB4AdABWblGkqFBgjuGRMXFJqVGWNnaOENgAKnQw9v5wMDBgfARQtsjJcABucG68CLAQANTA6Ljg9paWCZ5IJLj2qHDgECQA3hYkJL10VLwB9hC8ABYAFAj2AI4g9m4QAJTrm1sB1Ly+VCQAsofHYwBy6AgAEEwGB0AB3ey4c5XG53R4bF4vC4QEAIT5UewQkgAJVuniobnspwABj8IH9cCQACRrC7XW4QRIRSljAC+oSB2zBkOhiVJABonsjkXcCCA3P4ACyCBSC56ikjHexwBYIKUipUvUHgiH+KIAcQAopUogrtSR2RbRez7kRFVbHchkCQAPJUMB0EgmyokBnwiBuEgQzAAaxDPmOJEp7hIMAQ6HidESjqgqBIsMZdxZvzGJAAhAwGCQjaaoo9UejPhSqYCQbyoTCA0z7Y6qxiDkczqTjhAIDApS6EuEmvZErx0MBSW2ttaLOyiJY1MwNFodDx+EIxJJpPIlCVbA4nK4PF4fG0qP5AlpSGEItFWWrgukAlkcg+omRwWRitYj+UVQ1HU2yNM0vCtO0qS2FMFhrFEwBwLEAD6ozjNkUTKPkCyFGk7LLiua7BBuejboYe6mMwlhAA)

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    function MethodNotAllowed(request) {

      return new Response(`Method ${request.method} not allowed.`, {

        status: 405,

        headers: {

          Allow: "GET",

        },

      });

    }

    // Only GET requests work with this proxy.

    if (request.method !== "GET") return MethodNotAllowed(request);

    return fetch(`https://example.com`);

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response, fetch


class Default(WorkerEntrypoint):

    def fetch(self, request):

        def method_not_allowed(request):

            msg = f'Method {request.method} not allowed.'

            headers = {"Allow": "GET"}

            return Response(msg, headers=headers, status=405)


        # Only GET requests work with this proxy.

        if request.method != "GET":

            return method_not_allowed(request)


        return fetch("https://example.com")


```


---

---
title: Return small HTML page
description: Deliver an HTML page from an HTML string directly inside the Worker script.
image: https://developers.cloudflare.com/dev-products-preview.png
---



# Return small HTML page

**Last reviewed:**  about 2 years ago 

Deliver an HTML page from an HTML string directly inside the Worker script.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/return-html)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {

  async fetch(request) {

    const html = `<!DOCTYPE html>

    <body>

      <h1>Hello World</h1>

      <p>This markup was generated by a Cloudflare Worker.</p>

    </body>`;


    return new Response(html, {

      headers: {

        "content-type": "text/html;charset=UTF-8",

      },

    });

  },

};


```

[Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwAmAIwBOccIAsADgCso6fIBcLFm2Ac4XGnwEiJUuYuUBYAFABhdFQgBTO9gAiUAM4x0bqNFsqSmngExCRUcMD2DABEUDT2AB4AdABWblGkqFBgjuGRMXFJqVGWNnaOENgAKnQw9v5wMDBgfARQtsjJcABucG68CLAQANTA6Ljg9paWCZ5IJLj2qHDgECQA3hYkJL10VLwB9hC8ABYAFAj2AI4g9m4QAJTrm1skvLZ3JMcQwGAkDCQAAwAPABCZwAeSslQAmgAFACin2+YAAfM8tkCKLg6GiXi8gcdRCiABL2MBgdAkADqmDAuCByEJuLxJCBMBRlWO7hIwEQAGsQDASAB3XokADmjnsCAI9lw5Do2xIVgpIFwqDAiHs1MwfOliQZ7PRrOQWJxAKIFmNFwgIAQVFC9mFJAASrdPFQ3PZTl8fgAaJ4sz72OALBBufwbINbKJvMpOCA1exRfxRBzxFC+sBEE6IL0QBgAVUqADFsLIon7jVsAL5VvE1+6W2tVmtESzqZiabS6Hj8IRiSQyBRKeQlWwOJyuDxeHxtKj+QLaUhhCLRCKEbTpAJZHJrqJkClkYrWCflKpJ+qNZq8VrtVK2KYWNZRXmxAD6o3G2RT+QWhTSGsO07btgl7fQByMYdTHkZhLCAA)

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const html = `<!DOCTYPE html>

    <body>

      <h1>Hello World</h1>

      <p>This markup was generated by a Cloudflare Worker.</p>

    </body>`;


    return new Response(html, {

      headers: {

        "content-type": "text/html;charset=UTF-8",

      },

    });

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        html = """<!DOCTYPE html>

        <body>

          <h1>Hello World</h1>

          <p>This markup was generated by a Cloudflare Worker.</p>

        </body>"""


        headers = {"content-type": "text/html;charset=UTF-8"}

        return Response(html, headers=headers)


```

Rust

```

use worker::*;


#[event(fetch)]

async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    let html = r#"<!DOCTYPE html>

    <body>

      <h1>Hello World</h1>

      <p>This markup was generated by a Cloudflare Worker.</p>

    </body>

    "#;

    Response::from_html(html)

}


```

Hono

```

import { Hono } from "hono";

import { html } from "hono/html";


const app = new Hono();


app.get("*", (c) => {

  const doc = html`<!DOCTYPE html>

    <body>

      <h1>Hello World</h1>

      <p>This markup was generated by a Cloudflare Worker with Hono.</p>

    </body>`;


  return c.html(doc);

});


export default app;


```


---

---
title: Return JSON
description: Return JSON directly from a Worker script, useful for building APIs and middleware.
image: https://developers.cloudflare.com/dev-products-preview.png
---



# Return JSON

**Last reviewed:**  about 2 years ago 

Return JSON directly from a Worker script, useful for building APIs and middleware.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/return-json)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {

  async fetch(request) {

    const data = {

      hello: "world",

    };


    return Response.json(data);

  },

};


```

[Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwBOAKwBGUQDZhwgOwAORQC4WLNsA5wuNPgJETpsxYoCwAKADC6KhACmt7ABEoAZxjpXUaDeUkNeATEJFRwwHYMAERQNHYAHgB0AFaukaSoUGAOYRHRsYkpkRbWtg4Q2AAqdDB2fnAwMGB8BFA2yElwAG5wrrwIsBAA1MDouOB2FhbxHkgkuHaocOAQJADe5iQkPXRUvP52ELwAFgAUCHYAjiB2rhAAlGsbmyS8NrdzQSQMj8-PR3ZgMDoPyRADumDAuEiABonpsAL5EcxwkjnCAgBBUEgAJRuHiorjsyVcNhOWjuSIRsMRFjUzA0Wh0PH4QjEkhk8iUCmKNnsjhc7k83laVD8AS0pFC4Si4UIWjS-ky2WlkTIQLIRSsvLKlWqtS2DSavBabRSNkm5lWkWAcBiAH0RmMspFlHl5gVUvDaXSGUEmXpWYYOSYFMwLEA)

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const data = {

      hello: "world",

    };


    return Response.json(data);

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response

import json


class Default(WorkerEntrypoint):

    def fetch(self, request):

        data = json.dumps({"hello": "world"})

        headers = {"content-type": "application/json"}

        return Response(data, headers=headers)


```

Rust

```

use serde::{Deserialize, Serialize};

use worker::*;


#[derive(Deserialize, Serialize, Debug)]

struct Json {

    hello: String,

}


#[event(fetch)]

async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    let data = Json {

        hello: String::from("world"),

    };

    Response::from_json(&data)

}


```

Hono

```

import { Hono } from "hono";


const app = new Hono();


app.get("*", (c) => {

  const data = {

    hello: "world",

  };


  return c.json(data);

});


export default app;


```


---

---
title: Rewrite links
description: Rewrite URL links in HTML using the HTMLRewriter. This is useful for JAMstack websites.
image: https://developers.cloudflare.com/dev-products-preview.png
---



# Rewrite links

**Last reviewed:**  about 4 years ago 

Rewrite URL links in HTML using the HTMLRewriter. This is useful for JAMstack websites.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/rewrite-links)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

JavaScript

```

export default {

  async fetch(request) {

    const OLD_URL = "developer.mozilla.org";

    const NEW_URL = "mynewdomain.com";


    class AttributeRewriter {

      constructor(attributeName) {

        this.attributeName = attributeName;

      }

      element(element) {

        const attribute = element.getAttribute(this.attributeName);

        if (attribute) {

          element.setAttribute(

            this.attributeName,

            attribute.replace(OLD_URL, NEW_URL),

          );

        }

      }

    }


    const rewriter = new HTMLRewriter()

      .on("a", new AttributeRewriter("href"))

      .on("img", new AttributeRewriter("src"));


    const res = await fetch(request);

    const contentType = res.headers.get("Content-Type") || "";


    // If the response is HTML, it can be transformed with

    // HTMLRewriter -- otherwise, it should pass through

    if (contentType.startsWith("text/html")) {

      return rewriter.transform(res);

    } else {

      return res;

    }

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const OLD_URL = "developer.mozilla.org";

    const NEW_URL = "mynewdomain.com";


    class AttributeRewriter {

      constructor(attributeName) {

        this.attributeName = attributeName;

      }

      element(element) {

        const attribute = element.getAttribute(this.attributeName);

        if (attribute) {

          element.setAttribute(

            this.attributeName,

            attribute.replace(OLD_URL, NEW_URL),

          );

        }

      }

    }


    const rewriter = new HTMLRewriter()

      .on("a", new AttributeRewriter("href"))

      .on("img", new AttributeRewriter("src"));


    const res = await fetch(request);

    const contentType = res.headers.get("Content-Type") || "";


    // If the response is HTML, it can be transformed with

    // HTMLRewriter -- otherwise, it should pass through

    if (contentType.startsWith("text/html")) {

      return rewriter.transform(res);

    } else {

      return res;

    }

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint

from pyodide.ffi import create_proxy

from js import HTMLRewriter, fetch


class AttributeRewriter:

    old_url = "developer.mozilla.org"

    new_url = "mynewdomain.com"


    def __init__(self, attr_name):

        self.attr_name = attr_name


    def element(self, element):

        attr = element.getAttribute(self.attr_name)

        if attr:

            element.setAttribute(

                self.attr_name, attr.replace(self.old_url, self.new_url)

            )


href = create_proxy(AttributeRewriter("href"))

src = create_proxy(AttributeRewriter("src"))

rewriter = HTMLRewriter.new().on("a", href).on("img", src)


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        res = await fetch(request)

        content_type = res.headers["Content-Type"]


        # If the response is HTML, it can be transformed with

        # HTMLRewriter -- otherwise, it should pass through

        if content_type.startswith("text/html"):

            return rewriter.transform(res)

        return res


```

Hono

```

import { Hono } from 'hono';

import { html } from 'hono/html';


const app = new Hono();


app.get('*', async (c) => {

  const OLD_URL = "developer.mozilla.org";

  const NEW_URL = "mynewdomain.com";


  class AttributeRewriter {

    attributeName: string;


    constructor(attributeName: string) {

      this.attributeName = attributeName;

    }


    element(element: Element) {

      const attribute = element.getAttribute(this.attributeName);

      if (attribute) {

        element.setAttribute(

          this.attributeName,

          attribute.replace(OLD_URL, NEW_URL)

        );

      }

    }

  }


  // Make a fetch request using the original request

  const res = await fetch(c.req.raw);

  const contentType = res.headers.get("Content-Type") || "";


  // If the response is HTML, transform it with HTMLRewriter

  if (contentType.startsWith("text/html")) {

    const rewriter = new HTMLRewriter()

      .on("a", new AttributeRewriter("href"))

      .on("img", new AttributeRewriter("src"));


    return new Response(rewriter.transform(res).body, {

      headers: res.headers

    });

  } else {

    // Pass through the response as is

    return res;

  }

});


export default app;


```


---

---
title: Set security headers
description: Set common security headers (X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, Permissions-Policy, Referrer-Policy, Strict-Transport-Security, Content-Security-Policy).
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Set security headers

**Last reviewed:**  about 4 years ago 

Set common security headers (X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, Permissions-Policy, Referrer-Policy, Strict-Transport-Security, Content-Security-Policy).

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/security-headers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

To inject CSP nonces into inline `<script>` tags using HTMLRewriter, refer to this [CSP nonce example](https://developers.cloudflare.com/workers/examples/spa-shell/#add-csp-nonces).
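Outside the Workers runtime, the merge-and-strip policy these examples implement can be sketched in plain Python. The header names and values mirror the examples below; `apply_security_policy` is a hypothetical helper that operates on a plain dict rather than a `Headers` object.

```python
# Defaults to add and headers to strip, mirroring the Worker examples below.
DEFAULT_SECURITY_HEADERS = {
    "X-XSS-Protection": "0",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
}

BLOCKED_HEADERS = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"]


def apply_security_policy(headers: dict) -> dict:
    """Return a copy of `headers` with defaults added and blocked headers removed."""
    merged = dict(headers)
    merged.update(DEFAULT_SECURITY_HEADERS)
    for name in BLOCKED_HEADERS:
        merged.pop(name, None)
    return merged


origin_headers = {"Content-Type": "text/html", "X-Powered-By": "Express"}
print(apply_security_policy(origin_headers))
```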


JavaScript

```

export default {

  async fetch(request) {

    const DEFAULT_SECURITY_HEADERS = {

      /*

    Secure your application with Content-Security-Policy headers.

    Enabling these headers will permit content from a trusted domain and all its subdomains.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy

    "Content-Security-Policy": "default-src 'self' example.com *.example.com",

    */

      /*

    You can also set Strict-Transport-Security headers.

    These are not automatically set because your website might get added to Chrome's HSTS preload list.

    Here's the code if you want to apply it:

    "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload",

    */

      /*

    Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:

    "Permissions-Policy": "interest-cohort=()",

    */

      /*

    X-XSS-Protection header prevents a page from loading if an XSS attack is detected.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection

    */

      "X-XSS-Protection": "0",

      /*

    X-Frame-Options header prevents click-jacking attacks.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options

    */

      "X-Frame-Options": "DENY",

      /*

    X-Content-Type-Options header prevents MIME-sniffing.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options

    */

      "X-Content-Type-Options": "nosniff",

      "Referrer-Policy": "strict-origin-when-cross-origin",

      "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";',

      "Cross-Origin-Opener-Policy": 'same-site; report-to="default";',

      "Cross-Origin-Resource-Policy": "same-site",

    };

    const BLOCKED_HEADERS = [

      "Public-Key-Pins",

      "X-Powered-By",

      "X-AspNet-Version",

    ];


    let response = await fetch(request);

    let newHeaders = new Headers(response.headers);


    const tlsVersion = request.cf.tlsVersion;

    console.log(tlsVersion);

    // Pass through non-HTML responses without adding security headers:

    if (

      newHeaders.has("Content-Type") &&

      !newHeaders.get("Content-Type").includes("text/html")

    ) {

      return new Response(response.body, {

        status: response.status,

        statusText: response.statusText,

        headers: newHeaders,

      });

    }


    Object.keys(DEFAULT_SECURITY_HEADERS).forEach((name) => {

      newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]);

    });


    BLOCKED_HEADERS.forEach((name) => {

      newHeaders.delete(name);

    });


    if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {

      return new Response("You need to use TLS version 1.2 or higher.", {

        status: 400,

      });

    } else {

      return new Response(response.body, {

        status: response.status,

        statusText: response.statusText,

        headers: newHeaders,

      });

    }

  },

};


```

TypeScript

```

export default {

  async fetch(request): Promise<Response> {

    const DEFAULT_SECURITY_HEADERS = {

      /*

    Secure your application with Content-Security-Policy headers.

    Enabling these headers will permit content from a trusted domain and all its subdomains.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy

    "Content-Security-Policy": "default-src 'self' example.com *.example.com",

    */

      /*

    You can also set Strict-Transport-Security headers.

    These are not automatically set because your website might get added to Chrome's HSTS preload list.

    Here's the code if you want to apply it:

    "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload",

    */

      /*

    Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:

    "Permissions-Policy": "interest-cohort=()",

    */

      /*

    X-XSS-Protection header prevents a page from loading if an XSS attack is detected.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection

    */

      "X-XSS-Protection": "0",

      /*

    X-Frame-Options header prevents click-jacking attacks.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options

    */

      "X-Frame-Options": "DENY",

      /*

    X-Content-Type-Options header prevents MIME-sniffing.

    @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options

    */

      "X-Content-Type-Options": "nosniff",

      "Referrer-Policy": "strict-origin-when-cross-origin",

      "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";',

      "Cross-Origin-Opener-Policy": 'same-site; report-to="default";',

      "Cross-Origin-Resource-Policy": "same-site",

    };

    const BLOCKED_HEADERS = [

      "Public-Key-Pins",

      "X-Powered-By",

      "X-AspNet-Version",

    ];


    let response = await fetch(request);

    let newHeaders = new Headers(response.headers);


    const tlsVersion = request.cf.tlsVersion;

    console.log(tlsVersion);

    // Pass through non-HTML responses without adding security headers:

    if (

      newHeaders.has("Content-Type") &&

      !newHeaders.get("Content-Type").includes("text/html")

    ) {

      return new Response(response.body, {

        status: response.status,

        statusText: response.statusText,

        headers: newHeaders,

      });

    }


    Object.keys(DEFAULT_SECURITY_HEADERS).forEach((name) => {

      newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]);

    });


    BLOCKED_HEADERS.forEach((name) => {

      newHeaders.delete(name);

    });


    if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") {

      return new Response("You need to use TLS version 1.2 or higher.", {

        status: 400,

      });

    } else {

      return new Response(response.body, {

        status: response.status,

        statusText: response.statusText,

        headers: newHeaders,

      });

    }

  },

} satisfies ExportedHandler;


```

Python

```

from workers import WorkerEntrypoint, Response, fetch


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        default_security_headers = {

            # Secure your application with Content-Security-Policy headers.

            #Enabling these headers will permit content from a trusted domain and all its subdomains.

            #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy

            "Content-Security-Policy": "default-src 'self' example.com *.example.com",

            #You can also set Strict-Transport-Security headers.

            #These are not automatically set because your website might get added to Chrome's HSTS preload list.

            #Here's the code if you want to apply it:

            "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload",

            #Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:

            "Permissions-Policy": "interest-cohort=()",

            #X-XSS-Protection header prevents a page from loading if an XSS attack is detected.

            #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection

            "X-XSS-Protection": "0",

            #X-Frame-Options header prevents click-jacking attacks.

            #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options

            "X-Frame-Options": "DENY",

            #X-Content-Type-Options header prevents MIME-sniffing.

            #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options

            "X-Content-Type-Options": "nosniff",

            "Referrer-Policy": "strict-origin-when-cross-origin",

            "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";',

            "Cross-Origin-Opener-Policy": 'same-site; report-to="default";',

            "Cross-Origin-Resource-Policy": "same-site",

        }

        blocked_headers = ["Public-Key-Pins", "X-Powered-By" ,"X-AspNet-Version"]


        res = await fetch(request)

        new_headers = res.headers


        # Pass through non-HTML responses without adding security headers

        if "text/html" not in (new_headers.get("Content-Type") or ""):

            return Response(res.body, status=res.status, statusText=res.statusText, headers=new_headers)


        for name in default_security_headers:

            new_headers[name] = default_security_headers[name]


        for name in blocked_headers:

            del new_headers["name"]


        tls = request.cf.tlsVersion


        if not tls in ("TLSv1.2", "TLSv1.3"):

            return Response("You need to use TLS version 1.2 or higher.", status=400)

        return Response(res.body, status=res.status, statusText=res.statusText, headers=new_headers)


```

Rust

```

use std::collections::HashMap;

use worker::*;


#[event(fetch)]

async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {

    let default_security_headers = HashMap::from([

        //Secure your application with Content-Security-Policy headers.

        //Enabling these headers will permit content from a trusted domain and all its subdomains.

        //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy

        (

            "Content-Security-Policy",

            "default-src 'self' example.com *.example.com",

        ),

        //You can also set Strict-Transport-Security headers.

        //These are not automatically set because your website might get added to Chrome's HSTS preload list.

        //Here's the code if you want to apply it:

        (

            "Strict-Transport-Security",

            "max-age=63072000; includeSubDomains; preload",

        ),

        //Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below:

        ("Permissions-Policy", "interest-cohort=()"),

        //X-XSS-Protection header prevents a page from loading if an XSS attack is detected.

        //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection

        ("X-XSS-Protection", "0"),

        //X-Frame-Options header prevents click-jacking attacks.

        //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options

        ("X-Frame-Options", "DENY"),

        //X-Content-Type-Options header prevents MIME-sniffing.

        //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options

        ("X-Content-Type-Options", "nosniff"),

        ("Referrer-Policy", "strict-origin-when-cross-origin"),

        (

            "Cross-Origin-Embedder-Policy",

            "require-corp; report-to='default';",

        ),

        (

            "Cross-Origin-Opener-Policy",

            "same-site; report-to='default';",

        ),

        ("Cross-Origin-Resource-Policy", "same-site"),

    ]);

    let blocked_headers = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"];

    let tls = req.cf().unwrap().tls_version();

    let res = Fetch::Request(req).send().await?;

    let mut new_headers = res.headers().clone();


    // Pass through non-HTML responses without adding security headers

    let content_type = new_headers.get("Content-Type")?.unwrap_or_default();

    if !content_type.contains("text/html") {

        return Ok(Response::from_body(res.body().clone())?

            .with_headers(new_headers)

            .with_status(res.status_code()));

    }

    for (k, v) in default_security_headers {

        new_headers.set(k, v)?;

    }


    for k in blocked_headers {

        new_headers.delete(k)?;

    }


    if !vec!["TLSv1.2", "TLSv1.3"].contains(&tls.as_str()) {

        return Response::error("You need to use TLS version 1.2 or higher.", 400);

    }

    Ok(Response::from_body(res.body().clone())?

        .with_headers(new_headers)

        .with_status(res.status_code()))


}


```

Hono

```

import { Hono } from "hono";

import { secureHeaders } from "hono/secure-headers";


const app = new Hono();

app.use(secureHeaders());


// Handle all other requests by passing through to origin

app.all("*", async (c) => {

  return fetch(c.req.raw);

});


export default app;


```


---

---
title: Sign requests
description: Verify a signed request using the HMAC and SHA-256 algorithms or return a 403.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Sign requests

**Last reviewed:**  about 2 years ago 

Verify a signed request using the HMAC and SHA-256 algorithms or return a 403.

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/signing-requests)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

Note

This example Worker makes use of the [Node.js Buffer API](https://developers.cloudflare.com/workers/runtime-apis/nodejs/buffer/), which is available as part of the Workers runtime [Node.js compatibility mode](https://developers.cloudflare.com/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/#get-started).
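Enabling the flag in your Wrangler configuration might look like the following minimal fragment; the `name`, `main`, and `compatibility_date` values are placeholders to adapt to your project.

```jsonc
{
  "name": "signing-requests",
  "main": "src/index.js",
  "compatibility_date": "2024-09-23",
  "compatibility_flags": ["nodejs_compat"]
}
```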

You can both verify and generate signed requests from within a Worker using the [Web Crypto APIs ↗](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/subtle).

The following Worker will:

* For request URLs beginning with `/generate/`, replace `/generate/` with `/`, sign the resulting path with its timestamp, and return the full, signed URL in the response body.
* For all other request URLs, verify the signed URL and allow the request through.
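The sign-then-verify flow can be sketched outside the Workers runtime with Python's stdlib `hmac` module. `SECRET` stands in for the Worker's `SECRET_DATA` binding, and `sign_path`/`verify_path` are hypothetical helper names; the token format (`timestamp-base64mac`) and 60-second expiry mirror the Worker examples below.

```python
import base64
import hashlib
import hmac
import time

SECRET = b"my secret symmetric key"  # stand-in for the SECRET_DATA binding
EXPIRY = 60  # seconds, matching the Worker examples


def sign_path(pathname: str, timestamp: int) -> str:
    """Return the value of the `verify` query parameter for a path."""
    mac = hmac.new(SECRET, f"{pathname}{timestamp}".encode(), hashlib.sha256).digest()
    return f"{timestamp}-{base64.b64encode(mac).decode()}"


def verify_path(pathname: str, token: str, now: float) -> bool:
    """Recompute the MAC for the path and check the token has not expired."""
    timestamp, _, _received = token.partition("-")
    expected = sign_path(pathname, int(timestamp))
    # Constant-time comparison, analogous to crypto.subtle.verify() in the Worker
    if not hmac.compare_digest(expected, token):
        return False
    return now <= int(timestamp) + EXPIRY


token = sign_path("/assets/logo.png", int(time.time()))
print(verify_path("/assets/logo.png", token, time.time()))  # → True
```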


JavaScript

```

import { Buffer } from "node:buffer";


const encoder = new TextEncoder();


// How long an HMAC token should be valid for, in seconds

const EXPIRY = 60;


export default {

  /**

   *

   * @param {Request} request

   * @param {{SECRET_DATA: string}} env

   * @returns

   */

  async fetch(request, env) {

    // You will need some secret data to use as a symmetric key. This should be

    // attached to your Worker as an encrypted secret.

    // Refer to https://developers.cloudflare.com/workers/configuration/secrets/

    const secretKeyData = encoder.encode(

      env.SECRET_DATA ?? "my secret symmetric key",

    );


    // Import your secret as a CryptoKey for both 'sign' and 'verify' operations

    const key = await crypto.subtle.importKey(

      "raw",

      secretKeyData,

      { name: "HMAC", hash: "SHA-256" },

      false,

      ["sign", "verify"],

    );


    const url = new URL(request.url);


    // This is a demonstration Worker that allows unauthenticated access to /generate

    // In a real application you would want to make sure that

    // users could only generate signed URLs when authenticated

    if (url.pathname.startsWith("/generate/")) {

      url.pathname = url.pathname.replace("/generate/", "/");


      const timestamp = Math.floor(Date.now() / 1000);


      // This contains all the data about the request that you want to be able to verify

      // Here we only sign the timestamp and the pathname, but often you will want to

      // include more data (for instance, the URL hostname or query parameters)

      const dataToAuthenticate = `${url.pathname}${timestamp}`;


      const mac = await crypto.subtle.sign(

        "HMAC",

        key,

        encoder.encode(dataToAuthenticate),

      );


      // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/

      // for more details on using Node.js APIs in Workers

      const base64Mac = Buffer.from(mac).toString("base64");


      url.searchParams.set("verify", `${timestamp}-${base64Mac}`);


      return new Response(`${url.pathname}${url.search}`);

      // Verify all non /generate requests

    } else {

      // Make sure you have the minimum necessary query parameters.

      if (!url.searchParams.has("verify")) {

        return new Response("Missing query parameter", { status: 403 });

      }


      const [timestamp, hmac] = url.searchParams.get("verify").split("-");


      const assertedTimestamp = Number(timestamp);


      const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`;


      const receivedMac = Buffer.from(hmac, "base64");


      // Use crypto.subtle.verify() to guard against timing attacks. Since HMACs use

      // symmetric keys, you could implement this by calling crypto.subtle.sign() and

      // then doing a string comparison -- this is insecure, as string comparisons

      // bail out on the first mismatch, which leaks information to potential

      // attackers.

      const verified = await crypto.subtle.verify(

        "HMAC",

        key,

        receivedMac,

        encoder.encode(dataToAuthenticate),

      );


      if (!verified) {

        return new Response("Invalid MAC", { status: 403 });

      }


      // Signed requests expire after one minute. Note that this value should depend on your specific use case

      if (Date.now() / 1000 > assertedTimestamp + EXPIRY) {

        return new Response(

          `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`,

          { status: 403 },

        );

      }

    }


    return fetch(new URL(url.pathname, "https://example.com"), request);

  },

};


```

TypeScript

```

import { Buffer } from "node:buffer";


const encoder = new TextEncoder();


// How long an HMAC token should be valid for, in seconds

const EXPIRY = 60;


interface Env {

  SECRET_DATA: string;

}

export default {

  async fetch(request, env): Promise<Response> {

    // You will need some secret data to use as a symmetric key. This should be

    // attached to your Worker as an encrypted secret.

    // Refer to https://developers.cloudflare.com/workers/configuration/secrets/

    const secretKeyData = encoder.encode(

      env.SECRET_DATA ?? "my secret symmetric key",

    );


    // Import your secret as a CryptoKey for both 'sign' and 'verify' operations

    const key = await crypto.subtle.importKey(

      "raw",

      secretKeyData,

      { name: "HMAC", hash: "SHA-256" },

      false,

      ["sign", "verify"],

    );


    const url = new URL(request.url);


    // This is a demonstration Worker that allows unauthenticated access to /generate

    // In a real application you would want to make sure that

    // users could only generate signed URLs when authenticated

    if (url.pathname.startsWith("/generate/")) {

      url.pathname = url.pathname.replace("/generate/", "/");


      const timestamp = Math.floor(Date.now() / 1000);


      // This contains all the data about the request that you want to be able to verify

      // Here we only sign the timestamp and the pathname, but often you will want to

      // include more data (for instance, the URL hostname or query parameters)

      const dataToAuthenticate = `${url.pathname}${timestamp}`;


      const mac = await crypto.subtle.sign(

        "HMAC",

        key,

        encoder.encode(dataToAuthenticate),

      );


      // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/

      // for more details on using Node.js APIs in Workers

      const base64Mac = Buffer.from(mac).toString("base64");


      url.searchParams.set("verify", `${timestamp}-${base64Mac}`);


      return new Response(`${url.pathname}${url.search}`);

      // Verify all non /generate requests

    } else {

      // Make sure you have the minimum necessary query parameters.

      if (!url.searchParams.has("verify")) {

        return new Response("Missing query parameter", { status: 403 });

      }


      const [timestamp, hmac] = url.searchParams.get("verify").split("-");


      const assertedTimestamp = Number(timestamp);


      const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`;


      const receivedMac = Buffer.from(hmac, "base64");


      // Use crypto.subtle.verify() to guard against timing attacks. Since HMACs use

      // symmetric keys, you could implement this by calling crypto.subtle.sign() and

      // then doing a string comparison -- this is insecure, as string comparisons

      // bail out on the first mismatch, which leaks information to potential

      // attackers.

      const verified = await crypto.subtle.verify(

        "HMAC",

        key,

        receivedMac,

        encoder.encode(dataToAuthenticate),

      );


      if (!verified) {

        return new Response("Invalid MAC", { status: 403 });

      }


      // Signed requests expire after one minute. Note that this value should depend on your specific use case

      if (Date.now() / 1000 > assertedTimestamp + EXPIRY) {

        return new Response(

          `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`,

          { status: 403 },

        );

      }

    }


    return fetch(new URL(url.pathname, "https://example.com"), request);

  },

} satisfies ExportedHandler<Env>;


```

Hono

```

import { Buffer } from "node:buffer";

import { Hono } from "hono";

import { proxy } from "hono/proxy";


const encoder = new TextEncoder();


// How long an HMAC token should be valid for, in seconds

const EXPIRY = 60;


interface Env {

  SECRET_DATA: string;

}


const app = new Hono<{ Bindings: Env }>();


// Handle URL generation requests

app.get("/generate/*", async (c) => {

  const env = c.env;


  // You will need some secret data to use as a symmetric key

  const secretKeyData = encoder.encode(

    env.SECRET_DATA ?? "my secret symmetric key",

  );


  // Import the secret as a CryptoKey for both 'sign' and 'verify' operations

  const key = await crypto.subtle.importKey(

    "raw",

    secretKeyData,

    { name: "HMAC", hash: "SHA-256" },

    false,

    ["sign", "verify"],

  );


  // Replace "/generate/" prefix with "/"

  let pathname = c.req.path.replace("/generate/", "/");


  const timestamp = Math.floor(Date.now() / 1000);


  // Data to authenticate: pathname + timestamp

  const dataToAuthenticate = `${pathname}${timestamp}`;


  // Sign the data

  const mac = await crypto.subtle.sign(

    "HMAC",

    key,

    encoder.encode(dataToAuthenticate),

  );


  // Convert the signature to base64

  const base64Mac = Buffer.from(mac).toString("base64");


  // Add verification parameter to URL

  url.searchParams.set("verify", `${timestamp}-${base64Mac}`);


  return c.text(`${pathname}${url.search}`);

});


// Handle verification for all other requests

app.all("*", async (c) => {

  const env = c.env;



  // You will need some secret data to use as a symmetric key

  const secretKeyData = encoder.encode(

    env.SECRET_DATA ?? "my secret symmetric key",

  );


  // Import the secret as a CryptoKey for both 'sign' and 'verify' operations

  const key = await crypto.subtle.importKey(

    "raw",

    secretKeyData,

    { name: "HMAC", hash: "SHA-256" },

    false,

    ["sign", "verify"],

  );


  // Make sure the request has the verification parameter

  if (!c.req.query("verify")) {

    return c.text("Missing query parameter", 403);

  }


  // Extract timestamp and signature

  const [timestamp, hmac] = c.req.query("verify")!.split("-");

  const assertedTimestamp = Number(timestamp);


  // Recreate the data that should have been signed

  const dataToAuthenticate = `${c.req.path}${assertedTimestamp}`;


  // Convert base64 signature back to ArrayBuffer

  const receivedMac = Buffer.from(hmac, "base64");


  // Verify the signature

  const verified = await crypto.subtle.verify(

    "HMAC",

    key,

    receivedMac,

    encoder.encode(dataToAuthenticate),

  );


  // If verification fails, return 403

  if (!verified) {

    return c.text("Invalid MAC", 403);

  }


  // Check if the signature has expired

  if (Date.now() / 1000 > assertedTimestamp + EXPIRY) {

    return c.text(

      `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`,

      403,

    );

  }


  // If verification passes, proxy the request to example.com

  return proxy(`https://example.com${c.req.path}`, { ...c.req });

});


export default app;


```

Python

```

from workers import WorkerEntrypoint

from pyodide.ffi import to_js as _to_js

from js import Response, URL, TextEncoder, Buffer, fetch, Object, crypto, Date


def to_js(x):

    return _to_js(x, dict_converter=Object.fromEntries)


encoder = TextEncoder.new()


# How long an HMAC token should be valid for, in seconds

EXPIRY = 60


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        # Get the secret key

        secret_key_data = encoder.encode(getattr(self.env, "SECRET_DATA", None) or "my secret symmetric key")


        # Import the secret as a CryptoKey for both 'sign' and 'verify' operations

        key = await crypto.subtle.importKey(

            "raw",

            secret_key_data,

            to_js({"name": "HMAC", "hash": "SHA-256"}),

            False,

            ["sign", "verify"]

        )


        url = URL.new(request.url)


        if url.pathname.startswith("/generate/"):

            url.pathname = url.pathname.replace("/generate/", "/", 1)


            timestamp = int(Date.now() / 1000)


            # Data to authenticate

            data_to_authenticate = f"{url.pathname}{timestamp}"


            # Sign the data

            mac = await crypto.subtle.sign(

                "HMAC",

                key,

                encoder.encode(data_to_authenticate)

            )


            # Convert to base64

            base64_mac = Buffer.from(mac).toString("base64")


            # Set the verification parameter

            url.searchParams.set("verify", f"{timestamp}-{base64_mac}")


            return Response.new(f"{url.pathname}{url.search}")

        else:

            # Verify the request

            if not "verify" in url.searchParams:

                return Response.new("Missing query parameter", status=403)


            verify_param = url.searchParams.get("verify")

            timestamp, hmac = verify_param.split("-")


            asserted_timestamp = int(timestamp)


            data_to_authenticate = f"{url.pathname}{asserted_timestamp}"


            received_mac = Buffer.from(hmac, "base64")


            # Verify the signature

            verified = await crypto.subtle.verify(

                "HMAC",

                key,

                received_mac,

                encoder.encode(data_to_authenticate)

            )


            if not verified:

                return Response.new("Invalid MAC", status=403)


            # Check expiration

            if Date.now() / 1000 > asserted_timestamp + EXPIRY:

                expiry_date = Date.new((asserted_timestamp + EXPIRY) * 1000)

                return Response.new(f"URL expired at {expiry_date}", status=403)


        # Proxy to example.com if verification passes

        return fetch(URL.new(f"https://example.com{url.pathname}"), request)


```

## Validate signed requests using the WAF

The provided example code for signing requests is compatible with the [is\_timed\_hmac\_valid\_v0()](https://developers.cloudflare.com/ruleset-engine/rules-language/functions/#hmac-validation) Rules language function. This means that you can verify requests signed by the Worker script using a [custom rule](https://developers.cloudflare.com/waf/custom-rules/use-cases/configure-token-authentication/#option-2-configure-using-custom-rules).
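For orientation, a custom rule expression validating these URLs might look like the following sketch. The key, the validity window (60 seconds, matching `EXPIRY` in the Worker above), and the separator length are assumptions you must align with your own Worker and with the linked function documentation:

```

is_timed_hmac_valid_v0("my secret symmetric key", http.request.uri, 60, http.request.timestamp.sec, 8)

```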


---

---
title: Single Page App (SPA) shell with bootstrap data
description: Use HTMLRewriter to inject prefetched bootstrap data into an SPA shell, eliminating client-side data fetching on initial load. Works with Workers Static Assets or an externally hosted SPA.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Single Page App (SPA) shell with bootstrap data

**Last reviewed:**  about 1 month ago 

Use HTMLRewriter to inject bootstrap data into an SPA shell — whether the shell is served from Workers Static Assets or fetched from an external origin.

This example uses a Worker and [HTMLRewriter](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/) to inject prefetched API data into a single-page application (SPA) shell. The Worker fetches bootstrap data in parallel with the HTML shell and streams the result to the browser, so the SPA has everything it needs before its JavaScript runs.

Two variants are shown:

1. **Static Assets** — The SPA is deployed using [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/)
2. **External origin** — The SPA is hosted outside Cloudflare, and the Worker sits in front of it as a reverse proxy, improving performance

Both variants use the same HTMLRewriter injection technique and the same client-side consumption pattern. Choose the one that matches your deployment.

This pattern works with any SPA framework — React, Vue, Svelte, or others. For framework-specific deployment guides, refer to [Web applications](https://developers.cloudflare.com/workers/framework-guides/web-apps/).

---

## Option 1: Single Page App (SPA) built entirely on Workers

Use this variant when your SPA build output is deployed as part of your Worker using [Static Assets](https://developers.cloudflare.com/workers/static-assets/).

### Configure static assets

Set `not_found_handling` to `"single-page-application"` so that every route returns `index.html`. Use `run_worker_first` to route all requests through your Worker except hashed assets under `/assets/*`, which are served directly.

wrangler.jsonc

```

{

  "name": "my-spa",

  "main": "src/worker.ts",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "compatibility_flags": ["nodejs_compat"],

  "assets": {

    "directory": "./dist",

    "binding": "ASSETS",

    "not_found_handling": "single-page-application",

    "run_worker_first": ["/*", "!/assets/*"],

  },

}


```

wrangler.toml

```

name = "my-spa"

main = "src/worker.ts"

# Set this to today's date

compatibility_date = "2026-04-03"

compatibility_flags = [ "nodejs_compat" ]


[assets]

directory = "./dist"

binding = "ASSETS"

not_found_handling = "single-page-application"

run_worker_first = [ "/*", "!/assets/*" ]


```

For more details on these options, refer to [Static Assets routing](https://developers.cloudflare.com/workers/static-assets/routing/) and the [run\_worker\_first reference](https://developers.cloudflare.com/workers/static-assets/binding/#run%5Fworker%5Ffirst).

### Inject bootstrap data with HTMLRewriter

The Worker starts fetching API data immediately, then fetches the SPA shell from static assets. HTMLRewriter streams the `<head>` to the browser right away. When the `<body>` handler runs, it awaits the API response and prepends a `<script>` tag containing the serialized data.

If the API call fails, the shell still loads and the SPA falls back to client-side data fetching.


JavaScript

```

// Env is generated by `wrangler types` — run it whenever you change your config.

// Do not manually define Env — it drifts from your actual bindings.


export default {

  async fetch(request, env) {

    const url = new URL(request.url);


    // Serve root-level static files (favicon.ico, robots.txt) directly.

    // Hashed assets under /assets/* skip the Worker entirely via run_worker_first.

    if (url.pathname.match(/\.\w+$/) && !url.pathname.endsWith(".html")) {

      return env.ASSETS.fetch(request);

    }


    // Start fetching bootstrap data immediately — do not await yet.

    const dataPromise = fetchBootstrapData(env, url.pathname, request.headers);


    // Fetch the SPA shell from static assets (co-located, sub-millisecond).

    const shell = await env.ASSETS.fetch(

      new Request(new URL("/index.html", request.url)),

    );


    // Use HTMLRewriter to stream the shell and inject data into <body>.

    return new HTMLRewriter()

      .on("body", {

        async element(el) {

          const data = await dataPromise;

          if (data) {

            el.prepend(

              `<script>window.__BOOTSTRAP_DATA__=${JSON.stringify(data)}</script>`,

              { html: true },

            );

          }

        },

      })

      .transform(shell);

  },

};


async function fetchBootstrapData(env, pathname, headers) {

  try {

    const res = await fetch(`${env.API_BASE_URL}/api/bootstrap`, {

      headers: {

        Cookie: headers.get("Cookie") || "",

        "X-Request-Path": pathname,

      },

    });

    if (!res.ok) return null;

    return await res.json();

  } catch {

    // If the API is down, the shell still loads and the SPA

    // falls back to client-side data fetching.

    return null;

  }

}


```

TypeScript

```

// Env is generated by `wrangler types` — run it whenever you change your config.

// Do not manually define Env — it drifts from your actual bindings.


export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    const url = new URL(request.url);


    // Serve root-level static files (favicon.ico, robots.txt) directly.

    // Hashed assets under /assets/* skip the Worker entirely via run_worker_first.

    if (url.pathname.match(/\.\w+$/) && !url.pathname.endsWith(".html")) {

      return env.ASSETS.fetch(request);

    }


    // Start fetching bootstrap data immediately — do not await yet.

    const dataPromise = fetchBootstrapData(env, url.pathname, request.headers);


    // Fetch the SPA shell from static assets (co-located, sub-millisecond).

    const shell = await env.ASSETS.fetch(

      new Request(new URL("/index.html", request.url)),

    );


    // Use HTMLRewriter to stream the shell and inject data into <body>.

    return new HTMLRewriter()

      .on("body", {

        async element(el) {

          const data = await dataPromise;

          if (data) {

            el.prepend(

              `<script>window.__BOOTSTRAP_DATA__=${JSON.stringify(data)}</script>`,

              { html: true },

            );

          }

        },

      })

      .transform(shell);

  },

} satisfies ExportedHandler<Env>;


async function fetchBootstrapData(

  env: Env,

  pathname: string,

  headers: Headers,

): Promise<unknown | null> {

  try {

    const res = await fetch(`${env.API_BASE_URL}/api/bootstrap`, {

      headers: {

        Cookie: headers.get("Cookie") || "",

        "X-Request-Path": pathname,

      },

    });

    if (!res.ok) return null;

    return await res.json();

  } catch {

    // If the API is down, the shell still loads and the SPA

    // falls back to client-side data fetching.

    return null;

  }

}


```

---

## Option 2: SPA hosted on an external origin

Use this variant when your HTML, CSS, and JavaScript are deployed outside Cloudflare. The Worker fetches the SPA shell from the external origin, uses HTMLRewriter to inject bootstrap data, and streams the modified response to the browser.

### Configure the Worker

Because the SPA is not in Workers Static Assets, you do not need an `assets` block. Instead, store the external origin URL as an environment variable. Attach the Worker to your domain with a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) or a [Route](https://developers.cloudflare.com/workers/configuration/routing/routes/).

wrangler.jsonc

```

{

  "name": "my-spa-proxy",

  "main": "src/worker.ts",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "compatibility_flags": ["nodejs_compat"],

  "vars": {

    "SPA_ORIGIN": "https://my-spa.example-hosting.com",

    "API_BASE_URL": "https://api.example.com",

  },

}


```

wrangler.toml

```

name = "my-spa-proxy"

main = "src/worker.ts"

# Set this to today's date

compatibility_date = "2026-04-03"

compatibility_flags = [ "nodejs_compat" ]


[vars]

SPA_ORIGIN = "https://my-spa.example-hosting.com"

API_BASE_URL = "https://api.example.com"


```

### Inject bootstrap data with HTMLRewriter

The Worker fetches both the SPA shell and API data in parallel. When the SPA origin responds, HTMLRewriter streams the HTML while injecting bootstrap data into `<body>`. Static assets (CSS, JS, images) are passed through to the external origin without modification.


JavaScript

```

// Env is generated by `wrangler types` — run it whenever you change your config.

// Do not manually define Env — it drifts from your actual bindings.


export default {

  async fetch(request, env) {

    const url = new URL(request.url);


    // Pass static asset requests through to the external origin unmodified.

    if (url.pathname.match(/\.\w+$/) && !url.pathname.endsWith(".html")) {

      return fetch(new Request(`${env.SPA_ORIGIN}${url.pathname}`, request));

    }


    // Start fetching bootstrap data immediately — do not await yet.

    const dataPromise = fetchBootstrapData(env, url.pathname, request.headers);


    // Fetch the SPA shell from the external origin.

    // SPA routers serve index.html for all routes.

    const shell = await fetch(`${env.SPA_ORIGIN}/index.html`);


    if (!shell.ok) {

      return new Response("Origin returned an error", { status: 502 });

    }


    // Use HTMLRewriter to stream the shell and inject data into <body>.

    return new HTMLRewriter()

      .on("body", {

        async element(el) {

          const data = await dataPromise;

          if (data) {

            el.prepend(

              `<script>window.__BOOTSTRAP_DATA__=${JSON.stringify(data)}</script>`,

              { html: true },

            );

          }

        },

      })

      .transform(shell);

  },

};


async function fetchBootstrapData(env, pathname, headers) {

  try {

    const res = await fetch(`${env.API_BASE_URL}/api/bootstrap`, {

      headers: {

        Cookie: headers.get("Cookie") || "",

        "X-Request-Path": pathname,

      },

    });

    if (!res.ok) return null;

    return await res.json();

  } catch {

    // If the API is down, the shell still loads and the SPA

    // falls back to client-side data fetching.

    return null;

  }

}


```

TypeScript

```

// Env is generated by `wrangler types` — run it whenever you change your config.

// Do not manually define Env — it drifts from your actual bindings.


export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    const url = new URL(request.url);


    // Pass static asset requests through to the external origin unmodified.

    if (url.pathname.match(/\.\w+$/) && !url.pathname.endsWith(".html")) {

      return fetch(new Request(`${env.SPA_ORIGIN}${url.pathname}`, request));

    }


    // Start fetching bootstrap data immediately — do not await yet.

    const dataPromise = fetchBootstrapData(env, url.pathname, request.headers);


    // Fetch the SPA shell from the external origin.

    // SPA routers serve index.html for all routes.

    const shell = await fetch(`${env.SPA_ORIGIN}/index.html`);


    if (!shell.ok) {

      return new Response("Origin returned an error", { status: 502 });

    }


    // Use HTMLRewriter to stream the shell and inject data into <body>.

    return new HTMLRewriter()

      .on("body", {

        async element(el) {

          const data = await dataPromise;

          if (data) {

            el.prepend(

              `<script>window.__BOOTSTRAP_DATA__=${JSON.stringify(data)}</script>`,

              { html: true },

            );

          }

        },

      })

      .transform(shell);

  },

} satisfies ExportedHandler<Env>;


async function fetchBootstrapData(

  env: Env,

  pathname: string,

  headers: Headers,

): Promise<unknown | null> {

  try {

    const res = await fetch(`${env.API_BASE_URL}/api/bootstrap`, {

      headers: {

        Cookie: headers.get("Cookie") || "",

        "X-Request-Path": pathname,

      },

    });

    if (!res.ok) return null;

    return await res.json();

  } catch {

    // If the API is down, the shell still loads and the SPA

    // falls back to client-side data fetching.

    return null;

  }

}


```

## Consume prefetched data in your SPA

On the client, read `window.__BOOTSTRAP_DATA__` before making any API calls. If the data exists, use it directly. Otherwise, fall back to a normal fetch.

src/App.tsx

```

// React example — works the same way in Vue, Svelte, or any other framework.

import { useEffect, useState } from "react";


function App() {

  const [data, setData] = useState(window.__BOOTSTRAP_DATA__ || null);

  const [loading, setLoading] = useState(!data);


  useEffect(() => {

    if (data) return; // Already have prefetched data — skip the API call.


    fetch("/api/bootstrap")

      .then((res) => res.json())

      .then((result) => {

        setData(result);

        setLoading(false);

      });

  }, []);


  if (loading) return <LoadingSpinner />;

  return <Dashboard data={data} />;

}


```

Add a type declaration so TypeScript recognizes the global property:

global.d.ts

```

declare global {

  interface Window {

    __BOOTSTRAP_DATA__?: unknown;

  }

}


```

## Additional injection techniques

You can chain multiple HTMLRewriter handlers to inject more than bootstrap data.

### Set meta tags

Inject Open Graph or other `<meta>` tags based on the request path. This gives social-media crawlers correct previews without a full server-side rendering framework.

TypeScript

```

new HTMLRewriter()

  .on("head", {

    element(el) {

      el.append(`<meta property="og:title" content="${title}" />`, {

        html: true,

      });

    },

  })

  .transform(shell);


```

### Add CSP nonces

Generate a nonce per request and inject it into both the Content-Security-Policy header and each inline `<script>` tag.

TypeScript

```

const nonce = crypto.randomUUID();


const response = new HTMLRewriter()

  .on("script", {

    element(el) {

      el.setAttribute("nonce", nonce);

    },

  })

  .transform(shell);


response.headers.set(

  "Content-Security-Policy",

  `script-src 'nonce-${nonce}' 'strict-dynamic';`,

);


return response;


```

### Inject user configuration

Expose feature flags or environment-specific settings to the SPA without an extra API round-trip.

TypeScript

```

new HTMLRewriter()

  .on("body", {

    element(el) {

      el.prepend(

        `<script>window.__APP_CONFIG__=${JSON.stringify({

          apiBase: env.API_BASE_URL,

          featureFlags: { darkMode: true },

        })}</script>`,

        { html: true },

      );

    },

  })

  .transform(shell);


```

## Related resources

* [HTMLRewriter](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/) — Streaming HTML parser and transformer.
* [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) — Serve static files alongside your Worker.
* [Static Assets routing](https://developers.cloudflare.com/workers/static-assets/routing/) — Configure `run_worker_first` and `not_found_handling`.
* [Static Assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) — Reference for the `ASSETS` binding and routing options.
* [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) — Attach a Worker to a domain as the origin.
* [Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) — Run a Worker in front of an existing origin server.
* [Workers Best Practices](https://developers.cloudflare.com/workers/best-practices/workers-best-practices/) — Code patterns and configuration guidance for Workers.


---

---
title: Stream large JSON
description: Parse and transform large JSON request and response bodies using streaming.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Stream large JSON

**Last reviewed:**  4 months ago 

Parse and transform large JSON request and response bodies using streaming.

Use the [Streams API](https://developers.cloudflare.com/workers/runtime-apis/streams/) to process JSON payloads that would exceed a Worker's 128 MB memory limit if fully buffered. Streaming lets you parse and transform JSON incrementally as it arrives, so your Worker can begin producing output before the full payload has been received and can handle multi-gigabyte bodies within its memory limits.
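As a minimal, library-free sketch of the idea, the loop below consumes a `ReadableStream` chunk by chunk, keeping only a running total in memory rather than the whole body (the `countBytes` name is illustrative):

```javascript

// Count the bytes of a streamed body incrementally with the Web Streams API.
// Only one chunk is held in memory at a time, so the input can be arbitrarily
// large. Workers and Node 18+ both expose ReadableStream as a global.
async function countBytes(stream) {
  const reader = stream.getReader();
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength; // process the chunk, then let it be collected
  }
  return total;
}

```

The JSON examples below follow the same read loop, with the parser emitting parsed values instead of raw bytes.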

The [@streamparser/json-whatwg ↗](https://www.npmjs.com/package/@streamparser/json-whatwg) library provides a streaming JSON parser compatible with the Web Streams API.

Install the dependency:

Terminal window

```

npm install @streamparser/json-whatwg


```

## Stream a JSON request body

This example parses a large JSON request body and extracts specific fields without loading the entire payload into memory.


TypeScript

```

import { JSONParser } from "@streamparser/json-whatwg";


export default {

  async fetch(request): Promise<Response> {

    const parser = new JSONParser({ paths: ["$.users.*"] });


    const users: string[] = [];


    // Pipe the request body through the JSON parser

    const reader = request.body

      .pipeThrough(parser)

      .getReader();


    // Process matching JSON values as they stream in

    while (true) {

      const { done, value } = await reader.read();

      if (done) break;

      // Extract only the name field from each user object

      if (value.value?.name) {

        users.push(value.value.name);

      }

    }


    return Response.json({ userNames: users });

  },

} satisfies ExportedHandler;


```

JavaScript

```

import { JSONParser } from "@streamparser/json-whatwg";


export default {

  async fetch(request) {

    const parser = new JSONParser({ paths: ["$.users.*"] });


    const users = [];


    // Pipe the request body through the JSON parser

    const reader = request.body

      .pipeThrough(parser)

      .getReader();


    // Process matching JSON values as they stream in

    while (true) {

      const { done, value } = await reader.read();

      if (done) break;

      // Extract only the name field from each user object

      if (value.value?.name) {

        users.push(value.value.name);

      }

    }


    return Response.json({ userNames: users });

  },

};


```

## Stream and transform a JSON response

This example fetches a large JSON response from an upstream API, transforms specific fields, and streams the modified response to the client.


TypeScript

```

import { JSONParser } from "@streamparser/json-whatwg";


export default {

  async fetch(request): Promise<Response> {

    const response = await fetch("https://api.example.com/large-dataset.json");


    const parser = new JSONParser({ paths: ["$.items.*"] });


    const { readable, writable } = new TransformStream();

    const writer = writable.getWriter();

    const encoder = new TextEncoder();


    // Process the upstream response in the background

    (async () => {

      const reader = response.body

        .pipeThrough(parser)

        .getReader();


      await writer.write(encoder.encode('{"processedItems":['));

      let first = true;


      while (true) {

        const { done, value } = await reader.read();

        if (done) break;


        // Transform each item as it streams through

        const item = value.value;

        const transformed = {

          id: item.id,

          title: item.title.toUpperCase(),

          processed: true,

        };


        if (!first) await writer.write(encoder.encode(","));

        first = false;

        await writer.write(encoder.encode(JSON.stringify(transformed)));

      }


      await writer.write(encoder.encode("]}"));

      await writer.close();

    })();


    return new Response(readable, {

      headers: { "Content-Type": "application/json" },

    });

  },

} satisfies ExportedHandler;


```

JavaScript

```

import { JSONParser } from "@streamparser/json-whatwg";


export default {

  async fetch(request) {

    const response = await fetch("https://api.example.com/large-dataset.json");


    const parser = new JSONParser({ paths: ["$.items.*"] });


    const { readable, writable } = new TransformStream();

    const writer = writable.getWriter();

    const encoder = new TextEncoder();


    // Process the upstream response in the background

    (async () => {

      const reader = response.body

        .pipeThrough(parser)

        .getReader();


      await writer.write(encoder.encode('{"processedItems":['));

      let first = true;


      while (true) {

        const { done, value } = await reader.read();

        if (done) break;


        // Transform each item as it streams through

        const item = value.value;

        const transformed = {

          id: item.id,

          title: item.title.toUpperCase(),

          processed: true,

        };


        if (!first) await writer.write(encoder.encode(","));

        first = false;

        await writer.write(encoder.encode(JSON.stringify(transformed)));

      }


      await writer.write(encoder.encode("]}"));

      await writer.close();

    })();


    return new Response(readable, {

      headers: { "Content-Type": "application/json" },

    });

  },

};


```

## Related resources

* [Streams API](https://developers.cloudflare.com/workers/runtime-apis/streams/) - Learn more about streaming in Workers
* [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/) - Create custom stream transformations
* [@streamparser/json-whatwg ↗](https://www.npmjs.com/package/@streamparser/json-whatwg) - Streaming JSON parser documentation


---

---
title: Turnstile with Workers
description: Inject [Turnstile](/turnstile/) implicitly into HTML elements using the HTMLRewriter runtime API.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Turnstile with Workers

**Last reviewed:**  about 3 years ago 

Inject [Turnstile](https://developers.cloudflare.com/turnstile/) implicitly into HTML elements using the HTMLRewriter runtime API.


JavaScript

```

export default {

  async fetch(request, env) {

    const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret)

    const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in

    let res = await fetch(request);


    // Instantiate the API to run on specific elements, for example, `head`, `div`

    let newRes = new HTMLRewriter()


      // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API

      .on("head", {

        element(element) {

          // In this case, you are using `append` to add a new script to the `head` element

          element.append(

            `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`,

            { html: true },

          );

        },

      })

      .on("div", {

        element(element) {

          // Add a Turnstile widget if an element with the id TURNSTILE_ATTR_NAME is found

          if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) {

            element.append(

              `<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`,

              { html: true },

            );

          }

        },

      })

      .transform(res);

    return newRes;

  },

};


```

TypeScript

```

export default {

  async fetch(request, env): Promise<Response> {

    const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret)

    const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in


    let res = await fetch(request);


    // Instantiate the API to run on specific elements, for example, `head`, `div`

    let newRes = new HTMLRewriter()


      // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API

      .on("head", {

        element(element) {

          // In this case, you are using `append` to add a new script to the `head` element

          element.append(

            `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`,

            { html: true },

          );

        },

      })

      .on("div", {

        element(element) {

          // Add a Turnstile widget if an element with the id TURNSTILE_ATTR_NAME is found

          if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) {

            element.append(

              `<div class="cf-turnstile" data-sitekey="${SITE_KEY}" data-theme="light"></div>`,

              { html: true },

            );

          }

        },

      })

      .transform(res);

    return newRes;

  },

} satisfies ExportedHandler<Env>;


```

Hono

```

import { Hono } from "hono";


interface Env {

  SITE_KEY: string;

  SECRET_KEY: string;

  TURNSTILE_ATTR_NAME?: string;

}


const app = new Hono<{ Bindings: Env }>();


// Middleware to inject Turnstile widget

app.use("*", async (c, next) => {

  const SITE_KEY = c.env.SITE_KEY; // The Turnstile Sitekey from environment

  const TURNSTILE_ATTR_NAME = c.env.TURNSTILE_ATTR_NAME || "your_id_to_replace"; // The target element ID


  // Process the request through the original endpoint

  await next();


  // Only process HTML responses

  const contentType = c.res.headers.get("content-type");

  if (!contentType || !contentType.includes("text/html")) {

    return;

  }


  // Clone the response to make it modifiable

  const originalResponse = c.res;

  const responseBody = await originalResponse.text();


  // Create an HTMLRewriter instance to modify the HTML

  const rewriter = new HTMLRewriter()

    // Add the Turnstile script to the head

    .on("head", {

      element(element) {

        element.append(

          `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`,

          { html: true },

        );

      },

    })

    // Add the Turnstile widget to the target div

    .on("div", {

      element(element) {

        if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) {

          element.append(

            `<div class="cf-turnstile" data-sitekey="${SITE_KEY}" data-theme="light"></div>`,

            { html: true },

          );

        }

      },

    });


  // Create a new response with the same properties as the original

  const modifiedResponse = new Response(responseBody, {

    status: originalResponse.status,

    statusText: originalResponse.statusText,

    headers: originalResponse.headers,

  });


  // Transform the response using HTMLRewriter

  c.res = rewriter.transform(modifiedResponse);

});


// Handle POST requests for form submission with Turnstile validation

app.post("*", async (c) => {

  const formData = await c.req.formData();

  const token = formData.get("cf-turnstile-response");

  const ip = c.req.header("CF-Connecting-IP");


  // If no token, return an error

  if (!token) {

    return c.text("Missing Turnstile token", 400);

  }


  // Prepare verification data

  const verifyFormData = new FormData();

  verifyFormData.append("secret", c.env.SECRET_KEY || "");

  verifyFormData.append("response", token.toString());

  if (ip) verifyFormData.append("remoteip", ip);


  // Verify the token with Turnstile API

  const verifyResult = await fetch(

    "https://challenges.cloudflare.com/turnstile/v0/siteverify",

    {

      method: "POST",

      body: verifyFormData,

    },

  );


  const outcome = await verifyResult.json<{ success: boolean }>();


  // If verification fails, return an error

  if (!outcome.success) {

    return c.text("The provided Turnstile token was not valid!", 401);

  }


  // If verification succeeds, proceed with the original request

  // You would typically handle the form submission logic here


  // For this example, we'll just send a success response

  return c.text("Form submission successful!");

});


// Default handler for GET requests

app.get("*", async (c) => {

  // Fetch the original content (you'd replace this with your actual content source)

  return await fetch(c.req.raw);

});


export default app;


```

Python

```

from workers import WorkerEntrypoint

from pyodide.ffi import create_proxy

from js import HTMLRewriter, fetch


class Default(WorkerEntrypoint):

    async def fetch(self, request):

        site_key = self.env.SITE_KEY

        attr_name = self.env.TURNSTILE_ATTR_NAME

        res = await fetch(request)


        class Append:

            def element(self, element):

                s = '<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>'

                element.append(s, {"html": True})


        class AppendOnID:

            def __init__(self, name):

                self.name = name

            def element(self, element):

                # You are using the `getAttribute` method here to retrieve the `id` or `class` of an element

                if element.getAttribute("id") == self.name:

                    div = f'<div class="cf-turnstile" data-sitekey="{site_key}" data-theme="light"></div>'

                    element.append(div, { "html": True })


        # Instantiate the API to run on specific elements, for example, `head`, `div`

        head = create_proxy(Append())

        div = create_proxy(AppendOnID(attr_name))

        new_res = HTMLRewriter.new().on("head", head).on("div", div).transform(res)


        return new_res


```

Note

This is only half the implementation for Turnstile. The corresponding token that is a result of a widget being rendered also needs to be verified using the [Siteverify API](https://developers.cloudflare.com/turnstile/get-started/server-side-validation/). Refer to the example below for one such implementation.

JavaScript

```

async function handlePost(request, env) {

    const body = await request.formData();

    // Turnstile injects a token in `cf-turnstile-response`.

    const token = body.get('cf-turnstile-response');

    const ip = request.headers.get('CF-Connecting-IP');


    // Validate the token by calling the `/siteverify` API.

    let formData = new FormData();


    // `secret_key` here is the Turnstile Secret key, which should be set using Wrangler secrets

    formData.append('secret', env.SECRET_KEY);

    formData.append('response', token);

    formData.append('remoteip', ip); //This is optional.


    const url = 'https://challenges.cloudflare.com/turnstile/v0/siteverify';

    const result = await fetch(url, {

        body: formData,

        method: 'POST',

    });


    const outcome = await result.json();


    if (!outcome.success) {

        return new Response('The provided Turnstile token was not valid!', { status: 401 });

    }

    // The Turnstile token was successfully validated. Proceed with your application logic.

    // Validate login, redirect user, etc.


  // Clone the original request with a new body

    const newRequest = new Request(request, {

        body: request.body, // Reuse the body

        method: request.method,

        headers: request.headers

    });


    return await fetch(newRequest);

}


export default {

  async fetch(request, env) {

    const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret)

    const TURNSTILE_ATTR_NAME = 'your_id_to_replace'; // The id of the element to put a Turnstile widget in


    // Handle POST submissions before fetching the origin, so the request body is not consumed twice
    if (request.method === 'POST') {

      return handlePost(request, env);

    }


    let res = await fetch(request);


    // Instantiate the API to run on specific elements, for example, `head`, `div`

    let newRes = new HTMLRewriter()

      // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API

      .on('head', {

        element(element) {


          // In this case, you are using `append` to add a new script to the `head` element

          element.append(`<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>`, { html: true });

        },

      })

      .on('div', {

        element(element) {

          // You are using the `getAttribute` method here to retrieve the `id` or `class` of an element

          if (element.getAttribute('id') === TURNSTILE_ATTR_NAME) {

            element.append(`<div class="cf-turnstile" data-sitekey="${SITE_KEY}" data-theme="light"></div>`, { html: true });

          }

        },

      })

      .transform(res);

    return newRes;

  }

}


```

Prevent potential errors when accessing request.body

The body of a [Request ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request) can only be accessed once. If you previously used `request.formData()` in the same request, you may encounter a TypeError when attempting to access `request.body`.

To avoid errors, create a clone of the Request object with `request.clone()` for each subsequent attempt to access a Request's body. Keep in mind that Workers have a [memory limit of 128 MB per Worker](https://developers.cloudflare.com/workers/platform/limits/#memory) and loading particularly large files into a Worker's memory multiple times may reach this limit. To ensure memory usage does not reach this limit, consider using [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).
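As a minimal sketch of this pattern (using the standard Fetch API `Request` class; `readBodyTwice` is a hypothetical helper for illustration):

```javascript
// Read a request body twice by calling clone() BEFORE the first read.
// Cloning after the body has been consumed throws a TypeError.
async function readBodyTwice(request) {
  const copy = request.clone(); // must happen before the body is consumed
  const first = await request.text(); // consumes the original body
  const second = await copy.text(); // the clone still has an unread body
  return [first, second];
}
```

The same applies to `request.formData()` and `request.json()`: each reads the body stream to completion, so any later read must come from a clone made beforehand.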


---

---
title: Using the WebSockets API
description: Use the WebSockets API to communicate in real time with your Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Using the WebSockets API

**Last reviewed:**  almost 5 years ago 

Use the WebSockets API to communicate in real time with your Cloudflare Workers.

WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. In this guide, you will learn the basics of WebSockets on Cloudflare Workers, both from the perspective of writing WebSocket servers in your Workers functions, as well as connecting to and working with those WebSocket servers as a client.

WebSockets are open connections sustained between the client and the origin server. Inside a WebSocket connection, the client and the origin can pass data back and forth without having to reestablish sessions. This makes exchanging data within a WebSocket connection fast. WebSockets are often used for real-time applications such as live chat and gaming.

Note

WebSockets utilize an event-based system for receiving and sending messages, much like the Workers runtime model of responding to events.

Note

If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, you will need clients to send messages to a single-point-of-coordination. Durable Objects provide a single-point-of-coordination for Cloudflare Workers, and are often used in parallel with WebSockets to persist state over multiple clients and connections. In this case, refer to [Durable Objects](https://developers.cloudflare.com/durable-objects/) to get started, and prefer using the Durable Objects' extended [WebSockets API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/).

## Write a WebSocket Server

WebSocket servers in Cloudflare Workers allow you to receive messages from a client in real time. This guide will show you how to set up a WebSocket server in Workers.

A client can make a WebSocket request in the browser by instantiating a new instance of `WebSocket`, passing in the URL for your Workers function:

JavaScript

```

// In client-side JavaScript, connect to your Workers function using WebSockets:

const websocket = new WebSocket(

  "wss://example-websocket.signalnerve.workers.dev",

);


```

Note

For more details about creating and working with WebSockets in the client, refer to [Writing a WebSocket client](#write-a-websocket-client).

When an incoming WebSocket request reaches the Workers function, it will contain an `Upgrade` header, set to the string value `websocket`. Check for this header before continuing to instantiate a WebSocket:

* [  JavaScript ](#tab-panel-7385)
* [  Rust ](#tab-panel-7386)

JavaScript

```

async function handleRequest(request) {

  const upgradeHeader = request.headers.get('Upgrade');

  if (!upgradeHeader || upgradeHeader !== 'websocket') {

    return new Response('Expected Upgrade: websocket', { status: 426 });

  }

}


```

```

use worker::*;


#[event(fetch)]

async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {

    let upgrade_header = match req.headers().get("Upgrade") {

        Some(h) => h.to_str().unwrap(),

        None => "",

    };

    if upgrade_header != "websocket" {

        return worker::Response::error("Expected Upgrade: websocket", 426);

    }


    // Upgrade the connection here (shown in the next example).

    todo!()

}


```

After you have appropriately checked for the `Upgrade` header, you can create a new instance of `WebSocketPair`, which contains server and client WebSockets. One of these WebSockets should be handled by the Workers function and the other should be returned as part of a `Response` with the [101 status code ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/101), indicating the request is switching protocols:

* [  JavaScript ](#tab-panel-7387)
* [  Rust ](#tab-panel-7388)

JavaScript

```

async function handleRequest(request) {

  const upgradeHeader = request.headers.get('Upgrade');

  if (!upgradeHeader || upgradeHeader !== 'websocket') {

    return new Response('Expected Upgrade: websocket', { status: 426 });

  }


  const webSocketPair = new WebSocketPair();

  const client = webSocketPair[0],

    server = webSocketPair[1];


  return new Response(null, {

    status: 101,

    webSocket: client,

  });

}


```

```

use worker::*;


#[event(fetch)]

async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {

    let upgrade_header = match req.headers().get("Upgrade") {

        Some(h) => h.to_str().unwrap(),

        None => "",

    };

    if upgrade_header != "websocket" {

        return worker::Response::error("Expected Upgrade: websocket", 426);

    }


    let ws = WebSocketPair::new()?;

    let client = ws.client;

    let server = ws.server;

    server.accept()?;


    worker::Response::from_websocket(client)


}


```

The `WebSocketPair` constructor returns an Object, with the `0` and `1` keys each holding a `WebSocket` instance as its value. It is common to grab the two WebSockets from this pair using [Object.values ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5Fobjects/Object/values) and [ES6 destructuring ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring%5Fassignment), as seen in the below example.

In order to begin communicating with the `client` WebSocket in your Worker, call `accept` on the `server` WebSocket. This will tell the Workers runtime that it should listen for WebSocket data and keep the connection open with your `client` WebSocket:

* [  JavaScript ](#tab-panel-7389)
* [  Rust ](#tab-panel-7390)

JavaScript

```

async function handleRequest(request) {

  const upgradeHeader = request.headers.get('Upgrade');

  if (!upgradeHeader || upgradeHeader !== 'websocket') {

    return new Response('Expected Upgrade: websocket', { status: 426 });

  }


  const webSocketPair = new WebSocketPair();

  const [client, server] = Object.values(webSocketPair);


  server.accept();


  return new Response(null, {

    status: 101,

    webSocket: client,

  });

}


```

```

use worker::*;


#[event(fetch)]

async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {

    let upgrade_header = match req.headers().get("Upgrade") {

        Some(h) => h.to_str().unwrap(),

        None => "",

    };

    if upgrade_header != "websocket" {

        return worker::Response::error("Expected Upgrade: websocket", 426);

    }


    let ws = WebSocketPair::new()?;

    let client = ws.client;

    let server = ws.server;

    server.accept()?;


    worker::Response::from_websocket(client)


}


```

WebSockets emit a number of [Events](https://developers.cloudflare.com/workers/runtime-apis/websockets/#events) that can be connected to using `addEventListener`. The below example hooks into the `message` event and emits a `console.log` with the data from it:

* [  JavaScript ](#tab-panel-7391)
* [  Rust ](#tab-panel-7392)
* [  Hono ](#tab-panel-7393)

JavaScript

```

async function handleRequest(request) {

  const upgradeHeader = request.headers.get('Upgrade');

  if (!upgradeHeader || upgradeHeader !== 'websocket') {

    return new Response('Expected Upgrade: websocket', { status: 426 });

  }


  const webSocketPair = new WebSocketPair();

  const [client, server] = Object.values(webSocketPair);


  server.accept();

  server.addEventListener('message', event => {

    console.log(event.data);

  });


  return new Response(null, {

    status: 101,

    webSocket: client,

  });

}


```

```

use futures::StreamExt;

use worker::*;


#[event(fetch)]

async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {

    let upgrade_header = match req.headers().get("Upgrade") {

        Some(h) => h.to_str().unwrap(),

        None => "",

    };

    if upgrade_header != "websocket" {

        return worker::Response::error("Expected Upgrade: websocket", 426);

    }


    let ws = WebSocketPair::new()?;

    let client = ws.client;

    let server = ws.server;

    server.accept()?;


    wasm_bindgen_futures::spawn_local(async move {

        let mut event_stream = server.events().expect("could not open stream");

        while let Some(event) = event_stream.next().await {

            match event.expect("received error in websocket") {

                WebsocketEvent::Message(msg) => server.send(&msg.text()).unwrap(),

                WebsocketEvent::Close(event) => console_log!("{:?}", event),

            }

        }

    });

    worker::Response::from_websocket(client)


}


```

TypeScript

```

import { Hono } from 'hono'

import { upgradeWebSocket } from 'hono/cloudflare-workers'


const app = new Hono()


app.get(

  '*',

  upgradeWebSocket((c) => {

    return {

      onMessage(event, ws) {

        console.log('Received message from client:', event.data)

        ws.send(`Echo: ${event.data}`)

      },

      onClose: (event) => {

        console.log('WebSocket closed:', event)

      },

      onError: (event) => {

        console.error('WebSocket error:', event)

      },

    }

  })

)


export default app;


```

### Connect to the WebSocket server from a client

Writing WebSocket clients that communicate with your Workers function is a two-step process: first, create the WebSocket instance, and then attach event listeners to it:

JavaScript

```

const websocket = new WebSocket(

  "wss://websocket-example.signalnerve.workers.dev",

);

websocket.addEventListener("message", (event) => {

  console.log("Message received from server");

  console.log(event.data);

});


```

WebSocket clients can send messages back to the server using the [send](https://developers.cloudflare.com/workers/runtime-apis/websockets/#send) function:

JavaScript

```

websocket.send("MESSAGE");


```

When the WebSocket interaction is complete, the client can close the connection using [close](https://developers.cloudflare.com/workers/runtime-apis/websockets/#close):

JavaScript

```

websocket.close();


```

For an example of this in practice, refer to the [websocket-template ↗](https://github.com/cloudflare/websocket-template) to get started with WebSockets.

## Write a WebSocket client

Cloudflare Workers supports the `new WebSocket(url)` constructor. A Worker can establish a WebSocket connection to a remote server in the same manner as the client implementation described above.
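As a sketch of the constructor-based approach (the URL and echo behavior are placeholder assumptions; the injectable constructor parameter only exists so the example can be exercised outside the Workers runtime — in a Worker you would use the global `WebSocket` directly):

```javascript
// Minimal Worker-side WebSocket client using the standard constructor.
// The connection begins opening as soon as the object is constructed.
function openSocket(url, WebSocketImpl = WebSocket) {
  const ws = new WebSocketImpl(url);
  ws.addEventListener("open", () => ws.send("hello")); // send once connected
  ws.addEventListener("message", (event) => console.log(event.data));
  return ws;
}
```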

Additionally, Cloudflare supports establishing WebSocket connections by making a fetch request to a URL with the `Upgrade` header set.

JavaScript

```

async function websocket(url) {

  // Make a fetch request including `Upgrade: websocket` header.

  // The Workers Runtime will automatically handle other requirements

  // of the WebSocket protocol, like the Sec-WebSocket-Key header.

  let resp = await fetch(url, {

    headers: {

      Upgrade: "websocket",

    },

  });


  // If the WebSocket handshake completed successfully, then the

  // response has a `webSocket` property.

  let ws = resp.webSocket;

  if (!ws) {

    throw new Error("server didn't accept WebSocket");

  }


  // Call accept() to indicate that you'll be handling the socket here

  // in JavaScript, as opposed to returning it on to a client.

  ws.accept();


  // Now you can send and receive messages like before.

  ws.send("hello");

  ws.addEventListener("message", (msg) => {

    console.log(msg.data);

  });

}


```

## WebSocket compression

Cloudflare Workers supports WebSocket compression. Refer to [WebSocket Compression](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#websocket-compression) for more information.


---

---
title: Tutorials
description: View tutorials to help you get started with Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Tutorials

View tutorials to help you get started with Workers.

## Docs

| Name                                                                                                                                                                                   | Last Updated       | Difficulty   |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | ------------ |
| [Generate OG images for Astro sites](https://developers.cloudflare.com/browser-rendering/how-to/og-images-astro/)                                                                      |                    | Intermediate |
| [Build a Comments API](https://developers.cloudflare.com/d1/tutorials/build-a-comments-api/)                                                                                           | 18 days ago        | Intermediate |
| [Deploy an Express.js application on Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/deploy-an-express-app/)                                                   | 5 months ago       | Beginner     |
| [Connect to a PostgreSQL database with Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/postgres/)                                                              | 9 months ago       | Beginner     |
| [Query D1 using Prisma ORM](https://developers.cloudflare.com/d1/tutorials/d1-and-prisma-orm/)                                                                                         | 10 months ago      | Beginner     |
| [Migrate from Netlify to Workers](https://developers.cloudflare.com/workers/static-assets/migration-guides/netlify-to-workers/)                                                        | 11 months ago      | Beginner     |
| [Migrate from Vercel to Workers](https://developers.cloudflare.com/workers/static-assets/migration-guides/vercel-to-workers/)                                                          | 11 months ago      | Beginner     |
| [Tutorial - React SPA with an API](https://developers.cloudflare.com/workers/vite-plugin/tutorial/)                                                                                    | 12 months ago      |              |
| [Connect to a MySQL database with Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/mysql/)                                                                      | about 1 year ago   | Beginner     |
| [Set up and use a Prisma Postgres database](https://developers.cloudflare.com/workers/tutorials/using-prisma-postgres-with-workers/)                                                   | about 1 year ago   | Beginner     |
| [Store and Catalog AI Generated Images with R2 (Part 3)](https://developers.cloudflare.com/workers-ai/guides/tutorials/image-generation-playground/image-generator-store-and-catalog/) | about 1 year ago   | Beginner     |
| [Build a Retrieval Augmented Generation (RAG) AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/)                            | over 1 year ago    | Beginner     |
| [Using BigQuery with Workers AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/)                                                        | over 1 year ago    | Beginner     |
| [How to Build an Image Generator using Workers AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/image-generation-playground/)                                         | over 1 year ago    | Beginner     |
| [Build an AI Image Generator Playground (Part 1)](https://developers.cloudflare.com/workers-ai/guides/tutorials/image-generation-playground/image-generator-flux/)                     | over 1 year ago    | Beginner     |
| [Add New AI Models to your Playground (Part 2)](https://developers.cloudflare.com/workers-ai/guides/tutorials/image-generation-playground/image-generator-flux-newmodels/)             | over 1 year ago    | Beginner     |
| [Use event notification to summarize PDF files on upload](https://developers.cloudflare.com/r2/tutorials/summarize-pdf/)                                                               | over 1 year ago    | Intermediate |
| [Handle rate limits of external APIs](https://developers.cloudflare.com/queues/tutorials/handle-rate-limits/)                                                                          | over 1 year ago    | Beginner     |
| [Build an API to access D1 using a proxy Worker](https://developers.cloudflare.com/d1/tutorials/build-an-api-to-access-d1/)                                                            | over 1 year ago    | Intermediate |
| [Deploy a Worker](https://developers.cloudflare.com/pulumi/tutorial/hello-world/)                                                                                                      | over 1 year ago    | Beginner     |
| [Build a web crawler with Queues and Browser Rendering](https://developers.cloudflare.com/queues/tutorials/web-crawler-with-browser-rendering/)                                        | over 1 year ago    | Intermediate |
| [Create a fine-tuned OpenAI model with R2](https://developers.cloudflare.com/workers/tutorials/create-finetuned-chatgpt-ai-models-with-r2/)                                            | almost 2 years ago | Intermediate |
| [Build a Slackbot](https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/)                                                                                              | almost 2 years ago | Beginner     |
| [Use Workers KV directly from Rust](https://developers.cloudflare.com/workers/tutorials/workers-kv-from-rust/)                                                                         | almost 2 years ago | Intermediate |
| [Build a todo list Jamstack application](https://developers.cloudflare.com/workers/tutorials/build-a-jamstack-app/)                                                                    | almost 2 years ago | Beginner     |
| [Send Emails With Postmark](https://developers.cloudflare.com/workers/tutorials/send-emails-with-postmark/)                                                                            | almost 2 years ago | Beginner     |
| [Send Emails With Resend](https://developers.cloudflare.com/workers/tutorials/send-emails-with-resend/)                                                                                | almost 2 years ago | Beginner     |
| [Log and store upload events in R2 with event notifications](https://developers.cloudflare.com/r2/tutorials/upload-logs-event-notifications/)                                          | about 2 years ago  | Beginner     |
| [Create custom headers for Cloudflare Access-protected origins with Workers](https://developers.cloudflare.com/cloudflare-one/tutorials/access-workers/)                               | over 2 years ago   | Intermediate |
| [Create a serverless, globally distributed time-series API with Timescale](https://developers.cloudflare.com/hyperdrive/tutorials/serverless-timeseries-api-with-timescale/)           | over 2 years ago   | Beginner     |
| [Deploy a Browser Rendering Worker with Durable Objects](https://developers.cloudflare.com/browser-rendering/workers-bindings/browser-rendering-with-do/)                              | over 2 years ago   | Beginner     |
| [GitHub SMS notifications using Twilio](https://developers.cloudflare.com/workers/tutorials/github-sms-notifications-using-twilio/)                                                    | over 2 years ago   | Beginner     |
| [Deploy a real-time chat application](https://developers.cloudflare.com/workers/tutorials/deploy-a-realtime-chat-app/)                                                                 | over 2 years ago   | Intermediate |
| [Build a QR code generator](https://developers.cloudflare.com/workers/tutorials/build-a-qr-code-generator/)                                                                            | almost 3 years ago | Beginner     |
| [Securely access and upload assets with Cloudflare R2](https://developers.cloudflare.com/workers/tutorials/upload-assets-with-r2/)                                                     | almost 3 years ago | Beginner     |
| [OpenAI GPT function calling with JavaScript and Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/openai-function-calls-workers/)                               | almost 3 years ago | Beginner     |
| [Handle form submissions with Airtable](https://developers.cloudflare.com/workers/tutorials/handle-form-submissions-with-airtable/)                                                    | almost 3 years ago | Beginner     |
| [Connect to and query your Turso database using Workers](https://developers.cloudflare.com/workers/tutorials/connect-to-turso-using-workers/)                                          | about 3 years ago  | Beginner     |
| [Generate YouTube thumbnails with Workers and Cloudflare Image Resizing](https://developers.cloudflare.com/workers/tutorials/generate-youtube-thumbnails-with-workers-and-images/)     | about 3 years ago  | Intermediate |

## Videos

[ Play ](https://youtube.com/watch?v=xu4Wb-IppmM) 

OpenAI Relay Server on Cloudflare Workers

In this video, Craig Dennis walks you through the deployment of OpenAI's relay server to use with their realtime API.

[ Play ](https://youtube.com/watch?v=B2bLUc3iOsI) 

Deploy your React App to Cloudflare Workers

Learn how to deploy an existing React application to Cloudflare Workers.

[ Play ](https://youtube.com/watch?v=L6gR4Yr3UW8) 

Cloudflare Workflows | Schedule and Sleep For Your Apps (Part 3 of 3)

Cloudflare Workflows allows you to initiate sleep as an explicit step, which can be useful when you want a Workflow to wait, schedule work ahead, or pause until an input or other external state is ready.

[ Play ](https://youtube.com/watch?v=y4PPsvHrQGA) 

Cloudflare Workflows | Batching and Monitoring Your Durable Execution (Part 2 of 3)

Workflows exposes metrics such as execution, error rates, steps, and total duration!

[ Play ](https://youtube.com/watch?v=slS4RBV0SBk) 

Cloudflare Workflows | Introduction (Part 1 of 3)

In this video, we introduce Cloudflare Workflows, the Newest Developer Platform Primitive at Cloudflare.

[ Play ](https://youtube.com/watch?v=W45MIi%5Ft%5Fgo) 

Building Front-End Applications | Now Supported by Cloudflare Workers

You can now build front-end applications, just like you do on Cloudflare Pages, but with the added benefit of Workers.

[ Play ](https://youtube.com/watch?v=10-kiyJNr8s) 

Build a private AI chatbot using Meta's Llama 3.1

In this video, you will learn how to set up a private AI chat powered by Llama 3.1 for secure, fast interactions, deploy the model on Cloudflare Workers for serverless, scalable performance and use Cloudflare's Workers AI for seamless integration and edge computing benefits.

[ Play ](https://youtube.com/watch?v=HXOpxNaKUzw) 

How to Build Event-Driven Applications with Cloudflare Queues

In this video, we demonstrate how to build an event-driven application using Cloudflare Queues. Event-driven system lets you decouple services, allowing them to process and scale independently.

[ Play ](https://youtube.com/watch?v=bwJkwD-F0kQ) 

Welcome to the Cloudflare Developer Channel

Welcome to the Cloudflare Developers YouTube channel. We've got tutorials and working demos and everything you need to level up your projects. Whether you're working on your next big thing or just dorking around with some side projects, we've got you covered! So why don't you come hang out, subscribe to our developer channel and together we'll build something awesome. You're gonna love it.

[ Play ](https://youtube.com/watch?v=doKt9wWQF9A) 

AI meets Maps | Using Cloudflare AI, Langchain, Mapbox, Folium and Streamlit

Welcome to RouteMe, a smart tool that helps you plan the most efficient route between landmarks in any city, powered by Cloudflare Workers AI, Langchain, and Mapbox. This Streamlit web app uses LLMs and Mapbox's Optimization API to solve the classic traveling salesman problem, turning your sightseeing into an optimized adventure!

[ Play ](https://youtube.com/watch?v=9IjfyBJsJRQ) 

Use Vectorize to add additional context to your AI Applications through RAG

A RAG based AI Chat app that uses Vectorize to access video game data for employees of Gamertown.

[ Play ](https://youtube.com/watch?v=dttu4QtKkO0) 

Build Rust Powered Apps

In this video, we will show you how to build a global database using workers-rs to keep track of every country and city you’ve visited.

[ Play ](https://youtube.com/watch?v=QTsaAhFvX9o) 

Stateful Apps with Cloudflare Workers

Learn how to access external APIs, cache and retrieve data using Workers KV, and create SQL-driven applications with Cloudflare D1.

[ Play ](https://youtube.com/watch?v=H7Qe96fqg1M) 

Learn Cloudflare Workers - Full Course for Beginners

Learn how to build your first Cloudflare Workers application and deploy it to Cloudflare's global network.

[ Play ](https://youtube.com/watch?v=CHfKeFakGAI) 

How to use Cloudflare AI models and inference in Python with Jupyter Notebooks

Cloudflare Workers AI provides a ton of AI models and inference capabilities. In this video, we will explore how to make use of Cloudflare’s AI model catalog using a Python Jupyter Notebook.

[ Play ](https://youtube.com/watch?v=9JM5Z0KzQsQ) 

Learn AI Development (models, embeddings, vectors)

In this workshop, Kristian Freeman, Cloudflare Developer Advocate, teaches the basics of AI Development - models, embeddings, and vectors (including vector databases).

[ Play ](https://youtube.com/watch?v=idKdjA8t0jw) 

Optimize your AI App & fine-tune models (AI Gateway, R2)

In this workshop, Kristian Freeman, Cloudflare Developer Advocate, shows how to optimize your existing AI applications with Cloudflare AI Gateway, and how to finetune OpenAI models using R2.


---

---
title: Build a todo list Jamstack application
description: This tutorial explains how to build a todo list application using HTML, CSS, and JavaScript.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Build a todo list Jamstack application

**Last reviewed:**  almost 2 years ago 

In this tutorial, you will build a todo list application using HTML, CSS, and JavaScript. The application data will be stored in [Workers KV](https://developers.cloudflare.com/kv/api/).

![Preview of a finished todo list. Continue reading for instructions on how to set up a todo list.](https://developers.cloudflare.com/_astro/finished.CHDh55j7_Z2saS5S.webp) 

Before starting this project, you should have some experience with HTML, CSS, and JavaScript. You will learn:

1. How building with Workers allows you to focus on writing code and shipping finished products.
2. How the addition of Workers KV makes this tutorial a great introduction to building full, data-driven applications.

If you would like to see the finished code for this project, find the [project on GitHub ↗](https://github.com/lauragift21/cloudflare-workers-todos) and refer to the [live demo ↗](https://todos.examples.workers.dev/) to review what you will be building.

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## 1\. Create a new Workers project

First, use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI tool to create a new Cloudflare Workers project named `todos`. In this tutorial, you will use the default `Hello World` template to create a Workers project.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- todos
```

```
yarn create cloudflare todos
```

```
pnpm create cloudflare@latest todos
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `JavaScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Move into your newly created directory:

Terminal window

```
cd todos
```

Inside of your new `todos` Worker project directory, `index.js` represents the entry point to your Cloudflare Workers application.

All incoming HTTP requests to a Worker are passed to the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) as a [request](https://developers.cloudflare.com/workers/runtime-apis/request/) object. After a request is received by the Worker, the response your application constructs will be returned to the user. This tutorial will guide you through understanding how the request/response pattern works and how you can use it to build fully featured applications.

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

In your default `index.js` file, you can see the request/response pattern in action: the `fetch` handler constructs a new `Response` with the body text `'Hello World!'`.

When a Worker receives a `request`, it returns the newly constructed response to the client. Rather than forwarding requests on to an origin server, as a standard server would, your Worker constructs and serves responses directly from [Cloudflare's global network ↗](https://www.cloudflare.com/network).

## 2\. Review project details

Any project you deploy to Cloudflare Workers can make use of modern JavaScript tooling like [ES modules](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/), `npm` packages, and [async/await ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async%5Ffunction) functions to build your application. In addition to writing Workers, you can use Workers to [build full applications](https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/) using the same tooling and process as in this tutorial.

In this tutorial, you will build a todo list application running on Workers that allows reading data from a [KV](https://developers.cloudflare.com/kv/) store and using the data to populate an HTML response to send to the client.

The work needed to create this application is split into three tasks:

1. Write data to KV.
2. Render data from KV.
3. Add todos from the application UI.

For the remainder of this tutorial you will complete each task, iterating on your application, and then publish it to your own domain.

## 3\. Write data to KV

To begin, you need to understand how to populate your todo list with actual data. To do this, use [Cloudflare Workers KV](https://developers.cloudflare.com/kv/) — a key-value store that you can access inside of your Worker to read and write data.

To get started with KV, set up a namespace. All of your cached data will be stored inside that namespace and, with configuration, you can access that namespace inside the Worker with a predefined variable. Use Wrangler to create a new namespace called `TODOS` with the [kv namespace create command](https://developers.cloudflare.com/workers/wrangler/commands/kv/#kv-namespace-create) and get the associated namespace ID by running the following command in your terminal:

Create a new KV namespace

```
npx wrangler kv namespace create "TODOS" --preview
```

The `--preview` flag creates a preview namespace to interact with during development, instead of a production namespace. Namespaces can be added to your application by defining them inside your Wrangler configuration. Copy your newly created namespace ID and, in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), define a `kv_namespaces` key to set up your namespace:

* [  wrangler.jsonc ](#tab-panel-7754)
* [  wrangler.toml ](#tab-panel-7755)

```
{
  "kv_namespaces": [
    {
      "binding": "TODOS",
      "id": "<YOUR_ID>",
      "preview_id": "<YOUR_PREVIEW_ID>"
    }
  ]
}
```

```
[[kv_namespaces]]
binding = "TODOS"
id = "<YOUR_ID>"
preview_id = "<YOUR_PREVIEW_ID>"
```

The defined namespace, `TODOS`, will now be available inside of your codebase. With that, it is time to understand the [KV API](https://developers.cloudflare.com/kv/api/). A KV namespace has three primary methods you can use to interface with your cache: `get`, `put`, and `delete`.
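
All three methods are asynchronous and return Promises. Before wiring the real namespace into your Worker, it can help to see the shape of the interface. The sketch below uses a hypothetical in-memory stand-in (`makeFakeKV` is illustrative, not part of Workers KV or this tutorial) to show how `get`, `put`, and `delete` behave:

```javascript
// An in-memory stand-in for a KV binding, mirroring the asynchronous
// get/put/delete interface. The real `env.TODOS` binding behaves
// similarly, but stores data globally across Cloudflare's network.
const makeFakeKV = () => {
  const store = new Map();
  return {
    get: async (key) => (store.has(key) ? store.get(key) : null), // null when missing
    put: async (key, value) => void store.set(key, value),
    delete: async (key) => void store.delete(key),
  };
};

const demo = async () => {
  const TODOS = makeFakeKV();
  await TODOS.put("data", JSON.stringify({ todos: [] }));
  const cached = await TODOS.get("data"); // the stored JSON string
  await TODOS.delete("data");
  const missing = await TODOS.get("data"); // null once deleted
  return { cached, missing };
};
```

Like the real binding, values are stored as strings, which is why this tutorial serializes objects with `JSON.stringify` before writing them.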

Start storing data by defining an initial set of data, which you will put inside of the cache using the `put` method. The following example defines a `defaultData` object that wraps the todo items in a `todos` array, rather than storing a bare array, so that you can add metadata and other information to this cache object later on. Given that data object, use `JSON.stringify` to add a string into the cache:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    const defaultData = {
      todos: [
        {
          id: 1,
          name: "Finish the Cloudflare Workers blog post",
          completed: false,
        },
      ],
    };
    await env.TODOS.put("data", JSON.stringify(defaultData));
    return new Response("Hello World!");
  },
};
```

Workers KV is an eventually consistent, global datastore. Any writes within a region are immediately reflected within that same region but will not be immediately available in other regions. However, those writes will eventually be available everywhere and, at that point, Workers KV guarantees that data within each region will be consistent.

Given the presence of data in the cache and the assumption that your cache is eventually consistent, this code needs a slight adjustment: the application should check the cache and use its value, if the key exists. If it does not, you will use `defaultData` as the data source for now (it should be set in the future) and write it to the cache for future use. After breaking out the code into a few functions for simplicity, the result looks like this:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    const defaultData = {
      todos: [
        {
          id: 1,
          name: "Finish the Cloudflare Workers blog post",
          completed: false,
        },
      ],
    };
    const setCache = (data) => env.TODOS.put("data", data);
    const getCache = () => env.TODOS.get("data");

    let data;

    const cache = await getCache();
    if (!cache) {
      await setCache(JSON.stringify(defaultData));
      data = defaultData;
    } else {
      data = JSON.parse(cache);
    }

    return new Response(JSON.stringify(data));
  },
};
```

## Render data from KV

Now that your code reads the cached data object for your application, the next step is to take this data and render it in a user interface.

To do this, make a new `html` variable in your Workers script and use it to build up a static HTML template that you can serve to the client. In `fetch`, construct a new `Response` with a `Content-Type: text/html` header and serve it to the client:

JavaScript

```
const html = `<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width,initial-scale=1" />
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
  </body>
</html>
`;

async fetch (request, env, ctx) {
  // previous code
  return new Response(html, {
    headers: {
      'Content-Type': 'text/html'
    }
  });
}
```

You have a static HTML site being rendered and you can begin populating it with data. In the body, add a `div` tag with an `id` of `todos`:

JavaScript

```
const html = `<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width,initial-scale=1" />
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
    <div id="todos"></div>
  </body>
</html>
`;
```

Add a `<script>` element at the end of the body content. The script reads a `todos` array and, for each `todo` in the array, creates a `div` element and appends it to the `todos` HTML element:

JavaScript

```
const html = `<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width,initial-scale=1" />
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
    <div id="todos"></div>
  </body>
  <script>
    window.todos = []
    var todoContainer = document.querySelector("#todos")
    window.todos.forEach(todo => {
      var el = document.createElement("div")
      el.textContent = todo.name
      todoContainer.appendChild(el)
    })
  </script>
</html>
`;
```

Your static page can take in `window.todos` and render HTML based on it, but you have not actually passed in any data from KV. To do this, you will need to make a few changes.

First, your `html` variable will change to a function. The function will take in a `todos` argument, which will populate the `window.todos` variable in the above code sample:

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <script>
    window.todos = ${todos}
    var todoContainer = document.querySelector("#todos")
    // ...
  </script>
</html>
`;
```

In `fetch`, use the retrieved KV data to call the `html` function and generate a `Response` based on it:

JavaScript

```
async fetch (request, env, ctx) {
  const body = html(JSON.stringify(data.todos).replace(/</g, '\\u003c'));
  return new Response(body, {
    headers: { 'Content-Type': 'text/html' },
  });
}
```
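
The `.replace(/</g, '\\u003c')` call above is a safety measure: without it, a todo name containing `</script>` could terminate the inline script tag early and inject HTML into the page. A short standalone illustration, runnable outside the Worker:

```javascript
// Escaping "<" as its JavaScript unicode escape keeps the JSON string
// parseable while making it inert inside an HTML <script> tag.
const unsafe = JSON.stringify([{ name: "</script><script>alert(1)</script>" }]);
const safe = unsafe.replace(/</g, "\\u003c");
// `safe` contains no literal "<", yet JSON.parse recovers the original value.
```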

## 4\. Add todos from the user interface (UI)

At this point, you have built a Cloudflare Worker that takes data from Workers KV and renders a static page based on that data. That static page reads the data and generates a todo list from it. The remaining task is creating todos from inside the application UI. You can add todos using the KV API: update the cache by running `env.TODOS.put("data", newData)`.

To accept new todo data, you will add a second branch to your Workers script, designed to watch for `PUT` requests to `/`. When a request body is received at that URL, the Worker will write the new todo data to your KV store.

Add this new functionality in `fetch`: if the request method is a PUT, it will take the request body and update the cache.

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    const setCache = (data) => env.TODOS.put("data", data);

    if (request.method === "PUT") {
      const body = await request.text();
      try {
        JSON.parse(body);
        await setCache(body);
        return new Response(body, { status: 200 });
      } catch (err) {
        return new Response(err, { status: 500 });
      }
    }
    // previous code
  },
};
```

Check that the request is a `PUT` and wrap the remainder of the code in a `try...catch` block. First, parse the body of the request coming in, ensuring that it is JSON, before you update the cache with the new data and return it to the user. If anything goes wrong, return a `500` status code. Requests with any other HTTP method, such as `POST` or `DELETE`, fall through to the rendering code; you could also choose to return a `404` error for unsupported methods.
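
This routing logic can be exercised outside the Workers runtime. The sketch below stands in a plain object for the Workers `request` and a `Map` for the KV binding; `handleRequest` and `store` are illustrative names, not part of the tutorial's final code:

```javascript
// A runnable sketch of the PUT handling described above: valid JSON
// updates the store, invalid JSON returns a 500, and unsupported
// methods return a 404 instead of falling through to the renderer.
const handleRequest = async (request, store) => {
  if (request.method === "PUT") {
    const body = await request.text();
    try {
      JSON.parse(body); // reject bodies that are not valid JSON
      store.set("data", body);
      return new Response(body, { status: 200 });
    } catch (err) {
      return new Response(String(err), { status: 500 });
    }
  }
  if (request.method === "GET") {
    // in the Worker, this branch renders the HTML page instead
    return new Response(store.get("data") ?? "{}", { status: 200 });
  }
  return new Response("Not found", { status: 404 });
};
```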

With this script, you can now add some dynamic functionality to your HTML page to actually hit this route. First, create an input for your todo name and a button for submitting the todo.

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <div>
    <input type="text" name="name" placeholder="A new todo"></input>
    <button id="create">Create</button>
  </div>
  <!-- existing script -->
</html>
`;
```

Given that input and button, add a corresponding JavaScript function to watch for clicks on the button — once the button is clicked, the browser will `PUT` to `/` and submit the todo.

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <script>
    // Existing JavaScript code

    var createTodo = function() {
      var input = document.querySelector("input[name=name]")
      if (input.value.length) {
        todos = [].concat(todos, {
          id: todos.length + 1,
          name: input.value,
          completed: false,
        })
        fetch("/", {
          method: "PUT",
          body: JSON.stringify({ todos: todos }),
        })
      }
    }

    document.querySelector("#create").addEventListener("click", createTodo)
  </script>
</html>
`;
```

This code updates the cache. Remember that the KV cache is eventually consistent — even if you were to update your Worker to read from the cache and return it, you have no guarantees it will actually be up to date. Instead, update the list of todos locally, by taking your original code for rendering the todo list, making it a reusable function called `populateTodos`, and calling it when the page loads and when the cache request has finished:

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <script>
    var populateTodos = function() {
      var todoContainer = document.querySelector("#todos")
      todoContainer.innerHTML = null
      window.todos.forEach(todo => {
        var el = document.createElement("div")
        el.textContent = todo.name
        todoContainer.appendChild(el)
      })
    }

    populateTodos()

    var createTodo = function() {
      var input = document.querySelector("input[name=name]")
      if (input.value.length) {
        todos = [].concat(todos, {
          id: todos.length + 1,
          name: input.value,
          completed: false,
        })
        fetch("/", {
          method: "PUT",
          body: JSON.stringify({ todos: todos }),
        })
        populateTodos()
        input.value = ""
      }
    }

    document.querySelector("#create").addEventListener("click", createTodo)
  </script>
`;
```

With the client-side code in place, deploying the new version of the Worker puts all these pieces together. The result is a working, dynamic todo list.

## 5\. Update todos from the application UI

For the final piece of your todo list, you need to be able to update todos — specifically, marking them as completed.

Luckily, a great deal of the infrastructure for this work is already in place. You can update the todo list data in the cache, as evidenced by your `createTodo` function. Performing updates on a todo is more of a client-side task than a Worker-side one.

To start, the `populateTodos` function can be updated to generate a `div` for each todo. In addition, move the name of the todo into a child element of that `div`:

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <script>
    var populateTodos = function() {
      var todoContainer = document.querySelector("#todos")
      todoContainer.innerHTML = null
      window.todos.forEach(todo => {
        var el = document.createElement("div")
        var name = document.createElement("span")
        name.textContent = todo.name
        el.appendChild(name)
        todoContainer.appendChild(el)
      })
    }
  </script>
`;
```

You have designed the client-side part of this code to handle an array of todos and render a list of HTML elements. There are a few things in the code that you have not had a use for yet: specifically, the todo IDs and the `completed` state. Together, they are what make updating todos from the application UI possible.

To start, it would be useful to attach the ID of each todo to the HTML element that represents it. By doing this, you can refer to the element later and match it to the corresponding todo in the JavaScript part of your code. Data attributes, and the corresponding `dataset` property in JavaScript, are a perfect way to implement this. When you generate your `div` element for each todo, attach a data attribute called `todo` to each `div`:

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <script>
    var populateTodos = function() {
      var todoContainer = document.querySelector("#todos")
      todoContainer.innerHTML = null
      window.todos.forEach(todo => {
        var el = document.createElement("div")
        el.dataset.todo = todo.id

        var name = document.createElement("span")
        name.textContent = todo.name

        el.appendChild(name)
        todoContainer.appendChild(el)
      })
    }
  </script>
`;
```

Inside your HTML, each `div` for a todo now has an attached data attribute, which looks like:

```
<div data-todo="1"></div>
<div data-todo="2"></div>
```

You can now generate a checkbox for each todo element. This checkbox defaults to unchecked for new todos, but is marked as checked for completed todos as the element is rendered:

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <script>
    window.todos.forEach(todo => {
      var el = document.createElement("div")
      el.dataset.todo = todo.id

      var name = document.createElement("span")
      name.textContent = todo.name

      var checkbox = document.createElement("input")
      checkbox.type = "checkbox"
      checkbox.checked = todo.completed ? 1 : 0

      el.appendChild(checkbox)
      el.appendChild(name)
      todoContainer.appendChild(el)
    })
  </script>
`;
```

The checkbox is set up to correctly reflect the value of `completed` on each todo, but it does not yet update when you actually check the box. To do this, attach the `completeTodo` function as an event listener on the `click` event. Inside the function, inspect the checkbox element, find its parent (the todo `div`), and use its `todo` data attribute to find the corresponding todo in the data array. Toggle the `completed` status, update the data array, and re-render the UI:

JavaScript

```
const html = (todos) => `
<!doctype html>
<html>
  <!-- existing content -->
  <script>
    var populateTodos = function() {
      window.todos.forEach(todo => {
        // Existing todo element set up code
        checkbox.addEventListener("click", completeTodo)
      })
    }

    var completeTodo = function(evt) {
      var checkbox = evt.target
      var todoElement = checkbox.parentNode

      var newTodoSet = [].concat(window.todos)
      var todo = newTodoSet.find(t => t.id == todoElement.dataset.todo)
      todo.completed = !todo.completed
      todos = newTodoSet
      updateTodos()
    }
  </script>
`;
```

The final result of your code is a system that checks the `todos` variable, updates your Cloudflare KV cache with that value, and then does a re-render of the UI based on the data it has locally.

## 6\. Conclusion and next steps

By completing this tutorial, you have built a static HTML, CSS, and JavaScript application that is transparently powered by Workers and Workers KV, which take full advantage of Cloudflare's global network.

If you would like to keep improving on your project, you can implement a better design (you can refer to a live version available at [todos.signalnerve.workers.dev ↗](https://todos.signalnerve.workers.dev/)), or make additional improvements to security and speed.

You may also want to add user-specific caching. Right now, the cache key is always `data` – this means that any visitor to the site will share the same todo list with other visitors. Within your Worker, you could use values from the client request to create and maintain user-specific lists. For example, you may generate a cache key based on the requesting IP:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    const defaultData = {
      todos: [
        {
          id: 1,
          name: "Finish the Cloudflare Workers blog post",
          completed: false,
        },
      ],
    };
    const setCache = (key, data) => env.TODOS.put(key, data);
    const getCache = (key) => env.TODOS.get(key);

    const ip = request.headers.get("CF-Connecting-IP");
    const myKey = `data-${ip}`;

    if (request.method === "PUT") {
      const body = await request.text();
      try {
        JSON.parse(body);
        await setCache(myKey, body);
        return new Response(body, { status: 200 });
      } catch (err) {
        return new Response(err, { status: 500 });
      }
    }

    let data;

    const cache = await getCache(myKey);
    if (!cache) {
      await setCache(myKey, JSON.stringify(defaultData));
      data = defaultData;
    } else {
      data = JSON.parse(cache);
    }

    const body = html(JSON.stringify(data.todos).replace(/</g, "\\u003c"));

    return new Response(body, {
      headers: {
        "Content-Type": "text/html",
      },
    });
  },
};
```

After making these changes and deploying the Worker one more time, your todo list application now includes per-user functionality while still taking full advantage of Cloudflare's global network.

The final version of your Worker script should look like this:

JavaScript

```
const html = (todos) => `
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title>Todos</title>
    <link href="https://cdn.jsdelivr.net/npm/tailwindcss/dist/tailwind.min.css" rel="stylesheet"></link>
  </head>

  <body class="bg-blue-100">
    <div class="w-full h-full flex content-center justify-center mt-8">
      <div class="bg-white shadow-md rounded px-8 pt-6 py-8 mb-4">
        <h1 class="block text-grey-800 text-md font-bold mb-2">Todos</h1>
        <div class="flex">
          <input class="shadow appearance-none border rounded w-full py-2 px-3 text-grey-800 leading-tight focus:outline-none focus:shadow-outline" type="text" name="name" placeholder="A new todo"></input>
          <button class="bg-blue-500 hover:bg-blue-800 text-white font-bold ml-2 py-2 px-4 rounded focus:outline-none focus:shadow-outline" id="create" type="submit">Create</button>
        </div>
        <div class="mt-4" id="todos"></div>
      </div>
    </div>
  </body>

  <script>
    window.todos = ${todos}

    var updateTodos = function() {
      fetch("/", { method: "PUT", body: JSON.stringify({ todos: window.todos }) })
      populateTodos()
    }

    var completeTodo = function(evt) {
      var checkbox = evt.target
      var todoElement = checkbox.parentNode
      var newTodoSet = [].concat(window.todos)
      var todo = newTodoSet.find(t => t.id == todoElement.dataset.todo)
      todo.completed = !todo.completed
      window.todos = newTodoSet
      updateTodos()
    }

    var populateTodos = function() {
      var todoContainer = document.querySelector("#todos")
      todoContainer.innerHTML = null

      window.todos.forEach(todo => {
        var el = document.createElement("div")
        el.className = "border-t py-4"
        el.dataset.todo = todo.id

        var name = document.createElement("span")
        name.className = todo.completed ? "line-through" : ""
        name.textContent = todo.name

        var checkbox = document.createElement("input")
        checkbox.className = "mx-4"
        checkbox.type = "checkbox"
        checkbox.checked = todo.completed ? 1 : 0
        checkbox.addEventListener("click", completeTodo)

        el.appendChild(checkbox)
        el.appendChild(name)
        todoContainer.appendChild(el)
      })
    }

    populateTodos()

    var createTodo = function() {
      var input = document.querySelector("input[name=name]")
      if (input.value.length) {
        window.todos = [].concat(todos, { id: window.todos.length + 1, name: input.value, completed: false })
        input.value = ""
        updateTodos()
      }
    }

    document.querySelector("#create").addEventListener("click", createTodo)
  </script>
</html>
`;

export default {
  async fetch(request, env, ctx) {
    const defaultData = {
      todos: [
        {
          id: 1,
          name: "Finish the Cloudflare Workers blog post",
          completed: false,
        },
      ],
    };
    const setCache = (key, data) => env.TODOS.put(key, data);
    const getCache = (key) => env.TODOS.get(key);

    const ip = request.headers.get("CF-Connecting-IP");
    const myKey = `data-${ip}`;

    if (request.method === "PUT") {
      const body = await request.text();
      try {
        JSON.parse(body);
        await setCache(myKey, body);
        return new Response(body, { status: 200 });
      } catch (err) {
        return new Response(err, { status: 500 });
      }
    }

    let data;

    const cache = await getCache(myKey);
    if (!cache) {
      await setCache(myKey, JSON.stringify(defaultData));
      data = defaultData;
    } else {
      data = JSON.parse(cache);
    }

    const body = html(JSON.stringify(data.todos).replace(/</g, "\\u003c"));

    return new Response(body, {
      headers: {
        "Content-Type": "text/html",
      },
    });
  },
};
```

You can find the source code for this project, as well as a README with deployment instructions, [on GitHub ↗](https://github.com/lauragift21/cloudflare-workers-todos).


---

---
title: Build a QR code generator
description: This tutorial shows you how to build and publish a Worker application that generates QR codes. The final version of the codebase is available on GitHub.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ JavaScript ](https://developers.cloudflare.com/search/?tags=JavaScript) 


# Build a QR code generator

**Last reviewed:** almost 3 years ago

In this tutorial, you will build and publish a Worker application that generates QR codes.

If you would like to review the code for this tutorial, the final version of the codebase is [available on GitHub ↗](https://github.com/kristianfreeman/workers-qr-code-generator). You can take the code provided in the example repository, customize it, and deploy it for use in your own projects.

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## 1\. Create a new Workers project

First, use the [create-cloudflare CLI](https://developers.cloudflare.com/pages/get-started/c3) to create a new Cloudflare Workers project. To do this, open a terminal window and run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- qr-code-generator
```

```
yarn create cloudflare qr-code-generator
```

```
pnpm create cloudflare@latest qr-code-generator
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `JavaScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Then, move into your newly created directory:

Terminal window

```

cd qr-code-generator


```

Inside of your new `qr-code-generator` Worker project directory, `index.js` represents the entry point to your Cloudflare Workers application.

All Cloudflare Workers applications start by listening for `fetch` events, which are triggered when a client makes a request to a Workers route. After a request is received by the Worker, the response your application constructs will be returned to the user. This tutorial will guide you through understanding how the request/response pattern works and how you can use it to build fully featured applications.

JavaScript

```

export default {

  async fetch(request, env, ctx) {

    return new Response("Hello Worker!");

  },

};


```

In your default `index.js` file, you can see that request/response pattern in action. The `fetch` handler constructs a new `Response` with the body text `'Hello Worker!'`.

When a Worker receives a `fetch` event, the Worker returns the newly constructed response to the client. Your Worker will serve new responses directly from [Cloudflare's global network ↗](https://www.cloudflare.com/network) instead of continuing to your origin server. A standard server would accept requests and return responses. Cloudflare Workers allows you to respond quickly by constructing responses directly on the Cloudflare global network.
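`Request` and `Response` are standard Web APIs, so you can experiment with the same pattern outside of a Worker (for example, in Node.js 18+). A minimal sketch of constructing the response a Worker would return:

```javascript
// Constructing the same Response a Worker's fetch handler would return.
// Response is a standard Fetch API class, available in Workers, browsers,
// and modern Node.js.
const response = new Response("Hello Worker!", {
  status: 200,
  headers: { "Content-Type": "text/plain" },
});

// response.status is 200; response.text() resolves to "Hello Worker!"
```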

## 2\. Handle Incoming Request

Any project you publish to Cloudflare Workers can make use of modern JavaScript tooling like ES modules, `npm` packages, and [async/await ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async%5Ffunction) functions to build your application. In addition to writing Workers, you can use Workers to [build full applications](https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/) using the same tooling and process as in this tutorial.

The QR code generator you will build in this tutorial will be a Worker that runs on a single route and receives requests. Each request will contain a text message (a URL, for example), which the function will encode into a QR code. The function will then respond with the QR code in SVG image format.

At this point in the tutorial, your Worker function can receive requests and return a simple response with the text `"Hello Worker!"`. To handle data coming into your Worker, check if the incoming request is a `POST` request:

JavaScript

```

export default {

  async fetch(request, env, ctx) {

    if (request.method === "POST") {

      return new Response("Hello Worker!");

    }

  },

};


```

Currently, if an incoming request is not a `POST`, the function will return `undefined`. However, a Worker always needs to return a `Response`. Since the function should only accept incoming `POST` requests, return a new `Response` with a [405 status code ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/405) if the incoming request is not a `POST`:

JavaScript

```

export default {

  async fetch(request, env, ctx) {

    if (request.method === "POST") {

      return new Response("Hello Worker!");

    }


    return new Response("Expected POST request", {

      status: 405,

    });

  },

};


```

You have established the basic flow of the request. You will now set up a response to incoming valid requests. If a `POST` request comes in, the function should generate a QR code. To start, move the `"Hello Worker!"` response into a new function, `generateQRCode`, which will ultimately contain the bulk of your function’s logic:

JavaScript

```

export default {

  async fetch(request, env, ctx) {

    if (request.method === "POST") {

      // The next step will call generateQRCode(request) here

    }

    return new Response("Expected POST request", {

      status: 405,

    });

  },

};


async function generateQRCode(request) {

  // TODO: Include QR code generation

  return new Response("Hello worker!");

}


```

With the `generateQRCode` function in place, call it within the `fetch` handler and return its result directly to the client:

JavaScript

```

export default {

  async fetch(request, env, ctx) {

    if (request.method === "POST") {

      return generateQRCode(request);

    }

    return new Response("Expected POST request", {

      status: 405,

    });

  },

};


```

## 3\. Build a QR code generator

All projects deployed to Cloudflare Workers support npm packages. This support makes it easy to rapidly build out functionality in your Workers. The [`qrcode-svg` ↗](https://github.com/papnkukn/qrcode-svg) package is a great way to take text and encode it into a QR code. In the command line, install and save `qrcode-svg` to your project’s `package.json`:

 npm  yarn  pnpm  bun 

```
npm i qrcode-svg
```

```
yarn add qrcode-svg
```

```
pnpm add qrcode-svg
```

```
bun add qrcode-svg
```

In `index.js`, import the `qrcode-svg` package as the variable `QRCode`. In the `generateQRCode` function, parse the incoming request as JSON using `request.json()`, and generate a new QR code using the `qrcode-svg` package. The QR code is generated as an SVG. Construct a new instance of `Response`, passing in the SVG data as the body, and a `Content-Type` header of `image/svg+xml`. This will allow browsers to properly parse the data coming back from your Worker as an image:

JavaScript

```

import QRCode from "qrcode-svg";


async function generateQRCode(request) {

  const { text } = await request.json();

  const qr = new QRCode({ content: text || "https://workers.dev" });

  return new Response(qr.svg(), {

    headers: { "Content-Type": "image/svg+xml" },

  });

}


```

## 4\. Test in an application UI

The Worker will execute when a user sends a `POST` request to a route, but it is best practice to also provide a proper interface for testing the function. At this point in the tutorial, if any request is received by your function that is not a `POST`, a `405` response is returned. The new version of `fetch` should return a new `Response` with a static HTML document instead of the `405` error:

JavaScript

```

export default {

  async fetch(request, env, ctx) {

    if (request.method === "POST") {

      return generateQRCode(request);

    }


    return new Response(landing, {

      headers: {

        "Content-Type": "text/html",

      },

    });

  },

};


async function generateQRCode(request) {

  const { text } = await request.json();

  const qr = new QRCode({ content: text || "https://workers.dev" });

  return new Response(qr.svg(), {

    headers: { "Content-Type": "image/svg+xml" },

  });

}


const landing = `

<h1>QR Generator</h1>

<p>Click the below button to generate a new QR code. This will make a request to your Worker.</p>

<input type="text" id="text" value="https://workers.dev" />

<button onclick="generate()">Generate QR Code</button>

<p>Generated QR Code Image</p>

<img id="qr" src="#" />

<script>

  function generate() {

    fetch(window.location.pathname, {

      method: "POST",

      headers: { "Content-Type": "application/json" },

      body: JSON.stringify({ text: document.querySelector("#text").value })

    })

    .then(response => response.blob())

    .then(blob => {

      const reader = new FileReader();

      reader.onloadend = function () {

        document.querySelector("#qr").src = reader.result; // Update the image source with the newly generated QR code

      }

      reader.readAsDataURL(blob);

    })

  }

</script>

`;


```

The `landing` variable, which is a static HTML string, sets up an `input` tag and a corresponding `button`. Clicking the button calls the client-side `generate()` function, which makes an HTTP `POST` request back to your Worker, allowing you to see the corresponding QR code image returned on the page.

With the above steps complete, your Worker is ready. The full version of the code looks like this:

JavaScript

```

import QRCode from "qrcode-svg";


export default {

  async fetch(request, env, ctx) {

    if (request.method === "POST") {

      return generateQRCode(request);

    }


    return new Response(landing, {

      headers: {

        "Content-Type": "text/html",

      },

    });

  },

};


async function generateQRCode(request) {

  const { text } = await request.json();

  const qr = new QRCode({ content: text || "https://workers.dev" });

  return new Response(qr.svg(), {

    headers: { "Content-Type": "image/svg+xml" },

  });

}


const landing = `

<h1>QR Generator</h1>

<p>Click the below button to generate a new QR code. This will make a request to your Worker.</p>

<input type="text" id="text" value="https://workers.dev" />

<button onclick="generate()">Generate QR Code</button>

<p>Generated QR Code Image</p>

<img id="qr" src="#" />

<script>

  function generate() {

    fetch(window.location.pathname, {

      method: "POST",

      headers: { "Content-Type": "application/json" },

      body: JSON.stringify({ text: document.querySelector("#text").value })

    })

    .then(response => response.blob())

    .then(blob => {

      const reader = new FileReader();

      reader.onloadend = function () {

        document.querySelector("#qr").src = reader.result; // Update the image source with the newly generated QR code

      }

      reader.readAsDataURL(blob);

    })

  }

</script>

`;


```

## 5\. Deploy your Worker

With all the above steps complete, you have written the code for a QR code generator on Cloudflare Workers.

Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, run `npx wrangler deploy`, which will build and deploy your code.

Deploy your Worker project

```

npx wrangler deploy


```

## Related resources

In this tutorial, you built and deployed a Worker application for generating QR codes. If you would like to see the full source code for this application, you can find it [on GitHub ↗](https://github.com/kristianfreeman/workers-qr-code-generator).

If you want to get started building your own projects, review the existing list of [Quickstart templates](https://developers.cloudflare.com/workers/get-started/quickstarts/).


---

---
title: Build a Slackbot
description: Learn how to build a Slackbot with Hono and TypeScript in Cloudflare Workers
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Hono ](https://developers.cloudflare.com/search/?tags=Hono)[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript) 


# Build a Slackbot

**Last reviewed:** almost 2 years ago

In this tutorial, you will build a [Slack ↗](https://slack.com) bot using [Cloudflare Workers](https://developers.cloudflare.com/workers/). Your bot will make use of GitHub webhooks to send messages to a Slack channel when issues are updated or created, and allow users to write a command to look up GitHub issues from inside Slack.

![After following this tutorial, you will be able to create a Slackbot like the one in this example. Continue reading to build your Slackbot.](https://developers.cloudflare.com/_astro/issue-command.BJRwbx5d_Z1dTC4D.webp) 

This tutorial is recommended for people who are familiar with writing web applications. You will use TypeScript as the programming language and [Hono ↗](https://hono.dev/) as the web framework. If you have built an application with tools like [Node ↗](https://nodejs.org) and [Express ↗](https://expressjs.com), this project will feel very familiar to you. If you are new to writing web applications or have wanted to build something like a Slack bot in the past, but were intimidated by deployment or configuration, Workers will be a way for you to focus on writing code and shipping projects.

If you would like to review the code or how the bot works in an actual Slack channel before proceeding with this tutorial, you can access the final version of the codebase [on GitHub ↗](https://github.com/yusukebe/workers-slack-bot). From GitHub, you can add your own Slack API keys and deploy it to your own Slack channels for testing.

---

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## Set up Slack

This tutorial assumes that you already have a Slack account, and the ability to create and manage Slack applications.

### Configure a Slack application

To post messages from your Cloudflare Worker into a Slack channel, you will need to create an application in Slack’s UI. To do this, go to Slack’s API section, at [api.slack.com/apps ↗](https://api.slack.com/apps), and select **Create New App**.

![To create a Slackbot, first create a Slack App](https://developers.cloudflare.com/_astro/create-a-slack-app.D5_bKo4M_2uKImL.webp) 

Slack applications have many features. You will make use of two of them, Incoming Webhooks and Slash Commands, to build your Worker-powered Slack bot.

#### Incoming Webhook

Incoming Webhooks are URLs that you can use to send messages to your Slack channels. Your incoming webhook will be paired with GitHub’s webhook support to send messages to a Slack channel whenever there are updates to issues in a given repository. You will see the code in more detail as you build your application. First, create a Slack webhook:

1. On the sidebar of Slack's UI, select **Incoming Webhooks**.
2. In **Webhook URLs for your Workspace**, select **Add New Webhook to Workspace**.
3. On the following screen, select the channel that you want your webhook to send messages to (you can select a room, like #general or #code, or be messaged directly by your Slack bot when the webhook is called).
4. Authorize the new webhook URL.

After authorizing your webhook URL, you will be returned to the **Incoming Webhooks** page and can view your new webhook URL. You will add this into your Workers code later. Next, you will add the second component to your Slack bot: a Slash Command.

![Select Add New Webhook to Workspace to add a new Webhook URL in Slack's dashboard](https://developers.cloudflare.com/_astro/slack-incoming-webhook.DWpFxzq__1i7jiW.webp) 
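To preview what your Worker will eventually do with this URL, here is a sketch of posting to an Incoming Webhook: the payload is JSON with a `text` field, and the URL below is a placeholder for the one Slack generated for your workspace.

```javascript
// Sketch: sending a message through a Slack Incoming Webhook.
// The webhook URL is a placeholder; substitute the one from your dashboard.
const postToSlack = (webhookUrl, text) =>
  fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });

// Usage (not run here):
// await postToSlack("https://hooks.slack.com/services/...", "Issue updated!");
```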

#### Slash Command

A Slash Command in Slack is a custom-configured command that can be attached to a URL request. For example, if you configured `/weather <zip>`, Slack would make an HTTP POST request to a configured URL, passing the text `<zip>` to get the weather for a specified zip code. In your application, you will use the `/issue` command to look up GitHub issues using the [GitHub API ↗](https://developer.github.com). Typing `/issue cloudflare/wrangler#1` will send the text `cloudflare/wrangler#1` in an HTTP POST request to your application, which the application will use to find the [relevant GitHub issue ↗](https://github.com/cloudflare/wrangler-legacy/issues/1).

1. On the Slack sidebar, select **Slash Commands**.
2. Create your first slash command.

For this tutorial, you will use the command `/issue`. The request URL should be the `/lookup` path on your application URL: for example, if your application will be hosted at `https://myworkerurl.com`, the Request URL should be `https://myworkerurl.com/lookup`.

![You must create a Slash Command in Slack's dashboard and attach it to a Request URL](https://developers.cloudflare.com/_astro/create-slack-command.CBy2ieO7_Z1W4NaQ.webp) 

### Configure your GitHub Webhooks

Your Cloudflare Workers application will be able to handle incoming requests from Slack. It should also be able to receive events directly from GitHub. If a GitHub issue is created or updated, you can make use of GitHub webhooks to send that event to your Workers application and post a corresponding message in Slack.

To configure a webhook:

1. Go to your GitHub repository's **Settings** \> **Webhooks** \> **Add webhook**.

If you have a repository like `https://github.com/user/repo`, you can access the **Webhooks** page directly at `https://github.com/user/repo/settings/hooks`.

2. Set the Payload URL to the `/webhook` path on your Worker URL.

For example, if your Worker will be hosted at `https://myworkerurl.com`, the Payload URL should be `https://myworkerurl.com/webhook`.

3. In the **Content type** dropdown, select **application/json**.

The **Content type** for your payload can either be a URL-encoded payload (`application/x-www-form-urlencoded`) or JSON (`application/json`). For the purpose of this tutorial, and to make parsing the payload sent to your application simpler, select JSON.

4. In **Which events would you like to trigger this webhook?**, select **Let me select individual events**.

GitHub webhooks allow you to specify which events you would like to have sent to your webhook. By default, the webhook will send `push` events from your repository. For the purpose of this tutorial, you will choose **Let me select individual events**.

5. Select the **Issues** event type.

There are many different event types that can be enabled for your webhook. Selecting **Issues** will send every issue-related event to your webhook, including when issues are opened, edited, deleted, and more. If you would like to expand your Slack bot application in the future, you can select more of these events after the tutorial.

6. Select **Add webhook**.
![Create a GitHub Webhook in the GitHub dashboard](https://developers.cloudflare.com/_astro/new-github-webhook.DtHDy8MC_1V7hhX.webp) 

When your webhook is created, it will attempt to send a test payload to your application. Since your application is not actually deployed yet, leave the configuration as it is. You will later return to your repository to create, edit, and close some issues to ensure that the webhook is working once your application is deployed.

## Init

To initialize the project, use the command-line interface [C3 (create-cloudflare-cli) ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare).

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- slack-bot
```

```
yarn create cloudflare slack-bot
```

```
pnpm create cloudflare@latest slack-bot
```

Follow these steps to create a Hono project.

* For _What would you like to start with_?, select `Framework Starter`.
* For _Which development framework do you want to use?_, select `Hono`.
* For, _Do you want to deploy your application?_, select `No`.

Go to the `slack-bot` directory:

Terminal window

```

cd slack-bot


```

Open `src/index.ts` in an editor to find the following code.

TypeScript

```

import { Hono } from "hono";


type Bindings = {

  [key in keyof CloudflareBindings]: CloudflareBindings[key];

};


const app = new Hono<{ Bindings: Bindings }>();


app.get("/", (c) => {

  return c.text("Hello Hono!");

});


export default app;


```

This is a minimal application using Hono. A `GET` request to the path `/` returns a response with the text `Hello Hono!`; a request to any other path or method returns a `404 Not Found` response with status code 404.

To run the application on your local machine, execute the following command.

 npm  yarn  pnpm  bun 

```
npm run dev
```

```
yarn dev
```

```
pnpm dev
```

```
bun run dev
```

After the server has started, access `http://localhost:8787` in your browser to see the message.

Hono helps you to create your Workers application easily and quickly.

## Build

Now, let's create a Slack bot on Cloudflare Workers.

### Separating files

You can create your application in several files instead of writing all endpoints and functions in one file. With Hono, you can mount child applications onto the parent application using the `app.route()` function.

For example, imagine the following Web API application.

TypeScript

```

import { Hono } from "hono";


const app = new Hono();


app.get("/posts", (c) => c.text("Posts!"));

app.post("/posts", (c) => c.text("Created!", 201));


export default app;


```

You can add the routes under `/api/v1`.

TypeScript

```

import { Hono } from "hono";

import api from "./api";


const app = new Hono();


app.route("/api/v1", api);


export default app;


```

It will return `Posts!` when accessing `GET /api/v1/posts`.

The Slack bot will have two child applications, each called a route:

1. `lookup` route will take requests from Slack (sent when a user uses the `/issue` command), and look up the corresponding issue using the GitHub API. This application will be added to `/lookup` in the main application.
2. `webhook` route will be called when an issue changes on GitHub, via a configured webhook. This application will be added to `/webhook` in the main application.

Create the route files in a directory named `routes`.

Create new folders and files

```

mkdir -p src/routes

touch src/routes/lookup.ts

touch src/routes/webhook.ts


```

Then update the main application.

TypeScript

```

import { Hono } from "hono";

import lookup from "./routes/lookup";

import webhook from "./routes/webhook";


const app = new Hono();


app.route("/lookup", lookup);

app.route("/webhook", webhook);


export default app;


```

### Defining TypeScript types

Before implementing the actual functions, you need to define the TypeScript types you will use in this project. Create a new file in the application at `src/types.ts` and write the code. `Bindings` is a type that describes the Cloudflare Workers environment variables. `Issue` is a type for a GitHub issue and `User` is a type for a GitHub user. You will need these later.

TypeScript

```

export type Bindings = {

  SLACK_WEBHOOK_URL: string;

};


export type Issue = {

  html_url: string;

  title: string;

  body: string;

  state: string;

  created_at: string;

  number: number;

  user: User;

};


type User = {

  html_url: string;

  login: string;

  avatar_url: string;

};


```

### Creating the lookup route

Start creating the lookup route in `src/routes/lookup.ts`.

TypeScript

```

import { Hono } from "hono";


const app = new Hono();


export default app;


```

To understand how you should design this function, you need to understand how Slack slash commands send data to URLs.

According to the [documentation for Slack slash commands ↗](https://api.slack.com/interactivity/slash-commands), Slack sends an HTTP POST request to your specified URL, with an `application/x-www-form-urlencoded` content type. For example, if someone were to type `/issue cloudflare/wrangler#1`, you could expect a data payload in the format:

```

token=gIkuvaNzQIHg97ATvDxqgjtO

&team_id=T0001

&team_domain=example

&enterprise_id=E0001

&enterprise_name=Globular%20Construct%20Inc

&channel_id=C2147483705

&channel_name=test

&user_id=U2147483697

&user_name=Steve

&command=/issue

&text=cloudflare/wrangler#1

&response_url=https://hooks.slack.com/commands/1234/5678

&trigger_id=13345224609.738474920.8088930838d88f008e0


```

Given this payload body, you need to parse it, and get the value of the `text` key. With that `text`, for example, `cloudflare/wrangler#1`, you can parse that string into known pieces of data (`owner`, `repo`, and `issue_number`), and use them to make a request to GitHub’s API to retrieve the issue data.
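Hono will handle this decoding for you, but as an illustration, the same extraction can be done with the standard `URLSearchParams` API; the body below is a trimmed, hypothetical example of the payload shown above:

```javascript
// Decoding a form-encoded slash-command body by hand with URLSearchParams.
// On the wire, "/" and "#" arrive percent-encoded as %2F and %23.
const body = "command=%2Fissue&text=cloudflare%2Fwrangler%231&user_name=Steve";
const params = new URLSearchParams(body);
const text = params.get("text");
// text === "cloudflare/wrangler#1"
```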

With Slack slash commands, you can respond to a slash command by returning structured data as the response to the incoming slash command. In this case, you should use the response from GitHub’s API to present a formatted version of the GitHub issue, including pieces of data like the title of the issue, who created it, and the date it was created. Slack’s new [Block Kit ↗](https://api.slack.com/block-kit) framework will allow you to return a detailed message response, by constructing text and image blocks with the data from GitHub’s API.

#### Parsing slash commands

To begin, the `lookup` route should parse the messages coming from Slack. As previously mentioned, the Slack API sends an HTTP POST in URL-encoded format. You can get the variable `text` by parsing the body with `c.req.parseBody()`.

TypeScript

```

import { Hono } from "hono";


const app = new Hono();


app.post("/", async (c) => {

  const { text } = await c.req.parseBody();

  if (typeof text !== "string") {

    return c.notFound();

  }

});


export default app;


```

Given a `text` variable, that contains text like `cloudflare/wrangler#1`, you should parse that text, and get the individual parts from it for use with GitHub’s API: `owner`, `repo`, and `issue_number`.

To do this, create a new file in your application, at `src/utils/github.ts`. This file will contain a number of “utility” functions for working with GitHub’s API. The first of these will be a string parser, called `parseGhIssueString`:

TypeScript

```

const ghIssueRegex =

  /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)\#(?<issue_number>\d*)/;


export const parseGhIssueString = (text: string) => {

  const match = text.match(ghIssueRegex);

  return match ? (match.groups ?? {}) : {};

};


```

`parseGhIssueString` takes in a `text` input, matches it against `ghIssueRegex`, and if a match is found, returns the `groups` object from that match, making use of the `owner`, `repo`, and `issue_number` capture groups defined in the regex. By exporting this function from `src/utils/github.ts`, you can make use of it back in `src/routes/lookup.ts`:

TypeScript

```

import { Hono } from "hono";

import { parseGhIssueString } from "../utils/github";


const app = new Hono();


app.post("/", async (c) => {

  const { text } = await c.req.parseBody();

  if (typeof text !== "string") {

    return c.notFound();

  }


  const { owner, repo, issue_number } = parseGhIssueString(text);

});


export default app;


```
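To see what the parser produces, here is the same regex and function (reproduced from `src/utils/github.ts` in plain JavaScript) run against a typical slash-command string:

```javascript
// The parser from src/utils/github.ts, reproduced in plain JavaScript.
const ghIssueRegex =
  /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)\#(?<issue_number>\d*)/;

const parseGhIssueString = (text) => {
  const match = text.match(ghIssueRegex);
  return match ? (match.groups ?? {}) : {};
};

const parts = parseGhIssueString("cloudflare/wrangler#1");
// parts is { owner: "cloudflare", repo: "wrangler", issue_number: "1" }
```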

#### Making requests to GitHub’s API

With this data, you can make your first API lookup to GitHub. Again, make a new function in `src/utils/github.ts`, to make a `fetch` request to the GitHub API for the issue data:

TypeScript

```

const ghIssueRegex =

  /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)\#(?<issue_number>\d*)/;


export const parseGhIssueString = (text: string) => {

  const match = text.match(ghIssueRegex);

  return match ? (match.groups ?? {}) : {};

};


export const fetchGithubIssue = (

  owner: string,

  repo: string,

  issue_number: string,

) => {

  const url = `https://api.github.com/repos/${owner}/${repo}/issues/${issue_number}`;

  const headers = { "User-Agent": "simple-worker-slack-bot" };

  return fetch(url, { headers });

};


```

Back in `src/routes/lookup.ts`, use `fetchGithubIssue` to make a request to GitHub’s API, and parse the response:

TypeScript

```

import { Hono } from "hono";

import { fetchGithubIssue, parseGhIssueString } from "../utils/github";

import { Issue } from "../types";


const app = new Hono();


app.post("/", async (c) => {

  const { text } = await c.req.parseBody();

  if (typeof text !== "string") {

    return c.notFound();

  }


  const { owner, repo, issue_number } = parseGhIssueString(text);

  const response = await fetchGithubIssue(owner, repo, issue_number);

  const issue = await response.json<Issue>();

});


export default app;


```

#### Constructing a Slack message

After you have received a response back from GitHub’s API, the final step is to construct a Slack message with the issue data, and return it to the user. The final result will look something like this:

![A successful Slack Message will have the components listed below](https://developers.cloudflare.com/_astro/issue-slack-message.8mahQ-Ir_Rfht0.webp) 

You can see four different pieces in the above screenshot:

1. The first line (bolded) links to the issue, and shows the issue title
2. The following lines (including code snippets) are the issue body
3. The last line of text shows the issue status, the issue creator (with a link to the user’s GitHub profile), and the creation date for the issue
4. The profile picture of the issue creator, on the right-hand side

The previously mentioned [Block Kit ↗](https://api.slack.com/block-kit) framework will help take the issue data (in the structure lined out in [GitHub’s REST API documentation ↗](https://developer.github.com/v3/issues/)) and format it into something like the above screenshot.

Create another file, `src/utils/slack.ts`, to contain the function `constructGhIssueSlackMessage`, a function for taking issue data, and turning it into a collection of blocks. Blocks are JavaScript objects that Slack will use to format the message:

TypeScript

```

import { Issue } from "../types";


export const constructGhIssueSlackMessage = (

  issue: Issue,

  issue_string: string,

  prefix_text?: string,

) => {

  const issue_link = `<${issue.html_url}|${issue_string}>`;

  const user_link = `<${issue.user.html_url}|${issue.user.login}>`;

  const date = new Date(Date.parse(issue.created_at)).toLocaleDateString();


  const text_lines = [

    prefix_text,

    `*${issue.title} - ${issue_link}*`,

    issue.body,

    `*${issue.state}* - Created by ${user_link} on ${date}`,

  ];

};


```

Slack messages accept a variant of Markdown, which supports bold text via asterisks (`*bolded text*`), and links in the format `<https://yoururl.com|Display Text>`.

Given that format, construct `issue_link`, which takes the `html_url` property from the GitHub API `issue` data (in format `https://github.com/cloudflare/wrangler-legacy/issues/1`), and the `issue_string` sent from the Slack slash command, and combines them into a clickable link in the Slack message.

`user_link` is similar, using `issue.user.html_url` (in the format `https://github.com/signalnerve`, a GitHub user) and the user’s GitHub username (`issue.user.login`), to construct a clickable link to the GitHub user.

Finally, parse `issue.created_at`, an ISO 8601 string, convert it into an instance of a JavaScript `Date`, and turn it into a formatted string, in the format `MM/DD/YY`.
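As an illustration of these formats, here is a small sketch using hypothetical values (the real values come from the GitHub API response):

```typescript
// Hypothetical example values; real data comes from the GitHub API response.
const html_url = "https://github.com/cloudflare/wrangler-legacy/issues/1";
const issue_string = "cloudflare/wrangler-legacy#1";

// Slack mrkdwn link format: <url|display text>
const issue_link = `<${html_url}|${issue_string}>`;
// "<https://github.com/cloudflare/wrangler-legacy/issues/1|cloudflare/wrangler-legacy#1>"

// Parse an ISO 8601 timestamp and format it as a locale-dependent date string
const created_at = "2023-01-15T10:30:00Z";
const date = new Date(Date.parse(created_at)).toLocaleDateString("en-US");
// e.g. "1/15/2023" (the exact output depends on the runtime's locale and timezone)
```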

With those variables in place, `text_lines` is an array of each line of text for the Slack message. The first line is the **issue title** and the **issue link**, the second is the **issue body**, and the final line is the **issue state** (for example, open or closed), the **user link**, and the **creation date**.

With the text constructed, you can finally construct your Slack message, returning an array of blocks for Slack’s [Block Kit ↗](https://api.slack.com/block-kit). In this case, there is only one block: a [section ↗](https://api.slack.com/reference/messaging/blocks#section) block with Markdown text, and an accessory image of the user who created the issue. Return that single block inside of an array to complete the `constructGhIssueSlackMessage` function:

TypeScript

```

import { Issue } from "../types";

export const constructGhIssueSlackMessage = (
  issue: Issue,
  issue_string: string,
  prefix_text?: string,
) => {
  const issue_link = `<${issue.html_url}|${issue_string}>`;
  const user_link = `<${issue.user.html_url}|${issue.user.login}>`;
  const date = new Date(Date.parse(issue.created_at)).toLocaleDateString();

  const text_lines = [
    prefix_text,
    `*${issue.title} - ${issue_link}*`,
    issue.body,
    `*${issue.state}* - Created by ${user_link} on ${date}`,
  ];

  return [
    {
      type: "section",
      text: {
        type: "mrkdwn",
        text: text_lines.join("\n"),
      },
      accessory: {
        type: "image",
        image_url: issue.user.avatar_url,
        alt_text: issue.user.login,
      },
    },
  ];
};

```

#### Finishing the lookup route

In `src/handlers/lookup.ts`, use `constructGhIssueSlackMessage` to construct `blocks`, and return them as a new response with `c.json()` when the slash command is called:

TypeScript

```

import { Hono } from "hono";
import { fetchGithubIssue, parseGhIssueString } from "../utils/github";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Issue } from "../types";

const app = new Hono();

app.post("/", async (c) => {
  const { text } = await c.req.parseBody();
  if (typeof text !== "string") {
    return c.notFound();
  }

  const { owner, repo, issue_number } = parseGhIssueString(text);
  const response = await fetchGithubIssue(owner, repo, issue_number);
  const issue = await response.json<Issue>();
  const blocks = constructGhIssueSlackMessage(issue, text);

  return c.json({
    blocks,
    response_type: "in_channel",
  });
});

export default app;

```

One additional parameter passed into the response is `response_type`. By default, responses to slash commands are ephemeral, meaning that they are only seen by the user who writes the slash command. Passing a `response_type` of `in_channel`, as seen above, will cause the response to appear for all users in the channel.

If you would like the messages to remain private, remove the `response_type` line. This will cause `response_type` to default to `ephemeral`.
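Put together, the JSON body returned to Slack has roughly this shape (values are illustrative, not actual output):

```typescript
// Illustrative shape of the slash command response body.
// The blocks array comes from constructGhIssueSlackMessage.
const slashCommandResponse = {
  blocks: [
    {
      type: "section",
      text: {
        type: "mrkdwn",
        text: "*Example issue - <https://github.com/owner/repo/issues/1|owner/repo#1>*",
      },
    },
  ],
  // Remove this field for the default, ephemeral behavior.
  response_type: "in_channel",
};
```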

#### Handling errors

The `lookup` route is almost complete, but a number of errors can occur in the route, such as when parsing the body from Slack, fetching the issue from GitHub, or constructing the Slack message itself. Hono applications handle uncaught errors with a default response, but you can customize the response returned in the following way.

TypeScript

```

import { Hono } from "hono";
import { fetchGithubIssue, parseGhIssueString } from "../utils/github";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Issue } from "../types";

const app = new Hono();

app.post("/", async (c) => {
  const { text } = await c.req.parseBody();
  if (typeof text !== "string") {
    return c.notFound();
  }

  const { owner, repo, issue_number } = parseGhIssueString(text);
  const response = await fetchGithubIssue(owner, repo, issue_number);
  const issue = await response.json<Issue>();
  const blocks = constructGhIssueSlackMessage(issue, text);

  return c.json({
    blocks,
    response_type: "in_channel",
  });
});

app.onError((_e, c) => {
  return c.text(
    "Uh-oh! We couldn't find the issue you provided. " +
      "We can only find public issues in the following format: `owner/repo#issue_number`.",
  );
});

export default app;

```

### Creating the webhook route

You are now halfway through implementing the routes for your Workers application. In implementing the next route, `src/handlers/webhook.ts`, you will re-use a lot of the code that you have already written for the lookup route.

At the beginning of this tutorial, you configured a GitHub webhook to track any events related to issues in your repository. When an issue is opened, for example, the function corresponding to the path `/webhook` on your Workers application should take the data sent to it from GitHub, and post a new message in the configured Slack channel.

In `src/handlers/webhook.ts`, define a blank Hono application. The difference from the `lookup` route is that `Bindings` is passed as a type parameter to `new Hono()`. This is necessary to give the appropriate TypeScript type to `SLACK_WEBHOOK_URL`, which will be used later.

TypeScript

```

import { Hono } from "hono";
import { Bindings } from "../types";

const app = new Hono<{ Bindings: Bindings }>();

export default app;

```

Much like with the `lookup` route, you will need to parse the incoming payload inside of `request`, get the relevant issue data from it (refer to [the GitHub API documentation on IssueEvent ↗](https://developer.github.com/v3/activity/events/types/#issuesevent) for the full payload schema), and send a formatted message to Slack to indicate what has changed. The final version will look something like this:

![A successful Webhook Message example](https://developers.cloudflare.com/_astro/webhook_example.EQJW9q2u_ZBVQ2l.webp) 

Compare this message format to the format returned when a user uses the `/issue` slash command. You will see that there is only one actual difference between the two: the addition of action text on the first line, in the format `An issue was $action:`. This action, which is sent as part of the `IssueEvent` from GitHub, will be used as you construct a very familiar-looking collection of blocks using Slack’s Block Kit.

#### Parsing event data

To start filling out the route, parse the JSON request body into an object and construct some helper variables:

TypeScript

```

import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";

const app = new Hono();

app.post("/", async (c) => {
  const { action, issue, repository } = await c.req.json();
  const prefix_text = `An issue was ${action}:`;
  const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
});

export default app;

```

An `IssueEvent`, the payload sent from GitHub as part of your webhook configuration, includes an `action` (what happened to the issue: for example, it was opened, closed, locked, etc.), the `issue` itself, and the `repository`, among other things.

Use `c.req.json()` to convert the request body from JSON into a plain JavaScript object. Use ES6 destructuring to set `action`, `issue` and `repository` as variables you can use in your code. `prefix_text` is a string indicating what happened to the issue, and `issue_string` is the familiar `owner/repo#issue_number` string that you have seen before: while the `lookup` route used the text sent from Slack directly to fill in `issue_string`, here you construct it from the data in the JSON payload.
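A minimal sketch of that destructuring, using a hypothetical payload (a real `IssueEvent` has many more fields):

```typescript
// Hypothetical IssueEvent payload fields; real payloads come from GitHub.
const payload = {
  action: "opened",
  issue: { number: 1 },
  repository: { name: "wrangler-legacy", owner: { login: "cloudflare" } },
};

// Destructure the fields used by the webhook route
const { action, issue, repository } = payload;
const prefix_text = `An issue was ${action}:`;
const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
// prefix_text  === "An issue was opened:"
// issue_string === "cloudflare/wrangler-legacy#1"
```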

#### Constructing and sending a Slack message

The messages your Slack bot sends back to your Slack channel from the `lookup` and `webhook` routes are incredibly similar. Because of this, you can re-use the existing `constructGhIssueSlackMessage` to continue populating `src/handlers/webhook.ts`. Import the function from `src/utils/slack.ts`, and pass the issue data into it:

TypeScript

```

import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";

const app = new Hono();

app.post("/", async (c) => {
  const { action, issue, repository } = await c.req.json();
  const prefix_text = `An issue was ${action}:`;
  const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
  const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text);
});

export default app;

```

Importantly, the usage of `constructGhIssueSlackMessage` in this handler adds one additional argument to the function, `prefix_text`. Update the corresponding function inside of `src/utils/slack.ts`, adding `prefix_text` to the collection of `text_lines` in the message block, if it has been passed in to the function.

Add a utility function, `compact`, which takes an array and filters out any `null` or `undefined` values from it. This function will be used to remove `prefix_text` from `text_lines` if it has not actually been passed in to the function, such as when called from `src/handlers/lookup.ts`. The full (and final) version of `src/utils/slack.ts` looks like this:

TypeScript

```

import { Issue } from "../types";

const compact = (array: unknown[]) => array.filter((el) => el);

export const constructGhIssueSlackMessage = (
  issue: Issue,
  issue_string: string,
  prefix_text?: string,
) => {
  const issue_link = `<${issue.html_url}|${issue_string}>`;
  const user_link = `<${issue.user.html_url}|${issue.user.login}>`;
  const date = new Date(Date.parse(issue.created_at)).toLocaleDateString();

  const text_lines = [
    prefix_text,
    `*${issue.title} - ${issue_link}*`,
    issue.body,
    `*${issue.state}* - Created by ${user_link} on ${date}`,
  ];

  return [
    {
      type: "section",
      text: {
        type: "mrkdwn",
        text: compact(text_lines).join("\n"),
      },
      accessory: {
        type: "image",
        image_url: issue.user.avatar_url,
        alt_text: issue.user.login,
      },
    },
  ];
};

```
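To see how `compact` behaves, here is a quick sketch. Note that because it keeps only truthy values, it filters out any falsy value, so an empty-string issue body would also be dropped:

```typescript
// Same helper as in src/utils/slack.ts
const compact = (array: unknown[]) => array.filter((el) => el);

// From the webhook route, prefix_text is present, so all lines are kept:
const withPrefix = compact(["An issue was opened:", "*Title*", "Body text"]);
// withPrefix.length === 3

// From the lookup route, prefix_text is undefined and gets dropped:
const withoutPrefix = compact([undefined, "*Title*", "Body text"]);
// withoutPrefix === ["*Title*", "Body text"]
```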

Back in `src/handlers/webhook.ts`, the `blocks` that are returned from `constructGhIssueSlackMessage` become the body in a new `fetch` request, an HTTP POST request to a Slack webhook URL. Once that request completes, return a response with status code `200`, and the body text `"OK"`:

TypeScript

```

import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Bindings } from "../types";

const app = new Hono<{ Bindings: Bindings }>();

app.post("/", async (c) => {
  const { action, issue, repository } = await c.req.json();
  const prefix_text = `An issue was ${action}:`;
  const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
  const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text);

  const fetchResponse = await fetch(c.env.SLACK_WEBHOOK_URL, {
    body: JSON.stringify({ blocks }),
    method: "POST",
    headers: { "Content-Type": "application/json" },
  });

  return c.text("OK");
});

export default app;

```

The constant `SLACK_WEBHOOK_URL` represents the Slack Webhook URL that you created all the way back in the [Incoming Webhook](https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/#incoming-webhook) section of this tutorial.

Warning

Since this webhook allows developers to post directly to your Slack channel, keep it secret.

To set this value for use in your codebase, use the [wrangler secret](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) command:

Set the SLACK\_WEBHOOK\_URL secret

```

npx wrangler secret put SLACK_WEBHOOK_URL

```

```

Enter a secret value: https://hooks.slack.com/services/abc123

```

#### Handling errors

Similarly to the `lookup` route, the `webhook` route should include some basic error handling. Unlike `lookup`, which sends responses directly back into Slack, if something goes wrong with your webhook, it may be useful to return an error response to GitHub.

To do this, write a custom error handler with `app.onError()` and return a new response with a status code of `500`. The final version of `src/handlers/webhook.ts` looks like this:

TypeScript

```

import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Bindings } from "../types";

const app = new Hono<{ Bindings: Bindings }>();

app.post("/", async (c) => {
  const { action, issue, repository } = await c.req.json();
  const prefix_text = `An issue was ${action}:`;
  const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
  const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text);

  const fetchResponse = await fetch(c.env.SLACK_WEBHOOK_URL, {
    body: JSON.stringify({ blocks }),
    method: "POST",
    headers: { "Content-Type": "application/json" },
  });

  if (!fetchResponse.ok) throw new Error();

  return c.text("OK");
});

app.onError((_e, c) => {
  return c.json(
    {
      message: "Unable to handle webhook",
    },
    500,
  );
});

export default app;

```

## Deploy

By completing the preceding steps, you have finished writing the code for your Slack bot. You can now deploy your application.

Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, run the following command which will build and deploy your code.

 npm  yarn  pnpm  bun 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

```
bun run deploy
```

Deploying your Workers application should now cause issue updates to start appearing in your Slack channel, as the GitHub webhook can now successfully reach your Workers webhook route:

![When you create new issue, a Slackbot will now appear in your Slack channel](https://developers.cloudflare.com/images/workers/tutorials/slackbot/create-new-issue.gif) 

## Related resources

In this tutorial, you built and deployed a Cloudflare Workers application that can respond to GitHub webhook events, and allow GitHub API lookups within Slack. If you would like to review the full source code for this application, you can find the repository [on GitHub ↗](https://github.com/yusukebe/workers-slack-bot).

If you want to get started building your own projects, review the existing list of [Quickstart templates](https://developers.cloudflare.com/workers/get-started/quickstarts/).


---

---
title: Connect to and query your Turso database using Workers
description: This tutorial will guide you on how to build globally distributed applications with Cloudflare Workers, and Turso, an edge-hosted distributed database based on libSQL.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ TypeScript ](https://developers.cloudflare.com/search/?tags=TypeScript)[ SQL ](https://developers.cloudflare.com/search/?tags=SQL)

# Connect to and query your Turso database using Workers

**Last reviewed:**  about 3 years ago 

This tutorial will guide you on how to build globally distributed applications with Cloudflare Workers, and [Turso ↗](https://chiselstrike.com/), an edge-hosted distributed database based on libSQL. By using Workers and Turso, you can create applications that are close to your end users without having to maintain or operate infrastructure in tens or hundreds of regions.

Note

For a more seamless experience, refer to the [Turso Database Integration guide](https://developers.cloudflare.com/workers/databases/third-party-integrations/turso/). The Turso Database Integration will guide you through connecting your Worker to a Turso database by securely configuring your database credentials as [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) in your Worker.

## Prerequisites

Before continuing with this tutorial, you should have:

* Successfully [created your first Cloudflare Worker](https://developers.cloudflare.com/workers/get-started/guide/) and/or have deployed a Cloudflare Worker before.
* Installed [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), a command-line tool for building Cloudflare Workers.
* A [GitHub account ↗](https://github.com/), required for authenticating to Turso.
* A basic familiarity with installing and using command-line interface (CLI) applications.

## Install the Turso CLI

You will need the Turso CLI to create and populate a database. Run either of the following two commands in your terminal to install the Turso CLI:

Terminal window

```

# On macOS or Linux with Homebrew
brew install chiselstrike/tap/turso

# Manual scripted installation
curl -sSfL https://get.tur.so/install.sh | bash

```

After you have installed the Turso CLI, verify that the CLI is in your shell path:

Terminal window

```

turso --version


```

```

# This should output your current Turso CLI version (your installed version may be higher):

turso version v0.51.0


```

## Create and populate a database

Before you create your first Turso database, you need to log in to the CLI using your GitHub account by running:

Terminal window

```

turso auth login


```

```

Waiting for authentication...

✔  Success! Logged in as <your GitHub username>


```

`turso auth login` will open a browser window and ask you to sign into your GitHub account, if you are not already logged in. The first time you do this, you will need to give the Turso application permission to use your account. Select **Approve** to grant Turso the permissions needed.

After you have authenticated, you can create a database by running `turso db create <DATABASE_NAME>`. Turso will automatically choose a location closest to you.

Terminal window

```

turso db create my-db


```

```

# Example:

[===>                ]

Creating database my-db in Los Angeles, California (US) (lax)

# Once succeeded:

Created database my-db in Los Angeles, California (US) (lax) in 34 seconds.


```

With your first database created, you can now connect to it directly and execute SQL against it:

Terminal window

```

turso db shell my-db


```

To get started with your database, create and define a schema for your first table. In this example, you will create an `example_users` table with one column, `email` (of type `text`), and then populate it with one email address.

In the shell you just opened, paste in the following SQL:

```

create table example_users (email text);

insert into example_users values ('foo@bar.com');


```

If the SQL statements succeeded, there will be no output. Note that the trailing semi-colons (`;`) are necessary to terminate each SQL statement.

Type `.quit` to exit the shell.

## Use Wrangler to create a Workers project

The Workers command-line interface, [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), allows you to create, locally develop, and deploy your Workers projects.

To create a new Workers project (named `worker-turso-ts`), run the following:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- worker-turso-ts
```

```
yarn create cloudflare worker-turso-ts
```

```
pnpm create cloudflare@latest worker-turso-ts
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

To start developing your Worker, `cd` into your new project directory:

Terminal window

```

cd worker-turso-ts


```

In your project directory, you now have the following files:

* `wrangler.json` / `wrangler.toml`: [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)
* `src/index.ts`: A minimal Hello World Worker written in TypeScript
* `package.json`: A minimal Node dependencies configuration file.
* `tsconfig.json`: TypeScript configuration that includes Workers types. Only generated if indicated.

For this tutorial, only the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) and `src/index.ts` file are relevant. You will not need to edit the other files, and they should be left as is.

## Configure your Worker for your Turso database

The Turso client library requires two pieces of information to make a connection:

1. `LIBSQL_DB_URL` \- The connection string for your Turso database.
2. `LIBSQL_DB_AUTH_TOKEN` \- The authentication token for your Turso database. This should be kept a secret, and not committed to source code.

To get the URL for your database, run the following Turso CLI command, and copy the result:

Terminal window

```

turso db show my-db --url


```

```

libsql://my-db-<your-github-username>.turso.io


```

Open the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) in your editor and at the bottom of the file, create a new `[vars]` section representing the [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) for your project:

* [  wrangler.jsonc ](#tab-panel-7756)
* [  wrangler.toml ](#tab-panel-7757)

```

{
  "vars": {
    "LIBSQL_DB_URL": "paste-your-url-here"
  }
}

```

```

[vars]
LIBSQL_DB_URL = "paste-your-url-here"

```

Save the changes to the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

Next, create a long-lived authentication token for your Worker to use when connecting to your database. Run the following Turso CLI command, and copy the output to your clipboard:

Terminal window

```

turso db tokens create my-db -e none

# Will output a long text string (an encoded JSON Web Token)


```

To keep this token secret:

1. You will create a `.dev.vars` file for local development. Do not commit this file to source control. If you are using Git, add `.dev.vars` to your `.gitignore` file.
2. You will also create a [secret](https://developers.cloudflare.com/workers/configuration/secrets/) to keep your authentication token confidential.

First, create a new file called `.dev.vars` with the following structure. Paste your authentication token in the quotation marks:

```

LIBSQL_DB_AUTH_TOKEN="<YOUR_AUTH_TOKEN>"


```

Save your changes to `.dev.vars`. Next, store the authentication token as a secret for your production Worker to reference. Run the following `wrangler secret` command to create a Secret with your token:

Terminal window

```

# Ensure you specify the secret name exactly: your Worker will need to reference it later.

npx wrangler secret put LIBSQL_DB_AUTH_TOKEN


```

```

? Enter a secret value: › <paste your token here>


```

Press `Enter` to save the token as a secret. Both `LIBSQL_DB_URL` and `LIBSQL_DB_AUTH_TOKEN` will be available in your Worker's environment at runtime.

## Install extra libraries

Install the Turso client library and a router:

 npm  yarn  pnpm  bun 

```
npm i @libsql/client itty-router
```

```
yarn add @libsql/client itty-router
```

```
pnpm add @libsql/client itty-router
```

```
bun add @libsql/client itty-router
```

The `@libsql/client` library allows you to query a Turso database. The `itty-router` library is a lightweight router you will use to help handle incoming requests to the Worker.

## Write your Worker

You will now write a Worker that will:

1. Handle an HTTP request.
2. Route it to a specific handler to either list all users in our database or add a new user.
3. Return the results and/or success.

Open `src/index.ts` and delete the existing template. Copy the below code exactly as is and paste it into the file:

TypeScript

```

import { Client as LibsqlClient, createClient } from "@libsql/client/web";
import { Router, RouterType } from "itty-router";

export interface Env {
  // The environment variable containing the URL for your Turso database.
  LIBSQL_DB_URL?: string;
  // The secret that contains the authentication token for your Turso database.
  LIBSQL_DB_AUTH_TOKEN?: string;

  // These objects are created before first use, then stashed here
  // for future use
  router?: RouterType;
}

export default {
  async fetch(request, env): Promise<Response> {
    if (env.router === undefined) {
      env.router = buildRouter(env);
    }

    return env.router.fetch(request);
  },
} satisfies ExportedHandler<Env>;

function buildLibsqlClient(env: Env): LibsqlClient {
  const url = env.LIBSQL_DB_URL?.trim();
  if (url === undefined) {
    throw new Error("LIBSQL_DB_URL env var is not defined");
  }

  const authToken = env.LIBSQL_DB_AUTH_TOKEN?.trim();
  if (authToken === undefined) {
    throw new Error("LIBSQL_DB_AUTH_TOKEN env var is not defined");
  }

  return createClient({ url, authToken });
}

function buildRouter(env: Env): RouterType {
  const router = Router();

  router.get("/users", async () => {
    const client = buildLibsqlClient(env);
    const rs = await client.execute("select * from example_users");
    return Response.json(rs);
  });

  router.get("/add-user", async (request) => {
    const client = buildLibsqlClient(env);
    const email = request.query.email;
    if (email === undefined) {
      return new Response("Missing email", { status: 400 });
    }
    if (typeof email !== "string") {
      return new Response("email must be a single string", { status: 400 });
    }
    if (email.length === 0) {
      return new Response("email length must be > 0", { status: 400 });
    }

    try {
      await client.execute({
        sql: "insert into example_users values (?)",
        args: [email],
      });
    } catch (e) {
      console.error(e);
      return new Response("database insert failed");
    }

    return new Response("Added");
  });

  router.all("*", () => new Response("Not Found.", { status: 404 }));

  return router;
}

```

Save your `src/index.ts` file with your changes.

Note:

* The libSQL client must be imported from `@libsql/client/web` exactly as shown when working with Cloudflare Workers. The non-web import will not work in the Workers environment.
* The `Env` interface contains the environment variable and secret you defined earlier.
* The `Env` interface also caches the libSQL client object and router, which are created on the first request to the Worker.
* The `/users` route fetches all rows from the `example_users` table you created in the Turso shell. It simply serializes the `ResultSet` object as JSON directly to the caller.
* The `/add-user` route inserts a new row using a value provided in the query string.

With your environment configured and your code ready, you will now test your Worker locally before you deploy.

## Run the Worker locally with Wrangler

To run a local instance of your Worker (entirely on your machine), run the following command:

Terminal window

```

npx wrangler dev


```

You should see output similar to the following:

```

Your worker has access to the following bindings:

- Vars:

  - LIBSQL_DB_URL: "your-url"

⎔ Starting a local server...

╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮

│ [b] open a browser, [d] open Devtools, [l] turn off local mode, [c] clear console, [x] to exit                                                                    │

╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Debugger listening on ws://127.0.0.1:61918/1064babd-bc9d-4bed-b171-b35dab3b7680

For help, see: https://nodejs.org/en/docs/inspector

Debugger attached.

[mf:inf] Worker reloaded! (40.25KiB)

[mf:inf] Listening on 0.0.0.0:8787

[mf:inf] - http://127.0.0.1:8787

[mf:inf] - http://192.168.1.136:8787

[mf:inf] Updated `Request.cf` object cache!


```

The localhost address — the one with `127.0.0.1` in it — is a web-server running locally on your machine.

Connect to it and validate your Worker returns the email address you inserted when you created your `example_users` table by visiting the `/users` route in your browser: [http://127.0.0.1:8787/users ↗](http://127.0.0.1:8787/users).

You should see JSON similar to the following containing the data from the `example_users` table:

```

{

  "columns": ["email"],

  "rows": [{ "email": "foo@bar.com" }],

  "rowsAffected": 0

}


```

Warning

If you see an error instead of a list of users, double check that:

* You have entered the correct value for your `LIBSQL_DB_URL` in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* You have set a secret called `LIBSQL_DB_AUTH_TOKEN` with your database authentication token.

Both of these need to be present and match the variable names in your Worker's code.

Test the `/add-user` route and pass it an email address to insert: [http://127.0.0.1:8787/add-user?email=test@test.com ↗](http://127.0.0.1:8787/add-user?email=test@test.com)

You should see the text `Added`. If you load the first URL with the `/users` route again ([http://127.0.0.1:8787/users ↗](http://127.0.0.1:8787/users)), it will show the newly added row. You can repeat this as many times as you like. Note that due to its design, your application will not stop you from adding duplicate email addresses.

Quit Wrangler by typing `q` into the shell where it was started.

## Deploy to Cloudflare

After you have validated that your Worker can connect to your Turso database, deploy your Worker. Run the following Wrangler command to deploy your Worker to the Cloudflare global network:

Terminal window

```

npx wrangler deploy


```

The first time you run this command, it will launch a browser, ask you to sign in with your Cloudflare account, and grant permissions to Wrangler.

The `deploy` command will output the following:

```
Your worker has access to the following bindings:
- Vars:
  - LIBSQL_DB_URL: "your-url"
...
Published worker-turso-ts (0.19 sec)
  https://worker-turso-ts.<your-Workers-subdomain>.workers.dev
Current Deployment ID: f9e6b48f-5aac-40bd-8f44-8a40be2212ff
```

You have now deployed a Worker that can connect to your Turso database, query it, and insert new data.

## Optional: Clean up

To clean up the resources you created as part of this tutorial:

* If you do not want to keep this Worker, run `npx wrangler delete worker-turso-ts` to delete the deployed Worker.
* You can also delete your Turso database via `turso db destroy my-db`.

## Related resources

* Find the [complete project source code on GitHub ↗](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-turso-ts/).
* Understand how to [debug your Cloudflare Worker](https://developers.cloudflare.com/workers/observability/).
* Join the [Cloudflare Developer Discord ↗](https://discord.cloudflare.com).
* Join the [ChiselStrike (Turso) Discord ↗](https://discord.com/invite/4B5D7hYwub).


---

---
title: Create a fine-tuned OpenAI model with R2
description: In this tutorial, you will use the OpenAI API and Cloudflare R2 to create a fine-tuned model.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Create a fine-tuned OpenAI model with R2

**Last reviewed:**  almost 2 years ago 

In this tutorial, you will use the [OpenAI ↗](https://openai.com) API and [Cloudflare R2](https://developers.cloudflare.com/r2) to create a [fine-tuned model ↗](https://platform.openai.com/docs/guides/fine-tuning).

This feature in OpenAI's API allows you to derive a custom model from OpenAI's various large language models based on a set of custom instructions and example answers. These instructions and example answers are written in a document, known as a fine-tune document. This document will be stored in R2 and dynamically provided to OpenAI's APIs when creating a new fine-tune model.

In order to use this feature, you will do the following tasks:

1. Upload a fine-tune document to R2.
2. Read the R2 file and upload it to OpenAI.
3. Create a new fine-tuned model based on the document.
![Demo](https://developers.cloudflare.com/_astro/finetune-example.Df8cOHyQ_1PgFLK.webp) 

To review the completed code for this application, refer to the [GitHub repository for this tutorial ↗](https://github.com/kristianfreeman/openai-finetune-r2-example).

## Prerequisites

Before you start, make sure you have:

* A Cloudflare account with access to R2\. If you do not have a Cloudflare account, [sign up ↗](https://dash.cloudflare.com/sign-up/workers-and-pages) before continuing. Then purchase R2 from your Cloudflare dashboard.
* An OpenAI API key.
* A fine-tune document, structured as [JSON Lines ↗](https://jsonlines.org/). Use the [example document ↗](https://github.com/kristianfreeman/openai-finetune-r2-example/blob/16ca53ca9c8589834abe317487eeedb8a24c7643/example%5Fdata.jsonl) in the source code.
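Each line of a JSON Lines fine-tune document is a standalone JSON object. For chat models, a line holds a `messages` array with system, user, and assistant turns. The sketch below shows how one such line could be built; the conversation content is invented for illustration.

```typescript
// One fine-tune training example in OpenAI's chat JSONL format.
// The conversation content here is invented for illustration.
const record = {
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is Cloudflare R2?" },
    { role: "assistant", content: "R2 is Cloudflare's object storage service." },
  ],
};

// Each record is serialized to a single line of the .jsonl file.
const jsonlLine = JSON.stringify(record);
```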

## 1\. Create a Worker application

First, use the `c3` CLI to create a new Cloudflare Workers project.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- finetune-chatgpt-model
```

```
yarn create cloudflare finetune-chatgpt-model
```

```
pnpm create cloudflare@latest finetune-chatgpt-model
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

The above options will create the "Hello World" TypeScript project.

Move into your newly created directory:

Terminal window

```sh
cd finetune-chatgpt-model
```

## 2\. Upload a fine-tune document to R2

Next, upload the fine-tune document to R2. R2 is Cloudflare's object storage service, which allows you to store and retrieve files from within your Workers application. You will use [Wrangler](https://developers.cloudflare.com/workers/wrangler) to create a new R2 bucket.

To create a new R2 bucket, use the [wrangler r2 bucket create](https://developers.cloudflare.com/workers/wrangler/commands/r2/#r2-bucket-create) command. Make sure you are logged in with your Cloudflare account; if not, log in via the [wrangler login](https://developers.cloudflare.com/workers/wrangler/commands/general/#login) command.

Terminal window

```sh
npx wrangler r2 bucket create <BUCKET_NAME>
```

Replace `<BUCKET_NAME>` with your desired bucket name. Note that bucket names must be lowercase and may contain only letters, numbers, and dashes.
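A quick way to sanity-check a name before running the command is a small pre-flight helper. The sketch below is hypothetical and assumes, in addition to the rules above, that names must start and end with a letter or number and be 3–63 characters long:

```typescript
// Hypothetical pre-flight check for an R2 bucket name: lowercase letters,
// numbers, and dashes only, starting and ending with a letter or number.
// The 3-63 character bounds are assumed for this sketch.
const BUCKET_NAME_RE = /^[a-z0-9][a-z0-9-]*[a-z0-9]$/;

function isValidBucketName(name: string): boolean {
  return name.length >= 3 && name.length <= 63 && BUCKET_NAME_RE.test(name);
}
```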

Next, upload a file using the [wrangler r2 object put](https://developers.cloudflare.com/workers/wrangler/commands/r2/#r2-object-put) command.

Terminal window

```sh
npx wrangler r2 object put <PATH> -f <FILE_NAME>
```

`<PATH>` is the combined bucket and file path of the file you want to upload -- for example, `fine-tune-ai/finetune.jsonl`, where `fine-tune-ai` is the bucket name. Replace `<FILE_NAME>` with the local filename of your fine-tune document.

## 3\. Bind your bucket to the Worker

A binding is how your Worker interacts with external resources such as the R2 bucket.

To bind the R2 bucket to your Worker, add the following to your Wrangler file. Update the binding property to a valid JavaScript variable identifier. Replace `<YOUR_BUCKET_NAME>` with the name of the bucket you created in [step 2](#2-upload-a-fine-tune-document-to-r2):

wrangler.jsonc

```jsonc
{
  "r2_buckets": [
    {
      "binding": "MY_BUCKET", // <~ valid JavaScript variable name
      "bucket_name": "<YOUR_BUCKET_NAME>"
    }
  ]
}
```

wrangler.toml

```toml
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "<YOUR_BUCKET_NAME>"
```

## 4\. Initialize your Worker application

You will use [Hono ↗](https://hono.dev/), a lightweight framework for building Cloudflare Workers applications. Hono provides an interface for defining routes and middleware functions. Inside your project directory, run the following command to install Hono:

 npm  yarn  pnpm  bun 

```
npm i hono
```

```
yarn add hono
```

```
pnpm add hono
```

```
bun add hono
```

You also need to install the [OpenAI Node API library ↗](https://www.npmjs.com/package/openai). This library provides convenient access to the OpenAI REST API in a Node.js project. To install the library, execute the following command:

 npm  yarn  pnpm  bun 

```
npm i openai
```

```
yarn add openai
```

```
pnpm add openai
```

```
bun add openai
```

Next, open the `src/index.ts` file and replace the default code with the below code. Replace `<MY_BUCKET>` with the binding name you set in your Wrangler file.

TypeScript

```ts
import { Context, Hono } from "hono";
import OpenAI from "openai";

type Bindings = {
  <MY_BUCKET>: R2Bucket;
  OPENAI_API_KEY: string;
};

type Variables = {
  openai: OpenAI;
};

const app = new Hono<{ Bindings: Bindings; Variables: Variables }>();

app.use("*", async (c, next) => {
  const openai = new OpenAI({
    apiKey: c.env.OPENAI_API_KEY,
  });
  c.set("openai", openai);
  await next();
});

app.onError((err, c) => {
  return c.text(err.message, 500);
});

export default app;
```

In the above code, you first import the required packages and define the types. Then, you initialize `app` as a new Hono instance. Using the `use` middleware function, you add the OpenAI API client to the context of all routes, which allows you to access the client from within any route handler. `onError()` defines an error handler that returns any error message as a plain-text response with status 500.

## 5\. Read R2 files and upload them to OpenAI

In this section, you will define the route and function responsible for handling file uploads.

In `createFile`, your Worker reads the file from R2 and converts it to a `File` object. Your Worker then uses the OpenAI API to upload the file and return the response.

The `GET /files` route listens for `GET` requests with a query parameter `file`, representing the filename of an uploaded fine-tune document in R2. The handler uses the `createFile` function to manage the file upload process.

Replace `<MY_BUCKET>` with the binding name you set in your Wrangler file.

TypeScript

```ts
// New import added at beginning of file
import { toFile } from "openai/uploads";

const createFile = async (c: Context, r2Object: R2ObjectBody) => {
  const openai: OpenAI = c.get("openai");

  const blob = await r2Object.blob();
  const file = await toFile(blob, r2Object.key);

  const uploadedFile = await openai.files.create({
    file,
    purpose: "fine-tune",
  });

  return uploadedFile;
};

app.get("/files", async (c) => {
  const fileQueryParam = c.req.query("file");
  if (!fileQueryParam) return c.text("Missing file query param", 400);

  const file = await c.env.<MY_BUCKET>.get(fileQueryParam);
  if (!file) return c.text("Couldn't find file", 400);

  const uploadedFile = await createFile(c, file);
  return c.json(uploadedFile);
});
```

## 6\. Create fine-tuned models

This section adds the `GET /models` route and the `createModel` function. `createModel` specifies the training file and base model, then initiates the fine-tuning job with OpenAI. The route handles incoming requests for creating a new fine-tuned model.

TypeScript

```ts
const createModel = async (c: Context, fileId: string) => {
  const openai: OpenAI = c.get("openai");

  const body = {
    training_file: fileId,
    model: "gpt-4o-mini",
  };

  return openai.fineTuning.jobs.create(body);
};

app.get("/models", async (c) => {
  const fileId = c.req.query("file_id");
  if (!fileId) return c.text("Missing file ID query param", 400);

  const model = await createModel(c, fileId);
  return c.json(model);
});
```

## 7\. List all fine-tune jobs

This section describes the `GET /jobs` route and the corresponding `getJobs` function. The function interacts with OpenAI's API to fetch a list of all fine-tuning jobs. The route provides an interface for retrieving this information.

TypeScript

```ts
const getJobs = async (c: Context) => {
  const openai: OpenAI = c.get("openai");
  const resp = await openai.fineTuning.jobs.list();
  return resp.data;
};

app.get("/jobs", async (c) => {
  const jobs = await getJobs(c);
  return c.json(jobs);
});
```

## 8\. Deploy your application

After you have created your Worker application and added the required functions, deploy the application.

Before you deploy, you must set the `OPENAI_API_KEY` [secret](https://developers.cloudflare.com/workers/configuration/secrets/) for your application. Do this by running the [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret-put) command:

Terminal window

```sh
npx wrangler secret put OPENAI_API_KEY
```

To deploy your Worker application to the Cloudflare global network:

1. Make sure you are in your Worker project's directory, then run the [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) command:

Terminal window

```sh
npx wrangler deploy
```

2. Wrangler will package and upload your code.
3. After your application is deployed, Wrangler will provide you with your Worker's URL.

## 9\. View the fine-tune job status and use the model

To use your application, create a new fine-tune job by making a request to the `/files` route with a `file` query parameter matching the filename you uploaded earlier:

Terminal window

```sh
curl "https://your-worker-url.com/files?file=finetune.jsonl"
```

When the file is uploaded, issue another request to `/models`, passing the `file_id` query parameter. This should match the `id` returned as JSON from the `/files` route:

Terminal window

```sh
curl "https://your-worker-url.com/models?file_id=file-abc123"
```

Finally, visit `/jobs` to see the status of your fine-tune jobs in OpenAI. Once the fine-tune job has completed, you can see the `fine_tuned_model` value, indicating a fine-tuned model has been created.

![Jobs](https://developers.cloudflare.com/_astro/finetune-jobs.BQ_jbiJu_Z2n2Er.webp) 

Visit the [OpenAI Playground ↗](https://platform.openai.com/playground) to use your fine-tuned model. Select your fine-tuned model from the top-left dropdown of the interface.

![Demo](https://developers.cloudflare.com/_astro/finetune-example.Df8cOHyQ_1PgFLK.webp) 

You can also use it in any API requests you make to OpenAI's chat completions endpoint, as in the following example:

JavaScript

```js
openai.chat.completions.create({
  messages: [{ role: "system", content: "You are a helpful assistant." }],
  model: "ft:gpt-4o-mini:my-org:custom_suffix:id",
});
```

## Next steps

To build more with Workers, refer to [Tutorials](https://developers.cloudflare.com/workers/tutorials).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord ↗](https://discord.cloudflare.com) to connect with other developers and the Cloudflare team.


---

---
title: Deploy a real-time chat application
description: This tutorial shows how to deploy a serverless, real-time chat application. The chat application uses a Durable Object to control each chat room.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Deploy a real-time chat application

**Last reviewed:**  over 2 years ago 

In this tutorial, you will deploy a serverless, real-time chat application that runs using [Durable Objects](https://developers.cloudflare.com/durable-objects/).

This chat application uses a Durable Object to control each chat room. Users connect to the Object using WebSockets. Messages from one user are broadcast to all the other users. The chat history is also stored in durable storage. Real-time messages are relayed directly from one user to others without going through the storage layer.
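The core pattern — one object holding a room's sessions, broadcasting each message to every connected user, and keeping a bounded history for newcomers — can be sketched in plain TypeScript. This is an illustrative model only, not the demo's actual Durable Object code; in the real demo each session is a WebSocket accepted by the Durable Object, and history is persisted to durable storage rather than kept in memory.

```typescript
// Illustrative model of the chat-room pattern: not the workers-chat-demo
// source, just the broadcast-and-history logic in isolation.
type Send = (message: string) => void;

class Room {
  private sessions: Send[] = [];
  private history: string[] = [];
  private readonly maxHistory = 100; // assumed cap for this sketch

  join(send: Send): void {
    // New users first receive the stored history, then live messages.
    for (const msg of this.history) send(msg);
    this.sessions.push(send);
  }

  broadcast(message: string): void {
    this.history.push(message);
    if (this.history.length > this.maxHistory) this.history.shift();
    for (const send of this.sessions) send(message);
  }
}
```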

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## Clone the chat application repository

Open your terminal and clone the [workers-chat-demo ↗](https://github.com/cloudflare/workers-chat-demo) repository:

Terminal window

```sh
git clone https://github.com/cloudflare/workers-chat-demo.git
```

## Authenticate Wrangler

After you have cloned the repository, authenticate Wrangler by running:

Terminal window

```sh
npx wrangler login
```

## Deploy your project

When you are ready to deploy your application, run:

Terminal window

```sh
npx wrangler deploy
```

Your application will be deployed to your `*.workers.dev` subdomain.

To deploy your application to a custom domain within the Cloudflare dashboard, go to your Worker > **Triggers** \> **Add Custom Domain**.

To deploy your application to a custom domain using Wrangler, open your project's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

To configure a route in your Wrangler configuration file, add the following to your environment:

wrangler.jsonc

```jsonc
{
  "routes": [
    {
      "pattern": "example.com/about",
      "zone_id": "<YOUR_ZONE_ID>"
    }
  ]
}
```

wrangler.toml

```toml
[[routes]]
pattern = "example.com/about"
zone_id = "<YOUR_ZONE_ID>"
```

If your zone ID is already specified in the environment of your Wrangler configuration file, you do not need to repeat it in the route object.

To configure a subdomain in your Wrangler configuration file, add the following to your environment:

wrangler.jsonc

```jsonc
{
  "routes": [
    {
      "pattern": "subdomain.example.com",
      "custom_domain": true
    }
  ]
}
```

wrangler.toml

```toml
[[routes]]
pattern = "subdomain.example.com"
custom_domain = true
```

To test your live application:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker, then go to **Triggers** > **Routes** and select the `edge-chat-demo.<SUBDOMAIN>.workers.dev` route.
3. Enter a name in the **your name** field.
4. Choose whether to enter a public room or create a private room.
5. Send the link to other participants. You will be able to view room participants on the right side of the screen.

## Uninstall your application

To uninstall your chat application, modify your Wrangler file to remove the `durable_objects` bindings and add a `deleted_classes` migration:

wrangler.jsonc

```jsonc
{
  "durable_objects": {
    "bindings": []
  },
  "migrations": [
    {
      // The original migration that made the ChatRoom and RateLimiter classes callable as Durable Objects.
      "tag": "v1",
      "new_sqlite_classes": [
        "ChatRoom",
        "RateLimiter"
      ]
    },
    {
      "tag": "v2", // Should be unique for each entry
      "deleted_classes": [
        "ChatRoom",
        "RateLimiter"
      ]
    }
  ]
}
```

wrangler.toml

```toml
[durable_objects]
bindings = [ ]

[[migrations]]
tag = "v1"
new_sqlite_classes = [ "ChatRoom", "RateLimiter" ]

[[migrations]]
tag = "v2"
deleted_classes = [ "ChatRoom", "RateLimiter" ]
```

Then run `npx wrangler deploy`.

To delete your Worker:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Select **Manage Service** > **Delete**. For complete instructions on setup and deletion, refer to the `README.md` in your cloned repository.

By completing this tutorial, you have deployed a real-time chat application with Durable Objects and Cloudflare Workers.

## Related resources

Continue building with other Cloudflare Workers tutorials below.

* [Build a Slackbot](https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/)
* [Create SMS notifications for your GitHub repository using Twilio](https://developers.cloudflare.com/workers/tutorials/github-sms-notifications-using-twilio/)
* [Build a QR code generator](https://developers.cloudflare.com/workers/tutorials/build-a-qr-code-generator/)


---

---
title: Deploy an Express.js application on Cloudflare Workers
description: Learn how to deploy an Express.js application on Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Deploy an Express.js application on Cloudflare Workers

**Last reviewed:**  5 months ago 

In this tutorial, you will learn how to deploy an [Express.js ↗](https://expressjs.com/) application on Cloudflare Workers using the [Cloudflare Workers platform](https://developers.cloudflare.com/workers/) and [D1 database](https://developers.cloudflare.com/d1/). You will build a Members Registry API with basic Create, Read, Update, and Delete (CRUD) operations. You will use D1 as the database for storing and retrieving member data.

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## Quick start

If you want to skip the steps and get started quickly, select **Deploy to Cloudflare** below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/express-on-workers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. Use this option if you are familiar with Cloudflare Workers, and wish to skip the step-by-step guidance.

You may wish to manually follow the steps if you are new to Cloudflare Workers.

## 1\. Create a new Cloudflare Workers project

Use [C3 ↗](https://developers.cloudflare.com/learning-paths/workers/get-started/c3-and-wrangler/#c3), the command-line tool for Cloudflare's developer products, to create a new directory and initialize a new Worker project:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- express-d1-app
```

```
yarn create cloudflare express-d1-app
```

```
pnpm create cloudflare@latest express-d1-app
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Change into your new project directory:

```sh
cd express-d1-app
```

## 2\. Install Express and dependencies

In this tutorial, you will use [Express.js ↗](https://expressjs.com/), a popular web framework for Node.js. To use Express in a Cloudflare Workers environment, install Express along with the necessary TypeScript types:

 npm  yarn  pnpm  bun 

```
npm i express @types/express
```

```
yarn add express @types/express
```

```
pnpm add express @types/express
```

```
bun add express @types/express
```

Express.js on Cloudflare Workers requires the `nodejs_compat` [compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/). This flag enables Node.js APIs and allows Express to run on the Workers runtime. Add the following to your Wrangler configuration file:

wrangler.jsonc

```jsonc
{
  "compatibility_flags": [
    "nodejs_compat"
  ]
}
```

wrangler.toml

```toml
compatibility_flags = [ "nodejs_compat" ]
```

## 3\. Create a D1 database

You will now create a D1 database to store member information. Use the `wrangler d1 create` command to create a new database:

```sh
npx wrangler d1 create members-db
```

The command will create a new D1 database and ask you the following questions:

* **Would you like Wrangler to add it on your behalf?**: Type `Y`.
* **What binding name would you like to use?**: Type `DB` and press Enter.
* **For local dev, do you want to connect to the remote resource instead of a local resource?**: Type `N`.

```
 ⛅️ wrangler 4.44.0
───────────────────
✅ Successfully created DB 'members-db' in region WNAM
Created your new D1 database.

To access your new D1 Database in your Worker, add the following snippet to your configuration file:
{
  "d1_databases": [
    {
      "binding": "members_db",
      "database_name": "members-db",
      "database_id": "<unique-ID-for-your-database>"
    }
  ]
}
✔ Would you like Wrangler to add it on your behalf? … yes
✔ What binding name would you like to use? … DB
✔ For local dev, do you want to connect to the remote resource instead of a local resource? … no
```

The binding will be added to your Wrangler configuration file.

wrangler.jsonc

```jsonc
{
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "members-db",
      "database_id": "<unique-ID-for-your-database>"
    }
  ]
}
```

wrangler.toml

```toml
[[d1_databases]]
binding = "DB"
database_name = "members-db"
database_id = "<unique-ID-for-your-database>"
```

## 4\. Create database schema

Create a directory called `schemas` in your project root, and inside it, create a file called `schema.sql`:

schemas/schema.sql

```sql
DROP TABLE IF EXISTS members;
CREATE TABLE IF NOT EXISTS members (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  email TEXT NOT NULL UNIQUE,
  joined_date TEXT NOT NULL
);

-- Insert sample data
INSERT INTO members (name, email, joined_date) VALUES
  ('Alice Johnson', 'alice@example.com', '2024-01-15'),
  ('Bob Smith', 'bob@example.com', '2024-02-20'),
  ('Carol Williams', 'carol@example.com', '2024-03-10');
```

This schema creates a `members` table with an auto-incrementing ID, name, email, and join date fields. It also inserts three sample members.
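For reference, a TypeScript shape matching a row of this table might look like the following. The interface name and comments are our own, not generated by the tutorial:

```typescript
// Hypothetical row type for the members table defined above.
interface Member {
  id: number;          // INTEGER PRIMARY KEY AUTOINCREMENT
  name: string;        // TEXT NOT NULL
  email: string;       // TEXT NOT NULL UNIQUE
  joined_date: string; // TEXT NOT NULL, e.g. "2024-01-15"
}

// A sample row shaped like the seeded data.
const sample: Member = {
  id: 1,
  name: "Alice Johnson",
  email: "alice@example.com",
  joined_date: "2024-01-15",
};
```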

Execute the schema file against your D1 database:

```sh
npx wrangler d1 execute members-db --file=./schemas/schema.sql
```

The above command creates the table in your local development database. You will deploy the schema to production later.

## 5\. Initialize Express application

Update your `src/index.ts` file to set up Express with TypeScript. Replace the file content with the following:

src/index.ts

```ts
import { env } from "cloudflare:workers";
import { httpServerHandler } from "cloudflare:node";
import express from "express";

const app = express();

// Middleware to parse JSON bodies
app.use(express.json());

// Health check endpoint
app.get("/", (req, res) => {
  res.json({ message: "Express.js running on Cloudflare Workers!" });
});

app.listen(3000);
export default httpServerHandler({ port: 3000 });
```

This code initializes Express and creates a basic health check endpoint. The key import `import { env } from "cloudflare:workers"` allows you to access [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) like your D1 database from anywhere in your code. The [httpServerHandler](https://developers.cloudflare.com/workers/runtime-apis/nodejs/http/#httpserverhandler) integrates Express with the Workers runtime, enabling your application to handle HTTP requests on Cloudflare's network.

Next, execute the typegen command to generate type definitions for your Worker environment:

```sh
npm run cf-typegen
```

## 6\. Implement read operations

Add endpoints to retrieve members from the database. Update your `src/index.ts` file by adding the following routes after the health check endpoint:

src/index.ts

```ts
// GET all members
app.get("/api/members", async (req, res) => {
  try {
    const { results } = await env.DB.prepare(
      "SELECT * FROM members ORDER BY joined_date DESC"
    ).all();

    res.json({ success: true, members: results });
  } catch (error) {
    res.status(500).json({ success: false, error: "Failed to fetch members" });
  }
});

// GET a single member by ID
app.get("/api/members/:id", async (req, res) => {
  try {
    const { id } = req.params;

    const { results } = await env.DB.prepare(
      "SELECT * FROM members WHERE id = ?"
    )
      .bind(id)
      .all();

    if (results.length === 0) {
      return res.status(404).json({ success: false, error: "Member not found" });
    }

    res.json({ success: true, member: results[0] });
  } catch (error) {
    res.status(500).json({ success: false, error: "Failed to fetch member" });
  }
});
```

These routes use the D1 binding (`env.DB`) to prepare SQL statements and execute them. Since you imported `env` from `cloudflare:workers` at the top of the file, it is accessible throughout your application. The `prepare`, `bind`, and `all` methods on the D1 binding allow you to safely query the database. Refer to [D1 Workers Binding API](https://developers.cloudflare.com/d1/worker-api/) for all available methods.

## 7\. Implement create operation

Add an endpoint to create new members. Add the following route to your `src/index.ts` file:

src/index.ts

```ts
// POST - Create a new member
app.post("/api/members", async (req, res) => {
  try {
    const { name, email } = req.body;

    // Validate input
    if (!name || !email) {
      return res.status(400).json({
        success: false,
        error: "Name and email are required",
      });
    }

    // Basic email validation (simplified for tutorial purposes)
    // For production, consider using a validation library or more comprehensive checks
    if (!email.includes("@") || !email.includes(".")) {
      return res.status(400).json({
        success: false,
        error: "Invalid email format",
      });
    }

    const joined_date = new Date().toISOString().split("T")[0];

    const result = await env.DB.prepare(
      "INSERT INTO members (name, email, joined_date) VALUES (?, ?, ?)"
    )
      .bind(name, email, joined_date)
      .run();

    if (result.success) {
      res.status(201).json({
        success: true,
        message: "Member created successfully",
        id: result.meta.last_row_id,
      });
    } else {
      res
        .status(500)
        .json({ success: false, error: "Failed to create member" });
    }
  } catch (error: any) {
    // Handle unique constraint violation
    if (error.message?.includes("UNIQUE constraint failed")) {
      return res.status(409).json({
        success: false,
        error: "Email already exists",
      });
    }
    res.status(500).json({ success: false, error: "Failed to create member" });
  }
});
```

This endpoint validates the input, checks the email format, and inserts a new member into the database. It also handles duplicate email addresses by checking for unique constraint violations.
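As the comments in the route note, `includes("@")` is deliberately minimal. A slightly stricter check, still far from RFC-complete and offered here purely as a hypothetical replacement, could use a regular expression:

```typescript
// Hypothetical stricter check: no whitespace or extra "@" on either side,
// and a dot in the domain part. Still not RFC 5322; prefer a validation
// library in production.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isValidEmail(email: string): boolean {
  return EMAIL_RE.test(email);
}
```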

## 8. Implement update operation

Add an endpoint to update existing members. Add the following route to your `src/index.ts` file:

src/index.ts

```

app.put("/api/members/:id", async (req, res) => {
  try {
    const { id } = req.params;
    const { name, email } = req.body;

    // Validate input
    if (!name && !email) {
      return res.status(400).json({
        success: false,
        error: "At least one field (name or email) is required",
      });
    }

    // Basic email validation if provided (simplified for tutorial purposes)
    // For production, consider using a validation library or more comprehensive checks
    if (email && (!email.includes("@") || !email.includes("."))) {
      return res.status(400).json({
        success: false,
        error: "Invalid email format",
      });
    }

    // Build dynamic update query
    const updates: string[] = [];
    const values: any[] = [];

    if (name) {
      updates.push("name = ?");
      values.push(name);
    }
    if (email) {
      updates.push("email = ?");
      values.push(email);
    }

    values.push(id);

    const result = await env.DB.prepare(
      `UPDATE members SET ${updates.join(", ")} WHERE id = ?`
    )
      .bind(...values)
      .run();

    if (result.meta.changes === 0) {
      return res
        .status(404)
        .json({ success: false, error: "Member not found" });
    }

    res.json({ success: true, message: "Member updated successfully" });
  } catch (error: any) {
    if (error.message?.includes("UNIQUE constraint failed")) {
      return res.status(409).json({
        success: false,
        error: "Email already exists",
      });
    }
    res.status(500).json({ success: false, error: "Failed to update member" });
  }
});

```

This endpoint allows updating either the name, email, or both fields of an existing member. It builds a dynamic SQL query based on the provided fields.
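The dynamic query construction can be sketched on its own. `buildUpdate` below is a hypothetical helper (the route inlines this logic, and additionally pushes `id` onto the values before binding):

```javascript
// Hypothetical helper isolating the dynamic SET-clause logic from the route above.
// Note: the route also appends the member `id` to the bound values for the WHERE clause.
function buildUpdate({ name, email }) {
  const updates = [];
  const values = [];
  if (name) {
    updates.push("name = ?");
    values.push(name);
  }
  if (email) {
    updates.push("email = ?");
    values.push(email);
  }
  return { sql: `UPDATE members SET ${updates.join(", ")} WHERE id = ?`, values };
}

// buildUpdate({ name: "Alice Cooper" }).sql
// → "UPDATE members SET name = ? WHERE id = ?"
```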

## 9. Implement delete operation

Add an endpoint to delete members. Add the following route to your `src/index.ts` file:

src/index.ts

```

// DELETE - Delete a member
app.delete("/api/members/:id", async (req, res) => {
  try {
    const { id } = req.params;

    const result = await env.DB.prepare("DELETE FROM members WHERE id = ?")
      .bind(id)
      .run();

    if (result.meta.changes === 0) {
      return res
        .status(404)
        .json({ success: false, error: "Member not found" });
    }

    res.json({ success: true, message: "Member deleted successfully" });
  } catch (error) {
    res.status(500).json({ success: false, error: "Failed to delete member" });
  }
});

```

This endpoint deletes a member by their ID and returns an error if the member does not exist.

## 10. Test locally

Start the development server to test your API locally:

```
npm run dev
```

The development server will start, and you can access your API at `http://localhost:8787`.

Open a new terminal window and test the endpoints using `curl`:

Get all members

```
curl http://localhost:8787/api/members
```

```
{
  "success": true,
  "members": [
    {
      "id": 1,
      "name": "Alice Johnson",
      "email": "alice@example.com",
      "joined_date": "2024-01-15"
    },
    {
      "id": 2,
      "name": "Bob Smith",
      "email": "bob@example.com",
      "joined_date": "2024-02-20"
    },
    {
      "id": 3,
      "name": "Carol Williams",
      "email": "carol@example.com",
      "joined_date": "2024-03-10"
    }
  ]
}
```

Test creating a new member:

Create a member

```
curl -X POST http://localhost:8787/api/members \
  -H "Content-Type: application/json" \
  -d '{"name": "David Brown", "email": "david@example.com"}'
```

```
{
  "success": true,
  "message": "Member created successfully",
  "id": 4
}
```

Test getting a single member:

Get a member by ID

```
curl http://localhost:8787/api/members/1
```

Test updating a member:

Update a member

```
curl -X PUT http://localhost:8787/api/members/1 \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice Cooper"}'
```

Test deleting a member:

Delete a member

```
curl -X DELETE http://localhost:8787/api/members/4
```

## 11. Deploy to Cloudflare Workers

Before deploying to production, execute the schema file against your remote (production) database:

```
npx wrangler d1 execute members-db --remote --file=./schemas/schema.sql
```

Now deploy your application to the Cloudflare network:

```
npm run deploy
```

```
⛅️ wrangler 4.44.0
───────────────────
Total Upload: 1743.64 KiB / gzip: 498.65 KiB
Worker Startup Time: 48 ms
Your Worker has access to the following bindings:
Binding                  Resource
env.DB (members-db)      D1 Database

Uploaded express-d1-app (2.99 sec)
Deployed express-d1-app triggers (5.26 sec)
  https://<your-subdomain>.workers.dev
Current Version ID: <version-id>
```

After successful deployment, Wrangler will output your Worker's URL.

## 12. Test production deployment

Test your deployed API using the provided URL. Replace `<your-worker-url>` with your actual Worker URL:

Test production API

```
curl https://<your-worker-url>/api/members
```

You should see the same member data you created in the production database.

Create a new member in production:

Create a member in production

```
curl -X POST https://<your-worker-url>/api/members \
  -H "Content-Type: application/json" \
  -d '{"name": "Eva Martinez", "email": "eva@example.com"}'
```

Your Express.js application with D1 database is now running on Cloudflare Workers.

## Conclusion

In this tutorial, you built a Members Registry API using Express.js and D1 database, then deployed it to Cloudflare Workers. You implemented full CRUD operations (Create, Read, Update, Delete) and learned how to:

* Set up an Express.js application for Cloudflare Workers
* Create and configure a D1 database with bindings
* Implement database operations using D1's prepared statements
* Test your API locally and in production

## Next steps

* Learn more about [D1 database features](https://developers.cloudflare.com/d1/)
* Explore [Workers routing and middleware](https://developers.cloudflare.com/workers/runtime-apis/)
* Add authentication to your API using [Workers authentication](https://developers.cloudflare.com/workers/runtime-apis/handlers/)
* Implement pagination for large datasets using [D1 query optimization](https://developers.cloudflare.com/d1/worker-api/)


---

---
title: Generate YouTube thumbnails with Workers and Cloudflare Image Resizing
description: This tutorial explains how to programmatically generate a custom YouTube thumbnail using Cloudflare Workers. You may want to customize the thumbnail's design, call-to-actions and images used to encourage more viewers to watch your video.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Generate YouTube thumbnails with Workers and Cloudflare Image Resizing

**Last reviewed:**  about 3 years ago 

In this tutorial, you will learn how to programmatically generate a custom YouTube thumbnail using Cloudflare Workers and Cloudflare Image Resizing. You may want to generate a custom YouTube thumbnail to customize the thumbnail's design, call-to-actions and images used to encourage more viewers to watch your video.

This tutorial will help you understand how to work with [Images](https://developers.cloudflare.com/images/),[Image Resizing](https://developers.cloudflare.com/images/transform-images/) and [Cloudflare Workers](https://developers.cloudflare.com/workers/).

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

To follow this tutorial, make sure you have Node, Cargo, and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/) installed on your machine.

## Learning goals

In this tutorial, you will learn how to:

* Upload Images to Cloudflare with the Cloudflare dashboard or API.
* Set up a Worker project with Wrangler.
* Manipulate images with image transformations in your Worker.

## Upload your image

To generate a custom thumbnail image, you first need to upload a background image to Cloudflare Images. This will serve as the image you use for transformations to generate the thumbnails.

Cloudflare Images allows you to store, resize, optimize and deliver images in a fast and secure manner. To get started, upload your images to the Cloudflare dashboard or use the Upload API.

### Upload with the dashboard

To upload an image using the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Transformations** page.  
[ Go to **Transformations** ](https://dash.cloudflare.com/?to=/:account/images/transformations)
2. Use **Quick Upload** to either drag and drop an image or click to browse and choose a file from your local files.
3. After the image is uploaded, view it using the generated URL.

### Upload with the API

To upload your image with the [Upload via URL](https://developers.cloudflare.com/images/upload-images/upload-url/) API, refer to the example below:

Terminal window

```
curl --request POST \
 --url https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/images/v1 \
 --header 'Authorization: Bearer <API_TOKEN>' \
 --form 'url=<PATH_TO_IMAGE>' \
 --form 'metadata={"key":"value"}' \
 --form 'requireSignedURLs=false'
```

* `ACCOUNT_ID`: Your account ID, which can be found in your account settings.
* `API_TOKEN`: An API token generated with the Images permission scope.
* `PATH_TO_IMAGE`: The URL of the image you want to upload.

You will then receive a response similar to this:

```
{
  "result": {
    "id": "2cdc28f0-017a-49c4-9ed7-87056c83901",
    "filename": "image.jpeg",
    "metadata": {
      "key": "value"
    },
    "uploaded": "2022-01-31T16:39:28.458Z",
    "requireSignedURLs": false,
    "variants": [
      "https://imagedelivery.net/Vi7wi5KSItxGFsWRG2Us6Q/2cdc28f0-017a-49c4-9ed7-87056c83901/public",
      "https://imagedelivery.net/Vi7wi5KSItxGFsWRG2Us6Q/2cdc28f0-017a-49c4-9ed7-87056c83901/thumbnail"
    ]
  },
  "success": true,
  "errors": [],
  "messages": []
}
```

Now that you have uploaded your image, you will use it as the background image for your video's thumbnail.

## Create a Worker to transform text to image

After uploading your image, create a Worker that will enable you to transform text to image. This image can be used as an overlay on the background image you uploaded. Use the [rustwasm-worker-template ↗](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-rust).

You will need the following before you begin:

* A recent version of [Rust ↗](https://rustup.rs/).
* Access to the `cargo-generate` subcommand:  
Terminal window  
```  
cargo install cargo-generate  
```

Create a new Worker project using the `worker-rust` template:

Terminal window

```
cargo generate https://github.com/cloudflare/rustwasm-worker-template
```

You will now make a few changes to the files in your project directory.

1. In the `lib.rs` file, add the following code block:

```
use worker::*;
mod utils;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Optionally, get more helpful error messages written to the console in the case of a panic.
    utils::set_panic_hook();

    let router = Router::new();
    router
        .get("/", |_, _| Response::ok("Hello from Workers!"))
        .run(req, env)
        .await
}
```

1. Update the `Cargo.toml` file in your `worker-to-text` project directory to use [text-to-png ↗](https://github.com/RookAndPawn/text-to-png), a Rust package for rendering text to PNG. Add the package as a dependency by running:

Terminal window

```
cargo add text-to-png@0.2.0
```

1. Import the `text_to_png` library into your `worker-to-text` project's `lib.rs` file.

```
use text_to_png::{TextPng, TextRenderer};
use worker::*;
mod utils;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Optionally, get more helpful error messages written to the console in the case of a panic.
    utils::set_panic_hook();

    let router = Router::new();
    router
        .get("/", |_, _| Response::ok("Hello from Workers!"))
        .run(req, env)
        .await
}
```

1. Update `lib.rs` to add a `handle_slash` function that will perform the image transformation based on the text passed to the URL as a query parameter.

```
use text_to_png::{TextPng, TextRenderer};
use worker::*;
mod utils;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Optionally, get more helpful error messages written to the console in the case of a panic.
    utils::set_panic_hook();

    let router = Router::new();
    router
        .get("/", |_, _| Response::ok("Hello from Workers!"))
        .run(req, env)
        .await
}

async fn handle_slash(text: String) -> Result<Response> {}
```

1. In the `handle_slash` function, create a `TextRenderer` configured with a custom font, then use its `render_text_to_png_data` method to render the text as a PNG. In this example, the custom font (`Inter-Bold.ttf`) is located in an `/assets` folder at the root of the project. You must update this portion of the code to point to your own font file.

```
use text_to_png::{TextPng, TextRenderer};
use worker::*;
mod utils;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Optionally, get more helpful error messages written to the console in the case of a panic.
    utils::set_panic_hook();

    let router = Router::new();
    router
        .get("/", |_, _| Response::ok("Hello from Workers!"))
        .run(req, env)
        .await
}

async fn handle_slash(text: String) -> Result<Response> {
    let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf"))
        .expect("Example font is definitely loadable");

    let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap();
}
```

1. Rewrite the `Router` logic to call `handle_slash` with the query string when one is passed in the URL, and with the default text `"Hello Worker!"` otherwise.

```
use text_to_png::{TextPng, TextRenderer};
use worker::*;
mod utils;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Optionally, get more helpful error messages written to the console in the case of a panic.
    utils::set_panic_hook();

    let router = Router::new();
    router
        .get_async("/", |req, _| async move {
            if let Some(text) = req.url()?.query() {
                handle_slash(text.into()).await
            } else {
                handle_slash("Hello Worker!".into()).await
            }
        })
        .run(req, env)
        .await
}

async fn handle_slash(text: String) -> Result<Response> {
    let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf"))
        .expect("Example font is definitely loadable");

    let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap();
}
```

1. In your `lib.rs` file, set the headers to `content-type: image/png` so that the response is correctly rendered as a PNG image.

```
use text_to_png::{TextPng, TextRenderer};
use worker::*;
mod utils;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Optionally, get more helpful error messages written to the console in the case of a panic.
    utils::set_panic_hook();

    let router = Router::new();
    router
        .get_async("/", |req, _| async move {
            if let Some(text) = req.url()?.query() {
                handle_slash(text.into()).await
            } else {
                handle_slash("Hello Worker!".into()).await
            }
        })
        .run(req, env)
        .await
}

async fn handle_slash(text: String) -> Result<Response> {
    let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf"))
        .expect("Example font is definitely loadable");

    let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap();

    let mut headers = Headers::new();
    headers.set("content-type", "image/png")?;

    Ok(Response::from_bytes(text_png.data)?.with_headers(headers))
}
```

The final `lib.rs` file should look as follows. Find the full code as an example repository on [GitHub ↗](https://github.com/cloudflare/workers-sdk/tree/main/templates/examples/worker-to-text).

```
use text_to_png::{TextPng, TextRenderer};
use worker::*;

mod utils;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Optionally, get more helpful error messages written to the console in the case of a panic.
    utils::set_panic_hook();

    let router = Router::new();

    router
        .get_async("/", |req, _| async move {
            if let Some(text) = req.url()?.query() {
                handle_slash(text.into()).await
            } else {
                handle_slash("Hello Worker!".into()).await
            }
        })
        .run(req, env)
        .await
}

async fn handle_slash(text: String) -> Result<Response> {
    let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf"))
        .expect("Example font is definitely loadable");

    let text = if text.len() > 128 {
        "Nope".into()
    } else {
        text
    };

    let text = urlencoding::decode(&text).map_err(|_| worker::Error::BadEncoding)?;

    let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap();

    let mut headers = Headers::new();
    headers.set("content-type", "image/png")?;

    Ok(Response::from_bytes(text_png.data)?.with_headers(headers))
}
```

After you have finished updating your project, start a local server for developing your Worker by running:

Terminal window

```
npx wrangler dev
```

This should spin up a `localhost` instance with the image displayed:

![Run wrangler dev to start a local server for your Worker](https://developers.cloudflare.com/_astro/hello-worker.ot1qb0cF_Z2j0gbO.webp) 

After adding a query parameter with custom text, you should receive:

![Follow the instructions above to receive an output image](https://developers.cloudflare.com/_astro/build-serverles.BHasze4F_Zc150.webp) 

To deploy your Worker, open your Wrangler file and update the `name` key with your project's name. Below is an example with this tutorial's project name:

* [  wrangler.jsonc ](#tab-panel-7770)
* [  wrangler.toml ](#tab-panel-7771)

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker-to-text"
}
```

```
#:schema node_modules/wrangler/config-schema.json
name = "worker-to-text"
```

Then run the `npx wrangler deploy` command to deploy your Worker.

Terminal window

```
npx wrangler deploy
```

A `.workers.dev` domain will be generated for your Worker after running `wrangler deploy`. You will use this domain in the main thumbnail image.

## Create a Worker to display the original image

Create a Worker to serve the image you uploaded to Images by running:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- thumbnail-image
```

```
yarn create cloudflare thumbnail-image
```

```
pnpm create cloudflare@latest thumbnail-image
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `JavaScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

To start developing your Worker, `cd` into your new project directory:

Terminal window

```
cd thumbnail-image
```

This will create a new Worker project named `thumbnail-image`. In the `src/index.js` file, add the following code block:

JavaScript

```
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === "/original-image") {
      const image = await fetch(
        `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${env.IMAGE_ID}/public`,
      );
      return image;
    }
    return new Response("Image Resizing with a Worker");
  },
};
```

Set `env.CLOUDFLARE_ACCOUNT_HASH` to your [Cloudflare Images account hash](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/) and `env.IMAGE_ID` to your [image ID](https://developers.cloudflare.com/images/get-started/).

Run your Worker and go to the `/original-image` route to review your image.

## Add custom text on your image

You will now use [Cloudflare image transformations](https://developers.cloudflare.com/images/transform-images/), with the `fetch` method, to add your dynamic text image as an overlay on top of your background image. Start by displaying the resulting image on a different route. Call the new route `/thumbnail`.

JavaScript

```
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === "/original-image") {
      const image = await fetch(
        `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${env.IMAGE_ID}/public`,
      );
      return image;
    }

    if (url.pathname === "/thumbnail") {
    }

    return new Response("Image Resizing with a Worker");
  },
};
```

Next, use the `fetch` method to apply the image transformation changes on top of the background image. The overlay options are nested in `options.cf.image`.

JavaScript

```
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    if (url.pathname === "/original-image") {
      const image = await fetch(
        `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${env.IMAGE_ID}/public`,
      );
      return image;
    }

    if (url.pathname === "/thumbnail") {
      fetch(imageURL, {
        cf: {
          image: {},
        },
      });
    }

    return new Response("Image Resizing with a Worker");
  },
};
```

The `imageURL` is the URL of the image you want to use as a background image. In the `cf.image` object, specify the options you want to apply to the background image.

Note

At the time of publication, Cloudflare image transformations cannot resize images stored in Cloudflare Images from within a Worker. Instead of using the image you served on the `/original-image` route, you will use the same image from a different source.

Add your background image to an assets directory in a GitHub repository and push your changes. Copy the uploaded image's URL by left-clicking the image and selecting the **Copy Remote File Url** option.

Replace the `imageURL` value with the copied remote URL.

JavaScript

```
if (url.pathname === "/thumbnail") {
  const imageURL =
    "https://github.com/lauragift21/social-image-demo/blob/1ed9044463b891561b7438ecdecbdd9da48cdb03/assets/cover.png?raw=true";
  fetch(imageURL, {
    cf: {
      image: {},
    },
  });
}
```

Next, add overlay options in the image object. Resize the image to the preferred width and height for YouTube thumbnails and use the [draw](https://developers.cloudflare.com/images/transform-images/draw-overlays/) option to add overlay text using the deployed URL of your `text-to-image` Worker.

JavaScript

```
fetch(imageURL, {
  cf: {
    image: {
      width: 1280,
      height: 720,
      draw: [
        {
          url: "https://text-to-image.examples.workers.dev",
          left: 40,
        },
      ],
    },
  },
});
```

Image transformations can only be tested when you deploy your Worker.

To deploy your Worker, open your Wrangler file and update the `name` key with your project's name. Below is an example with this tutorial's project name:

* [  wrangler.jsonc ](#tab-panel-7772)
* [  wrangler.toml ](#tab-panel-7773)

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "thumbnail-image"
}
```

```
#:schema node_modules/wrangler/config-schema.json
name = "thumbnail-image"
```

Deploy your Worker by running:

Terminal window

```
npx wrangler deploy
```

This command deploys your Worker to your `workers.dev` subdomain. Go to your `.workers.dev` subdomain and open the `/thumbnail` route.

You should see the resized image with the text `Hello Worker!`.

![Follow the steps above to generate your resized image.](https://developers.cloudflare.com/_astro/thumbnail.z6EOGa1__kzK0u.webp) 

You will now make the applied text dynamic. Making your text dynamic allows you to change the text and have it update on the image automatically.

To add dynamic text, append any text attached to the `/thumbnail` URL using query parameters and pass it down to the `text-to-image` Worker URL as a parameter.

JavaScript

```
for (const title of url.searchParams.values()) {
  try {
    const editedImage = await fetch(imageURL, {
      cf: {
        image: {
          width: 1280,
          height: 720,
          draw: [
            {
              url: `https://text-to-image.examples.workers.dev/?${title}`,
              left: 50,
            },
          ],
        },
      },
    });
    return editedImage;
  } catch (error) {
    console.log(error);
  }
}
```

This will always return the text you pass as a query string in the generated image. This example URL, [https://socialcard.cdnuptime.com/thumbnail?Getting%20Started%20With%20Cloudflare%20Images ↗](https://socialcard.cdnuptime.com/thumbnail?Getting%20Started%20With%20Cloudflare%20Images), will generate the following image:

![An example thumbnail.](https://developers.cloudflare.com/_astro/thumbnail2.Bi3AcUzr_Z1qijsM.webp) 

By completing this tutorial, you have successfully made a custom YouTube thumbnail generator.

## Related resources

In this tutorial, you learned how to use Cloudflare Workers and Cloudflare image transformations to generate custom YouTube thumbnails. To learn more about Cloudflare Workers and image transformations, refer to [Resize an image with a Worker](https://developers.cloudflare.com/images/transform-images/transform-via-workers/).


---

---
title: GitHub SMS notifications using Twilio
description: This tutorial shows you how to build an SMS notification system on Workers to receive updates on a GitHub repository. Your Worker will send you a text update using Twilio when there is new activity on your repository.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# GitHub SMS notifications using Twilio

**Last reviewed:**  over 2 years ago 

In this tutorial, you will learn to build an SMS notification system on Workers to receive updates on a GitHub repository. Your Worker will send you a text update using Twilio when there is new activity on your repository.

You will learn how to:

* Build webhooks using Workers.
* Integrate Workers with GitHub and Twilio.
* Use Worker secrets with Wrangler.
![Animated gif of receiving a text message on your phone after pushing changes to a repository](https://developers.cloudflare.com/images/workers/tutorials/github-sms/video-of-receiving-a-text-after-pushing-to-a-repo.gif) 

---

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## Create a Worker project

Start by using `npm create cloudflare@latest` to create a Worker project in the command line:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- github-twilio-notifications
```

```
yarn create cloudflare github-twilio-notifications
```

```
pnpm create cloudflare@latest github-twilio-notifications
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `JavaScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Make note of the URL that your application was deployed to. You will be using it when you configure your GitHub webhook.

Terminal window

```
cd github-twilio-notifications
```

Inside of your new `github-twilio-notifications` directory, `src/index.js` represents the entry point to your Cloudflare Workers application. You will configure this file for most of the tutorial.

You will also need a GitHub account and a repository for this tutorial. If you do not have either setup, [create a new GitHub account ↗](https://github.com/join) and [create a new repository ↗](https://docs.github.com/en/get-started/quickstart/create-a-repo) to continue with this tutorial.

First, create a webhook for your repository to post updates to your Worker. Inside of your Worker, you will then parse the updates. Finally, you will send a `POST` request to Twilio to send a text message to you.

You can reference the finished code at this [GitHub repository ↗](https://github.com/rickyrobinett/workers-sdk/tree/main/templates/examples/github-sms-notifications-using-twilio).

---

## Configure GitHub

To start, configure a GitHub webhook to post to your Worker when there is an update to the repository:

1. Go to your GitHub repository's **Settings** \> **Webhooks** \> **Add webhook**.
2. Set the Payload URL to the `/webhook` path on the Worker URL that you made note of when your application was first deployed.
3. In the **Content type** dropdown, select _application/json_.
4. In the **Secret** field, input a secret key of your choice.
5. In **Which events would you like to trigger this webhook?**, select **Let me select individual events**. Select the events you want to get notifications for (such as **Pull requests**, **Pushes**, and **Branch or tag creation**).
6. Select **Add webhook** to finish configuration.
![Following instructions to set up your webhook in the GitHub webhooks settings dashboard](https://developers.cloudflare.com/_astro/github-config-screenshot.BR7flpMR_Z1Yrdgb.webp) 

---

## Parsing the response

With your local environment set up, parse the repository update with your Worker.

Initially, your generated `index.js` should look like this:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

Use the `request.method` property of [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) to check if the request coming to your application is a `POST` request, and send an error response if the request is not a `POST` request.

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    if (request.method !== "POST") {
      return new Response("Please send a POST request!");
    }
  },
};
```

Next, validate that the request is sent with the right secret key. GitHub attaches a hash signature for [each payload using the secret key ↗](https://docs.github.com/en/developers/webhooks-and-events/webhooks/securing-your-webhooks). Use a helper function called `checkSignature` on the request to ensure the hash is correct. Then, you can access data from the webhook by parsing the request as JSON.

JavaScript

```
async fetch(request, env, ctx) {
  if (request.method !== "POST") {
    return new Response("Please send a POST request!");
  }
  try {
    const rawBody = await request.text();

    if (!checkSignature(rawBody, request.headers, env.GITHUB_SECRET_TOKEN)) {
      return new Response("Wrong password, try again", { status: 403 });
    }
  } catch (e) {
    return new Response(`Error: ${e}`);
  }
},
```

The `checkSignature` function will use the Node.js crypto library to hash the received payload with your known secret key and confirm that it matches the hash in the request. GitHub computes the hash as an HMAC-SHA-256 hex digest. Place this function at the top of your `index.js` file, before your export.

JavaScript

```
import { createHmac, timingSafeEqual } from "node:crypto";
import { Buffer } from "node:buffer";

function checkSignature(text, headers, githubSecretToken) {
  const hmac = createHmac("sha256", githubSecretToken);
  hmac.update(text);
  const expectedSignature = hmac.digest("hex");
  const actualSignature = headers.get("x-hub-signature-256");

  const trusted = Buffer.from(`sha256=${expectedSignature}`, "ascii");
  const untrusted = Buffer.from(actualSignature, "ascii");

  return (
    trusted.byteLength == untrusted.byteLength &&
    timingSafeEqual(trusted, untrusted)
  );
}
```
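If you want to sanity-check this helper outside the Workers runtime, you can generate the header value GitHub would send in plain Node.js and run the same comparison. The secret and payload below are placeholders:

```javascript
import { createHmac, timingSafeEqual } from "node:crypto";
import { Buffer } from "node:buffer";

const secret = "my-secret"; // placeholder for your webhook secret
const payload = '{"action":"opened"}'; // placeholder webhook body

// What GitHub would send in the x-hub-signature-256 header:
const signature =
  "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");

// The same comparison checkSignature performs:
const trusted = Buffer.from(
  "sha256=" + createHmac("sha256", secret).update(payload).digest("hex"),
  "ascii",
);
const untrusted = Buffer.from(signature, "ascii");
console.log(
  trusted.byteLength === untrusted.byteLength &&
    timingSafeEqual(trusted, untrusted),
); // true
```

A payload signed with a different secret produces a different digest, and `timingSafeEqual` compares the two buffers without leaking timing information.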

To make this work, you need to use [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret-put) to set your `GITHUB_SECRET_TOKEN`. This token is the secret you picked earlier when configuring your GitHub webhook:

Terminal window

```
npx wrangler secret put GITHUB_SECRET_TOKEN
```

Add the `nodejs_compat` flag to your Wrangler file:

wrangler.jsonc

```
{
  "compatibility_flags": ["nodejs_compat"]
}
```

wrangler.toml

```
compatibility_flags = [ "nodejs_compat" ]
```

---

## Sending a text with Twilio

Next, you will use Twilio to send yourself a text message about your repository activity. You need a Twilio account and a phone number that can receive text messages. [Refer to the Twilio guide to get set up ↗](https://www.twilio.com/messaging/sms). (If you are new to Twilio, they have [an interactive game ↗](https://www.twilio.com/quest) where you can learn how to use their platform and earn free credits.)

You can then create a helper function to send text messages by sending a `POST` request to the Twilio API endpoint. [Refer to the Twilio reference ↗](https://www.twilio.com/docs/sms/api/message-resource#create-a-message-resource) to learn more about this endpoint.

Create a new function called `sendText()` that will handle making the request to Twilio:

JavaScript

```
async function sendText(accountSid, authToken, message) {
  const endpoint = `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Messages.json`;

  const encoded = new URLSearchParams({
    To: "%YOUR_PHONE_NUMBER%",
    From: "%YOUR_TWILIO_NUMBER%",
    Body: message,
  });

  const token = btoa(`${accountSid}:${authToken}`);

  const request = {
    body: encoded,
    method: "POST",
    headers: {
      Authorization: `Basic ${token}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
  };

  const response = await fetch(endpoint, request);
  const result = await response.json();

  return Response.json(result);
}
```
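The `Authorization` header above uses HTTP Basic auth: `btoa` base64-encodes `accountSid:authToken`. Outside the Workers runtime, `Buffer` produces the same value; the SID and token here are placeholders:

```javascript
import { Buffer } from "node:buffer";

const accountSid = "ACXXXXXXXXXXXXXXXX"; // placeholder Twilio Account SID
const authToken = "your_auth_token"; // placeholder Twilio auth token

// Equivalent of btoa(`${accountSid}:${authToken}`) in the Worker:
const token = Buffer.from(`${accountSid}:${authToken}`).toString("base64");
console.log(`Authorization: Basic ${token}`);
```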

To make this work, you need to set some secrets to hide your `ACCOUNT_SID` and `AUTH_TOKEN` from the source code. You can set secrets with [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret-put) in your command line.

Terminal window

```
npx wrangler secret put TWILIO_ACCOUNT_SID
npx wrangler secret put TWILIO_AUTH_TOKEN
```

Modify your `fetch` handler to send a text message using the `sendText` function you just made.

JavaScript

```
async fetch(request, env, ctx) {
  if (request.method !== "POST") {
    return new Response("Please send a POST request!");
  }
  try {
    const rawBody = await request.text();
    if (!checkSignature(rawBody, request.headers, env.GITHUB_SECRET_TOKEN)) {
      return new Response("Wrong password, try again", { status: 403 });
    }

    const action = request.headers.get("X-GitHub-Event");
    const json = JSON.parse(rawBody);
    const repoName = json.repository.full_name;
    const senderName = json.sender.login;

    return await sendText(
      env.TWILIO_ACCOUNT_SID,
      env.TWILIO_AUTH_TOKEN,
      `${senderName} completed ${action} onto your repo ${repoName}`,
    );
  } catch (e) {
    return new Response(`Error: ${e}`);
  }
},
```

Run `npx wrangler deploy` to deploy your Worker project:

Terminal window

```
npx wrangler deploy
```

![Video of receiving a text after pushing to a repo](https://developers.cloudflare.com/images/workers/tutorials/github-sms/video-of-receiving-a-text-after-pushing-to-a-repo.gif) 

Now, when an event you selected in the GitHub **Webhooks** settings occurs in your repository (such as a push), you will receive a text message shortly after. If you have never used Git before, refer to the [GIT Push and Pull Tutorial ↗](https://www.datacamp.com/tutorial/git-push-pull) for pushing to your repository.

Reference the finished code [on GitHub ↗](https://github.com/rickyrobinett/workers-sdk/tree/main/templates/examples/github-sms-notifications-using-twilio).

By completing this tutorial, you have learned how to build webhooks using Workers, integrate Workers with GitHub and Twilio, and use Worker secrets with Wrangler.

## Related resources

* [Build a JAMStack app](https://developers.cloudflare.com/workers/tutorials/build-a-jamstack-app/)
* [Build a QR code generator](https://developers.cloudflare.com/workers/tutorials/build-a-qr-code-generator/)


---

---
title: Handle form submissions with Airtable
description: Use Cloudflare Workers and Airtable to persist form submissions from a front-end user interface. Workers will handle incoming form submissions and use Airtables REST API to asynchronously persist the data in an Airtable base.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Handle form submissions with Airtable

**Last reviewed:**  almost 3 years ago 

In this tutorial, you will use [Cloudflare Workers](https://developers.cloudflare.com/workers/) and [Airtable ↗](https://airtable.com) to persist form submissions from a front-end user interface. Airtable is a free-to-use spreadsheet solution that has an approachable API for developers. Workers will handle incoming form submissions and use Airtable's [REST API ↗](https://airtable.com/api) to asynchronously persist the data in an Airtable base (Airtable's term for a spreadsheet) for later reference.

![GIF of a complete Airtable and serverless function integration](https://developers.cloudflare.com/images/workers/tutorials/airtable/example.gif) 

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## 1\. Create a form

For this tutorial, you will be building a Workers function that handles input from a contact form. The form this tutorial references will collect a first name, last name, email address, phone number, message subject, and a message.

Build a form

If this is your first time building a form and you would like to follow a tutorial to create a form with Cloudflare Pages, refer to the [HTML forms](https://developers.cloudflare.com/pages/tutorials/forms) tutorial.

Review a simplified example of the form used in this tutorial. Note that the `action` parameter of the `<form>` tag should point to the deployed Workers application that you will build.

Your front-end code

```
<form action="https://workers-airtable-form.signalnerve.workers.dev/submit" method="POST">
  <div>
    <label for="first_name">First name</label>
    <input type="text" name="first_name" id="first_name" autocomplete="given-name" placeholder="Ellen" required />
  </div>

  <div>
    <label for="last_name">Last name</label>
    <input type="text" name="last_name" id="last_name" autocomplete="family-name" placeholder="Ripley" required />
  </div>

  <div>
    <label for="email">Email</label>
    <input id="email" name="email" type="email" autocomplete="email" placeholder="eripley@nostromo.com" required />
  </div>

  <div>
    <label for="phone">
      Phone
      <span>Optional</span>
    </label>
    <input type="text" name="phone" id="phone" autocomplete="tel" placeholder="+1 (123) 456-7890" />
  </div>

  <div>
    <label for="subject">Subject</label>
    <input type="text" name="subject" id="subject" placeholder="Your example subject" required />
  </div>

  <div>
    <label for="message">
      Message
      <span>Max 500 characters</span>
    </label>
    <textarea id="message" name="message" rows="4" placeholder="Tenetur quaerat expedita vero et illo. Tenetur explicabo dolor voluptatem eveniet. Commodi est beatae id voluptatum porro laudantium. Quam placeat accusamus vel officiis vel. Et perferendis dicta ut perspiciatis quos iste. Tempore autem molestias voluptates in sapiente enim doloremque." required></textarea>
  </div>

  <div>
    <button type="submit">Submit</button>
  </div>
</form>
```

## 2\. Create a Worker project

To handle the form submission, create and deploy a Worker that parses the incoming form data and prepares it for submission to Airtable.

Create a new `airtable-form-handler` Worker project:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- airtable-form-handler
```

```
yarn create cloudflare airtable-form-handler
```

```
pnpm create cloudflare@latest airtable-form-handler
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `JavaScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Then, move into the newly created directory:

Terminal window

```
cd airtable-form-handler
```

## 3\. Configure an Airtable base

When your Worker is complete, it will send data up to an Airtable base via Airtable's REST API.

If you do not have an Airtable account, create one (the free plan is sufficient to complete this tutorial). In Airtable's dashboard, create a new base by selecting **Start from scratch**.

After you have created a new base, set it up for use with the front-end form. Delete the existing columns, and create six columns, with the following field types:

| Field name   | Airtable field type |
| ------------ | ------------------- |
| First Name   | "Single line text"  |
| Last Name    | "Single line text"  |
| Email        | "Email"             |
| Phone Number | "Phone number"      |
| Subject      | "Single line text"  |
| Message      | "Long text"         |

Note that the field names are case-sensitive. If you change the field names, you will need to exactly match your new field names in the API request you make to Airtable later in the tutorial. Finally, you can optionally rename your table; by default, it will have a name like Table 1. The code below assumes the table has been renamed to something more descriptive, such as `Form Submissions`.
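Because a table name like `Form Submissions` contains a space, it must be URL-encoded when it appears in an Airtable API path, which is why the Worker code later in this tutorial wraps it in `encodeURIComponent`:

```javascript
const baseId = "exampleBaseId"; // placeholder base ID
const tableName = "Form Submissions";

// The space in the table name becomes %20 in the API path:
const endpoint = `https://api.airtable.com/v0/${baseId}/${encodeURIComponent(tableName)}`;
console.log(endpoint); // "https://api.airtable.com/v0/exampleBaseId/Form%20Submissions"
```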

Next, navigate to [Airtable's API page ↗](https://airtable.com/api) and select your new base. Note that you must be logged into Airtable to see your base information. In the API documentation page, find your **Airtable base ID**.

You will also need to create a **Personal access token** that you'll use to access your Airtable base. You can do so by visiting the [Personal access tokens ↗](https://airtable.com/create/tokens) page on Airtable's website and creating a new token. Make sure that you configure the token in the following way:

* Scope: the `data.records:write` scope must be set on the token
* Access: access should be granted to the base you have been working with in this tutorial

The resulting access token now needs to be made available to your application. To make the token available in your codebase, use the [wrangler secret](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) command. The `secret` command encrypts and stores environment variables for use in your function, without revealing them to users.

Run `wrangler secret put`, passing `AIRTABLE_ACCESS_TOKEN` as the name of your secret:

Terminal window

```
npx wrangler secret put AIRTABLE_ACCESS_TOKEN
```

```
Enter the secret text you would like assigned to the variable AIRTABLE_ACCESS_TOKEN on the script named airtable-form-handler:
******
🌀  Creating the secret for script name airtable-form-handler
✨  Success! Uploaded secret AIRTABLE_ACCESS_TOKEN.
```

Before you continue, review the keys that you should have from Airtable:

1. **Airtable Table Name**: The name for your table, like Form Submissions.
2. **Airtable Base ID**: The alphanumeric base ID found at the top of your base's API page.
3. **Airtable Access Token**: A Personal Access Token created by the user to access information about your new Airtable base.

## 4\. Submit data to Airtable

With your Airtable base set up, and the keys and IDs you need to communicate with the API ready, you will now set up your Worker to persist data from your form into Airtable.

In your Worker project's `index.js` file, replace the default code with a Workers fetch handler that can respond to requests. When the requested URL has a pathname of `/submit`, you will handle a new form submission; otherwise, return a `404 Not Found` response.

JavaScript

```
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === "/submit") {
      return submitHandler(request, env);
    }
    return new Response("Not found", { status: 404 });
  },
};
```

The `submitHandler` function has two tasks. First, it parses the form data coming from your HTML5 form. Once the data is parsed, it uses the Airtable API to persist a new row (a new form submission) to your table, and returns a response to the client:

JavaScript

```
async function submitHandler(request, env) {
  if (request.method !== "POST") {
    return new Response("Method Not Allowed", {
      status: 405,
    });
  }
  const body = await request.formData();

  const { first_name, last_name, email, phone, subject, message } =
    Object.fromEntries(body);

  // The keys in "fields" are case-sensitive, and
  // should exactly match the field names you set up
  // in your Airtable table, such as "First Name".
  const reqBody = {
    fields: {
      "First Name": first_name,
      "Last Name": last_name,
      Email: email,
      "Phone Number": phone,
      Subject: subject,
      Message: message,
    },
  };

  await createAirtableRecord(env, reqBody);
  return new Response("OK");
}

// Existing code
// export default ...
```

Prevent potential errors when accessing request.body

The body of a [Request ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request) can only be accessed once. If you previously used `request.formData()` in the same request, you may encounter a TypeError when attempting to access `request.body`.

To avoid errors, create a clone of the Request object with `request.clone()` for each subsequent attempt to access a Request's body. Keep in mind that Workers have a [memory limit of 128 MB per Worker](https://developers.cloudflare.com/workers/platform/limits/#memory) and loading particularly large files into a Worker's memory multiple times may reach this limit. To ensure memory usage does not reach this limit, consider using [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).
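A minimal sketch of this behavior (the `Request` global is available in both Workers and modern Node.js):

```javascript
const request = new Request("https://example.com/submit", {
  method: "POST",
  body: "hello",
});

// Clone before the body is consumed; each copy can be read once.
const clone = request.clone();

const first = await request.text(); // consumes the original body
const second = await clone.text(); // reads the clone's copy of the body
console.log(first, second); // "hello hello"

// Reading the original a second time would now throw a TypeError.
```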

While the majority of this function is concerned with parsing the request body (the data being sent as part of the request), there are two important things to note. First, if the HTTP method sent to this function is not `POST`, you will return a new response with the status code of [405 Method Not Allowed ↗](https://httpstatuses.com/405).

The variable `reqBody` represents a collection of fields, which are key-value pairs for each column in your Airtable table. By formatting `reqBody` as an object with a collection of fields, you are creating a new record in your table with a value for each field.
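The parsing step relies on `FormData` being iterable as `[name, value]` pairs, so `Object.fromEntries` turns a submission into a plain object whose keys match the form's `name` attributes:

```javascript
// Simulate a submission of some of the tutorial's form fields.
const body = new FormData();
body.append("first_name", "Ellen");
body.append("last_name", "Ripley");
body.append("email", "eripley@nostromo.com");

const { first_name, last_name, email } = Object.fromEntries(body);
console.log(first_name, last_name, email); // Ellen Ripley eripley@nostromo.com
```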

Then you call `createAirtableRecord` (the function you will define next). The `createAirtableRecord` function accepts a `body` parameter, which conforms to the Airtable API's required format — namely, a JavaScript object containing key-value pairs under `fields`, representing a single record to be created on your table:

JavaScript

```
async function createAirtableRecord(env, body) {
  try {
    const result = await fetch(
      `https://api.airtable.com/v0/${env.AIRTABLE_BASE_ID}/${encodeURIComponent(env.AIRTABLE_TABLE_NAME)}`,
      {
        method: "POST",
        body: JSON.stringify(body),
        headers: {
          Authorization: `Bearer ${env.AIRTABLE_ACCESS_TOKEN}`,
          "Content-Type": "application/json",
        },
      },
    );
    return result;
  } catch (error) {
    console.error(error);
  }
}

// Existing code
// async function submitHandler
// export default ...
```

To make an authenticated request to Airtable, you need to provide three values: the access token, the base ID, and the table name. You have already set `AIRTABLE_ACCESS_TOKEN` using `wrangler secret`, since it is a value that should be encrypted. The **Airtable base ID** and **table name** are values that can be publicly shared in places like GitHub. Use Wrangler's [vars](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#vars) feature to pass public environment variables from your Wrangler file.

Add a `vars` table at the end of your Wrangler file:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "workers-airtable-form",
  "main": "src/index.js",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "vars": {
    "AIRTABLE_BASE_ID": "exampleBaseId",
    "AIRTABLE_TABLE_NAME": "Form Submissions"
  }
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "workers-airtable-form"
main = "src/index.js"
# Set this to today's date
compatibility_date = "2026-04-03"

[vars]
AIRTABLE_BASE_ID = "exampleBaseId"
AIRTABLE_TABLE_NAME = "Form Submissions"
```

With these values in place, it is time to deploy your Workers serverless function and get your form communicating with it. First, publish your Worker:

Deploy your Worker

```
npx wrangler deploy
```

Your Worker project will deploy to a unique URL — for example, `https://workers-airtable-form.cloudflare.workers.dev`. This represents the first part of your front-end form's `action` attribute — the second part is the path for your form handler, which is `/submit`. In your front-end UI, configure your `form` tag as seen below:

```
<form
  action="https://workers-airtable-form.cloudflare.workers.dev/submit"
  method="POST"
  class="..."
>
  <!-- The rest of your HTML form -->
</form>
```

After you have deployed your new form (refer to the [HTML forms](https://developers.cloudflare.com/pages/tutorials/forms) tutorial if you need help creating a form), you should be able to submit a new form submission and see the value show up immediately in Airtable:

![Example GIF of complete Airtable and serverless function integration](https://developers.cloudflare.com/images/workers/tutorials/airtable/example.gif) 

## Conclusion

With this tutorial completed, you have created a Worker that can accept form submissions and persist them to Airtable. You have learned how to parse form data, set up environment variables, and use the `fetch` API to make requests to external services outside of your Worker.

## Related resources

* [Build a Slackbot](https://developers.cloudflare.com/workers/tutorials/build-a-slackbot)
* [Build a To-Do List Jamstack App](https://developers.cloudflare.com/workers/tutorials/build-a-jamstack-app)
* [Build a blog using Nuxt.js and Sanity.io on Cloudflare Pages](https://developers.cloudflare.com/pages/tutorials/build-a-blog-using-nuxt-and-sanity)
* [James Quick's video on building a Cloudflare Workers + Airtable integration ↗](https://www.youtube.com/watch?v=tFQ2kbiu1K4)


---

---
title: Connect to a MySQL database with Cloudflare Workers
description: This tutorial explains how to connect to a MySQL database using TCP Sockets and Hyperdrive. The Workers application you create in this tutorial will interact with a product database inside of MySQL.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Connect to a MySQL database with Cloudflare Workers

**Last reviewed:**  about 1 year ago 

In this tutorial, you will learn how to create a Cloudflare Workers application and connect it to a MySQL database using [TCP Sockets](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) and [Hyperdrive](https://developers.cloudflare.com/hyperdrive/). The Workers application you create in this tutorial will interact with a product database inside of MySQL.

Note

We recommend using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) to connect to your MySQL database. Hyperdrive provides optimal performance and will ensure secure connectivity between your Worker and your MySQL database.

When connecting directly to your MySQL database (without Hyperdrive), the MySQL drivers rely on unsupported Node.js APIs to create secure connections, which prevents connections from being established.

## Prerequisites

To continue:

1. Sign up for a [Cloudflare account ↗](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [npm ↗](https://docs.npmjs.com/getting-started).
3. Install [Node.js ↗](https://nodejs.org/en/). Use a Node version manager like [Volta ↗](https://volta.sh/) or [nvm ↗](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.
4. Make sure you have access to a MySQL database.

## 1\. Create a Worker application

First, use the [create-cloudflare CLI ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker application. To do this, open a terminal window and run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- mysql-tutorial
```

```
yarn create cloudflare mysql-tutorial
```

```
pnpm create cloudflare@latest mysql-tutorial
```

This will prompt you to install the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) package and lead you through a setup wizard.

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

If you choose to deploy, you will be asked to authenticate (if not logged in already), and your project will be deployed. If you deploy, you can still modify your Worker code and deploy again at the end of this tutorial.

Now, move into the newly created directory:

Terminal window

```
cd mysql-tutorial
```

## 2\. Enable Node.js compatibility

[Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) is required for database drivers, including mysql2, and needs to be configured for your Workers project.

To enable both built-in runtime APIs and polyfills for your Worker or Pages project, add the [nodejs\_compat](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag) [compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag) to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), and set your compatibility date to September 23rd, 2024 or later. This will enable [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) for your Workers project.

wrangler.jsonc

```
{
  "compatibility_flags": ["nodejs_compat"],
  // Set this to today's date
  "compatibility_date": "2026-04-03"
}
```

wrangler.toml

```
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"
```

## 3\. Create a Hyperdrive configuration

Create a Hyperdrive configuration using the connection string for your MySQL database.

Terminal window

```
npx wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string="mysql://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```

This command outputs the Hyperdrive configuration `id` that will be used for your Hyperdrive [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/). Set up your binding by specifying the `id` in the Wrangler file.

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "hyperdrive-example",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],
  // Pasted from the output of `wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string=[...]` above.
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"
    }
  ]
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "hyperdrive-example"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"
```
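The connection string you passed to `wrangler hyperdrive create` bundles the same parameters that the Worker later reads individually from the Hyperdrive binding (`host`, `user`, `password`, `port`, and `database`). Parsing a placeholder string with the WHATWG `URL` API shows the mapping:

```javascript
const connectionString =
  "mysql://user:password@db.example.com:3306/database_name"; // placeholder values

const url = new URL(connectionString);
console.log(url.username); // "user"
console.log(url.hostname); // "db.example.com"
console.log(url.port); // "3306"
console.log(url.pathname.slice(1)); // "database_name"
```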

## 4\. Query your database from your Worker

Install the [mysql2 ↗](https://github.com/sidorares/node-mysql2) driver:

 npm  yarn  pnpm  bun 

```
npm i "mysql2@>=3.13.0"
```

```
yarn add "mysql2@>=3.13.0"
```

```
pnpm add "mysql2@>=3.13.0"
```

```
bun add "mysql2@>=3.13.0"
```

Note

`mysql2` v3.13.0 or later is required

Confirm that your Wrangler file includes the `nodejs_compat` compatibility flag and the Hyperdrive binding:

wrangler.jsonc

```
{
  // required for database drivers to function
  "compatibility_flags": ["nodejs_compat"],
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id-here>"
    }
  ]
}
```

wrangler.toml

```
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-id-here>"
```

Create a new `connection` instance and pass the Hyperdrive parameters:

TypeScript

```ts
// mysql2 v3.13.0 or later is required
import { createConnection } from "mysql2/promise";

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Create a new connection on each request. Hyperdrive maintains the underlying
    // database connection pool, so creating a new connection is fast.
    const connection = await createConnection({
      host: env.HYPERDRIVE.host,
      user: env.HYPERDRIVE.user,
      password: env.HYPERDRIVE.password,
      database: env.HYPERDRIVE.database,
      port: env.HYPERDRIVE.port,

      // Required to enable mysql2 compatibility for Workers
      disableEval: true,
    });

    try {
      // Sample query
      const [results, fields] = await connection.query("SHOW tables;");

      // Return result rows as JSON
      return Response.json({ results, fields });
    } catch (e) {
      console.error(e);
      return Response.json(
        { error: e instanceof Error ? e.message : e },
        { status: 500 },
      );
    }
  },
} satisfies ExportedHandler<Env>;
```


## 5\. Deploy your Worker

Run the following command to deploy your Worker:

Terminal window

```sh
npx wrangler deploy
```

Your application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

## Next steps

To build more with databases and Workers, refer to [Tutorials](https://developers.cloudflare.com/workers/tutorials) and explore the [Databases documentation](https://developers.cloudflare.com/workers/databases).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord ↗](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team.


---

---
title: OpenAI GPT function calling with JavaScript and Cloudflare Workers
description: Build a project that leverages OpenAI's function calling feature, available in OpenAI's latest Chat Completions API models.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# OpenAI GPT function calling with JavaScript and Cloudflare Workers

**Last reviewed:**  almost 3 years ago 

In this tutorial, you will build a project that leverages [OpenAI's function calling ↗](https://platform.openai.com/docs/guides/function-calling) feature, available in OpenAI's latest Chat Completions API models.

The function calling feature allows the AI model to intelligently decide when to call a function based on the input, and to respond in JSON format matching the function's signature. You will use function calling to ask the model to determine a website URL containing information relevant to a message from the user, retrieve the text content of that site, and, finally, return a response from the model informed by real-time web data.

## What you will learn

* How to use OpenAI's function calling feature.
* Integrating OpenAI's API in a Cloudflare Worker.
* Fetching and processing website content using Cheerio.
* Handling API responses and function calls in JavaScript.
* Storing API keys as secrets with Wrangler.

---

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## 1\. Create a new Worker project

Create a Worker project in the command line:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- openai-function-calling-workers
```

```
yarn create cloudflare openai-function-calling-workers
```

```
pnpm create cloudflare@latest openai-function-calling-workers
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `JavaScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Go to your new `openai-function-calling-workers` Worker project:

Terminal window

```sh
cd openai-function-calling-workers
```

Inside of your new `openai-function-calling-workers` directory, find the `src/index.js` file. You will configure this file for most of the tutorial.

You will also need an OpenAI account and API key for this tutorial. If you do not have one, [create a new OpenAI account ↗](https://platform.openai.com/signup) and [create an API key ↗](https://platform.openai.com/account/api-keys) to continue with this tutorial. Make sure to store your API key somewhere safe so you can use it later.

## 2\. Make a request to OpenAI

With your Worker project created, make your first request to OpenAI. You will use the `openai` Node.js library to interact with the OpenAI API. In this project, you will also use the Cheerio library to process the HTML content of websites.

 npm  yarn  pnpm  bun 

```
npm i openai cheerio
```

```
yarn add openai cheerio
```

```
pnpm add openai cheerio
```

```
bun add openai cheerio
```

Now, define the structure of your Worker in `index.js`:

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    // Initialize OpenAI API

    // Handle incoming requests
    return new Response("Hello World!");
  },
};
```

Above `export default`, add the imports for `openai` and `cheerio`:

JavaScript

```js
import OpenAI from "openai";
import * as cheerio from "cheerio";
```

Within your `fetch` function, instantiate your `OpenAI` client:

JavaScript

```js
async fetch(request, env, ctx) {
  const openai = new OpenAI({
    apiKey: env.OPENAI_API_KEY,
  });

  // Handle incoming requests
  return new Response('Hello World!');
},
```

Use [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret-put) to set `OPENAI_API_KEY`. This [secret's](https://developers.cloudflare.com/workers/configuration/secrets/) value is the API key you created earlier in the OpenAI dashboard:

Terminal window

```sh
npx wrangler secret put OPENAI_API_KEY
```

For local development, create a new file `.dev.vars` in your Worker project and add the following line, replacing `<YOUR_OPENAI_API_KEY>` with your own OpenAI API key:

```
OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"
```

Now, make a request to the OpenAI [Chat Completions API ↗](https://platform.openai.com/docs/guides/gpt/chat-completions-api):

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    const openai = new OpenAI({
      apiKey: env.OPENAI_API_KEY,
    });

    const url = new URL(request.url);
    const message = url.searchParams.get("message");

    const messages = [
      {
        role: "user",
        content: message ? message : "What's in the news today?",
      },
    ];

    const tools = [
      {
        type: "function",
        function: {
          name: "read_website_content",
          description: "Read the content on a given website",
          parameters: {
            type: "object",
            properties: {
              url: {
                type: "string",
                description: "The URL to the website to read",
              },
            },
            required: ["url"],
          },
        },
      },
    ];

    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: messages,
      tools: tools,
      tool_choice: "auto",
    });

    const assistantMessage = chatCompletion.choices[0].message;
    console.log(assistantMessage);

    // Later you will continue handling the assistant's response here
    return new Response(assistantMessage.content);
  },
};
```

Review the arguments you are passing to OpenAI:

* **model**: The model you want OpenAI to use for your request. In this case, you are using `gpt-4o-mini`.
* **messages**: An array containing all messages that are part of the conversation. Initially, you provide a message from the user; later, you add the response from the model. The content of the user message is either the `message` query parameter from the request URL or the default "What's in the news today?".
* **tools**: An array containing the actions available to the AI model. In this example, you only have one tool, `read_website_content`, which reads the content on a given website.  
   * **name**: The name of your function. In this case, it is `read_website_content`.  
   * **description**: A short description that lets the model know the purpose of the function. This is optional but helps the model know when to select the tool.  
   * **parameters**: A JSON Schema object describing the function. In this case, you request a response containing an object with the required property `url`.
* **tool\_choice**: This argument is technically optional, as `auto` is the default. It indicates that OpenAI may return either a function call or a normal message response.
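When the model decides to call the function, `toolCall.function.arguments` arrives as a JSON string matching the schema above. A small sketch, using a hypothetical value the model might return:

```javascript
// Hypothetical example of the arguments string the model might produce.
const rawArguments = '{"url":"https://example.com/news"}';

// Parse it to recover the url property required by the schema.
const { url } = JSON.parse(rawArguments);
console.log(url); // https://example.com/news
```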

## 3\. Building your `read_website_content()` function

You will now define the `read_website_content` function referenced in the `tools` array. It fetches the content of a given URL and extracts the text from `<p>` tags using the `cheerio` library.

Add this code above the `export default` block in your `index.js` file:

JavaScript

```js
async function read_website_content(url) {
  console.log("reading website content");

  const response = await fetch(url);
  const body = await response.text();
  let cheerioBody = cheerio.load(body);
  const resp = {
    website_body: cheerioBody("p").text(),
    url: url,
  };
  return JSON.stringify(resp);
}
```

In this function, you take the URL that you received from OpenAI and use JavaScript's [Fetch API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Fetch%5FAPI/Using%5FFetch) to pull the content of the website and extract the paragraph text. Next, you need to determine when to call this function.
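The string handed back to the model is a single JSON document. A sketch of its shape, with hypothetical content standing in for real scraped text:

```javascript
// Hypothetical shape of the value read_website_content resolves to.
const resp = {
  website_body: "Paragraph one. Paragraph two.",
  url: "https://example.com",
};
const toolResult = JSON.stringify(resp);

// The model receives this string verbatim as the tool message content,
// and it round-trips cleanly back into an object.
const parsed = JSON.parse(toolResult);
console.log(parsed.url); // https://example.com
```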

## 4\. Process the Assistant's Messages

Next, process the response from the OpenAI API to check whether it includes any function calls. If a function call is present, execute the corresponding function in your Worker. Note that the assistant may request multiple function calls.

Modify the fetch method within the `export default` block as follows:

JavaScript

```js
// ... your previous code ...

if (assistantMessage.tool_calls) {
  // The Chat Completions API requires the assistant message containing the
  // tool calls to precede the tool result messages in the conversation.
  messages.push(assistantMessage);

  for (const toolCall of assistantMessage.tool_calls) {
    if (toolCall.function.name === "read_website_content") {
      const url = JSON.parse(toolCall.function.arguments).url;
      const websiteContent = await read_website_content(url);
      messages.push({
        role: "tool",
        tool_call_id: toolCall.id,
        name: toolCall.function.name,
        content: websiteContent,
      });
    }
  }

  const secondChatCompletion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: messages,
  });

  return new Response(secondChatCompletion.choices[0].message.content);
} else {
  // this is your existing return statement
  return new Response(assistantMessage.content);
}
```

Check whether the assistant message contains any function calls by checking for the `tool_calls` property. Because the model can call multiple functions, loop through each call and append its result to the `messages` array. Each `read_website_content` call invokes the `read_website_content` function you defined earlier, passing the URL chosen by OpenAI as an argument.
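For orientation, the conversation the Chat Completions API expects on the second request contains the original user message, the assistant's reply requesting the tool call, and one `tool` message per call. A minimal sketch with hypothetical ids and content:

```javascript
// Hypothetical sketch of the messages array sent on the second request.
// The id and content values are placeholders, not real API output.
const messages = [
  { role: "user", content: "What's in the news today?" },
  // The assistant's first reply, asking for a tool call:
  {
    role: "assistant",
    content: null,
    tool_calls: [
      {
        id: "call_abc123",
        type: "function",
        function: {
          name: "read_website_content",
          arguments: '{"url":"https://example.com/news"}',
        },
      },
    ],
  },
  // The result of running that tool call in the Worker:
  {
    role: "tool",
    tool_call_id: "call_abc123",
    name: "read_website_content",
    content: '{"website_body":"...","url":"https://example.com/news"}',
  },
];
console.log(messages.map((m) => m.role).join(",")); // user,assistant,tool
```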

The `secondChatCompletion` is needed to provide a response informed by the data you retrieved from each function call. Now, the last step is to deploy your Worker.

Test your code by running `npx wrangler dev` and opening the provided URL in your browser. You should see OpenAI's response, informed by real-time data from the retrieved web content.

## 5\. Deploy your Worker application

Run the following command to deploy your Worker application:

Terminal window

```sh
npx wrangler deploy
```

You can now preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. Going to this URL will display the response from OpenAI. Optionally, add the `message` URL parameter to write a custom message: for example, `https://<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev/?message=What is the weather in NYC today?`.

## 6\. Next steps

Reference the [finished code for this tutorial on GitHub ↗](https://github.com/LoganGrasby/Cloudflare-OpenAI-Functions-Demo/blob/main/src/worker.js).

To continue working with Workers and AI, refer to [the guide on using LangChain and Cloudflare Workers together ↗](https://blog.cloudflare.com/langchain-and-cloudflare/) or [how to build a ChatGPT plugin with Cloudflare Workers ↗](https://blog.cloudflare.com/magic-in-minutes-how-to-build-a-chatgpt-plugin-with-cloudflare-workers/).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord ↗](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team.


---

---
title: Connect to a PostgreSQL database with Cloudflare Workers
description: This tutorial explains how to connect to a Postgres database with Cloudflare Workers. The Workers application you create in this tutorial will interact with a product database inside of Postgres.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Connect to a PostgreSQL database with Cloudflare Workers

**Last reviewed:**  9 months ago 

In this tutorial, you will learn how to create a Cloudflare Workers application and connect it to a PostgreSQL database using [TCP Sockets](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) and [Hyperdrive](https://developers.cloudflare.com/hyperdrive/). The Workers application you create in this tutorial will interact with a product database inside of PostgreSQL.

## Prerequisites

To continue:

1. Sign up for a [Cloudflare account ↗](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [npm ↗](https://docs.npmjs.com/getting-started).
3. Install [Node.js ↗](https://nodejs.org/en/). Use a Node version manager like [Volta ↗](https://volta.sh/) or [nvm ↗](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.
4. Make sure you have access to a PostgreSQL database.

## 1\. Create a Worker application

First, use the [create-cloudflare CLI ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker application. To do this, open a terminal window and run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- postgres-tutorial
```

```
yarn create cloudflare postgres-tutorial
```

```
pnpm create cloudflare@latest postgres-tutorial
```

This will prompt you to install the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) package and lead you through a setup wizard.

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

If you choose to deploy, you will be asked to authenticate (if not logged in already), and your project will be deployed. If you deploy, you can still modify your Worker code and deploy again at the end of this tutorial.

Now, move into the newly created directory:

Terminal window

```sh
cd postgres-tutorial
```

### Enable Node.js compatibility

[Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) is required for database drivers, including Postgres.js, and needs to be configured for your Workers project.

To enable both built-in runtime APIs and polyfills for your Worker or Pages project, add the [nodejs\_compat](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag) [compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag) to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), and set your compatibility date to September 23rd, 2024 or later. This will enable [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) for your Workers project.

wrangler.jsonc

```jsonc
{
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03"
}
```

wrangler.toml

```toml
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"
```

## 2\. Add the PostgreSQL connection library

To connect to a PostgreSQL database, you will need the `pg` library. In your Worker application directory, run the following command to install the library:

 npm  yarn  pnpm  bun 

```
npm i pg
```

```
yarn add pg
```

```
pnpm add pg
```

```
bun add pg
```

Next, install the TypeScript types for the `pg` library to enable type checking and autocompletion in your TypeScript code:

 npm  yarn  pnpm  bun 

```
npm i -D @types/pg
```

```
yarn add -D @types/pg
```

```
pnpm add -D @types/pg
```

```
bun add -d @types/pg
```

Note

Make sure you are using `pg` (`node-postgres`) version `8.16.3` or higher.

## 3\. Configure the connection to the PostgreSQL database

Choose one of the two methods to connect to your PostgreSQL database:

1. [Use a connection string](#use-a-connection-string).
2. [Set explicit parameters](#set-explicit-parameters).

### Use a connection string

A connection string contains all the information needed to connect to a database. It is a URL that contains the following information:

```
postgresql://username:password@host:port/database
```

Replace `username`, `password`, `host`, `port`, and `database` with the appropriate values for your PostgreSQL database.
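Because the connection string is a URL, you can sanity-check its components with the WHATWG `URL` class. The credentials and host below are placeholders for illustration:

```javascript
// Parse a placeholder connection string to confirm each component.
const connectionString =
  "postgresql://myuser:mypassword@db.example.com:5432/productsdb";
const parsed = new URL(connectionString);

console.log(parsed.username); // myuser
console.log(parsed.hostname); // db.example.com
console.log(parsed.port); // 5432
console.log(parsed.pathname.slice(1)); // productsdb (the database name)
```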

Set your connection string as a [secret](https://developers.cloudflare.com/workers/configuration/secrets/) so that it is not stored as plain text. Use [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) with the example variable name `DB_URL`:

Terminal window

```sh
npx wrangler secret put DB_URL
```

```
➜  wrangler secret put DB_URL
-------------------------------------------------------
? Enter a secret value: › ********************
✨ Success! Uploaded secret DB_URL
```

Set your `DB_URL` secret locally in a `.dev.vars` file as documented in [Local Development with Secrets](https://developers.cloudflare.com/workers/configuration/secrets/).

.dev.vars

```
DB_URL="<ENTER YOUR POSTGRESQL CONNECTION STRING>"
```

### Set explicit parameters

Configure each database parameter as an [environment variable](https://developers.cloudflare.com/workers/configuration/environment-variables/) via the [Cloudflare dashboard](https://developers.cloudflare.com/workers/configuration/environment-variables/#add-environment-variables-via-the-dashboard) or in your Wrangler file. Refer to an example of a Wrangler file configuration:

wrangler.jsonc

```jsonc
{
  "vars": {
    "DB_USERNAME": "postgres",
    // Set your password by creating a secret so it is not stored as plain text
    "DB_HOST": "ep-aged-sound-175961.us-east-2.aws.neon.tech",
    "DB_PORT": 5432,
    "DB_NAME": "productsdb"
  }
}
```

wrangler.toml

```toml
[vars]
DB_USERNAME = "postgres"
DB_HOST = "ep-aged-sound-175961.us-east-2.aws.neon.tech"
DB_PORT = 5432
DB_NAME = "productsdb"
```

To set your password as a [secret](https://developers.cloudflare.com/workers/configuration/secrets/) so that it is not stored as plain text, use [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret). `DB_PASSWORD` is an example variable name for this secret to be accessed in your Worker:

Terminal window

```sh
npx wrangler secret put DB_PASSWORD
```

```
-------------------------------------------------------
? Enter a secret value: › ********************
✨ Success! Uploaded secret DB_PASSWORD
```

## 4\. Connect to the PostgreSQL database in the Worker

Open your Worker's main file (for example, `worker.ts`) and import the `Client` class from the `pg` library:

TypeScript

```ts
import { Client } from "pg";
```

In the `fetch` event handler, connect to the PostgreSQL database using your chosen method, either the connection string or the explicit parameters.

### Use a connection string

TypeScript

```ts
// create a new Client instance using the connection string
const sql = new Client({ connectionString: env.DB_URL });

// connect to the PostgreSQL database
await sql.connect();
```

### Set explicit parameters

TypeScript

```ts
// create a new Client instance using explicit parameters
const sql = new Client({
  user: env.DB_USERNAME,
  password: env.DB_PASSWORD,
  host: env.DB_HOST,
  port: env.DB_PORT,
  database: env.DB_NAME,
  ssl: true, // Enable SSL for secure connections
});

// connect to the PostgreSQL database
await sql.connect();
```

## 5\. Interact with the products database

To demonstrate how to interact with the products database, you will fetch data from the `products` table by querying the table when a request is received.

Note

If you are following along in your own PostgreSQL instance, set up the `products` table using the following SQL `CREATE TABLE` statement. This statement defines the columns and their respective data types for the `products` table:

```sql
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  description TEXT,
  price DECIMAL(10, 2) NOT NULL
);
```

Replace the existing code in your `worker.ts` file with the following code:

TypeScript

```ts
import { Client } from "pg";

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Create a new Client instance using the connection string
    // or explicit parameters as shown in the previous steps.
    // Here, we are using the connection string method.
    const sql = new Client({
      connectionString: env.DB_URL,
    });

    // Connect to the PostgreSQL database
    await sql.connect();

    // Query the products table
    const result = await sql.query("SELECT * FROM products");

    // Return the result as JSON
    return new Response(JSON.stringify(result.rows), {
      headers: {
        "Content-Type": "application/json",
      },
    });
  },
} satisfies ExportedHandler<Env>;
```

This code establishes a connection to the PostgreSQL database within your Worker application and queries the `products` table, returning the results as a JSON response.

## 6\. Deploy your Worker

Run the following command to deploy your Worker:

Terminal window

```sh
npx wrangler deploy
```

Your application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

After deploying, you can interact with your PostgreSQL products database using your Cloudflare Worker. Whenever a request is made to your Worker's URL, it will fetch data from the `products` table and return it as a JSON response. You can modify the query as needed to retrieve the desired data from your products database.

## 7\. Insert a new row into the products database

To insert a new row into the `products` table, create a new API endpoint in your Worker that handles a `POST` request. When a `POST` request is received with a JSON payload, the Worker will insert a new row into the `products` table with the provided data.

Assume the `products` table has the following columns: `id`, `name`, `description`, and `price`.

Add the following code snippet inside the `fetch` event handler in your `worker.ts` file, before the existing query code:

TypeScript

```ts
import { Client } from "pg";

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Create a new Client instance using the connection string
    // or explicit parameters as shown in the previous steps.
    // Here, we are using the connection string method.
    const sql = new Client({
      connectionString: env.DB_URL,
    });

    // Connect to the PostgreSQL database
    await sql.connect();

    const url = new URL(request.url);
    if (request.method === "POST" && url.pathname === "/products") {
      // Parse the request's JSON payload
      const productData = (await request.json()) as {
        name: string;
        description: string;
        price: number;
      };

      const name = productData.name,
        description = productData.description,
        price = productData.price;

      // Insert the new product into the products table
      const insertResult = await sql.query(
        `INSERT INTO products(name, description, price) VALUES($1, $2, $3)
    RETURNING *`,
        [name, description, price],
      );

      // Return the inserted row as JSON
      return new Response(JSON.stringify(insertResult.rows), {
        headers: { "Content-Type": "application/json" },
      });
    }

    // Query the products table
    const result = await sql.query("SELECT * FROM products");

    // Return the result as JSON
    return new Response(JSON.stringify(result.rows), {
      headers: {
        "Content-Type": "application/json",
      },
    });
  },
} satisfies ExportedHandler<Env>;
```

This code snippet does the following:

1. Checks if the request is a `POST` request and the URL path is `/products`.
2. Parses the JSON payload from the request.
3. Constructs an `INSERT` SQL query using the provided product data.
4. Executes the query, inserting the new row into the `products` table.
5. Returns the inserted row as a JSON response.
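Note that the query uses the placeholders `$1`, `$2`, `$3` rather than string interpolation, which guards against SQL injection; the values array supplies them in order. A quick illustration with a hypothetical payload:

```javascript
// Hypothetical payload, matching the shape the endpoint parses.
const productData = {
  name: "Sample Product",
  description: "This is a sample product",
  price: 19.99,
};

// These values fill $1, $2, $3 in the INSERT statement, in order.
const values = [productData.name, productData.description, productData.price];
console.log(values.length); // 3
```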

Now, when you send a `POST` request to your Worker's URL with the `/products` path and a JSON payload, the Worker will insert a new row into the `products` table with the provided data. When a request to `/` is made, the Worker will return all products in the database.

After making these changes, deploy the Worker again by running:

Terminal window

```sh
npx wrangler deploy
```

You can now use your Cloudflare Worker to insert new rows into the `products` table. To test this functionality, send a `POST` request to your Worker's URL with the `/products` path, along with a JSON payload containing the new product data:

```json
{
  "name": "Sample Product",
  "description": "This is a sample product",
  "price": 19.99
}
```
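To exercise the endpoint, you could send that payload with `fetch`. The sketch below only constructs the request rather than sending it, and the hostname is a placeholder for your deployed Worker:

```javascript
// Build (but do not send) the POST request for the /products endpoint.
// The hostname below is a placeholder.
const body = JSON.stringify({
  name: "Sample Product",
  description: "This is a sample product",
  price: 19.99,
});

const request = new Request(
  "https://postgres-tutorial.example.workers.dev/products",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  },
);

console.log(request.method, new URL(request.url).pathname); // POST /products
```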

You have successfully created a Cloudflare Worker that connects to a PostgreSQL database and handles fetching data and inserting new rows into a products table.

## 8\. Use Hyperdrive to accelerate queries

Create a Hyperdrive configuration using the connection string for your PostgreSQL database.

Terminal window

```sh
npx wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name" --caching-disabled
```

This command outputs the Hyperdrive configuration `id` that will be used for your Hyperdrive [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/). Set up your binding by specifying the `id` in the Wrangler file.

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "hyperdrive-example",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Pasted from the output of `wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string=[...]` above.
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"
    }
  ]
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "hyperdrive-example"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"
```

Create the types for your Hyperdrive binding using the following command:

Terminal window

```sh
npx wrangler types
```

Replace your existing connection string in your Worker code with the Hyperdrive connection string.

TypeScript

```ts
import { Client } from "pg";

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const sql = new Client({ connectionString: env.HYPERDRIVE.connectionString });

    const url = new URL(request.url);

    // ...rest of the routes and database queries
  },
} satisfies ExportedHandler<Env>;
```

## 9\. Redeploy your Worker

Run the following command to deploy your Worker:

Terminal window

```sh
npx wrangler deploy
```

Your Worker application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`, using Hyperdrive. Hyperdrive accelerates database queries by pooling your connections and caching your requests across the globe.

## Next steps

To build more with databases and Workers, refer to [Tutorials](https://developers.cloudflare.com/workers/tutorials) and explore the [Databases documentation](https://developers.cloudflare.com/workers/databases).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord ↗](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team.


---

---
title: Send Emails With Postmark
description: This tutorial explains how to send transactional emails from Workers using Postmark.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Send Emails With Postmark

**Last reviewed:**  almost 2 years ago 

In this tutorial, you will learn how to send transactional emails from Workers using [Postmark ↗](https://postmarkapp.com/). At the end of this tutorial, you’ll be able to:

* Create a Worker to send emails.
* Sign up and add a Cloudflare domain to Postmark.
* Send emails from your Worker using Postmark.
* Store API keys securely with secrets.

## Prerequisites

To continue with this tutorial, you’ll need:

* A [Cloudflare account ↗](https://dash.cloudflare.com/sign-up/workers-and-pages), if you don’t already have one.
* A [registered](https://developers.cloudflare.com/registrar/get-started/register-domain/) domain.
* Installed [npm ↗](https://docs.npmjs.com/getting-started).
* A [Postmark account ↗](https://account.postmarkapp.com/sign%5Fup).

## Create a Worker project

Start by using [C3](https://developers.cloudflare.com/pages/get-started/c3/) to create a Worker project from the command line, then answer the prompts:

Terminal window

```
npm create cloudflare@latest
```

Alternatively, you can use CLI arguments to speed things up:

Terminal window

```
npm create cloudflare@latest email-with-postmark -- --type=hello-world --ts=false --git=true --deploy=false
```

This creates a simple hello-world Worker with the following content:

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

## Add your domain to Postmark

If you don’t already have a Postmark account, you can sign up for a [free account here ↗](https://account.postmarkapp.com/sign%5Fup). After signing up, check your inbox for a link to confirm your sender signature. This verifies and enables you to send emails from your registered email address.

To enable email sending from other addresses on your domain, navigate to `Sender Signatures` on the Postmark dashboard, `Add Domain or Signature` \> `Add Domain`, then type in your domain and click on `Verify Domain`.

Next, you’re presented with a list of DNS records to add to your Cloudflare domain. On your Cloudflare dashboard, select the domain you entered earlier and navigate to `DNS` \> `Records`. Copy the DNS records (DKIM and Return-Path) from Postmark to your Cloudflare domain.

![Image of adding DNS records to a Cloudflare domain](https://developers.cloudflare.com/_astro/add_dns_records.CuwqhmEV_Z1PK0DA.webp) 

Note

If you need more help adding DNS records in Cloudflare, refer to [Manage DNS records](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/).

When that’s done, head back to Postmark and click on the `Verify` buttons. If all records are properly configured, your domain status should be updated to `Verified`.

![Image of domain verification on the Postmark dashboard](https://developers.cloudflare.com/_astro/verified_domain.CSwUI8xQ_ZJiRKw.webp) 

To grab your API token, navigate to the `Servers` tab, then `My First Server` \> `API Tokens`, then copy your API key to a safe place.

## Send emails from your Worker

The final step is putting it all together in a Worker. In your Worker, make a POST request with `fetch` to Postmark’s email API and include your token and message body:

Note

[Postmark’s JavaScript library ↗](https://www.npmjs.com/package/postmark) is currently not supported on Workers. Use the [email API ↗](https://postmarkapp.com/developer/user-guide/send-email-with-api) instead.

```js
export default {
  async fetch(request, env, ctx) {
    return await fetch("https://api.postmarkapp.com/email", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Postmark-Server-Token": "your_postmark_api_token_here",
      },
      body: JSON.stringify({
        From: "hello@example.com",
        To: "someone@example.com",
        Subject: "Hello World",
        HtmlBody: "<p>Hello from Workers</p>",
      }),
    });
  },
};
```
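If you want to unit test the request shape without calling the API, the construction can be factored into a small helper. A minimal sketch (the helper name and the `PostmarkEmail` type are illustrative, not part of Postmark's API):

```typescript
interface PostmarkEmail {
  From: string;
  To: string;
  Subject: string;
  HtmlBody: string;
}

// Build (but do not send) the fetch arguments for Postmark's /email endpoint,
// so the method, headers, and body can be inspected in tests.
function buildPostmarkRequest(
  token: string,
  email: PostmarkEmail,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: "https://api.postmarkapp.com/email",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Postmark-Server-Token": token,
      },
      body: JSON.stringify(email),
    },
  };
}
```

In the Worker, the handler body then reduces to `const { url, init } = buildPostmarkRequest(token, fields); return await fetch(url, init);`.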

To test your code locally, run the following command and navigate to [http://localhost:8787/ ↗](http://localhost:8787/) in a browser:

Terminal window

```
npm start
```

Deploy your Worker with `npm run deploy`.

## Move API token to Secrets

Sensitive information such as API keys and tokens should always be stored in secrets. All secrets are encrypted to add an extra layer of protection. For this reason, it’s a good idea to move your API token to a secret and access it from your Worker’s environment.

To add secrets for local development, create a `.dev.vars` file which works exactly like a `.env` file:

```
POSTMARK_API_TOKEN=your_postmark_api_token_here
```

Also ensure the secret is added to your deployed Worker by running:

Add secret to deployed Worker

```
npx wrangler secret put POSTMARK_API_TOKEN
```

The added secret can be accessed via the `env` parameter passed to your Worker’s fetch event handler:

```js
export default {
  async fetch(request, env, ctx) {
    return await fetch("https://api.postmarkapp.com/email", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Postmark-Server-Token": env.POSTMARK_API_TOKEN,
      },
      body: JSON.stringify({
        From: "hello@example.com",
        To: "someone@example.com",
        Subject: "Hello World",
        HtmlBody: "<p>Hello from Workers</p>",
      }),
    });
  },
};
```

Finally, deploy this update with `npm run deploy`.

## Related resources

* [Storing API keys and tokens with Secrets](https://developers.cloudflare.com/workers/configuration/secrets/).
* [Transferring your domain to Cloudflare](https://developers.cloudflare.com/registrar/get-started/transfer-domain-to-cloudflare/).
* [Send emails from Workers](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/)


---

---
title: Send Emails With Resend
description: This tutorial explains how to send emails from Cloudflare Workers using Resend.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Send Emails With Resend

**Last reviewed:**  almost 2 years ago 

In this tutorial, you will learn how to send transactional emails from Workers using [Resend ↗](https://resend.com/). At the end of this tutorial, you’ll be able to:

* Create a Worker to send emails.
* Sign up and add a Cloudflare domain to Resend.
* Send emails from your Worker using Resend.
* Store API keys securely with secrets.

## Prerequisites

To continue with this tutorial, you’ll need:

* A [Cloudflare account ↗](https://dash.cloudflare.com/sign-up/workers-and-pages), if you don’t already have one.
* A [registered](https://developers.cloudflare.com/registrar/get-started/register-domain/) domain.
* Installed [npm ↗](https://docs.npmjs.com/getting-started).
* A [Resend account ↗](https://resend.com/signup).

## Create a Worker project

Start by using [C3](https://developers.cloudflare.com/pages/get-started/c3/) to create a Worker project from the command line, then answer the prompts:

Terminal window

```
npm create cloudflare@latest
```

Alternatively, you can use CLI arguments to speed things up:

Terminal window

```
npm create cloudflare@latest email-with-resend -- --type=hello-world --ts=false --git=true --deploy=false
```

This creates a simple hello-world Worker with the following content:

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

## Add your domain to Resend

If you don’t already have a Resend account, you can sign up for a [free account here ↗](https://resend.com/signup). After signing up, go to `Domains` using the side menu, and click the button to add a new domain. On the modal, enter the domain you want to add and then select a region.

Next, you’re presented with a list of DNS records to add to your Cloudflare domain. On your Cloudflare dashboard, select the domain you entered earlier and navigate to `DNS` \> `Records`. Copy the DNS records (DKIM, SPF, and DMARC) from Resend to your Cloudflare domain.

![Image of adding DNS records to a Cloudflare domain](https://developers.cloudflare.com/_astro/add_dns_records.Brij3X2H_3CIvl.webp) 

Note

If you need more help adding DNS records in Cloudflare, refer to [Manage DNS records](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/).

When that’s done, head back to Resend and click on the `Verify DNS Records` button. If all records are properly configured, your domain status should be updated to `Verified`.

![Image of domain verification on the Resend dashboard](https://developers.cloudflare.com/_astro/verified_domain.ouYLJaQl_l764f.webp) 

Lastly, navigate to `API Keys` with the side menu, to create an API key. Give your key a descriptive name and the appropriate permissions. Click the button to add your key and then copy your API key to a safe location.

## Send emails from your Worker

The final step is putting it all together in a Worker. Open up a terminal in the directory of the Worker you created earlier. Then, install the Resend SDK:

Terminal window

```
npm i resend
```

In your Worker, import and use the Resend library like so:

```js
import { Resend } from "resend";

export default {
  async fetch(request, env, ctx) {
    const resend = new Resend("your_resend_api_key");

    const { data, error } = await resend.emails.send({
      from: "hello@example.com",
      to: "someone@example.com",
      subject: "Hello World",
      html: "<p>Hello from Workers</p>",
    });

    return Response.json({ data, error });
  },
};
```
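Note that the handler above returns HTTP 200 even when `error` is set. If you prefer to surface failures to callers, the `{ data, error }` result can be mapped to a status code. A minimal sketch (the `SendResult` shape and function name are illustrative):

```typescript
interface SendResult<T> {
  data: T | null;
  error: { message: string } | null;
}

// Map a { data, error } result to an HTTP response:
// 502 when the email provider reported an error, 200 otherwise.
function toResponse<T>(result: SendResult<T>): Response {
  if (result.error !== null) {
    return Response.json({ error: result.error.message }, { status: 502 });
  }
  return Response.json({ data: result.data });
}
```

In the Worker, replace `return Response.json({ data, error });` with `return toResponse({ data, error });`.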

To test your code locally, run the following command and navigate to [http://localhost:8787/ ↗](http://localhost:8787/) in a browser:

Terminal window

```
npm start
```

Deploy your Worker with `npm run deploy`.

## Move API keys to Secrets

Sensitive information such as API keys and tokens should always be stored in secrets. All secrets are encrypted to add an extra layer of protection. For this reason, it’s a good idea to move your API key to a secret and access it from your Worker’s environment.

To add secrets for local development, create a `.dev.vars` file which works exactly like a `.env` file:

```
RESEND_API_KEY=your_resend_api_key
```

Also ensure the secret is added to your deployed Worker by running:

Add secret to deployed Worker

```
npx wrangler secret put RESEND_API_KEY
```

The added secret can be accessed via the `env` parameter passed to your Worker’s fetch event handler:

```js
import { Resend } from "resend";

export default {
  async fetch(request, env, ctx) {
    const resend = new Resend(env.RESEND_API_KEY);

    const { data, error } = await resend.emails.send({
      from: "hello@example.com",
      to: "someone@example.com",
      subject: "Hello World",
      html: "<p>Hello from Workers</p>",
    });

    return Response.json({ data, error });
  },
};
```

Finally, deploy this update with `npm run deploy`.

## Related resources

* [Storing API keys and tokens with Secrets](https://developers.cloudflare.com/workers/configuration/secrets/).
* [Transferring your domain to Cloudflare](https://developers.cloudflare.com/registrar/get-started/transfer-domain-to-cloudflare/).
* [Send emails from Workers](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/)


---

---
title: Securely access and upload assets with Cloudflare R2
description: This tutorial explains how to create a TypeScript-based Cloudflare Workers project that can securely access files from and upload files to a Cloudflare R2 bucket.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Securely access and upload assets with Cloudflare R2

**Last reviewed:**  almost 3 years ago 

This tutorial explains how to create a TypeScript-based Cloudflare Workers project that can securely access files from and upload files to a [Cloudflare R2](https://developers.cloudflare.com/r2/) bucket. Cloudflare R2 allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

## Prerequisites

To continue:

1. Sign up for a [Cloudflare account ↗](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [npm ↗](https://docs.npmjs.com/getting-started).
3. Install [Node.js ↗](https://nodejs.org/en/). Use a Node version manager like [Volta ↗](https://volta.sh/) or [nvm ↗](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.

## Create a Worker application

First, use the [create-cloudflare CLI ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker. To do this, open a terminal window and run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- upload-r2-assets
```

```
yarn create cloudflare upload-r2-assets
```

```
pnpm create cloudflare@latest upload-r2-assets
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Move into your newly created directory:

Terminal window

```
cd upload-r2-assets
```

## Create an R2 bucket

Before you integrate R2 bucket access into your Worker application, an R2 bucket must be created:

Terminal window

```
npx wrangler r2 bucket create <YOUR_BUCKET_NAME>
```

Replace `<YOUR_BUCKET_NAME>` with the name you want to assign to your bucket. List your account's R2 buckets to verify that a new bucket has been added:

Terminal window

```
npx wrangler r2 bucket list
```

## Configure access to an R2 bucket

After your new R2 bucket is ready, use it inside your Worker project by modifying the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) to include an R2 bucket [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/). Add the following R2 bucket binding to your Wrangler file:

wrangler.jsonc

```jsonc
{
  "r2_buckets": [
    {
      "binding": "MY_BUCKET",
      "bucket_name": "<YOUR_BUCKET_NAME>"
    }
  ]
}
```

wrangler.toml

```toml
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "<YOUR_BUCKET_NAME>"
```

Give your R2 bucket binding a name (the example above uses `MY_BUCKET`). Replace `<YOUR_BUCKET_NAME>` with the name of the R2 bucket you created earlier.

Your Worker application can now access your R2 bucket using the `MY_BUCKET` variable. You can now perform CRUD (Create, Read, Update, Delete) operations on the contents of the bucket.

## Fetch from an R2 bucket

After setting up the R2 bucket binding, you will implement the functionality for the Worker to interact with the R2 bucket, such as fetching files from the bucket and uploading files to it.

To fetch files from the R2 bucket, use the `BINDING.get` method. In the example below, the R2 bucket binding is called `MY_BUCKET`. Using `.get(key)`, you can retrieve an asset with the URL pathname as the key. In this example, the URL pathname is `/image.png` and the asset key is `image.png`.

TypeScript

```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request, env): Promise<Response> {
    // For example, the request URL my-worker.account.workers.dev/image.png
    const url = new URL(request.url);
    const key = url.pathname.slice(1);
    // Retrieve the key "image.png"
    const object = await env.MY_BUCKET.get(key);

    if (object === null) {
      return new Response("Object Not Found", { status: 404 });
    }

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("etag", object.httpEtag);

    return new Response(object.body, {
      headers,
    });
  },
} satisfies ExportedHandler<Env>;
```

The code above fetches data from the R2 bucket and returns it when a `GET` request is made to the Worker application with a matching URL path.
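The pathname-to-key mapping used here can be isolated into a small pure function, which also makes it easy to handle percent-encoded object keys. A minimal sketch (the helper name is illustrative):

```typescript
// Derive an R2 object key from a request URL by stripping the leading "/"
// and decoding percent-escapes (e.g. "%20" for spaces in key names).
function keyFromUrl(requestUrl: string): string {
  const pathname = new URL(requestUrl).pathname;
  return decodeURIComponent(pathname.slice(1));
}
```

With this helper, the handler body becomes `const object = await env.MY_BUCKET.get(keyFromUrl(request.url));`.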

## Upload securely to an R2 bucket

Next, you will add the ability to upload to your R2 bucket using authentication. To securely authenticate your upload requests, use [Wrangler's secret capability](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret). Wrangler was installed when you ran the `npm create cloudflare@latest` command.

Create a secret value of your choice, such as a random string or password. Using the Wrangler CLI, add the secret to your project as `AUTH_SECRET`:

Terminal window

```
npx wrangler secret put AUTH_SECRET
```

Now, add a new code path that handles a `PUT` HTTP request. This new code will check that the previously uploaded secret is correctly used for authentication, and then upload to R2 using `MY_BUCKET.put(key, data)`:

TypeScript

```ts
interface Env {
  MY_BUCKET: R2Bucket;
  AUTH_SECRET: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    if (request.method === "PUT") {
      // Note that you could require authentication for all requests
      // by moving this code to the top of the fetch function.
      const auth = request.headers.get("Authorization");
      const expectedAuth = `Bearer ${env.AUTH_SECRET}`;

      if (!auth || auth !== expectedAuth) {
        return new Response("Unauthorized", { status: 401 });
      }

      const url = new URL(request.url);
      const key = url.pathname.slice(1);
      await env.MY_BUCKET.put(key, request.body);
      return new Response(`Object ${key} uploaded successfully!`);
    }

    // include the previous code here...
  },
} satisfies ExportedHandler<Env>;
```

This approach ensures that only clients who provide a valid bearer token, via an `Authorization` header carrying the `AUTH_SECRET` value, are permitted to upload to the R2 bucket. If you used a different secret name than `AUTH_SECRET`, replace it in the code above.
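From a client's perspective, an upload is a `PUT` to the object path with the bearer token attached. A minimal sketch of building those fetch arguments (function and variable names are illustrative; point it at your deployed Worker URL):

```typescript
// Build the fetch arguments for uploading `body` to `key` on the Worker,
// authenticating with the AUTH_SECRET value as a bearer token.
function buildUploadRequest(
  workerUrl: string,
  key: string,
  body: string,
  secret: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${workerUrl}/${key}`,
    init: {
      method: "PUT",
      headers: { Authorization: `Bearer ${secret}` },
      body,
    },
  };
}
```

Usage: `const { url, init } = buildUploadRequest("https://my-worker.example.workers.dev", "notes.txt", "hello", "<AUTH_SECRET value>"); await fetch(url, init);`.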

## Deploy your Worker application

After completing your Cloudflare Worker project, deploy it to Cloudflare. Make sure you are in your Worker application directory that you created for this tutorial, then run:

Terminal window

```
npx wrangler deploy
```

Your application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

You have successfully created a Cloudflare Worker that allows you to interact with an R2 bucket to accomplish tasks such as uploading and downloading files. You can now use this as a starting point for your own projects.

## Next steps

To build more with R2 and Workers, refer to [Tutorials](https://developers.cloudflare.com/workers/tutorials/) and the [R2 documentation](https://developers.cloudflare.com/r2/).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord ↗](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team.


---

---
title: Set up and use a Prisma Postgres database
description: This tutorial shows you how to set up a Cloudflare Workers project with Prisma ORM.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Set up and use a Prisma Postgres database

**Last reviewed:**  about 1 year ago 

[Prisma Postgres ↗](https://www.prisma.io/postgres) is a managed, serverless PostgreSQL database. It supports features like connection pooling, caching, real-time subscriptions, and query optimization recommendations.

In this tutorial, you will learn how to:

* Set up a Cloudflare Workers project with [Prisma ORM ↗](https://www.prisma.io/docs).
* Create a Prisma Postgres instance from the Prisma CLI.
* Model data and run migrations with Prisma Postgres.
* Query the database from Workers.
* Deploy the Worker to Cloudflare.

## Prerequisites

To follow this guide, ensure you have the following:

* Node.js `v18.18` or higher installed.
* An active [Cloudflare account ↗](https://dash.cloudflare.com/).
* A basic familiarity with installing and using command-line interface (CLI) applications.

## 1\. Create a new Worker project

Begin by using [C3](https://developers.cloudflare.com/pages/get-started/c3/) to create a Worker project in the command line:

Terminal window

```
npm create cloudflare@latest prisma-postgres-worker -- --type=hello-world --ts=true --git=true --deploy=false
```

Then navigate into your project:

Terminal window

```
cd ./prisma-postgres-worker
```

Your initial `src/index.ts` file currently contains a simple request handler:

src/index.ts

```ts
export default {
  async fetch(request, env, ctx): Promise<Response> {
    return new Response("Hello World!");
  },
} satisfies ExportedHandler<Env>;
```

## 2\. Set up Prisma in your project

In this step, you will set up Prisma ORM with a Prisma Postgres database using the CLI. Then you will create and execute helper scripts to create tables in the database and generate a Prisma client to query it.

### 2.1\. Install required dependencies

Install Prisma CLI as a dev dependency:

 npm  yarn  pnpm  bun 

```
npm i -D prisma
```

```
yarn add -D prisma
```

```
pnpm add -D prisma
```

```
bun add -d prisma
```

Install the [Prisma Accelerate client extension ↗](https://www.npmjs.com/package/@prisma/extension-accelerate) as it is required for Prisma Postgres:

 npm  yarn  pnpm  bun 

```
npm i @prisma/extension-accelerate
```

```
yarn add @prisma/extension-accelerate
```

```
pnpm add @prisma/extension-accelerate
```

```
bun add @prisma/extension-accelerate
```

Install the [dotenv-cli package ↗](https://www.npmjs.com/package/dotenv-cli) to load environment variables from `.dev.vars`:

 npm  yarn  pnpm  bun 

```
npm i -D dotenv-cli
```

```
yarn add -D dotenv-cli
```

```
pnpm add -D dotenv-cli
```

```
bun add -d dotenv-cli
```

### 2.2\. Create a Prisma Postgres database and initialize Prisma

Initialize Prisma in your application:

 npm  yarn  pnpm 

```
npx prisma@latest init --db
```

```
yarn dlx prisma@latest init --db
```

```
pnpx prisma@latest init --db
```

If you do not have a [Prisma Data Platform ↗](https://console.prisma.io/) account yet, or if you are not logged in, the command will prompt you to log in using one of the available authentication providers. A browser window will open so you can log in or create an account. Return to the CLI after you have completed this step.

Once logged in (or if you were already logged in), the CLI will prompt you to select a project name and a database region.

Once the command completes, it will have created:

* A project in your [Platform Console ↗](https://console.prisma.io/) containing a Prisma Postgres database instance.
* A `prisma` folder containing `schema.prisma`, where you will define your database schema.
* An `.env` file in the project root, which will contain the Prisma Postgres database URL `DATABASE_URL=<your-prisma-postgres-database-url>`.

Note that Cloudflare Workers do not support `.env` files. You will use a file called `.dev.vars` instead of the `.env` file that was just created.

### 2.3\. Prepare environment variables

Rename the `.env` file in the root of your application to `.dev.vars`:

Terminal window

```
mv .env .dev.vars
```

### 2.4\. Apply database schema changes

Open the `schema.prisma` file in the `prisma` folder and add the following `User` model to your database:

prisma/schema.prisma

```prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id    Int    @id @default(autoincrement())
  email String
  name  String
}
```

Next, add the following helper scripts to the `scripts` section of your `package.json`:

package.json

```jsonc
"scripts": {
  "migrate": "dotenv -e .dev.vars -- npx prisma migrate dev",
  "generate": "dotenv -e .dev.vars -- npx prisma generate --no-engine",
  "studio": "dotenv -e .dev.vars -- npx prisma studio",
  // Additional worker scripts...
}
```

Run the migration script to apply changes to the database:

Terminal window

```
npm run migrate
```

When prompted, provide a name for the migration (for example, `init`).

After these steps are complete, Prisma ORM is fully set up and connected to your Prisma Postgres database.

## 3\. Develop the application

Modify the `src/index.ts` file and replace its contents with the following code:

src/index.ts

```ts
import { PrismaClient } from "@prisma/client/edge";
import { withAccelerate } from "@prisma/extension-accelerate";

export interface Env {
  DATABASE_URL: string;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const path = new URL(request.url).pathname;
    if (path === "/favicon.ico")
      return new Response("Resource not found", {
        status: 404,
        headers: {
          "Content-Type": "text/plain",
        },
      });

    const prisma = new PrismaClient({
      datasourceUrl: env.DATABASE_URL,
    }).$extends(withAccelerate());

    const user = await prisma.user.create({
      data: {
        email: `Jon${Math.ceil(Math.random() * 1000)}@gmail.com`,
        name: "Jon Doe",
      },
    });

    const userCount = await prisma.user.count();

    return new Response(`\
Created new user: ${user.name} (${user.email}).
Number of users in the database: ${userCount}.
    `);
  },
} satisfies ExportedHandler<Env>;
```
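The inline template literal above can be factored into a helper if you seed test users elsewhere in your code. A minimal sketch (the function name is illustrative):

```typescript
// Generate a pseudo-unique test email address, matching the tutorial's
// `Jon${Math.ceil(Math.random() * 1000)}@gmail.com` expression.
function testEmail(name: string): string {
  return `${name}${Math.ceil(Math.random() * 1000)}@gmail.com`;
}
```

The randomness only avoids collisions between refreshes; for real uniqueness you would add a `@unique` constraint to the `email` field in the Prisma schema.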

Run the development server:

Terminal window

```
npm run dev
```

Visit [http://localhost:8787 ↗](http://localhost:8787) to see your app display the following output:

```
Created new user: Jon Doe (Jon<number>@gmail.com).
Number of users in the database: 1.
```

Every time you refresh the page, a new user is created. The number displayed will increment by `1` with each refresh as it returns the total number of users in your database.

## 4\. Deploy the application to Cloudflare

When the application is deployed to Cloudflare, it needs access to the `DATABASE_URL` environment variable that is defined locally in `.dev.vars`. You can use the [npx wrangler secret put](https://developers.cloudflare.com/workers/configuration/secrets/#adding-secrets-to-your-project) command to upload the `DATABASE_URL` to the deployment environment:

Terminal window

```
npx wrangler secret put DATABASE_URL
```

When prompted, paste the `DATABASE_URL` value (from `.dev.vars`). If you are logged in via the Wrangler CLI, you will see a prompt asking if you'd like to create a new Worker. Confirm by choosing "yes":

Terminal window

```
✔ There doesn't seem to be a Worker called "prisma-postgres-worker". Do you want to create a new Worker with that name and add secrets to it? … yes
```

Then execute the following command to deploy your project to Cloudflare Workers:

Terminal window

```
npm run deploy
```

The `wrangler` CLI will bundle and upload your application.

If you are not already logged in, the `wrangler` CLI will open a browser window prompting you to log in to the Cloudflare dashboard.

Note

If you belong to multiple accounts, select the account where you want to deploy the project.

Once the deployment completes, verify the deployment by visiting the live URL provided in the deployment output, such as `https://{PROJECT_NAME}.workers.dev`. If you encounter any issues, ensure the secrets were added correctly and check the deployment logs for errors.

## Next steps

Congratulations on building and deploying a simple application with Prisma Postgres and Cloudflare Workers!

To enhance your application further:

* Add [caching ↗](https://www.prisma.io/docs/postgres/caching) to your queries.
* Explore the [Prisma Postgres documentation ↗](https://www.prisma.io/docs/postgres/getting-started).

To see how to build a real-time application with Cloudflare Workers and Prisma Postgres, read [this ↗](https://www.prisma.io/docs/guides/prisma-postgres-realtime-on-cloudflare) guide.


---

---
title: Use Workers KV directly from Rust
description: This tutorial will teach you how to read and write to KV directly from Rust using workers-rs. You will use Workers KV from Rust to build an app to store and retrieve cities.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Use Workers KV directly from Rust

**Last reviewed:**  almost 2 years ago 

This tutorial will teach you how to read and write to KV directly from Rust using [workers-rs ↗](https://github.com/cloudflare/workers-rs).

## Before you start

All of the tutorials assume you have already completed the [Get started guide](https://developers.cloudflare.com/workers/get-started/guide/), which gets you set up with a Cloudflare Workers account, [C3 ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare), and [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## Prerequisites

To complete this tutorial, you will need:

* [Git ↗](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
* [Wrangler](https://developers.cloudflare.com/workers/wrangler/) CLI.
* The [Rust ↗](https://www.rust-lang.org/tools/install) toolchain.
* The `cargo-generate` subcommand, installed by running:

Terminal window

```
cargo install cargo-generate
```

## 1\. Create your Worker project in Rust

Open a terminal window, and run the following command to generate a Worker project template in Rust:

Terminal window

```
cargo generate cloudflare/workers-rs
```

Then select the `template/hello-world-http` template, give your project a descriptive name, and press enter. A new project will be created in your directory. Open the project in your editor and run `npx wrangler dev` to compile and run your project.

In this tutorial, you will use Workers KV from Rust to build an app to store and retrieve cities by a given country name.

## 2\. Create a KV namespace

In the terminal, use Wrangler to create a KV namespace for `cities`. This generates a configuration to be added to the project:

Terminal window

```
npx wrangler kv namespace create cities
```

To add this configuration to your project, open the Wrangler file and create an entry for `kv_namespaces` above the build command:

* [  wrangler.jsonc ](#tab-panel-7792)
* [  wrangler.toml ](#tab-panel-7793)

```
{
  "kv_namespaces": [
    {
      "binding": "cities",
      "id": "e29b263ab50e42ce9b637fa8370175e8"
    }
  ]
}
```

```
[[kv_namespaces]]
binding = "cities"
id = "e29b263ab50e42ce9b637fa8370175e8"
```

With this configured, you can access the KV namespace with the binding `"cities"` from Rust.

## 3\. Write data to KV

For this app, you will create two routes: A `POST` route to receive and store the city in KV, and a `GET` route to retrieve the city of a given country. For example, a `POST` request to `/France` with a body of `{"city": "Paris"}` should create an entry of Paris as a city in France. A `GET` request to `/France` should retrieve from KV and respond with Paris.

Install [Serde ↗](https://serde.rs/) as a project dependency to handle JSON by running `cargo add serde`. Then create an app router and a struct for `Country` in `src/lib.rs`:

```
use serde::{Deserialize, Serialize};
use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let router = Router::new();

    #[derive(Serialize, Deserialize, Debug)]
    struct Country {
        city: String,
    }

    router
        // TODO:
        .post_async("/:country", |_, _| async move { Response::empty() })
        // TODO:
        .get_async("/:country", |_, _| async move { Response::empty() })
        .run(req, env)
        .await
}
```

For the `POST` handler, you will retrieve the country name from the path and the city name from the request body. Then, you will save this in KV with the country as key and the city as value. Finally, the app will respond with the city name:

```
.post_async("/:country", |mut req, ctx| async move {
    let country = ctx.param("country").unwrap();
    let city = match req.json::<Country>().await {
        Ok(c) => c.city,
        Err(_) => String::from(""),
    };
    if city.is_empty() {
        return Response::error("Bad Request", 400);
    };
    return match ctx.kv("cities")?.put(country, &city)?.execute().await {
        Ok(_) => Response::ok(city),
        Err(_) => Response::error("Bad Request", 400),
    };
})
```

Save the file and make a `POST` request to test this endpoint:

Terminal window

```
curl --json '{"city": "Paris"}' http://localhost:8787/France
```

## 4\. Read data from KV

To retrieve cities stored in KV, write a `GET` route that pulls the country name from the path and searches KV. You also need some error handling if the country is not found:

```
.get_async("/:country", |_req, ctx| async move {
    if let Some(country) = ctx.param("country") {
        return match ctx.kv("cities")?.get(country).text().await? {
            Some(city) => Response::ok(city),
            None => Response::error("Country not found", 404),
        };
    }
    Response::error("Bad Request", 400)
})
```

Save and make a curl request to test the endpoint:

Terminal window

```
curl http://localhost:8787/France
```

## 5\. Deploy your project

The source code for the completed app should include the following:

```
use serde::{Deserialize, Serialize};
use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let router = Router::new();

    #[derive(Serialize, Deserialize, Debug)]
    struct Country {
        city: String,
    }

    router
        .post_async("/:country", |mut req, ctx| async move {
            let country = ctx.param("country").unwrap();
            let city = match req.json::<Country>().await {
                Ok(c) => c.city,
                Err(_) => String::from(""),
            };
            if city.is_empty() {
                return Response::error("Bad Request", 400);
            };
            return match ctx.kv("cities")?.put(country, &city)?.execute().await {
                Ok(_) => Response::ok(city),
                Err(_) => Response::error("Bad Request", 400),
            };
        })
        .get_async("/:country", |_req, ctx| async move {
            if let Some(country) = ctx.param("country") {
                return match ctx.kv("cities")?.get(country).text().await? {
                    Some(city) => Response::ok(city),
                    None => Response::error("Country not found", 404),
                };
            }
            Response::error("Bad Request", 400)
        })
        .run(req, env)
        .await
}
```

To deploy your Worker, run the following command:

Terminal window

```
npx wrangler deploy
```

## Related resources

* [Rust support in Workers](https://developers.cloudflare.com/workers/languages/rust/).
* [Using KV in Workers](https://developers.cloudflare.com/kv/get-started/).


---

---
title: Demos and architectures
description: Learn how you can use Workers within your existing application and architecture.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Demos and architectures

Learn how you can use Workers within your existing application and architecture.

## Demos

Explore the following demo applications for Workers.

* [Starter code for D1 Sessions API: ↗](https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template) An introduction to D1 Sessions API. This demo simulates purchase orders administration.
* [JavaScript-native RPC on Cloudflare Workers <> Named Entrypoints: ↗](https://github.com/cloudflare/js-rpc-and-entrypoints-demo) This is a collection of examples of communicating between multiple Cloudflare Workers using the remote-procedure call (RPC) system that is built into the Workers runtime.
* [Workers for Platforms Example Project: ↗](https://github.com/cloudflare/workers-for-platforms-example) Explore how you could manage thousands of Workers with a single Cloudflare Workers account.
* [Cloudflare Workers Chat Demo: ↗](https://github.com/cloudflare/workers-chat-demo) This is a demo app written on Cloudflare Workers utilizing Durable Objects to implement real-time chat with stored history.
* [Turnstile Demo: ↗](https://github.com/cloudflare/turnstile-demo-workers) A simple demo with a Turnstile-protected form, using Cloudflare Workers. With the code in this repository, we demonstrate implicit rendering and explicit rendering.
* [Wildebeest: ↗](https://github.com/cloudflare/wildebeest) Wildebeest is an ActivityPub and Mastodon-compatible server whose goal is to allow anyone to operate their Fediverse server and identity on their domain without needing to keep infrastructure, with minimal setup and maintenance, and running in minutes.
* [D1 Northwind Demo: ↗](https://github.com/cloudflare/d1-northwind) This is a demo of the Northwind dataset running on Cloudflare Workers and D1, Cloudflare's SQL database built on SQLite.
* [Multiplayer Doom Workers: ↗](https://github.com/cloudflare/doom-workers) A WebAssembly Doom port with multiplayer support running on top of Cloudflare's global network using Workers, WebSockets, Pages, and Durable Objects.
* [Queues Web Crawler: ↗](https://github.com/cloudflare/queues-web-crawler) An example use-case for Queues, a web crawler built on Browser Rendering and Puppeteer. The crawler finds the number of links to Cloudflare.com on the site, and archives a screenshot to Workers KV.
* [DMARC Email Worker: ↗](https://github.com/cloudflare/dmarc-email-worker) A Cloudflare worker script to process incoming DMARC reports, store them, and produce analytics.
* [Access External Auth Rule Example Worker: ↗](https://github.com/cloudflare/workers-access-external-auth-example) This is a Worker that allows you to quickly set up an external evaluation rule in Cloudflare Access.

## Reference architectures

Explore the following reference architectures that use Workers:

* [Fullstack applications](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/fullstack-application/): A practical example of how these services come together in a real fullstack application architecture.
* [Storing user generated content](https://developers.cloudflare.com/reference-architecture/diagrams/storage/storing-user-generated-content/): Store user-generated content in R2 for fast, secure, and cost-effective architecture.
* [Optimizing and securing connected transportation systems](https://developers.cloudflare.com/reference-architecture/diagrams/iot/optimizing-and-securing-connected-transportation-systems/): This diagram showcases Cloudflare components optimizing connected transportation systems, illustrating how they minimize latency, ensure reliability, and strengthen security for critical data flow.
* [Ingesting BigQuery Data into Workers AI](https://developers.cloudflare.com/reference-architecture/diagrams/ai/bigquery-workers-ai/): Connect a Cloudflare Worker to get data from Google BigQuery and pass it to Workers AI to run AI models, powered by serverless GPUs.
* [Event notifications for storage](https://developers.cloudflare.com/reference-architecture/diagrams/storage/event-notifications-for-storage/): Use Cloudflare Workers or an external service to monitor for notifications about data changes and then handle them appropriately.
* [Extend ZTNA with external authorization and serverless computing](https://developers.cloudflare.com/reference-architecture/diagrams/sase/augment-access-with-serverless/): Cloudflare's ZTNA enhances access policies using external API calls and Workers, verifying user authentication and authorization to ensure only legitimate access to protected resources.
* [Cloudflare Security Architecture](https://developers.cloudflare.com/reference-architecture/architectures/security/): Insight into how Cloudflare's network and platform are architected from a security perspective, how they are operated, and what services are available for businesses to address their own security challenges.
* [Composable AI architecture](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-composable/): Illustrates how AI applications can be built end-to-end on Cloudflare, or how single services can be integrated with external infrastructure and services.
* [A/B-testing using Workers](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/a-b-testing-using-workers/): Workers, Cloudflare's low-latency, fully serverless compute platform, offers powerful capabilities to enable A/B testing using a server-side implementation.
* [Serverless global APIs](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-global-apis/): An example architecture of a serverless API on Cloudflare, illustrating how different compute and data products can interact with each other.
* [Serverless ETL pipelines](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-etl/): Cloudflare enables fully serverless ETL pipelines, significantly reducing complexity, accelerating time to production, and lowering overall costs.
* [Egress-free object storage in multi-cloud setups](https://developers.cloudflare.com/reference-architecture/diagrams/storage/egress-free-storage-multi-cloud/): Learn how to use R2 to get egress-free object storage in multi-cloud setups.
* [Retrieval Augmented Generation (RAG)](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/): RAG combines retrieval with generative models, using external knowledge to create factual, relevant responses and improving coherence and accuracy in NLP tasks like chatbots.
* [Automatic captioning for video uploads](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-video-caption/): By integrating automatic speech recognition into video platforms, content creators, publishers, and distributors can reach a broader audience, including individuals with hearing impairments or those who prefer content in different languages.
* [Serverless image content management](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-image-content-management/): Leverage various components of Cloudflare's ecosystem to construct a scalable image management solution.


---

---
title: Development &#38; testing
description: Develop and test your Workers locally.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Development & testing

You can build, run, and test your Worker code on your own local machine before deploying it to Cloudflare's network. This is made possible through [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/), a simulator that executes your Worker code using the same runtime used in production, [workerd ↗](https://github.com/cloudflare/workerd).

[By default](https://developers.cloudflare.com/workers/development-testing/#defaults), your Worker's bindings [connect to locally simulated resources](https://developers.cloudflare.com/workers/development-testing/#bindings-during-local-development), but can be configured to interact with the real, production resource with [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings).

## Core concepts

### Worker execution vs Bindings

When developing Workers, it's important to understand two distinct concepts:

* **Worker execution**: Where your Worker code actually runs (on your local machine vs on Cloudflare's infrastructure).
* [**Bindings**](https://developers.cloudflare.com/workers/runtime-apis/bindings/): How your Worker interacts with Cloudflare resources (like [KV namespaces](https://developers.cloudflare.com/kv), [R2 buckets](https://developers.cloudflare.com/r2), [D1 databases](https://developers.cloudflare.com/d1), [Queues](https://developers.cloudflare.com/queues/), [Durable Objects](https://developers.cloudflare.com/durable-objects/), etc). In your Worker code, these are accessed via the `env` object (such as `env.MY_KV`).

## Local development

**You can start a local development server using:**

1. The Cloudflare Workers CLI [**Wrangler**](https://developers.cloudflare.com/workers/wrangler/), using the built-in [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) command.

 npm  yarn  pnpm 

```
npx wrangler dev
```

```
yarn wrangler dev
```

```
pnpm wrangler dev
```

2. [**Vite** ↗](https://vite.dev/), using the [**Cloudflare Vite plugin**](https://developers.cloudflare.com/workers/vite-plugin/).

 npm  yarn  pnpm 

```
npx vite dev
```

```
yarn vite dev
```

```
pnpm vite dev
```

Both Wrangler and the Cloudflare Vite plugin use [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/) under the hood, and are developed and maintained by the Cloudflare team. For guidance on choosing when to use Wrangler versus Vite, see our guide [Choosing between Wrangler & Vite](https://developers.cloudflare.com/workers/development-testing/wrangler-vs-vite/).

* [Get started with Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/)
* [Get started with the Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/get-started/)

### Defaults

By default, running `wrangler dev` / `vite dev` (when using the [Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/get-started/)) means that:

* Your Worker code runs on your local machine.
* All resources your Worker is bound to in your [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/) are simulated locally.

### Bindings during local development

[Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) are interfaces that allow your Worker to interact with various Cloudflare resources (like [KV namespaces](https://developers.cloudflare.com/kv), [R2 buckets](https://developers.cloudflare.com/r2), [D1 databases](https://developers.cloudflare.com/d1), [Queues](https://developers.cloudflare.com/queues/), [Durable Objects](https://developers.cloudflare.com/durable-objects/), etc). In your Worker code, these are accessed via the `env` object (such as `env.MY_KV`).

During local development, your Worker code interacts with these bindings using the exact same API calls (such as `env.MY_KV.put()`) as it would in a deployed environment. These local resources are initially empty, but you can populate them with data, as documented in [Adding local data](https://developers.cloudflare.com/workers/development-testing/local-data/).

* By default, bindings connect to **local resource simulations** (except for [AI bindings](https://developers.cloudflare.com/workers-ai/configuration/bindings/), as AI models always run remotely).
* You can override this default behavior and **connect to the remote resource** on a per-binding basis with [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings). This lets you connect to real, production resources while still running your Worker code locally.
* When using `wrangler dev`, you can temporarily disable all [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings) (and connect only to local resources) by providing the `--local` flag (i.e. `wrangler dev --local`).
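To make this concrete, here is a minimal TypeScript sketch of the idea that binding code is identical regardless of where the resource lives. The binding name `MY_KV` and the in-memory stub are hypothetical stand-ins for what Miniflare simulates locally; this is not the full Workers KV API, only the slice used here.

```typescript
// A tiny slice of a KV-style interface: async get/put of strings.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

interface Env {
  MY_KV: KVLike; // hypothetical binding name
}

// Handler logic: written once, unchanged whether MY_KV is a local
// simulation or a remote binding pointing at the deployed namespace.
async function handle(countryPath: string, env: Env): Promise<string> {
  const country = countryPath.replace(/^\//, "");
  const cached = await env.MY_KV.get(country);
  if (cached !== null) return cached;
  await env.MY_KV.put(country, "unknown");
  return "unknown";
}

// In-memory stand-in, similar in spirit to a local simulation.
function makeMemoryKV(): KVLike {
  const store = new Map<string, string>();
  return {
    async get(key) {
      return store.has(key) ? store.get(key)! : null;
    },
    async put(key, value) {
      store.set(key, value);
    },
  };
}

const env: Env = { MY_KV: makeMemoryKV() };
await env.MY_KV.put("France", "Paris");
const city = await handle("/France", env);
// city === "Paris"
```

Swapping `makeMemoryKV()` for the real binding changes nothing in `handle` itself; that is the property local development relies on.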

## Remote bindings

**Remote bindings** are bindings that are configured to connect to the deployed, remote resource during local development _instead_ of the locally simulated resource. Remote bindings are supported by [**Wrangler**](https://developers.cloudflare.com/workers/wrangler/), the [**Cloudflare Vite plugin**](https://developers.cloudflare.com/workers/vite-plugin/), and the `@cloudflare/vitest-pool-workers` package. You can configure remote bindings by setting `remote: true` in the binding definition.

### Example configuration

* [  wrangler.jsonc ](#tab-panel-7161)
* [  wrangler.toml ](#tab-panel-7162)

```
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",

  "r2_buckets": [
    {
      "bucket_name": "screenshots-bucket",
      "binding": "screenshots_bucket",
      "remote": true,
    },
  ],
}
```

```
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[[r2_buckets]]
bucket_name = "screenshots-bucket"
binding = "screenshots_bucket"
remote = true
```

When remote bindings are configured, your Worker still **executes locally**, only the underlying resources your bindings connect to change. For all bindings marked with `remote: true`, Miniflare will route its operations (such as `env.MY_KV.put()`) to the deployed resource. All other bindings not explicitly configured with `remote: true` continue to use their default local simulations.

### Integration with environments

Remote Bindings work well together with [Workers Environments](https://developers.cloudflare.com/workers/wrangler/environments). To protect production data, you can create a development or staging environment and specify different resources in your [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/) than you would use for production.

**For example:**

* [  wrangler.jsonc ](#tab-panel-7173)
* [  wrangler.toml ](#tab-panel-7174)

```
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",

  "env": {
    "production": {
      "r2_buckets": [
        {
          "bucket_name": "screenshots-bucket",
          "binding": "screenshots_bucket",
        },
      ],
    },
    "staging": {
      "r2_buckets": [
        {
          "bucket_name": "preview-screenshots-bucket",
          "binding": "screenshots_bucket",
          "remote": true,
        },
      ],
    },
  },
}
```

```
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[[env.production.r2_buckets]]
bucket_name = "screenshots-bucket"
binding = "screenshots_bucket"

[[env.staging.r2_buckets]]
bucket_name = "preview-screenshots-bucket"
binding = "screenshots_bucket"
remote = true
```

Running `wrangler dev -e staging` (or `CLOUDFLARE_ENV=staging vite dev`) with the above configuration means that:

* Your Worker code runs locally
* All calls made to `env.screenshots_bucket` will use the `preview-screenshots-bucket` resource, rather than the production `screenshots-bucket`.

### Recommended remote bindings

We recommend configuring specific bindings to connect to their remote counterparts. These services often rely on Cloudflare's network infrastructure or have complex backends that are not fully simulated locally.

The following bindings are recommended to have `remote: true` in your Wrangler configuration:

#### [Browser Rendering](https://developers.cloudflare.com/workers/wrangler/configuration/#browser-rendering):

To interact with a real headless browser for rendering. There is no current local simulation for Browser Rendering.

* [  wrangler.jsonc ](#tab-panel-7159)
* [  wrangler.toml ](#tab-panel-7160)

```
{
  "browser": {
    "binding": "MY_BROWSER",
    "remote": true
  },
}
```

```
[browser]
binding = "MY_BROWSER"
remote = true
```

#### [Workers AI](https://developers.cloudflare.com/workers/wrangler/configuration/#workers-ai):

To utilize actual AI models deployed on Cloudflare's network for inference. There is no current local simulation for Workers AI.

* [  wrangler.jsonc ](#tab-panel-7163)
* [  wrangler.toml ](#tab-panel-7164)

```
{
  "ai": {
    "binding": "AI",
    "remote": true
  },
}
```

```
[ai]
binding = "AI"
remote = true
```

#### [Vectorize](https://developers.cloudflare.com/workers/wrangler/configuration/#vectorize-indexes):

To connect to your production Vectorize indexes for accurate vector search and similarity operations. There is no current local simulation for Vectorize.

* [  wrangler.jsonc ](#tab-panel-7165)
* [  wrangler.toml ](#tab-panel-7166)

```
{
  "vectorize": [
    {
      "binding": "MY_VECTORIZE_INDEX",
      "index_name": "my-prod-index",
      "remote": true
    }
  ],
}
```

```
[[vectorize]]
binding = "MY_VECTORIZE_INDEX"
index_name = "my-prod-index"
remote = true
```

#### [mTLS](https://developers.cloudflare.com/workers/wrangler/configuration/#mtls-certificates):

To verify that the certificate exchange and validation process work as expected. There is no current local simulation for mTLS bindings.

* [  wrangler.jsonc ](#tab-panel-7169)
* [  wrangler.toml ](#tab-panel-7170)

```
{
  "mtls_certificates": [
    {
      "binding": "MY_CLIENT_CERT_FETCHER",
      "certificate_id": "<YOUR_UPLOADED_CERT_ID>",
      "remote": true
    }
  ]
}
```

```
[[mtls_certificates]]
binding = "MY_CLIENT_CERT_FETCHER"
certificate_id = "<YOUR_UPLOADED_CERT_ID>"
remote = true
```

#### [Images](https://developers.cloudflare.com/workers/wrangler/configuration/#images):

To connect to a high-fidelity version of the Images API, and verify that all transformations work as expected. Local simulation for Cloudflare Images is [limited with only a subset of features](https://developers.cloudflare.com/images/transform-images/bindings/#interact-with-your-images-binding-locally).

* [  wrangler.jsonc ](#tab-panel-7167)
* [  wrangler.toml ](#tab-panel-7168)

```
{
  "images": {
    "binding": "IMAGES",
    "remote": true
  }
}
```

```
[images]
binding = "IMAGES"
remote = true
```

Note

If `remote: true` is not specified for Browser Rendering, Vectorize, mTLS, or Images, Cloudflare **will issue a warning**. This prompts you to consider enabling it for a more production-like testing experience.

If a Workers AI binding has `remote` set to `false`, Cloudflare will **produce an error**. If the property is omitted, Cloudflare will connect to the remote resource and issue a warning to add the property to configuration.

#### [Dispatch Namespaces](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/local-development/):

Workers for Platforms users can configure `remote: true` in dispatch namespace binding definitions:

* [  wrangler.jsonc ](#tab-panel-7171)
* [  wrangler.toml ](#tab-panel-7172)

```
{
  "dispatch_namespaces": [
    {
      "binding": "DISPATCH_NAMESPACE",
      "namespace": "testing",
      "remote": true
    }
  ]
}
```

```
[[dispatch_namespaces]]
binding = "DISPATCH_NAMESPACE"
namespace = "testing"
remote = true
```

This allows you to run your [dynamic dispatch Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dynamic-dispatch-worker) locally, while connecting it to your remote dispatch namespace binding. This allows you to test changes to your core dispatching logic against real, deployed [user Workers](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#user-workers).
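As a sketch of the dispatching logic being tested here, the following TypeScript routes a request's first path segment to a user Worker fetched from a namespace-like object. The in-memory namespace and the `/customer-a/orders` convention are illustrative assumptions; with `remote: true`, the same `get(name)` and `fetch(...)` calls would instead reach real user Workers deployed to your dispatch namespace.

```typescript
// A minimal shape for "something you can get a Worker stub from".
interface WorkerStub {
  fetch(url: string): Promise<string>;
}

interface DispatchNamespaceLike {
  get(name: string): WorkerStub;
}

// Core dispatch logic: pick the user Worker named by the first path
// segment and forward the remainder of the path to it.
async function dispatch(
  path: string,
  ns: DispatchNamespaceLike,
): Promise<string> {
  const [name, ...rest] = path.replace(/^\//, "").split("/");
  if (!name) return "404: no user Worker named";
  return ns.get(name).fetch("/" + rest.join("/"));
}

// Hypothetical in-memory namespace standing in for the real binding.
const namespace: DispatchNamespaceLike = {
  get(name) {
    return {
      async fetch(url) {
        return `user Worker "${name}" handled ${url}`;
      },
    };
  },
};

const reply = await dispatch("/customer-a/orders", namespace);
// reply === 'user Worker "customer-a" handled /orders'
```

The value of a remote dispatch namespace binding is that this routing logic runs and iterates locally while the Workers it dispatches to are the real deployed ones.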

### Unsupported remote bindings

Certain bindings are not supported for remote connections (i.e. with `remote: true`) during local development. These will always use local simulations or local values.

If `remote: true` is specified in Wrangler configuration for any of the following unsupported binding types, Cloudflare **will issue an error**. See [all supported and unsupported bindings for remote bindings](https://developers.cloudflare.com/workers/development-testing/bindings-per-env/).

* [**Durable Objects**](https://developers.cloudflare.com/workers/wrangler/configuration/#durable-objects): Enabling remote connections for Durable Objects may be supported in the future, but currently will always run locally. However, using Durable Objects in combination with remote bindings is possible. Refer to [Using remote resources with Durable Objects and Workflows](#using-remote-resources-with-durable-objects-and-workflows) below.
* [**Workflows**](https://developers.cloudflare.com/workflows/): Enabling remote connections for Workflows may be supported in the future, but currently will only run locally. However, using Workflows in combination with remote bindings is possible. Refer to [Using remote resources with Durable Objects and Workflows](#using-remote-resources-with-durable-objects-and-workflows) below.
* [**Environment Variables (vars)**](https://developers.cloudflare.com/workers/wrangler/configuration/#environment-variables): Environment variables are intended to be distinct between local development and deployed environments. They are easily configurable locally (such as in a `.dev.vars` file or directly in Wrangler configuration).
* [**Secrets**](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets): Like environment variables, secrets are expected to have different values in local development versus deployed environments for security reasons. Use `.dev.vars` for local secret management.
* [**Static Assets**](https://developers.cloudflare.com/workers/wrangler/configuration/#assets): Static assets are always served from your local disk during development for speed and direct feedback on changes.
* [**Version Metadata**](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/): Since your Worker code is running locally, version metadata (like commit hash, version tags) associated with a specific deployed version is not applicable or accurate.
* [**Analytics Engine**](https://developers.cloudflare.com/analytics/analytics-engine/): Local development sessions typically don't contribute data directly to production Analytics Engine.
* [**Hyperdrive**](https://developers.cloudflare.com/workers/wrangler/configuration/#hyperdrive): This is being actively worked on, but is currently unsupported.
* [**Rate Limiting**](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/#configuration): Local development sessions typically should not share or affect rate limits of your deployed Workers. Rate limiting logic should be tested against local simulations.

Note

If you have use-cases for connecting to any of the remote resources above, please [open a feature request ↗](https://github.com/cloudflare/workers-sdk/issues) in our [workers-sdk repository ↗](https://github.com/cloudflare/workers-sdk).

#### Using remote resources with Durable Objects and Workflows

While Durable Object and Workflow bindings cannot currently be remote, you can still use them during local development and have them interact with remote resources.

There are two recommended patterns for this:

* **Local Durable Objects/Workflows with remote bindings:**  
When you enable remote bindings in your [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration), your locally running Durable Objects and Workflows can interact with remote resources during local development.
* **Accessing remote Durable Objects/Workflows via service bindings:**  
To interact with remote Durable Object or Workflow instances, deploy a Worker that defines them. Then, in your local Worker, configure a remote [service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) pointing to the deployed Worker. Your local Worker can then call the deployed Worker, which in turn communicates with the remote Durable Objects/Workflows. In effect, the deployed Worker acts as a proxy to those resources during local development.
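As a sketch of the second pattern, the local Worker's configuration could declare a remote service binding to the deployed proxy Worker (the binding and service names here are placeholders):

```jsonc
{
  "services": [
    {
      "binding": "DO_PROXY",
      "service": "my-deployed-do-worker",
      "remote": true
    }
  ]
}
```

Calls made through `env.DO_PROXY` from the local Worker then reach the deployed Worker, which can forward them to its Durable Objects or Workflows.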

### Important Considerations

* **Data modification**: Operations (writes, deletes, updates) on bindings connected remotely will affect your actual data in the targeted Cloudflare resource (be it preview or production).
* **Billing**: Interactions with remote Cloudflare services through these connections will incur standard operational costs for those services (such as KV operations, R2 storage/operations, AI requests, D1 usage).
* **Network latency**: Expect network latency for operations on these remotely connected bindings, as they involve communication over the internet.
* **CI and non-interactive environments**: If your worker uses [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/), Wrangler must authenticate with Access when connecting to remote bindings. In non-interactive environments such as CI/CD pipelines, set the `CLOUDFLARE_ACCESS_CLIENT_ID` and `CLOUDFLARE_ACCESS_CLIENT_SECRET` [system environment variables](https://developers.cloudflare.com/workers/wrangler/system-environment-variables/) to authenticate using an [Access Service Token](https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/). Without these variables, Wrangler throws an error instead of launching the interactive `cloudflared access login` flow.
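For example, a CI pipeline could export the service token credentials before starting the dev session (the values are placeholders for your Access Service Token):

```
export CLOUDFLARE_ACCESS_CLIENT_ID="<ACCESS_CLIENT_ID>"
export CLOUDFLARE_ACCESS_CLIENT_SECRET="<ACCESS_CLIENT_SECRET>"
npx wrangler dev
```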

### API

Wrangler provides programmatic utilities to help tooling authors support remote binding connections when running Workers code with [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/).

**Key APIs include:**

* [`startRemoteProxySession`](#startremoteproxysession): Starts a proxy session that allows interaction with remote bindings.
* [`unstable_convertConfigBindingsToStartWorkerBindings`](#unstable%5Fconvertconfigbindingstostartworkerbindings): Utility for converting binding definitions.
* [`maybeStartOrUpdateRemoteProxySession`](#maybestartorupdateremoteproxysession): Convenience function to start or update a proxy session.

#### `startRemoteProxySession`

This function starts a proxy session for a given set of bindings. It accepts options to control session behavior, including an `auth` option with your Cloudflare account ID and API token for remote binding access.

It returns an object with:

* `ready` (`Promise<void>`): Resolves when the session is ready.
* `dispose` (`() => Promise<void>`): Stops the session.
* `updateBindings` (`(bindings: StartDevWorkerInput['bindings']) => Promise<void>`): Updates the session's bindings.
* `remoteProxyConnectionString`: A connection string to pass to Miniflare so that bindings can access remote resources.

#### `unstable_convertConfigBindingsToStartWorkerBindings`

The `unstable_readConfig` utility returns an `Unstable_Config` object that includes the binding definitions from the configuration file. These definitions are not directly compatible with `startRemoteProxySession`. Since it is convenient to read binding declarations with `unstable_readConfig` and then pass them to `startRemoteProxySession`, Wrangler exposes `unstable_convertConfigBindingsToStartWorkerBindings`, a simple utility that converts the bindings in an `Unstable_Config` object into a structure that `startRemoteProxySession` accepts.

Note

This type conversion is temporary. In the future, the types will be unified so you can pass the config object directly to `startRemoteProxySession`.

#### `maybeStartOrUpdateRemoteProxySession`

This wrapper simplifies proxy session management. It takes:

* An object containing either:  
   * the path to a Wrangler configuration file and an optional target environment, or  
   * the name of the Worker and the bindings it uses
* The current proxy session details (pass `null`, or omit this parameter, if there is none).
* Optionally, the auth data to use for the remote proxy session.

It returns an object with the proxy session details if started or updated, or `null` if no proxy session is needed.

The function:

* Prepares the proxy session input from its first argument.
* Returns `null` if there are no remote bindings to use and no pre-existing proxy session, signaling that no proxy session is needed.
* Updates the proxy session if the details of an existing one have been provided.
* Otherwise starts a new proxy session.
* Returns the proxy session details (which can later be passed as the second argument to `maybeStartOrUpdateRemoteProxySession`).
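The decision flow above can be sketched as plain logic. This is illustrative only, not Wrangler's actual implementation: the session shape and the `updated` flag are hypothetical stand-ins used to show the control flow.

```javascript
// Illustrative sketch of the maybeStartOrUpdateRemoteProxySession decision
// flow. NOT Wrangler's implementation; the returned session object is a
// hypothetical stand-in.
function maybeStartOrUpdateSessionSketch(input, existingSession = null) {
  // A proxy session is only needed if at least one binding opts into remote.
  const hasRemoteBindings = Object.values(input.bindings ?? {}).some(
    (b) => b.remote === true,
  );

  // No remote bindings and no pre-existing session: nothing to do.
  if (!hasRemoteBindings && existingSession == null) {
    return null;
  }

  // An existing session is updated in place with the new bindings.
  if (existingSession != null) {
    existingSession.bindings = input.bindings;
    existingSession.updated = true;
    return existingSession;
  }

  // Otherwise a fresh session is started.
  return { bindings: input.bindings, updated: false };
}
```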

#### Example

Here's a basic example of using Miniflare with `maybeStartOrUpdateRemoteProxySession` to provide a local dev session with remote bindings. This example uses a single hardcoded KV binding.


JavaScript

```js
import { Miniflare } from "miniflare";
import { maybeStartOrUpdateRemoteProxySession } from "wrangler";

let mf;
let remoteProxySessionDetails = null;

async function startOrUpdateDevSession() {
  remoteProxySessionDetails = await maybeStartOrUpdateRemoteProxySession(
    {
      bindings: {
        MY_KV: {
          type: "kv_namespace",
          id: "kv-id",
          remote: true,
        },
      },
    },
    remoteProxySessionDetails,
  );

  const miniflareOptions = {
    scriptPath: "./worker.js",
    kvNamespaces: {
      MY_KV: {
        id: "kv-id",
        remoteProxyConnectionString:
          remoteProxySessionDetails?.session.remoteProxyConnectionString,
      },
    },
  };

  if (!mf) {
    mf = new Miniflare(miniflareOptions);
  } else {
    mf.setOptions(miniflareOptions);
  }
}

// ... tool logic that invokes `startOrUpdateDevSession()` ...

// ... once the dev session is no longer needed run
// `remoteProxySessionDetails?.session.dispose()`
```

TypeScript

```ts
import { Miniflare, MiniflareOptions } from "miniflare";
import { maybeStartOrUpdateRemoteProxySession } from "wrangler";

let mf: Miniflare | null;
let remoteProxySessionDetails: Awaited<
  ReturnType<typeof maybeStartOrUpdateRemoteProxySession>
> | null = null;

async function startOrUpdateDevSession() {
  remoteProxySessionDetails = await maybeStartOrUpdateRemoteProxySession(
    {
      bindings: {
        MY_KV: {
          type: "kv_namespace",
          id: "kv-id",
          remote: true,
        },
      },
    },
    remoteProxySessionDetails,
  );

  const miniflareOptions: MiniflareOptions = {
    scriptPath: "./worker.js",
    kvNamespaces: {
      MY_KV: {
        id: "kv-id",
        remoteProxyConnectionString:
          remoteProxySessionDetails?.session.remoteProxyConnectionString,
      },
    },
  };

  if (!mf) {
    mf = new Miniflare(miniflareOptions);
  } else {
    mf.setOptions(miniflareOptions);
  }
}

// ... tool logic that invokes `startOrUpdateDevSession()` ...

// ... once the dev session is no longer needed run
// `remoteProxySessionDetails?.session.dispose()`
```

## `wrangler dev --remote` (Legacy)

Separate from Miniflare-powered local development, Wrangler also offers a fully remote development mode via [wrangler dev --remote](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev). Remote development is [**not** supported in the Vite plugin](https://developers.cloudflare.com/workers/development-testing/wrangler-vs-vite/).

 npm  yarn  pnpm 

```
npx wrangler dev --remote
```

```
yarn wrangler dev --remote
```

```
pnpm wrangler dev --remote
```

During **remote development**, all of your Worker code is uploaded to a temporary preview environment on Cloudflare's infrastructure, and changes to your code are automatically uploaded as you save.

When using remote development, all bindings automatically connect to their remote resources. Unlike local development, you cannot configure bindings to use local simulations; they will always use the deployed resources on Cloudflare's network.

### When to use remote development

* For most development tasks, the most efficient and productive experience will be local development along with [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings) when needed.
* You may want to use `wrangler dev --remote` for testing features or behaviors that are highly specific to Cloudflare's network and cannot be adequately simulated locally or tested via remote bindings.

### Considerations

* Iteration is significantly slower than local development due to the upload/deployment step for each change.

### Limitations

* When you run a remote development session using the `--remote` flag, a limit of 50 [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) per zone is enforced. Learn more in [Workers platform limits](https://developers.cloudflare.com/workers/platform/limits/#routes-and-domains-when-using-wrangler-dev---remote).


---

---
title: Supported bindings per development mode
description: Supported bindings per development mode
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Supported bindings per development mode

## Local development

**Local simulations**: During local development, your Worker code always executes locally and bindings connect to locally simulated resources [by default](https://developers.cloudflare.com/workers/development-testing/#remote-bindings). This is supported in [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).

**Remote binding connections**: Allows you to connect to remote resources on a [per-binding basis](https://developers.cloudflare.com/workers/development-testing/#remote-bindings). This is supported in [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).

| Binding                                 | Local simulations | Remote binding connections |
| --------------------------------------- | ----------------- | -------------------------- |
| **AI**                                  | ❌                 | ✅                          |
| **Assets**                              | ✅                 | ❌                          |
| **Analytics Engine**                    | ✅                 | ❌                          |
| **Browser Rendering**                   | ✅                 | ✅                          |
| **D1**                                  | ✅                 | ✅                          |
| **Durable Objects**                     | ✅                 | ❌ [1](#user-content-fn-1)  |
| **Containers**                          | ✅                 | ❌                          |
| **Email Bindings**                      | ✅                 | ✅                          |
| **Hyperdrive**                          | ✅                 | ❌                          |
| **Images**                              | ✅                 | ✅                          |
| **KV**                                  | ✅                 | ✅                          |
| **Media Transformations**               | ❌                 | ✅                          |
| **mTLS**                                | ❌                 | ✅                          |
| **Queues**                              | ✅                 | ✅                          |
| **R2**                                  | ✅                 | ✅                          |
| **Rate Limiting**                       | ✅                 | ❌                          |
| **Service Bindings (multiple Workers)** | ✅                 | ✅                          |
| **Vectorize**                           | ❌                 | ✅                          |
| **Workflows**                           | ✅                 | ❌                          |

## Remote development

During remote development, all of your Worker code is uploaded and executed on Cloudflare's infrastructure, and bindings always connect to remote resources. **We recommend using local development with remote binding connections instead** for faster iteration and debugging.

Supported only in [wrangler dev --remote](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev); there is **no Vite plugin equivalent**.

| Binding                                 | Remote development |
| --------------------------------------- | ------------------ |
| **AI**                                  | ✅                  |
| **Assets**                              | ✅                  |
| **Analytics Engine**                    | ✅                  |
| **Browser Rendering**                   | ✅                  |
| **D1**                                  | ✅                  |
| **Durable Objects**                     | ✅                  |
| **Containers**                          | ❌                  |
| **Email Bindings**                      | ✅                  |
| **Hyperdrive**                          | ✅                  |
| **Images**                              | ✅                  |
| **KV**                                  | ✅                  |
| **Media Transformations**               | ✅                  |
| **mTLS**                                | ✅                  |
| **Queues**                              | ❌                  |
| **R2**                                  | ✅                  |
| **Rate Limiting**                       | ✅                  |
| **Service Bindings (multiple Workers)** | ✅                  |
| **Vectorize**                           | ✅                  |
| **Workflows**                           | ❌                  |

## Footnotes

1. Refer to [Using remote resources with Durable Objects and Workflows](https://developers.cloudflare.com/workers/development-testing/#using-remote-resources-with-durable-objects-and-workflows) for recommended workarounds. [↩](#user-content-fnref-1)


---

---
title: Environment variables and secrets
description: Configuring environment variables and secrets for local development
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Environment variables and secrets

Warning

Do not use `vars` to store sensitive information in your Worker's Wrangler configuration file. Use secrets instead.

For local development, put secrets in either a `.dev.vars` file or a `.env` file in the same directory as the Wrangler configuration file.

Note

You can use the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) to declare which secret names your Worker requires. When defined, only the keys listed in `secrets.required` are loaded from `.dev.vars` or `.env`. Additional keys are excluded and missing keys produce a warning.
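As a sketch of the shape this property takes (assuming the names below are your Worker's secrets; refer to the linked documentation for the exact syntax):

```jsonc
{
  "secrets": {
    "required": ["SECRET_KEY", "API_TOKEN"]
  }
}
```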

Choose to use either `.dev.vars` or `.env` but not both. If you define a `.dev.vars` file, then values in `.env` files will not be included in the `env` object during local development.

These files should be formatted using the [dotenv ↗](https://hexdocs.pm/dotenvy/dotenv-file-format.html) syntax. For example:

.dev.vars / .env

```
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
```

Do not commit secrets to git

The `.dev.vars` and `.env` files should not be committed to git. Add `.dev.vars*` and `.env*` to your project's `.gitignore` file.

To set different secrets for each Cloudflare environment, create files named `.dev.vars.<environment-name>` or `.env.<environment-name>`.

When you select a Cloudflare environment in your local development, the corresponding environment-specific file will be loaded ahead of the generic `.dev.vars` (or `.env`) file.

* When using `.dev.vars.<environment-name>` files, all secrets must be defined per environment. If `.dev.vars.<environment-name>` exists, only that file will be loaded; the `.dev.vars` file will not be loaded.
* In contrast, all matching `.env` files are loaded and the values are merged. For each variable, the value from the most specific file is used, with the following precedence:  
   * `.env.<environment-name>.local` (most specific)  
   * `.env.local`  
   * `.env.<environment-name>`  
   * `.env` (least specific)
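The merge order above can be sketched as plain logic (illustrative only; `loadedFiles` maps file names to already-parsed key-value objects, and is an assumption of this sketch, not a Wrangler API):

```javascript
// Illustrative sketch of the .env merge precedence described above.
// Files are applied from least to most specific, so for each variable
// the value from the most specific file wins.
function mergeDotEnv(loadedFiles, environmentName) {
  const ascendingPrecedence = [
    ".env", // least specific
    `.env.${environmentName}`,
    ".env.local",
    `.env.${environmentName}.local`, // most specific
  ];
  const merged = {};
  for (const file of ascendingPrecedence) {
    // Later (more specific) files override earlier ones.
    Object.assign(merged, loadedFiles[file] ?? {});
  }
  return merged;
}
```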

Controlling `.env` handling

It is possible to control how `.env` files are loaded in local development by setting environment variables on the process running the tools.

* To disable loading local dev vars from `.env` files without providing a `.dev.vars` file, set the `CLOUDFLARE_LOAD_DEV_VARS_FROM_DOT_ENV` environment variable to `"false"`.
* To include every environment variable defined in your system's process environment as a local development variable, ensure there is no `.dev.vars` and then set the `CLOUDFLARE_INCLUDE_PROCESS_ENV` environment variable to `"true"`. This is not needed when using the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property), which loads from `process.env` automatically.

### Basic setup

Here are steps to set up environment variables for local development using either `.dev.vars` or `.env` files.

1. Create a `.dev.vars` / `.env` file in your project root.
2. Add key-value pairs:  
.dev.vars/.env  
```  
API_HOST="localhost:3000"  
DEBUG="true"  
SECRET_TOKEN="my-local-secret-token"  
```
3. Run your `dev` command  
**Wrangler**  
 npm  yarn  pnpm  
```  
npx wrangler dev  
```  
```  
yarn wrangler dev  
```  
```  
pnpm wrangler dev  
```  
**Vite plugin**  
 npm  yarn  pnpm  
```  
npx vite dev  
```  
```  
yarn vite dev  
```  
```  
pnpm vite dev  
```

## Multiple local environments

To simulate different local environments, you can provide environment-specific files. For example, you might have a `staging` environment that requires different settings than your development environment.

1. Create a file named `.dev.vars.<environment-name>`/`.env.<environment-name>`. For example, we can use `.dev.vars.staging`/`.env.staging`.
2. Add key-value pairs:  
.dev.vars.staging/.env.staging  
```  
API_HOST="staging.localhost:3000"  
DEBUG="false"  
SECRET_TOKEN="staging-token"  
```
3. Specify the environment when running the `dev` command:  
**Wrangler**  
 npm  yarn  pnpm  
```  
npx wrangler dev --env staging  
```  
```  
yarn wrangler dev --env staging  
```  
```  
pnpm wrangler dev --env staging  
```  
**Vite plugin**  
 npm  yarn  pnpm  
```  
CLOUDFLARE_ENV=staging npx vite dev  
```  
```  
CLOUDFLARE_ENV=staging yarn vite dev  
```  
```  
CLOUDFLARE_ENV=staging pnpm vite dev  
```  
   * If using `.dev.vars.staging`, only the values from that file are applied; `.dev.vars` is ignored.  
   * If using `.env.staging`, the values will be merged with `.env` files, with the most specific file taking precedence.

## Learn more

* To learn how to configure multiple environments in Wrangler configuration, [read the documentation](https://developers.cloudflare.com/workers/wrangler/environments/#%5Ftop).
* To learn how to use Wrangler environments and Vite environments together, [read the Vite plugin documentation](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/).


---

---
title: Adding local data
description: Populating local resources with data
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Adding local data

Whether you are using Wrangler or the [Cloudflare Vite plugin ↗](https://developers.cloudflare.com/workers/vite-plugin/), your workflow for **accessing** data during local development remains the same. However, you can only [populate local resources with data](https://developers.cloudflare.com/workers/development-testing/local-data/#populating-local-resources-with-data) via the Wrangler CLI.

### How it works

When you run either `wrangler dev` or [vite ↗](https://vite.dev/guide/cli#dev-server), [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/) automatically creates **local versions** of your resources (like [KV](https://developers.cloudflare.com/kv), [D1](https://developers.cloudflare.com/d1/), or [R2](https://developers.cloudflare.com/r2)). This means you **don’t** need to manually set up separate local instances for each service. However, newly created local resources **won’t** contain any data — you'll need to use Wrangler commands with the `--local` flag to populate them. Changes made to local resources won’t affect production data.

## Populating local resources with data

When you first start developing, your local resources will be empty. You'll need to populate them with data using the Wrangler CLI.

### KV namespaces

Syntax note

Since version 3.60.0, Wrangler supports the `kv ...` syntax. If you are using versions below 3.60.0, the command follows the `kv:...` syntax. Learn more in the [Wrangler commands for KV page](https://developers.cloudflare.com/kv/reference/kv-commands/).

#### [Add a single key-value pair](https://developers.cloudflare.com/workers/wrangler/commands/kv/#kv-key)

 npm  yarn  pnpm 

```
npx wrangler kv key put <KEY> <VALUE> --binding=<BINDING> --local 
```

```
yarn wrangler kv key put <KEY> <VALUE> --binding=<BINDING> --local 
```

```
pnpm wrangler kv key put <KEY> <VALUE> --binding=<BINDING> --local 
```

#### [Bulk upload](https://developers.cloudflare.com/workers/wrangler/commands/kv/#kv-bulk)

 npm  yarn  pnpm 

```
npx wrangler kv bulk put <FILENAME.json> --binding=<BINDING> --local
```

```
yarn wrangler kv bulk put <FILENAME.json> --binding=<BINDING> --local
```

```
pnpm wrangler kv bulk put <FILENAME.json> --binding=<BINDING> --local
```

### R2 buckets

#### [Upload a file](https://developers.cloudflare.com/workers/wrangler/commands/r2/#r2-object)

 npm  yarn  pnpm 

```
npx wrangler r2 object put <BUCKET>/<KEY> --file=<PATH_TO_FILE> --local
```

```
yarn wrangler r2 object put <BUCKET>/<KEY> --file=<PATH_TO_FILE> --local
```

```
pnpm wrangler r2 object put <BUCKET>/<KEY> --file=<PATH_TO_FILE> --local
```

You may also include [other metadata](https://developers.cloudflare.com/workers/wrangler/commands/r2/#r2-object-put).

### D1 databases

#### [Execute a SQL statement](https://developers.cloudflare.com/workers/wrangler/commands/d1/#d1-execute)

 npm  yarn  pnpm 

```
npx wrangler d1 execute <DATABASE_NAME> --command="<SQL_QUERY>" --local
```

```
yarn wrangler d1 execute <DATABASE_NAME> --command="<SQL_QUERY>" --local
```

```
pnpm wrangler d1 execute <DATABASE_NAME> --command="<SQL_QUERY>" --local
```

#### [Execute a SQL file](https://developers.cloudflare.com/workers/wrangler/commands/d1/#d1-execute)

 npm  yarn  pnpm 

```
npx wrangler d1 execute <DATABASE_NAME> --file=./schema.sql --local
```

```
yarn wrangler d1 execute <DATABASE_NAME> --file=./schema.sql --local
```

```
pnpm wrangler d1 execute <DATABASE_NAME> --file=./schema.sql --local
```

### Durable Objects

For Durable Objects, unlike KV, D1, and R2, there are no CLI commands to populate them with local data. To add data to Durable Objects during local development, you must write application code that creates Durable Object instances and [calls methods on them that store state](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/). This typically involves creating development endpoints or test routes that initialize your Durable Objects with the desired data.

## Where local data gets stored

By default, both Wrangler and the Vite plugin store local binding data in the same location: the `.wrangler/state` folder in your project directory. This folder stores data in subdirectories for all local bindings: KV namespaces, R2 buckets, D1 databases, Durable Objects, etc.

### Clearing local storage

You can delete the `.wrangler/state` folder at any time to reset your local environment, and Miniflare will recreate it the next time you run your `dev` command. You can also delete specific sub-folders within `.wrangler/state` for more targeted clean-up.
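For example, from your project root (assuming the default persistence location; the `v3` subdirectory layout below is how recent Wrangler versions organize state and may change):

```shell
# Reset all local binding state; Miniflare recreates it on the next dev run.
rm -rf .wrangler/state

# Or clear just one service's data, for example local KV only:
rm -rf .wrangler/state/v3/kv
```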

### Changing the local data directory

If you prefer to specify a different directory for local storage, you can do so through the Wrangler CLI or in the Vite plugin's configuration.

#### Using Wrangler

Use the [\--persist-to](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) flag with `wrangler dev`. You need to specify this flag every time you run the `dev` command:

 npm  yarn  pnpm 

```
npx wrangler dev --persist-to <DIRECTORY>
```

```
yarn wrangler dev --persist-to <DIRECTORY>
```

```
pnpm wrangler dev --persist-to <DIRECTORY>
```

Note

The local persistence folder (like `.wrangler/state` or any custom folder you set) should be added to your `.gitignore` to avoid committing local development data to version control.

Using `--local` with `--persist-to`

If you run `wrangler dev --persist-to <DIRECTORY>` to specify a custom location for local data, you must also include the same `--persist-to <DIRECTORY>` when running other Wrangler commands that modify local data (and be sure to include the `--local` flag).

For example, to create a KV key named `test` with a value of `12345` in a local KV namespace, run:

 npm  yarn  pnpm 

```
npx wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local
```

```
yarn wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local
```

```
pnpm wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local
```

This command:

* Sets the KV key `test` to `12345` in the binding `MY_KV_NAMESPACE` (defined in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)).
* Uses `--persist-to worker-local` to ensure the data is created in the **worker-local** directory instead of the default `.wrangler/state`.
* Adds the `--local` flag, indicating you want to modify local data.

If `--persist-to` is not specified, Wrangler defaults to using `.wrangler/state` for local data.

#### Using the Cloudflare Vite plugin

To customize where the Vite plugin stores local data, configure the [persistState option](https://developers.cloudflare.com/workers/vite-plugin/reference/api/#interface-pluginconfig) in your Vite config file:

vite.config.js

```js
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      persistState: { path: "./my-custom-directory" },
    }),
  ],
});
```

#### Sharing state between tools

If you want Wrangler and the Vite plugin to share the same state, configure them to use the same persistence path.


---

---
title: Developing with multiple Workers
description: Learn how to develop with multiple Workers using different approaches and configurations.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Developing with multiple Workers

When building complex applications, you may want to run multiple Workers during development. This guide covers the different approaches for running multiple Workers locally and when to use each approach.

## Single dev command

Note

We recommend this approach as the default for most development workflows as it ensures the best compatibility with bindings.

You can run multiple Workers in a single dev command by passing multiple configuration files to your dev server:

**Using Wrangler**

 npm  yarn  pnpm 

```
npx wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc
```

```
yarn wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc
```

```
pnpm wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc
```

The first config (`./app/wrangler.jsonc`) is treated as the primary Worker, exposed at `http://localhost:8787`. Additional configs (e.g. `./api/wrangler.jsonc`) run as auxiliary Workers, available via service bindings or tail consumers from the primary Worker.
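For the service-binding case, the primary Worker's config declares a binding whose `service` value matches the auxiliary Worker's `name`. A sketch (the Worker and binding names here are illustrative):

```jsonc
{
  "name": "app",
  "main": "./src/index.js",
  "services": [
    // "api" must match the "name" field in ./api/wrangler.jsonc
    { "binding": "API", "service": "api" }
  ]
}
```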

**Using the Vite plugin**

Configure `auxiliaryWorkers` in your Vite configuration:

vite.config.js

```
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      configPath: "./app/wrangler.jsonc",
      auxiliaryWorkers: [
        {
          configPath: "./api/wrangler.jsonc",
        },
      ],
    }),
  ],
});
```

Then run:

 npm  yarn  pnpm 

```
npx vite dev
```

```
yarn vite dev
```

```
pnpm vite dev
```

**Use this approach when:**

* You want the simplest setup for development
* Workers are part of the same application or codebase
* You need to access a Durable Object namespace or Workflow from another Worker using `script_name`, or set up Queues where the producer and consumer Workers are separated.
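As an illustration of the service-binding setup above, here is a minimal sketch of a primary Worker forwarding requests to an auxiliary Worker (the `API` binding name is an assumption, not taken from the configs above):

```javascript
// In a real Worker this object would be the default export.
const worker = {
  async fetch(request, env) {
    // env.API is a service binding to the auxiliary Worker; calling
    // fetch on it invokes that Worker directly, with no network hop.
    return env.API.fetch(request);
  },
};
```

During `wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc`, requests reaching the primary Worker would then be handled by the auxiliary one.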

## Multiple dev commands

You can also run each Worker in separate dev commands, each with its own terminal and configuration.

 npm  yarn  pnpm 

```
# Terminal 1
npx wrangler dev -c ./app/wrangler.jsonc
```

```
# Terminal 1
yarn wrangler dev -c ./app/wrangler.jsonc
```

```
# Terminal 1
pnpm wrangler dev -c ./app/wrangler.jsonc
```

 npm  yarn  pnpm 

```
# Terminal 2
npx wrangler dev -c ./api/wrangler.jsonc
```

```
# Terminal 2
yarn wrangler dev -c ./api/wrangler.jsonc
```

```
# Terminal 2
pnpm wrangler dev -c ./api/wrangler.jsonc
```

These Workers run in different dev commands but can still communicate with each other via service bindings or tail consumers **regardless of whether they are started with `wrangler dev` or `vite dev`**.

Note

You can also combine both approaches — for example, run a group of Workers together through `vite dev` using `auxiliaryWorkers`, while running another Worker separately with `wrangler dev`. This allows you to keep tightly coupled Workers running under a single dev command, while keeping independent or shared Workers in separate ones.

**Use this approach when:**

* You want each Worker to be accessible on its own local URL during development, since only the primary Worker is exposed when using a single dev command
* Each Worker has its own build setup or tooling — for example, one uses Vite with custom plugins while another is a vanilla Wrangler project
* You need the flexibility to run and develop Workers independently without restructuring your project or consolidating configs

This setup is especially useful in larger projects where each team maintains a subset of Workers. Running everything in a single dev command might require significant restructuring or build integration that isn't always practical.


---

---
title: Testing
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Testing


---

---
title: Vite Plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vite Plugin


---

---
title: Choosing between Wrangler & Vite
description: Choosing between Wrangler and Vite for local development
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Choosing between Wrangler & Vite

Deciding between Wrangler and the Cloudflare Vite plugin depends on your project's focus and development workflow. Here are some quick guidelines to help you choose:

## When to use Wrangler

* **Backend & Workers-focused:** If you're primarily building APIs, serverless functions, or background tasks, use Wrangler.
* **Remote development:** If your project needs the ability to run your Worker remotely on Cloudflare's network, use Wrangler's `--remote` flag.
* **Simple frontends:** If you have minimal frontend requirements and don't need hot reloading or advanced bundling, Wrangler may be sufficient.

## When to use the Cloudflare Vite Plugin

Use the [Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) for:

* **Frontend-centric development:** If you already use Vite with modern frontend frameworks like React, Vue, Svelte, or Solid, the Vite plugin integrates into your development workflow.
* **React Router v7:** If you are using [React Router v7 ↗](https://reactrouter.com/) (the successor to Remix), it is officially supported by the Vite plugin as a full-stack SSR framework.
* **Rapid iteration (HMR):** If you need near-instant updates in the browser, the Vite plugin provides [Hot Module Replacement (HMR) ↗](https://vite.dev/guide/features.html#hot-module-replacement) during local development.
* **Advanced optimizations:** If you require more advanced optimizations (code splitting, efficient bundling, CSS handling, build time transformations, etc.), Vite is a strong fit.
* **Greater flexibility:** Due to Vite's advanced configuration options and large ecosystem of plugins, there is more flexibility to customize your development experience and build output.


---

---
title: Playground
description: The quickest way to experiment with Cloudflare Workers is in the Playground. It does not require any setup or authentication. The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Playground

Browser support

The Cloudflare Workers Playground is currently only supported in Firefox and Chrome desktop browsers. In Safari, it will show a `PreviewRequestFailed` error message.

The quickest way to experiment with Cloudflare Workers is in the [Playground ↗](https://workers.cloudflare.com/playground). It does not require any setup or authentication. The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser.

The Playground uses the same editor as the authenticated experience. The Playground provides the ability to [share](#share) the code you write as well as [deploy](#deploy) it instantly to Cloudflare's global network. This way, you can try new things out and deploy them when you are ready.

[ Launch the Playground ](https://workers.cloudflare.com/playground) 

## Hello Cloudflare Workers

When you arrive in the Playground, you will see this default code:

JavaScript

```
import welcome from "welcome.html";

/**
 * @typedef {Object} Env
 */

export default {
  /**
   * @param {Request} request
   * @param {Env} env
   * @param {ExecutionContext} ctx
   * @returns {Response}
   */
  fetch(request, env, ctx) {
    console.log("Hello Cloudflare Workers!");

    return new Response(welcome, {
      headers: {
        "content-type": "text/html",
      },
    });
  },
};
```

This is an example of a multi-module Worker that is receiving a [request](https://developers.cloudflare.com/workers/runtime-apis/request/), logging a message to the console, and then returning a [response](https://developers.cloudflare.com/workers/runtime-apis/response/) body containing the content from `welcome.html`.

Refer to the [Fetch handler documentation](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to learn more.

## Use the Playground

As you edit the default code, the Worker will auto-update such that the preview on the right shows your Worker running just as it would in a browser. If your Worker uses URL paths, you can enter those in the input field on the right to navigate to them. The Playground provides type-checking via JSDoc comments and [workers-types ↗](https://www.npmjs.com/package/@cloudflare/workers-types). The Playground also provides pretty error pages in the event of application errors.

To test a raw HTTP request (for example, to test a `POST` request), go to the **HTTP** tab and select **Send**. You can add and edit headers via this panel, as well as edit the body of a request.

## Log viewer

The Playground and the quick editor in the Workers dashboard include a lightweight log viewer at the bottom of the preview panel. The log viewer displays the output of any calls to `console.log` made during preview runs.

The log viewer supports the following:

* Logging primitive values, objects, and arrays.
* Clearing the log output between runs.

At this time, the log viewer does not support logging class instances or their properties (for example, `request.url`).

If you need a more complete development experience with full debugging capabilities, you can use [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/) locally. To clone an existing Worker from your dashboard for local development, sign up and use the [wrangler init --from-dash](https://developers.cloudflare.com/workers/wrangler/commands/general/#init) command once your Worker is deployed.

## Share

To share what you have created, select **Copy Link** in the top right of the screen. This will copy a unique URL to your clipboard that you can share with anyone. These links do not expire, so you can bookmark your creation and share it at any time. Users that open a shared link will see the Playground with the shared code and preview.

## Deploy

You can deploy a Worker from the Playground. If you are already logged in, you can review the Worker before deploying. Otherwise, you will be taken through the first-time user onboarding flow before you can review and deploy.

Once deployed, your Worker will get its own unique URL and be available almost instantly on Cloudflare's global network. From here, you can add [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), [storage resources](https://developers.cloudflare.com/workers/platform/storage-options/), and more.


---

---
title: Configuration
description: Worker configuration is managed through a Wrangler configuration file, which defines your project settings, bindings, and deployment options. Wrangler is the command-line tool used to develop, test, and deploy Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Configuration

Worker configuration is managed through a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), which defines your project settings, bindings, and deployment options. Wrangler is the command-line tool used to develop, test, and deploy Workers.

For more information on Wrangler, refer to [Wrangler](https://developers.cloudflare.com/workers/wrangler/).

* [ Bindings ](https://developers.cloudflare.com/workers/runtime-apis/bindings/)
* [ Compatibility dates ](https://developers.cloudflare.com/workers/configuration/compatibility-dates/)
* [ Compatibility flags ](https://developers.cloudflare.com/workers/configuration/compatibility-flags/)
* [ Cron Triggers ](https://developers.cloudflare.com/workers/configuration/cron-triggers/)
* [ Environment variables ](https://developers.cloudflare.com/workers/configuration/environment-variables/)
* [ Integrations ](https://developers.cloudflare.com/workers/configuration/integrations/)
* [ Multipart upload metadata ](https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/)
* [ Page Rules ](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/)
* [ Placement ](https://developers.cloudflare.com/workers/configuration/placement/)
* [ Preview URLs ](https://developers.cloudflare.com/workers/configuration/previews/)
* [ Routes and domains ](https://developers.cloudflare.com/workers/configuration/routing/)
* [ Secrets ](https://developers.cloudflare.com/workers/configuration/secrets/)
* [ Versions & Deployments ](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/)
* [ Workers Sites ](https://developers.cloudflare.com/workers/configuration/sites/)


---

---
title: Bindings (env)
description: Worker Bindings that allow for interaction with other Cloudflare Resources.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Bindings (env)

Bindings allow your Worker to interact with resources on the Cloudflare Developer Platform. Bindings provide better performance and fewer restrictions when accessing resources from Workers than the [REST APIs](https://developers.cloudflare.com/api/), which are intended for non-Workers applications.

The following bindings are available today:

* [ AI ](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/#2-connect-your-worker-to-workers-ai)
* [ Analytics Engine ](https://developers.cloudflare.com/analytics/analytics-engine)
* [ Assets ](https://developers.cloudflare.com/workers/static-assets/binding/)
* [ Browser Rendering ](https://developers.cloudflare.com/browser-rendering)
* [ D1 ](https://developers.cloudflare.com/d1/worker-api/)
* [ Dispatcher (Workers for Platforms) ](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/)
* [ Durable Objects ](https://developers.cloudflare.com/durable-objects/api/)
* [ Dynamic Worker Loaders ](https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/)
* [ Environment Variables ](https://developers.cloudflare.com/workers/configuration/environment-variables/)
* [ Hyperdrive ](https://developers.cloudflare.com/hyperdrive)
* [ Images ](https://developers.cloudflare.com/images/transform-images/bindings/)
* [ KV ](https://developers.cloudflare.com/kv/api/)
* [ Media Transformations ](https://developers.cloudflare.com/stream/transform-videos/bindings/)
* [ mTLS ](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls/)
* [ Queues ](https://developers.cloudflare.com/queues/configuration/javascript-apis/)
* [ R2 ](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/)
* [ Rate Limiting ](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/)
* [ Secrets ](https://developers.cloudflare.com/workers/configuration/secrets/)
* [ Secrets Store ](https://developers.cloudflare.com/secrets-store/integrations/workers/)
* [ Service bindings ](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/)
* [ Vectorize ](https://developers.cloudflare.com/vectorize/reference/client-api/)
* [ Version metadata ](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/)
* [ Workflows ](https://developers.cloudflare.com/workflows/)

## What is a binding?

When you declare a binding on your Worker, you grant it a specific capability, such as being able to read and write files to an [R2](https://developers.cloudflare.com/r2/) bucket. For example:

* [  wrangler.jsonc ](#tab-panel-7524)
* [  wrangler.toml ](#tab-panel-7525)

```
{
  "main": "./src/index.js",
  "r2_buckets": [
    {
      "binding": "MY_BUCKET",
      "bucket_name": "<MY_BUCKET_NAME>"
    }
  ]
}
```

```
main = "./src/index.js"

[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "<MY_BUCKET_NAME>"
```

* [  JavaScript ](#tab-panel-7510)
* [  Python ](#tab-panel-7511)

JavaScript

```
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);
    await env.MY_BUCKET.put(key, request.body);
    return new Response(`Put ${key} successfully!`);
  },
};
```

Python

```
from workers import WorkerEntrypoint, Response
from urllib.parse import urlparse

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    url = urlparse(request.url)
    key = url.path[1:]
    await self.env.MY_BUCKET.put(key, request.body)
    return Response(f"Put {key} successfully!")
```

You can think of a binding as a permission and an API in one piece. With bindings, you never have to add secret keys or tokens to your Worker in order to access resources on your Cloudflare account — the permission is embedded within the API itself. The underlying secret is never exposed to your Worker's code, and therefore can't be accidentally leaked.

## Making changes to bindings

When you deploy a change to your Worker, and only change its bindings (i.e. you don't change the Worker's code), Cloudflare may reuse existing isolates that are already running your Worker. This improves performance — you can change an environment variable or other binding without unnecessarily reloading your code.

As a result, you must be careful when "polluting" global scope with derivatives of your bindings. Anything you create there might continue to exist despite making changes to any underlying bindings. Consider an external client instance which uses a secret API key accessed from `env`: if you put this client instance in global scope and then make changes to the secret, a client instance using the original value might continue to exist. The correct approach would be to create a new client instance for each request.

The following is a good approach:

TypeScript

```
export default {
  fetch(request, env) {
    // `client` is guaranteed to be up-to-date with the latest value of
    // `env.MY_SECRET`, since a new instance is constructed with every incoming request
    let client = new Client(env.MY_SECRET);

    // ... do things with `client`
  },
};
```

Compared to this alternative, which might have surprising and unwanted behavior:

TypeScript

```
let client = undefined;

export default {
  fetch(request, env) {
    // `client` here might not be updated when `env.MY_SECRET` changes,
    // since it may already exist in global scope
    client ??= new Client(env.MY_SECRET);

    // ... do things with `client`
  },
};
```

If you have more advanced needs, explore the [AsyncLocalStorage API](https://developers.cloudflare.com/workers/runtime-apis/nodejs/asynclocalstorage/), which provides a mechanism for exposing values down to child execution handlers.
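As a sketch of that pattern (this uses Node's `AsyncLocalStorage` from `node:async_hooks`, which Workers support with the `nodejs_compat` compatibility flag; the store shape and function names are illustrative):

```javascript
import { AsyncLocalStorage } from "node:async_hooks";

// Establish a store once per request, then read it from deeply
// nested code without threading values through every signature.
const requestEnv = new AsyncLocalStorage();

function deeplyNested() {
  // Returns the store set by the enclosing run() call
  return `Hello, ${requestEnv.getStore().NAME}`;
}

function handle(env) {
  // Everything invoked inside this callback sees `env` via getStore()
  return requestEnv.run(env, () => deeplyNested());
}
```

Calling `handle({ NAME: "Alice" })` evaluates to `"Hello, Alice"` without passing the object through the intermediate call.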

## How to access `env`

Bindings are located on the `env` object, which can be accessed in several ways:

* It is an argument to entrypoint handlers such as [fetch](https://developers.cloudflare.com/workers/runtime-apis/fetch/):  
JavaScript  
```  
export default {  
  async fetch(request, env) {  
    return new Response(`Hi, ${env.NAME}`);  
  },  
};  
```
* It is a class property on [WorkerEntrypoint](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#bindings-env), [DurableObject](https://developers.cloudflare.com/durable-objects/), and [Workflow](https://developers.cloudflare.com/workflows/):  
   * [  JavaScript ](#tab-panel-7512)  
   * [  Python ](#tab-panel-7513)  
JavaScript  
```  
export class MyDurableObject extends DurableObject {  
  async sayHello() {  
    return `Hi, ${this.env.NAME}!`;  
  }  
}  
```  
Python  
```  
from workers import WorkerEntrypoint, Response  
class Default(WorkerEntrypoint):  
  async def fetch(self, request):  
    return Response(f"Hi {self.env.NAME}")  
```
* It can be imported from `cloudflare:workers`:  
   * [  JavaScript ](#tab-panel-7514)  
   * [  Python ](#tab-panel-7515)  
JavaScript  
```  
import { env } from "cloudflare:workers";  
console.log(`Hi, ${env.NAME}`);  
```  
Python  
```  
from workers import import_from_javascript  
env = import_from_javascript("cloudflare:workers").env  
print(f"Hi, {env.NAME}")  
```

### Importing `env` as a global

Importing `env` from `cloudflare:workers` is useful when you need to access a binding such as [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) or [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) in top-level global scope. For example, to initialize an API client:

* [  JavaScript ](#tab-panel-7516)
* [  Python ](#tab-panel-7517)

JavaScript

```
import { env } from "cloudflare:workers";
import ApiClient from "example-api-client";

// API_KEY and LOG_LEVEL now usable in top-level scope
let apiClient = ApiClient.new({ apiKey: env.API_KEY });
const LOG_LEVEL = env.LOG_LEVEL || "info";

export default {
  fetch(req) {
    // you can use apiClient or LOG_LEVEL, configured before any request is handled
  },
};
```

Python

```
from workers import WorkerEntrypoint, env
from example_api_client import ApiClient

api_client = ApiClient(api_key=env.API_KEY)
LOG_LEVEL = getattr(env, "LOG_LEVEL", "info")

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    # api_client and LOG_LEVEL are already configured
    ...
```

Workers do not allow I/O from outside a request context. This means that even though `env` is accessible from the top-level scope, you will not be able to access every binding's methods.

For instance, environment variables and secrets are accessible, and you are able to call `env.NAMESPACE.get` to get a [Durable Object stub](https://developers.cloudflare.com/durable-objects/api/stub/) in the top-level context. However, calling methods on the Durable Object stub, making [calls to a KV store](https://developers.cloudflare.com/kv/api/), and [calling to other Workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings) will not work.

* [  JavaScript ](#tab-panel-7518)
* [  Python ](#tab-panel-7519)

JavaScript

```
import { env } from "cloudflare:workers";

// This would error!
// env.KV.get('my-key')

export default {
  async fetch(req) {
    // This works
    let myVal = await env.KV.get("my-key");
    return new Response(myVal);
  },
};
```

Python

```
from workers import Response, WorkerEntrypoint, env

# This would fail!
# env.KV.get('my-key')

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    # This works
    my_val = await env.KV.get("my-key")
    return Response(my_val)
```

Additionally, importing `env` from `cloudflare:workers` lets you avoid passing `env` as an argument through many function calls if you need to access a binding from a deeply-nested function. This can be helpful in a complex codebase.

* [  JavaScript ](#tab-panel-7520)
* [  Python ](#tab-panel-7521)

JavaScript

```
import { env } from "cloudflare:workers";

export default {
  fetch(req) {
    return new Response(sayHello());
  },
};

// env is not an argument to sayHello...
function sayHello() {
  let myName = getName();
  return `Hello, ${myName}`;
}

// ...nor is it an argument to getName
function getName() {
  return env.MY_NAME;
}
```

Python

```
from workers import Response, WorkerEntrypoint, env

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    return Response(say_hello())

# env is not an argument to say_hello...
def say_hello():
  my_name = get_name()
  return f"Hello, {my_name}"

# ...nor is it an argument to get_name
def get_name():
  return env.MY_NAME
```

Note

While using `env` from `cloudflare:workers` may be simpler to write than passing it through a series of function calls, passing `env` as an argument is a helpful pattern for dependency injection and testing.
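A minimal sketch of that dependency-injection pattern (the names are illustrative): the helper receives `env` explicitly, so a test can hand it a stand-in object without any Workers runtime:

```javascript
// Accepting `env` as a parameter (instead of importing it) keeps
// the function testable with a plain stand-in object.
function greet(env) {
  return `Hi, ${env.NAME}`;
}

// In a handler you would call it as: greet(env)
// In a test, no Workers runtime is needed:
const result = greet({ NAME: "Test" });
```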

### Overriding `env` values

The `withEnv` function provides a mechanism for overriding values of `env`.

Imagine a user has defined the [environment variable](https://developers.cloudflare.com/workers/configuration/environment-variables/) "NAME" to be "Alice" in their Wrangler configuration file and deployed a Worker. By default, logging `env.NAME` would print "Alice". Using the `withEnv` function, you can override the value of "NAME".

* [  JavaScript ](#tab-panel-7522)
* [  Python ](#tab-panel-7523)

JavaScript

```
import { env, withEnv } from "cloudflare:workers";

function logName() {
  console.log(env.NAME);
}

export default {
  fetch(req) {
    // this will log "Alice"
    logName();

    withEnv({ NAME: "Bob" }, () => {
      // this will log "Bob"
      logName();
    });

    // ...etc...
  },
};
```

Python

```
from workers import Response, WorkerEntrypoint, env, patch_env

def log_name():
  print(env.NAME)

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    # this will log "Alice"
    log_name()

    with patch_env(NAME="Bob"):
      # this will log "Bob"
      log_name()

    # ...etc...
```

This can be useful when testing code that relies on an imported `env` object.


---

---
title: Compatibility dates
description: Opt into a specific version of the Workers runtime for your Workers project.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Compatibility dates

Cloudflare regularly updates the Workers runtime. These updates apply to all Workers globally and should never cause a Worker that is already deployed to stop functioning. Sometimes, though, a change may be backwards-incompatible. In particular, there might be bugs in the runtime API that existing Workers inadvertently depend upon. To avoid breaking deployed Workers, Cloudflare implements such bug fixes as opt-ins: new Workers can enable them, while existing Workers continue to see the buggy behavior.

The compatibility date and flags are how you, as a developer, opt into these runtime changes. [Compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags) will often have a date in which they are enabled by default, and so, by specifying a `compatibility_date` for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date.

## Setting compatibility date

When you start your project, you should always set `compatibility_date` to the current date. You should occasionally update the `compatibility_date` field. When updating, you should refer to the [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags) page to find out what has changed, and you should be careful to test your Worker to see if the changes affect you, updating your code as necessary. The new compatibility date takes effect when you next run the [npx wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) command.

There is no need to update your `compatibility_date` if you do not want to. The Workers runtime will support old compatibility dates forever. If, for some reason, Cloudflare finds it is necessary to make a change that will break live Workers, Cloudflare will actively contact affected developers. That said, Cloudflare aims to avoid this if at all possible.

However, even though you do not need to update the `compatibility_date` field, it is a good practice to do so for two reasons:

1. Sometimes, new features can only be made available to Workers that have a current `compatibility_date`. To access the latest features, you need to stay up-to-date.
2. Generally, other than the [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags) page, the Workers documentation may only describe the current `compatibility_date`, omitting information about historical behavior. If your Worker uses an old `compatibility_date`, you will need to continuously refer to the compatibility flags page in order to check if any of the APIs you are using have changed.

#### Via Wrangler

The compatibility date can be set in a Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

```jsonc
// wrangler.jsonc
{
  // Opt into backwards-incompatible changes through April 5, 2022.
  "compatibility_date": "2022-04-05"
}
```

```toml
# wrangler.toml
compatibility_date = "2022-04-05"
```

#### Via the Cloudflare Dashboard

When a Worker is created through the Cloudflare Dashboard, the compatibility date is automatically set to the current date.

The compatibility date can be updated in the Workers settings on the [Cloudflare dashboard ↗](https://dash.cloudflare.com/).

#### Via the Cloudflare API

The compatibility date can be set when uploading a Worker using the [Workers Script API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field.

If a compatibility date is not specified on upload via the API, it defaults to the oldest compatibility date, before any flags took effect (2021-11-02). When creating new Workers, it is highly recommended to set the compatibility date to the current date when uploading via the API.
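For illustration, the `metadata` field in an upload request body might carry the date alongside the entry point (a minimal sketch; `main_module` names the uploaded module and is shown here only as an example, not the full set of metadata fields):

```json
{
  "main_module": "index.js",
  "compatibility_date": "2025-01-01"
}
```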


---

---
title: Compatibility flags
description: Opt into specific features of the Workers runtime for your Workers project.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Compatibility flags

Compatibility flags enable specific features. They can be useful if you want to help the Workers team test upcoming changes that are not yet enabled by default, or if you need to hold back a change that your code depends on but still want to apply other compatibility changes.

Compatibility flags often have a date on which they become enabled by default. By specifying a [compatibility\_date](https://developers.cloudflare.com/workers/configuration/compatibility-dates) for your Worker, you enable all compatibility flags with default dates up to, and including, that date.

## Setting compatibility flags

You may provide a list of `compatibility_flags`, which enable or disable specific changes.

#### Via Wrangler

Compatibility flags can be set in a Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

This example enables the specific flag `formdata_parser_supports_files`, which is described [below](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#formdata-parsing-supports-file). As of the specified date, `2021-09-14`, this particular flag was not yet enabled by default, but, by specifying it in `compatibility_flags`, we can enable it anyway. `compatibility_flags` can also be used to disable changes that became the default in the past.

```jsonc
// wrangler.jsonc
{
  // Opt into backwards-incompatible changes through September 14, 2021.
  "compatibility_date": "2021-09-14",
  // Also opt into an upcoming fix to the FormData API.
  "compatibility_flags": [
    "formdata_parser_supports_files"
  ]
}
```

```toml
# wrangler.toml
compatibility_date = "2021-09-14"
compatibility_flags = [ "formdata_parser_supports_files" ]
```

#### Via the Cloudflare Dashboard

Compatibility flags can be updated in the Workers settings on the [Cloudflare dashboard ↗](https://dash.cloudflare.com/).

#### Via the Cloudflare API

Compatibility flags can be set when uploading a Worker using the [Workers Script API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field.

## Node.js compatibility flag

Note

[The nodejs\_compat flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. The v2 flag improves runtime Node.js compatibility by bundling additional polyfills and globals into your Worker. However, this improvement increases bundle size.

If your compatibility date is 2024-09-22 or earlier and you want to enable v2, add the `nodejs_compat_v2` flag in addition to the `nodejs_compat` flag. If your compatibility date is 2024-09-23 or later but you want to disable v2 to avoid increasing your bundle size, add the `no_nodejs_compat_v2` flag in addition to the `nodejs_compat` flag.

A [growing subset](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) of Node.js APIs are available directly as [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/nodejs), with no need to add polyfills to your own code. To enable both the built-in runtime APIs and polyfills for your Worker or Pages project, add the [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag) to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) and set your compatibility date to 2024-09-23 or later. This enables [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) for your Workers project.

```jsonc
// wrangler.jsonc
{
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03"
}
```

```toml
# wrangler.toml
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"
```


As additional Node.js APIs are added, they will be made available under the `nodejs_compat` compatibility flag. Unlike most other compatibility flags, `nodejs_compat` is not expected to become active by default at a future date.

The Node.js `AsyncLocalStorage` API is a particularly useful feature for Workers. To enable only the `AsyncLocalStorage` API, use the `nodejs_als` compatibility flag.

```jsonc
// wrangler.jsonc
{
  "compatibility_flags": [
    "nodejs_als"
  ]
}
```

```toml
# wrangler.toml
compatibility_flags = [ "nodejs_als" ]
```
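As a quick illustration of what this unlocks, here is a minimal `AsyncLocalStorage` sketch (plain Node.js-style code; the `requestId` store shape is made up for the example — in a Worker with `nodejs_als`, the import works the same way):

```javascript
// A store set by als.run() is visible to any code called within that
// scope, without threading it through function arguments explicitly.
import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage();

function currentRequestId() {
  // Returns the store for the active context, or undefined outside run().
  return als.getStore()?.requestId;
}

const inside = als.run({ requestId: "req-42" }, () => currentRequestId());
const outside = currentRequestId();

console.log(inside); // "req-42"
console.log(outside); // undefined
```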

## Flags history

Newest flags are listed first.

### Use an isolated PID namespace for containers

| **Default as of**   | 2026-04-01                     |
| ------------------- | ------------------------------ |
| **Flag to enable**  | containers\_pid\_namespace     |
| **Flag to disable** | no\_containers\_pid\_namespace |

When `containers_pid_namespace` is set, containers will use an isolated PID namespace. The `ENTRYPOINT` of your container will have PID 1.

When unset, the container shares the PID namespace with the virtual machine (VM) containing the container. The `ENTRYPOINT` of your container will _not_ have PID 1 and other processes running on the VM (that are not part of your container) will be visible.

### Durable Object `deleteAll()` deletes alarms

| **Default as of**   | 2026-02-24                    |
| ------------------- | ----------------------------- |
| **Flag to enable**  | delete\_all\_deletes\_alarm   |
| **Flag to disable** | delete\_all\_preserves\_alarm |

With the `delete_all_deletes_alarm` flag set, calling `deleteAll()` on a Durable Object's storage will delete any active alarm in addition to all stored data. Previously, `deleteAll()` only deleted user-stored data, and alarms required a separate `deleteAlarm()` call to remove. This change applies to both KV-backed and SQLite-backed Durable Objects.

### Duplicate stubs in RPC params instead of transferring ownership

| **Default as of**   | 2026-01-20                   |
| ------------------- | ---------------------------- |
| **Flag to enable**  | rpc\_params\_dup\_stubs      |
| **Flag to disable** | rpc\_params\_transfer\_stubs |

Changes the ownership semantics of RPC stubs embedded in the parameters of an RPC call, fixing compatibility issues with [Cap'n Web ↗](https://github.com/cloudflare/capnweb).

When the [Workers RPC system](https://developers.cloudflare.com/workers/runtime-apis/rpc/) was first introduced, RPC stubs that were embedded in the params or return value of some other call had their ownership transferred. That is, the original stub was implicitly disposed, with a duplicate stub being delivered to the destination.

This turns out to compose poorly with another rule: in the callee, any stubs received in the params of a call are automatically disposed when the call returns. Together, these rules mean that if you proxy a call (that is, the implementation of an RPC just makes another RPC call passing along the same params), then any stubs in the params get disposed twice. Worse, if the eventual recipient of the stub wants to keep a duplicate past the end of the call, this may not work, because the copy of the stub in the proxy layer gets disposed anyway, breaking the connection.

For this reason, the pure-JS implementation of Cap'n Web switched to saying that stubs in params do NOT transfer ownership; they are simply duplicated. This compat flag fixes the Workers Runtime built-in RPC to match Cap'n Web behavior.

One common use case that this fixes is clients that subscribe to callbacks from a Durable Object via Cap'n Web. In this use case, the client app passes a callback function over a Cap'n Web WebSocket to a stateless Worker, which in turn forwards the stub over Workers RPC to a Durable Object. The Durable Object stores a [dup()](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/#the-dup-method) of the stub in order to call it back later to notify the client of events. Unfortunately, before this flag, this didn't work: as soon as the subscribe function itself returned, the Cap'n Web stub in the stateless worker would be disposed (because it was a parameter to a call that returned, and it was not `dup()`ed within the context of the stateless worker). Hence, when the Durable Object later tried to call the subscription callback, it would receive "Error: RPC stub used after being disposed", despite the fact that it had carefully `dup()`ed the stub at its end.

### Enable ctx.exports

| **Default as of**   | 2025-11-17            |
| ------------------- | --------------------- |
| **Flag to enable**  | enable\_ctx\_exports  |
| **Flag to disable** | disable\_ctx\_exports |

This flag enables [the ctx.exports API](https://developers.cloudflare.com/workers/runtime-apis/context/#exports), which contains automatically-configured loopback bindings for your Worker's top-level exports. This allows you to skip configuring explicit bindings for your `WorkerEntrypoint`s and Durable Object namespaces defined in the same Worker.

### Automatic tracing

| **Flag to enable** | enable\_workers\_observability\_tracing |
| ------------------ | --------------------------------------- |

This flag will enable [Workers Tracing](https://developers.cloudflare.com/workers/observability/traces/) by default if you have the following configured in your Wrangler configuration file:

```jsonc
{
  "observability": {
    "enabled": true
  }
}
```

You can also explicitly turn on automatic tracing without the flag and with older compatibility dates by setting the following:

```jsonc
{
  "observability": {
    "traces": {
      "enabled": true
    }
  }
}
```

### Enable `process` v2 implementation

| **Default as of**   | 2025-09-15                   |
| ------------------- | ---------------------------- |
| **Flag to enable**  | enable\_nodejs\_process\_v2  |
| **Flag to disable** | disable\_nodejs\_process\_v2 |

When enabled (the default for compatibility dates of 2025-09-15 or later), the `enable_nodejs_process_v2` flag, together with the [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) compat flag, provides a comprehensive Node.js-compatible `process` implementation, replacing the previous minimal implementation that only provided the `nextTick`, `env`, `exit`, `getBuiltinModule`, `platform`, and `features` properties.

To continue using the previous minimal implementation after the compat date, set the `disable_nodejs_process_v2` flag instead.

Most Node.js-supported process properties are implemented where possible, with undefined exports for unsupported features. See the [process documentation](https://developers.cloudflare.com/workers/runtime-apis/nodejs/process/) for Workers-specific implementation details.

### Enable Node.js HTTP server modules

| **Default as of**   | 2025-09-01                             |
| ------------------- | -------------------------------------- |
| **Flag to enable**  | enable\_nodejs\_http\_server\_modules  |
| **Flag to disable** | disable\_nodejs\_http\_server\_modules |

The `enable_nodejs_http_server_modules` flag enables the availability of Node.js HTTP server modules such as `node:_http_server` in Workers.

The `disable_nodejs_http_server_modules` flag disables the availability of these server modules.

This enables compatibility with Node.js libraries and existing code that use the standard Node.js HTTP server APIs. The available functionality includes:

* `http.createServer()` for creating HTTP servers
* `http.Server` class for server instances
* `http.ServerResponse` for handling server responses

This flag must be used in combination with the `enable_nodejs_http_modules` flag to enable full features of `node:http`.

This flag is automatically enabled for Workers using a compatibility date of 2025-09-01 or later when `nodejs_compat` is enabled.

See the [Node.js documentation ↗](https://nodejs.org/docs/latest/api/http.html) for more details about the Node.js HTTP APIs.

### Enable availability of `node:http` and `node:https` modules

| **Default as of**   | 2025-08-15                     |
| ------------------- | ------------------------------ |
| **Flag to enable**  | enable\_nodejs\_http\_modules  |
| **Flag to disable** | disable\_nodejs\_http\_modules |

The `enable_nodejs_http_modules` flag enables the availability of the Node.js `node:http` and `node:https` modules in Workers (client APIs only).

The `disable_nodejs_http_modules` flag disables the availability of these modules.

This enables compatibility with Node.js libraries and existing code that use the standard node:http and node:https APIs for making HTTP requests. The available functionality includes:

* `http.request()` and `https.request()` for making HTTP/HTTPS requests
* `http.get()` and `https.get()` for making GET requests
* Request and response objects with standard Node.js APIs
* Support for standard HTTP methods, headers, and options

See the [Node.js documentation ↗](https://nodejs.org/docs/latest/api/http.html) for more details about the Node.js APIs.

### Expose global MessageChannel and MessagePort

| **Default as of**   | 2025-08-15                           |
| ------------------- | ------------------------------------ |
| **Flag to enable**  | expose\_global\_message\_channel     |
| **Flag to disable** | no\_expose\_global\_message\_channel |

When the `expose_global_message_channel` flag is set, Workers will expose the `MessageChannel` and `MessagePort` constructors globally.

When the `no_expose_global_message_channel` flag is set, Workers will not expose these.

### Disable global handlers for Python Workers

| **Default as of**   | 2025-08-14                            |
| ------------------- | ------------------------------------- |
| **Flag to enable**  | python\_no\_global\_handlers          |
| **Flag to disable** | disable\_python\_no\_global\_handlers |

When the `python_no_global_handlers` flag is set, Python Workers will disable the global handlers and enforce their use via default entrypoint classes.

### Enable `cache: no-cache` HTTP standard API

| **Default as of**   | 2025-08-07                 |
| ------------------- | -------------------------- |
| **Flag to enable**  | cache\_no\_cache\_enabled  |
| **Flag to disable** | cache\_no\_cache\_disabled |

When you enable the `cache_no_cache_enabled` compatibility flag, you can specify the `no-cache` value for the `cache` property of the Request interface. When this compatibility flag is not enabled, or `cache_no_cache_disabled` is set, the Workers runtime will throw a `TypeError` saying `Unsupported cache mode: no-cache`.

When this flag is enabled, you can instruct Cloudflare to force its cache to revalidate the response from a subrequest you make from your Worker using the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/):

When `no-cache` is specified:

* All requests have the `Pragma: no-cache` and `Cache-Control: no-cache` headers set on them.
* Subrequests to origins not hosted by Cloudflare force Cloudflare's cache to revalidate with the origin.

Revalidating with the origin means that the Worker request will first look for a match in Cloudflare's cache, then:

* If there is a match, a conditional request is sent to the origin, regardless of whether or not the match is fresh or stale. If the resource has not changed, the cached version is returned. If the resource has changed, it will be downloaded from the origin, updated in the cache, and returned.
* If there is no match, Workers will make a standard request to the origin and cache the response.

Examples using `cache: 'no-cache'`:

```js
const response = await fetch("https://example.com", { cache: "no-cache" });
```

The cache value can also be set on a `Request` object.

```js
const request = new Request("https://example.com", { cache: "no-cache" });
const response = await fetch(request);
```

### Set the `this` value of EventTarget event handlers

| **Default as of**   | 2025-08-01                   |
| ------------------- | ---------------------------- |
| **Flag to enable**  | set\_event\_target\_this     |
| **Flag to disable** | no\_set\_event\_target\_this |

When the `set_event_target_this` flag is set, Workers will set the `this` value of event handlers to the `EventTarget` instance that the event is being dispatched on. This is compliant with the specification.

When the `no_set_event_target_this` flag is set, Workers will not set the `this` value of event handlers, and it will be `undefined` instead.
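A minimal sketch of the difference, using the standard `EventTarget` API in plain JavaScript:

```javascript
// With a regular function listener, the spec-compliant behavior (and
// set_event_target_this) is that `this` inside the handler is the
// EventTarget the event was dispatched on.
const target = new EventTarget();
let receivedThis;

target.addEventListener("ping", function () {
  receivedThis = this; // the EventTarget itself under set_event_target_this
});

target.dispatchEvent(new Event("ping"));
console.log(receivedThis === target);
```

With `no_set_event_target_this`, `receivedThis` would instead be `undefined`.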

### Set forwardable email full headers

| **Default as of**   | 2025-08-01                               |
| ------------------- | ---------------------------------------- |
| **Flag to enable**  | set\_forwardable\_email\_full\_headers   |
| **Flag to disable** | set\_forwardable\_email\_single\_headers |

The original version of the headers sent to the Worker was truncated to a single value for specific header names, such as `To` and `Cc`. With the `set_forwardable_email_full_headers` flag set, the Worker script will receive the full header values.

### Pedantic Web Platform Tests (WPT) compliance

| **Flag to enable**  | pedantic\_wpt      |
| ------------------- | ------------------ |
| **Flag to disable** | non\_pedantic\_wpt |

The `pedantic_wpt` flag enables strict compliance with Web Platform Tests (WPT) in Workers. Initially this only affects the `Event` and `EventTarget` APIs, but it will be expanded to other APIs in the future. There is no default enable date for this flag.

### Bind AsyncLocalStorage snapshots to the request

| **Default as of**   | 2025-06-16                                     |
| ------------------- | ---------------------------------------------- |
| **Flag to enable**  | bind\_asynclocalstorage\_snapshot\_to\_request |
| **Flag to disable** | do\_not\_bind\_asynclocalstorage\_snapshot\_to |

The AsyncLocalStorage frame can capture values that are bound to the current request context. This is not always in the user's control, since the ALS storage frame propagates internal trace spans as well as user-provided values. When the `bind_asynclocalstorage_snapshot_to_request` flag is set, the runtime binds the snapshot / bound functions to the current request context and will throw an error if the bound functions are called outside of the request in which they were created.

The `do_not_bind_asynclocalstorage_snapshot_to` flag disables this behavior.

### Throw on unrecognized import assertions

| **Default as of**   | 2025-06-16                                 |
| ------------------- | ------------------------------------------ |
| **Flag to enable**  | throw\_on\_unrecognized\_import\_assertion |
| **Flag to disable** | ignore\_unrecognized\_import\_assertion    |

The `throw_on_unrecognized_import_assertion` flag controls how Workers handle import attributes that are not recognized by the runtime. Previously, Workers would ignore all import attributes, which is not compliant with the specification. Runtimes are expected to throw an error when an import attribute is encountered that is not recognized.

When the `ignore_unrecognized_import_assertion` flag is set, Workers will ignore unrecognized import attributes.

### Enable eval during startup

| **Default as of**   | 2025-06-01                      |
| ------------------- | ------------------------------- |
| **Flag to enable**  | allow\_eval\_during\_startup    |
| **Flag to disable** | disallow\_eval\_during\_startup |

When the `allow_eval_during_startup` flag is set, Workers can use `eval()` and `new Function(text)` during the startup phase of a Worker script. This allows for dynamic code execution at the beginning of a Worker's lifecycle.

When the `disallow_eval_during_startup` flag is set, using `eval()` or `new Function(text)` during the startup phase will throw an error.

### Enable `Request.signal` for incoming requests

| **Flag to enable**  | enable\_request\_signal  |
| ------------------- | ------------------------ |
| **Flag to disable** | disable\_request\_signal |

When you use the `enable_request_signal` compatibility flag, you can attach an event listener to [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) objects, using the [signal property ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request/signal). This allows you to perform tasks when the request to your Worker is canceled by the client.

### Enable `navigator.language`

| **Default as of**   | 2025-05-19                   |
| ------------------- | ---------------------------- |
| **Flag to enable**  | enable\_navigator\_language  |
| **Flag to disable** | disable\_navigator\_language |

When the `enable_navigator_language` flag is set, the `navigator.language` property will be available in Workers. For now, the value of `navigator.language` will always be `en`.

When the `disable_navigator_language` flag is set, the `navigator.language` property will not be available.

### Disallowing importable environment

| **Flag to enable**  | disallow\_importable\_env |
| ------------------- | ------------------------- |
| **Flag to disable** | allow\_importable\_env    |

When the `disallow_importable_env` flag is enabled, Workers will not allow importing the environment variables via the `cloudflare:workers` module and will not populate the environment variables in the global `process.env` object when Node.js compatibility is enabled.

There is no default enabled date for this flag.

### Enable `FinalizationRegistry` and `WeakRef`

| **Default as of**   | 2025-05-05         |
| ------------------- | ------------------ |
| **Flag to enable**  | enable\_weak\_ref  |
| **Flag to disable** | disable\_weak\_ref |

Enables the use of [FinalizationRegistry ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/FinalizationRegistry) and [WeakRef ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/WeakRef) built-ins.

* `FinalizationRegistry` allows you to register a cleanup callback that runs after an object has been garbage-collected.
* `WeakRef` creates a weak reference to an object, allowing it to be garbage-collected if no other strong references exist.
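The guarantees (and non-guarantees) above can be sketched in plain JavaScript (the `{ value: 42 }` object is an arbitrary example):

```javascript
// deref() returns the target while it is still strongly reachable; once
// the last strong reference is dropped, deref() MAY return undefined
// after a (non-deterministic) garbage-collection pass.
let obj = { value: 42 };
const ref = new WeakRef(obj);

console.log(ref.deref().value); // 42 — obj is still strongly referenced

obj = null;
// From here on, ref.deref() may return the object or undefined, depending
// on whether GC has run. Never gate program logic on this.
```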

Behaviour

`FinalizationRegistry` cleanup callbacks may execute at any point during your request lifecycle, even after your invoked handler has completed (similar to `ctx.waitUntil()`). These callbacks do not have an associated async context. You cannot perform any I/O within them, including emitting events to a tail Worker.

These APIs are fundamentally non-deterministic. The timing and execution of garbage collection are unpredictable, and you **should not rely on them for essential program logic**. Additionally, cleanup callbacks registered with `FinalizationRegistry` may **never be executed**, including but not limited to cases where garbage collection is not triggered, or your Worker gets evicted.

### Passthrough AbortSignal of incoming request to subrequests

| **Flag to enable**  | request\_signal\_passthrough     |
| ------------------- | -------------------------------- |
| **Flag to disable** | no\_request\_signal\_passthrough |

When the `request_signal_passthrough` flag is set, the `AbortSignal` of an incoming request will be passed through to subrequests when the request is forwarded using the `fetch()` API.

When the `no_request_signal_passthrough` flag is set, the `AbortSignal` of the incoming request will not be passed through.

### Navigation requests prefer asset serving

| **Default as of**   | 2025-04-01                                  |
| ------------------- | ------------------------------------------- |
| **Flag to enable**  | assets\_navigation\_prefers\_asset\_serving |
| **Flag to disable** | assets\_navigation\_has\_no\_effect         |

For Workers with [static assets](https://developers.cloudflare.com/workers/static-assets/) and this compatibility flag enabled, navigation requests (requests which have a `Sec-Fetch-Mode: navigate` header) will prefer to be served by our asset-serving logic, even when an exact asset match cannot be found. This is particularly useful for applications which operate in either [Single Page Application (SPA) mode](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/) or have [custom 404 pages](https://developers.cloudflare.com/workers/static-assets/routing/static-site-generation/#custom-404-pages), as this now means the fallback pages of `200 /index.html` and `404 /404.html` will be served ahead of invoking a Worker script and will therefore avoid incurring a charge.

Without this flag, the runtime will continue to apply the old behavior of invoking a Worker script (if present) for any requests which do not exactly match a static asset.

When `assets.run_worker_first = true` is set, this compatibility flag has no effect. The `assets.run_worker_first = true` setting ensures the Worker script executes before any asset-serving logic.
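For reference, `run_worker_first` lives under the `assets` key in the Wrangler configuration (a minimal sketch; the `./public` directory name is illustrative):

```jsonc
{
  "assets": {
    "directory": "./public",
    "run_worker_first": true
  }
}
```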

### Enable auto-populating `process.env`

| **Default as of**   | 2025-04-01                                      |
| ------------------- | ----------------------------------------------- |
| **Flag to enable**  | nodejs\_compat\_populate\_process\_env          |
| **Flag to disable** | nodejs\_compat\_do\_not\_populate\_process\_env |

When you enable the `nodejs_compat_populate_process_env` compatibility flag and the [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) flag is also enabled, `process.env` will be populated with values from any bindings with text or JSON values. This means that if you have added [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/), [secrets](https://developers.cloudflare.com/workers/configuration/secrets/), or [version metadata](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/) bindings, these values can be accessed on `process.env`.

```js
const apiClient = ApiClient.new({ apiKey: process.env.API_KEY });
const LOG_LEVEL = process.env.LOG_LEVEL || "info";
```

This makes accessing these values easier and conforms to common Node.js patterns, which can reduce toil and help with compatibility for existing Node.js libraries.

If users do not wish for these values to be accessible via `process.env`, they can use the `nodejs_compat_do_not_populate_process_env` flag. In this case, `process.env` will still be available, but will not have values automatically added.

If the `disallow_importable_env` compatibility flag is set, `process.env` will also not be populated.

### Queue consumers don't wait for `ctx.waitUntil()` to resolve

| **Flag to enable** | queue\_consumer\_no\_wait\_for\_wait\_until |
| ------------------ | ------------------------------------------- |

By default, [Queues](https://developers.cloudflare.com/queues/) Consumer Workers acknowledge messages only after promises passed to [ctx.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/context) have resolved. This behavior can cause queue consumers which utilize `ctx.waitUntil()` to process messages slowly. The default behavior is documented in the [Queues Consumer Configuration Guide](https://developers.cloudflare.com/queues/configuration/javascript-apis#consumer).

This Consumer Worker is an example of a Worker which utilizes `ctx.waitUntil()`. Under the default behavior, this consumer Worker will only acknowledge a batch of messages after the sleep function has resolved.

```js
export default {
  async fetch(request, env, ctx) {
    // omitted
  },

  async queue(batch, env, ctx) {
    console.log(`received batch of ${batch.messages.length} messages to queue ${batch.queue}`);
    for (let i = 0; i < batch.messages.length; ++i) {
      console.log(`message #${i}: ${JSON.stringify(batch.messages[i])}`);
    }
    ctx.waitUntil(sleep(30 * 1000));
  },
};

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

If the `queue_consumer_no_wait_for_wait_until` flag is enabled, Queues consumers will no longer wait for promises passed to `ctx.waitUntil()` to resolve before acknowledging messages. This can improve the performance of queue consumers which utilize `ctx.waitUntil()`. With the flag enabled, in the above example, the consumer Worker will acknowledge the batch without waiting for the sleep function to resolve.

Using this flag does not otherwise change the behavior of `ctx.waitUntil()`: it will still extend the lifetime of your consumer Worker, so work can continue even after the batch of messages has been acknowledged.

### Apply TransformStream backpressure fix

| **Default as of**   | 2024-12-16                             |
| ------------------- | -------------------------------------- |
| **Flag to enable**  | fixup-transform-stream-backpressure    |
| **Flag to disable** | original-transform-stream-backpressure |

The original implementation of `TransformStream` included a bug that would cause backpressure signaling to fail after the first write to the transform. Unfortunately, the fix can cause existing code written to address the bug to fail. Therefore, the `fixup-transform-stream-backpressure` compat flag is provided to enable the fix.

The fix is enabled by default with compatibility dates of 2024-12-16 or later.

To restore the original backpressure logic, disable the fix using the `original-transform-stream-backpressure` flag.

### Disable top-level await in require(...)

| **Default as of**   | 2024-12-02                              |
| ------------------- | --------------------------------------- |
| **Flag to enable**  | disable\_top\_level\_await\_in\_require |
| **Flag to disable** | enable\_top\_level\_await\_in\_require  |

Workers implements the ability to use the Node.js style `require(...)` method to import modules in the Worker bundle. Historically, this mechanism allowed required modules to use top-level await. This, however, is not Node.js compatible.

The `disable_top_level_await_in_require` compat flag will cause `require()` to fail if the module uses a top-level await. This flag is enabled by default with a compatibility date of 2024-12-02 or later.

To restore the original behavior allowing top-level await, use the `enable_top_level_await_in_require` compatibility flag.
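If a required module previously relied on top-level await, one common workaround is to move the asynchronous setup into a lazily awaited initializer. A minimal sketch, assuming a hypothetical module whose config used to be resolved at the top level:

JavaScript

```
// Instead of `const cachedConfig = await setup();` at the top level of a
// required module (which now fails), expose an async initializer.
let cachedConfig;

async function loadConfig() {
  if (cachedConfig === undefined) {
    // Stands in for whatever async setup previously ran at the top level.
    cachedConfig = await Promise.resolve({ logLevel: "info" });
  }
  return cachedConfig;
}
```

Callers then `await loadConfig()` where the value is needed, instead of depending on the module having awaited during `require()`.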

### Enable `cache: no-store` HTTP standard API

| **Default as of**   | 2024-11-11              |
| ------------------- | ----------------------- |
| **Flag to enable**  | cache\_option\_enabled  |
| **Flag to disable** | cache\_option\_disabled |

When you enable the `cache_option_enabled` compatibility flag, you can specify a value for the `cache` property of the Request interface. When this compatibility flag is not enabled, or `cache_option_disabled` is set, the Workers runtime will throw an `Error` saying `The 'cache' field on 'RequestInitializerDict' is not implemented.`

When this flag is enabled, you can instruct Cloudflare not to cache the response from a subrequest you make from your Worker using the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/).

The only cache option enabled with `cache_option_enabled` is `'no-store'`. Specifying any other value will cause the Workers runtime to throw a `TypeError` with the message `Unsupported cache mode: <the-mode-you-specified>`.

When `no-store` is specified:

* The headers `Pragma: no-cache` and `Cache-Control: no-cache` are set on the request.
* Subrequests to origins not hosted by Cloudflare bypass Cloudflare's cache.

Examples using `cache: 'no-store'`:

JavaScript

```
const response = await fetch("https://example.com", { cache: "no-store" });
```

The cache value can also be set on a `Request` object.

JavaScript

```
const request = new Request("https://example.com", { cache: "no-store" });
const response = await fetch(request);
```

### Global fetch() strictly public

| **Flag to enable**  | global\_fetch\_strictly\_public |
| ------------------- | ------------------------------- |
| **Flag to disable** | global\_fetch\_private\_origin  |

When the `global_fetch_strictly_public` compatibility flag is enabled, the global [fetch() function](https://developers.cloudflare.com/workers/runtime-apis/fetch/) will strictly route requests as if they were made on the public Internet.

This means requests to a Worker's own zone will loop back to the "front door" of Cloudflare and will be treated like a request from the Internet, possibly even looping back to the same Worker again.

When the `global_fetch_strictly_public` flag is not enabled, such requests are routed to the zone's origin server, ignoring any Workers mapped to the URL and also bypassing Cloudflare security settings.

### Upper-case HTTP methods

| **Default as of**   | 2024-10-14                          |
| ------------------- | ----------------------------------- |
| **Flag to enable**  | upper\_case\_all\_http\_methods     |
| **Flag to disable** | no\_upper\_case\_all\_http\_methods |

HTTP methods are expected to be upper-cased. Per the fetch spec, if the method is specified as `get`, `post`, `put`, `delete`, `head`, or `options`, implementations are expected to uppercase the method. All other method names would generally be expected to throw as unrecognized (for example, `patch` would be an error while `PATCH` is accepted). This is a bit restrictive, even if it is in the spec. This flag modifies the behavior to uppercase all methods prior to parsing so that the method is always recognized if it is a known method.

To restore the standard behavior, use the `no_upper_case_all_http_methods` compatibility flag.

### Automatically set the Symbol.toStringTag for Workers API objects

| **Default as of**   | 2024-09-26                  |
| ------------------- | --------------------------- |
| **Flag to enable**  | set\_tostring\_tag          |
| **Flag to disable** | do\_not\_set\_tostring\_tag |

A change was made to set the Symbol.toStringTag on all Workers API objects in order to fix several spec compliance bugs. Unfortunately, this change was more breaking than anticipated. The `do_not_set_tostring_tag` compat flag restores the original behavior with compatibility dates of 2024-09-26 or earlier.

### Allow specifying a custom port when making a subrequest with the fetch() API

| **Default as of**   | 2024-09-02            |
| ------------------- | --------------------- |
| **Flag to enable**  | allow\_custom\_ports  |
| **Flag to disable** | ignore\_custom\_ports |

When this flag is enabled, and you specify a port when making a subrequest with the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/), the port number you specify will be used.

When you make a subrequest to a website that uses Cloudflare ("Orange Clouded") — only [ports supported by Cloudflare's reverse proxy](https://developers.cloudflare.com/fundamentals/reference/network-ports/#network-ports-compatible-with-cloudflares-proxy) can be specified. If you attempt to specify an unsupported port, it will be ignored.

When you make a subrequest to a website that does not use Cloudflare ("Grey Clouded") - any port can be specified.

For example:

JavaScript

```
const response = await fetch("https://example.com:8000");
```

With `allow_custom_ports` enabled, the above example would fetch `https://example.com:8000` rather than `https://example.com:443`.

Note that creating a WebSocket client with a call to `new WebSocket(url)` will also obey this flag.

### Properly extract blob MIME type from `content-type` headers

| **Default as of**   | 2024-06-03                 |
| ------------------- | -------------------------- |
| **Flag to enable**  | blob\_standard\_mime\_type |
| **Flag to disable** | blob\_legacy\_mime\_type   |

When reading the `type` property of a `Blob` obtained via `response.blob()`, the MIME type will now be properly extracted from `content-type` headers, per the [WHATWG spec ↗](https://fetch.spec.whatwg.org/#concept-header-extract-mime-type).

### Use standard URL parsing in `fetch()`

| **Default as of**   | 2024-06-03           |
| ------------------- | -------------------- |
| **Flag to enable**  | fetch\_standard\_url |
| **Flag to disable** | fetch\_legacy\_url   |

The `fetch_standard_url` flag makes `fetch()` use [WHATWG URL Standard ↗](https://url.spec.whatwg.org/) parsing rules. The original implementation would throw `TypeError: Fetch API cannot load` errors with some URLs where standard parsing does not, for instance with the inclusion of whitespace before the URL. URL errors will now be thrown immediately upon calling `new Request()` with an improper URL. Previously, URL errors were thrown only once `fetch()` was called.
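The difference can be seen with the standalone `URL` class, which follows the same WHATWG parsing rules that `fetch()` now uses. For example, leading whitespace no longer causes a parse failure:

JavaScript

```
// WHATWG parsing strips leading/trailing whitespace instead of rejecting it.
// In Workers, any remaining URL errors now surface when `new Request()` is
// constructed rather than when `fetch()` is called.
const url = new URL("  https://example.com/path");
console.log(url.href); // "https://example.com/path"
```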

### Returning empty Uint8Array on final BYOB read

| **Default as of**   | 2024-05-13                                |
| ------------------- | ----------------------------------------- |
| **Flag to enable**  | internal\_stream\_byob\_return\_view      |
| **Flag to disable** | internal\_stream\_byob\_return\_undefined |

In the original implementation of BYOB ("Bring your own buffer") `ReadableStreams`, the `read()` method would return `undefined` when the stream was closed and there was no more data to read. This behavior was inconsistent with the standard `ReadableStream` behavior, which returns an empty `Uint8Array` when the stream is closed.

When the `internal_stream_byob_return_view` flag is used, the BYOB `read()` will implement standard behavior.

JavaScript

```
const resp = await fetch('https://example.org');
const reader = resp.body.getReader({ mode: 'byob' });
const result = await reader.read(new Uint8Array(10));

if (result.done) {
  // The result gives us an empty Uint8Array...
  console.log(result.value.byteLength); // 0

  // However, it is backed by the same underlying memory that was passed
  // into the read call.
  console.log(result.value.buffer.byteLength); // 10
}
```

### Brotli Content-Encoding support

| **Default as of**   | 2024-04-29                    |
| ------------------- | ----------------------------- |
| **Flag to enable**  | brotli\_content\_encoding     |
| **Flag to disable** | no\_brotli\_content\_encoding |

When the `brotli_content_encoding` compatibility flag is enabled, Workers supports the `br` content encoding and can request and respond with data encoded using the [Brotli ↗](https://developer.mozilla.org/en-US/docs/Glossary/Brotli%5Fcompression) compression algorithm. This reduces the amount of data that needs to be fetched and can be used to pass through the original compressed data to the client. See the Fetch API [documentation](https://developers.cloudflare.com/workers/runtime-apis/fetch/#how-the-accept-encoding-header-is-handled) for details.

### Durable Object stubs and Service Bindings support RPC

| **Default as of**   | 2024-04-03 |
| ------------------- | ---------- |
| **Flag to enable**  | rpc        |
| **Flag to disable** | no\_rpc    |

With this flag on, [Durable Object](https://developers.cloudflare.com/durable-objects/) stubs and [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) support [RPC](https://developers.cloudflare.com/workers/runtime-apis/rpc/). This means that these objects now appear as if they define every possible method name. Calling any method name sends an RPC to the remote Durable Object or Worker service.

For most applications, this change will have no impact unless you use it. However, it is possible some existing code will be impacted if it explicitly checks for the existence of method names that were previously not defined on these types. For example, we have seen code in the wild which iterates over [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) and tries to auto-detect their types based on what methods they implement. Such code will now see service bindings as implementing every method, so may misinterpret service bindings as being some other type. In the cases we have seen, the impact was benign (nothing actually broke), but out of caution we are guarding this change behind a flag.

### Handling custom thenables

| **Default as of**   | 2024-04-01                    |
| ------------------- | ----------------------------- |
| **Flag to enable**  | unwrap\_custom\_thenables     |
| **Flag to disable** | no\_unwrap\_custom\_thenables |

With the `unwrap_custom_thenables` flag set, various Workers APIs that accept promises will also correctly handle custom thenables (objects with a `then` method that are not native promises, but are intended to be treated as such). For example, the `waitUntil` method of the `ExecutionContext` object will correctly handle custom thenables, allowing them to be used in place of native promises.

JavaScript

```
export default {
  async fetch(req, env, ctx) {
    ctx.waitUntil({
      then(res) {
        // Resolve the thenable after 1 second
        setTimeout(res, 1000);
      },
    });
    // ...
  },
};
```

### Fetchers no longer have get/put/delete helper methods

| **Default as of**   | 2024-03-26                     |
| ------------------- | ------------------------------ |
| **Flag to enable**  | fetcher\_no\_get\_put\_delete  |
| **Flag to disable** | fetcher\_has\_get\_put\_delete |

[Durable Object](https://developers.cloudflare.com/durable-objects/) stubs and [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) both implement a `fetch()` method which behaves similarly to the global `fetch()` method, but requests are instead sent to the destination represented by the object, rather than being routed based on the URL.

Historically, API objects that had such a `fetch()` method also had methods `get()`, `put()`, and `delete()`. These methods were thin wrappers around `fetch()` which would perform the corresponding HTTP method and automatically handle writing/reading the request/response bodies as needed.

These methods were a very early idea from many years ago, but were never actually documented and therefore rarely (if ever) used. Enabling the `fetcher_no_get_put_delete` flag, or setting a compatibility date on or after `2024-03-26`, disables these methods for your Worker.

This change paves a future path for you to be able to define your own custom methods using these names. Without this change, you would be unable to define your own `get`, `put`, and `delete` methods, since they would conflict with these built-in helper methods.

### Queues send messages in `JSON` format

| **Default as of**   | 2024-03-18                 |
| ------------------- | -------------------------- |
| **Flag to enable**  | queues\_json\_messages     |
| **Flag to disable** | no\_queues\_json\_messages |

With the `queues_json_messages` flag set, Queue bindings will serialize values passed to `send()` or `sendBatch()` into JSON format by default (when no specific `contentType` is provided).
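In other words, with the flag set, `send(message)` behaves as if `contentType: "json"` had been passed explicitly, and the message body is the JSON serialization of the value. A minimal illustration of that serialization (`MY_QUEUE` is a hypothetical Queue binding name):

JavaScript

```
// With queues_json_messages, these two calls are equivalent:
//   await env.MY_QUEUE.send({ id: 1, action: "signup" });
//   await env.MY_QUEUE.send({ id: 1, action: "signup" }, { contentType: "json" });
// The message body consumers receive is the JSON serialization of the value:
const serialized = JSON.stringify({ id: 1, action: "signup" });
console.log(serialized); // '{"id":1,"action":"signup"}'
```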

### Suppress global `importScripts()`

| **Default as of**   | 2024-03-04                |
| ------------------- | ------------------------- |
| **Flag to enable**  | no\_global\_importscripts |
| **Flag to disable** | global\_importscripts     |

Suppresses the global `importScripts()` function. This method was included in the Workers global scope but was marked explicitly as non-implemented. However, the presence of the function could cause issues with some libraries. This compatibility flag removes the function from the global scope.

### Node.js AsyncLocalStorage

| **Flag to enable**  | nodejs\_als     |
| ------------------- | --------------- |
| **Flag to disable** | no\_nodejs\_als |

Enables the availability of the Node.js [AsyncLocalStorage ↗](https://nodejs.org/api/async%5Fhooks.html#async%5Fhooks%5Fclass%5Fasynclocalstorage) API in Workers.

### Python Workers

| **Flag to enable** | python\_workers |
| ------------------ | --------------- |

This flag enables first-class support for Python. [Python Workers](https://developers.cloudflare.com/workers/languages/python/) implement the majority of Python's [standard library](https://developers.cloudflare.com/workers/languages/python/stdlib), support all [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings), [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables), and [secrets](https://developers.cloudflare.com/workers/configuration/secrets), and integrate with JavaScript objects and functions via a [foreign function interface](https://developers.cloudflare.com/workers/languages/python/ffi).

### WebCrypto preserve publicExponent field

| **Default as of**   | 2023-12-01                             |
| ------------------- | -------------------------------------- |
| **Flag to enable**  | crypto\_preserve\_public\_exponent     |
| **Flag to disable** | no\_crypto\_preserve\_public\_exponent |

In the WebCrypto API, the `publicExponent` field of the algorithm of RSA keys would previously be an `ArrayBuffer`. Using this flag, `publicExponent` is a `Uint8Array` as mandated by the specification.

### `Vectorize` query with metadata optionally returned

| **Default as of**   | 2023-11-08                           |
| ------------------- | ------------------------------------ |
| **Flag to enable**  | vectorize\_query\_metadata\_optional |
| **Flag to disable** | vectorize\_query\_original           |

Setting `vectorize_query_metadata_optional` indicates that the Vectorize query operation should accept the newer arguments, `returnValues` and `returnMetadata`, specified discretely, over the older `returnVectors` argument. This also changes the return format: if vector values have been requested, the return value is now a flattened vector object with `score` attached, where it previously contained a nested vector object.

### WebSocket Compression

| **Default as of**   | 2023-08-15                   |
| ------------------- | ---------------------------- |
| **Flag to enable**  | web\_socket\_compression     |
| **Flag to disable** | no\_web\_socket\_compression |

The Workers runtime did not support WebSocket compression when the initial WebSocket implementation was released. Historically, the runtime has stripped or ignored the `Sec-WebSocket-Extensions` header, but it is now capable of fully complying with the WebSocket Compression RFC. Since many clients are likely sending `Sec-WebSocket-Extensions: permessage-deflate` to their Workers today (`new WebSocket(url)` automatically sets this in browsers), we have decided to maintain prior behavior if this flag is absent.

If the flag is present, the Workers runtime is capable of using WebSocket Compression on both inbound and outbound WebSocket connections.

Like browsers, calling `new WebSocket(url)` in a Worker will automatically set the `Sec-WebSocket-Extensions: permessage-deflate` header. If you are using the non-standard `fetch()` API to obtain a WebSocket, you can include the `Sec-WebSocket-Extensions` header with value `permessage-deflate` and include any of the compression parameters defined in [RFC-7692 ↗](https://datatracker.ietf.org/doc/html/rfc7692#section-7).

### Strict crypto error checking

| **Default as of**   | 2023-08-01                 |
| ------------------- | -------------------------- |
| **Flag to enable**  | strict\_crypto\_checks     |
| **Flag to disable** | no\_strict\_crypto\_checks |

Perform additional error checking in the Web Crypto API to conform with the specification and reject possibly unsafe key parameters:

* For RSA key generation, key sizes are required to be multiples of 128 bits as boringssl may otherwise truncate the key.
* The size of imported RSA keys must be at least 256 bits and at most 16384 bits, as with newly generated keys.
* The public exponent for imported RSA keys is restricted to the commonly used values `[3, 17, 37, 65537]`.
* In conformance with the specification, an error will be thrown when trying to import a public ECDH key with non-empty usages.

### Strict compression error checking

| **Default as of**   | 2023-08-01                      |
| ------------------- | ------------------------------- |
| **Flag to enable**  | strict\_compression\_checks     |
| **Flag to disable** | no\_strict\_compression\_checks |

Perform additional error checking in the Compression Streams API and throw an error if a `DecompressionStream` has trailing data or gets closed before the full compressed data has been provided.

### Override cache rules cache settings in `request.cf` object for Fetch API

| **Default as of**   | 2025-04-02                               |
| ------------------- | ---------------------------------------- |
| **Flag to enable**  | request\_cf\_overrides\_cache\_rules     |
| **Flag to disable** | no\_request\_cf\_overrides\_cache\_rules |

This flag changes the behavior of cache when requesting assets via the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch). Cache settings specified in the `request.cf` object, such as `cacheEverything` and `cacheTtl`, are now given precedence over any [Cache Rules](https://developers.cloudflare.com/cache/how-to/cache-rules/) set.

### Bot Management data

| **Default as of**   | 2023-08-01                     |
| ------------------- | ------------------------------ |
| **Flag to enable**  | no\_cf\_botmanagement\_default |
| **Flag to disable** | cf\_botmanagement\_default     |

This flag streamlines Workers requests by reducing unnecessary properties in the `request.cf` object.

With the flag enabled - either by default after 2023-08-01 or by setting the `no_cf_botmanagement_default` flag - Cloudflare will only include the [Bot Management object](https://developers.cloudflare.com/bots/reference/bot-management-variables/) in a Worker's `request.cf` if the account has access to Bot Management.

With the flag disabled, Cloudflare will include a default Bot Management object, regardless of whether the account is entitled to Bot Management.

### URLSearchParams delete() and has() value argument

| **Default as of**   | 2023-07-01                                   |
| ------------------- | -------------------------------------------- |
| **Flag to enable**  | urlsearchparams\_delete\_has\_value\_arg     |
| **Flag to disable** | no\_urlsearchparams\_delete\_has\_value\_arg |

The WHATWG introduced additional optional arguments to the `URLSearchParams` methods [delete() ↗](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/delete) and [has() ↗](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/has) that allow for more precise control over the removal of query parameters. Because the arguments are optional and change the behavior of the methods when present, there is a risk of breaking existing code. If your compatibility date is set to July 1, 2023 or after, this compatibility flag is enabled by default.

For an example of how this change could break existing code, consider code that uses the `Array` `forEach()` method to iterate through a number of parameters to delete:

JavaScript

```
const usp = new URLSearchParams();
// ...
['abc', 'xyz'].forEach(usp.delete.bind(usp));
```

`forEach()` automatically passes additional arguments (the element's index and the array) to the callback. Prior to the addition of the new standard parameters, these extra arguments would have been ignored.

Now, however, the additional arguments have meaning and change the behavior of the function. With this flag, the example above would need to be changed to:

JavaScript

```
const usp = new URLSearchParams();
// ...
['abc', 'xyz'].forEach((key) => usp.delete(key));
```
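The new value argument itself behaves as follows. This sketch assumes a runtime that implements the updated WHATWG spec (as Workers does with this flag enabled):

JavaScript

```
const params = new URLSearchParams("a=1&a=2&b=3");

// With the optional value argument, only the matching name/value pair is removed:
params.delete("a", "2");
console.log(params.toString()); // "a=1&b=3"

// has() gains the same optional argument:
console.log(params.has("a", "1")); // true
console.log(params.has("a", "2")); // false
```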

### Use a spec compliant URL implementation in redirects

| **Default as of**   | 2023-03-14                        |
| ------------------- | --------------------------------- |
| **Flag to enable**  | response\_redirect\_url\_standard |
| **Flag to disable** | response\_redirect\_url\_original |

Change the URL implementation used in `Response.redirect()` to be spec-compliant (WHATWG URL Standard).

### Dynamic Dispatch Exception Propagation

| **Default as of**   | 2023-03-01                                    |
| ------------------- | --------------------------------------------- |
| **Flag to enable**  | dynamic\_dispatch\_tunnel\_exceptions         |
| **Flag to disable** | dynamic\_dispatch\_treat\_exceptions\_as\_500 |

Previously, when using Workers for Platforms' [dynamic dispatch API](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/) to send an HTTP request to a user Worker, if the user Worker threw an exception, the dynamic dispatch Worker would receive an HTTP `500` error with no body. When the `dynamic_dispatch_tunnel_exceptions` compatibility flag is enabled, the exception will instead propagate back to the dynamic dispatch Worker. The `fetch()` call in the dynamic dispatch Worker will throw the same exception. This matches the similar behavior of [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) and [Durable Objects](https://developers.cloudflare.com/durable-objects/).

### `Headers` supports `getSetCookie()`

| **Default as of**   | 2023-03-01                      |
| ------------------- | ------------------------------- |
| **Flag to enable**  | http\_headers\_getsetcookie     |
| **Flag to disable** | no\_http\_headers\_getsetcookie |

Adds the [getSetCookie() ↗](https://developer.mozilla.org/en-US/docs/Web/API/Headers/getSetCookie) method to the [Headers ↗](https://developer.mozilla.org/en-US/docs/Web/API/Headers) API in Workers.

JavaScript

```
const response = await fetch("https://example.com");
const cookieValues = response.headers.getSetCookie();
```

### Node.js compatibility

| **Flag to enable**  | nodejs\_compat     |
| ------------------- | ------------------ |
| **Flag to disable** | no\_nodejs\_compat |

Enables [Node.js APIs](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) in the Workers Runtime.

Note that some Node.js APIs are only enabled if your Worker's compatibility date is set to on or after the following dates:

| Node.js API                                                                                                                                                 | Enabled after |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- |
| [http.server](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#enable-nodejs-http-server-modules)                               | 2025-09-01    |
| [node:http, node:https](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#enable-availability-of-nodehttp-and-nodehttps-modules) | 2025-08-15    |
| [process.env](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#enable-auto-populating-processenv)                               | 2025-04-01    |
| [Disable Top-level Await in require()](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#disable-top-level-await-in-require)     | 2024-12-02    |

When enabling `nodejs_compat`, we recommend using the latest version of the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/) and the latest compatibility date, in order to maximize compatibility. Some older versions of Wrangler inject additional polyfills that are no longer necessary when your Worker uses a more recent compatibility date, as they are provided by the Workers runtime.

If you see errors using a particular NPM package on Workers, you should first try updating your compatibility date and use the latest version of [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/) or the [Cloudflare Vite Plugin](https://developers.cloudflare.com/workers/vite-plugin/). If you still encounter issues, please report them by [opening a GitHub issue ↗](https://github.com/cloudflare/workers-sdk/issues/new?template=bug-template.yaml).

### Streams Constructors

| **Default as of**   | 2022-11-30                     |
| ------------------- | ------------------------------ |
| **Flag to enable**  | streams\_enable\_constructors  |
| **Flag to disable** | streams\_disable\_constructors |

Adds the work-in-progress `new ReadableStream()` and `new WritableStream()` constructors backed by JavaScript underlying sources and sinks.

### Compliant TransformStream constructor

| **Default as of**   | 2022-11-30                                      |
| ------------------- | ----------------------------------------------- |
| **Flag to enable**  | transformstream\_enable\_standard\_constructor  |
| **Flag to disable** | transformstream\_disable\_standard\_constructor |

Previously, the `new TransformStream()` constructor was not compliant with the Streams API standard. Use the `transformstream_enable_standard_constructor` to opt-in to the backwards-incompatible change to make the constructor compliant. Must be used in combination with the `streams_enable_constructors` flag.

### CommonJS modules do not export a module namespace

| **Default as of**   | 2022-10-31                  |
| ------------------- | --------------------------- |
| **Flag to enable**  | export\_commonjs\_default   |
| **Flag to disable** | export\_commonjs\_namespace |

CommonJS modules were previously exporting a module namespace (an object like `{ default: module.exports }`) rather than exporting only the `module.exports`. When this flag is enabled, CommonJS modules export `module.exports` directly.
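The difference can be sketched outside the module loader. The module shape below is hypothetical; the point is what importers see as the default export:

JavaScript

```
// What a hypothetical CommonJS module assigns to module.exports:
const moduleExports = { greet: () => "hi" };

// Old behavior (export_commonjs_namespace): the default export wrapped it,
// so importers had to reach through `.default`.
const legacyDefault = { default: moduleExports };
console.log(legacyDefault.default.greet()); // "hi"

// New behavior (export_commonjs_default): the default export is
// module.exports itself.
const fixedDefault = moduleExports;
console.log(fixedDefault.greet()); // "hi"
```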

### Do not throw from async functions

| **Default as of**   | 2022-10-31                           |
| ------------------- | ------------------------------------ |
| **Flag to enable**  | capture\_async\_api\_throws          |
| **Flag to disable** | do\_not\_capture\_async\_api\_throws |

The `capture_async_api_throws` compatibility flag ensures that, in conformity with standard APIs, async functions only ever reject when they encounter an error, rather than throwing it synchronously. The inverse `do_not_capture_async_api_throws` flag means that async functions which encounter an error may throw that error synchronously rather than rejecting.
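The distinction matters because a synchronous throw escapes promise-based error handling. A simplified illustration (these functions are stand-ins, not actual Workers APIs):

JavaScript

```
// A rejected promise is caught by .catch() or try/await:
function rejecting() {
  return Promise.reject(new TypeError("bad input"));
}

// A synchronous throw fires before any .catch() can attach:
function throwingSync() {
  throw new TypeError("bad input");
}

rejecting().catch((err) => console.log("caught rejection:", err.message));

try {
  throwingSync();
} catch (err) {
  console.log("caught sync throw:", err.message);
}
```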

### New URL parser implementation

| **Default as of**   | 2022-10-31    |
| ------------------- | ------------- |
| **Flag to enable**  | url\_standard |
| **Flag to disable** | url\_original |

The original implementation of the [URL ↗](https://developer.mozilla.org/en-US/docs/Web/API/URL) API in Workers was not fully compliant with the [WHATWG URL Standard ↗](https://url.spec.whatwg.org/), differing in several ways, including:

* The original implementation collapsed sequences of multiple slashes into a single slash:  
`new URL("https://example.com/a//b").toString() === "https://example.com/a/b"`
* The original implementation would throw `"TypeError: Invalid URL string."` if it encountered invalid percent-encoded escape sequences, like `https://example.com/a%%b`.
* The original implementation would percent-encode or percent-decode certain content differently:  
`new URL("https://example.com/a%40b?c d%20e?f").toString() === "https://example.com/a@b?c+d+e%3Ff"`
* The original implementation lacked more recently implemented `URL` features, like [URL.canParse() ↗](https://developer.mozilla.org/en-US/docs/Web/API/URL/canParse%5Fstatic).

Set the compatibility date of your Worker to a date after `2022-10-31` or enable the `url_standard` compatibility flag to opt in to the fully spec-compliant `URL` API implementation.
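Under the spec-compliant parser, the first two examples above behave as follows, matching the WHATWG `URL` behavior available in modern browsers and Node.js:

JavaScript

```
// Multiple slashes are preserved rather than collapsed:
console.log(new URL("https://example.com/a//b").toString());
// "https://example.com/a//b"

// Invalid percent-encoded sequences no longer throw; they pass through as-is:
console.log(new URL("https://example.com/a%%b").pathname); // "/a%%b"
```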

Refer to the [response\_redirect\_url\_standard compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#use-a-spec-compliant-url-implementation-in-redirects), which affects the URL implementation used in `Response.redirect()`.

### `R2` bucket `list` respects the `include` option

| **Default as of**  | 2022-08-04               |
| ------------------ | ------------------------ |
| **Flag to enable** | r2\_list\_honor\_include |

With the `r2_list_honor_include` flag set, the `include` argument to R2 `list` options is honored. With an older compatibility date and without this flag, the `include` argument behaves implicitly as `include: ["httpMetadata", "customMetadata"]`.
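As a sketch, the helper below requests only HTTP metadata when listing. `bucket` stands for an R2 binding such as `env.MY_BUCKET` (the binding name is an assumption for illustration):

```javascript
// List object keys and content types under a prefix. With the flag (or a
// compatibility date of 2022-08-04 or later), only `httpMetadata` is attached
// to each object; without it, `customMetadata` is implicitly included as well.
async function listContentTypes(bucket, prefix) {
  const listed = await bucket.list({ prefix, include: ["httpMetadata"] });
  return listed.objects.map((obj) => [obj.key, obj.httpMetadata?.contentType]);
}
```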

### Do not substitute `null` on `TypeError`

| **Default as of**   | 2022-06-01                              |
| ------------------- | --------------------------------------- |
| **Flag to enable**  | dont\_substitute\_null\_on\_type\_error |
| **Flag to disable** | substitute\_null\_on\_type\_error       |

There was a bug in the runtime: when invalid values were passed into built-in APIs, they were sometimes mistakenly coalesced with `null` instead of throwing a `TypeError`. The `dont_substitute_null_on_type_error` flag fixes this behavior so that a `TypeError` is correctly thrown in these circumstances.

### Minimal subrequests

| **Default as of**   | 2022-04-05               |
| ------------------- | ------------------------ |
| **Flag to enable**  | minimal\_subrequests     |
| **Flag to disable** | no\_minimal\_subrequests |

With the `minimal_subrequests` flag set, `fetch()` subrequests sent to endpoints on the Worker's own zone (also called same-zone subrequests) have a reduced set of features applied to them. In general, these features should not have been initially applied to same-zone subrequests, and very few user-facing behavior changes are anticipated. Specifically, Workers might observe the following behavior changes with the new flag:

* Response bodies will not be opportunistically gzipped before being transmitted to the Workers runtime. If a Worker reads the response body, it will read it in plaintext, as has always been the case, so disabling this prevents unnecessary decompression. Meanwhile, if the Worker passes the response through to the client, Cloudflare's HTTP proxy will opportunistically gzip the response body on that side of the Workers runtime instead. The behavior change observable by a Worker script should be that some `Content-Encoding: gzip` headers will no longer appear.
* Automatic Platform Optimization may previously have been applied on both the Worker's initiating request and its subrequests in some circumstances. It will now only apply to the initiating request.
* Link prefetching will now only apply to the Worker's response, not responses to the Worker's subrequests.

### Global `navigator`

| **Default as of**   | 2022-03-21            |
| ------------------- | --------------------- |
| **Flag to enable**  | global\_navigator     |
| **Flag to disable** | no\_global\_navigator |

With the `global_navigator` flag set, a new global `navigator` property is available from within Workers. Currently, it exposes only a single `navigator.userAgent` property whose value is set to `'Cloudflare-Workers'`. This property can be used to reliably determine whether code is running within the Workers environment.
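For example, a portable environment check (a sketch; outside Workers, `navigator` may be missing or report a different user agent):

```javascript
// Returns true only inside the Cloudflare Workers runtime.
function isCloudflareWorkers() {
  return (
    typeof navigator !== "undefined" &&
    navigator.userAgent === "Cloudflare-Workers"
  );
}
```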

### Do not use the Custom Origin Trust Store for external subrequests

| **Default as of**   | 2022-03-08                    |
| ------------------- | ----------------------------- |
| **Flag to enable**  | no\_cots\_on\_external\_fetch |
| **Flag to disable** | cots\_on\_external\_fetch     |

The `no_cots_on_external_fetch` flag disables the use of the [Custom Origin Trust Store](https://developers.cloudflare.com/ssl/origin-configuration/custom-origin-trust-store/) when making external (grey-clouded) subrequests from a Cloudflare Worker.

### Setters/getters on API object prototypes

| **Default as of**   | 2022-01-31                                    |
| ------------------- | --------------------------------------------- |
| **Flag to enable**  | workers\_api\_getters\_setters\_on\_prototype |
| **Flag to disable** | workers\_api\_getters\_setters\_on\_instance  |

Originally, properties on Workers API objects were defined as instance properties as opposed to prototype properties. This broke subclassing at the JavaScript layer, preventing a subclass from correctly overriding the superclass getters/setters. This flag controls the breaking change made to set those getters/setters on the prototype template instead.

This change applies to:

* `AbortSignal`
* `AbortController`
* `Blob`
* `Body`
* `DigestStream`
* `Event`
* `File`
* `Request`
* `ReadableStream`
* `ReadableStreamDefaultReader`
* `ReadableStreamBYOBReader`
* `Response`
* `TextDecoder`
* `TextEncoder`
* `TransformStream`
* `URL`
* `WebSocket`
* `WritableStream`
* `WritableStreamDefaultWriter`
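The subclassing problem can be reproduced in plain JavaScript. This illustrates the language semantics involved, not the runtime's actual internals:

```javascript
// Accessor on the prototype: a subclass override works and can delegate.
class Base {
  get kind() { return "base"; }
}
class Sub extends Base {
  get kind() { return `sub:${super.kind}`; }
}
console.log(new Sub().kind); // "sub:base"

// Accessor defined per-instance (the old Workers behavior): the own property
// wins the property lookup, so the subclass override is silently ignored.
class LegacyBase {
  constructor() {
    Object.defineProperty(this, "kind", { get: () => "legacy" });
  }
}
class LegacySub extends LegacyBase {
  get kind() { return "sub"; }
}
console.log(new LegacySub().kind); // "legacy", not "sub"
```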

### Durable Object `stub.fetch()` requires a full URL

| **Default as of**   | 2021-11-10                                    |
| ------------------- | --------------------------------------------- |
| **Flag to enable**  | durable\_object\_fetch\_requires\_full\_url   |
| **Flag to disable** | durable\_object\_fetch\_allows\_relative\_url |

Originally, when making a request to a Durable Object by calling `stub.fetch(url)`, a relative URL was accepted as an input. The URL would be interpreted relative to the placeholder URL `http://fake-host`, and the resulting absolute URL was delivered to the destination object's `fetch()` handler. This behavior was incorrect — full URLs were meant to be required. This flag makes full URLs required.
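A minimal sketch of the required form. The hostname is an arbitrary placeholder (an assumption for illustration) because the stub already identifies the destination object:

```javascript
// Build a full URL from a path before calling the stub; with the flag,
// passing a bare relative path to stub.fetch() throws.
function doFetch(stub, path, init) {
  const url = new URL(path, "https://durable-object").toString();
  return stub.fetch(url, init);
}
```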

### `fetch()` improperly interprets unknown protocols as HTTP

| **Default as of**   | 2021-11-10                                  |
| ------------------- | ------------------------------------------- |
| **Flag to enable**  | fetch\_refuses\_unknown\_protocols          |
| **Flag to disable** | fetch\_treats\_unknown\_protocols\_as\_http |

Originally, if the `fetch()` function was passed a URL specifying any protocol other than `http:` or `https:`, it would silently treat it as if it were `http:`. For example, `fetch()` would appear to accept `ftp:` URLs, but it was actually making HTTP requests instead.

Note that Cloudflare Workers supports a non-standard extension to `fetch()` to make it support WebSockets. However, when making an HTTP request that is intended to initiate a WebSocket handshake, you should still use `http:` or `https:` as the protocol, not `ws:` or `wss:`.

The `ws:` and `wss:` URL schemes are intended to be used together with the `new WebSocket()` constructor, which exclusively supports WebSocket. The extension to `fetch()` is designed to support HTTP and WebSocket in the same request (the response may or may not choose to initiate a WebSocket), and so all requests are considered to be HTTP.
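A sketch of how a handshake URL can be prepared. The `fetch()` portion is shown as a comment because it only runs inside the Workers runtime:

```javascript
// fetch() refuses ws:/wss: under this flag, so rewrite to http:/https: first.
function toHttpScheme(url) {
  return url.replace(/^wss:/, "https:").replace(/^ws:/, "http:");
}

console.log(toHttpScheme("wss://example.com/chat")); // "https://example.com/chat"

// Inside a Worker, the handshake itself would then look like:
//   const resp = await fetch(toHttpScheme("wss://example.com/chat"), {
//     headers: { Upgrade: "websocket" },
//   });
//   const ws = resp.webSocket; // Workers' non-standard extension
```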

### Streams BYOB reader detaches buffer

| **Default as of**   | 2021-11-10                                       |
| ------------------- | ------------------------------------------------ |
| **Flag to enable**  | streams\_byob\_reader\_detaches\_buffer          |
| **Flag to disable** | streams\_byob\_reader\_does\_not\_detach\_buffer |

Originally, the Workers runtime did not detach the `ArrayBuffer`s from user-provided TypedArrays when using the [BYOB reader's read() method](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreambyobreader/#methods), as required by the Streams spec, meaning it was possible to inadvertently reuse the same buffer for multiple `read()` calls. This change makes Workers conform to the spec.

User code should never try to reuse an `ArrayBuffer` that has been passed into a [BYOB reader's read() method](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreambyobreader/#methods). Instead, user code can reuse the `ArrayBuffer` backing the result of the `read()` promise, as in the example below.

JavaScript

```

// Consume and discard `readable` using a single 4KiB buffer.
let reader = readable.getReader({ mode: "byob" });
let arrayBufferView = new Uint8Array(4096);
while (true) {
  let result = await reader.read(arrayBufferView);
  if (result.done) break;

  // Optionally do something with `result` here.

  // Reuse the same memory for the next `read()` by creating
  // a new Uint8Array backed by the result's ArrayBuffer.
  arrayBufferView = new Uint8Array(result.value.buffer);
}
```

The more recently added extension method `readAtLeast()` will always detach the `ArrayBuffer` and is unaffected by this feature flag setting.

### `FormData` parsing supports `File`

| **Default as of**   | 2021-11-03                                     |
| ------------------- | ---------------------------------------------- |
| **Flag to enable**  | formdata\_parser\_supports\_files              |
| **Flag to disable** | formdata\_parser\_converts\_files\_to\_strings |

[The FormData API ↗](https://developer.mozilla.org/en-US/docs/Web/API/FormData) is used to parse data (especially HTTP request bodies) in `multipart/form-data` format.

Originally, the Workers runtime's implementation of the `FormData` API incorrectly converted uploaded files to strings. Therefore, `formData.get("filename")` would return a string containing the file contents instead of a `File` object. This change fixes the problem, causing files to be represented using `File` as specified in the standard.
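The fixed behavior can be seen with a round-trip through a multipart body. This also runs in Node.js 18+, whose `FormData` implements the same standard; the field name `upload` is arbitrary:

```javascript
async function parseUpload() {
  const form = new FormData();
  form.append("upload", new Blob(["hello"], { type: "text/plain" }), "hello.txt");

  // Serialize to a multipart/form-data body, then parse it back, as a Worker
  // would parse an incoming request body.
  const req = new Request("https://example.com/", { method: "POST", body: form });
  const parsed = await req.formData();
  return parsed.get("upload"); // a File object with the flag; a string without
}

parseUpload().then((entry) => console.log(entry.name)); // "hello.txt"
```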

## Experimental flags

These flags can be enabled via `compatibility_flags`, but are not yet scheduled to become default on any particular date.

### Queue consumers don't wait for `ctx.waitUntil()` to resolve

| **Flag to enable** | queue\_consumer\_no\_wait\_for\_wait\_until |
| ------------------ | ------------------------------------------- |

By default, [Queues](https://developers.cloudflare.com/queues/) consumer Workers acknowledge messages only after promises passed to [ctx.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/context) have resolved. This behavior can cause consumers that use `ctx.waitUntil()` to process messages slowly. The default behavior is documented in the [Queues Consumer Configuration Guide](https://developers.cloudflare.com/queues/configuration/javascript-apis#consumer).

The consumer Worker below uses `ctx.waitUntil()`. Under the default behavior, it will only acknowledge a batch of messages after the `sleep` promise has resolved.

JavaScript

```

export default {
  async fetch(request, env, ctx) {
    // omitted
  },

  async queue(batch, env, ctx) {
    console.log(`received batch of ${batch.messages.length} messages to queue ${batch.queue}`);
    for (let i = 0; i < batch.messages.length; ++i) {
      console.log(`message #${i}: ${JSON.stringify(batch.messages[i])}`);
    }
    ctx.waitUntil(sleep(30 * 1000));
  },
};

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

If the `queue_consumer_no_wait_for_wait_until` flag is enabled, Queues consumers will no longer wait for promises passed to `ctx.waitUntil()` to resolve before acknowledging messages. This can improve the performance of consumers that use `ctx.waitUntil()`. With the flag enabled, the consumer Worker in the example above will acknowledge the batch without waiting for the `sleep` promise to resolve.

Using this flag does not otherwise change the behavior of `ctx.waitUntil()`: it will still extend the lifetime of your consumer Worker so that work can continue even after the batch of messages has been acknowledged.
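To opt in, add the flag to your Worker's configuration. A minimal fragment (your other configuration keys are omitted here):

wrangler.jsonc

```jsonc
{
  "compatibility_flags": ["queue_consumer_no_wait_for_wait_until"]
}
```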

### `HTMLRewriter` handling of `<esi:include>`

| **Flag to enable** | html\_rewriter\_treats\_esi\_include\_as\_void\_tag |
| ------------------ | --------------------------------------------------- |

The HTML5 standard defines a fixed set of elements as void elements, meaning they do not use an end tag: `<area>`, `<base>`, `<br>`, `<col>`, `<command>`, `<embed>`, `<hr>`, `<img>`, `<input>`, `<keygen>`, `<link>`, `<meta>`, `<param>`, `<source>`, `<track>`, and `<wbr>`.

HTML5 does not recognize XML self-closing tag syntax. For example, `<script src="foo.js" />` does not specify a script element with no body; a `</script>` end tag is still required. The `/>` syntax is simply not recognized by HTML5 and is treated the same as `>`. However, many developers still like to use this syntax as a holdover from XHTML, a standard that failed to gain traction in the early 2000s.

`<esi:include>` and `<esi:comment>` are two tags that are not part of the HTML5 standard, but are instead used as part of [Edge Side Includes ↗](https://en.wikipedia.org/wiki/Edge%5FSide%5FIncludes), a technology for server-side HTML modification. These tags are not expected to contain any body and are commonly written with XML self-closing syntax.

`HTMLRewriter` was designed to parse standard HTML5, not ESI. However, it would be useful to be able to implement some parts of ESI using `HTMLRewriter`. To that end, this compatibility flag causes `HTMLRewriter` to treat `<esi:include>` and `<esi:comment>` as void tags, so that they can be parsed and handled properly.


---

---
title: Cron Triggers
description: Enable your Worker to be executed on a schedule.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Cron Triggers

## Background

Cron Triggers allow users to map a cron expression to a Worker using a [scheduled() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) that enables Workers to be executed on a schedule.

Cron Triggers are ideal for running periodic jobs, such as for maintenance or calling third-party APIs to collect up-to-date data. Workers scheduled by Cron Triggers will run on underutilized machines to make the best use of Cloudflare's capacity and route traffic efficiently.

Note

Cron Triggers can also be combined with [Workflows](https://developers.cloudflare.com/workflows/) to trigger multi-step, long-running tasks. You can [bind to a Workflow](https://developers.cloudflare.com/workflows/build/workers-api/) directly from your Cron Trigger to execute a Workflow on a schedule.

Cron Triggers execute on UTC time.

## Add a Cron Trigger

### 1\. Define a scheduled event listener

To respond to a Cron Trigger, you must add a ["scheduled" handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) to your Worker.


JavaScript

```

export default {
  async scheduled(controller, env, ctx) {
    console.log("cron processed");
  },
};
```

TypeScript

```

interface Env {}

export default {
  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext,
  ) {
    console.log("cron processed");
  },
};
```

Python

```

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def scheduled(self, controller, env, ctx):
        print("cron processed")
```

Refer to the following additional examples to write your code:

* [Setting Cron Triggers](https://developers.cloudflare.com/workers/examples/cron-trigger/)
* [Multiple Cron Triggers](https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/)

### 2\. Update configuration

Cron Trigger changes take time to propagate.

Changes such as adding a new Cron Trigger, updating an old Cron Trigger, or deleting a Cron Trigger may take several minutes (up to 15 minutes) to propagate to the Cloudflare global network.

After you have updated your Worker code to include a `"scheduled"` event, you must update your Worker project configuration.

#### Via the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)

If a Worker is managed with Wrangler, Cron Triggers should be exclusively managed through the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

Refer to the example below for a Cron Triggers configuration:

wrangler.jsonc

```

{
  "triggers": {
    // Schedule cron triggers:
    // - At every 3rd minute
    // - At 15:00 (UTC) on first day of the month
    // - At 23:59 (UTC) on the last weekday of the month
    "crons": [
      "*/3 * * * *",
      "0 15 1 * *",
      "59 23 LW * *"
    ]
  }
}
```

wrangler.toml

```

[triggers]
crons = [ "*/3 * * * *", "0 15 1 * *", "59 23 LW * *" ]
```

You can also set a different Cron Trigger for each [environment](https://developers.cloudflare.com/workers/wrangler/environments/) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Put the `triggers` array under your chosen environment. For example:

wrangler.jsonc

```

{
  "env": {
    "dev": {
      "triggers": {
        "crons": [
          "0 * * * *"
        ]
      }
    }
  }
}
```

wrangler.toml

```

[env.dev.triggers]
crons = [ "0 * * * *" ]
```

#### Via the dashboard

To add Cron Triggers in the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker > **Settings** > **Triggers** > **Cron Triggers**.

## Supported cron expressions

Cloudflare supports cron expressions with five fields, along with most [Quartz scheduler ↗](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html#introduction)-like cron syntax extensions:

| Field         | Values                                                             | Characters   |
| ------------- | ------------------------------------------------------------------ | ------------ |
| Minute        | 0-59                                                               | \* , - /     |
| Hours         | 0-23                                                               | \* , - /     |
| Days of Month | 1-31                                                               | \* , - / L W |
| Months        | 1-12, case-insensitive 3-letter abbreviations ("JAN", "aug", etc.) | \* , - /     |
| Weekdays      | 1-7, case-insensitive 3-letter abbreviations ("MON", "fri", etc.)  | \* , - / L # |

Note

Days of the week go from 1 = Sunday to 7 = Saturday, which is different on some other cron systems (where 0 = Sunday and 6 = Saturday). To avoid ambiguity you may prefer to use the three-letter abbreviations (e.g. `SUN` rather than 1).

### Examples

Some common time intervals that may be useful for setting up your Cron Trigger:

* `* * * * *`  
   * At every minute
* `*/30 * * * *`  
   * At every 30th minute
* `45 * * * *`  
   * On the 45th minute of every hour
* `0 17 * * sun` or `0 17 * * 1`  
   * 17:00 (UTC) on Sunday
* `10 7 * * mon-fri` or `10 7 * * 2-6`  
   * 07:10 (UTC) on weekdays
* `0 15 1 * *`  
   * 15:00 (UTC) on first day of the month
* `0 18 * * 6L` or `0 18 * * friL`  
   * 18:00 (UTC) on the last Friday of the month
* `59 23 LW * *`  
   * 23:59 (UTC) on the last weekday of the month
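When several of these expressions are attached to one Worker, the `scheduled()` handler can branch on `controller.cron` to tell which schedule fired. A sketch (the expressions and job names here are illustrative assumptions):

```javascript
// Map a firing cron expression to a named job.
function jobFor(cron) {
  const jobs = {
    "*/30 * * * *": "halfHourlySync",
    "0 15 1 * *": "monthlyReport",
  };
  return jobs[cron] ?? "unknown";
}

// In the Worker's scheduled() handler:
//   async scheduled(controller, env, ctx) {
//     console.log(`running ${jobFor(controller.cron)}`);
//   }

console.log(jobFor("0 15 1 * *")); // "monthlyReport"
```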

## Test Cron Triggers locally

Test Cron Triggers using Wrangler with [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev), or using the [Cloudflare Vite plugin ↗](https://developers.cloudflare.com/workers/vite-plugin/). This exposes a `/cdn-cgi/handler/scheduled` route that can be used to trigger the scheduled handler with an HTTP request.

Terminal window

```

curl "http://localhost:8787/cdn-cgi/handler/scheduled"
```

To simulate different cron patterns, a `cron` query parameter can be passed in.

Terminal window

```

curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*"
```

Optionally, you can also pass a `time` query parameter to override `controller.scheduledTime` in your scheduled event listener.

Terminal window

```

curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*&time=1745856238"
```

## View past events

To view the execution history of Cron Triggers, view **Cron Events**:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your **Worker**.
3. Select **Settings**.
4. Under **Trigger Events**, select **View events**.

Cron Events stores the 100 most recent invocations of the Cron scheduled event. [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs) also records invocation logs for the Cron Trigger with a longer retention period and a filter & query interface. If you are interested in an API to access Cron Events, use Cloudflare's [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api).

Note

It can take up to 30 minutes before events are displayed in **Past Cron Events** when creating a new Worker or changing a Worker's name.

Refer to [Metrics and Analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/) for more information.

## Remove a Cron Trigger

### Via the dashboard

To delete a Cron Trigger on a deployed Worker via the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker.
3. Go to **Triggers** > select the three-dot icon next to the Cron Trigger you want to remove > **Delete**.

### Via the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)

If a Worker is managed with Wrangler, Cron Triggers should be exclusively managed through the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

When deploying a Worker with Wrangler, any previously deployed Cron Triggers are replaced with those specified in the `triggers` array.

* If the `crons` property is an empty array, all Cron Triggers are removed.
* If the `triggers` or `crons` property is `undefined`, the currently deployed Cron Triggers are left in place.

wrangler.jsonc

```

{
  "triggers": {
    // Remove all cron triggers:
    "crons": []
  }
}
```

wrangler.toml

```

[triggers]
crons = [ ]
```

## Limits

Refer to [Limits](https://developers.cloudflare.com/workers/platform/limits/) to track the maximum number of Cron Triggers per Worker.

## Green Compute

With Green Compute enabled, your Cron Triggers will only run on Cloudflare points of presence that are located in data centers that are powered purely by renewable energy. Organizations may claim that they are powered by 100 percent renewable energy if they have procured sufficient renewable energy to account for their overall energy use.

Renewable energy can be purchased in a number of ways, including through on-site generation (wind turbines, solar panels), directly from renewable energy producers through contractual agreements called Power Purchase Agreements (PPA), or in the form of Renewable Energy Credits (REC, IRECs, GoOs) from an energy credit market.

Green Compute can be configured at the account level:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In the **Account details** section, find **Compute Setting**.
3. Select **Change**.
4. Select **Green Compute**.
5. Select **Confirm**.

## Related resources

* [Triggers](https://developers.cloudflare.com/workers/wrangler/configuration/#triggers) - Review Wrangler configuration file syntax for Cron Triggers.
* Learn how to access Cron Triggers in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) for an optimized experience.


---

---
title: Environment variables
description: You can add environment variables, which are a type of binding, to attach text strings or JSON values to your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Environment variables

## Background

You can add environment variables, which are a type of binding, to attach text strings or JSON values to your Worker. Environment variables are available on the [env parameter](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [fetch event handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/).

Text strings and JSON values are not encrypted and are useful for storing application configuration.

## Add environment variables via Wrangler

To add environment variables using Wrangler, define text and JSON values via the `[vars]` configuration in your Wrangler file. In the following example, `API_HOST` and `API_ACCOUNT_ID` are text values and `SERVICE_X_DATA` is a JSON value.

wrangler.jsonc

```

{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker-dev",
  "vars": {
    "API_HOST": "example.com",
    "API_ACCOUNT_ID": "example_user",
    "SERVICE_X_DATA": {
      "URL": "service-x-api.dev.example",
      "MY_ID": 123
    }
  }
}
```

wrangler.toml

```

"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker-dev"

[vars]
API_HOST = "example.com"
API_ACCOUNT_ID = "example_user"

  [vars.SERVICE_X_DATA]
  URL = "service-x-api.dev.example"
  MY_ID = 123
```

Refer to the following example on how to access the `API_HOST` environment variable in your Worker code:


JavaScript

```

export default {
  async fetch(request, env, ctx) {
    return new Response(`API host: ${env.API_HOST}`);
  },
};
```

TypeScript

```

export interface Env {
  API_HOST: string;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    return new Response(`API host: ${env.API_HOST}`);
  },
} satisfies ExportedHandler<Env>;
```

### Import `env` for global access

You can also import `env` from [cloudflare:workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global) to access environment variables from anywhere in your code, including outside of request handlers:


JavaScript

```

import { env } from "cloudflare:workers";

// Access environment variables at the top level
const apiHost = env.API_HOST;

export default {
  async fetch(request) {
    return new Response(`API host: ${apiHost}`);
  },
};
```

TypeScript

```

import { env } from "cloudflare:workers";

// Access environment variables at the top level
const apiHost = env.API_HOST;

export default {
  async fetch(request: Request): Promise<Response> {
    return new Response(`API host: ${apiHost}`);
  },
};
```

This approach is useful when you need to:

* Initialize configuration or API clients at the top level of your Worker.
* Access environment variables from deeply nested functions without passing `env` through every function call.

For more details, refer to [Importing env as a global](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global).

### Configuring different environments in Wrangler

[Environments in Wrangler](https://developers.cloudflare.com/workers/wrangler/environments) let you specify different configurations for the same Worker, including different values for `vars` in each environment. As `vars` is a [non-inheritable key](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys), they are not inherited by environments and must be specified for each environment.

The example below sets up two environments, `staging` and `production`, with different values for `API_HOST`.

wrangler.jsonc

```

{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker-dev",
  // top level environment
  "vars": {
    "API_HOST": "api.example.com"
  },
  "env": {
    "staging": {
      "vars": {
        "API_HOST": "staging.example.com"
      }
    },
    "production": {
      "vars": {
        "API_HOST": "production.example.com"
      }
    }
  }
}
```

wrangler.toml

```

"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker-dev"

[vars]
API_HOST = "api.example.com"

[env.staging.vars]
API_HOST = "staging.example.com"

[env.production.vars]
API_HOST = "production.example.com"
```

To run Wrangler commands in specific environments, you can pass in the `--env` or `-e` flag. For example, you can develop the Worker in an environment called `staging` by running `npx wrangler dev --env staging`, and deploy it with `npx wrangler deploy --env staging`.

Learn about [environments in Wrangler](https://developers.cloudflare.com/workers/wrangler/environments).

## Add environment variables via the dashboard

To add environment variables via the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Select **Settings**.
4. Under **Variables and Secrets**, select **Add**.
5. Select a **Type**, input a **Variable name**, and input its **Value**. This variable will be made available to your Worker.
6. (Optional) To add multiple environment variables, select **Add variable**.
7. Select **Deploy** to implement your changes.

Plaintext strings and secrets

Select the **Secret** type if your environment variable is a [secret](https://developers.cloudflare.com/workers/configuration/secrets/). Alternatively, consider [Cloudflare Secrets Store](https://developers.cloudflare.com/secrets-store/), for account-level secrets.

## Compare secrets and environment variables

Use secrets for sensitive information

Do not use plaintext environment variables to store sensitive information. Use [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) or [Secrets Store bindings](https://developers.cloudflare.com/secrets-store/integrations/workers/) instead.

[Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) whose values are not visible in Wrangler or the Cloudflare dashboard after you define them. Store sensitive data, such as passwords or API tokens, as secrets to prevent data leaks. To your Worker, there is no difference between an environment variable and a secret: the secret's value is passed through as defined.

### Local development with secrets

Warning

Do not use `vars` to store sensitive information in your Worker's Wrangler configuration file. Use secrets instead.

Put secrets for use in local development in either a `.dev.vars` file or a `.env` file, in the same directory as the Wrangler configuration file.

Note

You can use the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) to declare which secret names your Worker requires. When defined, only the keys listed in `secrets.required` are loaded from `.dev.vars` or `.env`. Additional keys are excluded and missing keys produce a warning.

Choose to use either `.dev.vars` or `.env` but not both. If you define a `.dev.vars` file, then values in `.env` files will not be included in the `env` object during local development.

These files should be formatted using the [dotenv ↗](https://hexdocs.pm/dotenvy/dotenv-file-format.html) syntax. For example:

.dev.vars / .env

```
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
```

Do not commit secrets to git

The `.dev.vars` and `.env` files should not be committed to git. Add `.dev.vars*` and `.env*` to your project's `.gitignore` file.

To set different secrets for each Cloudflare environment, create files named `.dev.vars.<environment-name>` or `.env.<environment-name>`.

When you select a Cloudflare environment in your local development, the corresponding environment-specific file will be loaded ahead of the generic `.dev.vars` (or `.env`) file.

* When using `.dev.vars.<environment-name>` files, all secrets must be defined per environment. If `.dev.vars.<environment-name>` exists, only that file is loaded; the generic `.dev.vars` file is not.
* In contrast, all matching `.env` files are loaded and the values are merged. For each variable, the value from the most specific file is used, with the following precedence:  
   * `.env.<environment-name>.local` (most specific)  
   * `.env.local`  
   * `.env.<environment-name>`  
   * `.env` (least specific)
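The `.env` precedence above amounts to a merge with the least specific file applied first. The following is a simplified sketch; Wrangler's actual loader also parses the files and handles quoting:

```javascript
// Merge .env files from least to most specific; later files win per variable.
function mergeDotEnv(files, envName) {
  const order = [
    ".env",
    `.env.${envName}`,
    ".env.local",
    `.env.${envName}.local`, // most specific, applied last
  ];
  return order.reduce(
    (merged, name) => Object.assign(merged, files[name] ?? {}),
    {},
  );
}

const files = {
  ".env": { API_TOKEN: "default", SECRET_KEY: "default" },
  ".env.staging": { API_TOKEN: "staging" },
  ".env.staging.local": { SECRET_KEY: "local-override" },
};

console.log(mergeDotEnv(files, "staging"));
// API_TOKEN comes from .env.staging, SECRET_KEY from .env.staging.local
```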

Controlling `.env` handling

It is possible to control how `.env` files are loaded in local development by setting environment variables on the process running the tools.

* To disable loading local dev vars from `.env` files without providing a `.dev.vars` file, set the `CLOUDFLARE_LOAD_DEV_VARS_FROM_DOT_ENV` environment variable to `"false"`.
* To include every environment variable defined in your system's process environment as a local development variable, ensure there is no `.dev.vars` and then set the `CLOUDFLARE_INCLUDE_PROCESS_ENV` environment variable to `"true"`. This is not needed when using the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property), which loads from `process.env` automatically.

## Environment variables and Node.js compatibility

When you enable [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) and the [nodejs\_compat\_populate\_process\_env](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs%5Fcompat%5Fpopulate%5Fprocess%5Fenv) compatibility flag (enabled by default for compatibility dates on or after 2025-04-01), environment variables are available via the global `process.env`.

`process.env` is populated lazily the first time `process` is accessed in the Worker.

* Text variable values are exposed directly.
* JSON variable values that evaluate to string values are exposed as the parsed value.
* JSON variable values that do not evaluate to string values are exposed as the raw JSON string.

For example, imagine a Worker with three environment variables, two text values, and one JSON value:

```
[vars]
FOO = "abc"
BAR = "abc"
BAZ = { "a": 123 }
```

Environment variables can be added using either the `wrangler.{json|jsonc|toml}` file or via the Cloudflare dashboard UI.

The values of `process.env.FOO` and `process.env.BAR` will each be the JavaScript string `"abc"`.

The value of `process.env.BAZ` will be the JSON-encoded string `"{ \"a\": 123 }"`.
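This mapping can be sketched as a small helper. It is a simplified model of the runtime's behavior, where the `isJson` flag marks whether the variable was defined as a JSON value rather than plain text:

```javascript
// Model of how variable values surface on process.env.
function toProcessEnvValue(raw, isJson) {
  if (!isJson) return raw; // text values pass through unchanged
  const parsed = JSON.parse(raw);
  // JSON that evaluates to a string is exposed parsed;
  // any other JSON value is exposed as the raw JSON string.
  return typeof parsed === "string" ? parsed : raw;
}

console.log(toProcessEnvValue("abc", false)); // "abc" (FOO, BAR)
console.log(toProcessEnvValue('{ "a": 123 }', true)); // '{ "a": 123 }' (BAZ)
```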

Note

Because secrets are a form of environment variable within the runtime, they are also exposed via `process.env`.

## Related resources

* Migrating environment variables from [Service Worker format to ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/#environment-variables).


---

---
title: Integrations
description: Integrate with third-party services and products.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Integrations

One of the key features of Cloudflare Workers is the ability to integrate with other services and products. In this document, we will explain the types of integrations available with Cloudflare Workers and provide step-by-step instructions for using them.

## Types of integrations

Cloudflare Workers offers several types of integrations, including:

* [Databases](https://developers.cloudflare.com/workers/databases/): Cloudflare Workers can be integrated with a variety of databases, including SQL and NoSQL databases. This allows you to store and retrieve data from your databases directly from your Cloudflare Workers code.
* [APIs](https://developers.cloudflare.com/workers/configuration/integrations/apis/): Cloudflare Workers can be used to integrate with external APIs, allowing you to access and use the data and functionality exposed by those APIs in your own code.
* [Third-party services](https://developers.cloudflare.com/workers/configuration/integrations/external-services/): Cloudflare Workers can be used to integrate with a wide range of third-party services, such as payment gateways, authentication providers, and more. This makes it possible to use these services in your Cloudflare Workers code.

## How to use integrations

To use any of the available integrations:

* Determine which integration you want to use and make sure you have the necessary accounts and credentials for it.
* In your Cloudflare Workers code, import the necessary libraries or modules for the integration.
* Use the provided APIs and functions to connect to the integration and access its data or functionality.
* Store necessary secrets and keys using secrets via [wrangler secret put <KEY>](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret).

## Tips and best practices

To help you get the most out of using integrations with Cloudflare Workers:

* Secure your integrations and protect sensitive data. Ensure you use secure authentication and authorization where possible, and ensure the validity of libraries you import.
* Use [caching](https://developers.cloudflare.com/workers/reference/how-the-cache-works) to improve performance and reduce the load on an external service.
* Split your Workers into service-oriented architecture using [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to make your application more modular, easier to maintain, and more performant.
* Use [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) when communicating with external APIs and services, which create a DNS record on your behalf and treat your Worker as an application instead of a proxy.


---

---
title: APIs
description: To integrate with third party APIs from Cloudflare Workers, use the fetch API to make HTTP requests to the API endpoint. Then use the response data to modify or manipulate your content as needed.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# APIs

To integrate with third party APIs from Cloudflare Workers, use the [fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) to make HTTP requests to the API endpoint. Then use the response data to modify or manipulate your content as needed.

For example, if you want to integrate with a weather API, make a fetch request to the API endpoint and retrieve the current weather data. Then use this data to display the current weather conditions on your website.

To make the `fetch()` request, add the following code to your project's `src/index.js` file:

JavaScript

```
async function handleRequest(request) {
  // Make the fetch request to the third-party API endpoint
  const response = await fetch("https://weather-api.com/endpoint", {
    method: "GET",
    headers: {
      "Content-Type": "application/json",
    },
  });

  // Retrieve the data from the response
  const data = await response.json();

  // Serialize the parsed data back into a JSON response body
  return new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  });
}
```

## Authentication

If your API requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [wrangler secret](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) command:

Terminal window

```
wrangler secret put SECRET_NAME
```

Then, retrieve the secret value in your code using the following code snippet:

JavaScript

```
const secretValue = env.SECRET_NAME;
```

Then use the secret value to authenticate with the external service. For example, if the external service requires an API key for authentication, include it in your request headers.
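For example, a hypothetical helper that attaches the secret as a Bearer token. The `Authorization` header name and `Bearer` scheme are assumptions; use whatever scheme your API documents:

```javascript
// Build a request carrying the secret in the Authorization header.
// In a Worker, you would pass env.SECRET_NAME as apiToken.
function buildAuthorizedRequest(url, apiToken) {
  return new Request(url, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
}

const request = buildAuthorizedRequest(
  "https://weather-api.com/endpoint",
  "example-token",
);
console.log(request.headers.get("Authorization")); // "Bearer example-token"
```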

For services that require mTLS authentication, use [mTLS certificates](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls) to present a client certificate.

## Tips

* Use the [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) to cache responses from the third-party API. This optimizes cacheable requests and reduces the load on the API.
* Use [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) when communicating with external APIs, which treat your Worker as your core application.


---

---
title: External Services
description: Many external services provide libraries and SDKs to interact with their APIs. While many Node-compatible libraries work on Workers out of the box, libraries that rely on fs or http/net, or that access the browser window, do not translate directly to the Workers runtime, which is V8-based.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# External Services

Many external services provide libraries and SDKs to interact with their APIs. While many Node-compatible libraries work on Workers out of the box, libraries that rely on `fs` or `http`/`net`, or that access the browser `window`, do not translate directly to the Workers runtime, which is V8-based.

## Authentication

If your service requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [wrangler secret](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) command:

Terminal window

```
wrangler secret put SECRET_NAME
```

Then, retrieve the secret value in your code using the following code snippet:

JavaScript

```
const secretValue = env.SECRET_NAME;
```

Then use the secret value to authenticate with the external service. For example, if the external service requires an API key for authentication, include the secret in your library's configuration.

For services that require mTLS authentication, use [mTLS certificates](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls) to present a client certificate.

Use [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) when communicating with external APIs, which treat your Worker as your core application.


---

---
title: Multipart upload metadata
description: If you're using the Workers Script Upload API or Version Upload API directly, multipart/form-data uploads require you to specify a metadata part. This metadata defines the Worker's configuration in JSON format, analogous to the wrangler.toml file.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Multipart upload metadata

Note

There is a new API for uploading Workers. Refer to [these docs](https://developers.cloudflare.com/workers/platform/infrastructure-as-code#cloudflare-rest-api) for more information.

If you're using the [Workers Script Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) or [Version Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) directly, `multipart/form-data` uploads require you to specify a `metadata` part. This metadata defines the Worker's configuration in JSON format, analogous to the [wrangler.toml file](https://developers.cloudflare.com/workers/wrangler/configuration/).

## Sample `metadata`

```
{
  "main_module": "main.js",
  "bindings": [
    {
      "type": "plain_text",
      "name": "MESSAGE",
      "text": "Hello, world!"
    }
  ],
  "compatibility_date": "2021-09-14"
}
```

Note

See examples of metadata being used with the Workers Script Upload API [here](https://developers.cloudflare.com/workers/platform/infrastructure-as-code#cloudflare-rest-api).

## Attributes

The following attributes are configurable at the top-level.

Note

At a minimum, the `main_module` key is required to upload a Worker.

* `main_module` ` string ` required  
   * The part name that contains the module entry point of the Worker that will be executed. For example, `main.js`.
* `assets` ` object ` optional  
   * [Asset](https://developers.cloudflare.com/workers/static-assets/) configuration for a Worker.  
   * `config` ` object ` optional  
         * [html\_handling](https://developers.cloudflare.com/workers/static-assets/routing/advanced/html-handling/) determines the redirects and rewrites of requests for HTML content.  
         * [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/#routing-behavior) determines the response when a request does not match a static asset.  
   * `jwt` field provides a token authorizing assets to be attached to a Worker.
* `keep_assets` ` boolean ` optional  
   * Specifies whether assets should be retained from a previously uploaded Worker version; used in lieu of providing a completion token.
* `bindings` array\[object\] optional  
   * [Bindings](#bindings) to expose in the Worker.
* `placement` ` object ` optional  
   * [Smart placement](https://developers.cloudflare.com/workers/configuration/placement/) object for the Worker.  
   * `mode` field only supports `smart` for automatic placement.
* `compatibility_date` ` string ` optional  
   * [Compatibility Date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/#setting-compatibility-date) indicating targeted support in the Workers runtime. Backwards-incompatible fixes to the runtime made after this date will not affect this Worker. Setting a `compatibility_date` is highly recommended; otherwise, uploads via the API default to the oldest compatibility date, before any flags took effect (2021-11-02).
* `compatibility_flags` array\[string\] optional  
   * [Compatibility Flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#setting-compatibility-flags) that enable or disable certain features in the Workers runtime. Used to enable upcoming features or opt in or out of specific changes not included in a `compatibility_date`.

## Additional attributes: [Workers Script Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/)

For [immediately deployed uploads](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#upload-a-new-version-and-deploy-it-immediately), the following **additional** attributes are configurable at the top-level.

Note

Except for `annotations`, these attributes are **not available** for version uploads.

* `migrations` array\[object\] optional  
   * [Durable Objects migrations](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/) to apply.
* `logpush` ` boolean ` optional  
   * Whether [Logpush](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics/#logpush) is turned on for the Worker.
* `tail_consumers` array\[object\] optional  
   * [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) that will consume logs from the attached Worker.
* `tags` array\[string\] optional  
   * List of strings to use as tags for this Worker.
* `annotations` ` object ` optional  
   * Annotations object for the Worker version created by this upload. Also available on the [Version Upload API](#additional-attributes-version-upload-api).  
   * `workers/message` specifies a custom message for the version.  
   * `workers/tag` specifies a custom identifier for the version.

## Additional attributes: [Version Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/)

For [version uploads](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#upload-a-new-version-to-be-gradually-deployed-or-deployed-at-a-later-time), the following **additional** attributes are configurable at the top-level.

* `annotations` ` object ` optional  
   * Annotations object specific to the Worker version.  
   * `workers/message` specifies a custom message for the version.  
   * `workers/tag` specifies a custom identifier for the version.  
   * `workers/alias` specifies a custom alias for this version.

## Bindings

Workers can interact with resources on the Cloudflare Developer Platform using [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/). Refer to the JSON example below that shows how to add bindings in the `metadata` part.

```
{
  "bindings": [
    {
      "type": "ai",
      "name": "<VARIABLE_NAME>"
    },
    {
      "type": "analytics_engine",
      "name": "<VARIABLE_NAME>",
      "dataset": "<DATASET>"
    },
    {
      "type": "assets",
      "name": "<VARIABLE_NAME>"
    },
    {
      "type": "browser_rendering",
      "name": "<VARIABLE_NAME>"
    },
    {
      "type": "d1",
      "name": "<VARIABLE_NAME>",
      "id": "<D1_ID>"
    },
    {
      "type": "durable_object_namespace",
      "name": "<VARIABLE_NAME>",
      "class_name": "<DO_CLASS_NAME>"
    },
    {
      "type": "hyperdrive",
      "name": "<VARIABLE_NAME>",
      "id": "<HYPERDRIVE_ID>"
    },
    {
      "type": "kv_namespace",
      "name": "<VARIABLE_NAME>",
      "namespace_id": "<KV_ID>"
    },
    {
      "type": "mtls_certificate",
      "name": "<VARIABLE_NAME>",
      "certificate_id": "<MTLS_CERTIFICATE_ID>"
    },
    {
      "type": "plain_text",
      "name": "<VARIABLE_NAME>",
      "text": "<VARIABLE_VALUE>"
    },
    {
      "type": "queue",
      "name": "<VARIABLE_NAME>",
      "queue_name": "<QUEUE_NAME>"
    },
    {
      "type": "r2_bucket",
      "name": "<VARIABLE_NAME>",
      "bucket_name": "<R2_BUCKET_NAME>"
    },
    {
      "type": "secret_text",
      "name": "<VARIABLE_NAME>",
      "text": "<SECRET_VALUE>"
    },
    {
      "type": "service",
      "name": "<VARIABLE_NAME>",
      "service": "<SERVICE_NAME>",
      "environment": "production"
    },
    {
      "type": "vectorize",
      "name": "<VARIABLE_NAME>",
      "index_name": "<INDEX_NAME>"
    },
    {
      "type": "version_metadata",
      "name": "<VARIABLE_NAME>"
    }
  ]
}
```


---

---
title: Placement
description: Control where your Worker runs to reduce latency.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Placement

By default, [Workers](https://developers.cloudflare.com/workers/) and [Pages Functions](https://developers.cloudflare.com/pages/functions/) run in a data center closest to where the request was received. If your Worker makes requests to back-end infrastructure such as databases or APIs, it may be more performant to run that Worker closer to your back-end than the end user.

wrangler.jsonc

```
{
  "placement": {
    // Use one of the following options (mutually exclusive):
    "mode": "smart", // Cloudflare automatically places your Worker closest to the upstream with the most requests
    "region": "gcp:us-east4", // Explicit cloud region to run your Worker closest to - e.g. "gcp:us-east4" or "aws:us-east-1"
    "host": "db.example.com:5432", // A host to probe (TCP/layer 4) - e.g. a database host - and place your Worker closest to
    "hostname": "api.example.com" // A hostname to probe (HTTP/layer 7) - e.g. an API endpoint - and place your Worker closest to
  }
}
```

wrangler.toml

```
[placement]
mode = "smart"
region = "gcp:us-east4"
host = "db.example.com:5432"
hostname = "api.example.com"
```

Placement can reduce the overall latency of a Worker request by minimizing roundtrip latency of requests between your Worker and back-end services. You can achieve single-digit millisecond latency to databases, APIs, and other services running in legacy cloud infrastructure.

| Option     | Best for                                                        | Configuration    |
| ---------- | --------------------------------------------------------------- | ---------------- |
| **Smart**  | Multiple back-end services, or unknown infrastructure locations | mode = "smart"   |
| **Region** | Single back-end service in a known cloud region                 | region           |
| **Host**   | Single back-end service not in a major cloud provider           | host or hostname |

## Understand placement

Consider a user in Sydney, Australia accessing an application running on Workers. This application makes multiple round trips to a database in Frankfurt, Germany.

![A user located in Sydney, AU connecting to a Worker in the same region which then makes multiple round trips to a database located in Frankfurt, DE. ](https://developers.cloudflare.com/_astro/workers-smart-placement-disabled.CgvAE24H_2lFyUf.webp) 

The latency from multiple round trips between Sydney and Frankfurt adds up. By placing the Worker near the database, Cloudflare reduces the total request duration.
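A back-of-envelope comparison illustrates why. The ~160 ms Sydney-Frankfurt round-trip time and the four database round trips below are assumed figures for illustration, not measurements:

```javascript
// Assumed figures for illustration only.
const rttMs = 160; // one Sydney <-> Frankfurt round trip
const dbRoundTrips = 4; // queries the Worker makes per request

// Worker near the user: every query crosses the globe.
const workerNearUserMs = dbRoundTrips * rttMs;

// Worker near the database: one long hop, then ~1 ms local queries.
const workerNearDbMs = rttMs + dbRoundTrips * 1;

console.log(workerNearUserMs); // 640
console.log(workerNearDbMs); // 164
```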

![A user located in Sydney, AU connecting to a Worker in Frankfurt, DE which then makes multiple round trips to a database also located in Frankfurt, DE. ](https://developers.cloudflare.com/_astro/workers-smart-placement-enabled.D6RN33at_Z2gprT.webp) 

## Enable Smart Placement

Smart Placement automatically analyzes your Worker's traffic patterns and places it in an optimal location. Use Smart Placement when:

* Your Worker connects to multiple back-end services
* You do not know the exact location of your infrastructure
* Your back-end services are distributed or replicated

Smart Placement is enabled on a per-Worker basis. Once enabled, it analyzes the [request duration](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#request-duration) of the Worker in different Cloudflare locations on a regular basis.

For each candidate location, Smart Placement considers the Worker's performance and the network latency added by forwarding the request. If a candidate location is significantly faster, the request is forwarded there. Otherwise, the Worker runs in the default location closest to the request.

Smart Placement only considers locations where the Worker has previously run. It cannot place your Worker in a location that does not normally receive traffic.

### Review limitations

* Smart Placement only affects the execution of [fetch event handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/). It does not affect [RPC methods](https://developers.cloudflare.com/workers/runtime-apis/rpc/) or [named entrypoints](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints).
* Workers without a fetch event handler are ignored by Smart Placement.
* [Static assets](https://developers.cloudflare.com/workers/static-assets/) are always served from the location nearest to the incoming request. If your code retrieves assets via the [static assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), assets are served from the location where your Worker runs.

### Enable smart placement

Smart Placement is available on all Workers plans.

#### Configure with Wrangler

Add the following to your Wrangler configuration file:

wrangler.jsonc

```
{
  "placement": {
    "mode": "smart"
  }
}
```

wrangler.toml

```
[placement]
mode = "smart"
```

Smart Placement may take up to 15 minutes to analyze your Worker after deployment.

#### Configure in the dashboard

1. Go to **Workers & Pages**.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker.
3. Go to **Settings** \> **General**.
4. Under **Placement**, select **Smart**.

Smart Placement requires consistent traffic to the Worker from multiple locations to make a placement decision. The analysis process may take up to 15 minutes.

### Check placement status

Query your Worker's placement status through the Workers API:

Terminal window

```
curl -X GET https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/workers/services/$WORKER_NAME \
-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
-H "Content-Type: application/json" | jq .
```

Possible placement states:

| Status                    | Description                                                                                                       |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| _(not present)_           | The Worker has not been analyzed yet. It runs in the default location closest to the request.                     |
| SUCCESS                   | The Worker was analyzed and will be optimized by Smart Placement.                                                 |
| INSUFFICIENT\_INVOCATIONS | The Worker has not received enough requests from multiple locations to make a placement decision.                 |
| UNSUPPORTED\_APPLICATION  | Smart Placement made the Worker slower and reverted the placement. This state is rare (fewer than 1% of Workers). |

### Review request duration analytics

Once Smart Placement is enabled, data about request duration is collected. Request duration is measured at the data center closest to the end user. By default, 1% of requests are not routed with Smart Placement to serve as a baseline for comparison.

View your Worker's [request duration analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#request-duration) to measure the impact of Smart Placement.

### Check the `cf-placement` header

Cloudflare adds a `cf-placement` header to all requests when placement is enabled. Use this header to check whether a request was routed with Smart Placement and where the Worker processed the request.

The header value includes a placement type and an airport code indicating the data center location:

* `remote-LHR` — The request was routed using Smart Placement to a data center near London.
* `local-EWR` — The request was not routed using Smart Placement. The Worker ran in the default location near Newark.
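A small helper can split the header value into its parts for logging or debugging. This is an illustrative sketch, not an official API, and the header format may change while Smart Placement is in beta:

```typescript
// Split a cf-placement header value like "remote-LHR" or "local-EWR" into
// its placement type and airport code. Illustrative only; the header is
// subject to change during the beta.
function parsePlacement(
  value: string,
): { type: "remote" | "local"; colo: string } | null {
  const match = /^(remote|local)-([A-Z]{3})$/.exec(value);
  return match
    ? { type: match[1] as "remote" | "local", colo: match[2] }
    : null;
}
```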

Warning

The `cf-placement` header may be removed before Smart Placement exits beta.

## Configure explicit Placement Hints

Placement Hints let you explicitly specify where your Worker runs. Use Placement Hints when:

* You know the exact location of your back-end infrastructure
* Your Worker connects to a single database, API, or service
* Your infrastructure is single-homed (not replicated or anycasted)

Examples include a primary database, a virtual machine, or a Kubernetes cluster in a specific region. Cutting round-trip latency per query from 20-30 milliseconds down to 1-3 milliseconds noticeably improves response times.

Note

Workers run on [Cloudflare's global network ↗](https://www.cloudflare.com/network/), not inside cloud provider regions. Placement Hints run your Worker in the data center with the lowest latency to your specified cloud region. At extremely high request volumes (hundreds of thousands of requests per second or more), Cloudflare may run instances across a more distributed area to balance traffic.

### Specify a cloud region

If your infrastructure runs in AWS, GCP, or Azure, set the `placement.region` property using the format `{provider}:{region}`:

wrangler.jsonc

```jsonc
{
  "placement": {
    // Explicit cloud region to run your Worker closest to - e.g. "gcp:us-east4" or "aws:us-east-1"
    "region": "aws:us-east-1"
  }
}
```

wrangler.toml

```toml
[placement]
region = "aws:us-east-1"
```

Cloudflare maps your specified cloud region to the data center with the lowest latency to that region. Cloudflare automatically adjusts placement to account for network maintenance or changes, so you do not need to specify failover regions.

### Specify a host endpoint

If your infrastructure is not in a major cloud provider, you can specify an endpoint for Cloudflare to probe. Cloudflare will triangulate the position of your external host and place Workers in a nearby region.

Note

Host-based placement is experimental.

Set `placement.host` to identify a layer 4 service. Cloudflare uses TCP CONNECT checks to measure latency and selects the best data center.

wrangler.jsonc

```jsonc
{
  "placement": {
    // A host to probe (TCP/layer 4) - e.g. a database host - and place your Worker closest to
    "host": "my_database_host.com:5432"
  }
}
```

wrangler.toml

```toml
[placement]
host = "my_database_host.com:5432"
```

Set `placement.hostname` to identify a layer 7 service. Cloudflare uses HTTP HEAD checks to measure latency and selects the best data center.

wrangler.jsonc

```jsonc
{
  "placement": {
    // A hostname to probe (HTTP/layer 7) - e.g. an API endpoint - and place your Worker closest to
    "hostname": "my_api_server.com"
  }
}
```

wrangler.toml

```toml
[placement]
hostname = "my_api_server.com"
```

Probes are sent from public IP ranges, not Cloudflare IP ranges. Cloudflare rechecks service location at regular intervals. These probes locate single-homed resources and do not work correctly for broadcast, anycast, multicast, or replicated resources.

### List supported regions

Placement Hints support Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure region identifiers:

| Provider | Format         | Examples                                            |
| -------- | -------------- | --------------------------------------------------- |
| AWS      | aws:{region}   | aws:us-east-1, aws:us-west-2, aws:eu-central-1      |
| GCP      | gcp:{region}   | gcp:us-east4, gcp:europe-west1, gcp:asia-east1      |
| Azure    | azure:{region} | azure:westeurope, azure:eastus, azure:southeastasia |

For a full list of region codes, refer to [AWS regions ↗](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html), [GCP regions ↗](https://cloud.google.com/compute/docs/regions-zones), or [Azure regions ↗](https://learn.microsoft.com/en-us/azure/reliability/regions-list).
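If you generate Wrangler configuration programmatically, a loose shape check for the `{provider}:{region}` format can catch typos early. This sketch only validates the format described above; it does not confirm that the region itself exists:

```typescript
// Loose shape check for a placement.region value like "aws:us-east-1".
// Validates the {provider}:{region} format only, not region existence.
function isValidRegionHint(value: string): boolean {
  const [provider, region] = value.split(":");
  return (
    ["aws", "gcp", "azure"].includes(provider) &&
    !!region &&
    /^[a-z0-9-]+$/.test(region)
  );
}
```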

## Placement Behavior

Workers placement behaves the same whether Smart Placement or Placement Hints is used. The following behavior applies to both.

### Review limitations

The following limitations apply to both Smart Placement and Placement Hints:

* Placement only affects the execution of [fetch event handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/). It does not affect [RPC methods](https://developers.cloudflare.com/workers/runtime-apis/rpc/) or [named entrypoints](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints).
* Workers without a fetch event handler are ignored by placement.
* [Static assets](https://developers.cloudflare.com/workers/static-assets/) are always served from the location nearest to the incoming request. If your code retrieves assets via the [static assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), assets are served from the location where your Worker runs.

### `cf-placement` header

Cloudflare adds a `cf-placement` header to all requests when placement is enabled. Use this header to check whether a request was routed with placement and where the Worker processed the request.

The header value includes a placement type and an airport code indicating the data center location:

* `remote-LHR` — The request was routed using Smart Placement to a data center near London.
* `local-EWR` — The request was not routed using Smart Placement. The Worker ran in the default location near Newark.

Warning

The `cf-placement` header may be removed before Smart Placement exits beta.

## Multiple Workers

If you are building full-stack applications on Workers, split your edge logic (authentication, routing) and back-end logic (database queries, API calls) into separate Workers. Use [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to connect them with type-safe RPC.

![Smart Placement and Service Bindings](https://developers.cloudflare.com/_astro/smart-placement-service-bindings.Ce58BYeF_ZmD4l8.webp) 

Enable placement on your back-end Worker to invoke it close to your database, while the edge Worker handles authentication close to the user.

### Example: Edge authentication with a placed back-end

This example shows two Workers:

* `auth-worker` — runs at the edge (no placement), handles authentication
* `app-worker` — placed near your database, handles data queries

auth-worker/wrangler.jsonc

```jsonc
{
  "name": "auth-worker",
  "main": "src/index.ts",
  "services": [{ "binding": "APP", "service": "app-worker" }]
}
```

auth-worker/wrangler.toml

```toml
name = "auth-worker"
main = "src/index.ts"

[[services]]
binding = "APP"
service = "app-worker"
```

auth-worker/src/index.ts

```ts
import { AppWorker } from "../app-worker/src/index";

interface Env {
  APP: Service<AppWorker>;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const authHeader = request.headers.get("Authorization");
    if (!authHeader?.startsWith("Bearer ")) {
      return new Response("Unauthorized", { status: 401 });
    }

    const userId = await validateToken(authHeader.slice(7));
    if (!userId) {
      return new Response("Invalid token", { status: 403 });
    }

    // Call the placed back-end Worker via RPC
    const data = await env.APP.getUser(userId);
    return Response.json(data);
  },
};

async function validateToken(token: string): Promise<string | null> {
  return token === "valid" ? "user-123" : null;
}
```

app-worker/wrangler.jsonc

```jsonc
{
  "name": "app-worker",
  "main": "src/index.ts",
  "placement": {
    // Use one of the following options (mutually exclusive):
    // "mode": "smart", // Cloudflare automatically places your Worker closest to the upstream with the most requests
    "region": "aws:us-east-1" // Explicit cloud region to run your Worker closest to - e.g. "gcp:us-east4" or "aws:us-east-1"
    // "host": "db.example.com:5432", // A host to probe (TCP/layer 4) - e.g. a database host - and place your Worker closest to
    // "hostname": "api.example.com", // A hostname to probe (HTTP/layer 7) - e.g. an API endpoint - and place your Worker closest to
  }
}
```

app-worker/wrangler.toml

```toml
name = "app-worker"
main = "src/index.ts"

[placement]
region = "aws:us-east-1"
```

app-worker/src/index.ts

```ts
import { WorkerEntrypoint } from "cloudflare:workers";

export default class AppWorker extends WorkerEntrypoint {
  async fetch() {
    return new Response(null, { status: 404 });
  }

  // Each method runs near your database - multiple queries stay fast
  async getUser(userId: string) {
    const user = await this.env.DB.prepare("SELECT * FROM users WHERE id = ?")
      .bind(userId)
      .first();
    return user;
  }

  async getUserListings(userId: string) {
    // Multiple round-trips to the DB are low-latency when placed nearby
    const user = await this.env.DB.prepare("SELECT * FROM users WHERE id = ?")
      .bind(userId)
      .first();
    const listings = await this.env.DB.prepare(
      "SELECT * FROM listings WHERE owner_id = ?",
    )
      .bind(userId)
      .all();
    const reviews = await this.env.DB.prepare(
      "SELECT * FROM reviews WHERE listing_id IN (SELECT id FROM listings WHERE owner_id = ?)",
    )
      .bind(userId)
      .all();

    return { user, listings: listings.results, reviews: reviews.results };
  }
}
```

The `auth-worker` runs at the edge to reject unauthorized requests quickly. Authenticated requests are forwarded via RPC to `app-worker`, which runs near your database for fast queries.

### Durable Objects

[Durable Objects](https://developers.cloudflare.com/durable-objects/) provide automatic placement without configuration. Queries to a Durable Object's embedded [SQLite database](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/) are effectively [zero-latency ↗](https://blog.cloudflare.com/sqlite-in-durable-objects/) because compute runs in the same process as the data.

Do as much work as possible within the Durable Object and return a composite result, rather than making multiple round-trips from your Worker:

src/index.ts

```ts
import { DurableObject } from "cloudflare:workers";

type Session = { id: string; user_id: string; created_at: number };
type PromptHistory = {
  id: string;
  session_id: string;
  role: string;
  content: string;
};

export class AgentHistory extends DurableObject {
  async getSessionContext(sessionId: string) {
    // All queries execute with zero network latency - compute and data are colocated
    const session = this.ctx.storage.sql
      .exec<Session>("SELECT * FROM sessions WHERE id = ?", sessionId)
      .one();
    const prompts = this.ctx.storage.sql
      .exec<PromptHistory>(
        "SELECT * FROM prompt_history WHERE session_id = ? ORDER BY created_at",
        sessionId,
      )
      .toArray();

    return { session, prompts };
  }
}
```


---

---
title: Preview URLs
description: Preview URLs allow you to preview new versions of your project without deploying it to production.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Preview URLs

## Overview

Preview URLs allow you to preview new versions of your Worker without deploying it to production.

There are two types of preview URLs:

* **Versioned Preview URLs**: A unique URL generated automatically for each new version of your Worker.
* **Aliased Preview URLs**: A static, human-readable alias that you can manually assign to a Worker version.

Both preview URL types follow the format: `<VERSION_PREFIX OR ALIAS>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`.
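Tooling that builds these URLs (for example, to post a preview link on a pull request) is a simple string template. A sketch, with illustrative parameter names:

```typescript
// Build a preview URL following the documented format:
// <VERSION_PREFIX OR ALIAS>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev
function previewUrl(
  prefixOrAlias: string,
  workerName: string,
  subdomain: string,
): string {
  return `https://${prefixOrAlias}-${workerName}.${subdomain}.workers.dev`;
}
```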

Preview URLs can be:

* Integrated into CI/CD pipelines, allowing automatic generation of preview environments for every pull request.
* Used for collaboration between teams to test code changes in a live environment and verify updates.
* Used to test new API endpoints, validate data formats, and ensure backward compatibility with existing services.

When testing zone level performance or security features for a version, we recommend using [version overrides](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#version-overrides) so that your zone's performance and security settings apply.

Note

Preview URLs are only available for Worker versions uploaded after 2024-09-25.

## Types of Preview URLs

### Versioned Preview URLs

Every time you create a new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of your Worker, a unique static version preview URL is generated automatically. These URLs use a version prefix and follow the format `<VERSION_PREFIX>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`.

New versions of a Worker are created when you run:

* [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy)
* [wrangler versions upload](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-upload)
* Or when you make edits via the Cloudflare dashboard

If Preview URLs have been enabled, they are public and available immediately after version creation.

Note

Minimum required Wrangler version: 3.74.0. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

#### View versioned preview URLs using Wrangler

The [wrangler versions upload](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-upload) command uploads a new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of your Worker and returns a preview URL for each version uploaded.

#### View versioned preview URLs on the Workers dashboard

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker.
3. Go to the **Deployments** tab, and find the version you would like to view.

### Aliased preview URLs

Aliased preview URLs let you assign a persistent, readable alias to a specific Worker version. These are useful for linking to a stable preview across many versions (for example, to share a feature that is still in active development). A common workflow is to assign an alias for the branch you are working on. These preview URLs follow the same pattern as other preview URLs: `<ALIAS>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`

Note

Minimum required Wrangler version: `4.21.0`. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

#### Create an Alias

Aliases may be created during `versions upload` by providing the `--preview-alias` flag with a valid alias name:

Terminal window

```sh
wrangler versions upload --preview-alias staging
```

The resulting alias is associated with this version and immediately available at: `staging-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`

#### Rules and limitations

* Aliases may only be created during version upload.
* Aliases must use only lowercase letters, numbers, and dashes.
* Aliases must begin with a lowercase letter.
* The alias and Worker name combined (with a dash) must not exceed 63 characters due to DNS label limits.
* Only the 1000 most recently deployed aliases are retained. When a new alias is created beyond this limit, the least recently deployed alias is deleted.
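A CI script might validate an alias before uploading. The following sketch encodes the rules above; it is illustrative only, and Wrangler performs its own validation on upload:

```typescript
// Check an alias against the documented rules: lowercase letters, numbers,
// and dashes only; must begin with a lowercase letter; and the alias plus
// "-" plus the Worker name must fit in a 63-character DNS label.
function isValidAlias(alias: string, workerName: string): boolean {
  return (
    /^[a-z][a-z0-9-]*$/.test(alias) &&
    `${alias}-${workerName}`.length <= 63
  );
}
```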

## Manage access to Preview URLs

When enabled, all preview URLs are available publicly. You can use [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) to require visitors to authenticate before accessing preview URLs. You can limit access to yourself, your teammates, your organization, or anyone else you specify in your [access policy](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/).

To limit your preview URLs to authorized emails only:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Go to **Settings** \> **Domains & Routes**.
4. For Preview URLs, click **Enable Cloudflare Access**.
5. Optionally, to configure the Access application, click **Manage Cloudflare Access**. There, you can change the email addresses you want to authorize. View [Access policies](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/#selectors) to learn about configuring alternate rules.
6. [Validate the Access JWT ↗](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/authorization-cookie/validating-json/#cloudflare-workers-example) in your Worker script using the audience (`aud`) tag and JWKs URL provided.

## Toggle Preview URLs (Enable or Disable)

Note

* Preview URLs are enabled by default when `workers_dev` is enabled.
* Preview URLs are disabled by default when `workers_dev` is disabled.
* Disabling Preview URLs will disable routing to both versioned and aliased preview URLs.

### From the Dashboard

To toggle Preview URLs for a Worker:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Go to **Settings** \> **Domains & Routes**.
4. For Preview URLs, click **Enable** or **Disable**.
5. Confirm your action.

### From the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)

Note

Wrangler 3.91.0 or higher is required to use this feature.

Note

Older Wrangler versions will default to Preview URLs being enabled.

To toggle Preview URLs for a Worker, include any of the following in your Worker's Wrangler file:

To enable Preview URLs:

wrangler.jsonc

```jsonc
{
  "preview_urls": true
}
```

wrangler.toml

```toml
preview_urls = true
```

To disable Preview URLs:

wrangler.jsonc

```jsonc
{
  "preview_urls": false
}
```

wrangler.toml

```toml
preview_urls = false
```

If not set, `preview_urls` defaults to the value of `workers_dev`.
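The fallback can be expressed as a one-liner. A sketch of the effective setting, given the two configuration flags:

```typescript
// Effective Preview URLs setting: preview_urls falls back to workers_dev
// when not specified, per the default described above.
function effectivePreviewUrls(config: {
  preview_urls?: boolean;
  workers_dev: boolean;
}): boolean {
  return config.preview_urls ?? config.workers_dev;
}
```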

Warning

If you enable or disable Preview URLs in the Cloudflare dashboard, but do not update your Worker's Wrangler file accordingly, the Preview URLs status will change the next time you deploy your Worker with Wrangler.

## Limitations

* Preview URLs are not generated for Workers that implement a [Durable Object](https://developers.cloudflare.com/durable-objects/).
* Preview URLs are not currently generated for [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) [user Workers](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#user-workers). This is a temporary limitation that we are working to remove.
* You cannot currently configure Preview URLs to run on a subdomain other than [workers.dev](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/).
* You cannot currently view logs for Preview URLs; this includes Workers Logs, `wrangler tail`, and Logpush.


---

---
title: Routes and domains
description: Connect your Worker to an external endpoint (via Routes, Custom Domains or a `workers.dev` subdomain) such that it can be accessed by the Internet.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Routes and domains

To allow a Worker to receive inbound HTTP requests, you must connect it to an external endpoint such that it can be accessed by the Internet.

There are three types of routes:

* [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains): Routes to a domain or subdomain (such as `example.com` or `shop.example.com`) within a Cloudflare zone where the Worker is the origin.
* [Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/): Routes set within a Cloudflare zone where your origin server, if you have one, sits behind the Worker, and the Worker can communicate with it.
* [workers.dev](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/): A `workers.dev` subdomain route is automatically created for each Worker to help you get started quickly. You may choose to [disable](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) your `workers.dev` subdomain.

## What is best for me?

It's recommended to run production Workers on a [Workers route or custom domain](https://developers.cloudflare.com/workers/configuration/routing/), rather than on your `workers.dev` subdomain. Your `workers.dev` subdomain is treated as a [Free website ↗](https://www.cloudflare.com/plans/) and is intended for personal or hobby projects that aren't business-critical.

Custom Domains are recommended for use cases where your Worker is your application's origin server. Custom Domains can also be invoked within the same zone via `fetch()`, unlike Routes.

Routes are recommended for use cases where your application's origin server is external to Cloudflare. Note that Routes cannot be the target of a same-zone `fetch()` call.


---

---
title: Custom Domains
description: Custom Domains allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management. After you set up a Custom Domain for your Worker, Cloudflare will create DNS records and issue necessary certificates on your behalf. The created DNS records will point directly to your Worker. Unlike Routes, Custom Domains point all paths of a domain or subdomain to your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Custom Domains

## Background

Custom Domains allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management. After you set up a Custom Domain for your Worker, Cloudflare will create DNS records and issue necessary certificates on your behalf. The created DNS records will point directly to your Worker. Unlike [Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/#set-up-a-route), Custom Domains point all paths of a domain or subdomain to your Worker.

Custom Domains are routes to a domain or subdomain (such as `example.com` or `shop.example.com`) within a Cloudflare zone where the Worker is the origin.

Custom Domains are recommended if you want to connect your Worker to the Internet and do not have an application server that you want to always communicate with. If you do have external dependencies, you can create a `Request` object with the target URI, and use `fetch()` to reach out.

Custom Domains can stack on top of each other. For example, if you have Worker A attached to `app.example.com` and Worker B attached to `api.example.com`, Worker A can call `fetch()` on `api.example.com` and invoke Worker B.

![Custom Domains can stack on top of each other, like any external dependencies](https://developers.cloudflare.com/_astro/custom-domains-subrequest.C6c84jN5_1oQWRD.webp) 

Custom Domains can also be invoked within the same zone via `fetch()`, unlike Routes.
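A minimal sketch of the stacking pattern described above, with illustrative hostnames: a Worker on `app.example.com` reaches a Worker on the Custom Domain `api.example.com` with a plain `fetch()`:

```typescript
// Worker A, attached to app.example.com. Because api.example.com is a
// Custom Domain in the same zone, this fetch() invokes Worker B directly,
// no service binding required.
const workerA = {
  async fetch(_request: Request): Promise<Response> {
    const apiResponse = await fetch("https://api.example.com/status");
    return new Response(`API said: ${await apiResponse.text()}`);
  },
};

export default workerA;
```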

## Add a Custom Domain

To add a Custom Domain, you must have:

1. An [active Cloudflare zone](https://developers.cloudflare.com/dns/zone-setups/).
2. A Worker to invoke.

Custom Domains can be attached to your Worker via the Cloudflare dashboard, [Wrangler](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/#set-up-a-custom-domain-in-your-wrangler-configuration-file) or the [API](https://developers.cloudflare.com/api/resources/workers/subresources/domains/methods/list/).

Warning

You cannot create a Custom Domain on a hostname with an existing CNAME DNS record or on a zone you do not own.

### Set up a Custom Domain in the dashboard

To set up a Custom Domain in the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Go to **Settings** \> **Domains & Routes** \> **Add** \> **Custom Domain**.
4. Enter the domain you want to configure for your Worker.
5. Select **Add Custom Domain**.

After you have added the domain or subdomain, Cloudflare will create a new DNS record for you. You can add multiple Custom Domains.

### Set up a Custom Domain in your Wrangler configuration file

To configure a Custom Domain in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), add the `custom_domain=true` option on each pattern under `routes`. For example, to configure a Custom Domain:

wrangler.jsonc

```jsonc
{
  "routes": [
    {
      "pattern": "shop.example.com",
      "custom_domain": true
    }
  ]
}
```

wrangler.toml

```toml
[[routes]]
pattern = "shop.example.com"
custom_domain = true
```

To configure multiple Custom Domains:

wrangler.jsonc

```jsonc
{
  "routes": [
    {
      "pattern": "shop.example.com",
      "custom_domain": true
    },
    {
      "pattern": "shop-two.example.com",
      "custom_domain": true
    }
  ]
}
```

wrangler.toml

```toml
[[routes]]
pattern = "shop.example.com"
custom_domain = true

[[routes]]
pattern = "shop-two.example.com"
custom_domain = true
```

## Worker to Worker communication

On the same zone, the only way for a Worker to communicate with another Worker running on a [route](https://developers.cloudflare.com/workers/configuration/routing/routes/#set-up-a-route), or on a [workers.dev](https://developers.cloudflare.com/workers/configuration/routing/routes/#%5Ftop) subdomain, is via [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/).

On the same zone, if a Worker is attempting to communicate with a target Worker running on a Custom Domain rather than a route, the limitation is removed. Fetch requests sent on the same zone from one Worker to another Worker running on a Custom Domain will succeed without a service binding.

For example, consider the following scenario, where both Workers are running on the `example.com` Cloudflare zone:

* `worker-a` running on the [route](https://developers.cloudflare.com/workers/configuration/routing/routes/#set-up-a-route) `auth.example.com/*`.
* `worker-b` running on the [route](https://developers.cloudflare.com/workers/configuration/routing/routes/#set-up-a-route) `shop.example.com/*`.

If `worker-a` sends a fetch request to `worker-b`, the request will fail, because of the limitation on same-zone fetch requests. `worker-a` must have a service binding to `worker-b` for this request to resolve.

worker-a

```js
export default {
  fetch(request) {
    // This will fail
    return fetch("https://shop.example.com");
  },
};
```

However, if `worker-b` was instead set up to run on the Custom Domain `shop.example.com`, the fetch request would succeed.

## Request matching behaviour

Custom Domains do not support [wildcard DNS records](https://developers.cloudflare.com/dns/manage-dns-records/reference/wildcard-dns-records/). An incoming request must exactly match the domain or subdomain your Custom Domain is registered to. Other parts (path, query parameters) of the URL are not considered when executing this matching logic. For example, if you create a Custom Domain on `api.example.com` attached to your `api-gateway` Worker, a request to either `api.example.com/login` or `api.example.com/user` would invoke the same `api-gateway` Worker.

![Custom Domains follow standard DNS ordering and matching logic](https://developers.cloudflare.com/_astro/custom-domains-api-gateway.DmeJZDoL_Z1d0vv1.webp) 
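The matching logic described above can be sketched as a hostname-only comparison (illustrative only, not Cloudflare's implementation):

```javascript
// Only the hostname is compared; path and query string are ignored.
function matchesCustomDomain(requestUrl, customDomain) {
  return new URL(requestUrl).hostname === customDomain;
}
```

For example, both `api.example.com/login` and `api.example.com/user` match a Custom Domain on `api.example.com`, while `www.example.com` does not.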

## Interaction with Routes

A Worker running on a Custom Domain is treated as an origin. Any Workers running on routes before your Custom Domain can optionally call the Worker registered on your Custom Domain by issuing `fetch(request)` with the incoming `Request` object. That means that you are able to set up Workers to run before a request gets to your Custom Domain Worker. In other words, you can chain together two Workers in the same request.

For example, consider the following workflow:

1. A Custom Domain for `api.example.com` points to your `api-worker` Worker.
2. A route added to `api.example.com/auth` points to your `auth-worker` Worker.
3. A request to `api.example.com/auth` will trigger your `auth-worker` Worker.
4. Using `fetch(request)` within the `auth-worker` Worker will invoke the `api-worker` Worker, as if it was a normal application server.

auth-worker

```
export default {
  fetch(request) {
    const url = new URL(request.url);
    if (url.searchParams.get("auth") !== "SECRET_TOKEN") {
      return new Response(null, { status: 401 });
    } else {
      // This will invoke `api-worker`
      return fetch(request);
    }
  },
};
```

## Certificates

Creating a Custom Domain will also generate an [Advanced Certificate](https://developers.cloudflare.com/ssl/edge-certificates/advanced-certificate-manager/) on your target zone for your target hostname.

These certificates are generated with default settings. To override these settings, delete the generated certificate and create your own certificate in the Cloudflare dashboard. Refer to [Manage advanced certificates](https://developers.cloudflare.com/ssl/edge-certificates/advanced-certificate-manager/manage-certificates/) for instructions.

## Migrate from Routes

If you are currently invoking a Worker using a [route](https://developers.cloudflare.com/workers/configuration/routing/routes/) with `/*`, and you have a CNAME record pointing to `100::` or similar, a Custom Domain is a recommended replacement.

### Migrate from Routes via the dashboard

To migrate the route `example.com/*`:

1. In the Cloudflare dashboard, go to the **DNS Records** page for your domain.  
[ Go to **Records** ](https://dash.cloudflare.com/?to=/:account/:zone/dns/records)
2. Delete the CNAME record for `example.com`.
3. Go to **Account Home** \> **Workers & Pages**.
4. In **Overview**, select your Worker > **Settings** \> **Domains & Routes**.
5. Select **Add** \> **Custom domain** and add `example.com`.
6. Delete the route `example.com/*` located in your Worker > **Settings** \> **Domains & Routes**.

### Migrate from Routes via Wrangler

To migrate the route `example.com/*` in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

1. In the Cloudflare dashboard, go to the **DNS Records** page for your domain.  
[ Go to **Records** ](https://dash.cloudflare.com/?to=/:account/:zone/dns/records)
2. Delete the CNAME record for `example.com`.
3. Add the following to your Wrangler file:  
   wrangler.jsonc  
```  
{  
  "routes": [  
    {  
      "pattern": "example.com",  
      "custom_domain": true  
    }  
  ]  
}  
```  
   wrangler.toml  
```  
[[routes]]  
pattern = "example.com"  
custom_domain = true  
```
4. Run `npx wrangler deploy` to create the Custom Domain your Worker will run on.


---

---
title: Routes
description: Routes allow users to map a URL pattern to a Worker. When a request comes in to the Cloudflare network that matches the specified URL pattern, your Worker will execute on that route.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Routes

## Background

Routes allow users to map a URL pattern to a Worker. When a request comes in to the Cloudflare network that matches the specified URL pattern, your Worker will execute on that route.

Routes are a set of rules that evaluate against a request's URL. Routes are recommended for you if you have a designated application server you always need to communicate with. Calling `fetch()` on the incoming `Request` object will trigger a subrequest to your application server, as defined in the **DNS** settings of your Cloudflare zone.

Routes add Workers functionality to your existing proxied hostnames, in front of your application server. These allow your Workers to act as a proxy and perform any necessary work before reaching out to an application server behind Cloudflare.

![Routes work with your applications defined in Cloudflare DNS](https://developers.cloudflare.com/_astro/routes-diagram.CfGSi1RG_Z1jppef.webp) 

Routes can `fetch()` Custom Domains and take precedence if configured on the same hostname. If you would like to run a logging Worker in front of your application, for example, you can create a Custom Domain on your application Worker for `app.example.com`, and create a Route for your logging Worker at `app.example.com/*`. Calling `fetch()` will invoke the application Worker on your Custom Domain. Note that Routes cannot be the target of a same-zone `fetch()` call.

## Set up a route

To add a route, you must have:

1. An [active Cloudflare zone](https://developers.cloudflare.com/dns/zone-setups/).
2. A Worker to invoke.
3. A DNS record, proxied by Cloudflare (also known as orange-clouded), set up for the [domain](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-zone-apex/) or [subdomain](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-subdomain/) you would like to route to.

Warning

Route setup differs depending on whether your application's origin is a Worker. If your Worker is your application's origin, use [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/).

If your Worker is not your application's origin, follow the instructions below to set up a route.

Note

Routes can also be created via the API. Refer to the [Workers Routes API documentation](https://developers.cloudflare.com/api/resources/workers/subresources/routes/methods/create/) for more information.

### Set up a route in the dashboard

Before you set up a route, make sure you have a DNS record set up for the [domain](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-zone-apex/) or [subdomain](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-subdomain/) you would like to route to.

To set up a route in the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Go to **Settings** \> **Domains & Routes** \> **Add** \> **Route**.
4. Select the zone and enter the route pattern.
5. Select **Add route**.

### Set up a route in the Wrangler configuration file

Before you set up a route, make sure you have a DNS record set up for the [domain](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-zone-apex/) or [subdomain](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-subdomain/) you would like to route to.

To configure a route using your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), refer to the following example.

wrangler.jsonc

```
{
  "routes": [
    {
      "pattern": "subdomain.example.com/*",
      "zone_name": "example.com"
    },
    // or
    {
      "pattern": "subdomain.example.com/*",
      "zone_id": "<YOUR_ZONE_ID>"
    }
  ]
}
```

wrangler.toml

```
[[routes]]
pattern = "subdomain.example.com/*"
zone_name = "example.com"

[[routes]]
pattern = "subdomain.example.com/*"
zone_id = "<YOUR_ZONE_ID>"
```

Add the `zone_name` or `zone_id` option after each route. The `zone_name` and `zone_id` options are interchangeable. To find your zone ID:

1. Go to the Zone Overview page in the Cloudflare dashboard.  
[ Go to **Overview** ](https://dash.cloudflare.com/?to=/:account/:zone/)
2. Find the **Zone ID** in the left-hand side of **Overview**.

To add multiple routes:

wrangler.jsonc

```
{
  "routes": [
    {
      "pattern": "subdomain.example.com/*",
      "zone_name": "example.com"
    },
    {
      "pattern": "subdomain-two.example.com/example",
      "zone_id": "<YOUR_ZONE_ID>"
    }
  ]
}
```

wrangler.toml

```
[[routes]]
pattern = "subdomain.example.com/*"
zone_name = "example.com"

[[routes]]
pattern = "subdomain-two.example.com/example"
zone_id = "<YOUR_ZONE_ID>"
```

## Matching behavior

Route patterns look like this:

```
https://*.example.com/images/*
```

This pattern would match all HTTPS requests destined for a subhost of example.com and whose paths are prefixed by `/images/`.

A pattern to match all requests looks like this:

```
*example.com/*
```

While they look similar to a [regex ↗](https://en.wikipedia.org/wiki/Regular%5Fexpression) pattern, route patterns follow specific rules:

* The only supported operator is the wildcard (`*`), which matches zero or more of any character.
* Route patterns may not contain infix wildcards or query parameters. For example, neither `example.com/*.jpg` nor `example.com/?foo=*` are valid route patterns.
* When more than one route pattern could match a request URL, the most specific route pattern wins. For example, the pattern `www.example.com/*` would take precedence over `*.example.com/*` when matching a request for `https://www.example.com/`. The pattern `example.com/hello/*` would take precedence over `example.com/*` when matching a request for `example.com/hello/world`.
* Route pattern matching considers the entire request URL, including the query parameter string. Since route patterns may not contain query parameters, the only way to have a route pattern match URLs with query parameters is to terminate it with a wildcard, `*`.
* The path component of route patterns is case sensitive, for example, `example.com/Images/*` and `example.com/images/*` are two distinct routes.
* For routes created before October 15th, 2023, the host component of route patterns is case sensitive, for example, `example.com/*` and `Example.com/*` are two distinct routes.
* For routes created on or after October 15th, 2023, the host component of route patterns is not case sensitive, for example, `example.com/*` and `Example.com/*` are equivalent routes.

A route can be specified without being associated with a Worker. This will act to negate any less specific patterns. For example, consider this pair of route patterns, one with a Workers script and one without:

```
*example.com/images/cat.png -> <no script>
*example.com/images/*       -> worker-script
```

In this example, all requests destined for example.com and whose paths are prefixed by `/images/` would be routed to `worker-script`, _except_ for `/images/cat.png`, which would bypass Workers completely. Requests with a path of `/images/cat.png?foo=bar` would be routed to `worker-script`, due to the presence of the query string.
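A rough sketch of this selection logic (illustrative only; this is not Cloudflare's implementation) treats `*` as zero or more of any character and approximates "most specific" as the pattern with the most literal, non-wildcard characters:

```javascript
// Convert a route pattern to a regular expression: escape regex
// metacharacters, then let * match zero or more of any character.
function patternToRegex(pattern) {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
}

// Among matching patterns, approximate "most specific" as the one
// with the most non-wildcard characters.
function pickRoute(url, patterns) {
  return patterns
    .filter((p) => patternToRegex(p).test(url))
    .sort((a, b) => b.replace(/\*/g, "").length - a.replace(/\*/g, "").length)[0];
}
```

For example, `pickRoute("example.com/hello/world", ["example.com/*", "example.com/hello/*"])` selects the more specific `example.com/hello/*`.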

## Validity

The following set of rules govern route pattern validity.

#### Route patterns must include your zone

If your zone is `example.com`, then the simplest possible route pattern you can have is `example.com`, which would match `http://example.com/` and `https://example.com/`, and nothing else. As with a URL, there is an implied path of `/` if you do not specify one.

#### Route patterns may not contain any query parameters

For example, `https://example.com/?anything` is not a valid route pattern.

#### Route patterns may optionally begin with `http://` or `https://`

If you omit a scheme in your route pattern, it will match both `http://` and `https://` URLs. If you include `http://` or `https://`, it will only match HTTP or HTTPS requests, respectively.

* `https://*.example.com/` matches `https://www.example.com/` but not `http://www.example.com/`.
* `*.example.com/` matches both `https://www.example.com/` and `http://www.example.com/`.

#### Hostnames may optionally begin with `*`

If a route pattern hostname begins with `*`, then it matches the host and all subhosts. If a route pattern hostname begins with `*.`, then it only matches all subhosts.

* `*example.com/` matches `https://example.com/` and `https://www.example.com/`.
* `*.example.com/` matches `https://www.example.com/` but not `https://example.com/`.

Warning

Because `*` matches zero or more of **any character** (not just subdomains), `*example.com` will also match hostnames that are not subdomains of `example.com`. If you only want to match `example.com` and its subdomains, use two separate routes (`example.com/*` and `*.example.com/*`) instead.

The following examples illustrate the difference between `*example.com/*` and `*.example.com/*`:

| Request URL                  | \*example.com/\* | \*.example.com/\* |
| ---------------------------- | ---------------- | ----------------- |
| https://example.com/         | Matches          | Does not match    |
| https://www.example.com/path | Matches          | Matches           |
| https://myexample.com/       | Matches          | Does not match    |
| https://not-example.com/     | Matches          | Does not match    |
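Using the same "zero or more of any character" rule, the hostname difference can be sketched as follows (illustrative only, not Cloudflare's implementation):

```javascript
// Convert a route pattern's host portion into a regular expression in
// which * matches zero or more of any character, then test a hostname.
function hostMatches(pattern, hostname) {
  const host = pattern.split("/")[0];
  const escaped = host.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$").test(hostname);
}
```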

#### Paths may optionally end with `*`

If a route pattern path ends with `*`, then it matches all suffixes of that path.

* `https://example.com/path*` matches `https://example.com/path` and `https://example.com/path2` and `https://example.com/path/readme.txt`

Warning

There is a well-known bug associated with path matching concerning wildcards (`*`) and forward slashes (`/`) that is documented in [Known issues](https://developers.cloudflare.com/workers/platform/known-issues/).

#### Domains and subdomains must have a DNS Record

All domains and subdomains must have a [DNS record](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/) to be proxied on Cloudflare and used to invoke a Worker. For example, if you want to put a Worker on `myname.example.com`, and you have added `example.com` to Cloudflare but have not added any DNS records for `myname.example.com`, any request to `myname.example.com` will result in the error `ERR_NAME_NOT_RESOLVED`.

Warning

If you have previously used the Cloudflare dashboard to add an `AAAA` record for `myname` to `example.com`, pointing to `100::` (the [reserved IPv6 discard prefix ↗](https://tools.ietf.org/html/rfc6666)), Cloudflare recommends creating a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) pointing to your Worker instead.


---

---
title: workers.dev
description: Cloudflare Workers accounts come with a workers.dev subdomain that is configurable in the Cloudflare dashboard. Your workers.dev subdomain allows you to get started quickly by deploying Workers without first onboarding your custom domain to Cloudflare.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# workers.dev

Cloudflare Workers accounts come with a `workers.dev` subdomain that is configurable in the Cloudflare dashboard. Your `workers.dev` subdomain allows you to get started quickly by deploying Workers without first onboarding your custom domain to Cloudflare.

It's recommended to run production Workers on a [Workers route or custom domain](https://developers.cloudflare.com/workers/configuration/routing/), rather than on your `workers.dev` subdomain. Your `workers.dev` subdomain is treated as a [Free website ↗](https://www.cloudflare.com/plans/) and is intended for personal or hobby projects that aren't business-critical.

## Configure `workers.dev`

`workers.dev` subdomains take the format: `<YOUR_ACCOUNT_SUBDOMAIN>.workers.dev`. To change your `workers.dev` subdomain:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select **Change** next to **Your subdomain**.

All Workers are assigned a `workers.dev` route when they are created or renamed following the syntax `<YOUR_WORKER_NAME>.<YOUR_SUBDOMAIN>.workers.dev`. The [name](https://developers.cloudflare.com/workers/wrangler/configuration/#inheritable-keys) field in your Worker configuration is used as the subdomain for the deployed Worker.

## Manage access to `workers.dev`

When enabled, your `workers.dev` URL is publicly available. You can use [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) to require visitors to authenticate before accessing your `workers.dev` URL. You can limit access to yourself, your teammates, your organization, or anyone else you specify in your [access policy](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/).

To limit your `workers.dev` URL to authorized emails only:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Go to **Settings** \> **Domains & Routes**.
4. For `workers.dev`, click **Enable Cloudflare Access**.
5. Optionally, to configure the Access application, click **Manage Cloudflare Access**. There, you can change the email addresses you want to authorize. View [Access policies](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/#selectors) to learn about configuring alternate rules.
6. [Validate the Access JWT ↗](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/authorization-cookie/validating-json/#cloudflare-workers-example) in your Worker script using the audience (`aud`) tag and JWKs URL provided.

## Disabling `workers.dev`

### Disabling `workers.dev` in the dashboard

To disable the `workers.dev` route for a Worker:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Go to **Settings** \> **Domains & Routes**.
4. For `workers.dev`, select **Disable**.
5. Confirm you want to disable.

### Disabling `workers.dev` in the Wrangler configuration file

To disable the `workers.dev` route for a Worker, include the following in your Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

wrangler.jsonc

```
{
  "workers_dev": false
}
```

wrangler.toml

```
workers_dev = false
```

When you redeploy your Worker with this change, the `workers.dev` route will be disabled. Disabling your `workers.dev` route does not disable Preview URLs. Learn how to [disable Preview URLs](https://developers.cloudflare.com/workers/configuration/previews/#disabling-preview-urls).

If you do not specify `workers_dev = false` but add a [routes component](https://developers.cloudflare.com/workers/wrangler/configuration/#routes) to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), the value of `workers_dev` will be inferred as `false` on the next deploy.

Warning

If you disable your `workers.dev` route in the Cloudflare dashboard but do not update your Worker's Wrangler file with `workers_dev = false`, the `workers.dev` route will be re-enabled the next time you deploy your Worker with Wrangler.

## Limitations

When deploying a Worker with a `workers.dev` subdomain enabled, your Worker name must meet the following requirements:

* Must be 63 characters or less
* Must contain only alphanumeric characters (`a-z`, `A-Z`, `0-9`) and dashes (`-`)
* Cannot start or end with a dash (`-`)

These restrictions apply because the Worker name is used as a DNS label in your `workers.dev` URL. DNS labels have a maximum length of 63 characters and cannot begin or end with a dash.
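These rules amount to a single DNS-label check. The following helper is a hypothetical sketch (not part of Wrangler) of the validation described above:

```javascript
// A valid workers.dev Worker name is a DNS label: 63 characters or
// less, alphanumeric characters and dashes only, and it cannot start
// or end with a dash.
function isValidWorkersDevName(name) {
  return /^[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?$/.test(name);
}
```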

Note

Worker names can be up to 255 characters when not using a `workers.dev` subdomain. If you need a longer name, you can disable `workers.dev` and use [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) or [custom domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) instead.

## Related resources

* [Announcing workers.dev ↗](https://blog.cloudflare.com/announcing-workers-dev)
* [Wrangler routes configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#types-of-routes)


---

---
title: Secrets
description: Store sensitive information, like API keys and auth tokens, in your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Secrets

## Background

Secrets are a type of binding that allow you to attach encrypted text values to your Worker. Secrets are used for storing sensitive information like API keys and auth tokens.

You can access secrets in your Worker code through:

* The [env parameter](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [fetch event handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/).
* Importing `env` from [cloudflare:workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global) to access secrets from anywhere in your code.
* [process.env](https://developers.cloudflare.com/workers/configuration/environment-variables) in Workers that have [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) enabled.

## Access your secrets with Workers

Secrets can be accessed from Workers as you would any other [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/). For instance, given a `DB_CONNECTION_STRING` secret, you can access it in your Worker code through the `env` parameter:

index.js

```
import postgres from "postgres";

export default {
  async fetch(request, env, ctx) {
    const sql = postgres(env.DB_CONNECTION_STRING);

    const result = await sql`SELECT * FROM products;`;

    return new Response(JSON.stringify(result), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```

You can also import `env` from `cloudflare:workers` to access secrets from anywhere in your code, including outside of request handlers:


JavaScript

```
import { env } from "cloudflare:workers";
import postgres from "postgres";

// Initialize the database client at the top level using a secret
const sql = postgres(env.DB_CONNECTION_STRING);

export default {
  async fetch(request) {
    const result = await sql`SELECT * FROM products;`;

    return new Response(JSON.stringify(result), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```

TypeScript

```
import { env } from "cloudflare:workers";
import postgres from "postgres";

// Initialize the database client at the top level using a secret
const sql = postgres(env.DB_CONNECTION_STRING);

export default {
  async fetch(request: Request): Promise<Response> {
    const result = await sql`SELECT * FROM products;`;

    return new Response(JSON.stringify(result), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```

For more details on accessing `env` globally, refer to [Importing env as a global](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global).

Secrets Store (beta)

Secrets described on this page are defined and managed on a per-Worker level. If you want to use account-level secrets, refer to [Secrets Store](https://developers.cloudflare.com/secrets-store/). Account-level secrets are configured on your Worker as a [Secrets Store binding](https://developers.cloudflare.com/secrets-store/integrations/workers/).

## Local Development with Secrets

Warning

Do not use `vars` to store sensitive information in your Worker's Wrangler configuration file. Use secrets instead.

Put secrets for use in local development in either a `.dev.vars` file or a `.env` file, in the same directory as the Wrangler configuration file.

Note

You can use the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) to declare which secret names your Worker requires. When defined, only the keys listed in `secrets.required` are loaded from `.dev.vars` or `.env`. Additional keys are excluded and missing keys produce a warning.
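The filtering described here can be sketched as follows (an illustrative helper, not Wrangler's actual code): only declared keys are kept, and missing keys are collected for warnings.

```javascript
// Given the declared required secret names and the keys parsed from
// .dev.vars / .env, keep only the required keys and report missing ones.
function filterSecrets(required, loaded) {
  const env = {};
  const missing = [];
  for (const key of required) {
    if (key in loaded) env[key] = loaded[key];
    else missing.push(key);
  }
  return { env, missing };
}
```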

Choose to use either `.dev.vars` or `.env` but not both. If you define a `.dev.vars` file, then values in `.env` files will not be included in the `env` object during local development.

These files should be formatted using the [dotenv ↗](https://hexdocs.pm/dotenvy/dotenv-file-format.html) syntax. For example:

.dev.vars / .env

```
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
```

Do not commit secrets to git

The `.dev.vars` and `.env` files should not be committed to git. Add `.dev.vars*` and `.env*` to your project's `.gitignore` file.

To set different secrets for each Cloudflare environment, create files named `.dev.vars.<environment-name>` or `.env.<environment-name>`.

When you select a Cloudflare environment in your local development, the corresponding environment-specific file will be loaded ahead of the generic `.dev.vars` (or `.env`) file.

* When using `.dev.vars.<environment-name>` files, all secrets must be defined per environment. If `.dev.vars.<environment-name>` exists then only this will be loaded; the `.dev.vars` file will not be loaded.
* In contrast, all matching `.env` files are loaded and the values are merged. For each variable, the value from the most specific file is used, with the following precedence:  
   * `.env.<environment-name>.local` (most specific)  
   * `.env.local`  
   * `.env.<environment-name>`  
   * `.env` (least specific)
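The `.env` merge order above can be sketched as (illustrative only, not Wrangler's actual implementation):

```javascript
// Merge parsed .env files from least to most specific, so values from
// later (more specific) files overwrite earlier ones.
function mergeDotEnv(files, environmentName) {
  const order = [
    ".env",
    `.env.${environmentName}`,
    ".env.local",
    `.env.${environmentName}.local`,
  ];
  return order.reduce(
    (merged, name) => ({ ...merged, ...(files[name] ?? {}) }),
    {},
  );
}
```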

Controlling `.env` handling

It is possible to control how `.env` files are loaded in local development by setting environment variables on the process running the tools.

* To disable loading local dev vars from `.env` files without providing a `.dev.vars` file, set the `CLOUDFLARE_LOAD_DEV_VARS_FROM_DOT_ENV` environment variable to `"false"`.
* To include every environment variable defined in your system's process environment as a local development variable, ensure there is no `.dev.vars` and then set the `CLOUDFLARE_INCLUDE_PROCESS_ENV` environment variable to `"true"`. This is not needed when using the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property), which loads from `process.env` automatically.

## Secrets on deployed Workers

### Validate secrets before deploy

You can declare the secret names your Worker requires using the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) in your Wrangler configuration. When defined, `wrangler deploy` and `wrangler versions upload` will fail with a clear error if any required secrets are not configured on the Worker.

### Adding secrets to your project

#### Via Wrangler

Secrets can be added through [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) or [wrangler versions secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-secret-put) commands.

`wrangler secret put` creates a new version of the Worker and deploys it immediately.

Terminal window

```
npx wrangler secret put <KEY>
```

If using [gradual deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/), use the `wrangler versions secret put` command instead. This only creates a new version of the Worker, which can then be deployed using [wrangler versions deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-deploy).

Note

Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag.

Terminal window

```
npx wrangler versions secret put <KEY>
```

#### Via the dashboard

To add a secret via the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker > **Settings**.
3. Under **Variables and Secrets**, select **Add**.
4. Select the type **Secret**, input a **Variable name**, and input its **Value**. This secret will be made available to your Worker but the value will be hidden in Wrangler and the dashboard.
5. (Optional) To add more secrets, select **Add variable**.
6. Select **Deploy** to implement your changes.

### Delete secrets from your project

#### Via Wrangler

Secrets can be deleted through [wrangler secret delete](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret-delete) or [wrangler versions secret delete](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-secret-delete) commands.

`wrangler secret delete` creates a new version of the Worker and deploys it immediately.

Terminal window

```sh
npx wrangler secret delete <KEY>
```

If you are using [gradual deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/), use the `wrangler versions secret delete` command instead. This only creates a new version of the Worker, which can then be deployed using [wrangler versions deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-deploy).

Terminal window

```sh
npx wrangler versions secret delete <KEY>
```

#### Via the dashboard

To delete a secret from your Worker project via the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker > **Settings**.
3. Under **Variables and Secrets**, select **Edit**.
4. In the **Edit** drawer, select **X** next to the secret you want to delete.
5. Select **Deploy** to implement your changes.
6. (Optional) Instead of using the **Edit** drawer, you can select the delete icon next to the secret.

## Compare secrets and environment variables

Use secrets for sensitive information

Do not use plaintext environment variables to store sensitive information. Use [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) or [Secrets Store bindings](https://developers.cloudflare.com/secrets-store/integrations/workers/) instead.

[Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/). The difference is that secret values are not visible in Wrangler or the Cloudflare dashboard after you define them, which makes them the right choice for sensitive data such as passwords and API tokens. To your Worker, there is no difference between an environment variable and a secret: the secret's value is passed through as defined.
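To illustrate, here is a minimal sketch of a Worker (the names `GREETING` and `API_TOKEN` are hypothetical) where `GREETING` could be a plaintext environment variable and `API_TOKEN` a secret; the code reads both from `env` in exactly the same way:

```javascript
const worker = {
  async fetch(request, env) {
    // A plaintext var and a secret both arrive as plain properties of `env`;
    // the Worker code cannot tell which is which.
    const authorized =
      request.headers.get("Authorization") === `Bearer ${env.API_TOKEN}`;
    return new Response(authorized ? env.GREETING : "Forbidden", {
      status: authorized ? 200 : 403,
    });
  },
};

// In a real Worker this object would be the default export:
// export default worker;
```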

## Related resources

* [Wrangler secret commands](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) - Review the Wrangler commands to create, delete, and list secrets.
* [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) - Declare required secret names in your Wrangler configuration. Used for validation during local development and deploy, and as the source of truth for type generation.
* [Cloudflare Secrets Store](https://developers.cloudflare.com/secrets-store/) - Encrypt and store sensitive information as secrets that are securely reusable across your account.


---

---
title: Workers Sites
description: Use [Workers Static Assets](/workers/static-assets/) to host full-stack applications instead of Workers Sites. Do not use Workers Sites for new projects.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers Sites

Use Workers Static Assets Instead

You should use [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) to host full-stack applications instead of Workers Sites. Workers Sites is deprecated as of Wrangler v4, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) does not support it. Do not use Workers Sites for new projects.

Workers Sites enables developers to deploy static applications directly to Workers. It can be used for deploying applications built with static site generators like [Hugo ↗](https://gohugo.io) and [Gatsby ↗](https://www.gatsbyjs.org), or front-end frameworks like [Vue ↗](https://vuejs.org) and [React ↗](https://reactjs.org).

To deploy with Workers Sites, select from one of these three approaches depending on the state of your target project:

---

## 1. Start from scratch

If you are ready to start a brand new project, this quick start guide will help you set up the infrastructure to deploy an HTML website to Workers.

[ Start from scratch ](https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch/) 

---

## 2. Deploy an existing static site

If you have an existing project or static assets that you want to deploy with Workers, this quick start guide will help you install Wrangler and configure Workers Sites for your project.

[ Start from an existing static site ](https://developers.cloudflare.com/workers/configuration/sites/start-from-existing/) 

---

## 3. Add static assets to an existing Workers project

If you already have a Worker deployed to Cloudflare, this quick start guide will show you how to configure the existing codebase to use Workers Sites.

[ Start from an existing Worker ](https://developers.cloudflare.com/workers/configuration/sites/start-from-worker/) 

Note

Workers Sites is built on Workers KV, and usage rates may apply. Refer to [Pricing](https://developers.cloudflare.com/workers/platform/pricing/) to learn more.


---

---
title: Workers Sites configuration
description: Workers Sites require the latest version of Wrangler.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers Sites configuration

Use Workers Static Assets Instead

You should use [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) to host full-stack applications instead of Workers Sites. Workers Sites is deprecated as of Wrangler v4, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) does not support it. Do not use Workers Sites for new projects.

Workers Sites require the latest version of [Wrangler ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler).

## Wrangler configuration file

There are a few specific configuration settings for Workers Sites in your Wrangler file:

* `bucket` required  
   * The directory containing your static assets, path relative to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Example: `bucket = "./public"`.
* `include` optional  
   * A list of gitignore-style patterns for files or directories in `bucket` you exclusively want to upload. Example: `include = ["upload_dir"]`.
* `exclude` optional  
   * A list of gitignore-style patterns for files or directories in `bucket` you want to exclude from uploads. Example: `exclude = ["ignore_dir"]`.

To learn more about the optional `include` and `exclude` fields, refer to [Ignoring subsets of static assets](#ignoring-subsets-of-static-assets).

Note

If your project uses [environments](https://developers.cloudflare.com/workers/wrangler/environments/), make sure to place `site` above any environment-specific configuration blocks.

Example of a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

wrangler.jsonc:

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "docs-site-blah",
  "site": {
    "bucket": "./public"
  },
  "env": {
    "production": {
      "name": "docs-site",
      "route": "https://example.com/docs*"
    },
    "staging": {
      "name": "docs-site-staging",
      "route": "https://staging.example.com/docs*"
    }
  }
}
```

wrangler.toml:

```toml
#:schema node_modules/wrangler/config-schema.json
name = "docs-site-blah"

[site]
bucket = "./public"

[env.production]
name = "docs-site"
route = "https://example.com/docs*"

[env.staging]
name = "docs-site-staging"
route = "https://staging.example.com/docs*"
```

## Storage limits

Workers Sites might not work for exceptionally large sites: there is a 25 MiB limit per page or file.

## Ignoring subsets of static assets

Workers Sites require [Wrangler ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler). Make sure to use the [latest version](https://developers.cloudflare.com/workers/wrangler/install-and-update/#update-wrangler).

There are cases where users may not want to upload certain static assets to their Workers Sites. In this case, Workers Sites can also be configured to ignore certain files or directories using logic similar to [Cargo's optional include and exclude fields ↗](https://doc.rust-lang.org/cargo/reference/manifest.html#the-exclude-and-include-fields-optional).

This means that you should use gitignore semantics when declaring which directory entries to include or ignore in uploads.

### Exclusively including files/directories

If you want to include only a certain set of files or directories in your `bucket`, you can add an `include` field to your `[site]` section of your Wrangler file:

wrangler.jsonc:

```jsonc
{
  "site": {
    "bucket": "./public",
    "include": [ // must be an array.
      "included_dir"
    ]
  }
}
```

wrangler.toml:

```toml
[site]
bucket = "./public"
include = [ "included_dir" ]
```

Wrangler will only upload files or directories matching the patterns in the `include` array.

### Excluding files/directories

If you want to exclude files or directories in your `bucket`, you can add an `exclude` field to your `[site]` section of your Wrangler file:

wrangler.jsonc:

```jsonc
{
  "site": {
    "bucket": "./public",
    "exclude": [ // must be an array.
      "excluded_dir"
    ]
  }
}
```

wrangler.toml:

```toml
[site]
bucket = "./public"
exclude = [ "excluded_dir" ]
```

Wrangler will ignore files or directories matching the patterns in the `exclude` array when uploading assets to Workers KV.

### Include > exclude

If you provide both `include` and `exclude` fields, the `include` field will be used and the `exclude` field will be ignored.

### Default ignored entries

Wrangler will always ignore:

* `node_modules`
* Hidden files and directories
* Symlinks

#### More about include/exclude patterns

Learn more about the standard patterns used for include and exclude in the [gitignore documentation ↗](https://git-scm.com/docs/gitignore).
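For instance, a hypothetical `[site]` section using such patterns (the file and directory names are illustrative) might look like:

```toml
[site]
bucket = "./public"
# Upload everything in ./public except source maps and a drafts directory.
exclude = ["*.map", "drafts/"]
```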


---

---
title: Start from existing
description: Workers Sites require Wrangler — make sure to use the latest version.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Start from existing

Use Workers Static Assets Instead

You should use [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) to host full-stack applications instead of Workers Sites. Workers Sites is deprecated as of Wrangler v4, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) does not support it. Do not use Workers Sites for new projects.

Workers Sites require [Wrangler ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler) — make sure to use the [latest version](https://developers.cloudflare.com/workers/wrangler/install-and-update/#update-wrangler).

To deploy a pre-existing static site project, start with a pre-generated site. Workers Sites works with all static site generators, for example:

* [Hugo ↗](https://gohugo.io/getting-started/quick-start/)
* [Gatsby ↗](https://www.gatsbyjs.org/docs/quick-start/), requires Node
* [Jekyll ↗](https://jekyllrb.com/docs/), requires Ruby
* [Eleventy ↗](https://www.11ty.io/#quick-start), requires Node
* [WordPress ↗](https://wordpress.org) (refer to the tutorial on [deploying static WordPress sites with Pages](https://developers.cloudflare.com/pages/how-to/deploy-a-wordpress-site/))

## Getting started

1. Run the `wrangler init` command in the root of your project's directory to generate a basic Worker:  
Terminal window  
```sh  
wrangler init -y  
```  
This command adds or updates the following files:  
   * `wrangler.jsonc`: The file containing project configuration.  
   * `package.json`: Wrangler `devDependencies` are added.  
   * `tsconfig.json`: Added if not already there to support writing the Worker in TypeScript.  
   * `src/index.ts`: A basic Cloudflare Worker, written in TypeScript.
2. Add your site's build/output directory to the Wrangler file:  
   * wrangler.jsonc:  
```jsonc  
{  
  "site": {  
    "bucket": "./public" // <-- Add your build directory name here.  
  }  
}  
```  
   * wrangler.toml:  
```toml  
[site]  
bucket = "./public"  
```  
The default directories for the most popular static site generators are listed below:  
   * Hugo: `public`  
   * Gatsby: `public`  
   * Jekyll: `_site`  
   * Eleventy: `_site`
3. Install the `@cloudflare/kv-asset-handler` package in your project:  
Terminal window  
```sh  
npm i -D @cloudflare/kv-asset-handler  
```
4. Replace the contents of `src/index.ts` with the following code snippet:

Module Worker

```js
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";
import manifestJSON from "__STATIC_CONTENT_MANIFEST";
const assetManifest = JSON.parse(manifestJSON);

export default {
  async fetch(request, env, ctx) {
    try {
      // Add logic to decide whether to serve an asset or run your original Worker code
      return await getAssetFromKV(
        {
          request,
          waitUntil: ctx.waitUntil.bind(ctx),
        },
        {
          ASSET_NAMESPACE: env.__STATIC_CONTENT,
          ASSET_MANIFEST: assetManifest,
        },
      );
    } catch (e) {
      let pathname = new URL(request.url).pathname;
      return new Response(`"${pathname}" not found`, {
        status: 404,
        statusText: "not found",
      });
    }
  },
};
```

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

Service Worker

```js
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";

addEventListener("fetch", (event) => {
  event.respondWith(handleEvent(event));
});

async function handleEvent(event) {
  try {
    // Add logic to decide whether to serve an asset or run your original Worker code
    return await getAssetFromKV(event);
  } catch (e) {
    let pathname = new URL(event.request.url).pathname;
    return new Response(`"${pathname}" not found`, {
      status: 404,
      statusText: "not found",
    });
  }
}
```

5. Run `wrangler dev` or `npx wrangler deploy` to preview or deploy your site on Cloudflare. Wrangler will automatically upload the assets found in the configured directory.  
Terminal window  
```sh  
npx wrangler deploy  
```
6. Deploy your site to a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) that you own and have already attached as a Cloudflare zone. Add a `route` property to the Wrangler file.  
   * wrangler.jsonc:  
```jsonc  
{  
  "route": "https://example.com/*"  
}  
```  
   * wrangler.toml:  
```toml  
route = "https://example.com/*"  
```  
Note  
Refer to the documentation on [Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) to configure a `route` properly.

Learn more about [configuring your project](https://developers.cloudflare.com/workers/wrangler/configuration/).


---

---
title: Start from scratch
description: This guide shows how to quickly start a new Workers Sites project from scratch.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Start from scratch

Use Workers Static Assets Instead

You should use [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) to host full-stack applications instead of Workers Sites. Workers Sites is deprecated as of Wrangler v4, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) does not support it. Do not use Workers Sites for new projects.

This guide shows how to quickly start a new Workers Sites project from scratch.

## Getting started

1. Ensure you have the latest version of [git ↗](https://git-scm.com/downloads) and [Node.js ↗](https://nodejs.org/en/download/) installed.
2. In your terminal, clone the `worker-sites-template` starter repository. The following example creates a project called `my-site`:  
Terminal window  
```sh  
git clone --depth=1 --branch=wrangler2 https://github.com/cloudflare/worker-sites-template my-site  
```
3. Run `npm install` to install all dependencies.
4. You can preview your site by running the [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) command:  
Terminal window  
```sh  
wrangler dev  
```
5. Deploy your site to Cloudflare:  
Terminal window  
```sh  
npx wrangler deploy  
```

## Project layout

The template project contains the following files and directories:

* `public`: The static assets for your project. By default it contains an `index.html` and a `favicon.ico`.
* `src`: The Worker configured for serving your assets. You do not need to edit this but if you want to see how it works or add more functionality to your Worker, you can edit `src/index.ts`.
* `wrangler.jsonc`: The file containing project configuration. The `bucket` property tells Wrangler where to find the static assets (e.g. `"site": { "bucket": "./public" }`).
* `package.json`/`package-lock.json`: define the required Node.js dependencies.

## Customize the `wrangler.jsonc` file

* Change the `name` property to the name of your project:  
   * wrangler.jsonc:  
```jsonc  
{  
  "$schema": "./node_modules/wrangler/config-schema.json",  
  "name": "my-site"  
}  
```  
   * wrangler.toml:  
```toml  
#:schema node_modules/wrangler/config-schema.json  
name = "my-site"  
```
* Consider updating `compatibility_date` to today's date to get access to the most recent Workers features:  
   * wrangler.jsonc:  
```jsonc  
{  
  "compatibility_date": "yyyy-mm-dd"  
}  
```  
   * wrangler.toml:  
```toml  
compatibility_date = "yyyy-mm-dd"  
```
* Deploy your site to a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) that you own and have already attached as a Cloudflare zone:  
   * wrangler.jsonc:  
```jsonc  
{  
  "route": "https://example.com/*"  
}  
```  
   * wrangler.toml:  
```toml  
route = "https://example.com/*"  
```  
Note  
Refer to the documentation on [Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) to configure a `route` properly.

Learn more about [configuring your project](https://developers.cloudflare.com/workers/wrangler/configuration/).


---

---
title: Start from Worker
description: Workers Sites require Wrangler — make sure to use the latest version.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Start from Worker

Use Workers Static Assets Instead

You should use [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) to host full-stack applications instead of Workers Sites. Workers Sites is deprecated as of Wrangler v4, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) does not support it. Do not use Workers Sites for new projects.

Workers Sites require [Wrangler ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler) — make sure to use the [latest version](https://developers.cloudflare.com/workers/wrangler/install-and-update/#update-wrangler).

If you have a pre-existing Worker project, you can use Workers Sites to serve static assets to the Worker.

## Getting started

1. Create a directory that will contain the assets in the root of your project (for example, `./public`).
2. Add configuration to your Wrangler file to point to it.  
   * wrangler.jsonc:  
```jsonc  
{  
  "site": {  
    "bucket": "./public" // Add the directory with your static assets!  
  }  
}  
```  
   * wrangler.toml:  
```toml  
[site]  
bucket = "./public"  
```
3. Install the `@cloudflare/kv-asset-handler` package in your project:  
Terminal window  
```sh  
npm i -D @cloudflare/kv-asset-handler  
```
4. Import the `getAssetFromKV()` function into your Worker entry point and use it to respond with static assets.

Module Worker

```js
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";
import manifestJSON from "__STATIC_CONTENT_MANIFEST";
const assetManifest = JSON.parse(manifestJSON);

export default {
  async fetch(request, env, ctx) {
    try {
      // Add logic to decide whether to serve an asset or run your original Worker code
      return await getAssetFromKV(
        {
          request,
          waitUntil: ctx.waitUntil.bind(ctx),
        },
        {
          ASSET_NAMESPACE: env.__STATIC_CONTENT,
          ASSET_MANIFEST: assetManifest,
        },
      );
    } catch (e) {
      let pathname = new URL(request.url).pathname;
      return new Response(`"${pathname}" not found`, {
        status: 404,
        statusText: "not found",
      });
    }
  },
};
```

Service Worker

```js
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";

addEventListener("fetch", (event) => {
  event.respondWith(handleEvent(event));
});

async function handleEvent(event) {
  try {
    // Add logic to decide whether to serve an asset or run your original Worker code
    return await getAssetFromKV(event);
  } catch (e) {
    let pathname = new URL(event.request.url).pathname;
    return new Response(`"${pathname}" not found`, {
      status: 404,
      statusText: "not found",
    });
  }
}
```

For more information on the configurable options of `getAssetFromKV()` refer to [kv-asset-handler docs ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/kv-asset-handler).
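As a sketch of those options, the second argument to `getAssetFromKV()` accepts fields such as `cacheControl` and `mapRequestToAsset` (the TTL values and rewrite rule below are illustrative assumptions, not recommendations):

```javascript
// Options object that could be passed as the second argument to getAssetFromKV().
const assetOptions = {
  cacheControl: {
    browserTTL: 60 * 60 * 24, // cache in the browser for one day
    edgeTTL: 60 * 60 * 2, // cache at Cloudflare's edge for two hours
    bypassCache: false,
  },
  // Rewrite extensionless paths such as "/about" to "/about/index.html".
  mapRequestToAsset: (request) => {
    const url = new URL(request.url);
    if (!url.pathname.includes(".")) {
      url.pathname = url.pathname.replace(/\/?$/, "/index.html");
    }
    return new Request(url.toString(), {
      method: request.method,
      headers: request.headers,
    });
  },
};
```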

5. Run `npx wrangler deploy` as you would normally with your Worker project. Wrangler will automatically upload the assets found in the configured directory.  
Terminal window  
```sh  
npx wrangler deploy  
```


---

---
title: Versions &#38; Deployments
description: Upload versions of Workers and create deployments to release new versions.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Versions & Deployments

Versions track changes to your Worker. Deployments configure how those changes are deployed to your traffic.

You can upload changes (versions) to your Worker independent of changing the version that is actively serving traffic (deployment).

![Versions and Deployments](https://developers.cloudflare.com/_astro/versions-and-deployments.Dnwtp7bX_1XrgKm.webp) 

Using versions and deployments is useful if:

* You are running critical applications on Workers and want to reduce risk when deploying new versions of your Worker using a rolling deployment strategy.
* You want to monitor for performance differences when deploying new versions of your Worker.
* You have a CI/CD pipeline configured for Workers but want to cut manual releases.

## Versions

A version is defined by the state of code as well as the state of configuration in a Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Versions track historical changes to [bundled code](https://developers.cloudflare.com/workers/wrangler/bundling/), [static assets](https://developers.cloudflare.com/workers/static-assets/) and changes to configuration like [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) and [compatibility date and compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) over time.

Versions also track metadata associated with a version, including: the version ID, the user that created the version, deploy source, and timestamp. Optionally, a version message and version tag can be configured on version upload.

Note

State changes for associated Workers [storage resources](https://developers.cloudflare.com/workers/platform/storage-options/) such as [KV](https://developers.cloudflare.com/kv/), [R2](https://developers.cloudflare.com/r2/), [Durable Objects](https://developers.cloudflare.com/durable-objects/) and [D1](https://developers.cloudflare.com/d1/) are not tracked with versions.

## Deployments

Deployments track the version(s) of your Worker that are actively serving traffic. A deployment can consist of one or two versions of a Worker.

By default, Workers supports an all-at-once deployment model where traffic is immediately shifted from one version to the newly deployed version automatically. Alternatively, you can use [gradual deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) to create a rolling deployment strategy.

You can also track metadata associated with a deployment, including: the user that created the deployment, deploy source, timestamp and the version(s) in the deployment. Optionally, you can configure a deployment message when you create a deployment.

## Use versions and deployments

### Create a new version

Review the different ways you can create versions of your Worker and deploy them.

#### Upload a new version and deploy it immediately

A new version is automatically created and deployed to 100% of traffic when:

* Changes are uploaded with the [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) command or via the Cloudflare dashboard
* Changes are deployed with the command [npx wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) via [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds)
* Changes are uploaded with the [Workers Script Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/)

#### Upload a new version to be gradually deployed or deployed at a later time

Note

Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag.

To create a new version of your Worker that is not deployed immediately, use the [wrangler versions upload](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-upload) command or create a new version via the Cloudflare dashboard using the **Save** button. You can find the **Save** option under the down arrow beside the **Deploy** button.

Versions created in this way can then be deployed all at once or gradually deployed using the [wrangler versions deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-deploy) command or via the Cloudflare dashboard under the **Deployments** tab.
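Taken together, a sketch of that flow from the command line (both commands prompt for details interactively, so no flags are shown here):

```sh
# Create a new version of the Worker without deploying it.
npx wrangler versions upload

# Then promote it, choosing the traffic split interactively.
npx wrangler versions deploy
```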

Note

When using [Wrangler](https://developers.cloudflare.com/workers/wrangler/), changes made to a Worker's triggers [routes, domains](https://developers.cloudflare.com/workers/configuration/routing/) or [cron triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) need to be applied with the command [wrangler triggers deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#triggers).

Note

New versions are not created when you make changes to [resources connected to your Worker](https://developers.cloudflare.com/workers/runtime-apis/bindings/). For example, if two Workers (Worker A and Worker B) are connected via a [service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/), changing the code of Worker B will not create a new version of Worker A. Changing the code of Worker B will only create a new version of Worker B. Changes to the service binding (such as, deleting the binding or updating the [environment](https://developers.cloudflare.com/workers/wrangler/environments/) it points to) on Worker A will also not create a new version of Worker B.

#### Directly manage Versions and Deployments

See examples of creating a Worker, Versions, and Deployments directly with the API, library SDKs, and Terraform in [Infrastructure as Code](https://developers.cloudflare.com/workers/platform/infrastructure-as-code/).
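For instance, a gradual deployment that splits traffic between two versions boils down to a small request body. The sketch below builds such a payload; the `strategy` and `versions` field names here are assumptions for illustration, so defer to the Infrastructure as Code examples above for the authoritative schema.

```python
import json

# Assumed payload shape for creating a deployment that splits traffic
# between versions; percentages across versions must sum to 100.
def deployment_payload(splits):
    """splits: list of (version_id, percentage) pairs."""
    total = sum(pct for _, pct in splits)
    if total != 100:
        raise ValueError(f"percentages must sum to 100, got {total}")
    return {
        "strategy": "percentage",
        "versions": [{"version_id": vid, "percentage": pct} for vid, pct in splits],
    }

payload = deployment_payload([
    ("dc8dcd28-271b-4367-9840-6c244f84cb40", 10),
    ("db7cd8d3-4425-4fe7-8c81-01bf963b6067", 90),
])
print(json.dumps(payload, indent=2))
```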

### View versions and deployments

#### Via Wrangler

Wrangler allows you to view the 100 most recent versions and deployments. Refer to the [versions list](https://developers.cloudflare.com/workers/wrangler/commands/general/#list-4) and [deployments](https://developers.cloudflare.com/workers/wrangler/commands/general/#list-5) documentation to view the commands.

#### Via the Cloudflare dashboard

To view your deployments in the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker > **Deployments**.

## Limits

### First upload

You must use [C3](https://developers.cloudflare.com/workers/get-started/guide/#1-create-a-new-worker-project) or run [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) the first time you create a new Workers project. Using [wrangler versions upload](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-upload) the first time you upload a Worker will fail.

### Service worker syntax

Service worker syntax is not supported for versions that are uploaded through [wrangler versions upload](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-upload). You must use ES modules format.

Refer to [Migrate from Service Workers to ES modules](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/#advantages-of-migrating) to learn how to migrate your Workers from the service worker format to the ES modules format.

### Durable Object migrations

Uploading a version with [Durable Object migrations](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/) is not supported. Use [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) if you are applying a [Durable Object migration](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/).

This will be supported in the near future.


---

---
title: Gradual deployments
description: Incrementally deploy code changes to your Workers with gradual deployments.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Gradual deployments

Gradual Deployments give you the ability to incrementally deploy new [versions](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of Workers by splitting traffic across versions.

![Gradual Deployments](https://developers.cloudflare.com/_astro/gradual-deployments.C6F9MQ6U_ZVKcdL.webp) 

Using gradual deployments, you can:

* Gradually shift traffic to a newer version of your Worker.
* Monitor error rates and exceptions across versions using [analytics and logs](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#observability) tooling.
* [Roll back](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/) to a previously stable version if you notice issues when deploying a new version.

## Use gradual deployments

The following section guides you through an example usage of gradual deployments. You will choose to use either [Wrangler](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#via-wrangler) or the Cloudflare dashboard to:

* Create a new Worker.
* Publish a new version of that Worker without deploying it.
* Create a gradual deployment between the two versions.
* Progress the deployment of the new version to 100% of traffic.

### Via Wrangler

Note

Minimum required Wrangler version: 3.40.0. Versions before 3.73.0 require you to specify a `--x-versions` flag.

#### 1. Create and deploy a new Worker

Create a new `"Hello World"` Worker using the [create-cloudflare CLI (C3)](https://developers.cloudflare.com/pages/get-started/c3/) and deploy it.

```sh
# npm
npm create cloudflare@latest -- <NAME> -- --type=hello-world
# yarn
yarn create cloudflare <NAME> -- --type=hello-world
# pnpm
pnpm create cloudflare@latest <NAME> -- --type=hello-world
```

Answer `yes` or `no` to using TypeScript. Answer `yes` to deploying your application. This is the first version of your Worker.

#### 2. Create a new version of the Worker

To create a new version of the Worker, edit the Worker code by changing the `Response` content to your desired text and upload the Worker by using the [wrangler versions upload](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-upload) command.

```sh
# npm
npx wrangler versions upload
# yarn
yarn wrangler versions upload
# pnpm
pnpm wrangler versions upload
```

This will create a new version of the Worker that is not automatically deployed.

#### 3. Create a new deployment

Use the [wrangler versions deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-deploy) command to create a new deployment that splits traffic between two versions of the Worker. Follow the interactive prompts to create a deployment with the versions uploaded in [step #1](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#1-create-and-deploy-a-new-worker) and [step #2](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#2-create-a-new-version-of-the-worker). Select your desired percentages for each version.

```sh
# npm
npx wrangler versions deploy
# yarn
yarn wrangler versions deploy
# pnpm
pnpm wrangler versions deploy
```

#### 4. Test the split deployment

Run a cURL command on your Worker to test the split deployment.

Terminal window

```sh
for i in {1..10}
do
    curl -s https://$WORKER_NAME.$SUBDOMAIN.workers.dev
done
```

You should see 10 responses. Responses will reflect the content returned by the versions in your deployment. Responses will vary depending on the percentages configured in [step #3](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#3-create-a-new-deployment).

You can also target a specific version using [version overrides](#version-overrides).
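To make the split easier to eyeball, tally the distinct response bodies across your requests and compute each version's observed share. A minimal sketch; the response strings below are placeholders, not output from a real Worker:

```python
from collections import Counter

def observed_split(bodies):
    """Return each distinct response body's share of requests as a rounded percentage."""
    counts = Counter(bodies)
    total = len(bodies)
    return {body: round(100 * n / total) for body, n in counts.items()}

# Placeholder bodies standing in for the output of each curl request:
responses = ["Hello World!"] * 9 + ["Hello from the new version!"]
print(observed_split(responses))  # → {'Hello World!': 90, 'Hello from the new version!': 10}
```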

#### 5. Set your new version to 100% deployment

Run `wrangler versions deploy` again and follow the interactive prompts. Select the version uploaded in [step 2](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#2-create-a-new-version-of-the-worker) and set it to 100% deployment.

```sh
# npm
npx wrangler versions deploy
# yarn
yarn wrangler versions deploy
# pnpm
pnpm wrangler versions deploy
```

### Via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select **Create application** > the **Hello World** template > deploy your Worker.
3. Once the Worker is deployed, open the online code editor through **Edit code** and edit the Worker code (change the `Response` content).
4. To save your changes, select the **down arrow** next to **Deploy** > **Save**. This will create a new version of your Worker.
5. Create a new deployment that splits traffic between the versions created in steps 2 and 4 by going to **Deployments** and selecting **Deploy Version**.
6. cURL your Worker to test the split deployment.

Terminal window

```sh
for i in {1..10}
do
    curl -s https://$WORKER_NAME.$SUBDOMAIN.workers.dev
done
```

You should see 10 responses. Responses will reflect the content returned by the versions in your deployment and will vary depending on the percentages configured in step 5.

## Gradual deployments with static assets

When your Worker serves [static assets](https://developers.cloudflare.com/workers/static-assets/), gradual deployments can cause asset compatibility issues where users receive HTML from one version that references assets only available in another version, leading to 404 errors.

For detailed guidance on handling static assets during gradual rollouts, including specific examples and configuration steps, refer to [Gradual rollouts](https://developers.cloudflare.com/workers/static-assets/routing/advanced/gradual-rollouts/).

## Version affinity

By default, the percentages configured when using gradual deployments operate on a per-request basis — a request has an X% probability of invoking one of the two versions of the Worker in the [deployment](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments).

You may want requests associated with a particular identifier (such as user, session, or any unique ID) to be handled by a consistent version of your Worker to prevent version skew. Version skew occurs when there are multiple versions of an application deployed that are not forwards/backwards compatible. You can configure version affinity to prevent the Worker's version from changing back and forth on a per-request basis.

You can do this by setting the `Cloudflare-Workers-Version-Key` header on the incoming request to your Worker. For example:

Terminal window

```sh
curl -s https://example.com -H 'Cloudflare-Workers-Version-Key: foo'
```

For a given [deployment](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments), all requests with a version key set to `foo` will be handled by the same version of your Worker. The specific version of your Worker that the version key `foo` corresponds to is determined by the percentages you have configured for each Worker version in your deployment.

You can set the `Cloudflare-Workers-Version-Key` header both when making an external request from the Internet to your Worker, as well as when making a subrequest from one Worker to another Worker using a [service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/).
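Conceptually, version affinity behaves like consistent hashing: for a given deployment, the key deterministically selects a version in proportion to the configured percentages. The sketch below is an illustrative model only, not Cloudflare's actual assignment algorithm.

```python
import hashlib

def version_for_key(key, versions):
    """versions: list of (version_id, percentage) pairs summing to 100."""
    # Hash the affinity key to a stable point in [0, 100) ...
    point = int(hashlib.sha256(key.encode()).hexdigest(), 16) % 100
    # ... then walk the cumulative percentage ranges to pick a version.
    cumulative = 0
    for version_id, pct in versions:
        cumulative += pct
        if point < cumulative:
            return version_id
    return versions[-1][0]

deployment = [("new-version", 10), ("old-version", 90)]
# Every request carrying the same Cloudflare-Workers-Version-Key lands on the
# same version for the lifetime of this deployment:
assert version_for_key("user-42", deployment) == version_for_key("user-42", deployment)
```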

### Setting `Cloudflare-Workers-Version-Key` using Ruleset Engine

You may want to extract a version key from certain properties of your request such as the URL, headers or cookies. You can configure a [Ruleset Engine](https://developers.cloudflare.com/ruleset-engine/) rule on your zone to do this. This allows you to specify version affinity based on these properties without having to modify the external client that makes the request.

For example, if your Worker serves video assets under the URI path `/asset/` and you want requests to each unique asset to be handled by a consistent version, you could define the following [request header transform rule](https://developers.cloudflare.com/rules/transform/request-header-modification/):

Text in **Expression Editor**:

```
starts_with(http.request.uri.path, "/asset/")
```

Selected operation under **Modify request header**: _Set dynamic_

**Header name**: `Cloudflare-Workers-Version-Key`

**Value**: `regex_replace(http.request.uri.path, "/asset/(.*)", "${1}")`
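The `regex_replace()` expression above strips the `/asset/` prefix so each asset gets its own stable version key. You can sanity-check the capture behavior locally; Python's `re.sub` is close enough to mirror it:

```python
import re

def version_key_for_path(path):
    # Mirrors regex_replace(http.request.uri.path, "/asset/(.*)", "${1}"):
    # the captured group (everything after /asset/) becomes the version key.
    return re.sub(r"/asset/(.*)", r"\1", path)

print(version_key_for_path("/asset/videos/intro.mp4"))  # → videos/intro.mp4
```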

## Version overrides

You can use version overrides to send a request to a specific version of your Worker in your gradual deployment.

To specify a version override in your request, you can set the `Cloudflare-Workers-Version-Overrides` header on the request to your Worker. For example:

Terminal window

```sh
curl -s https://example.com -H 'Cloudflare-Workers-Version-Overrides: my-worker-name="dc8dcd28-271b-4367-9840-6c244f84cb40"'
```

`Cloudflare-Workers-Version-Overrides` is a [Dictionary Structured Header ↗](https://www.rfc-editor.org/rfc/rfc8941#name-dictionaries).

The dictionary can contain multiple key-value pairs. Each key indicates the name of the Worker the override should be applied to. The value indicates the version ID that should be used and must be a [String ↗](https://www.rfc-editor.org/rfc/rfc8941#name-strings).
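For this specific shape (String values only), serializing the dictionary is straightforward. A minimal helper for illustration, not a full RFC 8941 implementation:

```python
def overrides_header(overrides):
    """Serialize {worker_name: version_id} into an RFC 8941 Dictionary of Strings."""
    return ", ".join(f'{name}="{version_id}"' for name, version_id in overrides.items())

print(overrides_header({"my-worker-name": "dc8dcd28-271b-4367-9840-6c244f84cb40"}))
# → my-worker-name="dc8dcd28-271b-4367-9840-6c244f84cb40"
```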

A version override will only be applied if the specified version is in the current deployment. The versions in the current deployment can be found using the [wrangler deployments list](https://developers.cloudflare.com/workers/wrangler/commands/general/#deployments-list) command or on the **Workers & Pages** page of the Cloudflare dashboard > select your Worker > **Deployments** > **Active Deployment**.

Verifying that the version override was applied

There are a number of reasons why a request's version override may not be applied. For example:

* The deployment containing the specified version may not have propagated yet.
* The header value may not be a valid [Dictionary ↗](https://www.rfc-editor.org/rfc/rfc8941#name-dictionaries).

In the case that a request's version override is not applied, the request will be routed according to the percentages set in the gradual deployment configuration.

To make sure that the request's version override was applied correctly, you can [observe](#observability) the version of your Worker that was invoked. You could even automate this check by using the [runtime binding](#runtime-binding) to return the version in the Worker's response.

### Example

You may want to test a new version in production before gradually deploying it to an increasing proportion of external traffic.

In this example, your deployment is initially configured to route all traffic to a single version:

| Version ID                           | Percentage |
| ------------------------------------ | ---------- |
| db7cd8d3-4425-4fe7-8c81-01bf963b6067 | 100%       |

Create a new deployment using [wrangler versions deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-deploy) and specify 0% for the new version whilst keeping the previous version at 100%.

| Version ID                           | Percentage |
| ------------------------------------ | ---------- |
| dc8dcd28-271b-4367-9840-6c244f84cb40 | 0%         |
| db7cd8d3-4425-4fe7-8c81-01bf963b6067 | 100%       |

Now test the new version with a version override before gradually progressing the new version to 100%:

Terminal window

```sh
curl -s https://example.com -H 'Cloudflare-Workers-Version-Overrides: my-worker-name="dc8dcd28-271b-4367-9840-6c244f84cb40"'
```

## Gradual deployments for Durable Objects

To provide [global uniqueness](https://developers.cloudflare.com/durable-objects/platform/known-issues/#global-uniqueness), only one version of each [Durable Object](https://developers.cloudflare.com/durable-objects/) can run at a time. This means that gradual deployments work slightly differently for Durable Objects.

When you create a new gradual deployment for a Worker with Durable Objects, each Durable Object is assigned a Worker version based on the percentages you configured in your [deployment](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments). This version will not change until you create a new deployment.

![Gradual Deployments Durable Objects](https://developers.cloudflare.com/_astro/durable-objects.D92CiuSQ_1zYrvV.webp) 

### Example

This example assumes that you have previously created 3 Durable Object instances with names "foo", "bar" and "baz".

Your Worker is currently on a version that we will call version "A" and you want to gradually deploy a new version "B" of your Worker.

Here is how the versions of your Durable Objects might change as you progress your gradual deployment:

| Deployment config              | "foo" | "bar" | "baz" |
| ------------------------------ | ----- | ----- | ----- |
| Version A: 100%                | A     | A     | A     |
| Version B: 20%  Version A: 80% | B     | A     | A     |
| Version B: 50%  Version A: 50% | B     | B     | A     |
| Version B: 100%                | B     | B     | B     |

This is only an example, so the versions assigned to your Durable Objects may be different. However, the following is guaranteed:

* For a given deployment, requests to each Durable Object will always use the same Worker version.
* When you specify each version in the same order as the previous deployment and increase the percentage of a version, Durable Objects which were previously assigned that version will not be assigned a different version. In this example, Durable Object "foo" would never revert from version "B" to version "A".
* The Durable Object will only be [reset](https://developers.cloudflare.com/durable-objects/observability/troubleshooting/#durable-object-reset-because-its-code-was-updated) when it is assigned a different version, so each Durable Object will only be reset once in this example.
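A toy model of these guarantees (not Cloudflare's actual assignment algorithm): if each Durable Object name maps to a fixed point in [0, 100) and version B owns the range [0, B%), then growing B's share only ever moves objects from A to B, never back.

```python
import hashlib

def assigned_version(object_name, b_percentage):
    # Each Durable Object name hashes to a fixed point; the point never moves,
    # so enlarging version B's range can only convert A-objects into B-objects.
    point = int(hashlib.sha256(object_name.encode()).hexdigest(), 16) % 100
    return "B" if point < b_percentage else "A"

for pct in (0, 20, 50, 100):
    print(pct, {name: assigned_version(name, pct) for name in ("foo", "bar", "baz")})
```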

Note

Typically, a Worker bundle will define both the Durable Object class and a Worker that interacts with it. In this case, you cannot deploy changes to your Durable Object and its Worker independently.

You should ensure that API changes between your Durable Object and its Worker are [forwards and backwards compatible](https://developers.cloudflare.com/durable-objects/platform/known-issues/#code-updates) whether you are using gradual deployments or not. However, using gradual deployments makes it even more likely that different versions of your Durable Object and its Worker will interact with each other.

### Migrations

Versions of Worker bundles containing new Durable Object migrations cannot be uploaded. This is because Durable Object migrations are atomic operations. Once a migration is deployed, rollbacks cannot take place to any version prior to the one that included the migration.

Durable Object migrations can be deployed with the following command:

```sh
# npm
npx wrangler deploy
# yarn
yarn wrangler deploy
# pnpm
pnpm wrangler deploy
```

To limit the blast radius of Durable Object migration deployments, migrations should be deployed independently of other code changes.

To understand why Durable Object migrations are atomic operations, consider the hypothetical example of gradually deploying a delete migration. If a delete migration were applied to 50% of Durable Object instances, then Workers requesting those Durable Object instances would fail because they would have been deleted.

To do this without producing errors, a version of the Worker which does not depend on any Durable Object instances would have to have already been rolled out. Then, you can deploy a delete migration without affecting any traffic and there is no reason to do so gradually.

## Observability

When using gradual deployments, you may want to attribute Workers invocations to a specific version in order to get visibility into the impact of deploying new versions.

### Logpush

A new `ScriptVersion` object is available in [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/). `ScriptVersion` can only be added through the Logpush API right now. Sample API call:

Terminal window

```sh
curl -X POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logpush/jobs' \
  -H 'Authorization: Bearer <TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "workers-logpush",
    "output_options": {
      "field_names": ["Event", "EventTimestampMs", "Outcome", "Logs", "ScriptName", "ScriptVersion"]
    },
    "destination_conf": "<DESTINATION_URL>",
    "dataset": "workers_trace_events",
    "enabled": true
  }' | jq .
```

`ScriptVersion` is an object with the following structure:

```
scriptVersion: {
    id: "<UUID>",
    message: "<MESSAGE>",
    tag: "<TAG>"
}
```
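With `ScriptVersion` attached to each event, you can attribute outcomes to versions downstream. A sketch of computing a per-version error rate; the event dicts here are hand-made stand-ins for delivered Logpush records:

```python
# Hand-made stand-ins for Logpush workers_trace_events records:
events = [
    {"Outcome": "ok", "ScriptVersion": {"id": "v1-uuid"}},
    {"Outcome": "exception", "ScriptVersion": {"id": "v2-uuid"}},
    {"Outcome": "ok", "ScriptVersion": {"id": "v2-uuid"}},
]

# Group each event's outcome under its Worker version ID.
outcomes_by_version = {}
for event in events:
    version_id = event["ScriptVersion"]["id"]
    outcomes_by_version.setdefault(version_id, []).append(event["Outcome"])

# Error rate per version: share of events whose outcome was not "ok".
error_rates = {
    vid: sum(o != "ok" for o in outcomes) / len(outcomes)
    for vid, outcomes in outcomes_by_version.items()
}
print(error_rates)  # → {'v1-uuid': 0.0, 'v2-uuid': 0.5}
```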

### Runtime binding

Use the [Version metadata binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/) to access the version ID or version tag in your Worker.

## Limits

### Deployments limit

You can only create a new deployment with the last 100 uploaded versions of your Worker.


---

---
title: Rollbacks
description: Revert to an older version of your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Rollbacks

You can roll back to a previously deployed [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of your Worker using [Wrangler](https://developers.cloudflare.com/workers/wrangler/commands/general/#rollback) or the Cloudflare dashboard. Rolling back to a previous version of your Worker will immediately create a new [deployment](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments) with the version specified and become the active deployment across all your deployed routes and domains.

You can roll back from any deployment, including:

* A single-version deployment (rolling back replaces the current version with the selected version).
* A [split deployment](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) with two versions (rolling back replaces both versions with the selected version at 100% traffic).

## Via Wrangler

To roll back to a specified version of your Worker via Wrangler, use the [wrangler rollback](https://developers.cloudflare.com/workers/wrangler/commands/general/#rollback) command.

## Via the Cloudflare Dashboard

To roll back to a specified version of your Worker via the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker > **Deployments**.
3. Select the three dot icon on the right of the version you would like to roll back to and select **Rollback**.

## Rolling back from a split deployment

If you are using a [gradual deployment](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) with two versions splitting traffic, rolling back will:

1. Replace the split deployment with a single-version deployment.
2. Route 100% of traffic to the version you selected for rollback.

This effectively promotes one version to handle all traffic, which is useful if you notice issues with one of the versions in your split deployment and want to revert to a stable version immediately.

To roll back from a split deployment:

1. Identify which version in your split deployment is stable and performing correctly.
2. Use the [rollback procedure](#via-wrangler) or [dashboard rollback](#via-the-cloudflare-dashboard) to roll back to that version.
3. The split deployment will be replaced with the selected version at 100% traffic.

Warning

**[Resources connected to your Worker](https://developers.cloudflare.com/workers/runtime-apis/bindings/) will not be changed during a rollback.**

Errors could occur if the structure of data has changed between the version in the active deployment and the version you roll back to.

## Limits

### Rollbacks limit

You can only roll back to the 100 most recently published versions.

Note

When using Wrangler in interactive mode, you can select from up to 100 recent versions. To roll back to a specific version, you can also specify the version ID directly on the command line. Refer to the [wrangler rollback](https://developers.cloudflare.com/workers/wrangler/commands/general/#rollback) documentation for details on specifying version IDs.

### Bindings

You cannot roll back to a previous version of your Worker if the [Cloudflare Developer Platform resources](https://developers.cloudflare.com/workers/runtime-apis/bindings/) (such as [KV](https://developers.cloudflare.com/kv/) and [D1](https://developers.cloudflare.com/d1/)) have been deleted or modified between the version selected to roll back to and the version in the active deployment. Specifically, rollbacks will not be allowed if:

* A [Durable Object migration](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/) has occurred between the version in the active deployment and the version selected to roll back to.
* The target deployment has a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to an R2 bucket, KV namespace, or queue that no longer exists.


---

---
title: Page Rules
description: Review the interaction between various Page Rules and Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Page Rules

Page Rules trigger one or more actions whenever a request matches one of the URL patterns you define. Refer to [Page Rules](https://developers.cloudflare.com/rules/page-rules/) to learn more about configuring Page Rules.

## Page Rules with Workers

Cloudflare acts as a [reverse proxy ↗](https://www.cloudflare.com/learning/what-is-cloudflare/) to provide services, like Page Rules, to Internet properties. Your application's traffic will pass through the Cloudflare data center closest to the visitor. There are hundreds of these around the world, each of which is capable of running services like Workers and Page Rules. If your application is built on Workers and/or Pages, the [Cloudflare global network ↗](https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/) acts as your origin server and responds to requests directly.

When using Page Rules with Workers, the following workflow is applied.

1. Request arrives at Cloudflare data center.
2. Cloudflare decides if this request is a Worker route. Because this is a Worker route, Cloudflare evaluates and disables a number of features, including some that would be set by Page Rules.
3. Page Rules run as part of normal request processing with some features now disabled.
4. Worker executes.
5. Worker makes a same-zone or other-zone subrequest. Because this is a Worker route, Cloudflare disables a number of features, including some that would be set by Page Rules.

Page Rules are evaluated both at the client-to-Worker request stage (step 2) and the Worker subrequest stage (step 5).

If you are experiencing Page Rule errors when running Workers, contact your Cloudflare account team or [Cloudflare Support](https://developers.cloudflare.com/support/contacting-cloudflare-support/).

## Affected Page Rules

The following Page Rules may not work as expected when an incoming request is matched to a Worker route:

* Always Online
* [Always Use HTTPS](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#always-use-https)
* [Automatic HTTPS Rewrites](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#automatic-https-rewrites)
* [Browser Cache TTL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#browser-cache-ttl)
* [Browser Integrity Check](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#browser-integrity-check)
* [Cache Deception Armor](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#cache-deception-armor)
* [Cache Level](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#cache-level)
* Disable Apps
* [Disable Zaraz](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#disable-zaraz)
* [Edge Cache TTL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#edge-cache-ttl)
* [Email Obfuscation](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#email-obfuscation)
* [Forwarding URL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#forwarding-url)
* Host Header Override
* [IP Geolocation Header](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#ip-geolocation-header)
* [Origin Cache Control](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#origin-cache-control)
* [Rocket Loader](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#rocket-loader)
* [Security Level](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#security-level)
* [SSL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#ssl)

This is because the default setting of these Page Rules will be disabled when Cloudflare recognizes that the request is headed to a Worker.

Testing

Due to ongoing changes to the Workers runtime, detailed documentation on how these rules are affected is updated following testing.

To learn what these Page Rules do, refer to [Page Rules](https://developers.cloudflare.com/rules/page-rules/).

Same zone versus other zone

A same zone subrequest is a request the Worker makes to an orange-clouded hostname in the same zone the Worker runs on. Depending on your DNS configuration, any request that falls outside that definition may be considered an other zone request by the Cloudflare network.

### Always Use HTTPS

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Ignored   |
| Worker | Other Zone | Rule Ignored   |

### Automatic HTTPS Rewrites

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Browser Cache TTL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Browser Integrity Check

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Ignored   |
| Worker | Other Zone | Rule Ignored   |

### Cache Deception Armor

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Cache Level

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Disable Zaraz

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Edge Cache TTL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Email Obfuscation

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Forwarding URL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### IP Geolocation Header

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Origin Cache Control

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Rocket Loader

| Source | Target     | Behavior     |
| ------ | ---------- | ------------ |
| Client | Worker     | Rule Ignored |
| Worker | Same Zone  | Rule Ignored |
| Worker | Other Zone | Rule Ignored |

### Security Level

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Ignored   |
| Worker | Other Zone | Rule Ignored   |

### SSL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |


---

---
title: CI/CD
description: Set up continuous integration and continuous deployment for your Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# CI/CD

You can set up continuous integration and continuous deployment (CI/CD) for your Workers by using either the integrated build system, [Workers Builds](#workers-builds), or using [external providers](#external-cicd) to optimize your development workflow.

## Why use CI/CD?

Using a CI/CD pipeline to deploy your Workers is a best practice because it:

* Automates the build and deployment process, removing the need for manual `wrangler deploy` commands.
* Ensures consistent builds and deployments across your team by using the same source control management (SCM) system.
* Reduces variability and errors by deploying in a uniform environment.
* Simplifies managing access to production credentials.

## Which CI/CD should I use?

Choose [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds) if you want a fully integrated solution within Cloudflare's ecosystem that requires minimal setup and configuration for GitHub or GitLab users.

We recommend using [external CI/CD providers](https://developers.cloudflare.com/workers/ci-cd/external-cicd) if:

* You have a self-hosted instance of GitHub or GitLab, which Workers Builds' [Git integration](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/) does not currently support
* You are using a Git provider that is not GitHub or GitLab

## Workers Builds

[Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds) is Cloudflare's native CI/CD system that allows you to integrate with GitHub or GitLab to automatically deploy changes with each new push to a selected branch (e.g. `main`).

![Workers Builds Workflow Diagram](https://developers.cloudflare.com/_astro/workers-builds-workflow.Bmy3qIVc_Z1wM0ch.webp) 

Ready to streamline your Workers deployments? Get started with [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/#get-started).

## External CI/CD

You can also choose to set up your CI/CD pipeline with an external provider.

* [GitHub Actions](https://developers.cloudflare.com/workers/ci-cd/external-cicd/github-actions/)
* [GitLab CI/CD](https://developers.cloudflare.com/workers/ci-cd/external-cicd/gitlab-cicd/)


---

---
title: Builds
description: Use Workers Builds to integrate with Git and automatically build and deploy your Worker when pushing a change
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Builds

The Cloudflare [Git integration](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/) lets you connect a new or existing Worker to a GitHub or GitLab repository, enabling automated builds and deployments for your Worker on push.

## Get started

### Connect a new Worker

To create a new Worker and connect it to a GitHub or GitLab repository:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select **Create application**.
3. Select **Get started** next to **Import a repository**.
4. Under **Import a repository**, select a **Git account**.
5. Select the repository you want to import from the list. You can also use the search bar to narrow the results.
6. Configure your project and select **Save and Deploy**.
7. Preview your Worker at its provided [workers.dev](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) subdomain.

### Connect an existing Worker

To connect an existing Worker to a GitHub or GitLab repository:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select the Worker you want to connect to a repository.
3. Select **Settings** and then **Builds**.
4. Select **Connect** and follow the prompts to connect the repository to your Worker and configure your [build settings](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/).
5. Push a commit to your Git repository to trigger a build and deploy to your Worker.

Warning

When connecting a repository to a Workers project, the Worker name in the Cloudflare dashboard must match the `name` in the Wrangler configuration file in the specified root directory, or the build will fail. This ensures that the Worker deployed from the repository is consistent with the Worker registered in the Cloudflare dashboard. For details, see [Workers name requirement](https://developers.cloudflare.com/workers/ci-cd/builds/troubleshoot/#workers-name-requirement).

## Automatic project configuration

When you connect a repository that does not have a Wrangler configuration file, [autoconfig](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/) runs to detect your framework and create a [pull request](https://developers.cloudflare.com/workers/ci-cd/builds/automatic-prs/) to configure your project for Cloudflare Workers.

1. Autoconfig detects your framework and generates the necessary configuration
2. A pull request is created in your repository with the necessary configuration changes
3. A preview deployment is generated so you can test before merging
4. Once you merge the PR, your project is ready for deployment

For details about supported frameworks and what files are created, refer to [Deploy an existing project](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/). For details about the PRs created, refer to [Automatic pull requests](https://developers.cloudflare.com/workers/ci-cd/builds/automatic-prs/).

## View build and preview URL

You can monitor a build's status and its build logs by navigating to **View build history** at the bottom of the **Deployments** tab of your Worker.

If the build is successful, you can view the build details by selecting **View build** in the associated new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/) created under Version History. There you will also find the [preview URL](https://developers.cloudflare.com/workers/configuration/previews/) generated by the version under Version ID.

Builds, versions, deployments

If a build succeeds, it is uploaded as a version. If the build is configured to deploy (for example, with `wrangler deploy` set as the deploy command), the uploaded version will be automatically promoted to the Active Deployment.

## Disconnecting builds

To disconnect a Worker from a GitHub or GitLab repository:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select the Worker you want to disconnect from a repository.
3. Select **Settings** and then **Builds**.
4. Select **Disconnect**.

If you want to switch to a different repository for your Worker, you must first disable builds, then reconnect to select the new repository.

To disable automatic deployments while still allowing builds to run automatically and save as [versions](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/) (without promoting them to an active deployment), update your deploy command to: `npx wrangler versions upload`.


---

---
title: Advanced setups
description: Learn how to use Workers Builds with more advanced setups
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Advanced setups

## Monorepos

A monorepo is a single repository that contains multiple applications. This setup can be useful for a few reasons:

* **Simplified dependency management**: Manage dependencies across all your workers and shared packages from a single place using tools like [pnpm workspaces ↗](https://pnpm.io/workspaces) and [syncpack ↗](https://www.npmjs.com/package/syncpack).
* **Code sharing and reuse**: Easily create and share common logic, types, and utilities between workers by creating shared packages.
* **Atomic commits**: Changes affecting multiple workers or shared libraries can be committed together, making the history easier to understand and reducing the risk of inconsistencies.
* **Consistent tooling**: Apply the same build, test, linting, and formatting configurations (e.g., via [Turborepo ↗](https://turborepo.com) for task orchestration and shared configs in `packages/`) across all projects, ensuring consistent tooling and code quality across Workers.
* **Easier refactoring**: Refactoring code that spans multiple Workers or shared packages is significantly easier within a single repository.

#### Example Workers monorepos

* [cloudflare/mcp-server-cloudflare ↗](https://github.com/cloudflare/mcp-server-cloudflare)
* [jahands/workers-monorepo-template ↗](https://github.com/jahands/workers-monorepo-template)
* [cloudflare/templates ↗](https://github.com/cloudflare/templates)
* [cloudflare/workers-sdk ↗](https://github.com/cloudflare/workers-sdk)

### Getting Started

To set up a monorepo workflow:

1. Find the Workers associated with your project in the [Workers & Pages Dashboard ↗](https://dash.cloudflare.com).
2. Connect your monorepo to each Worker in the repository.
3. Set the root directory for each Worker to specify the location of its `wrangler.jsonc` and where build and deploy commands should run.
4. Optionally, configure unique build and deploy commands for each Worker.
5. Optionally, configure [build watch paths](https://developers.cloudflare.com/workers/ci-cd/builds/build-watch-paths/) for each Worker to monitor specific paths for changes.

When a new commit is made to the monorepo, a new build and deploy will trigger for each Worker if the change is within each of its included watch paths. You can also check on the status of each build associated with your repository within GitHub with [check runs](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#check-run) or within GitLab with [commit statuses](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration/#commit-status).

### Example

In the example `ecommerce-monorepo`, a Workers project should be created for `product-service`, `order-service`, and `notification-service`.

A Git connection to `ecommerce-monorepo` should be added in all of the Workers projects. If you are using a monorepo tool, such as [Turborepo ↗](https://turbo.build/), you can configure a different deploy command for each Worker, for example, `turbo deploy -F product-service`.

Set the root directory of each Worker to where its Wrangler configuration file is located. For example, for `product-service`, the root directory should be `/workers/product-service/`. Optionally, you can add [build watch paths](https://developers.cloudflare.com/workers/ci-cd/builds/build-watch-paths/) to optimize your builds.

When a new commit is made to `ecommerce-monorepo`, a build and deploy will be triggered for each of the Workers if the change is within its included watch paths using the configured commands for that Worker.

```
ecommerce-monorepo/
├── workers/
│   ├── product-service/
│   │   ├── src/
│   │   │   └── …
│   │   └── wrangler.jsonc
│   ├── order-service/
│   │   ├── src/
│   │   │   └── …
│   │   └── wrangler.jsonc
│   └── notification-service/
│       ├── src/
│       │   └── …
│       └── wrangler.jsonc
├── packages/
│   └── schema/
│       └── …
└── README.md
```

## Wrangler Environments

You can use [Wrangler Environments](https://developers.cloudflare.com/workers/wrangler/environments/) with Workers Builds by completing the following steps:

1. [Deploy via Wrangler](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) to create the Workers for your environments on the Dashboard, if you do not already have them.
2. Find the Workers for your environments. They are typically named `<name of Worker>-<environment name>`.
3. Connect your repository to each of the Workers for your environment.
4. In each of the Workers, edit your Wrangler commands to include the flag `--env <environment name>` in the build configurations for both the deploy command and the non-production branch deploy command ([if applicable](https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds)).

When a new commit is detected in the repository, a new build/deploy will trigger for each associated Worker.

### Example

Imagine you have a Worker named `my-worker` with two environments, `staging` and `production`, defined in your `wrangler.jsonc`. If you have not already, you can deploy `my-worker` for each environment using the commands `wrangler deploy --env staging` and `wrangler deploy --env production`.

In your Cloudflare Dashboard, you should find the two Workers `my-worker-staging` and `my-worker-production`. Then, connect the Git repository for the Worker, `my-worker`, to both of the environment Workers. In the build configurations of each environment Worker, edit the deploy commands to be `npx wrangler deploy --env staging` and `npx wrangler deploy --env production` and the non-production branch deploy commands to be `npx wrangler versions upload --env staging` and `npx wrangler versions upload --env production` respectively.
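For reference, the two environments in this example would be defined in `wrangler.jsonc` roughly as follows. This is a minimal sketch: the `main` path, `compatibility_date`, and the `ENVIRONMENT` variable are illustrative, not required by Workers Builds.

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2024-01-01",
  "env": {
    // Deployed with `wrangler deploy --env staging` as my-worker-staging
    "staging": {
      "vars": { "ENVIRONMENT": "staging" }
    },
    // Deployed with `wrangler deploy --env production` as my-worker-production
    "production": {
      "vars": { "ENVIRONMENT": "production" }
    }
  }
}
```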


---

---
title: Builds API reference
description: Learn how to programmatically trigger builds, manage triggers, and monitor your Workers Builds using the API.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Builds API reference

This guide shows you how to use the [Workers Builds REST API](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/) to programmatically trigger builds, manage triggers, and monitor build status. The examples use `curl` commands that you can run directly in your terminal or adapt to your preferred programming language. Some examples pipe output through [jq ↗](https://jqlang.org/) to filter JSON responses — install it if you do not have it already.

## Before you start

### 1\. Create an API token with the correct permissions

To use the Builds API, you need an API token to authenticate your requests. The Builds API requires a **user-scoped** API token; account-scoped tokens are not supported and will return "Invalid token" errors.

Create your token at [dash.cloudflare.com/profile/api-tokens ↗](https://dash.cloudflare.com/profile/api-tokens) with the following permissions:

| Permission                   | Access level | Why you need it                                                                                                                                                                       |
| ---------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Workers Builds Configuration | Edit         | Trigger builds, manage triggers, configure environment variables                                                                                                                      |
| Workers Scripts              | Read         | Only needed for [one endpoint](#step-1-get-your-worker-tag) to retrieve your Worker's tag (documented as [external\_script\_id](#2-worker-tags-documented-as-external%5Fscript%5Fid)) |

Note 

This API token is different from a **build token**. Build tokens are used by the build system to deploy your Worker. By default, Cloudflare automatically generates a build token for your account, but you can also [create your own](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#api-token-optional). The API token described above is what you use to call the Builds API itself.

### 2\. Worker tags (documented as external\_script\_id)

The Builds API identifies Workers by their **tag**, an immutable UUID assigned by Cloudflare. In API responses and parameters, this value appears as `external_script_id`.

| Identifier                        | Example                          | Where it comes from                   |
| --------------------------------- | -------------------------------- | ------------------------------------- |
| Worker name (id)                  | my-worker                        | The name you gave your Worker         |
| Worker tag (external\_script\_id) | 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d | Immutable UUID assigned by Cloudflare |

Every Builds API endpoint that references a Worker requires the **tag**, not the name.

### 3\. What is a trigger?

A **trigger** is a configuration that defines how your Worker gets built and deployed. It specifies the build command, deploy command, environment variables, and which branches should trigger builds. Each Worker has up to **two triggers**: one for production (runs on your [production branch](https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#change-production-branch)) and one for preview (runs on all other branches). To set up triggers, refer to [Set up Workers Builds from scratch](#set-up-workers-builds-from-scratch).

**Trigger fields:**

| Field                   | Type    | Description                                                               |
| ----------------------- | ------- | ------------------------------------------------------------------------- |
| trigger\_name           | string  | Display name for the trigger                                              |
| build\_command          | string  | Command to build your project (for example, npm run build)                |
| deploy\_command         | string  | Command to deploy your Worker (for example, npx wrangler deploy)          |
| root\_directory         | string  | Path to your project root                                                 |
| branch\_includes        | array   | Branch patterns that trigger builds (for example, \["main"\] or \["\*"\]) |
| branch\_excludes        | array   | Branch patterns to exclude                                                |
| path\_includes          | array   | File path patterns that trigger builds                                    |
| path\_excludes          | array   | File path patterns to ignore                                              |
| build\_caching\_enabled | boolean | Enable or disable build caching                                           |
| environment\_variables  | object  | Build-time variables specific to this trigger                             |

## Workflow overview

Most Builds API operations follow this pattern: first get your Worker's tag, then get the trigger UUID, then perform build operations.

![Workflow overview: get Worker tag, then get trigger UUID, then perform build operations.](https://developers.cloudflare.com/_astro/workflow-overview.D-gY5w1T_2n0lJ2.svg) 

| Step | Action           | Endpoint                                    |
| ---- | ---------------- | ------------------------------------------- |
| 1    | Get Worker tag   | GET /workers/scripts                        |
| 2    | Get trigger UUID | GET /builds/workers/:worker\_tag/triggers   |
| 3a   | Trigger a build  | POST /builds/triggers/:trigger\_uuid/builds |
| 3b   | List builds      | GET /builds/workers/:worker\_tag/builds     |
| 3c   | Get build logs   | GET /builds/builds/:build\_uuid/logs        |
| 3d   | Cancel a build   | PUT /builds/builds/:build\_uuid/cancel      |

## Step 1: Get your Worker tag

Call the [Workers Scripts API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/list/) to list all your Workers and find the `tag` for the Worker you want to work with:

Terminal window

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts" \
  --header "Authorization: Bearer <API_TOKEN>" \
  | jq '.result[] | {name: .id, tag: .tag}'
```

Example output:

```
{
  "name": "my-worker",
  "tag": "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d"
}
{
  "name": "another-worker",
  "tag": "8a1b2c3d4e5f67890abcdef123456789"
}
```

Save the `tag` value for your Worker. You will use it in all subsequent API calls.
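In a script, you can capture the tag for a known Worker name in a shell variable. A minimal sketch, assuming `jq` is installed; the live `curl` call is stubbed here with the sample output shown above:

```sh
# Stand-in for the GET /workers/scripts response (matches the sample output above);
# in a real script, assign this from the curl command instead
response='{"result":[{"id":"my-worker","tag":"1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d"},{"id":"another-worker","tag":"8a1b2c3d4e5f67890abcdef123456789"}]}'

# Select the tag for the Worker named "my-worker"
worker_tag=$(printf '%s' "$response" | jq -r '.result[] | select(.id == "my-worker") | .tag')
echo "$worker_tag"
```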

## Step 2: Get your trigger UUID

Use the [GET /builds/workers/{tag}/triggers](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/triggers/methods/list/) endpoint to list triggers for your Worker:

Terminal window

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/workers/{worker_tag}/triggers" \
  --header "Authorization: Bearer <API_TOKEN>" \
  | jq '.result[] | {trigger_uuid, trigger_name, branch_includes, branch_excludes}'
```

Example output:

```
{
  "trigger_uuid": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "trigger_name": "Deploy production",
  "branch_includes": ["main"],
  "branch_excludes": []
}
{
  "trigger_uuid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "trigger_name": "Deploy non-production branches",
  "branch_includes": ["*"],
  "branch_excludes": ["main"]
}
```

Save the `trigger_uuid` for the trigger you want to work with. Remember, you will have at most two triggers: one for your production branch (for example, `main`) that deploys to your live Worker, and optionally one for all other branches that creates preview deployments.
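The production trigger can likewise be selected in a script. A sketch under the same assumptions (`jq` installed, the live call stubbed with the sample output above); here the production trigger is identified by `branch_includes` being exactly `["main"]`:

```sh
# Stand-in for the triggers list response (matches the sample output above)
triggers='{"result":[{"trigger_uuid":"f47ac10b-58cc-4372-a567-0e02b2c3d479","trigger_name":"Deploy production","branch_includes":["main"],"branch_excludes":[]},{"trigger_uuid":"a1b2c3d4-e5f6-7890-abcd-ef1234567890","trigger_name":"Deploy non-production branches","branch_includes":["*"],"branch_excludes":["main"]}]}'

# Pick the trigger whose branch_includes is exactly ["main"]
trigger_uuid=$(printf '%s' "$triggers" | jq -r '.result[] | select(.branch_includes == ["main"]) | .trigger_uuid')
echo "$trigger_uuid"
```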

## Step 3: Work with builds

Now that you have the Worker tag and trigger UUID, you can trigger builds, list build history, and get logs.

### Trigger a manual build

Use the [POST /builds/triggers/{uuid}/builds](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/create/) endpoint with the `trigger_uuid` from [Step 2](#step-2-get-your-trigger-uuid).

Terminal window

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{trigger_uuid}/builds" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"branch": "main"}'
```

You must specify `branch`, `commit_hash`, or both:

| Field        | Description                                                                                        |
| ------------ | -------------------------------------------------------------------------------------------------- |
| branch       | Git branch name to build (for example, main)                                                       |
| commit\_hash | Specific commit SHA to build. If provided without branch, builds the commit on its current branch. |

The response includes the `build_uuid` which you can use to monitor the build.
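When scripting the whole flow, you can pull `build_uuid` out of the POST response with `jq`. The exact envelope is an assumption here (the standard Cloudflare API wrapper with the new build under `.result`), and the UUID is illustrative:

```sh
# Hypothetical POST response; shape and build_uuid value are illustrative only
post_response='{"success":true,"result":{"build_uuid":"0b1c2d3e-4f5a-6789-abcd-ef0123456789"}}'

# Extract the build_uuid for use in the logs/cancel endpoints below
build_uuid=$(printf '%s' "$post_response" | jq -r '.result.build_uuid')
echo "$build_uuid"
```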

### List builds for a Worker

Use the [GET /builds/workers/{tag}/builds](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/list/) endpoint with the `worker_tag` from [Step 1](#step-1-get-your-worker-tag).

Terminal window

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/workers/{worker_tag}/builds" \
  --header "Authorization: Bearer <API_TOKEN>" \
  | jq '.result[] | {build_uuid, status, branch, created_at}'
```

The response includes `build_uuid` for each build, which you need for getting logs or canceling builds.

### Get build logs

Use the [GET /builds/builds/{uuid}/logs](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/get%5Flogs/) endpoint. Get the `build_uuid` from:

* [List builds](#list-builds-for-a-worker)
* The response when [triggering a build](#trigger-a-manual-build)
* [Get latest builds by script IDs](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/get%5Flatest%5Fby%5Fscript%5Fids/)
* The last segment of the URL on your build details page in the dashboard

Terminal window

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/builds/{build_uuid}/logs" \
  --header "Authorization: Bearer <API_TOKEN>"
```

### Cancel a running build

Use the [PUT /builds/builds/{uuid}/cancel](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/cancel/) endpoint. Get the `build_uuid` from:

* [List builds](#list-builds-for-a-worker)
* The response when [triggering a build](#trigger-a-manual-build)
* [Get latest builds by script IDs](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/get%5Flatest%5Fby%5Fscript%5Fids/)
* The last segment of the URL on your build details page in the dashboard

Terminal window

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/builds/{build_uuid}/cancel" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --request PUT
```

## Update trigger configuration

Use the [PATCH /builds/triggers/{uuid}](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/triggers/methods/update/) endpoint with the `trigger_uuid` from [Step 2](#step-2-get-your-trigger-uuid). You can update any of the trigger fields described in [What is a trigger?](#3-what-is-a-trigger).

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{trigger_uuid}" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request PATCH \
  --data '{
    "build_command": "npm run build:prod",
    "deploy_command": "npx wrangler deploy"
  }'
```

## Manage build environment variables

Environment variables are set per trigger, meaning you can have different values for production and preview builds. For example, you might set `NODE_ENV=production` on your production trigger and `NODE_ENV=development` on your preview trigger. Refer to the [environment variables API reference](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/environment%5Fvariables/) for full endpoint details.

Note 

These are **build-time** environment variables, available only during the build process. For runtime environment variables, refer to [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/).

### List environment variables

Use the `trigger_uuid` from [Step 2](#step-2-get-your-trigger-uuid).

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{trigger_uuid}/environment_variables" \
  --header "Authorization: Bearer <API_TOKEN>"
```

### Set environment variables

You can set different variables for each trigger. For example, to set production environment variables:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{production_trigger_uuid}/environment_variables" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request PATCH \
  --data '{
    "variables": [
      {"key": "NODE_ENV", "value": "production", "type": "text"},
      {"key": "API_KEY", "value": "prod-secret-key", "type": "secret"}
    ]
  }'
```

And different values for preview builds:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{preview_trigger_uuid}/environment_variables" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request PATCH \
  --data '{
    "variables": [
      {"key": "NODE_ENV", "value": "development", "type": "text"},
      {"key": "API_KEY", "value": "dev-secret-key", "type": "secret"}
    ]
  }'
```

Use `type: "text"` for plain values and `type: "secret"` for sensitive values that should be masked in logs.

### Delete an environment variable

Use the `trigger_uuid` from [Step 2](#step-2-get-your-trigger-uuid). The `variable_key` is the key name you set (for example, `NODE_ENV`).

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{trigger_uuid}/environment_variables/{variable_key}" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --request DELETE
```

## Purge build cache

Use the [POST /builds/triggers/{uuid}/purge\_build\_cache](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/triggers/methods/purge%5Fbuild%5Fcache/) endpoint with the `trigger_uuid` from [Step 2](#step-2-get-your-trigger-uuid). This clears cached dependencies and build artifacts for that trigger.

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{trigger_uuid}/purge_build_cache" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --request POST
```

## Examples

The following examples show common use cases for the Builds API.

### Set up Workers Builds from scratch

This example walks through the complete process of connecting a GitHub repository to a Worker and setting up automated builds using only the API.

![Setup flow: get GitHub IDs, create repo connection, get Worker tag, create triggers, set env variables, trigger first build.](https://developers.cloudflare.com/_astro/setup-from-scratch.BUpowztp_1X7alF.svg) 

| Step | Action                      | Endpoint                                                      |
| ---- | --------------------------- | ------------------------------------------------------------- |
| 1    | Get GitHub account/repo IDs | GET api.github.com/users/... and GET api.github.com/repos/... |
| 2    | Create repo connection      | PUT /builds/repos/connections                                 |
| 3    | Get Worker tag              | GET /workers/scripts                                          |
| 4a   | Create production trigger   | POST /builds/triggers                                         |
| 4b   | Create preview trigger      | POST /builds/triggers                                         |
| 5    | Set environment variables   | PATCH /builds/triggers/:trigger\_uuid/environment\_variables  |
| 6    | Trigger first build         | POST /builds/triggers/:trigger\_uuid/builds                   |

#### Prerequisites

Before using the API, you must first install the Cloudflare GitHub App through the dashboard:

1. Go to **Workers & Pages** in the [Cloudflare dashboard ↗](https://dash.cloudflare.com).
2. Select any Worker and go to **Settings** \> **Builds** \> **Connect**.
3. Select **GitHub** and authorize the Cloudflare GitHub App for your account or organization.

This one-time setup creates the connection between your GitHub account and Cloudflare. Once complete, you can use the API for everything else.

#### Step 1: Get your GitHub account information

After installing the GitHub App, you need your GitHub account ID and repository ID. You can find these from an existing trigger or from the GitHub API.

From GitHub's API:

```sh
# Get your GitHub user/org ID
curl -s "https://api.github.com/users/<GITHUB_USERNAME>" | jq '.id'

# Get a repository ID
curl -s "https://api.github.com/repos/<GITHUB_USERNAME>/<REPO_NAME>" | jq '.id'
```

#### Step 2: Create a repository connection

Create a connection between your GitHub repository and Cloudflare:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/repos/connections" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request PUT \
  --data '{
    "provider_type": "github",
    "provider_account_id": "<GITHUB_USER_ID>",
    "provider_account_name": "<GITHUB_USERNAME>",
    "repo_id": "<GITHUB_REPO_ID>",
    "repo_name": "<REPO_NAME>"
  }'
```

Save the `repo_connection_uuid` from the response.

#### Step 3: Get your Worker tag

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts" \
  --header "Authorization: Bearer <API_TOKEN>" \
  | jq '.result[] | {name: .id, tag: .tag}'
```

#### Step 4: Create a production trigger

Create a trigger that deploys when you push to `main`:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{
    "external_script_id": "<WORKER_TAG>",
    "repo_connection_uuid": "<REPO_CONNECTION_UUID>",
    "trigger_name": "Deploy production",
    "build_command": "npm run build",
    "deploy_command": "npx wrangler deploy",
    "root_directory": "/",
    "branch_includes": ["main"],
    "branch_excludes": [],
    "path_includes": ["*"],
    "path_excludes": []
  }'
```

#### Step 5: Create a preview trigger (optional)

Create a second trigger for preview deployments on all other branches:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{
    "external_script_id": "<WORKER_TAG>",
    "repo_connection_uuid": "<REPO_CONNECTION_UUID>",
    "trigger_name": "Deploy preview branches",
    "build_command": "npm run build",
    "deploy_command": "npx wrangler versions upload",
    "root_directory": "/",
    "branch_includes": ["*"],
    "branch_excludes": ["main"],
    "path_includes": ["*"],
    "path_excludes": []
  }'
```

Note the different `deploy_command`: production uses `wrangler deploy` while preview uses `wrangler versions upload` to create preview URLs without affecting the live deployment.

#### Step 6: Set environment variables for each trigger

Set production environment variables:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{production_trigger_uuid}/environment_variables" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request PATCH \
  --data '{
    "variables": [
      {"key": "NODE_ENV", "value": "production", "type": "text"}
    ]
  }'
```

Set preview environment variables:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{preview_trigger_uuid}/environment_variables" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request PATCH \
  --data '{
    "variables": [
      {"key": "NODE_ENV", "value": "development", "type": "text"}
    ]
  }'
```

#### Step 7: Trigger your first build

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{production_trigger_uuid}/builds" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"branch": "main"}'
```

Your Worker is now connected to GitHub. Future pushes to `main` will automatically trigger production deployments, and pushes to other branches will create preview deployments.

### Redeploy current deployment

Redeploy your current active deployment to refresh build-time data. This is useful when you need to rebuild without code changes.

![Redeploy flow: get active deployment, find the build for that version, retrigger with same branch and commit.](https://developers.cloudflare.com/_astro/redeploy-flow.WidssEDb_Z1MmgGB.svg) 

| Step | Action                            | Endpoint                                       |
| ---- | --------------------------------- | ---------------------------------------------- |
| 1    | Get active deployment             | GET /workers/scripts/:worker\_name/deployments |
| 2    | Find the build for that version   | GET /builds/builds?version\_ids=:version\_id   |
| 3    | Retrigger with same branch/commit | POST /builds/triggers/:trigger\_uuid/builds    |

**Step 1: Get the active deployment's version ID**

Use the [GET /workers/scripts/{script\_name}/deployments](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/deployments/methods/list/) endpoint with the `worker_name` from [Step 1](#step-1-get-your-worker-tag):

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts/{worker_name}/deployments" \
  --header "Authorization: Bearer <API_TOKEN>" \
  | jq '.result.deployments[0].versions[0].version_id'
```

Save the `version_id` from the output.

**Step 2: Find the build for that version**

Use the [GET /builds/builds](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/get%5Fby%5Fversion%5Fids/) endpoint with the `version_id` from the previous step:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/builds?version_ids={version_id}" \
  --header "Authorization: Bearer <API_TOKEN>" \
  | jq '.result.builds'
```

From the response, note the `trigger.trigger_uuid`, `build_trigger_metadata.branch`, and `build_trigger_metadata.commit_hash`.
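All three values can be extracted in one jq pass. A sketch, assuming you saved the response from this step to a file — the filename `build.json` is an assumption; the field paths follow the response shape described above:

```sh
# Pull the retrigger parameters out of a saved response from the previous step.
jq -r '.result.builds[0]
  | "trigger_uuid=\(.trigger.trigger_uuid) branch=\(.build_trigger_metadata.branch) commit_hash=\(.build_trigger_metadata.commit_hash)"' build.json
```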

**Step 3: Retrigger with the same branch and commit**

Use the [POST /builds/triggers/{uuid}/builds](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/builds/methods/create/) endpoint with the values from the previous step:

```sh
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/builds/triggers/{trigger_uuid}/builds" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{
    "branch": "{branch}",
    "commit_hash": "{commit_hash}"
  }'
```

Passing both `branch` and `commit_hash` pins the build to that exact commit on that branch.

## Troubleshooting

### "Resource not found" error

You are likely using the Worker name instead of the Worker tag. The Builds API requires the `tag` (a UUID like `1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d`), not the Worker name. Refer to [Step 1](#step-1-get-your-worker-tag) to get your Worker tag.
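To look up the tag for a specific Worker by name, you can filter the scripts list with jq. A sketch — the Worker name `my-worker` is a placeholder for your own:

```sh
# Map a Worker name to its tag using the scripts list endpoint.
curl -s "https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts" \
  --header "Authorization: Bearer <API_TOKEN>" \
  | jq -r '.result[] | select(.id == "my-worker") | .tag'
```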

For other build errors, refer to [Troubleshooting builds](https://developers.cloudflare.com/workers/ci-cd/builds/troubleshoot/).

## Related resources

* [Workers Builds REST API reference](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/) \- Complete endpoint documentation
* [Workers Scripts REST API reference](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/) \- For retrieving Worker tags
* [Workers Builds overview](https://developers.cloudflare.com/workers/ci-cd/builds/) \- Dashboard setup and configuration
* [Build configuration](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/) \- Build settings and options
* [Create API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) \- How to create tokens with the correct permissions


---

---
title: Automatic pull requests
description: Learn about the pull requests Workers Builds creates to configure your project or resolve issues.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Automatic pull requests

Workers Builds can automatically create pull requests in your repository to configure your project or resolve deployment issues.

## Configuration PR

When you connect a repository that does not have a Wrangler configuration file, Workers Builds runs `wrangler deploy`, which triggers [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/). Instead of failing, it creates a pull request with the necessary configuration for your detected framework.

Note

A configuration PR is only created when your deploy command is `npx wrangler deploy`. If you have a custom deploy command, autoconfig will still run and configure your project, but no PR will be created.

### Why you should merge the PR

Without the configuration in your repository, every build has to run autoconfig first, which means your project is built twice: once during autoconfig to generate the configuration, and again for the actual deployment. Merging the PR commits the configuration to your repository, so future builds skip autoconfig and go straight to building and deploying. This results in faster deployments and version-controlled settings.

### What the PR includes

![Example of an automatic configuration pull request created by Workers Builds](https://developers.cloudflare.com/_astro/automatic-pr.CwJG6Bec_1cC506.webp) 

The configuration PR may contain changes to the following files, depending on your framework:

* **`wrangler.jsonc`** \- Wrangler configuration file with your Worker settings
* **Framework adapter** \- Any required Cloudflare adapter for your framework (for example, `@astrojs/cloudflare` for Astro)
* **Framework configuration** \- Updates to framework config files (for example, `astro.config.mjs` for Astro or `svelte.config.js` for SvelteKit)
* **`package.json`** \- New scripts like `deploy`, `preview`, and `cf-typegen`, plus required dependencies
* **`package-lock.json`** / **`yarn.lock`** / **`pnpm-lock.yaml`** \- Updated lock file with new dependencies
* **`.gitignore`** \- Entries for `.wrangler` and `.dev.vars*` files
* **`.assetsignore`** \- For frameworks that generate worker files in the output directory

### PR description

The PR description includes:

* **Detected settings** \- Framework, build command, deploy command, and version command
* **Preview link** \- A working preview generated using the detected settings
* **Next steps** \- Links to documentation for adding bindings, custom domains, and more

Note

When you merge the PR, Workers Builds will update your build and deploy commands if they do not match the detected settings, ensuring successful deployments.

## Name conflict PR

If Workers Builds detects a mismatch between your Worker name in the Cloudflare dashboard and the `name` field in your Wrangler configuration file, it will create a pull request to fix the conflict.

This can happen when:

* You rename your Worker in the dashboard but not in your config file
* You connect a repository that was previously used with a different Worker
* The `name` field in your config does not match the connected Worker

The PR will update the `name` field in your Wrangler configuration to match the Worker name in the dashboard.

For more details, refer to the [name conflict changelog](https://developers.cloudflare.com/changelog/2025-02-20-builds-name-conflict/).

## Reviewing PRs

When you receive a PR from Workers Builds:

1. **Review the changes** \- Check that the configuration matches your project requirements
2. **Test the preview** \- Use the preview link in the PR description to verify everything works
3. **Merge when ready** \- Once satisfied, merge the PR to enable faster deployments


---

---
title: Build branches
description: Configure which git branches should trigger a Workers Build
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Build branches

When you connect a git repository to Workers, commits made on the production git branch will produce a Workers Build. If you want to take advantage of [preview URLs](https://developers.cloudflare.com/workers/configuration/previews/) and [pull request comments](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#pull-request-comment), you can additionally enable "non-production branch builds" in order to trigger a build on all branches of your repository.

## Change production branch

To change the production branch of your project:

1. In **Overview**, select your Workers project.
2. Go to **Settings** \> **Build** \> **Branch control**. Workers will default to the default branch of your git repository, but this can be changed in the dropdown.

Every push event made to this branch will trigger a build and execute the [build command](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-command), followed by the [deploy command](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#deploy-command).

## Configure non-production branch builds

To enable or disable non-production branch builds:

1. In **Overview**, select your Workers project.
2. Go to **Settings** \> **Build** \> **Branch control**. The checkbox **Builds for non-production branches** allows you to enable or disable builds for non-production branches.

When enabled, every push event made to a non-production branch will trigger a build and execute the [build command](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-command), followed by the [non-production branch deploy command](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#non-production-branch-deploy-command).


---

---
title: Build caching
description: Improve build times by caching build outputs and dependencies
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Build caching

Improve Workers build times by caching dependencies and build output between builds with a project-wide shared cache.

The first build to occur after enabling build caching on your Workers project will save relevant artifacts to cache. Every subsequent build will restore from cache unless configured otherwise.

## About build cache

When enabled, build caching will automatically detect which package manager and framework the project is using from its `package.json` and cache data accordingly for the build.

The following shows which package managers and frameworks are supported for dependency and build output caching respectively.

### Package managers

Workers build cache will cache the global cache directories of the following package managers:

| Package Manager                 | Directories cached |
| ------------------------------- | ------------------ |
| [npm ↗](https://www.npmjs.com/) | .npm               |
| [yarn ↗](https://yarnpkg.com/)  | .cache/yarn        |
| [pnpm ↗](https://pnpm.io/)      | .pnpm-store        |
| [bun ↗](https://bun.sh/)        | .bun/install/cache |

### Frameworks

Some frameworks provide a cache directory that is typically populated by the framework with intermediate build outputs or dependencies during build time. Workers Builds will automatically detect the framework you are using and cache this directory for reuse in subsequent builds.

The following frameworks support build output caching:

| Framework  | Directories cached                       |
| ---------- | ---------------------------------------- |
| Astro      | node\_modules/.astro                     |
| Docusaurus | node\_modules/.cache, .docusaurus, build |
| Eleventy   | .cache                                   |
| Gatsby     | .cache, public                           |
| Next.js    | .next/cache                              |
| Nuxt       | node\_modules/.cache/nuxt                |
| SvelteKit  | node\_modules/.cache/imagetools          |

Note

[Static assets](https://developers.cloudflare.com/workers/static-assets/) and [frameworks](https://developers.cloudflare.com/workers/framework-guides/) are now supported in Cloudflare Workers.

### Limits

The following limits are imposed for build caching:

* **Retention**: Cache is purged 7 days after its last read date. Unread cache artifacts are purged 7 days after creation.
* **Storage**: Every project is allocated 10 GB. If the project cache exceeds this limit, the project will automatically start deleting artifacts that were read least recently.

## Enable build cache

To enable build caching:

1. Navigate to [Workers & Pages Overview ↗](https://dash.cloudflare.com) on the Dashboard.
2. Find your Workers project.
3. Go to **Settings** \> **Build** \> **Build cache**.
4. Select **Enable** to turn on build caching.

## Clear build cache

The build cache can be cleared for a project when needed, such as when debugging build issues. To clear the build cache:

1. Navigate to [Workers & Pages Overview ↗](https://dash.cloudflare.com) on the Dashboard.
2. Find your Workers project.
3. Go to **Settings** \> **Build** \> **Build cache**.
4. Select **Clear Cache** to clear the build cache.


---

---
title: Build image
description: Understand the build image used in Workers Builds.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Build image

Workers Builds uses a build image with support for a variety of languages and tools such as Node.js, Python, PHP, Ruby, and Go.

## Supported Tooling

Workers Builds supports a variety of runtimes, languages, and tools. Builds will use the default versions listed below unless a custom version is detected or specified. You can [override the default versions](https://developers.cloudflare.com/workers/ci-cd/builds/build-image/#overriding-default-versions) using environment variables or version files. All versions are available for override.

Default version updates

The default versions will be updated regularly to the latest minor version. No major version updates will be made without notice. If you need a specific minor version, please specify it by [overriding the default version](https://developers.cloudflare.com/workers/ci-cd/builds/build-image/#overriding-default-versions).

### Runtime

| Tool        | Default version | Environment variable | File                         |
| ----------- | --------------- | -------------------- | ---------------------------- |
| **Go**      | 1.24.3          | GO\_VERSION          |                              |
| **Node.js** | 22.16.0         | NODE\_VERSION        | .nvmrc, .node-version        |
| **Python**  | 3.13.3          | PYTHON\_VERSION      | .python-version, runtime.txt |
| **Ruby**    | 3.4.4           | RUBY\_VERSION        | .ruby-version                |

### Tools and languages

| Tool        | Default version   | Environment variable |
| ----------- | ----------------- | -------------------- |
| **Bun**     | 1.2.15            | BUN\_VERSION         |
| **Hugo**    | extended\_0.147.7 | HUGO\_VERSION        |
| **npm**     | 10.9.2            |                      |
| **yarn**    | 4.9.1             | YARN\_VERSION        |
| **pnpm**    | 10.11.1           | PNPM\_VERSION        |
| **pip**     | 25.1.1            |                      |
| **gem**     | 3.6.9             |                      |
| **poetry**  | 2.1.3             |                      |
| **pipx**    | 1.7.1             |                      |
| **bundler** | 2.6.9             |                      |

## Advanced Settings

### Overriding Default Versions

If you need to override a [specific version](https://developers.cloudflare.com/workers/ci-cd/builds/build-image/#overriding-default-versions) of a language or tool within the image, you can specify it as a [build environment variable](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings), or set the relevant file in your source code as shown above.

To set the version using a build environment variable, you can:

1. Find the environment variable name for the language or tool and desired version (e.g. `NODE_VERSION = 22`)
2. Add and save the environment variable on the dashboard by going to **Settings** \> **Build** \> **Build Variables and Secrets** in your Workers project

Or, to set the version by adding a file to your project, you can:

1. Find the filename for the language or tool (e.g. `.nvmrc`)
2. Add the specified file name to the root directory and set the desired version number as the file's content. For example, if the version number is 22, the file should contain '22'.
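For example, pinning Node.js with a version file is a one-line change — the version `22` here is illustrative:

```sh
# Create a .nvmrc in the repository root so Workers Builds uses Node.js 22.
echo "22" > .nvmrc
```

Commit the file so the pinned version travels with the repository.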

### Skip dependency install

You can add the following build variable to disable automatic dependency installation and run a custom install command instead.

| Build variable            | Value     |
| ------------------------- | --------- |
| SKIP\_DEPENDENCY\_INSTALL | 1 or true |

## Pre-installed Packages

In the following table, review the pre-installed packages in the build image. The packages are installed with `apt`, a package manager for Linux distributions.

| curl            | libbz2-dev      | libreadline-dev |
| --------------- | --------------- | --------------- |
| git             | libc++1         | libssl-dev      |
| git-lfs         | libdb-dev       | libvips-dev     |
| unzip           | libgdbm-dev     | libyaml-dev     |
| autoconf        | libgdbm6        | tzdata          |
| build-essential | libgbm1         | wget            |
| bzip2           | libgmp-dev      | zlib1g-dev      |
| gnupg           | liblzma-dev     | zstd            |
| libffi-dev      | libncurses5-dev |                 |

## Build Environment

Workers Builds are run in the following environment:

| **Build Environment** | Ubuntu 24.04 |
| --------------------- | ------------ |
| **Architecture**      | x86\_64      |

## Build Image Policy

### Preinstalled Software Updates

Preinstalled software (languages and tools) will be updated before reaching end-of-life (EOL). These updates apply only if you have not [overridden the default version](https://developers.cloudflare.com/workers/ci-cd/builds/build-image/#overriding-default-versions).

* **Minor version updates**: May be updated to the latest available minor version without notice. For tools that do not follow semantic versioning (e.g., Bun or Hugo), updates that may contain breaking changes will receive 3 months’ notice.
* **Major version updates**: Updated to the next stable long-term support (LTS) version with 3 months’ notice.

**How you'll be notified (for changes requiring notice):**

* [Cloudflare Changelog ↗](https://developers.cloudflare.com/changelog/)
* Dashboard notifications for projects that will receive the update
* Email notifications to project owners

To maintain a specific version and avoid automatic updates, [override the default version](https://developers.cloudflare.com/workers/ci-cd/builds/build-image/#overriding-default-versions).

### Best Practices

To avoid unexpected build failures:

* **Monitor announcements** via the [Cloudflare Changelog ↗](https://developers.cloudflare.com/changelog/), dashboard notifications, and email
* **Pin specific versions** of critical preinstalled software by [overriding default versions](https://developers.cloudflare.com/workers/ci-cd/builds/build-image/#overriding-default-versions)


---

---
title: Build watch paths
description: Reduce compute for your monorepo by specifying paths for Workers Builds to skip
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Build watch paths

When you connect a git repository to Workers, by default a change to any file in the repository will trigger a build. You can configure Workers to include or exclude specific paths to specify if Workers should skip a build for a given path. This can be especially helpful if you are using a monorepo project structure and want to limit the amount of builds being kicked off.

## Configure Paths

To configure which paths are included and excluded:

1. In **Overview**, select your Workers project.
2. Go to **Settings** \> **Build** \> **Build watch paths**. Workers defaults your project's include paths to everything (`[*]`) and exclude paths to nothing (`[]`).

The configuration fields can be filled in two ways:

* **Static filepaths**: Enter the precise name of the file you are looking to include or exclude (for example, `docs/README.md`).
* **Wildcard syntax:** Use wildcards to match multiple path directories. You can specify wildcards at the start or end of your rule.

Wildcard syntax

A wildcard (`*`) is a character used within rules. It can be placed alone to match anything, or at the start or end of a rule for finer control. A wildcard matches zero or more characters. For example, to match all branches that start with `fix/`, create the rule `fix/*`, which matches strings like `fix/1`, `fix/bugs`, or `fix/`.

For each path in a push event, build watch paths will be evaluated as follows:

* Paths satisfying excludes conditions are ignored first
* Any remaining paths are checked against includes conditions
* If any matching path is found, a build is triggered. Otherwise the build is skipped
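
The evaluation order above can be sketched as follows (a minimal illustration of the documented behavior; the function names are ours, not part of Workers Builds):

```javascript
// Match a single path against one rule. A wildcard (*) may appear alone
// or at the start or end of a rule, and matches zero or more characters.
function matchesRule(path, rule) {
  if (rule === "*") return true;
  if (rule.startsWith("*")) return path.endsWith(rule.slice(1));
  if (rule.endsWith("*")) return path.startsWith(rule.slice(0, -1));
  return path === rule;
}

// Apply the documented evaluation order: excluded paths are ignored first,
// then any remaining path matching an include rule triggers a build.
function shouldBuild(changedPaths, includes, excludes) {
  const remaining = changedPaths.filter(
    (p) => !excludes.some((rule) => matchesRule(p, rule)),
  );
  return remaining.some((p) => includes.some((rule) => matchesRule(p, rule)));
}
```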

Workers will bypass the path matching for a push event and default to building the project if:

* A push event contains 0 file changes, for example when a user pushes an empty commit to trigger a build
* A push event contains 3000+ file changes or 20+ commits

## Examples

### Example 1

If you want to trigger a build from all changes within a set of directories, such as all changes in the folders `project-a/` and `packages/`:

* Include paths: `project-a/*, packages/*`
* Exclude paths: (leave empty)

### Example 2

If you want to trigger a build for any changes, but want to exclude changes to a certain directory, such as all changes in the `docs/` directory:

* Include paths: `*`
* Exclude paths: `docs/*`

### Example 3

If you want to trigger a build for a specific file or filetype, for example all files ending in `.md`:

* Include paths: `*.md`
* Exclude paths: (leave empty)


---

---
title: Configuration
description: Understand the different settings associated with your build.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Configuration

When connecting your Git repository to your Worker, you can customize the configurations needed to build and deploy your Worker.

## How Workers Builds works

When a commit is pushed to your connected repository, Workers Builds runs a two-step process:

1. **Build command** _(optional)_ \- Compiles your project (for example, `npm run build` for frameworks like Next.js or Astro)
2. **Deploy command** \- Deploys your Worker to Cloudflare (defaults to `npx wrangler deploy`)

For preview builds (commits to branches other than your production branch), the deploy command is replaced with a **preview deploy command** (defaults to `npx wrangler versions upload`), which creates a preview version without promoting it to production.

## Build settings

Build settings can be found by navigating to **Settings** \> **Build** within your Worker.

Note that when you update and save build settings, the updated settings will be applied to your _next_ build. When you _retry_ a build, the build configurations that exist when the build is retried will be applied.

### Overview

| Setting                                                                                                                                                | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| ------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Git account**                                                                                                                                        | Select the Git account you would like to use. After the initial connection, you can continue to use this Git account for future projects.                                                                                                                                                                                                                                                                                                                                                                               |
| **Git repository**                                                                                                                                     | Choose the Git repository you would like to connect your Worker to.                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| **Git branch**                                                                                                                                         | Select the branch you would like Cloudflare to listen to for new commits. This defaults to `main`.                                                                                                                                                                                                                                                                                                                |
| **Build command** _(Optional)_                                                                                                                         | Set a build command if your project requires a build step (for example, `npm run build`). This is necessary, for example, when using a [front-end framework](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#framework-support) such as Next.js or Remix.                                                                                                                                     |
| **[Deploy command](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#deploy-command)**                                             | The deploy command lets you set the [specific Wrangler command](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) used to deploy your Worker. It defaults to `npx wrangler deploy`, but you may customize it. Workers Builds will use the Wrangler version set in your `package.json`.                                                                                                   |
| **[Non-production branch deploy command](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#non-production-branch-deploy-command)** | Set a command to run when executing [a build for a commit on a non-production branch](https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds). This defaults to `npx wrangler versions upload`, but you may customize it. Workers Builds will use the Wrangler version set in your `package.json`.                                                           |
| **Root directory** _(Optional)_                                                                                                                        | Specify the path to your project. The root directory defines where the build command will be run and can be helpful in [monorepos](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#monorepos) to isolate a specific project within the repository for builds.                                                                                                                                                                                                                                   |
| **[API token](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#api-token)** _(Optional)_                                          | The API token is used to authenticate your build request and authorize the upload and deployment of your Worker to Cloudflare. By default, Cloudflare will automatically generate an API token for your account when using Workers Builds, and continue to use this API token for all subsequent builds. Alternatively, you can [create your own API token](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/authentication/#generate-tokens), or select one that you already own. |
| **Build variables and secrets** _(Optional)_                                                                                                           | Add environment variables and secrets accessible only to your build. Build variables will not be accessible at runtime. If you would like to configure runtime variables you can do so in **Settings** \> **Variables & Secrets**                                                                                                                                                                                                                                                                                       |

Note

Currently, Workers Builds does not honor the configurations set in [Custom Builds](https://developers.cloudflare.com/workers/wrangler/custom-builds/) within your Wrangler configuration file.

### Deploy command

You can run your deploy command using the package manager of your choice.

If you have added a Wrangler deploy command as a script in your `package.json`, then you can run it by setting it as your deploy command. For example, `npm run deploy`.
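
For instance, a `package.json` might define such a script (the script contents here are illustrative):

```json
{
  "scripts": {
    "deploy": "wrangler deploy"
  }
}
```

With this in place, setting the deploy command to `npm run deploy` runs Wrangler through your package manager.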

Examples of other deploy commands you can set include:

| Example Command                        | Description                                                                                                                                                                                                                                                                                                                                       |
| -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| npx wrangler deploy --assets ./public/ | Deploy your Worker along with static assets from the specified directory. Alternatively, you can use the [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/).                                                                                                                                                      |
| npx wrangler deploy --env staging      | If you have a [Wrangler environment](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#wrangler-environments) Worker, you should set your deploy command with the environment flag. For more details, see [Advanced Setups](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#wrangler-environments). |

### Non-production branch deploy command

The non-production branch deploy command is only applicable when you have enabled [non-production branch builds](https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds).

It defaults to `npx wrangler versions upload`, producing a [preview URL](https://developers.cloudflare.com/workers/configuration/previews/). Like the build and deploy commands, it can be customized to run any command.

Examples of other non-production branch deploy commands you can set include:

| Example Command                            | Description                                                                                                                                                                                                                                                                                                                                                             |
| ------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| yarn exec wrangler versions upload         | You can customize the package manager used to run Wrangler.                                                                                                                                                                                                                                                                                                             |
| npx wrangler versions upload --env staging | If you have a [Wrangler environment](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#wrangler-environments) Worker, you should set your non-production branch deploy command with the environment flag. For more details, see [Advanced Setups](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#wrangler-environments). |

### Automatic configuration for new projects

If your repository does not have a Wrangler configuration file, the deploy command (`wrangler deploy`) will trigger [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/). This detects your framework, creates the necessary configuration, and opens a [pull request](https://developers.cloudflare.com/workers/ci-cd/builds/automatic-prs/) for you to review. Once you merge the PR, your project is configured and future builds will deploy normally.

### API token

The API token in Workers Builds defines the access granted to Workers Builds for interacting with your account's resources. Currently, only user tokens are supported, with account-owned token support coming soon.

When you select **Create new token**, a new API token will be created automatically with the following permissions:

* **Account:** Account Settings (read), Workers Scripts (edit), Workers KV Storage (edit), Workers R2 Storage (edit)
* **Zone:** Workers Routes (edit) for all zones on the account
* **User:** User Details (read), Memberships (read)

You can configure the permissions of this API token by navigating to **My Profile** \> **API Tokens** for user tokens.

To maintain consistent access permissions, use the same API token across all uploads and deployments of your Worker.

## Framework support

[Static assets](https://developers.cloudflare.com/workers/static-assets/) and [frameworks](https://developers.cloudflare.com/workers/framework-guides/) are now supported in Cloudflare Workers. Learn to set up Workers projects and the commands for each framework in the framework guides:

* [ AI & agents ](https://developers.cloudflare.com/workers/framework-guides/ai-and-agents/)  
   * [ Agents SDK ](https://developers.cloudflare.com/agents/)  
   * [ LangChain ](https://developers.cloudflare.com/workers/languages/python/packages/langchain/)
* [ APIs ](https://developers.cloudflare.com/workers/framework-guides/apis/)  
   * [ FastAPI ](https://developers.cloudflare.com/workers/languages/python/packages/fastapi/)  
   * [ Hono ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/hono/)
* [ Deploy an existing project ](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/)
* [ Mobile applications ](https://developers.cloudflare.com/workers/framework-guides/mobile-apps/)  
   * [ Expo ](https://docs.expo.dev/eas/hosting/reference/worker-runtime/)
* [ Web applications ](https://developers.cloudflare.com/workers/framework-guides/web-apps/)  
   * [ React + Vite ](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/)  
   * [ Astro ](https://developers.cloudflare.com/workers/framework-guides/web-apps/astro/)  
   * [ React Router (formerly Remix) ](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/)  
   * [ Vue ](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/)  
   * [ TanStack Start ](https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack-start/)  
   * [ Microfrontends ](https://developers.cloudflare.com/workers/framework-guides/web-apps/microfrontends/)  
   * [ More guides... ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/)  
         * [ Analog ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/analog/)  
         * [ Angular ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/angular/)  
         * [ Docusaurus ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/docusaurus/)  
         * [ Gatsby ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/gatsby/)  
         * [ Hono ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/hono/)  
         * [ Nuxt ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/nuxt/)  
         * [ Qwik ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/qwik/)  
         * [ Solid ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/solid/)  
         * [ Waku ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/waku/)  
   * [ Next.js ](https://developers.cloudflare.com/workers/framework-guides/web-apps/nextjs/)  
   * [ RedwoodSDK ](https://developers.cloudflare.com/workers/framework-guides/web-apps/redwoodsdk/)  
   * [ SvelteKit ](https://developers.cloudflare.com/workers/framework-guides/web-apps/sveltekit/)  
   * [ Vike ](https://developers.cloudflare.com/workers/framework-guides/web-apps/vike/)

## Environment variables

You can provide custom environment variables to your build.


To add environment variables via the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker.
3. Select **Settings** \> **Environment variables**.

To add environment variables using Wrangler, define text and JSON values via the `[vars]` configuration in your Wrangler file.

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker-dev",
  "vars": {
    "API_HOST": "example.com",
    "API_ACCOUNT_ID": "example_user",
    "SERVICE_X_DATA": {
      "URL": "service-x-api.dev.example",
      "MY_ID": 123
    }
  }
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker-dev"

[vars]
API_HOST = "example.com"
API_ACCOUNT_ID = "example_user"

[vars.SERVICE_X_DATA]
URL = "service-x-api.dev.example"
MY_ID = 123
```

### Default variables

The following system environment variables are injected by default (but can be overridden):

| Environment Variable      | Injected value                  | Example use-case                                                                      |
| ------------------------- | ------------------------------- | ------------------------------------------------------------------------------------- |
| `CI`                      | `true`                          | Changing build behaviour when run on CI versus locally                                |
| `WORKERS_CI`              | `1`                             | Changing build behaviour when run on Workers Builds versus locally                    |
| `WORKERS_CI_BUILD_UUID`   | `<build-uuid-of-current-build>` | Passing the Build UUID along to custom workflows                                      |
| `WORKERS_CI_COMMIT_SHA`   | `<sha1-hash-of-current-commit>` | Passing the current commit ID to error reporting, for example, Sentry                 |
| `WORKERS_CI_BRANCH`       | `<branch-name-from-push-event>` | Customizing build based on branch, for example, disabling debug logging on production |
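
As a sketch, a build script could branch on these variables (our example policy, not an official snippet; the production branch name `main` is an assumption):

```javascript
// Read the default variables injected by Workers Builds.
const isWorkersBuild = process.env.WORKERS_CI === "1";
const branch = process.env.WORKERS_CI_BRANCH || "local";

// Example policy: enable debug logging everywhere except production builds.
const enableDebugLogging = !isWorkersBuild || branch !== "main";
console.log(`branch=${branch}, debug=${enableDebugLogging}`);
```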


---

---
title: Deploy Hooks
description: Generate unique URLs that trigger new builds when they receive an HTTP POST request.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Deploy Hooks

By default, Workers Builds triggers a build when you push a commit to your [connected Git repository](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/). Deploy Hooks provide another way to trigger a build. Each hook is a unique URL that triggers a manual build for one branch when it receives an HTTP POST request. Use Deploy Hooks to connect Workers Builds with workflows such as:

* Rebuild automatically when content changes in a headless CMS
* Build on a schedule using an external cron service
* Trigger deployments from custom CI/CD pipelines based on specific conditions

## Create a Deploy Hook

Before creating a Deploy Hook, ensure your Worker is [connected to a Git repository](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/).

1. Go to **Workers & Pages** and select your Worker.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Go to **Settings** \> **Builds** \> **Deploy Hooks**.
3. Enter a **name** and select the **branch** to build.
4. Select **Create** and copy the generated URL.

Note

Give each Deploy Hook a descriptive name so you can tell them apart. If you have multiple content sources that each need to trigger builds independently, create a separate hook for each one.

## Trigger a Deploy Hook

Send an HTTP POST request to your Deploy Hook URL to start a build:

```sh
curl -X POST "https://api.cloudflare.com/client/v4/workers/builds/deploy_hooks/<DEPLOY_HOOK_ID>"
```

No `Authorization` header is needed. The unique identifier embedded in the URL acts as the authentication credential.

Example response:

```json
{
  "success": true,
  "errors": [],
  "messages": [],
  "result": {
    "build_uuid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "branch": "main",
    "worker": "my-worker"
  }
}
```

The `build_uuid` in the response can be used to [monitor build status and retrieve logs](https://developers.cloudflare.com/workers/ci-cd/builds/api-reference/#get-build-logs).

### Verify the build

After you trigger a Deploy Hook, you can verify it from the dashboard:

* In the **Deploy Hooks** list, the hook shows when it was last triggered.
* In your Worker's build history, the **Triggered by** column identifies builds started by a Deploy Hook using the hook name and a `deploy hook` label.

If you need to inspect these builds programmatically, use [List builds for a Worker](https://developers.cloudflare.com/workers/ci-cd/builds/api-reference/#list-builds-for-a-worker) in the Builds API reference. Hook-triggered builds are recorded with `build_trigger_source: "deploy_hook"`.

## CMS integration

Most headless CMS platforms support webhooks that call your Deploy Hook URL when content changes. The general setup is the same across platforms:

1. Find the webhooks or integrations settings in your CMS.
2. Create a new webhook and paste your Deploy Hook URL as the target URL.
3. Select which events should trigger the webhook (for example, publish, unpublish, or update).

Refer to your CMS documentation for platform-specific instructions. Popular platforms with webhook support include Contentful, Sanity, Strapi, Storyblok, DatoCMS, and Prismic.

## Idempotency

If the same Deploy Hook is triggered again before the previous build has fully started, Workers Builds does not create a duplicate build. Instead, it returns the build that is already in progress.

If an external system sends the same Deploy Hook twice in quick succession:

1. The first request creates a build.
2. If a second request arrives while that build is still `queued` or `initializing`, no second build is created.
3. Instead, the response returns the existing `build_uuid` and sets `already_exists` to `true`.

Example response when an existing pending build is returned:

```json
{
  "success": true,
  "errors": [],
  "messages": [],
  "result": {
    "build_uuid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "status": "queued",
    "created_on": "2026-01-21T18:50:00Z",
    "already_exists": true
  }
}
```

Once the earlier build moves past `initializing`, a later POST creates a new build as normal. This makes Deploy Hooks safe to use with systems that retry webhooks or emit bursts of content-update events.
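
The duplicate-trigger handling above can be sketched from the caller's side (our helper function, not part of any Cloudflare SDK; it relies only on the response shape shown above):

```javascript
// Trigger a Deploy Hook and handle the idempotent response.
async function triggerBuild(deployHookUrl) {
  const res = await fetch(deployHookUrl, { method: "POST" });
  const { result } = await res.json();
  if (result.already_exists) {
    // A build was already queued or initializing; reuse it.
    console.log(`Reusing pending build ${result.build_uuid}`);
  } else {
    console.log(`Started build ${result.build_uuid}`);
  }
  return result.build_uuid;
}
```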

## Examples

### Deploy from a Slack slash command

A Worker that receives a `/deploy` command from Slack and triggers a build:


JavaScript

```js
export default {
  async fetch(request, env) {
    const body = await request.formData();
    const command = body.get("command");
    const token = body.get("token");

    if (token !== env.SLACK_VERIFICATION_TOKEN) {
      return new Response("Unauthorized", { status: 401 });
    }

    if (command === "/deploy") {
      const res = await fetch(env.DEPLOY_HOOK_URL, { method: "POST" });
      const { result } = await res.json();
      return new Response(`Build started: ${result.build_uuid}`);
    }

    return new Response("Unknown command", { status: 400 });
  },
};
```

TypeScript

```ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const body = await request.formData();
    const command = body.get("command");
    const token = body.get("token");

    if (token !== env.SLACK_VERIFICATION_TOKEN) {
      return new Response("Unauthorized", { status: 401 });
    }

    if (command === "/deploy") {
      const res = await fetch(env.DEPLOY_HOOK_URL, { method: "POST" });
      const { result } = await res.json<{ result: { build_uuid: string } }>();
      return new Response(`Build started: ${result.build_uuid}`);
    }

    return new Response("Unknown command", { status: 400 });
  },
};
```

### Rebuild on a schedule

A Worker with a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) that rebuilds every hour:


JavaScript

```js
export default {
  async scheduled(event, env) {
    await fetch(env.DEPLOY_HOOK_URL, { method: "POST" });
  },
};
```

TypeScript

```ts
export default {
  async scheduled(event: ScheduledEvent, env: Env): Promise<void> {
    await fetch(env.DEPLOY_HOOK_URL, { method: "POST" });
  },
};
```
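For the scheduled handler above to run, the Worker also needs a Cron Trigger in its Wrangler configuration. A minimal sketch (the Worker name, entry point, and compatibility date below are placeholders):

```toml
# wrangler.toml — invoke the scheduled() handler at the top of every hour
name = "rebuild-worker"
main = "src/index.js"
compatibility_date = "2025-05-01"

[triggers]
crons = ["0 * * * *"]
```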

## Security considerations

Warning 

Deploy Hook URLs do not require a separate authorization header. Anyone with access to the URL can trigger builds for your Worker, so store them like other sensitive credentials.

* Store Deploy Hook URLs in environment variables or a secrets manager, never in source code or public configuration files.
* Restrict access to the URL to only the systems that need it.
* If a URL is compromised or you suspect unauthorized use, delete the Deploy Hook immediately and create a new one. The old URL stops working as soon as it is deleted.

### Using the Builds API for authenticated triggers

If your external system supports custom headers, you can call the [manual build endpoint](https://developers.cloudflare.com/api/resources/workers%5Fbuilds/subresources/triggers/methods/create%5Fbuild) with an API token in the `Authorization` header instead. This gives you token-based authentication and the ability to choose the branch per request. For a step-by-step walkthrough, see [Trigger a manual build](https://developers.cloudflare.com/workers/ci-cd/builds/api-reference/#trigger-a-manual-build).
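As a sketch, a caller might wrap that authenticated request like the following. The endpoint URL and the `{ branch }` body shape are assumptions here — take the real path and fields from the linked API reference, and supply the token from a secret rather than source code:

```javascript
// Sketch of an authenticated build trigger. `buildTriggerUrl` stands in for
// the manual build endpoint from the Builds API reference, and `{ branch }`
// is an assumed request-body shape — confirm both against the API docs.
function buildTriggerOptions(apiToken, branch) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ branch }),
  };
}

async function triggerBuild(buildTriggerUrl, apiToken, branch) {
  const res = await fetch(buildTriggerUrl, buildTriggerOptions(apiToken, branch));
  if (!res.ok) throw new Error(`Build trigger failed: ${res.status}`);
  return res.json();
}
```

Unlike a Deploy Hook URL, leaking this request reveals nothing useful without the API token, and the token can be scoped and rotated independently.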

## Limits

Deploy Hooks are rate limited to 10 builds per minute per Worker and 100 builds per minute per account. For all Workers Builds limits, see [Limits & pricing](https://developers.cloudflare.com/workers/ci-cd/builds/limits-and-pricing/).


---

---
title: Event subscriptions
description: Event subscriptions allow you to receive messages when events occur across your Cloudflare account. Cloudflare products (e.g., KV, Workers AI, Workers) can publish structured events to a queue, which you can then consume with Workers or HTTP pull consumers to build custom workflows, integrations, or logic.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Event subscriptions

[Event subscriptions](https://developers.cloudflare.com/queues/event-subscriptions/) allow you to receive messages when events occur across your Cloudflare account. Cloudflare products (e.g., [KV](https://developers.cloudflare.com/kv/), [Workers AI](https://developers.cloudflare.com/workers-ai/), [Workers](https://developers.cloudflare.com/workers/)) can publish structured events to a [queue](https://developers.cloudflare.com/queues/), which you can then consume with Workers or [HTTP pull consumers](https://developers.cloudflare.com/queues/configuration/pull-consumers/) to build custom workflows, integrations, or logic.

For more information on [Event Subscriptions](https://developers.cloudflare.com/queues/event-subscriptions/), refer to the [management guide](https://developers.cloudflare.com/queues/event-subscriptions/manage-event-subscriptions/).

## Send build notifications

You can deploy a Worker that consumes build events and sends notifications to Slack, Discord, or any webhook endpoint:

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/workers-builds-notifications-template)

The template sends notifications for:

* Successful builds with preview or live deployment URLs
* Failed builds with error messages
* Cancelled builds
![Example Slack notifications for Workers Builds events](https://developers.cloudflare.com/_astro/builds-notifications-slack.rcRiU95L_169ufw.webp) 

You can customize the Worker to format messages for your webhook provider. For setup instructions, refer to the [template README ↗](https://github.com/cloudflare/templates/tree/main/workers-builds-notifications-template#readme).
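As a starting point for a custom consumer, a Worker's `queue()` handler can branch on the event `type` strings documented below. This is a minimal sketch, not the template's implementation; `env.WEBHOOK_URL` is an assumed environment variable:

```javascript
// Minimal sketch of a queue consumer for Workers Builds events. The event
// `type` strings match the schemas documented below; `env.WEBHOOK_URL` is an
// assumed binding, not part of the template.
function summarizeBuildEvent(event) {
  const workerName = event.source && event.source.workerName;
  switch (event.type) {
    case "cf.workersBuilds.worker.build.succeeded":
      return `Build succeeded for ${workerName}`;
    case "cf.workersBuilds.worker.build.failed":
      return `Build failed for ${workerName}`;
    case "cf.workersBuilds.worker.build.canceled":
      return `Build canceled for ${workerName}`;
    default:
      return null; // ignore build.started and unknown event types
  }
}

const consumer = {
  // In a real Worker this object would be the default export.
  async queue(batch, env) {
    for (const msg of batch.messages) {
      const text = summarizeBuildEvent(msg.body);
      if (text) {
        await fetch(env.WEBHOOK_URL, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ text }),
        });
      }
      msg.ack(); // acknowledge so the message is not redelivered
    }
  },
};
```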

## Available Workers Builds events

#### `build.started`

Triggered when a build starts.

**Example:**

```json
{
  "type": "cf.workersBuilds.worker.build.started",
  "source": {
    "type": "workersBuilds.worker",
    "workerName": "my-worker"
  },
  "payload": {
    "buildUuid": "build-12345678-90ab-cdef-1234-567890abcdef",
    "status": "running",
    "buildOutcome": null,
    "createdAt": "2025-05-01T02:48:57.132Z",
    "initializingAt": "2025-05-01T02:48:58.132Z",
    "runningAt": "2025-05-01T02:48:59.132Z",
    "stoppedAt": null,
    "buildTriggerMetadata": {
      "buildTriggerSource": "push_event",
      "branch": "main",
      "commitHash": "abc123def456",
      "commitMessage": "Fix bug in authentication",
      "author": "developer@example.com",
      "buildCommand": "npm run build",
      "deployCommand": "wrangler deploy",
      "rootDirectory": "/",
      "repoName": "my-worker-repo",
      "providerAccountName": "github-user",
      "providerType": "github"
    }
  },
  "metadata": {
    "accountId": "f9f79265f388666de8122cfb508d7776",
    "eventSubscriptionId": "1830c4bb612e43c3af7f4cada31fbf3f",
    "eventSchemaVersion": 1,
    "eventTimestamp": "2025-05-01T02:48:57.132Z"
  }
}
```

#### `build.failed`

Triggered when a build fails.

**Example:**

```json
{
  "type": "cf.workersBuilds.worker.build.failed",
  "source": {
    "type": "workersBuilds.worker",
    "workerName": "my-worker"
  },
  "payload": {
    "buildUuid": "build-12345678-90ab-cdef-1234-567890abcdef",
    "status": "failed",
    "buildOutcome": "failure",
    "createdAt": "2025-05-01T02:48:57.132Z",
    "initializingAt": "2025-05-01T02:48:58.132Z",
    "runningAt": "2025-05-01T02:48:59.132Z",
    "stoppedAt": "2025-05-01T02:50:00.132Z",
    "buildTriggerMetadata": {
      "buildTriggerSource": "push_event",
      "branch": "main",
      "commitHash": "abc123def456",
      "commitMessage": "Fix bug in authentication",
      "author": "developer@example.com",
      "buildCommand": "npm run build",
      "deployCommand": "wrangler deploy",
      "rootDirectory": "/",
      "repoName": "my-worker-repo",
      "providerAccountName": "github-user",
      "providerType": "github"
    }
  },
  "metadata": {
    "accountId": "f9f79265f388666de8122cfb508d7776",
    "eventSubscriptionId": "1830c4bb612e43c3af7f4cada31fbf3f",
    "eventSchemaVersion": 1,
    "eventTimestamp": "2025-05-01T02:48:57.132Z"
  }
}
```

#### `build.canceled`

Triggered when a build is canceled.

**Example:**

```json
{
  "type": "cf.workersBuilds.worker.build.canceled",
  "source": {
    "type": "workersBuilds.worker",
    "workerName": "my-worker"
  },
  "payload": {
    "buildUuid": "build-12345678-90ab-cdef-1234-567890abcdef",
    "status": "canceled",
    "buildOutcome": "canceled",
    "createdAt": "2025-05-01T02:48:57.132Z",
    "initializingAt": "2025-05-01T02:48:58.132Z",
    "runningAt": "2025-05-01T02:48:59.132Z",
    "stoppedAt": "2025-05-01T02:49:30.132Z",
    "buildTriggerMetadata": {
      "buildTriggerSource": "push_event",
      "branch": "main",
      "commitHash": "abc123def456",
      "commitMessage": "Fix bug in authentication",
      "author": "developer@example.com",
      "buildCommand": "npm run build",
      "deployCommand": "wrangler deploy",
      "rootDirectory": "/",
      "repoName": "my-worker-repo",
      "providerAccountName": "github-user",
      "providerType": "github"
    }
  },
  "metadata": {
    "accountId": "f9f79265f388666de8122cfb508d7776",
    "eventSubscriptionId": "1830c4bb612e43c3af7f4cada31fbf3f",
    "eventSchemaVersion": 1,
    "eventTimestamp": "2025-05-01T02:48:57.132Z"
  }
}
```

#### `build.succeeded`

Triggered when a build succeeds.

**Example:**

```json
{
  "type": "cf.workersBuilds.worker.build.succeeded",
  "source": {
    "type": "workersBuilds.worker",
    "workerName": "my-worker"
  },
  "payload": {
    "buildUuid": "build-12345678-90ab-cdef-1234-567890abcdef",
    "status": "success",
    "buildOutcome": "success",
    "createdAt": "2025-05-01T02:48:57.132Z",
    "initializingAt": "2025-05-01T02:48:58.132Z",
    "runningAt": "2025-05-01T02:48:59.132Z",
    "stoppedAt": "2025-05-01T02:50:15.132Z",
    "buildTriggerMetadata": {
      "buildTriggerSource": "push_event",
      "branch": "main",
      "commitHash": "abc123def456",
      "commitMessage": "Fix bug in authentication",
      "author": "developer@example.com",
      "buildCommand": "npm run build",
      "deployCommand": "wrangler deploy",
      "rootDirectory": "/",
      "repoName": "my-worker-repo",
      "providerAccountName": "github-user",
      "providerType": "github"
    }
  },
  "metadata": {
    "accountId": "f9f79265f388666de8122cfb508d7776",
    "eventSubscriptionId": "1830c4bb612e43c3af7f4cada31fbf3f",
    "eventSchemaVersion": 1,
    "eventTimestamp": "2025-05-01T02:48:57.132Z"
  }
}
```


---

---
title: Git integration
description: Learn how to add and manage your Git integration for Workers Builds
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Git integration

Cloudflare supports connecting your [GitHub](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/) and [GitLab](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration/) repository to your Cloudflare Worker, and will automatically deploy your code every time you push a change.

Adding a Git integration also lets you monitor build statuses directly in your Git provider using [pull request comments](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#pull-request-comment), [check runs](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#check-run), or [commit statuses](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration/#commit-status), so you can manage deployments without leaving your workflow.

## Supported Git Providers

Cloudflare supports connecting Cloudflare Workers to your GitHub and GitLab repositories. Workers Builds does not currently support connecting self-hosted instances of GitHub or GitLab.

If you are using a different Git provider (e.g. Bitbucket), you can use an [external CI/CD provider (e.g. GitHub Actions)](https://developers.cloudflare.com/workers/ci-cd/external-cicd/) and deploy using the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy).

## Add a Git Integration

Workers Builds provides direct integration with GitHub and GitLab accounts, including both individual and organization accounts, that are _not_ self-hosted.

If you do not have a Git account linked to your Cloudflare account, you will be prompted to set up an installation to GitHub or GitLab when [connecting a repository](https://developers.cloudflare.com/workers/ci-cd/builds/#get-started) for the first time, or when adding a new Git account. Follow the prompts and authorize the Cloudflare Git integration.

![Git providers](https://developers.cloudflare.com/_astro/workers-git-provider.aIMoWcJE_Z1X4wCk.webp) 

You can check the following pages to see if your Git integration has been installed:

* [GitHub Applications page ↗](https://github.com/settings/installations) (if you are in an organization, select **Switch settings context** to access your GitHub organization settings)
* [GitLab Authorized Applications page ↗](https://gitlab.com/-/profile/applications)

For details on providing access to organization accounts, see [GitHub organizational access](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#organizational-access) and [GitLab organizational access](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration/#organizational-access).

## Manage a Git Integration

To manage your Git installation:

1. Go to the **Workers & Pages** page in the Cloudflare dashboard.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker.
3. Go to **Settings** \> **Builds**.
4. Under **Git Repository**, select **Manage**.

This can be useful for managing repository access or troubleshooting installation issues by reinstalling. For more details, see the [GitHub](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration) and [GitLab](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration) guides for how to manage your installation.


---

---
title: GitHub integration
description: Learn how to manage your GitHub integration for Workers Builds
image: https://developers.cloudflare.com/dev-products-preview.png
---


# GitHub integration

Cloudflare supports connecting your GitHub repository to your Cloudflare Worker, and will automatically deploy your code every time you push a change.

## Features

Beyond automatic builds and deployments, the Cloudflare GitHub integration lets you monitor builds directly in GitHub, keeping you informed without leaving your workflow.

### Pull request comment

If a commit is on a pull request, Cloudflare will automatically post a comment on the pull request with the status of the build.

![GitHub pull request comment](https://developers.cloudflare.com/_astro/github-pull-request-comment.DIkAC8Yh_yF45V.webp) 

A [preview URL](https://developers.cloudflare.com/workers/configuration/previews/) will be provided for any builds which perform `wrangler versions upload`. This is particularly useful when reviewing your pull request, as it allows you to compare the code changes alongside an updated version of your Worker.

The comment history shows any builds that completed earlier while the pull request was open.

![GitHub pull request comment history](https://developers.cloudflare.com/_astro/github-pull-request-comment-history.pAxP7K1u_Z2jBa6y.webp) 

### Check run

If you have one or multiple Workers connected to a repository (i.e. a [monorepo](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#monorepos)), you can check on the status of each build within GitHub via [GitHub check runs ↗](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks#checks).

You can see the checks by selecting the status icon next to a commit within your GitHub repository. In the example below, you can select the green check mark to see the results of the check run.

![GitHub status](https://developers.cloudflare.com/_astro/gh-status-check-runs.DkY_pO9C_1Obpz1.webp) 

Check runs will appear like the following in your repository. You can select **Details** to view the build (Build ID) and project (Script) associated with each check.

![GitHub check runs](https://developers.cloudflare.com/_astro/workers-builds-gh-check-runs.CuqL6Htu_Z1vG6k.webp) 

Note that when using [build watch paths](https://developers.cloudflare.com/workers/ci-cd/builds/build-watch-paths/), only projects that trigger a build will generate a check run.

## Manage access

You can deploy projects to Cloudflare Workers from your company or side project on GitHub using the [Cloudflare Workers & Pages GitHub App ↗](https://github.com/apps/cloudflare-workers-and-pages).

### Organizational access

When authorizing Cloudflare Workers to access a GitHub account, you can specify access to your individual account or an organization that you belong to on GitHub.

To add Cloudflare Workers installation to an organization, your user account must be an owner or have the appropriate role within the organization (i.e. the GitHub Apps Manager role). More information on these roles can be seen on [GitHub's documentation ↗](https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#github-app-managers).

GitHub security consideration

A GitHub account should only point to one Cloudflare account. If you are setting up Cloudflare with GitHub for your organization, Cloudflare recommends that you limit the scope of the application to only the repositories you intend to build. To modify these permissions, go to the [Applications page ↗](https://github.com/settings/installations) on GitHub and select **Switch settings context** to access your GitHub organization settings. Then, select **Cloudflare Workers & Pages**. For **Repository access**, select **Only select repositories** and choose your repositories.

### Remove access

You can remove Cloudflare Workers' access to your GitHub repository or account by going to the [Applications page ↗](https://github.com/settings/installations) on GitHub (if you are in an organization, select Switch settings context to access your GitHub organization settings). The GitHub App is named Cloudflare Workers and Pages, and it is shared between Workers and Pages projects.

#### Remove Cloudflare access to a GitHub repository

To remove access to an individual GitHub repository, you can navigate to **Repository access**. Select the **Only select repositories** option, and configure which repositories you would like Cloudflare to have access to.

![GitHub Repository Access](https://developers.cloudflare.com/_astro/github-repository-access.DGHekBft_ZyV5F2.webp) 

#### Remove Cloudflare access to the entire GitHub account

To remove Cloudflare Workers and Pages access to your entire Git account, you can navigate to **Uninstall "Cloudflare Workers and Pages"**, then select **Uninstall**. Removing access to the Cloudflare Workers and Pages app will revoke Cloudflare's access to _all repositories_ from that GitHub account. If you want to only disable automatic builds and deployments, follow the [Disable Build](https://developers.cloudflare.com/workers/ci-cd/builds/#disconnecting-builds) instructions.

Note that removing access to GitHub will disable new builds for Workers and Pages projects that were connected to those repositories, though your previous deployments will continue to be hosted by Cloudflare Workers.

### Reinstall the Cloudflare GitHub App

If you encounter Git integration issues, one potential troubleshooting step is to uninstall and reinstall the GitHub App associated with your Cloudflare installation:

1. Go to the installation settings page on GitHub:  
   * Navigate to **Settings > Builds** for the Workers or Pages project and select **Manage** under Git Repository.  
   * Alternatively, visit these links to find the Cloudflare Workers and Pages installation and select **Configure**:

| **Individual**   | https://github.com/settings/installations                                          |
| ---------------- | ---------------------------------------------------------------------------------- |
| **Organization** | https://github.com/organizations/<YOUR\_ORGANIZATION\_NAME>/settings/installations |

2. In the Cloudflare Workers and Pages GitHub App settings page, navigate to **Uninstall "Cloudflare Workers and Pages"** and select **Uninstall**.
3. Go back to the [**Workers & Pages** overview ↗](https://dash.cloudflare.com) page. Select **Create application** \> **Pages** \> **Connect to Git**.
4. Select the **\+ Add account** button, select the GitHub account you want to add, and then select **Install & Authorize**.
5. You should be redirected to the create project page with your GitHub account or organization in the account list.
6. Attempt to make a new deployment with your project which was previously broken.


---

---
title: GitLab integration
description: Learn how to manage your GitLab integration for Workers Builds
image: https://developers.cloudflare.com/dev-products-preview.png
---


# GitLab integration

Cloudflare supports connecting your GitLab repository to your Cloudflare Worker, and will automatically deploy your code every time you push a change.

## Features

Beyond automatic builds and deployments, the Cloudflare GitLab integration lets you monitor builds directly in GitLab, keeping you informed without leaving your workflow.

### Merge request comment

If a commit is on a merge request, Cloudflare will automatically post a comment on the merge request with the status of the build.

![GitLab merge request comment](https://developers.cloudflare.com/_astro/gitlab-pull-request-comment.CQVsQ21r_jud8J.webp) 

A [preview URL](https://developers.cloudflare.com/workers/configuration/previews/) will be provided for any builds that perform `wrangler versions upload`. This is particularly useful when reviewing your merge request, as it allows you to compare the code changes alongside an updated version of your Worker.

Enabling GitLab Merge Request events for existing connections

New GitLab connections are automatically configured to receive merge request events, which enable commenting functionality. For existing connections, you'll need to manually enable `Merge request events` in the Webhooks tab of your project's settings. You can follow GitLab's documentation for guidance on [managing webhooks ↗](https://docs.gitlab.com/user/project/integrations/webhooks/#manage-webhooks).

### Commit Status

If you have one or multiple Workers connected to a repository (i.e. a [monorepo](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#monorepos)), you can check on the status of each build within GitLab via [GitLab commit status ↗](https://docs.gitlab.com/ee/user/project/merge%5Frequests/status%5Fchecks.html).

You can see the statuses by selecting the status icon next to a commit or by going to **Build** \> **Pipelines** within your GitLab repository. In the example below, you can select the green check mark to see the results of the check run.

![GitLab Status](https://developers.cloudflare.com/_astro/gl-status-checks.B9jgSbf7_Z1XRFYR.webp) 

Commit statuses will appear like the following in your repository. You can select one of the statuses to view the build on the Cloudflare Dashboard.

![GitLab Commit Status](https://developers.cloudflare.com/_astro/gl-commit-status.BghMWpYX_Za7rrg.webp) 

Note that when using [build watch paths](https://developers.cloudflare.com/workers/ci-cd/builds/build-watch-paths/), only projects that trigger a build will generate a commit status.

## Manage access

You can deploy projects to Cloudflare Workers from your company or side project on GitLab using the Cloudflare Pages app.

### Organizational access

When you authorize Cloudflare Workers to access your GitLab account, you automatically give Cloudflare Workers access to organizations, groups, and namespaces accessed by your GitLab account. Managing access to these organizations and groups is handled by GitLab.

### Remove access

You can remove Cloudflare Workers' access to your GitLab account by navigating to the [Authorized Applications page ↗](https://gitlab.com/-/profile/applications) on GitLab. Find the application called Cloudflare Pages and select **Revoke** to revoke access.

Note that this GitLab application is shared between Workers and Pages projects, and removing access to GitLab will disable new builds for both Workers and Pages, though your previous deployments will continue to be hosted by Cloudflare Workers.

### Reinstall the Cloudflare GitLab App

1. Go to your application settings page on GitLab: [https://gitlab.com/-/profile/applications ↗](https://gitlab.com/-/profile/applications)
2. Select **Revoke** on the existing Cloudflare application, if present.
3. Go back to the [**Workers & Pages** overview ↗](https://dash.cloudflare.com) page. Select **Create application** \> **Pages** \> **Connect to Git**.
4. Select the **\+ Add account** button, select the GitLab account you want to add, and then select **Install & Authorize**.
5. You should be redirected to the create project page with your GitLab account or organization in the account list.
6. Attempt to make a new deployment with your project which was previously broken.


---

---
title: Limits &#38; pricing
description: Limits &#38; pricing for Workers Builds
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Limits & pricing

Workers Builds has the following limits.

| Metric                            | Free plan                              | Paid plans                                 |
| --------------------------------- | -------------------------------------- | ------------------------------------------ |
| **Build minutes**                 | 3,000 per month                        | 6,000 per month (then, +$0.005 per minute) |
| **Concurrent builds**             | 1                                      | 6                                          |
| **Build timeout**                 | 20 minutes                             | 20 minutes                                 |
| **Deploy Hooks**                  | 10/min per Worker, 100/min per account | 10/min per Worker, 100/min per account     |
| **CPU**                           | 2 vCPU                                 | 4 vCPU                                     |
| **Memory**                        | 8 GB                                   | 8 GB                                       |
| **Disk space**                    | 20 GB                                  | 20 GB                                      |
| **Environment variables**         | 64                                     | 64                                         |
| **Size per environment variable** | 5 KB                                   | 5 KB                                       |

## Definitions

* **Build minutes**: The number of minutes consumed building your projects each month.
* **Concurrent builds**: The number of builds that can run in parallel across an account.
* **Build timeout**: The amount of time a build can run before it is terminated.
* **Deploy Hooks**: The rate limit for builds triggered by [Deploy Hooks](https://developers.cloudflare.com/workers/ci-cd/builds/deploy-hooks/).
* **vCPU**: The number of CPU cores available to your build.
* **Memory**: The amount of memory available to your build.
* **Disk space**: The amount of disk space available to your build.
* **Environment variables**: The number of custom environment variables you can configure per Worker.
* **Size per environment variable**: The maximum size for each individual environment variable.
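
For example, overage charges on a paid plan follow directly from the figures in the table above; a quick sketch:

```javascript
// Workers Builds paid-plan cost sketch: 6,000 included build minutes per
// month, then $0.005 per additional minute (figures from the table above).
function buildOverageCost(minutesUsed, includedMinutes = 6000, ratePerMinute = 0.005) {
  return Math.max(0, minutesUsed - includedMinutes) * ratePerMinute;
}

console.log(buildOverageCost(10000)); // 4,000 extra minutes at $0.005 each
```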


---

---
title: MCP server
image: https://developers.cloudflare.com/dev-products-preview.png
---


# MCP server


---

---
title: Troubleshooting builds
description: Learn how to troubleshoot common and known issues in Workers Builds.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Troubleshooting builds

This guide explains how to identify and resolve build errors, as well as troubleshoot common issues in the Workers Builds deployment process.

To view your build history, go to your Worker project in the Cloudflare dashboard, select **Deployment**, select **View Build History** at the bottom of the page, and select the build you want to view. To retry a build, select the ellipses next to the build and select **Retry build**. Alternatively, you can select **Retry build** on the Build Details page.

## Known issues or limitations

Below are common build errors that may surface in the build logs, along with other known issues, and how to resolve them.

### Workers name requirement

`✘ [ERROR] The name in your Wrangler configuration file (<Worker name>) must match the name of your Worker. Please update the name field in your Wrangler configuration file.`

When connecting a Git repository to your Workers project, the specified name for the Worker on the Cloudflare dashboard must match the `name` argument in the Wrangler configuration file located in the specified root directory. If it does not match, update the name field in your Wrangler configuration file to match the name of the Worker on the dashboard.

The build system uses the `name` argument in the Wrangler configuration file to determine which Worker to deploy to Cloudflare's global network. This requirement ensures consistency between the Worker's name on the dashboard and the deployed Worker.

Note

This does not apply to [Wrangler Environments](https://developers.cloudflare.com/workers/wrangler/environments/) if the Worker name before the `-<env_name>` suffix matches the name in the Wrangler configuration file.

For example, a Worker named `my-worker-staging` on the dashboard can be deployed from a repository that contains a Wrangler configuration file with the arguments `name = my-worker` and `[env.staging]` using the deploy command `npx wrangler deploy --env staging`. On Wrangler v3 and up, Workers Builds automatically matches the name of the connected Worker by overriding it with the `WRANGLER_CI_OVERRIDE_NAME` environment variable.
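As a sketch, the environment layout described above might look like this in `wrangler.toml` (the Worker and environment names are illustrative):

```
name = "my-worker"

# Deployed with `npx wrangler deploy --env staging`, matching a Worker
# named `my-worker-staging` on the dashboard.
[env.staging]
```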

### Missing Wrangler configuration file

`✘ [ERROR] Missing entry-point: The entry-point should be specified via the command line (e.g. wrangler deploy path/to/script) or the main config field.`

If you see this error, a Wrangler configuration file is likely missing from the root directory. Navigate to **Settings** \> **Build** \> **Build Configuration** to update the root directory, or add a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) to the specified directory.
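As a minimal sketch, a `wrangler.jsonc` that satisfies the entry-point requirement only needs `name`, `main`, and a compatibility date (all values shown are placeholders):

```
{
  "name": "my-worker",
  "main": "./src/index.js",
  "compatibility_date": "2024-01-01"
}
```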

### Incorrect account\_id

`Could not route to /client/v4/accounts/<Account ID>/workers/services/<Worker name>, perhaps your object identifier is invalid? [code: 7003]`

If you see this error, the Wrangler configuration file likely has an `account_id` for a different account. Remove the `account_id` argument or update it with your account's `account_id`, available in **Workers & Pages Overview** under **Account Details**.

### Stale API token

` Failed: The build token selected for this build has been deleted or rolled and cannot be used for this build. Please update your build token in the Worker Builds settings and retry the build.`

The API Token dropdown in Build Configuration settings may show stale tokens that were edited, deleted, or rolled. If you encounter an error due to a stale token, create a new API Token and select it for the build.

### Build timed out

`Build was timed out`

Builds have a maximum duration of 20 minutes. If a build exceeds this limit, it is terminated and the above error is logged. For more details, see [Workers Builds limits](https://developers.cloudflare.com/workers/ci-cd/builds/limits-and-pricing/).

### Git integration issues

If you are running into errors associated with your Git integration, you can try removing access to your [GitHub](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#removing-access) or [GitLab](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration/#removing-access) integration from Cloudflare, then reinstalling the [GitHub](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#reinstall-a-git-integration) or [GitLab](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/gitlab-integration/#reinstall-a-git-integration) integration.

## For additional support

If you discover additional issues or would like to provide feedback, reach out to us in the [Cloudflare Developers Discord ↗](https://discord.com/channels/595317990191398933/1052656806058528849).


---

---
title: External CI/CD
description: Integrate Workers development into your existing continuous integration and continuous development workflows, such as GitHub Actions or GitLab Pipelines.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# External CI/CD

Deploying Cloudflare Workers with CI/CD ensures reliable, automated deployments for every code change.

If you prefer to use your existing CI/CD provider instead of [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/), this section offers guides for popular providers:

* [**GitHub Actions**](https://developers.cloudflare.com/workers/ci-cd/external-cicd/github-actions/)
* [**GitLab CI/CD**](https://developers.cloudflare.com/workers/ci-cd/external-cicd/gitlab-cicd/)

Other CI/CD providers, such as Terraform, CircleCI, and Jenkins, can also be used to deploy Workers with a similar setup process.


---

---
title: GitHub Actions
description: Integrate Workers development into your existing GitHub Actions workflows.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# GitHub Actions

You can deploy Workers with [GitHub Actions ↗](https://github.com/marketplace/actions/deploy-to-cloudflare-workers-with-wrangler). Here is how you can set up your GitHub Actions workflow.

## 1\. Authentication

When running Wrangler locally, authentication to the Cloudflare API happens via the [wrangler login](https://developers.cloudflare.com/workers/wrangler/commands/general/#login) command, which initiates an interactive authentication flow. Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) and [account ID](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/) to authenticate with the Cloudflare API.

### Cloudflare account ID

To find your Cloudflare account ID, refer to [Find account and zone IDs](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/).

### API token

To create an API token to authenticate Wrangler in your CI job:

1. In the Cloudflare dashboard, go to the **Account API tokens** page.  
[ Go to **Account API tokens** ](https://dash.cloudflare.com/?to=/:account/api-tokens)
2. Select **Create Token** \> find **Edit Cloudflare Workers** \> select **Use Template**.
3. Customize your token name.
4. Scope your token.

You will need to choose the account and zone resources that the generated API token will have access to. We recommend scoping these down as much as possible to limit the access of your token. For example, if you have access to three different Cloudflare accounts, you should restrict the generated API token to only the account on which you will be deploying a Worker.

## 2\. Set up CI/CD

The method for running Wrangler in your CI/CD environment will depend on the specific setup for your project (whether you use GitHub Actions/Jenkins/GitLab or something else entirely).

To set up your CI/CD:

1. Go to your CI/CD platform and add the following as secrets:
* `CLOUDFLARE_ACCOUNT_ID`: Set to the [Cloudflare account ID](#cloudflare-account-id) for the account on which you want to deploy your Worker.
* `CLOUDFLARE_API_TOKEN`: Set to the [Cloudflare API token you generated](#api-token).

Warning

Don't store the value of `CLOUDFLARE_API_TOKEN` in your repository, as it gives access to deploy Workers on your account. Instead, you should utilize your CI/CD provider's support for storing secrets.

2. Create a workflow that will be responsible for deploying the Worker. This workflow should run `wrangler deploy`. Review an example [GitHub Actions ↗](https://docs.github.com/en/actions/using-workflows/about-workflows) workflow in the following section.

### GitHub Actions

Cloudflare provides [an official action ↗](https://github.com/cloudflare/wrangler-action) for deploying Workers. Refer to the following example workflow which deploys your Worker on push to the `main` branch.

```
name: Deploy Worker

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - name: Build & Deploy Worker
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```


---

---
title: GitLab CI/CD
description: Integrate Workers development into your existing GitLab Pipelines workflows.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# GitLab CI/CD

You can deploy Workers with [GitLab CI/CD ↗](https://docs.gitlab.com/ee/ci/pipelines/index.html). Here is how you can set up your GitLab CI/CD pipeline.

## 1\. Authentication

When running Wrangler locally, authentication to the Cloudflare API happens via the [wrangler login](https://developers.cloudflare.com/workers/wrangler/commands/general/#login) command, which initiates an interactive authentication flow. Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) and [account ID](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/) to authenticate with the Cloudflare API.

### Cloudflare account ID

To find your Cloudflare account ID, refer to [Find account and zone IDs](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/).

### API token

To create an API token to authenticate Wrangler in your CI job:

1. In the Cloudflare dashboard, go to the **Account API tokens** page.  
[ Go to **Account API tokens** ](https://dash.cloudflare.com/?to=/:account/api-tokens)
2. Select **Create Token** \> find **Edit Cloudflare Workers** \> select **Use Template**.
3. Customize your token name.
4. Scope your token.

You will need to choose the account and zone resources that the generated API token will have access to. We recommend scoping these down as much as possible to limit the access of your token. For example, if you have access to three different Cloudflare accounts, you should restrict the generated API token to only the account on which you will be deploying a Worker.

## 2\. Set up CI

The method for running Wrangler in your CI/CD environment will depend on the specific setup for your project (whether you use GitHub Actions/Jenkins/GitLab or something else entirely).

To set up your CI:

1. Go to your CI platform and add the following as secrets:
* `CLOUDFLARE_ACCOUNT_ID`: Set to the [Cloudflare account ID](#cloudflare-account-id) for the account on which you want to deploy your Worker.
* `CLOUDFLARE_API_TOKEN`: Set to the [Cloudflare API token you generated](#api-token).

Warning

Don't store the value of `CLOUDFLARE_API_TOKEN` in your repository, as it gives access to deploy Workers on your account. Instead, you should utilize your CI/CD provider's support for storing secrets.

2. Create a workflow that will be responsible for deploying the Worker. This workflow should run `wrangler deploy`. Review an example GitLab pipeline in the following section.

### GitLab Pipelines

Refer to [GitLab's blog ↗](https://about.gitlab.com/blog/2022/11/21/deploy-remix-with-gitlab-and-cloudflare/) for an example pipeline. Under the `script` key, replace `npm run deploy` with [npx wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy).
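As a sketch of what that might look like (the job name, image, and branch rule are assumptions, not taken from the blog post), with `CLOUDFLARE_API_TOKEN` and `CLOUDFLARE_ACCOUNT_ID` stored as masked CI/CD variables in the project settings:

```
# Illustrative .gitlab-ci.yml — job name, image, and branch rule are assumptions.
deploy-worker:
  image: node:20
  script:
    - npm ci
    - npx wrangler deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```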


---

---
title: Runtime APIs
description: The Workers runtime is designed to be JavaScript standards compliant and web-interoperable. Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across WinterCG JavaScript runtimes.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Runtime APIs

The Workers runtime is designed to be [JavaScript standards compliant ↗](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable. Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG ↗](https://wintercg.org/) JavaScript runtimes.

[Workers runtime features](https://developers.cloudflare.com/workers/runtime-apis/) include [compatibility with a subset of Node.js APIs](https://developers.cloudflare.com/workers/runtime-apis/nodejs) and the ability to set a [compatibility date or compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-dates/).

* [ Bindings (env) ](https://developers.cloudflare.com/workers/runtime-apis/bindings/)
* [ Cache ](https://developers.cloudflare.com/workers/runtime-apis/cache/)
* [ Console ](https://developers.cloudflare.com/workers/runtime-apis/console/)
* [ Context (ctx) ](https://developers.cloudflare.com/workers/runtime-apis/context/)
* [ Encoding ](https://developers.cloudflare.com/workers/runtime-apis/encoding/)
* [ EventSource ](https://developers.cloudflare.com/workers/runtime-apis/eventsource/)
* [ Fetch ](https://developers.cloudflare.com/workers/runtime-apis/fetch/)
* [ Handlers ](https://developers.cloudflare.com/workers/runtime-apis/handlers/)
* [ Headers ](https://developers.cloudflare.com/workers/runtime-apis/headers/)
* [ HTMLRewriter ](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/)
* [ MessageChannel ](https://developers.cloudflare.com/workers/runtime-apis/messagechannel/)
* [ Node.js compatibility ](https://developers.cloudflare.com/workers/runtime-apis/nodejs/)
* [ Performance and timers ](https://developers.cloudflare.com/workers/runtime-apis/performance/)
* [ Remote-procedure call (RPC) ](https://developers.cloudflare.com/workers/runtime-apis/rpc/)
* [ Request ](https://developers.cloudflare.com/workers/runtime-apis/request/)
* [ Response ](https://developers.cloudflare.com/workers/runtime-apis/response/)
* [ Scheduler ](https://developers.cloudflare.com/workers/runtime-apis/scheduler/)
* [ Streams ](https://developers.cloudflare.com/workers/runtime-apis/streams/)
* [ TCP sockets ](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/)
* [ Web Crypto ](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/)
* [ Web standards ](https://developers.cloudflare.com/workers/runtime-apis/web-standards/)
* [ WebAssembly (Wasm) ](https://developers.cloudflare.com/workers/runtime-apis/webassembly/)
* [ WebSockets ](https://developers.cloudflare.com/workers/runtime-apis/websockets/)


---

---
title: Bindings (env)
description: Worker Bindings that allow for interaction with other Cloudflare Resources.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Bindings (env)

Bindings allow your Worker to interact with resources on the Cloudflare Developer Platform. Bindings provide better performance and fewer restrictions when accessing resources from Workers than the [REST APIs](https://developers.cloudflare.com/api/), which are intended for non-Workers applications.

The following bindings are available today:

* [ AI ](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/#2-connect-your-worker-to-workers-ai)
* [ Analytics Engine ](https://developers.cloudflare.com/analytics/analytics-engine)
* [ Assets ](https://developers.cloudflare.com/workers/static-assets/binding/)
* [ Browser Rendering ](https://developers.cloudflare.com/browser-rendering)
* [ D1 ](https://developers.cloudflare.com/d1/worker-api/)
* [ Dispatcher (Workers for Platforms) ](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/)
* [ Durable Objects ](https://developers.cloudflare.com/durable-objects/api/)
* [ Dynamic Worker Loaders ](https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/)
* [ Environment Variables ](https://developers.cloudflare.com/workers/configuration/environment-variables/)
* [ Hyperdrive ](https://developers.cloudflare.com/hyperdrive)
* [ Images ](https://developers.cloudflare.com/images/transform-images/bindings/)
* [ KV ](https://developers.cloudflare.com/kv/api/)
* [ Media Transformations ](https://developers.cloudflare.com/stream/transform-videos/bindings/)
* [ mTLS ](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls/)
* [ Queues ](https://developers.cloudflare.com/queues/configuration/javascript-apis/)
* [ R2 ](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/)
* [ Rate Limiting ](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/)
* [ Secrets ](https://developers.cloudflare.com/workers/configuration/secrets/)
* [ Secrets Store ](https://developers.cloudflare.com/secrets-store/integrations/workers/)
* [ Service bindings ](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/)
* [ Vectorize ](https://developers.cloudflare.com/vectorize/reference/client-api/)
* [ Version metadata ](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/)
* [ Workflows ](https://developers.cloudflare.com/workflows/)

## What is a binding?

When you declare a binding on your Worker, you grant it a specific capability, such as being able to read and write files to an [R2](https://developers.cloudflare.com/r2/) bucket. For example:

wrangler.jsonc

```
{
  "main": "./src/index.js",
  "r2_buckets": [
    {
      "binding": "MY_BUCKET",
      "bucket_name": "<MY_BUCKET_NAME>"
    }
  ]
}
```

wrangler.toml

```
main = "./src/index.js"

[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "<MY_BUCKET_NAME>"
```

JavaScript

```
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);
    await env.MY_BUCKET.put(key, request.body);
    return new Response(`Put ${key} successfully!`);
  },
};
```

Python

```
from workers import WorkerEntrypoint, Response
from urllib.parse import urlparse

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    url = urlparse(request.url)
    key = url.path[1:]  # strip the leading "/"
    await self.env.MY_BUCKET.put(key, request.body)
    return Response(f"Put {key} successfully!")
```

You can think of a binding as a permission and an API in one piece. With bindings, you never have to add secret keys or tokens to your Worker in order to access resources on your Cloudflare account — the permission is embedded within the API itself. The underlying secret is never exposed to your Worker's code, and therefore can't be accidentally leaked.

## Making changes to bindings

When you deploy a change to your Worker, and only change its bindings (i.e. you don't change the Worker's code), Cloudflare may reuse existing isolates that are already running your Worker. This improves performance — you can change an environment variable or other binding without unnecessarily reloading your code.

As a result, you must be careful when "polluting" global scope with derivatives of your bindings. Anything you create there might continue to exist despite making changes to any underlying bindings. Consider an external client instance which uses a secret API key accessed from `env`: if you put this client instance in global scope and then make changes to the secret, a client instance using the original value might continue to exist. The correct approach would be to create a new client instance for each request.

The following is a good approach:

TypeScript

```
export default {
  fetch(request, env) {
    // `client` is guaranteed to be up-to-date with the latest value of
    // `env.MY_SECRET`, since a new instance is constructed on every incoming request
    let client = new Client(env.MY_SECRET);

    // ... do things with `client`
  },
};
```

Compared to this alternative, which might have surprising and unwanted behavior:

TypeScript

```
let client = undefined;

export default {
  fetch(request, env) {
    // `client` here might not be updated when `env.MY_SECRET` changes,
    // since it may already exist in global scope
    client ??= new Client(env.MY_SECRET);

    // ... do things with `client`
  },
};
```

If you have more advanced needs, explore the [AsyncLocalStorage API](https://developers.cloudflare.com/workers/runtime-apis/nodejs/asynclocalstorage/), which provides a mechanism for exposing values down to child execution handlers.
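As a minimal sketch of that pattern (using the Node.js `AsyncLocalStorage` API; the store contents and function names here are hypothetical):

```javascript
import { AsyncLocalStorage } from "node:async_hooks";

// A per-invocation store: values set here are visible to any function
// called (synchronously or asynchronously) inside `run()`.
const requestContext = new AsyncLocalStorage();

// Deeply nested code reads the store without taking it as an argument.
function deeplyNested() {
  return requestContext.getStore().name;
}

// Each handler invocation gets its own isolated store value.
function handle(name) {
  return requestContext.run({ name }, () => `Hello, ${deeplyNested()}`);
}

console.log(handle("Alice"));
```

Each call to `run()` scopes its value to that call only, so concurrent handlers never see each other's store.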

## How to access `env`

Bindings are located on the `env` object, which can be accessed in several ways:

* It is an argument to entrypoint handlers such as [fetch](https://developers.cloudflare.com/workers/runtime-apis/fetch/):

  JavaScript

  ```
  export default {
    async fetch(request, env) {
      return new Response(`Hi, ${env.NAME}`);
    },
  };
  ```

* It is a class property on [WorkerEntrypoint](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#bindings-env), [DurableObject](https://developers.cloudflare.com/durable-objects/), and [Workflow](https://developers.cloudflare.com/workflows/):

  JavaScript

  ```
  export class MyDurableObject extends DurableObject {
    async sayHello() {
      return `Hi, ${this.env.NAME}!`;
    }
  }
  ```

  Python

  ```
  from workers import WorkerEntrypoint, Response

  class Default(WorkerEntrypoint):
    async def fetch(self, request):
      return Response(f"Hi, {self.env.NAME}")
  ```

* It can be imported from `cloudflare:workers`:

  JavaScript

  ```
  import { env } from "cloudflare:workers";

  console.log(`Hi, ${env.NAME}`);
  ```

  Python

  ```
  from workers import import_from_javascript

  env = import_from_javascript("cloudflare:workers").env
  print(f"Hi, {env.NAME}")
  ```

### Importing `env` as a global

Importing `env` from `cloudflare:workers` is useful when you need to access a binding such as [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) or [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) in top-level global scope. For example, to initialize an API client:

JavaScript

```
import { env } from "cloudflare:workers";
import ApiClient from "example-api-client";

// API_KEY and LOG_LEVEL now usable in top-level scope
let apiClient = ApiClient.new({ apiKey: env.API_KEY });
const LOG_LEVEL = env.LOG_LEVEL || "info";

export default {
  fetch(req) {
    // you can use apiClient or LOG_LEVEL, configured before any request is handled
  },
};
```

Python

```
from workers import WorkerEntrypoint, env
from example_api_client import ApiClient

# api_client and LOG_LEVEL now usable in top-level scope
api_client = ApiClient(api_key=env.API_KEY)
LOG_LEVEL = getattr(env, "LOG_LEVEL", "info")

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    # you can use api_client or LOG_LEVEL, configured before any request is handled
    ...
```

Workers do not allow I/O from outside a request context. This means that even though `env` is accessible from the top-level scope, you will not be able to access every binding's methods.

For instance, environment variables and secrets are accessible, and you are able to call `env.NAMESPACE.get` to get a [Durable Object stub](https://developers.cloudflare.com/durable-objects/api/stub/) in the top-level context. However, calling methods on the Durable Object stub, making [calls to a KV store](https://developers.cloudflare.com/kv/api/), and [calling to other Workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings) will not work.

JavaScript

```
import { env } from "cloudflare:workers";

// This would error!
// env.KV.get('my-key')

export default {
  async fetch(req) {
    // This works
    let myVal = await env.KV.get("my-key");
    return new Response(myVal);
  },
};
```

Python

```
from workers import Response, WorkerEntrypoint, env

# This would fail!
# env.KV.get('my-key')

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    # This works
    my_val = await env.KV.get("my-key")
    return Response(my_val)
```

Additionally, importing `env` from `cloudflare:workers` lets you avoid passing `env` as an argument through many function calls if you need to access a binding from a deeply-nested function. This can be helpful in a complex codebase.

JavaScript

```
import { env } from "cloudflare:workers";

export default {
  fetch(req) {
    return new Response(sayHello());
  },
};

// env is not an argument to sayHello...
function sayHello() {
  let myName = getName();
  return `Hello, ${myName}`;
}

// ...nor is it an argument to getName
function getName() {
  return env.MY_NAME;
}
```

Python

```
from workers import Response, WorkerEntrypoint, env

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    return Response(say_hello())

# env is not an argument to say_hello...
def say_hello():
  my_name = get_name()
  return f"Hello, {my_name}"

# ...nor is it an argument to get_name
def get_name():
  return env.MY_NAME
```

Note

While using `env` from `cloudflare:workers` may be simpler to write than passing it through a series of function calls, passing `env` as an argument is a helpful pattern for dependency injection and testing.

### Overriding `env` values

The `withEnv` function provides a mechanism for overriding values of `env`.

Imagine a user has defined the [environment variable](https://developers.cloudflare.com/workers/configuration/environment-variables/) `NAME` to be `"Alice"` in their Wrangler configuration file and deployed a Worker. By default, logging `env.NAME` would print "Alice". Using the `withEnv` function, you can override the value of `NAME`.

JavaScript

```
import { env, withEnv } from "cloudflare:workers";

function logName() {
  console.log(env.NAME);
}

export default {
  fetch(req) {
    // this will log "Alice"
    logName();

    withEnv({ NAME: "Bob" }, () => {
      // this will log "Bob"
      logName();
    });

    // ...etc...
  },
};
```

Python

```
from workers import Response, WorkerEntrypoint, env, patch_env

def log_name():
  print(env.NAME)

class Default(WorkerEntrypoint):
  async def fetch(self, request):
    # this will log "Alice"
    log_name()

    with patch_env(NAME="Bob"):
      # this will log "Bob"
      log_name()

    # ...etc...
```

This can be useful when testing code that relies on an imported `env` object.
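To illustrate the pattern, here is a standalone sketch of a `withEnv`-style override. This is not the actual `cloudflare:workers` implementation; the `currentEnv` variable and `greet` function are invented for the example:

```ts
// Sketch of a withEnv-style override using a module-level variable.
// Not the real cloudflare:workers implementation -- just the
// dependency-substitution pattern the note above describes.
type Env = Record<string, string>;

let currentEnv: Env = { NAME: "Alice" };

// Functions read through a getter so overrides are visible everywhere.
const env = {
  get NAME(): string {
    return currentEnv.NAME;
  },
};

function withEnv<T>(overrides: Env, fn: () => T): T {
  const previous = currentEnv;
  currentEnv = { ...previous, ...overrides };
  try {
    return fn();
  } finally {
    currentEnv = previous; // restore even if fn throws
  }
}

function greet(): string {
  return `Hello, ${env.NAME}`;
}

console.log(greet()); // "Hello, Alice"
console.log(withEnv({ NAME: "Bob" }, greet)); // "Hello, Bob"
console.log(greet()); // "Hello, Alice" again -- the override was scoped
```

Because the override is restored in a `finally` block, code after the callback sees the original values, which is what makes this shape convenient in tests.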


---

---
title: AI
description: Run generative AI inference and machine learning models on GPUs, without managing servers or infrastructure.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# AI

---

---
title: Analytics Engine
description: Write high-cardinality data and metrics at scale, directly from Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Analytics Engine

---

---
title: Assets
description: APIs available in Cloudflare Workers to interact with a collection of static assets. Static assets can be uploaded as part of your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Assets

---

---
title: Browser Rendering
description: Programmatically control and interact with a headless browser instance.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Browser Rendering

---

---
title: D1
description: APIs available in Cloudflare Workers to interact with D1.  D1 is Cloudflare's native serverless database.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# D1

---

---
title: Dispatcher (Workers for Platforms)
description: Let your customers deploy their own code to your platform, and dynamically dispatch requests from your Worker to their Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Dispatcher (Workers for Platforms)

---

---
title: Durable Objects
description: A globally distributed coordination API with strongly consistent storage.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Durable Objects

---

---
title: Environment Variables
description: Add string and JSON values to your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Environment Variables

---

---
title: Hyperdrive
description: Connect to your existing database from Workers, turning your existing regional database into a globally distributed database.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Hyperdrive

---

---
title: Images
description: Store, transform, optimize, and deliver images at scale.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Images

---

---
title: KV
description: Global, low-latency, key-value data storage.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# KV

---

---
title: Media Transformations
description: Optimize, transform, and extract from short-form video.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Media Transformations

---

---
title: mTLS
description: Configure your Worker to present a client certificate to services that enforce an mTLS connection.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# mTLS

When using [HTTPS ↗](https://www.cloudflare.com/learning/ssl/what-is-https/), a server presents a certificate that the client authenticates in order to prove the server's identity. For even tighter security, some services require that the client also present a certificate.

This process, known as [mTLS ↗](https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/), moves authentication into the TLS protocol itself, rather than managing it in application code. Connections from unauthorized clients are rejected during the TLS handshake.

To present a client certificate when communicating with a service, create an mTLS certificate [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) in your Worker project's Wrangler file. This allows your Worker to present a client certificate to a service on your behalf.

Warning

Currently, mTLS for Workers cannot be used for requests made to a service that is a [proxied zone](https://developers.cloudflare.com/dns/proxy-status/) on Cloudflare. If your Worker presents a client certificate to a service proxied by Cloudflare, Cloudflare will return a `520` error.

First, upload a certificate and its private key to your account using the [wrangler mtls-certificate](https://developers.cloudflare.com/workers/wrangler/commands/certificates/#mtls-certificate) command:

Warning

The `wrangler mtls-certificate upload` command requires the [SSL and Certificates Edit API token scope](https://developers.cloudflare.com/fundamentals/api/reference/permissions/). If you are using the OAuth flow triggered by `wrangler login`, the correct scope is set automatically. If you are using API tokens, refer to [Create an API token ↗](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) to set the right scope for your API token.

Terminal window

```
npx wrangler mtls-certificate upload --cert cert.pem --key key.pem --name my-client-cert
```

Then, update your Worker project's Wrangler file to create an mTLS certificate binding:

wrangler.jsonc

```
{
  "mtls_certificates": [
    {
      "binding": "MY_CERT",
      "certificate_id": "<CERTIFICATE_ID>"
    }
  ]
}
```

wrangler.toml

```
[[mtls_certificates]]
binding = "MY_CERT"
certificate_id = "<CERTIFICATE_ID>"
```

Note

Certificate IDs are displayed after uploading, and can also be viewed with the command `wrangler mtls-certificate list`.

Adding an mTLS certificate binding exposes a variable in the Worker's environment on which a `fetch()` method is available. This `fetch()` method uses the standard [Fetch](https://developers.cloudflare.com/workers/runtime-apis/fetch/) API and has the exact same signature as the global `fetch`, but always presents the client certificate when establishing the TLS connection.

Note

mTLS certificate bindings present an API similar to [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings).

### Interface


JavaScript

```
export default {
  async fetch(request, environment) {
    return await environment.MY_CERT.fetch("https://a-secured-origin.com");
  },
};
```

TypeScript

```
interface Env {
  MY_CERT: Fetcher;
}

export default {
  async fetch(request, environment): Promise<Response> {
    return await environment.MY_CERT.fetch("https://a-secured-origin.com");
  },
} satisfies ExportedHandler<Env>;
```


---

---
title: Queues
description: Send and receive messages with guaranteed delivery.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Queues

---

---
title: R2
description: APIs available in Cloudflare Workers to read from and write to R2 buckets.  R2 is S3-compatible, zero egress-fee, globally distributed object storage.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# R2

---

---
title: Rate Limiting
description: Define rate limits and interact with them directly from your Cloudflare Worker
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Rate Limiting

The Rate Limiting API lets you define rate limits and write code around them in your Worker.

You can use it to enforce:

* Rate limits that are applied after your Worker starts, only once a specific part of your code is reached
* Different rate limits for different types of customers or users (ex: free vs. paid)
* Resource-specific or path-specific limits (ex: limit per API route)
* Any combination of the above

The Rate Limiting API is backed by the same infrastructure that serves [rate limiting rules](https://developers.cloudflare.com/waf/rate-limiting-rules/).

Note

You must use version 4.36.0 or later of the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler).

## Get started

First, add a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) to your Worker that gives it access to the Rate Limiting API:

wrangler.jsonc

```
{
  "main": "src/index.js",
  "ratelimits": [
    {
      "name": "MY_RATE_LIMITER",
      // An identifier you define that is unique to your Cloudflare account.
      // Must be an integer.
      "namespace_id": "1001",
      // Limit: the number of tokens allowed within a given period, in a single
      // Cloudflare location.
      // Period: the duration of the period, in seconds. Must be either 10 or 60.
      "simple": {
        "limit": 100,
        "period": 60
      }
    }
  ]
}
```

wrangler.toml

```
main = "src/index.js"

[[ratelimits]]
name = "MY_RATE_LIMITER"
namespace_id = "1001"

  [ratelimits.simple]
  limit = 100
  period = 60
```

This configuration makes the `MY_RATE_LIMITER` binding available in your Worker, which provides a `limit()` method:


JavaScript

```
export default {
  async fetch(request, env) {
    const { pathname } = new URL(request.url)

    const { success } = await env.MY_RATE_LIMITER.limit({ key: pathname }) // key can be any string of your choosing
    if (!success) {
      return new Response(`429 Failure – rate limit exceeded for ${pathname}`, { status: 429 })
    }

    return new Response(`Success!`)
  }
}
```

TypeScript

```
interface Env {
  MY_RATE_LIMITER: RateLimit;
}

export default {
  async fetch(request, env): Promise<Response> {
    const { pathname } = new URL(request.url)

    const { success } = await env.MY_RATE_LIMITER.limit({ key: pathname }) // key can be any string of your choosing
    if (!success) {
      return new Response(`429 Failure – rate limit exceeded for ${pathname}`, { status: 429 })
    }

    return new Response(`Success!`)
  }
} satisfies ExportedHandler<Env>;
```

The `limit()` API accepts a single argument — a configuration object with the `key` field.

* The key you provide can be any `string` value.
* A common pattern is to define your key by combining a string that uniquely identifies the actor initiating the request (ex: a user ID or customer ID) and a string that identifies a specific resource (ex: a particular API route).
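A sketch of that composite-key pattern (the `rateLimitKey` helper, its field order, and the `:` separator are illustrative choices, not part of the API):

```ts
// Compose a rate-limit key from a stable actor identifier and a resource.
// The helper name and ":" separator are arbitrary choices for illustration.
function rateLimitKey(actorId: string, resource: string): string {
  return `${actorId}:${resource}`;
}

// Inside a fetch handler this might be used as (sketch):
//   const key = rateLimitKey(userId, new URL(request.url).pathname);
//   const { success } = await env.MY_RATE_LIMITER.limit({ key });
console.log(rateLimitKey("user-123", "/api/orders")); // "user-123:/api/orders"
```

Keeping both parts stable across requests ensures the same actor hitting the same resource always maps to the same counter.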

You can define multiple rate limiting configurations per Worker, which allows you to enforce different limits based on incoming request or user parameters, as needed to protect your application or upstream APIs.

For example, here is how you can define two rate limiting configurations for free and paid tier users:

wrangler.jsonc

```
{
  "main": "src/index.js",
  "ratelimits": [
    // Free user rate limiting
    {
      "name": "FREE_USER_RATE_LIMITER",
      "namespace_id": "1001",
      "simple": {
        "limit": 100,
        "period": 60
      }
    },
    // Paid user rate limiting
    {
      "name": "PAID_USER_RATE_LIMITER",
      "namespace_id": "1002",
      "simple": {
        "limit": 1000,
        "period": 60
      }
    }
  ]
}
```

wrangler.toml

```
main = "src/index.js"

[[ratelimits]]
name = "FREE_USER_RATE_LIMITER"
namespace_id = "1001"

  [ratelimits.simple]
  limit = 100
  period = 60

[[ratelimits]]
name = "PAID_USER_RATE_LIMITER"
namespace_id = "1002"

  [ratelimits.simple]
  limit = 1_000
  period = 60
```

## Configuration

A rate limiting binding has the following settings:

| Setting       | Type   | Description                                                                                                                                                      |
| ------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| namespace\_id | string | An identifier that uniquely defines this rate limiting namespace within your Cloudflare account. Must contain a positive integer (for example, "1001"), but is intentionally specified as a string. |
| simple        | object | The rate limit configuration. `simple` is the only supported type.                                                                                                |
| simple.limit  | number | The number of allowed requests (or calls to `limit()`) within the given period.                                                                                   |
| simple.period | number | The duration of the rate limit window, in seconds. Must be either 10 or 60.                                                                                       |

Note

Two rate limiting bindings that share the same `namespace_id` — even across different Workers on the same account — share the same rate limit counters for a given key. This is intentional and allows you to enforce a single rate limit across multiple Workers.

If you do not want to share rate limit state between bindings, use a unique `namespace_id` for each binding.

For example, to apply a rate limit of 1500 requests per minute, you would define a rate limiting configuration as follows:

wrangler.jsonc

```
{
  "ratelimits": [
    {
      "name": "MY_RATE_LIMITER",
      "namespace_id": "1001",
      // 1500 requests per 60-second period - calls to limit() increment this
      "simple": {
        "limit": 1500,
        "period": 60
      }
    }
  ]
}
```

wrangler.toml

```
[[ratelimits]]
name = "MY_RATE_LIMITER"
namespace_id = "1001"

  [ratelimits.simple]
  limit = 1_500
  period = 60
```

## Best practices

The `key` passed to the `limit()` function determines what you rate limit on. It should represent a unique characteristic of a user, or class of users, that you wish to rate limit.

* Good choices include API keys in `Authorization` HTTP headers, URL paths or routes, specific query parameters used by your application, and/or user IDs and tenant IDs. These are all stable identifiers and are unlikely to change from request-to-request.
* It is not recommended to use IP addresses or locations (regions or countries), since these can be shared by many users in many valid cases. You may find yourself unintentionally rate limiting a wider group of users than you intended by rate limiting on these keys.

TypeScript

```
// Recommended: use a key that represents a specific user or class of user
const url = new URL(req.url)
const userId = url.searchParams.get("userId") || ""
const { success } = await env.MY_RATE_LIMITER.limit({ key: userId })

// Not recommended: many users may share a single IP, especially on mobile networks
// or when using privacy-enabling proxies
const ipAddress = req.headers.get("cf-connecting-ip") || ""
const { success: ipSuccess } = await env.MY_RATE_LIMITER.limit({ key: ipAddress })
```

## Locality

Rate limits that you define and enforce in your Worker are local to the [Cloudflare location ↗](https://www.cloudflare.com/network/) that your Worker runs in.

For example, if a request comes in from Sydney, Australia, to the Worker shown above, after 100 requests in a 60 second window, any further requests for a particular path would be rejected, and a 429 HTTP status code returned. But this would only apply to requests served in Sydney. For each unique key you pass to your rate limiting binding, there is a unique limit per Cloudflare location.
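Conceptually, each location maintains its own fixed-window counter per key. The following is a simplified, synchronous model of that behavior; it is not Cloudflare's actual implementation (which caches counters locally and syncs them asynchronously), and the class and method names are invented:

```ts
// Simplified, synchronous model of per-location fixed-window rate limiting.
// Not Cloudflare's implementation -- just a sketch of the locality behavior.
class LocationRateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit: number, // allowed calls per window
    private periodMs: number, // window length in milliseconds
  ) {}

  limitKey(key: string, now: number): { success: boolean } {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.periodMs) {
      this.counts.set(key, { windowStart: now, count: 1 }); // new window
      return { success: true };
    }
    entry.count += 1;
    return { success: entry.count <= this.limit };
  }
}

// Each location has its own limiter, so a key's count in Sydney
// does not affect the same key's count in London.
const sydney = new LocationRateLimiter(100, 60_000);
const london = new LocationRateLimiter(100, 60_000);

for (let i = 0; i < 100; i++) sydney.limitKey("/api/orders", 0);
console.log(sydney.limitKey("/api/orders", 1_000).success); // false: over limit in Sydney
console.log(london.limitKey("/api/orders", 1_000).success); // true: London counts separately
```

This is why a global burst of traffic spread across many locations can exceed the nominal limit in aggregate while staying under it at each location.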

## Performance

The Rate Limiting API in Workers is designed to be fast.

The underlying counters are cached on the same machine that your Worker runs in, and updated asynchronously in the background by communicating with a backing store that is within the same Cloudflare location.

This means that while in your code you `await` a call to the `limit()` method:

JavaScript

```
const { success } = await env.MY_RATE_LIMITER.limit({ key: customerId })
```

you are not waiting on a network request, so you can use the Rate Limiting API without introducing meaningful latency to your Worker.

## Accuracy

The above also means that the Rate Limiting API is permissive and eventually consistent; it is intentionally not designed to be an accurate accounting system.

For example, if many requests come in to your Worker in a single Cloudflare location, all rate limited on the same key, the [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works) that serves each request will check against its locally cached value of the rate limit. Very quickly, but not immediately, these requests will count towards the rate limit within that Cloudflare location.

## Monitoring

Rate limiting bindings are not currently visible in the Cloudflare dashboard. To monitor rate-limited requests from your Worker:

* **[Workers Observability](https://developers.cloudflare.com/workers/observability/)** — Use [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/) and [Traces](https://developers.cloudflare.com/workers/observability/traces/) to observe HTTP 429 responses returned by your Worker when rate limits are exceeded.
* **[Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/)** — Add an Analytics Engine binding to your Worker and emit custom data points (for example, a `rate_limited` event) when `limit()` returns `{ success: false }`. This lets you build dashboards and query rate limiting metrics over time.

## Examples

* [@elithrar/workers-hono-rate-limit ↗](https://github.com/elithrar/workers-hono-rate-limit) — Middleware that lets you easily add rate limits to routes in your [Hono ↗](https://hono.dev/) application.
* [@hono-rate-limiter/cloudflare ↗](https://github.com/rhinobase/hono-rate-limiter) — Middleware that lets you easily add rate limits to routes in your [Hono ↗](https://hono.dev/) application, with multiple data stores to choose from.
* [hono-cf-rate-limit ↗](https://github.com/bytaesu/hono-cf-rate-limit) — Middleware for Hono applications that applies rate limiting in Cloudflare Workers, powered by Wrangler’s built-in features.


---

---
title: Secrets
description: Add encrypted secrets to your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Secrets


---

---
title: Secrets Store
description: Account-level secrets that can be added to Workers applications as a binding.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Secrets Store


---

---
title: Service bindings
description: Facilitate Worker-to-Worker communication.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Service bindings

## About Service bindings

Service bindings allow one Worker to call into another, without going through a publicly-accessible URL. A Service binding allows Worker A to call a method on Worker B, or to forward a request from Worker A to Worker B.

Service bindings provide the separation of concerns of a microservice or service-oriented architecture, without the configuration pain, performance overhead, or need to learn RPC protocols.

* **Service bindings are fast.** When you use Service Bindings, there is zero overhead or added latency. By default, both Workers run on the same thread of the same Cloudflare server. And when you enable [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/), each Worker runs in the optimal location for overall performance.
* **Service bindings are not just HTTP.** Worker A can expose methods that can be directly called by Worker B. Communicating between services only requires writing JavaScript methods and classes.
* **Service bindings don't increase costs.** You can split apart functionality into multiple Workers, without incurring additional costs. Learn more about [pricing for Service Bindings](https://developers.cloudflare.com/workers/platform/pricing/#service-bindings).

![Service bindings are a zero-cost abstraction](https://developers.cloudflare.com/_astro/service-bindings-comparison.CeB5uD1k_3yWPz.webp) 

Service bindings are commonly used to:

* **Provide a shared internal service to multiple Workers.** For example, you can deploy an authentication service as its own Worker, and then have any number of separate Workers communicate with it via Service bindings.
* **Isolate services from the public Internet.** You can deploy a Worker that is not reachable via the public Internet, and can only be reached via an explicit Service binding that another Worker declares.
* **Allow teams to deploy code independently.** Team A can deploy their Worker on their own release schedule, and Team B can deploy their Worker separately.

## Configuration

You add a Service binding by modifying the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) of the caller — the Worker that you want to be able to initiate requests.

For example, if you want Worker A to be able to call Worker B — you'd add the following to the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) for Worker A:

wrangler.jsonc

```
{
  "services": [
    {
      "binding": "<BINDING_NAME>",
      "service": "<WORKER_NAME>"
    }
  ]
}
```

wrangler.toml

```
[[services]]
binding = "<BINDING_NAME>"
service = "<WORKER_NAME>"
```

* `binding`: The name of the key you want to expose on the `env` object.
* `service`: The name of the target Worker you would like to communicate with. This Worker must be on your Cloudflare account.

## Interfaces

Worker A that declares a Service binding to Worker B can call Worker B in two different ways:

1. [RPC](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc) lets you communicate between Workers using function calls that you define. For example, `await env.BINDING_NAME.myMethod(arg1)`. This is recommended for most use cases, and allows you to create your own internal APIs that your Worker makes available to other Workers.
2. [HTTP](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/http) lets you communicate between Workers by calling the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch) from other Workers, sending `Request` objects and receiving `Response` objects back. For example, `env.BINDING_NAME.fetch(request)`.
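As a sketch of both call styles against a single binding (`BINDING_NAME` and `myMethod` are placeholders for whatever your target Worker exposes):

JavaScript

```
// Sketch: the two ways a caller can use a Service binding.
async function callBoth(env) {
  // 1. RPC: invoke a method defined on the target Worker's entrypoint.
  const sum = await env.BINDING_NAME.myMethod(1, 2);

  // 2. HTTP: invoke the target Worker's fetch() handler with a Request.
  const response = await env.BINDING_NAME.fetch(
    new Request("https://internal.example/"),
  );

  return { sum, status: response.status };
}
```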

## Example — build your first Service binding using RPC

This example [extends the WorkerEntrypoint class](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#the-workerentrypoint-class) to support RPC-based Service bindings. First, create the Worker that you want to communicate with. Let's call this "Worker B". Worker B exposes the public method, `add(a, b)`:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_b",
  "main": "./src/workerB.js"
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "worker_b"
main = "./src/workerB.js"
```

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export default class WorkerB extends WorkerEntrypoint {
  // Currently, entrypoints without a named handler are not supported
  async fetch() {
    return new Response(null, { status: 404 });
  }

  async add(a, b) {
    return a + b;
  }
}
```

Next, create the Worker that will call Worker B. Let's call this "Worker A". Worker A declares a binding to Worker B. This is what gives it permission to call public methods on Worker B.

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_a",
  "main": "./src/workerA.js",
  "services": [
    {
      "binding": "WORKER_B",
      "service": "worker_b"
    }
  ]
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "worker_a"
main = "./src/workerA.js"

[[services]]
binding = "WORKER_B"
service = "worker_b"
```

JavaScript

```
export default {
  async fetch(request, env) {
    const result = await env.WORKER_B.add(1, 2);
    return new Response(String(result));
  },
};
```

To run both Worker A and Worker B in local development, you must run two instances of [Wrangler](https://developers.cloudflare.com/workers/wrangler) in your terminal. For each Worker, open a new terminal and run [npx wrangler@latest dev](https://developers.cloudflare.com/workers/wrangler/commands#dev).

Each Worker is deployed separately.

## Lifecycle

The Service bindings API is asynchronous — you must `await` any method you call. If Worker A invokes Worker B via a Service binding, and Worker A does not await the completion of Worker B, Worker B will be terminated early.

For more about the lifecycle of calling a Worker over a Service Binding via RPC, refer to the [RPC Lifecycle](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle) docs.
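As a minimal sketch of the correct pattern (reusing the `WORKER_B.add` binding from the example above):

JavaScript

```
// Sketch: RPC calls over a Service binding return promises. Await them
// before returning, or the callee may be terminated mid-flight.
async function handler(env) {
  // Correct: the caller waits for Worker B to finish.
  const value = await env.WORKER_B.add(1, 2);
  return new Response(String(value));

  // Incorrect: calling env.WORKER_B.add(1, 2) without await and
  // returning immediately can cut Worker B off before it completes.
}
```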

## Local development

Local development is supported for Service bindings. For each Worker, open a new terminal and use [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) in the relevant directory. When running `wrangler dev`, service bindings will show as `connected`/`not connected` depending on whether Wrangler can find a running `wrangler dev` session for that Worker. For example:

Terminal window

```
$ wrangler dev
...
Your worker has access to the following bindings:
- Services:
  - SOME_OTHER_WORKER: some-other-worker [connected]
  - ANOTHER_WORKER: another-worker [not connected]
```

Wrangler also supports running multiple Workers at once with one command. To try it out, pass multiple `-c` flags to Wrangler, like this: `wrangler dev -c wrangler.json -c ../other-worker/wrangler.json`. The first config will be treated as the _primary_ worker, which will be exposed over HTTP as usual at `http://localhost:8787`. The remaining config files will be treated as _secondary_ and will only be accessible via a service binding from the primary worker.

Warning

Support for running multiple Workers at once with one Wrangler command is experimental, and subject to change as we work on the experience. If you run into bugs or have any feedback, [open an issue on the workers-sdk repository ↗](https://github.com/cloudflare/workers-sdk/issues/new).

## Deployment

Workers using Service bindings are deployed separately.

When getting started and deploying for the first time, this means that the target Worker (Worker B in the examples above) must be deployed first, before Worker A. Otherwise, when you attempt to deploy Worker A, deployment will fail, because Worker A declares a binding to Worker B, which does not yet exist.

When making changes to existing Workers, in most cases you should:

* Deploy changes to Worker B first, in a way that is compatible with the existing Worker A. For example, add a new method to Worker B.
* Next, deploy changes to Worker A. For example, call the new method on Worker B, from Worker A.
* Finally, remove any unused code. For example, delete the previously used method on Worker B.

## Smart Placement

[Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) automatically places your Worker in an optimal location that minimizes latency.

You can use Smart Placement together with Service bindings to split your Worker into two services:

![Smart Placement and Service Bindings](https://developers.cloudflare.com/_astro/smart-placement-service-bindings.Ce58BYeF_ZmD4l8.webp) 

Refer to the [docs on Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/#multiple-workers) for more.

## Limits

Service bindings have the following limits:

* Each request to a Worker via a Service binding counts toward your [subrequest limit](https://developers.cloudflare.com/workers/platform/limits/#subrequests).
* A single request has a maximum of 32 Worker invocations, and each call to a Service binding counts towards this limit. Subsequent calls will throw an exception.
* Calling a Service binding does not count towards [simultaneous open connection limits](https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections).


---

---
title: HTTP
description: Facilitate Worker-to-Worker communication by forwarding Request objects.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# HTTP

Worker A that declares a Service binding to Worker B can forward a [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) object to Worker B, by calling the `fetch()` method that is exposed on the binding object.

For example, consider the following Worker that implements a [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/):

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_b",
  "main": "./src/workerB.js"
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "worker_b"
main = "./src/workerB.js"
```

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

The following Worker declares a binding to the Worker above:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_a",
  "main": "./src/workerA.js",
  "services": [
    {
      "binding": "WORKER_B",
      "service": "worker_b"
    }
  ]
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "worker_a"
main = "./src/workerA.js"

[[services]]
binding = "WORKER_B"
service = "worker_b"
```

And then can forward a request to it:

JavaScript

```
export default {
  async fetch(request, env) {
    return await env.WORKER_B.fetch(request);
  },
};
```

Note

If you construct a new request manually, rather than forwarding an existing one, ensure that you provide a valid and fully-qualified URL with a hostname. For example:

JavaScript

```
export default {
  async fetch(request, env) {
    // provide a valid URL
    let newRequest = new Request("https://valid-url.com", { method: "GET" });
    let response = await env.WORKER_B.fetch(newRequest);
    return response;
  },
};
```


---

---
title: RPC (WorkerEntrypoint)
description: Facilitate Worker-to-Worker communication via RPC.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# RPC (WorkerEntrypoint)

[Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings) allow one Worker to call into another, without going through a publicly-accessible URL.

You can use Service bindings to create your own internal APIs that your Worker makes available to other Workers. This can be done by extending the built-in `WorkerEntrypoint` class, and adding your own public methods. These public methods can then be directly called by other Workers on your Cloudflare account that declare a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) to this Worker.

The [RPC system in Workers](https://developers.cloudflare.com/workers/runtime-apis/rpc) is designed to feel as similar as possible to calling a JavaScript function in the same Worker. In most cases, you should be able to write code the same way you would if everything were in a single Worker.

Note

You can also use RPC to communicate between Workers and [Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/#invoke-rpc-methods).

## Example

For example, if Worker B implements the public method `add(a, b)`:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_b",
  "main": "./src/workerB.js"
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "worker_b"
main = "./src/workerB.js"
```

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch() {
    return new Response("Hello from Worker B");
  }

  add(a, b) {
    return a + b;
  }
}
```

TypeScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch() {
    return new Response("Hello from Worker B");
  }

  add(a: number, b: number) {
    return a + b;
  }
}
```

Python

```
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello from Worker B")

    def add(self, a: int, b: int) -> int:
        return a + b
```

Worker A can declare a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) to Worker B:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_a",
  "main": "./src/workerA.js",
  "services": [
    {
      "binding": "WORKER_B",
      "service": "worker_b"
    }
  ]
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "worker_a"
main = "./src/workerA.js"

[[services]]
binding = "WORKER_B"
service = "worker_b"
```

Making it possible for Worker A to call the `add()` method from Worker B:

JavaScript

```
export default {
  async fetch(request, env) {
    const result = await env.WORKER_B.add(1, 2);
    return new Response(String(result));
  },
};
```

TypeScript

```
export default {
  async fetch(request, env) {
    const result = await env.WORKER_B.add(1, 2);
    return new Response(String(result));
  },
};
```

Python

```
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        result = await self.env.WORKER_B.add(1, 2)
        return Response(f"Result: {result}")
```

You do not need to learn, implement, or think about special protocols to use the RPC system. The client, in this case Worker A, calls Worker B and tells it to execute a specific procedure using specific arguments that the client provides. This is accomplished with standard JavaScript classes.

## The `WorkerEntrypoint` Class

To provide RPC methods from your Worker, you must extend the `WorkerEntrypoint` class, as shown in the example below:

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async add(a, b) {
    return a + b;
  }
}
```

A new instance of the class is created every time the Worker is called. Note that even though the Worker is implemented as a class, it is still stateless — the class instance only lasts for the duration of the invocation. If you need to persist or coordinate state in Workers, you should use [Durable Objects](https://developers.cloudflare.com/durable-objects).
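A plain-JavaScript sketch of this behavior (the class here is illustrative, not a real entrypoint):

JavaScript

```
// Sketch: instance fields do not persist across invocations, because a
// fresh instance is constructed for every call.
class CounterEntrypoint {
  constructor() {
    this.calls = 0; // reset on every new instance
  }
  increment() {
    return ++this.calls;
  }
}

// Simulating two separate invocations, each with its own instance:
const first = new CounterEntrypoint().increment();
const second = new CounterEntrypoint().increment(); // still 1, not 2
```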

### Bindings (`env`)

The [env](https://developers.cloudflare.com/workers/runtime-apis/bindings) object is exposed as a class property of the `WorkerEntrypoint` class.

For example, a Worker that declares a binding to the [environment variable](https://developers.cloudflare.com/workers/configuration/environment-variables/) `GREETING`:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  "vars": {
    "GREETING": "Hello"
  }
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "my-worker"

[vars]
GREETING = "Hello"
```

Can access it by calling `this.env.GREETING`:

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  fetch() { return new Response("Hello from my-worker"); }

  async greet(name) {
    return this.env.GREETING + name;
  }
}
```

You can use any type of [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) this way.

### Lifecycle methods (`ctx`)

The [ctx](https://developers.cloudflare.com/workers/runtime-apis/context) object is exposed as a class property of the `WorkerEntrypoint` class.

For example, you can extend the lifetime of the invocation context by calling the `waitUntil()` method:

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  fetch() { return new Response("Hello from my-worker"); }

  async signup(email, name) {
    // sendEvent() will continue running, even after this method returns a value to the caller
    this.ctx.waitUntil(this.#sendEvent("signup", email));
    // Perform any other work
    return "Success";
  }

  async #sendEvent(eventName, email) {
    //...
  }
}
```

### Fetching static assets

If your Worker has a [static assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), you can call `this.env.ASSETS.fetch()` from within an RPC method. Since RPC methods do not receive a `request` parameter, construct a `Request` or URL with any hostname — the hostname is ignored by the assets binding, only the pathname matters:

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export class ImageWorker extends WorkerEntrypoint {
  async getImage(path) {
    return this.env.ASSETS.fetch(new Request(`https://assets.local${path}`));
  }
}
```

TypeScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export class ImageWorker extends WorkerEntrypoint {
  async getImage(path: string): Promise<Response> {
    return this.env.ASSETS.fetch(new Request(`https://assets.local${path}`));
  }
}
```

The caller can then invoke this method via RPC:

JavaScript

```
const response = await env.IMAGE_SERVICE.getImage("/images/logo.png");
```

TypeScript

```
const response = await env.IMAGE_SERVICE.getImage("/images/logo.png");
```

Note

When fetching assets via the binding, the hostname (for example, `assets.local`) is not meaningful — any valid hostname will work. Only the URL pathname is used to match assets. The convention `assets.local` is used for clarity.

## Named entrypoints

You can also export any number of named `WorkerEntrypoint` classes from within a single Worker, in addition to the default export. You can then declare a Service binding to a specific named entrypoint.

You can use this to group multiple pieces of compute together. For example, you might create a distinct `WorkerEntrypoint` for each permission role in your application, and use these to provide role-specific RPC methods:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "todo-app",
  "d1_databases": [
    {
      "binding": "D1",
      "database_name": "todo-app-db",
      "database_id": "<unique-ID-for-your-database>"
    }
  ]
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "todo-app"

[[d1_databases]]
binding = "D1"
database_name = "todo-app-db"
database_id = "<unique-ID-for-your-database>"
```

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export class AdminEntrypoint extends WorkerEntrypoint {
  async createUser(username) {
    await this.env.D1.prepare("INSERT INTO users (username) VALUES (?)")
      .bind(username)
      .run();
  }

  async deleteUser(username) {
    await this.env.D1.prepare("DELETE FROM users WHERE username = ?")
      .bind(username)
      .run();
  }
}

export class UserEntrypoint extends WorkerEntrypoint {
  async getTasks(userId) {
    return await this.env.D1.prepare("SELECT title FROM tasks WHERE user_id = ?")
      .bind(userId)
      .run();
  }

  async createTask(userId, title) {
    await this.env.D1.prepare("INSERT INTO tasks (user_id, title) VALUES (?, ?)")
      .bind(userId, title)
      .run();
  }
}

export default class extends WorkerEntrypoint {
  async fetch(request, env) {
    return new Response("Hello from my to do app");
  }
}
```

You can then declare a Service binding directly to `AdminEntrypoint` in another Worker:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "admin-app",
  "services": [
    {
      "binding": "ADMIN",
      "service": "todo-app",
      "entrypoint": "AdminEntrypoint"
    }
  ]
}
```

wrangler.toml

```
#:schema ./node_modules/wrangler/config-schema.json
name = "admin-app"

[[services]]
binding = "ADMIN"
service = "todo-app"
entrypoint = "AdminEntrypoint"
```

JavaScript

```
export default {
  async fetch(request, env) {
    await env.ADMIN.createUser("aNewUser");
    return new Response("Hello from admin app");
  },
};
```

You can learn more about how to configure D1 in the [D1 documentation](https://developers.cloudflare.com/d1/get-started/#3-bind-your-worker-to-your-d1-database).

You can try out a complete example of this to do app, as well as a Discord bot built with named entrypoints, by cloning the [cloudflare/js-rpc-and-entrypoints-demo repository ↗](https://github.com/cloudflare/js-rpc-and-entrypoints-demo) from GitHub.

## Further reading

* [ Lifecycle ](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/)
* [ Reserved Methods ](https://developers.cloudflare.com/workers/runtime-apis/rpc/reserved-methods/)
* [ Visibility and Security Model ](https://developers.cloudflare.com/workers/runtime-apis/rpc/visibility/)
* [ TypeScript ](https://developers.cloudflare.com/workers/runtime-apis/rpc/typescript/)
* [ Error handling ](https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/)


---

---
title: Vectorize
description: APIs available in Cloudflare Workers to interact with Vectorize.  Vectorize is Cloudflare's globally distributed vector database.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vectorize


---

---
title: Version metadata
description: Exposes Worker version metadata (`versionID` and `versionTag`). These fields can be added to events emitted from the Worker to send to downstream observability systems.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Version metadata

The version metadata binding can be used to access metadata associated with a [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) from inside the Workers runtime.

Worker version ID, version tag and timestamp of when the version was created are available through the version metadata binding. They can be used in events sent to [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) or to any third-party analytics/metrics service in order to aggregate by Worker version.

To use the version metadata binding, update your Worker's Wrangler file:

wrangler.jsonc

```
{
  "version_metadata": {
    "binding": "CF_VERSION_METADATA"
  }
}
```

wrangler.toml

```
[version_metadata]
binding = "CF_VERSION_METADATA"
```

### Interface

An example of how to access the version ID and version tag from within a Worker to send events to [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/):

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    const { id: versionId, tag: versionTag, timestamp: versionTimestamp } =
      env.CF_VERSION_METADATA;
    env.WAE.writeDataPoint({
      indexes: [versionId],
      blobs: [versionTag, versionTimestamp],
      // ...
    });
    // ...
  },
};
```

TypeScript

```
interface Env {
  CF_VERSION_METADATA: WorkerVersionMetadata;
  WAE: AnalyticsEngineDataset;
}

export default {
  async fetch(request, env, ctx) {
    const { id: versionId, tag: versionTag } = env.CF_VERSION_METADATA;
    env.WAE.writeDataPoint({
      indexes: [versionId],
      blobs: [versionTag],
      // ...
    });
    // ...
  },
} satisfies ExportedHandler<Env>;
```

---

---
title: Dynamic Workers
description: Spin up isolated Workers on demand to execute code.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Dynamic Workers

Spin up Workers at runtime to execute code on-demand in a secure, sandboxed environment.

Dynamic Workers let you spin up an unlimited number of Workers to execute arbitrary code specified at runtime. Dynamic Workers can be used as a lightweight alternative to containers for securely sandboxing code you don't trust.

Dynamic Workers are the lowest-level primitive for spinning up a Worker, giving you full control over defining how the Worker is composed, which bindings it receives, whether it can reach the network, and more.

### Get started

Deploy the [Dynamic Workers Playground ↗](https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers-playground) to create and run Workers dynamically from code you write or import from GitHub, with real-time logs and observability.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/dinasaur404/dynamic-workers-playground)

## Use Dynamic Workers for

Use this pattern when code needs to run quickly in a secure, isolated environment.

* **AI Agent "Code Mode"**: LLMs are trained to write code. Instead of supplying an agent with tool calls to perform tasks, give it an API and let it write and execute code. Save up to 80% in inference tokens and cost by allowing the agent to programmatically process data instead of sending it all through the LLM.
* **AI-generated applications / "Vibe Code"**: Run generated code for prototypes, projects, and automations in a secure, isolated sandboxed environment.
* **Fast development and previews**: Load prototypes, previews, and playgrounds in milliseconds.
* **Custom automations**: Create custom tools on the fly that execute a task, call an integration, or automate a workflow.
* **Platforms**: Run applications uploaded by your users.

## Features

Because you compose the Worker that runs the code at runtime, you control how that Worker is configured and what it can access.

* **[Bindings](https://developers.cloudflare.com/dynamic-workers/usage/bindings/)**: Decide which bindings and structured data the dynamic Worker receives.
* **[Observability](https://developers.cloudflare.com/dynamic-workers/usage/observability/)**: Attach Tail Workers and capture logs for each run.
* **[Network access](https://developers.cloudflare.com/dynamic-workers/usage/egress-control/)**: Intercept or block Internet access for outbound requests.

---

---
title: Workflows
description: APIs available in Cloudflare Workers to interact with Workflows. Workflows allow you to build durable, multi-step applications using Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Workflows

---

---
title: Cache
description: Control reading and writing from the Cloudflare global network cache.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Cache

## Background

The [Cache API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Cache) allows fine-grained control of reading and writing from the [Cloudflare global network ↗](https://www.cloudflare.com/network/) cache.

The Cache API is available globally but the contents of the cache do not replicate outside of the originating data center. A `GET /users` response can be cached in the originating data center, but will not exist in another data center unless it has been explicitly created.

Tiered caching

The `cache.put` method is not compatible with tiered caching. Refer to [Cache API](https://developers.cloudflare.com/workers/reference/how-the-cache-works/#cache-api) for more information. To perform tiered caching, use the [fetch API](https://developers.cloudflare.com/workers/reference/how-the-cache-works/#interact-with-the-cloudflare-cache).
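To illustrate the tiered-caching path, here is a sketch of caching through `fetch()` instead of `cache.put()`. The `cf` options shown are standard Workers fetch extensions; the `fetchImpl` parameter exists only to keep the sketch self-contained.

```javascript
// Sketch: cache via fetch() rather than cache.put(), since responses
// fetched this way participate in tiered caching. In a Worker you would
// call the global fetch directly; fetchImpl is a parameter only so the
// helper stands alone.
function tieredCacheFetch(request, fetchImpl = fetch) {
  return fetchImpl(request, {
    cf: {
      cacheTtl: 3600, // cache the response at the edge for one hour
      cacheEverything: true, // cache regardless of content type
    },
  });
}
```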

Workers deployed to custom domains have access to functional `cache` operations. So do [Pages functions](https://developers.cloudflare.com/pages/functions/), whether attached to custom domains or `*.pages.dev` domains.

However, any Cache API operations in the Cloudflare Workers dashboard editor and [Playground](https://developers.cloudflare.com/workers/playground/) previews will have no impact. For Workers fronted by [Cloudflare Access ↗](https://www.cloudflare.com/teams/access/), the Cache API is not currently available.

Note

This individualized zone cache object differs from Cloudflare’s Global CDN. For details, refer to [How the cache works](https://developers.cloudflare.com/workers/reference/how-the-cache-works/).

---

## Accessing Cache

The `caches.default` API is strongly influenced by the web browsers’ Cache API, but there are some important differences. For instance, Cloudflare Workers runtime exposes a single global cache object.

JavaScript

```
let cache = caches.default;
await cache.match(request);
```

You may create and manage additional Cache instances via the [caches.open ↗](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) method.

JavaScript

```
let myCache = await caches.open('custom:cache');
await myCache.match(request);
```

Note

When using the cache API, avoid overriding the hostname in cache requests, as this can lead to unnecessary DNS lookups and cache inefficiencies. Always use the hostname that matches the domain associated with your Worker.

JavaScript

```
// Recommended approach: key the cache on your Worker's hostname.
// request.url is read-only, so construct a new Request to change the key.
const cacheKey = new Request("https://your-Worker-hostname.com/", request);

let myCache = await caches.open('custom:cache');
let response = await myCache.match(cacheKey);
```

---

## Headers

Our implementation of the Cache API respects the following HTTP headers on the response passed to `put()`:

* `Cache-Control`  
   * Controls caching directives. This is consistent with [Cloudflare Cache-Control Directives](https://developers.cloudflare.com/cache/concepts/cache-control#cache-control-directives). Refer to [Edge TTL](https://developers.cloudflare.com/cache/how-to/configure-cache-status-code#edge-ttl) for a list of HTTP response codes and their TTL when `Cache-Control` directives are not present.
* `Cache-Tag`  
   * Allows resource purging by tag(s) later.
* `ETag`  
   * Allows `cache.match()` to evaluate conditional requests with `If-None-Match`.
* `Expires`  
   * A date string that specifies when the resource becomes invalid.
* `Last-Modified`  
   * Allows `cache.match()` to evaluate conditional requests with `If-Modified-Since`.

This differs from the web browser Cache API, which does not honor these headers on the request or response.
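As a sketch of how these headers come into play, a response can be made cacheable by setting them before calling `put()`. The header values below are illustrative:

```javascript
// Sketch: build a response with explicit caching headers before cache.put().
// The directive values are illustrative; choose ones that fit your content.
function buildCacheableResponse(body) {
  return new Response(body, {
    headers: {
      "Cache-Control": "public, max-age=3600", // honored as a one-hour TTL
      "Cache-Tag": "blog,homepage", // enables purge-by-tag later
    },
  });
}

// Inside a Worker handler you might then write:
// await caches.default.put(request, buildCacheableResponse(html));
```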

Note

Responses with `Set-Cookie` headers are never cached, because this sometimes indicates that the response contains unique data. To store a response with a `Set-Cookie` header, either delete that header or set `Cache-Control: private=Set-Cookie` on the response before calling `cache.put()`. The `private=Set-Cookie` directive caches the response while excluding the `Set-Cookie` header from the stored copy.
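A minimal sketch of the header-deletion approach:

```javascript
// Sketch: copy the response and drop Set-Cookie so it becomes cacheable.
// Responses returned by fetch() are immutable, so make a mutable copy first.
function stripSetCookie(response) {
  const copy = new Response(response.body, response);
  copy.headers.delete("Set-Cookie");
  return copy;
}
```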

---

## Methods

### `Put`

JavaScript

```
cache.put(request, response);
```

* `put(request, response)` : Promise  
   * Attempts to add a response to the cache, using the given request as the key. Returns a promise that resolves to `undefined` regardless of whether the cache successfully stored the response.

Note

The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods.

#### Parameters

* `request` string | Request  
   * Either a string or a [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) object to serve as the key. If a string is passed, it is interpreted as the URL for a new Request object.
* `response` Response  
   * A [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) object to store under the given key.

#### Invalid parameters

`cache.put` will throw an error if:

* The `request` passed is a method other than `GET`.
* The `response` passed has a `status` of [206 Partial Content ↗](https://www.webfx.com/web-development/glossary/http-status-codes/what-is-a-206-status-code/).
* The `response` passed contains the header `Vary: *`. The value of the `Vary` header is an asterisk (`*`). Refer to the [Cache API specification ↗](https://w3c.github.io/ServiceWorker/#cache-put) for more information.

#### Errors

`cache.put` returns a `413` error if `Cache-Control` instructs not to cache or if the response is too large.

### `Match`

JavaScript

```
cache.match(request, options);
```

* `match(request, options)` : `Promise<Response | undefined>`  
   * Returns a promise that resolves to the `Response` stored under the given request's key, or `undefined` if there is no match.

Note

The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods.

#### Parameters

* `request` string | Request  
   * The string or [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) object used as the lookup key. Strings are interpreted as the URL for a new `Request` object.
* `options`  
   * Can contain one possible property: `ignoreMethod` (Boolean). When `true`, the request is treated as a `GET` request regardless of its actual method.

Unlike the browser Cache API, Cloudflare Workers do not support the `ignoreSearch` or `ignoreVary` options on `match()`. You can accomplish this behavior by removing query strings or HTTP headers at `put()` time.
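For example, the `ignoreSearch` behavior can be emulated by normalizing the cache key the same way before both `put()` and `match()`, as in this sketch:

```javascript
// Sketch: emulate ignoreSearch by dropping the query string from the cache
// key. Apply the same normalization at both put() and match() time.
function normalizeCacheKey(request) {
  if (typeof request === "string") request = new Request(request);
  const url = new URL(request.url);
  url.search = ""; // remove query parameters from the key
  return new Request(url.toString(), request);
}
```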

Our implementation of the Cache API respects the following HTTP headers on the request passed to `match()`:

* `Range`  
   * Results in a `206` response if a matching response with a `Content-Length` header is found. Your Cloudflare cache always respects range requests, even if an `Accept-Ranges` header is on the response.
* `If-Modified-Since`  
   * Results in a `304` response if a matching response is found with a `Last-Modified` header with a value before the time specified in `If-Modified-Since`.
* `If-None-Match`  
   * Results in a `304` response if a matching response is found with an `ETag` header with a value that matches a value in `If-None-Match`.

Note

`cache.match()` never sends a subrequest to the origin. If no matching response is found in cache, the promise that `cache.match()` returns is fulfilled with `undefined`.

#### Errors

`cache.match` generates a `504` error response when the requested content is missing or expired. The Cache API does not expose this `504` directly to the Worker script, instead returning `undefined`. Nevertheless, the underlying `504` is still visible in Cloudflare Logs.

If you use Cloudflare Logs, you may see these `504` responses with the `RequestSource` of `edgeWorkerCacheAPI`. Again, these are expected if the cached asset was missing or expired. Note that `edgeWorkerCacheAPI` requests are already filtered out in other views, such as Cache Analytics. To filter out these requests or to filter requests by end users of your website only, refer to [Filter end users](https://developers.cloudflare.com/analytics/graphql-api/features/filtering/#filter-end-users).
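Putting `match()` and `put()` together, the undefined-on-miss behavior leads to the common cache-aside pattern. In this sketch, `cache`, `fetchOrigin`, and `waitUntil` are parameters only so the logic is self-contained; in a Worker they would be `caches.default`, the global `fetch`, and `ctx.waitUntil`:

```javascript
// Sketch: cache-aside lookup. Serve from this data center's cache when
// possible, fall back to the origin on a miss, and store the result
// without delaying the response to the client.
async function cachedFetch(cache, fetchOrigin, request, waitUntil) {
  let response = await cache.match(request);
  if (response) return response; // hit: serve the cached response
  response = await fetchOrigin(request); // miss: go to the origin
  // Store a clone so the original body can still be returned.
  waitUntil(cache.put(request, response.clone()));
  return response;
}
```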

### `Delete`

JavaScript

```
cache.delete(request, options);
```

* `delete(request, options)` : `Promise<boolean>`

Deletes the `Response` object from the cache and returns a `Promise` that resolves to a Boolean:

* `true`: The response was cached and has now been deleted.
* `false`: The response was not in the cache at the time of deletion.

Global purges

The `cache.delete` method only purges cached content in the data center in which the Worker was invoked. For global purges, refer to [Purging assets stored with the Cache API](https://developers.cloudflare.com/workers/reference/how-the-cache-works/#purge-assets-stored-with-the-cache-api).

#### Parameters

* `request` string | Request  
   * The string or [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) object used as the lookup key. Strings are interpreted as the URL for a new `Request` object.
* `options` object  
   * Can contain one possible property: `ignoreMethod` (Boolean). When `true`, the request is treated as a `GET` request regardless of its actual method.

---

## Related resources

* [How the cache works](https://developers.cloudflare.com/workers/reference/how-the-cache-works/)
* [Example: Cache using fetch()](https://developers.cloudflare.com/workers/examples/cache-using-fetch/)
* [Example: using the Cache API](https://developers.cloudflare.com/workers/examples/cache-api/)
* [Example: caching POST requests](https://developers.cloudflare.com/workers/examples/cache-post-request/)

---

---
title: Console
description: Supported methods of the `console` API in Cloudflare Workers
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Console

The `console` object provides a set of methods to help you emit logs, warnings, and debug code.

All standard [methods of the console API ↗](https://developer.mozilla.org/en-US/docs/Web/API/console) are present on the `console` object in Workers.

However, some methods are no-ops: they can be called without raising an error, but do nothing. This ensures compatibility with libraries that may use these APIs.

The table below enumerates each method, and the extent to which it is supported in Workers.

All methods noted as "✅ supported" have the following behavior:

* They will be written to the console in local dev (`npx wrangler@latest dev`)
* They will appear in live logs, when tailing logs in the dashboard or running [wrangler tail ↗](https://developers.cloudflare.com/workers/observability/log-from-workers/#use-wrangler-tail)
* They will create entries in the `logs` field of [Tail Worker ↗](https://developers.cloudflare.com/workers/observability/tail-workers/) events and [Workers Trace Events ↗](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/workers%5Ftrace%5Fevents/), which can be pushed to a destination of your choice via [Logpush ↗](https://developers.cloudflare.com/workers/observability/logpush/).

All methods noted as "🟡 partial support" have the following behavior:

* In both production and local development, the method can be safely called, but does nothing (no-op)
* In the [Workers Playground ↗](https://workers.cloudflare.com/playground), Quick Editor in the Workers dashboard, and remote preview mode (`wrangler dev --remote`) calling the method will behave as expected, print to the console, etc.

Refer to [Log from Workers ↗](https://developers.cloudflare.com/workers/observability/log-from-workers/) for more on debugging and adding logs to Workers.

| Method                                                                                                         | Behavior                                                                                           |
| -------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| [console.debug() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/debug%5Fstatic)                   | ✅ supported                                                                                        |
| [console.error() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/error%5Fstatic)                   | ✅ supported                                                                                        |
| [console.info() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/info%5Fstatic)                     | ✅ supported                                                                                        |
| [console.log() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/log%5Fstatic)                       | ✅ supported                                                                                        |
| [console.warn() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/warn%5Fstatic)                     | ✅ supported                                                                                        |
| [console.clear() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/clear%5Fstatic)                   | 🟡 partial support                                                                                 |
| [console.count() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/count%5Fstatic)                   | 🟡 partial support                                                                                 |
| [console.group() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/group%5Fstatic)                   | 🟡 partial support                                                                                 |
| [console.table() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/table%5Fstatic)                   | 🟡 partial support                                                                                 |
| [console.trace() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/trace%5Fstatic)                   | 🟡 partial support                                                                                 |
| [console.assert() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/assert%5Fstatic)                 | ⚪ no op                                                                                            |
| [console.countReset() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/countreset%5Fstatic)         | ⚪ no op                                                                                            |
| [console.dir() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/dir%5Fstatic)                       | ⚪ no op                                                                                            |
| [console.dirxml() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/dirxml%5Fstatic)                 | ⚪ no op                                                                                            |
| [console.groupCollapsed() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/groupcollapsed%5Fstatic) | ⚪ no op                                                                                            |
| [console.groupEnd ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/groupend%5Fstatic)               | ⚪ no op                                                                                            |
| [console.profile() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/profile%5Fstatic)               | ⚪ no op                                                                                            |
| [console.profileEnd() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/profileend%5Fstatic)         | ⚪ no op                                                                                            |
| [console.time() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/time%5Fstatic)                     | ⚪ no op                                                                                            |
| [console.timeEnd() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/timeend%5Fstatic)               | ⚪ no op                                                                                            |
| [console.timeLog() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/timelog%5Fstatic)               | ⚪ no op                                                                                            |
| [console.timeStamp() ↗](https://developer.mozilla.org/en-US/docs/Web/API/console/timestamp%5Fstatic)           | ⚪ no op                                                                                            |
| [console.createTask() ↗](https://developer.chrome.com/blog/devtools-modern-web-debugging/#linked-stack-traces) | 🔴 Will throw an exception in production, but works in local dev, Quick Editor, and remote preview |

---

---
title: Context (ctx)
description: The Context API in Cloudflare Workers, including props, exports, waitUntil and passThroughOnException.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Context (ctx)

The Context API provides methods to manage the lifecycle of your Worker or Durable Object.

Context is exposed via the following places:

* As the third parameter in all [handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/), including the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/). (`fetch(request, env, ctx)`)
* As a class property of the [WorkerEntrypoint class](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc) (`this.ctx`)

Note that the Context API is available only in stateless contexts, that is, not in [Durable Objects](https://developers.cloudflare.com/durable-objects/). However, Durable Objects have a different object, the [Durable Object State](https://developers.cloudflare.com/durable-objects/api/state/), which is available as `this.ctx` inside a Durable Object class and provides some of the same functionality as the Context API.

## `props`

`ctx.props` provides a way to pass additional configuration to a Worker based on the context in which it was invoked. For example, when your Worker is called by another Worker, `ctx.props` can provide information about the calling Worker.

For example, imagine that you are configuring a Worker called "frontend-worker", which must talk to another Worker called "doc-worker" in order to manipulate documents. You might configure "frontend-worker" with a [Service Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings) like:

wrangler.jsonc

```
{
  "services": [
    {
      "binding": "DOC_SERVICE",
      "service": "doc-worker",
      "entrypoint": "DocServiceApi",
      "props": {
        "clientId": "frontend-worker",
        "permissions": ["read", "write"]
      }
    }
  ]
}
```

wrangler.toml

```
[[services]]
binding = "DOC_SERVICE"
service = "doc-worker"
entrypoint = "DocServiceApi"

  [services.props]
  clientId = "frontend-worker"
  permissions = [ "read", "write" ]
```

Now frontend-worker can make calls to doc-worker with code like `env.DOC_SERVICE.getDoc(id)`. This will make a [Remote Procedure Call](https://developers.cloudflare.com/workers/runtime-apis/rpc/) invoking the method `getDoc()` of the class `DocServiceApi`, a [WorkerEntrypoint class](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc) exported by doc-worker.

The configuration contains a `props` value. This is an arbitrary JSON value. When the `DOC_SERVICE` binding is used, the `DocServiceApi` instance receiving the call will be able to access this `props` value as `this.ctx.props`. Here, we've configured `props` to specify that the call comes from frontend-worker, and that it should be allowed to read and write documents. However, the contents of `props` can be anything you want.

The Workers platform is designed to ensure that `ctx.props` can only be set by someone who has permission to edit and deploy the Worker to which it is being delivered. This means that you can trust that the content of `ctx.props` is authentic. There is no need to use secret keys or cryptographic signatures in a `ctx.props` value.

`ctx.props` can also be used to configure an RPC interface to represent a _specific_ resource, thus creating a "custom binding". For example, we could configure a Service Binding to our "doc-worker" which grants access only to a specific document:

wrangler.jsonc

```
{
  "services": [
    {
      "binding": "FOO_DOCUMENT",
      "service": "doc-worker",
      "entrypoint": "DocumentApi",
      "props": {
        "docId": "e366592caec1d88dff724f74136b58b5",
        "permissions": ["read", "write"]
      }
    }
  ]
}
```

wrangler.toml

```
[[services]]
binding = "FOO_DOCUMENT"
service = "doc-worker"
entrypoint = "DocumentApi"

  [services.props]
  docId = "e366592caec1d88dff724f74136b58b5"
  permissions = [ "read", "write" ]
```

Here, we've placed a `docId` property in `ctx.props`. The `DocumentApi` class could be designed to provide an API to the specific document identified by `ctx.props.docId`, and to enforce the given permissions.
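A hypothetical sketch of the check such a `DocumentApi` class might perform on each call. The function name and behavior are illustrative, not part of the platform API; inside the entrypoint class this logic would read `this.ctx.props`:

```javascript
// Hypothetical sketch: resolve the document ID from props and enforce the
// permissions granted in the binding configuration. The caller never gets
// to choose the document ID itself.
function authorizeDocAccess(props, action) {
  if (!Array.isArray(props.permissions) || !props.permissions.includes(action)) {
    throw new Error(`"${action}" permission not granted`);
  }
  return props.docId;
}
```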

## `exports`

Compatibility flag required

To use `ctx.exports`, you must use [the enable\_ctx\_exports compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags#enable-ctxexports).

`ctx.exports` provides automatically-configured "loopback" bindings for all of your top-level exports.

* For each top-level export that `extends WorkerEntrypoint` (or simply implements a fetch handler), `ctx.exports` automatically contains a [Service Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings).
* For each top-level export that `extends DurableObject` (and which has been configured with storage via a [migration](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/)), `ctx.exports` automatically contains a [Durable Object namespace binding](https://developers.cloudflare.com/durable-objects/api/namespace/).

For example:

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export class Greeter extends WorkerEntrypoint {
  greet(name) {
    return `Hello, ${name}!`;
  }
}

export default {
  async fetch(request, env, ctx) {
    let greeting = await ctx.exports.Greeter.greet("World");
    return new Response(greeting);
  },
};
```

In this example, the default fetch handler calls the `Greeter` class over RPC, just as you would through a Service Binding. However, no external configuration is required: `ctx.exports` is populated _automatically_ from your top-level exports.

### Specifying `ctx.props` when using `ctx.exports`

Loopback Service Bindings in `ctx.exports` have an extra capability that regular Service Bindings do not: the caller can specify the value of `ctx.props` that should be delivered to the callee.

JavaScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

export class Greeter extends WorkerEntrypoint {
  greet(name) {
    return `${this.ctx.props.greeting}, ${name}!`;
  }
}

export default {
  async fetch(request, env, ctx) {
    // Make a custom greeter that uses the greeting "Welcome".
    let greeter = ctx.exports.Greeter({ props: { greeting: "Welcome" } });

    // Greet the world. Returns "Welcome, World!"
    let greeting = await greeter.greet("World");

    return new Response(greeting);
  },
};
```

TypeScript

```
import { WorkerEntrypoint } from "cloudflare:workers";

type Props = {
  greeting: string;
};

export class Greeter extends WorkerEntrypoint<Env, Props> {
  greet(name: string) {
    return `${this.ctx.props.greeting}, ${name}!`;
  }
}

export default {
  async fetch(request, env, ctx) {
    // Make a custom greeter that uses the greeting "Welcome".
    let greeter = ctx.exports.Greeter({ props: { greeting: "Welcome" } });

    // Greet the world. Returns "Welcome, World!"
    let greeting = await greeter.greet("World");

    return new Response(greeting);
  },
} satisfies ExportedHandler<Env>;
```

Specifying props dynamically is permitted in this case because the caller is the same Worker, and thus can be presumed to be trusted to specify any props. The ability to customize props is particularly useful when the resulting binding is to be passed to another Worker over RPC or used in the `env` of a [dynamically-loaded worker](https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/).

Note that `props` values specified in this way are allowed to contain any "persistently" serializable type. This includes all basic [structured clonable data types ↗](https://developer.mozilla.org/en-US/docs/Web/API/Web%5FWorkers%5FAPI/Structured%5Fclone%5Falgorithm). It also includes Service Bindings themselves: you can place a Service Binding into the `props` of another Service Binding.
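Service Bindings in `props` are a Workers-specific capability, but the "persistently serializable" baseline is the standard structured clone algorithm, which can be sketched in plain JavaScript (the `props` values below are purely illustrative):

```js
// Values that survive structured cloning can be passed as props.
const props = { greeting: "Welcome", issued: new Date(0), tags: new Set(["a", "b"]) };
const copy = structuredClone(props);
console.log(copy.tags.has("a")); // true

// Functions are not cloneable and are rejected.
try {
  structuredClone({ fn: () => {} });
} catch (err) {
  console.log(err.name); // "DataCloneError"
}
```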

### TypeScript types for `ctx.exports` and `ctx.props`

If using TypeScript, you should use [the wrangler types command](https://developers.cloudflare.com/workers/wrangler/commands/general/#types) to auto-generate types for your project. The generated types will ensure `ctx.exports` is typed correctly.

When declaring an entrypoint class that accepts `props`, make sure to declare it as `extends WorkerEntrypoint<Env, Props>`, where `Props` is the type of `ctx.props`. See the example above.

## `waitUntil`

`ctx.waitUntil()` extends the lifetime of your Worker, allowing you to perform work that does not block the response and that may continue after the response has been returned. It accepts a `Promise`, which the Workers runtime will continue executing even after the Worker's [handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/) has returned a response.

`waitUntil` is commonly used to:

* Fire off events to external analytics providers. (note that when you use [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/), you do not need to use `waitUntil`)
* Put items into cache using the [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/)

`waitUntil` has a 30-second time limit

The Worker's lifetime is extended for up to 30 seconds after the response is sent or the client disconnects. This time limit is shared across all `waitUntil()` calls within the same request — if any Promises have not settled after 30 seconds, they are cancelled. When `waitUntil` tasks are cancelled, the following warning will be logged to [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/) and any attached [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/): `waitUntil() tasks did not complete within the allowed time after invocation end and have been cancelled.`

If you need to guarantee that work completes successfully, you should send messages to a [Queue](https://developers.cloudflare.com/queues/) and process them in a separate consumer Worker. Queues provide reliable delivery and automatic retries, ensuring your work is not lost.

Alternatives to waitUntil

If you are using `waitUntil()` to emit logs or exceptions, we recommend using [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) instead. Even if your Worker throws an uncaught exception, the Tail Worker will execute, ensuring that you can emit logs or exceptions regardless of the Worker's invocation status.

[Cloudflare Queues](https://developers.cloudflare.com/queues/) is purpose-built for performing work out-of-band, without blocking the response to the client.

You can call `waitUntil()` multiple times. Similar to `Promise.allSettled`, even if a promise passed to one `waitUntil` call is rejected, promises passed to other `waitUntil()` calls will still continue to execute.

For example:

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    // Forward / proxy original request
    let res = await fetch(request);

    // Add custom header(s)
    res = new Response(res.body, res);
    res.headers.set("x-foo", "bar");

    // Cache the response
    // NOTE: Does NOT block / wait
    ctx.waitUntil(caches.default.put(request, res.clone()));

    // Done
    return res;
  },
};
```
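The independent settlement of multiple `waitUntil()` promises can be sketched in plain JavaScript with `Promise.allSettled`; the task names and delay below are purely illustrative:

```js
// Each promise passed to waitUntil() settles independently: a rejection in
// one does not cancel the others, much like Promise.allSettled.
const tasks = [
  Promise.reject(new Error("analytics endpoint down")),
  new Promise((resolve) => setTimeout(() => resolve("cache write done"), 10)),
];

Promise.allSettled(tasks).then((results) => {
  console.log(results.map((r) => r.status)); // [ 'rejected', 'fulfilled' ]
});
```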

## `passThroughOnException`

Reuse of body

The Workers runtime uses streaming for request and response bodies; it does not buffer the body. As a result, if an exception occurs after the body has been consumed, `passThroughOnException()` cannot send the body again.

If this causes issues, we recommend cloning the request body and handling exceptions in code. This protects against uncaught code exceptions, but some failure types, such as exceeding CPU or memory limits, will not be mitigated.
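A minimal sketch of that cloning pattern using the standard `Request` API; the origin URL and body here are hypothetical:

```js
// Clone the request before consuming its body, so the body can still be
// forwarded to the origin if the Worker's own logic throws.
async function handleWithFallback() {
  const original = new Request("https://origin.example.com/submit", {
    method: "POST",
    body: JSON.stringify({ id: 1 }),
  });
  const backup = original.clone(); // clone before the body is consumed

  try {
    await original.json(); // consumes original's body
    throw new Error("something went wrong after reading the body");
  } catch (err) {
    // backup's body is still unread and could be sent to the origin instead.
    return backup.text();
  }
}

handleWithFallback().then((body) => console.log(body)); // {"id":1}
```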

The `passThroughOnException` method allows a Worker to [fail open ↗](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), and pass a request through to an origin server when a Worker throws an unhandled exception. This can be useful when using Workers as a layer in front of an existing service, allowing the service behind the Worker to handle any unexpected error cases that arise in your Worker.

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    // Proxy to origin on unhandled/uncaught exceptions
    ctx.passThroughOnException();
    throw new Error("Oops");
  },
};
```


---

---
title: Encoding
description: Takes a stream of code points as input and emits a stream of bytes.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Encoding

## TextEncoder

### Background

The `TextEncoder` takes a stream of code points as input and emits a stream of bytes. Encoding types passed to the constructor are ignored and a UTF-8 `TextEncoder` is created.

[TextEncoder() ↗](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder/TextEncoder) returns a newly constructed `TextEncoder` that generates a byte stream with UTF-8 encoding. `TextEncoder` takes no parameters and throws no exceptions.

### Constructor

JavaScript

```js
let encoder = new TextEncoder();
```

### Properties

* `encoder.encoding` DOMString read-only  
   * The name of the encoder as a string describing the method the `TextEncoder` uses (always `utf-8`).

### Methods

* `encode(input USVString)` : Uint8Array  
   * Encodes a string input.
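For example, encoding a string that contains a multi-byte character yields its UTF-8 bytes:

```js
const encoder = new TextEncoder();
console.log(encoder.encoding); // "utf-8"

// "€" is U+20AC, which is three bytes in UTF-8; "1" is one byte.
const bytes = encoder.encode("€1");
console.log(Array.from(bytes)); // [ 226, 130, 172, 49 ]
```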

---

## TextDecoder

### Background

The `TextDecoder` interface represents a UTF-8 decoder. Decoders take a stream of bytes as input and emit a stream of code points.

[TextDecoder() ↗](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/TextDecoder) returns a newly constructed `TextDecoder` that generates a code-point stream.

### Constructor

JavaScript

```js
let decoder = new TextDecoder();
```

### Properties

* `decoder.encoding` DOMString read-only  
   * The name of the decoder that describes the method the `TextDecoder` uses.
* `decoder.fatal` boolean read-only  
   * Indicates if the error mode is fatal.
* `decoder.ignoreBOM` boolean read-only  
   * Indicates if the byte-order marker is ignored.

### Methods

* `decode()` : DOMString  
   * Decodes using the method specified in the `TextDecoder` object. Learn more at [MDN’s TextDecoder documentation ↗](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/decode).
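A short round-trip sketch, including the `fatal` error mode:

```js
const decoder = new TextDecoder(); // UTF-8 by default
const bytes = new TextEncoder().encode("Hello, 世界");
console.log(decoder.decode(bytes)); // "Hello, 世界"

// With { fatal: true }, invalid UTF-8 input throws instead of
// producing U+FFFD replacement characters.
const strict = new TextDecoder("utf-8", { fatal: true });
try {
  strict.decode(new Uint8Array([0xff]));
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```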


---

---
title: EventSource
description: EventSource is a server-sent event API that allows a server to push events to a client.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# EventSource

## Background

The [EventSource ↗](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) interface is a server-sent event API that allows a server to push events to a client. The `EventSource` object is used to receive server-sent events. It connects to a server over HTTP and receives events in a text-based format.

### Constructor

JavaScript

```js
let eventSource = new EventSource(url, options);
```

* `url` USVString - The URL to which to connect.
* `options` EventSourceInit - An optional dictionary containing any optional settings.

By default, the `EventSource` will use the global `fetch()` function under the covers to make requests. If you need to use a different fetch implementation as provided by a Cloudflare Workers binding, you can pass the `fetcher` option:

JavaScript

```js
export default {
  async fetch(req, env) {
    let eventSource = new EventSource(url, { fetcher: env.MYFETCHER });
    // ...
  },
};
```

Note that the `fetcher` option is a Cloudflare Workers-specific extension.

### Properties

* `eventSource.url` USVString read-only  
   * The URL of the event source.
* `eventSource.readyState` number read-only  
   * The state of the connection.
* `eventSource.withCredentials` Boolean read-only  
   * A Boolean indicating whether the `EventSource` object was instantiated with cross-origin (CORS) credentials set (`true`), or not (`false`).

### Methods

* `eventSource.close()`  
   * Closes the connection.
* `eventSource.onopen`  
   * An event handler called when a connection is opened.
* `eventSource.onmessage`  
   * An event handler called when a message is received.
* `eventSource.onerror`  
   * An event handler called when an error occurs.

### Events

* `message`  
   * Fired when a message is received.
* `open`  
   * Fired when the connection is opened.
* `error`  
   * Fired when an error occurs.

### Class Methods

* `EventSource.from(readableStream ReadableStream)` : EventSource  
   * This is a Cloudflare Workers-specific extension that creates a new `EventSource` object from an existing `ReadableStream`. Such an instance does not initiate a new connection but instead attaches to the provided stream.
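`EventSource.from()` is only available in the Workers runtime, but the `ReadableStream` passed to it carries the standard Server-Sent Events wire format, which can be sketched with standard APIs (the event payload here is illustrative):

```js
// Build a ReadableStream carrying SSE-formatted text.
const sse = new Blob([
  "event: message\n",
  "data: hello\n",
  "\n",
]).stream();

// In a Worker (not in standard JavaScript) you could then attach:
//   const es = EventSource.from(sse);

// Outside Workers, verify the wire format by reading the stream back:
new Response(sse).text().then((text) => {
  console.log(text === "event: message\ndata: hello\n\n"); // true
});
```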


---

---
title: Fetch
description: An interface for asynchronously fetching resources via HTTP requests inside of a Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Fetch

The [Fetch API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Fetch%5FAPI) provides an interface for asynchronously fetching resources via HTTP requests inside of a Worker.

Note

Asynchronous tasks such as `fetch` must be executed within a [handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/). If you try to call `fetch()` within [global scope ↗](https://developer.mozilla.org/en-US/docs/Glossary/Global%5Fscope), your Worker will throw an error. Learn more about [the Request context](https://developers.cloudflare.com/workers/runtime-apis/request/#the-request-context).

Worker to Worker

Worker-to-Worker `fetch` requests are possible with [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) or by enabling the [global\_fetch\_strictly\_public compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#global-fetch-strictly-public).

## Syntax


JavaScript

```js
export default {
  async scheduled(controller, env, ctx) {
    return await fetch("https://example.com", {
      headers: {
        "X-Source": "Cloudflare-Workers",
      },
    });
  },
};
```

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

JavaScript

```js
addEventListener("fetch", (event) => {
  // NOTE: can’t use fetch here, as we’re not in an async scope yet
  event.respondWith(eventHandler(event));
});

async function eventHandler(event) {
  // fetch can be awaited here since `event.respondWith()` waits for the Promise it receives to settle
  const resp = await fetch(event.request);
  return resp;
}
```

Python

```py
from workers import WorkerEntrypoint, Response, fetch

class Default(WorkerEntrypoint):
    async def scheduled(self, controller, env, ctx):
        return await fetch("https://example.com", headers={"X-Source": "Cloudflare-Workers"})
```

* `fetch(resource, options optional)` : Promise`<Response>`  
   * Fetch returns a promise to a Response.

### Parameters

* [resource ↗](https://developer.mozilla.org/en-US/docs/Web/API/fetch#resource) Request | string | URL
* `options` options  
   * An object that defines the content and behavior of the request.  
   * `cache` `undefined | 'no-store' | 'no-cache'` optional  
         * The standard Fetch API `cache` mode. Only `cache: 'no-store'` and `cache: 'no-cache'` are supported. Any other value results in a `TypeError` with the message `Unsupported cache mode: <attempted-cache-mode>`.  
         * For all requests, this forwards the `Pragma: no-cache` and `Cache-Control: no-cache` headers to the origin.  
         * For `no-store`, requests to origins not hosted by Cloudflare bypass the use of Cloudflare's caches.  
         * For `no-cache`, requests to origins not hosted by Cloudflare are forced to revalidate with the origin before responding.

---

## How the `Accept-Encoding` header is handled

When making a subrequest with the `fetch()` API, you can specify which forms of compression to prefer that the server will respond with (if the server supports it) by including the [Accept-Encoding ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Encoding) header.

Workers supports both the gzip and brotli compression algorithms. Usually it is not necessary to specify `Accept-Encoding` or `Content-Encoding` headers in the Workers Runtime production environment – brotli or gzip compression is automatically requested when fetching from an origin and applied to the response when returning data to the client, depending on the capabilities of the client and origin server.

To support requesting brotli from the origin, you must enable the [brotli\_content\_encoding](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#brotli-content-encoding-support) compatibility flag in your Worker. Soon, this compatibility flag will be enabled by default for all Workers past an upcoming compatibility date.

### Passthrough behavior

One scenario where the `Accept-Encoding` header is useful is passing compressed data from a server through to the client: it allows the Worker to receive the compressed data stream directly from the server without it being decompressed first. As long as you do not read the body of the compressed response before returning it to the client, and you keep the `Content-Encoding` header intact, the response will "pass through" without being decompressed and then recompressed. This can be helpful when using Workers in front of origin servers or when fetching compressed media assets, ensuring that the response your Worker returns uses the same compression as the origin server.

In addition to a change in the content encoding, recompression is also needed when a response uses an encoding not supported by the client. For example, if a Worker requests either brotli or gzip but the client only supports gzip, recompression will still be needed if the server returns brotli-encoded data to the Worker (and will be applied automatically). Note that this behavior may also vary based on the [compression rules](https://developers.cloudflare.com/rules/compression-rules/), which can be used to configure what compression should be applied for different types of data on the server side.

TypeScript

```ts
export default {
  async fetch(request) {
    // Accept brotli or gzip compression
    const headers = new Headers({
      "Accept-Encoding": "br, gzip",
    });
    let response = await fetch("https://developers.cloudflare.com", {
      method: "GET",
      headers,
    });

    // As long as the original response body is returned and the Content-Encoding header is
    // preserved, the same encoded data will be returned without needing to be compressed again.
    return new Response(response.body, {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers,
    });
  },
};
```

## Related resources

* [Example: use fetch to respond with another site](https://developers.cloudflare.com/workers/examples/respond-with-another-site/)
* [Example: Fetch HTML](https://developers.cloudflare.com/workers/examples/fetch-html/)
* [Example: Fetch JSON](https://developers.cloudflare.com/workers/examples/fetch-json/)
* [Example: cache using Fetch](https://developers.cloudflare.com/workers/examples/cache-using-fetch/)
* Write your Worker code in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) for an optimized experience.
* [Error 526](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-526/#error-526-in-the-workers-context)
* [Fetch API in a partial setup](https://developers.cloudflare.com/workers/platform/known-issues/#fetch-api-in-cname-setup)


---

---
title: Handlers
description: Methods, such as `fetch()`, on Workers that can receive and process external inputs.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Handlers

Handlers are methods on Workers that can receive and process external inputs, and can be invoked from outside your Worker. For example, the `fetch()` handler receives an HTTP request, and can return a response:

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    return new Response('Hello World!');
  },
};
```

The following handlers are available within Workers:

* [ Alarm Handler ](https://developers.cloudflare.com/durable-objects/api/alarms/)
* [ Email Handler ](https://developers.cloudflare.com/email-routing/email-workers/runtime-api/)
* [ Fetch Handler ](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/)
* [ Queue Handler ](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer)
* [ Scheduled Handler ](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/)
* [ Tail Handler ](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/)

## Handlers in Python Workers

When you [write Workers in Python](https://developers.cloudflare.com/workers/languages/python/), handlers are placed in a class named `Default` that extends the [WorkerEntrypoint class](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/) (which you can import from the `workers` SDK module).


---

---
title: Alarm Handler
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Alarm Handler


---

---
title: Email Handler
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Email Handler


---

---
title: Fetch Handler
description: Incoming HTTP requests to a Worker are passed to the fetch() handler as a Request object. To respond to the request with a response, return a Response object:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Fetch Handler

## Background

Incoming HTTP requests to a Worker are passed to the `fetch()` handler as a [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) object. To respond to the request with a response, return a [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) object:

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    return new Response('Hello World!');
  },
};
```

Note

The Workers runtime does not support `XMLHttpRequest` (XHR). Learn the difference between `XMLHttpRequest` and `fetch()` in the [MDN ↗](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) documentation.

### Parameters

* `request` Request  
   * The incoming HTTP request.
* `env` object  
   * The [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) available to the Worker. As long as the [environment](https://developers.cloudflare.com/workers/wrangler/environments/) has not changed, the same object (equal by identity) may be passed to multiple requests. You can also [import env from cloudflare:workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global) to access bindings from anywhere in your code.
* `ctx.waitUntil(promise Promise)` : void  
   * Refer to [waitUntil](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil).
* `ctx.passThroughOnException()` : void  
   * Refer to [passThroughOnException](https://developers.cloudflare.com/workers/runtime-apis/context/#passthroughonexception).


---

---
title: Queue Handler
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Queue Handler


---

---
title: Scheduled Handler
description: When a Worker is invoked via a Cron Trigger, the scheduled() handler handles the invocation.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Scheduled Handler

## Background

When a Worker is invoked via a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/), the `scheduled()` handler handles the invocation.

Testing `scheduled()` handlers in local development

You can test the behavior of your `scheduled()` handler in local development using Wrangler.

Cron Triggers can be tested by passing the `--test-scheduled` flag to [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev). This exposes a `/__scheduled` route (or `/cdn-cgi/handler/scheduled` for Python Workers) that triggers the `scheduled()` handler via an HTTP request. To simulate different cron patterns, pass a `cron` query parameter.

Terminal window

```sh
npx wrangler dev --test-scheduled

curl "http://localhost:8787/__scheduled?cron=*+*+*+*+*"

curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" # Python Workers
```

---

## Syntax


JavaScript

```js
export default {
  async scheduled(controller, env, ctx) {
    ctx.waitUntil(doSomeTaskOnASchedule());
  },
};
```

TypeScript

```ts
interface Env {}

export default {
  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext,
  ) {
    ctx.waitUntil(doSomeTaskOnASchedule());
  },
};
```

Python

```py
from workers import WorkerEntrypoint, Response, fetch

class Default(WorkerEntrypoint):
    async def scheduled(self, controller, env, ctx):
        ctx.waitUntil(doSomeTaskOnASchedule())
```

### Properties

* `controller.cron` string  
   * The value of the [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) that started the `ScheduledEvent`.
* `controller.type` string  
   * The type of controller. This will always return `"scheduled"`.
* `controller.scheduledTime` number  
   * The time the `ScheduledEvent` was scheduled to be executed in milliseconds since January 1, 1970, UTC. It can be parsed as `new Date(controller.scheduledTime)`.
* `env` object  
   * An object containing the bindings associated with your Worker using ES modules format, such as KV namespaces and Durable Objects.
* `ctx` object  
   * An object containing the context associated with your Worker using ES modules format. Currently, this object just contains the `waitUntil` function.
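For example, `controller.scheduledTime` converts directly into a `Date`; the controller object below is a hypothetical stand-in for what the runtime delivers:

```js
// A stand-in for the controller the runtime passes to scheduled().
const controller = {
  cron: "0 0 * * *",
  type: "scheduled",
  scheduledTime: 1735689600000, // milliseconds since January 1, 1970, UTC
};

const when = new Date(controller.scheduledTime);
console.log(when.toISOString()); // "2025-01-01T00:00:00.000Z"
```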

### Handle multiple cron triggers

When you configure multiple [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) for a single Worker, each trigger invokes the same `scheduled()` handler. Use `controller.cron` to distinguish which schedule fired and run different logic for each.

wrangler.jsonc

```jsonc
{
  "triggers": {
    "crons": ["*/5 * * * *", "0 0 * * *"],
  },
}
```

wrangler.toml

```toml
[triggers]
crons = [ "*/5 * * * *", "0 0 * * *" ]
```


JavaScript

```js
export default {
  async scheduled(controller, env, ctx) {
    switch (controller.cron) {
      case "*/5 * * * *":
        ctx.waitUntil(fetch("https://example.com/api/sync"));
        break;
      case "0 0 * * *":
        ctx.waitUntil(env.MY_KV.put("last-cleanup", new Date().toISOString()));
        break;
    }
  },
};
```

TypeScript

```ts
export default {
  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext,
  ) {
    switch (controller.cron) {
      case "*/5 * * * *":
        ctx.waitUntil(fetch("https://example.com/api/sync"));
        break;
      case "0 0 * * *":
        ctx.waitUntil(env.MY_KV.put("last-cleanup", new Date().toISOString()));
        break;
    }
  },
} satisfies ExportedHandler<Env>;
```

The value of `controller.cron` is the exact cron expression string from your configuration. It must match character-for-character, including spacing.
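Because the match is exact, a lookup table keyed by the configured expressions can make the dispatch less error-prone than scattered inline string comparisons. The sketch below is illustrative, not part of the official API; the job names, endpoint, and `MY_KV` binding are placeholders.

```javascript
// Hypothetical sketch: map each configured cron expression to a named job.
// Keys must match the expressions in your triggers configuration
// character-for-character, including spacing.
const jobs = {
  "*/5 * * * *": "sync",
  "0 0 * * *": "daily-cleanup",
};

function jobFor(cron) {
  return jobs[cron] ?? null; // null signals config/code drift
}

// In your Worker, the default export's scheduled() handler would dispatch:
const handler = {
  async scheduled(controller, env, ctx) {
    switch (jobFor(controller.cron)) {
      case "sync":
        ctx.waitUntil(fetch("https://example.com/api/sync"));
        break;
      case "daily-cleanup":
        ctx.waitUntil(env.MY_KV.put("last-cleanup", new Date().toISOString()));
        break;
      default:
        console.warn(`No job registered for cron "${controller.cron}"`);
    }
  },
};
```

Note that even a single extra space in the key (for example, `"*/5  * * * *"`) would fail to match and fall through to the default case.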

### Methods

When a Worker is invoked by a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/), the Workers runtime starts a `ScheduledEvent`, which is handled by the `scheduled()` function exported from your Worker module. The `ctx` argument represents the context your function runs in, and contains the following method to control what happens next:

* `` ctx.waitUntil(promise ` Promise `) `` : ` void ` - Use this method to notify the runtime to wait for asynchronous tasks (for example, logging, sending analytics to third-party services, streaming, and caching). The first `ctx.waitUntil` to fail will be observed and recorded as the status in the [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) Past Events table; otherwise, the invocation will be reported as a success.


---

---
title: Tail Handler
description: The tail() handler is the handler you implement when writing a Tail Worker. Tail Workers can be used to process logs in real-time and send them to a logging or analytics service.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Tail Handler

## Background

The `tail()` handler is the handler you implement when writing a [Tail Worker](https://developers.cloudflare.com/workers/observability/logs/tail-workers/). Tail Workers can be used to process logs in real-time and send them to a logging or analytics service.

The `tail()` handler is called once each time the connected producer Worker is invoked.

To configure a Tail Worker, refer to [Tail Workers documentation](https://developers.cloudflare.com/workers/observability/logs/tail-workers/).

## Syntax

JavaScript

```

export default {
  async tail(events, env, ctx) {
    // Use ctx.waitUntil so the runtime does not cancel the request
    // before it completes.
    ctx.waitUntil(
      fetch("<YOUR_ENDPOINT>", {
        method: "POST",
        body: JSON.stringify(events),
      }),
    );
  },
};

```

### Parameters

* `events` array  
   * An array of [TailItems](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/#tailitems). One `TailItem` is collected for each event that triggers a Worker. For Workers for Platforms customers with a Tail Worker installed on the dynamic dispatch Worker, `events` will contain two elements: one for the dynamic dispatch Worker and one for the User Worker.
* `env` object  
   * An object containing the bindings associated with your Worker using [ES modules format](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/), such as KV namespaces and Durable Objects.
* `ctx` object  
   * An object containing the context associated with your Worker using [ES modules format](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/). Currently, this object just contains the `waitUntil` function.

### Properties

* `event.type` string  
   * The type of event. This will always return `"tail"`.
* `event.traces` array  
   * An array of [TailItems](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/#tailitems). One `TailItem` is collected for each event that triggers a Worker. For Workers for Platforms customers with a Tail Worker installed on the dynamic dispatch Worker, `event.traces` will contain two elements: one for the dynamic dispatch Worker and one for the user Worker.
* `` event.waitUntil(promise ` Promise `) `` : ` void `  
   * Refer to [waitUntil](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil). Note that unlike fetch event handlers, tail handlers do not return a value, so this is the only way for Tail Workers to do asynchronous work.

### `TailItems`

#### Properties

* `scriptName` string  
   * The name of the producer script.
* `event` object  
   * Contains information about the Worker’s triggering event.  
         * For fetch events: a [FetchEventInfo object](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/#fetcheventinfo)  
         * For other event types: `null`, currently.
* `eventTimestamp` number  
   * Measured in epoch time.
* `logs` array  
   * An array of [TailLogs](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/#taillog).
* `exceptions` array  
   * An array of [TailExceptions](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/#tailexception). A single Worker invocation might result in multiple unhandled exceptions, since a Worker can register multiple asynchronous tasks.
* `outcome` string  
   * The outcome of the Worker invocation, one of:  
         * `unknown`: The outcome status was not set.  
         * `ok`: The Worker invocation succeeded.  
         * `exception`: An unhandled exception was thrown. This can happen for many reasons, including:  
                  * An uncaught JavaScript exception.  
                  * A fetch handler that does not result in a Response.  
                  * An internal error.  
         * `exceededCpu`: The Worker invocation exceeded its CPU limits.  
         * `exceededMemory`: The Worker invocation exceeded memory limits.  
         * `scriptNotFound`: An internal error from difficulty retrieving the Worker script.  
         * `canceled`: The Worker invocation was canceled before it completed, commonly because the client disconnected before a response could be sent.  
         * `responseStreamDisconnected`: The response stream was disconnected during deferred proxying. Happens when either the client or server hangs up early.

Outcome is not the same as HTTP status.

Outcome is equivalent to the exit status of a script and an indicator of whether it has fully run to completion. A Worker outcome may differ from a response code if, for example:

* a script successfully processes a request but is logically designed to return a `4xx`/`5xx` response.
* a script sends a successful `200` response but an asynchronous task registered via `waitUntil()` later exceeds CPU or memory limits.
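For example, a Tail Worker that alerts on failures should key off `outcome` rather than the HTTP status of the response. A minimal sketch, assuming you only want to forward non-`ok` invocations:

```javascript
// Hypothetical sketch: split TailItems into healthy and failed invocations
// by their `outcome` field. Only "ok" means the Worker ran to completion;
// a 4xx/5xx response can still have outcome "ok".
const FAILURE_OUTCOMES = new Set([
  "exception",
  "exceededCpu",
  "exceededMemory",
  "scriptNotFound",
  "canceled",
  "responseStreamDisconnected",
]);

function partitionByOutcome(events) {
  const ok = [];
  const failed = [];
  for (const item of events) {
    (FAILURE_OUTCOMES.has(item.outcome) ? failed : ok).push(item);
  }
  return { ok, failed };
}
```

Inside a `tail()` handler, you might then POST only `failed` to an alerting endpoint via `ctx.waitUntil`.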

### `FetchEventInfo`

#### Properties

* `request` object  
   * A [TailRequest object](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/#tailrequest).
* `response` object  
   * A [TailResponse object](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/#tailresponse).

### `TailRequest`

#### Properties

* `cf` object  
   * Contains the data from [IncomingRequestCfProperties](https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties).
* `headers` object  
   * Header name/value entries (redacted by default). Header names are lowercased, and the values associated with duplicate header names are concatenated, with the string `", "` (comma space) interleaved, similar to [the Fetch standard ↗](https://fetch.spec.whatwg.org/#concept-header-list-get).
* `method` string  
   * The HTTP request method.
* `url` string  
   * The HTTP request URL (redacted by default).

#### Methods

* `getUnredacted()` object  
   * Returns a TailRequest object with unredacted properties.

Some of the properties of `TailRequest` are redacted by default to make it harder to accidentally record sensitive information, like user credentials or API tokens. The redactions use heuristic rules, so they are subject to false positives and negatives. Clients can call `getUnredacted()` to bypass redaction, but they should always be careful about what information is retained, whether using the redaction or not.

* Header redaction: The header value will be the string `"REDACTED"` when the (case-insensitive) header name is `cookie`/`set-cookie` or contains a substring `"auth"`, `"key"`, `"secret"`, `"token"`, or `"jwt"`.
* URL redaction: For each greedily matched substring of ID characters (a-z, A-Z, 0-9, '+', '-', '\_') in the URL, if it meets the following criteria for a hex or base-64 ID, the substring will be replaced with the string `"REDACTED"`:  
   * Hex ID: Contains 32 or more hex digits, and contains only hex digits and separators ('+', '-', '\_').  
   * Base-64 ID: Contains 21 or more characters, and contains at least two uppercase letters, two lowercase letters, and two digits.
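As a rough illustration of these rules (not the runtime's actual implementation, which may differ in detail), a redactor following the documented heuristics might look like:

```javascript
// Hypothetical sketch approximating the documented URL-redaction heuristics.
function looksLikeId(run) {
  const hexDigits = (run.match(/[0-9a-fA-F]/g) || []).length;
  // Hex ID: 32+ hex digits, only hex digits and separators ('+', '-', '_').
  const isHexId = hexDigits >= 32 && /^[0-9a-fA-F+\-_]+$/.test(run);
  // Base-64 ID: 21+ chars with at least two uppercase, lowercase, and digits.
  const isBase64Id =
    run.length >= 21 &&
    (run.match(/[A-Z]/g) || []).length >= 2 &&
    (run.match(/[a-z]/g) || []).length >= 2 &&
    (run.match(/[0-9]/g) || []).length >= 2;
  return isHexId || isBase64Id;
}

function redactUrl(url) {
  // Greedily match runs of ID characters and replace the ID-like ones.
  return url.replace(/[A-Za-z0-9+\-_]+/g, (run) =>
    looksLikeId(run) ? "REDACTED" : run,
  );
}
```

Heuristics like these trade false positives for safety: a long hex token in a path is redacted, while ordinary path segments pass through untouched.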

### `TailResponse`

#### Properties

* `status` number  
   * The HTTP status code.

### `TailLog`

Records information sent to console functions.

#### Properties

* `timestamp` number  
   * Measured in epoch time.
* `level` string  
   * A string indicating the console function that was called. One of: `debug`, `info`, `log`, `warn`, `error`.
* `message` object  
   * The array of parameters passed to the console function.
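A Tail Worker commonly flattens these entries into text lines before shipping them to a log sink. A minimal sketch (the formatting choices here are arbitrary, not a Cloudflare convention):

```javascript
// Hypothetical sketch: render a TailLog entry as a plain text line.
// `message` is the array of arguments passed to the console function.
function formatLog(log) {
  const time = new Date(log.timestamp).toISOString();
  const message = log.message
    .map((part) => (typeof part === "string" ? part : JSON.stringify(part)))
    .join(" ");
  return `${time} [${log.level}] ${message}`;
}
```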

### `TailException`

Records an unhandled exception that occurred during the Worker invocation.

#### Properties

* `timestamp` number  
   * Measured in epoch time.
* `name` string  
   * The error type (for example, `Error`, `TypeError`, etc.).
* `message` object  
   * The error description (for example, `"x" is not a function`).

## Related resources

* [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) \- Configure a Tail Worker to receive information about the execution of other Workers.


---

---
title: Headers
description: Access HTTP request and response headers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Headers

## Background

All HTTP request and response headers are available through the [Headers API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Headers).

When a header name possesses multiple values, those values will be concatenated as a single, comma-delimited string value. This means that `Headers.get` will always return a string or a `null` value. This applies to all header names except for `Set-Cookie`, which requires `Headers.getAll`. This is documented below in [Differences](#differences).

JavaScript

```

let headers = new Headers();

headers.get('x-foo'); //=> null

headers.set('x-foo', '123');
headers.get('x-foo'); //=> "123"

headers.set('x-foo', 'hello');
headers.get('x-foo'); //=> "hello"

headers.append('x-foo', 'world');
headers.get('x-foo'); //=> "hello, world"

```

## Differences

The Workers implementation of the `Headers` API differs from the web standard in several ways. These differences are intentional, and reflect the server-side nature of the Workers runtime.

TypeScript users

Workers type definitions (from `@cloudflare/workers-types` or generated via [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types)) define a `Headers` type that includes Workers-specific methods like `getAll()`. This type is not directly compatible with the standard `Headers` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions.

### `getAll()` method

Despite the fact that the `Headers.getAll` method has been made obsolete in web browsers, Workers still provides this method for use with the `Set-Cookie` header. This is because cookies often contain date strings, which include commas. This can make parsing multiple values in a `Set-Cookie` header difficult.

Any attempts to use `Headers.getAll` with other header names will throw an error. A brief history of `Headers.getAll` is available in this [GitHub issue ↗](https://github.com/whatwg/fetch/issues/973).

### `Set-Cookie` handling

Due to [RFC 6265 ↗](https://www.rfc-editor.org/rfc/rfc6265) prohibiting folding multiple `Set-Cookie` headers into a single header, the `Headers.append` method allows you to set multiple `Set-Cookie` response headers instead of appending the value onto the existing header.

JavaScript

```

const headers = new Headers();

headers.append("Set-Cookie", "cookie1=value_for_cookie_1; Path=/; HttpOnly;");
headers.append("Set-Cookie", "cookie2=value_for_cookie_2; Path=/; HttpOnly;");

console.log(headers.getAll("Set-Cookie"));
// Array(2) [ cookie1=value_for_cookie_1; Path=/; HttpOnly;, cookie2=value_for_cookie_2; Path=/; HttpOnly; ]

```

### `USVString` return type

In Cloudflare Workers, the `Headers.get` method returns a [USVString ↗](https://mdn2.netlify.app/en-us/docs/web/api/usvstring/) instead of a [ByteString ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/String), which is specified by the web standard. For most scenarios, this should have no noticeable effect. To compare the differences between these two string classes, refer to this [Playground example ↗](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbMutvvsCMALAJx-cAzAHZeANkG8AHAAZOU7t2EBWAEy9eqsXNWdOALg5HjbHv34jxk2fMUr1m7Z12cAsACgAwuioQApr7YACJQAM4w6KFQ0D76JBhYeATEJFRwwH4MAERQNH4AHgB0AFahWaSoUGAB6Zk5eUWlWR7evgEQ2AAqdDB+cXAwMGBQAMYEUD7IxXAAbnChIwiwEADUwOi44H4eHgURSCS4fqhw4BAkAN7uAJDzdFQj8X4QIwAWABQIfgCOIH6hEAAlJcbtdqucxucGCQsoBeDcAXHtZUHgkggCCoKSeAgkaFUPwAdxInQKEAAog8Nn4EO9AYUAiNKe9IYDkc8SPTKbgsVCSABlCBLKgAc0KqAQ6GAnleiG8R3ehQVaIx3JZoIZVFC6GqhTA6CF7yynVeYRIJrgJAAqryAGr8wVCkj46KvEjmyH6LIAGhIzLVPk12t1+sNxtCprD5oAQnR-Hbcg6nRAXW7sT5LZ0AGLYKQe70co5cgiq67XZDIEgACT8cCOCAjXxIoRAg0iflwJAg6EdmAA1iQfGA6I7nSRo7GBfHQt6yGj+yAEKCy6bgEM-BlfOM0yBQv9LTa48LQoUiaHUiSSMM8cOwGASDBBec4Ivy-jEFR466KLOk2FCqzzq81a1mGuIEpWQFUqE7wXDC+ZttgkJZHEcGFucAC+xbXF8EDzlQZ6EgASv8EQan4BpSn4Ix9pQ5xJn4JAAAatAGfgMa6NAdoBJBEeE-r0YBNaQR2XY7vRdFzhAMCzgyK6IGE-qFF6lwkAJwEkBhNxoe4aEeCYelGGYAiWBI0hyAoShqBoWg6HoLQ+P4gQhLxUQxFQcQJDg+CEKQaQZNkGSEF5cDlPEVQ1H5WRkLqZDNF49ntF0PR9K6gzDJCExUFMmpUDs7gXFkwBwLkAD66ybNUSH1EcjRlDp7j6Q1rCGRYogmTY5n2FZTguMwHhAA).

## Cloudflare headers

Cloudflare sets a number of its own custom headers on incoming requests and outgoing responses. While some are used for its own tracking and bookkeeping, many of these can be useful to your own applications or Workers, too.

For a list of documented Cloudflare request headers, refer to [Cloudflare HTTP headers](https://developers.cloudflare.com/fundamentals/reference/http-headers/).
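For instance, a Worker might read a few of these headers off the incoming request. The helper below is a hypothetical sketch; it assumes the request arrived through Cloudflare, so the headers are present:

```javascript
// Hypothetical sketch: collect commonly used Cloudflare request headers.
// Header lookups via the Headers API are case-insensitive.
function clientInfo(headers) {
  return {
    ip: headers.get("CF-Connecting-IP"), // client's original IP address
    country: headers.get("CF-IPCountry"), // two-letter country code
    ray: headers.get("CF-Ray"), // request trace ID
  };
}
```

In a fetch handler, you would call this as `clientInfo(request.headers)`.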

## Related resources

* [Logging headers to console](https://developers.cloudflare.com/workers/examples/logging-headers/) \- Review how to log headers in the console.
* [Cloudflare HTTP headers](https://developers.cloudflare.com/fundamentals/reference/http-headers/) \- Contains a list of specific headers that Cloudflare adds.


---

---
title: HTMLRewriter
description: Build comprehensive and expressive HTML parsers inside of a Worker application.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# HTMLRewriter

## Background

The `HTMLRewriter` class allows developers to build comprehensive and expressive HTML parsers inside of a Cloudflare Workers application. It can be thought of as a jQuery-like experience directly inside of your Workers application. Leaning on a powerful JavaScript API to parse and transform HTML, `HTMLRewriter` allows developers to build deeply functional applications.

The `HTMLRewriter` class should be instantiated once in your Workers script, with a number of handlers attached using the `on` and `onDocument` functions.

---

## Constructor

JavaScript

```

new HTMLRewriter()
  .on("*", new ElementHandler())
  .onDocument(new DocumentHandler());

```

---

## Global types

Throughout the `HTMLRewriter` API, there are a few consistent types that many properties and methods use:

* `Content` string | Response | ReadableStream  
   * Content inserted in the output stream should be a string, [Response](https://developers.cloudflare.com/workers/runtime-apis/response/), or [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/).
* `ContentOptions` Object  
   * `{ html: Boolean }` Controls the way the HTMLRewriter treats inserted content. If the `html` boolean is set to true, content is treated as raw HTML. If the `html` boolean is set to false or not provided, content will be treated as text and proper HTML escaping will be applied to it.

---

## Handlers

There are two handler types that can be used with `HTMLRewriter`: element handlers and document handlers.

### Element Handlers

An element handler responds to any incoming element that matches the selector it was attached with using the `.on` function of an `HTMLRewriter` instance. An element handler can implement `element`, `comments`, and `text` methods. The following example processes `div` elements with an `ElementHandler` class.

JavaScript

```

class ElementHandler {
  element(element) {
    // An incoming element, such as `div`
    console.log(`Incoming element: ${element.tagName}`);
  }

  comments(comment) {
    // An incoming comment
  }

  text(text) {
    // An incoming piece of text
  }
}

async function handleRequest(req) {
  const res = await fetch(req);

  return new HTMLRewriter().on("div", new ElementHandler()).transform(res);
}

```

### Document Handlers

A document handler represents the incoming HTML document. A number of functions can be defined on a document handler to query and manipulate a document’s `doctype`, `comments`, `text`, and `end`. Unlike an element handler, a document handler’s `doctype`, `comments`, `text`, and `end` functions are not scoped by a particular selector. A document handler's functions are called for all the content on the page including the content outside of the top-level HTML tag:

JavaScript

```

class DocumentHandler {
  doctype(doctype) {
    // An incoming doctype, such as <!DOCTYPE html>
  }

  comments(comment) {
    // An incoming comment
  }

  text(text) {
    // An incoming piece of text
  }

  end(end) {
    // The end of the document
  }
}

```

#### Async Handlers

All functions defined on both element and document handlers can return either `void` or a `Promise<void>`. Making your handler function `async` allows you to access external resources such as an API via fetch, Workers KV, Durable Objects, or the cache.

JavaScript

```

class UserElementHandler {
  async element(element) {
    let response = await fetch(new Request("/user"));

    // fill in user info using response
  }
}

async function handleRequest(req) {
  const res = await fetch(req);

  // run the user element handler via HTMLRewriter on a div with ID `user_info`
  return new HTMLRewriter()
    .on("div#user_info", new UserElementHandler())
    .transform(res);
}

```

### Element

The `element` argument, used only in element handlers, is a representation of a DOM element. A number of methods exist on an element to query and manipulate it:

#### Properties

* `tagName` string  
   * The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element’s tag.
* `attributes` Iterator read-only  
   * A `[name, value]` pair of the tag’s attributes.
* `removed` boolean  
   * Indicates whether the element has been removed or replaced by one of the previous handlers.
* `namespaceURI` string  
   * Represents the [namespace URI ↗](https://infra.spec.whatwg.org/#namespaces) of an element.

#### Methods

* `getAttribute(name: string)` : `string | null`  
   * Returns the value for a given attribute name on the element, or `null` if it is not found.
* `hasAttribute(name: string)` : `boolean`  
   * Returns a boolean indicating whether an attribute exists on the element.
* `setAttribute(name: string, value: string)` : `Element`  
   * Sets an attribute to a provided value, creating the attribute if it does not exist.
* `removeAttribute(name: string)` : `Element`  
   * Removes the attribute.
* `before(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content before the element. Refer to [Global types](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/#global-types) for more information on `Content` and `ContentOptions`.
* `after(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content right after the element.
* `prepend(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content right after the start tag of the element.
* `append(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content right before the end tag of the element.
* `replace(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Removes the element and inserts content in place of it.
* `setInnerContent(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Replaces the content of the element.
* `remove()` : `Element`  
   * Removes the element with all its content.
* `removeAndKeepContent()` : `Element`  
   * Removes the start tag and end tag of the element but keeps its inner content intact.
* `onEndTag(handler: (endTag: EndTag) => void | Promise<void>)` : `void`  
   * Registers a handler that is invoked when the end tag of the element is reached.

### EndTag

The `endTag` argument, used only in handlers registered with `element.onEndTag`, is a limited representation of a DOM element.

#### Properties

* `name` string  
   * The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element's tag.

#### Methods

* `before(content: Content, contentOptions?: ContentOptions)` : `EndTag`  
   * Inserts content right before the end tag.
* `after(content: Content, contentOptions?: ContentOptions)` : `EndTag`  
   * Inserts content right after the end tag. Refer to [Global types](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/#global-types) for more information on `Content` and `ContentOptions`.
* `remove()` : `EndTag`  
   * Removes the element with all its content.

### Text chunks

Since Cloudflare performs zero-copy streaming parsing, text chunks are not the same thing as text nodes in the lexical tree. A lexical tree text node can be represented by multiple chunks, as they arrive over the wire from the origin.

Consider the following markup: `<div>Hey. How are you?</div>`. It is possible that the Workers script will not receive the entire text node from the origin at once; instead, the `text` element handler will be invoked for each received part of the text node. For example, the handler might be invoked with `"Hey. How "`, then `"are you?"`. When the last chunk arrives, the text's `lastInTextNode` property will be set to `true`. Developers should make sure to concatenate these chunks together.
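One common pattern is to accumulate chunks in the handler and act only once `lastInTextNode` is `true`. The class below is a stand-alone sketch of that bookkeeping; in a real handler you would call `chunk.replace(...)` on the final chunk rather than returning the assembled string.

```javascript
// Hypothetical sketch: buffer text chunks until the text node is complete.
class TextAccumulator {
  constructor() {
    this.buffer = "";
  }

  // Mirrors the text(chunk) handler: returns the full text node once
  // complete, or null while more chunks are still expected.
  text(chunk) {
    this.buffer += chunk.text;
    if (chunk.lastInTextNode) {
      const full = this.buffer;
      this.buffer = "";
      return full;
    }
    return null;
  }
}
```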

#### Properties

* `removed` boolean  
   * Indicates whether the element has been removed or replaced by one of the previous handlers.
* `text` string read-only  
   * The text content of the chunk. Could be empty if the chunk is the last chunk of the text node.
* `lastInTextNode` boolean read-only  
   * Specifies whether the chunk is the last chunk of the text node.

#### Methods

* `before(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content before the element. Refer to [Global types](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/#global-types) for more information on `Content` and `ContentOptions`.
* `after(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content right after the element.
* `replace(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Removes the element and inserts content in place of it.
* `remove()` : `Element`  
   * Removes the element with all its content.

### Comments

The `comments` function on an element handler allows developers to query and manipulate HTML comment tags.

JavaScript

```

class ElementHandler {
  comments(comment) {
    // An incoming comment element, such as <!-- My comment -->
  }
}

```

#### Properties

* `comment.removed` boolean  
   * Indicates whether the element has been removed or replaced by one of the previous handlers.
* `comment.text` string  
   * The text of the comment. This property can be assigned different values, to modify the comment's text.

#### Methods

* `before(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content before the element. Refer to [Global types](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/#global-types) for more information on `Content` and `ContentOptions`.
* `after(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Inserts content right after the element.
* `replace(content: Content, contentOptions?: ContentOptions)` : `Element`  
   * Removes the element and inserts content in place of it.
* `remove()` : `Element`  
   * Removes the element with all its content.

### Doctype

The `doctype` function on a document handler allows developers to query a document's [doctype ↗](https://developer.mozilla.org/en-US/docs/Glossary/Doctype).

JavaScript

```

class DocumentHandler {
  doctype(doctype) {
    // An incoming doctype element, such as
    // <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
  }
}

```

#### Properties

* `doctype.name` string | null read-only  
   * The doctype name.
* `doctype.publicId` string | null read-only  
   * The quoted string in the doctype after the PUBLIC atom.
* `doctype.systemId` string | null read-only  
   * The quoted string in the doctype after the SYSTEM atom or immediately after the `publicId`.

### End

The `end` function on a document handler allows developers to append content to the end of a document.

JavaScript

```

class DocumentHandler {
  end(end) {
    // The end of the document
  }
}

```

#### Methods

* `append(content: Content, contentOptions?: ContentOptions)` : `DocumentEnd`  
   * Inserts content after the end of the document. Refer to [Global types](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/#global-types) for more information on `Content` and `ContentOptions`.

---

## Selectors

`HTMLRewriter` supports the following subset of CSS selectors for use with the `on` method:

* `*`  
   * Any element.
* `E`  
   * Any element of type E.
* `E:nth-child(n)`  
   * An E element, the n-th child of its parent.
* `E:first-child`  
   * An E element, first child of its parent.
* `E:nth-of-type(n)`  
   * An E element, the n-th sibling of its type.
* `E:first-of-type`  
   * An E element, first sibling of its type.
* `E:not(s)`  
   * An E element that does not match the compound selector s.
* `E.warning`  
   * An E element belonging to the class warning.
* `E#myid`  
   * An E element with ID equal to myid.
* `E[foo]`  
   * An E element with a foo attribute.
* `E[foo="bar"]`  
   * An E element whose foo attribute value is exactly equal to bar.
* `E[foo="bar" i]`  
   * An E element whose foo attribute value is exactly equal to any (ASCII-range) case-permutation of bar.
* `E[foo="bar" s]`  
   * An E element whose foo attribute value is exactly and case-sensitively equal to bar.
* `E[foo~="bar"]`  
   * An E element whose foo attribute value is a list of whitespace-separated values, one of which is exactly equal to bar.
* `E[foo^="bar"]`  
   * An E element whose foo attribute value begins exactly with the string bar.
* `E[foo$="bar"]`  
   * An E element whose foo attribute value ends exactly with the string bar.
* `E[foo*="bar"]`  
   * An E element whose foo attribute value contains the substring bar.
* `E[foo|="en"]`  
   * An E element whose foo attribute value is a hyphen-separated list of values beginning with en.
* `E F`  
   * An F element descendant of an E element.
* `E > F`  
   * An F element child of an E element.

---

## Errors

If a handler throws an exception, parsing is immediately halted, the transformed response body is errored with the thrown exception, and the untransformed response body is canceled (closed). If the transformed response body was already partially streamed back to the client, the client will see a truncated response.

JavaScript

```

async function handle(request) {
  let oldResponse = await fetch(request);
  let newResponse = new HTMLRewriter()
    .on("*", {
      element(element) {
        throw new Error("A really bad error.");
      },
    })
    .transform(oldResponse);

  // At this point, an expression like `await newResponse.text()`
  // will throw `new Error("A really bad error.")`.
  // Thereafter, any use of `newResponse.body` will throw the same error,
  // and `oldResponse.body` will be closed.

  // Alternatively, this will produce a truncated response to the client:
  return newResponse;
}

```

---

## Related resources

* [Introducing HTMLRewriter ↗](https://blog.cloudflare.com/introducing-htmlrewriter/)
* [Tutorial: Localize a Website](https://developers.cloudflare.com/pages/tutorials/localize-a-website/)
* [Example: rewrite links](https://developers.cloudflare.com/workers/examples/rewrite-links/)
* [Example: Inject Turnstile](https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/)
* [Example: SPA shell with bootstrap data](https://developers.cloudflare.com/workers/examples/spa-shell/)


---

---
title: MessageChannel
description: Channel messaging with MessageChannel and MessagePort
image: https://developers.cloudflare.com/dev-products-preview.png
---


# MessageChannel

## Background

The [MessageChannel API ↗](https://developer.mozilla.org/en-US/docs/Web/API/MessageChannel) provides a way to create a communication channel between different parts of your application.

The Workers runtime provides a minimal implementation of the `MessageChannel` API that is currently limited to use within a single Worker instance. This means that you can use `MessageChannel` to send messages between different parts of your Worker, but not across different Workers.

JavaScript

```

const { port1, port2 } = new MessageChannel();

port2.onmessage = (event) => {
  console.log('Received message:', event.data);
};

// Messages posted on port1 are delivered to port2.
port1.postMessage('Hello from port1!');

```

Any value that can be used with the `structuredClone(...)` API can be sent over the port.
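For instance, complex values such as a `Map` with nested objects survive the trip intact. A minimal sketch, using `structuredClone()` directly since it applies the same algorithm the port uses:

```javascript
// Any structured-cloneable value can cross a MessagePort.
// structuredClone() applies the same algorithm the port uses internally.
const original = new Map([["key", { nested: [1, 2, 3] }]]);
const copy = structuredClone(original);

console.log(copy.get("key").nested); // [ 1, 2, 3 ]
console.log(copy !== original); // true -- a deep copy, not a reference
```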

## Differences

There are a number of key limitations to the `MessageChannel` API in Workers:

* Transfer lists are currently not supported. This means that you will not be able to transfer ownership of objects like `ArrayBuffer` or `MessagePort` between ports.
* The `MessagePort` is not yet serializable. This means that you cannot send a `MessagePort` object through the `postMessage` method or via JSRPC calls.
* The `'messageerror'` event is only partially supported. It is triggered when the `onmessage` handler throws an error, but not when message data fails to serialize or deserialize. In that case, the error is thrown when the `postMessage` method is called on the sending port.
* The `'close'` event is emitted on both ports when either port is closed. However, it is not emitted when the Worker is terminated or when one of the ports is garbage collected.
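A consequence of the serialization behavior above is that posting a value that cannot be structured-cloned (a function, for example) fails synchronously at the sending port rather than producing a `'messageerror'` event. A minimal sketch:

```javascript
const { port1, port2 } = new MessageChannel();

let sendError = null;
try {
  // Functions are not structured-cloneable, so the error surfaces
  // here at postMessage() rather than as a 'messageerror' event.
  port1.postMessage(() => {});
} catch (err) {
  sendError = err;
}

console.log(sendError ? sendError.name : "no error");

port1.close();
port2.close();
```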


---

---
title: Node.js compatibility
description: Node.js APIs available in Cloudflare Workers
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Node.js compatibility

When you write a Worker, you may need to import packages from [npm ↗](https://www.npmjs.com/). Many npm packages rely on APIs from the [Node.js runtime ↗](https://nodejs.org/en/about), and will not work unless these Node.js APIs are available.

Cloudflare Workers provides a subset of Node.js APIs in two forms:

1. As built-in APIs provided by the Workers runtime. Most of these are full implementations of the corresponding Node.js APIs; a few are partial implementations or non-functional stubs that exist only so the module can be imported without error.
2. As polyfill shims that [Wrangler](https://developers.cloudflare.com/workers/wrangler/) adds to your Worker's code. These allow the module to be imported, but calling its methods throws an error.

## Get Started

To enable built-in Node.js APIs and add polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), and ensure that your Worker's [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

wrangler.jsonc

```
{
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03"
}
```

wrangler.toml

```
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"
```

## Supported Node.js APIs

The Node.js runtime APIs listed below as "🟢 supported" are currently natively supported in the Workers runtime. Items listed as "🟡 partially supported" are either partially implemented or implemented as non-functional stubs.

[Deprecated or experimental APIs from Node.js ↗](https://nodejs.org/docs/latest/api/documentation.html#stability-index), and APIs that do not fit in a serverless context, are not included as part of the list below:

| API Name                                                                                                          | Natively supported by the Workers Runtime                                                                                     |
| ----------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| [Assertion testing](https://developers.cloudflare.com/workers/runtime-apis/nodejs/assert/)                        | 🟢 supported                                                                                                                  |
| [Asynchronous context tracking](https://developers.cloudflare.com/workers/runtime-apis/nodejs/asynclocalstorage/) | 🟢 supported                                                                                                                  |
| [Async hooks ↗](https://nodejs.org/docs/latest/api/async%5Fhooks.html)                                            | 🟡 partially supported (non-functional)                                                                                       |
| [Buffer](https://developers.cloudflare.com/workers/runtime-apis/nodejs/buffer/)                                   | 🟢 supported                                                                                                                  |
| [Child processes ↗](https://nodejs.org/docs/latest/api/child%5Fprocess.html)                                      | 🟡 partially supported (non-functional)                                                                                       |
| [Cluster ↗](https://nodejs.org/docs/latest/api/cluster.html)                                                      | 🟡 partially supported (non-functional)                                                                                       |
| [Console ↗](https://nodejs.org/docs/latest/api/console.html)                                                      | 🟡 partially supported                                                                                                        |
| [Crypto](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/)                                   | 🟢 supported                                                                                                                  |
| [Debugger](https://developers.cloudflare.com/workers/observability/dev-tools/)                                    | 🟢 supported via [Chrome Dev Tools integration](https://developers.cloudflare.com/workers/observability/dev-tools/)           |
| [Diagnostics Channel](https://developers.cloudflare.com/workers/runtime-apis/nodejs/diagnostics-channel/)         | 🟢 supported                                                                                                                  |
| [DNS](https://developers.cloudflare.com/workers/runtime-apis/nodejs/dns/)                                         | 🟢 supported                                                                                                                  |
| Errors                                                                                                            | 🟢 supported                                                                                                                  |
| [Events](https://developers.cloudflare.com/workers/runtime-apis/nodejs/eventemitter/)                             | 🟢 supported                                                                                                                  |
| [File system](https://developers.cloudflare.com/workers/runtime-apis/nodejs/fs/)                                  | 🟢 supported                                                                                                                  |
| Globals                                                                                                           | 🟢 supported                                                                                                                  |
| [HTTP](https://developers.cloudflare.com/workers/runtime-apis/nodejs/http/)                                       | 🟢 supported                                                                                                                  |
| [HTTP/2 ↗](https://nodejs.org/docs/latest/api/http2.html)                                                         | 🟡 partially supported (non-functional)                                                                                       |
| [HTTPS](https://developers.cloudflare.com/workers/runtime-apis/nodejs/https/)                                     | 🟢 supported                                                                                                                  |
| [Inspector ↗](https://nodejs.org/docs/latest/api/inspector.html)                                                  | 🟡 partially supported via [Chrome Dev Tools integration](https://developers.cloudflare.com/workers/observability/dev-tools/) |
| [Module ↗](https://nodejs.org/docs/latest/api/module.html)                                                        | 🟡 partially supported                                                                                                        |
| [Net](https://developers.cloudflare.com/workers/runtime-apis/nodejs/net/)                                         | 🟢 supported                                                                                                                  |
| [OS ↗](https://nodejs.org/docs/latest/api/os.html)                                                                | 🟡 partially supported                                                                                                        |
| [Path](https://developers.cloudflare.com/workers/runtime-apis/nodejs/path/)                                       | 🟢 supported                                                                                                                  |
| [Performance hooks ↗](https://nodejs.org/docs/latest/api/perf%5Fhooks.html)                                       | 🟡 partially supported                                                                                                        |
| [Process](https://developers.cloudflare.com/workers/runtime-apis/nodejs/process/)                                 | 🟢 supported                                                                                                                  |
| [Punycode ↗](https://nodejs.org/docs/latest/api/punycode.html) (deprecated)                                       | 🟢 supported                                                                                                                  |
| [Readline ↗](https://nodejs.org/docs/latest/api/readline.html)                                                    | 🟡 partially supported (non-functional)                                                                                       |
| [REPL ↗](https://nodejs.org/docs/latest/api/repl.html)                                                            | 🟡 partially supported (non-functional)                                                                                       |
| [Query strings ↗](https://nodejs.org/docs/latest/api/querystring.html)                                            | 🟢 supported                                                                                                                  |
| [SQLite ↗](https://nodejs.org/docs/latest/api/sqlite.html)                                                        | ⚪ not yet supported                                                                                                           |
| [Stream](https://developers.cloudflare.com/workers/runtime-apis/nodejs/streams)                                   | 🟢 supported                                                                                                                  |
| [String decoder](https://developers.cloudflare.com/workers/runtime-apis/nodejs/string-decoder/)                   | 🟢 supported                                                                                                                  |
| [Test runner ↗](https://nodejs.org/docs/latest/api/test.html)                                                     | ⚪ not supported                                                                                                               |
| [Timers](https://developers.cloudflare.com/workers/runtime-apis/nodejs/timers/)                                   | 🟢 supported                                                                                                                  |
| [TLS/SSL](https://developers.cloudflare.com/workers/runtime-apis/nodejs/tls/)                                     | 🟡 partially supported                                                                                                        |
| [UDP/datagram ↗](https://nodejs.org/docs/latest/api/dgram.html)                                                   | 🟡 partially supported (non-functional)                                                                                       |
| [URL](https://developers.cloudflare.com/workers/runtime-apis/nodejs/url/)                                         | 🟢 supported                                                                                                                  |
| [Utilities](https://developers.cloudflare.com/workers/runtime-apis/nodejs/util/)                                  | 🟢 supported                                                                                                                  |
| [V8 ↗](https://nodejs.org/docs/latest/api/v8.html)                                                                | 🟡 partially supported (non-functional)                                                                                       |
| [VM ↗](https://nodejs.org/docs/latest/api/vm.html)                                                                | 🟡 partially supported (non-functional)                                                                                       |
| [Web Crypto API](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/)                              | 🟢 supported                                                                                                                  |
| [Web Streams API](https://developers.cloudflare.com/workers/runtime-apis/streams/)                                | 🟢 supported                                                                                                                  |
| [Zlib](https://developers.cloudflare.com/workers/runtime-apis/nodejs/zlib/)                                       | 🟢 supported                                                                                                                  |

Unless otherwise specified, native implementations of Node.js APIs in Workers are intended to match the implementation in the [Current release of Node.js ↗](https://github.com/nodejs/release#release-schedule).

If an API you wish to use is missing and you want to suggest that Workers support it, add a post or comment in the [Node.js APIs discussions category ↗](https://github.com/cloudflare/workerd/discussions/categories/node-js-apis) on GitHub.

### Node.js API Polyfills

Node.js APIs that are not yet supported in the Workers runtime are polyfilled via [Wrangler](https://developers.cloudflare.com/workers/wrangler/), which uses [unenv ↗](https://github.com/unjs/unenv). If the `nodejs_compat` [compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/) is enabled, and your Worker's [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) is 2024-09-23 or later, Wrangler will automatically inject polyfills into your Worker's code.

Adding polyfills maximizes compatibility with existing npm packages by providing modules with mocked methods. Calling these mocked methods is either a no-op or throws an error with a message like:

```
[unenv] <method name> is not implemented yet!
```

This allows you to import packages that use these Node.js modules, even if certain methods are not supported.
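When code might hit one of these mocked methods at runtime, one option is to guard the call and fall back. The sketch below is illustrative only; `callWithFallback` and the simulated error are hypothetical stand-ins for any unenv-mocked method:

```javascript
// Hypothetical wrapper: run a call that may be an unenv mock and
// fall back when it throws the characteristic "not implemented" error.
function callWithFallback(riskyCall, fallback) {
  try {
    return riskyCall();
  } catch (err) {
    if (/is not implemented yet!/.test(String(err && err.message))) {
      return fallback();
    }
    throw err; // unrelated errors still propagate
  }
}

// Simulate a mocked method for illustration.
const result = callWithFallback(
  () => {
    throw new Error("[unenv] someMethod is not implemented yet!");
  },
  () => "fallback value",
);

console.log(result); // "fallback value"
```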

## Enable only AsyncLocalStorage

If you need to enable only the Node.js `AsyncLocalStorage` API, you can enable the `nodejs_als` compatibility flag:

wrangler.jsonc

```
{
  "compatibility_flags": [
    "nodejs_als"
  ]
}
```

wrangler.toml

```
compatibility_flags = [ "nodejs_als" ]
```


---

---
title: assert
description: The node:assert module in Node.js provides a number of useful assertions that are useful when building tests.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# assert

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [node:assert ↗](https://nodejs.org/docs/latest/api/assert.html) module in Node.js provides a number of assertions that are useful when writing tests.

JavaScript

```

import { strictEqual, deepStrictEqual, ok, doesNotReject } from "node:assert";

strictEqual(1, 1); // ok!
strictEqual(1, "1"); // fails! throws AssertionError

deepStrictEqual({ a: { b: 1 } }, { a: { b: 1 } }); // ok!
deepStrictEqual({ a: { b: 1 } }, { a: { b: 2 } }); // fails! throws AssertionError

ok(true); // ok!
ok(false); // fails! throws AssertionError

await doesNotReject(async () => {}); // ok!
await doesNotReject(async () => {
  throw new Error("boom");
}); // fails! throws AssertionError

```

Note

In the Workers implementation of `assert`, all assertions run in what Node.js calls strict assertion mode, in which non-strict methods behave like their corresponding strict methods. For example, `deepEqual()` behaves like `deepStrictEqual()`.
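In Node.js itself, the equivalent behavior comes from importing `node:assert/strict`; in Workers, plain `node:assert` already behaves this way. A sketch of what strict mode means in practice:

```javascript
// node:assert/strict mirrors the Workers behavior: deepEqual()
// acts like deepStrictEqual().
import { deepEqual } from "node:assert/strict";

deepEqual({ a: 1 }, { a: 1 }); // ok!

let failed = false;
try {
  deepEqual({ a: 1 }, { a: "1" }); // type mismatch fails in strict mode
} catch (err) {
  failed = true;
}
console.log(failed); // true
```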

Refer to the [Node.js documentation for assert ↗](https://nodejs.org/dist/latest-v19.x/docs/api/assert.html) for more information.


---

---
title: AsyncLocalStorage
description: Cloudflare Workers provides an implementation of a subset of the Node.js AsyncLocalStorage API for creating in-memory stores that remain coherent through asynchronous operations.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# AsyncLocalStorage

## Background

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

Cloudflare Workers provides an implementation of a subset of the Node.js [AsyncLocalStorage ↗](https://nodejs.org/dist/latest-v18.x/docs/api/async%5Fcontext.html#class-asynclocalstorage) API for creating in-memory stores that remain coherent through asynchronous operations.

## Constructor

JavaScript

```

import { AsyncLocalStorage } from "node:async_hooks";

const asyncLocalStorage = new AsyncLocalStorage();

```

* `new AsyncLocalStorage()` : AsyncLocalStorage  
   * Returns a new `AsyncLocalStorage` instance.

## Methods

* `getStore()` : any  
   * Returns the current store. If called outside of an asynchronous context initialized by calling `asyncLocalStorage.run()`, it returns `undefined`.
* `run(store: any, callback: function, ...args: any[])` : any  
   * Runs a function synchronously within a context and returns its return value. The store is not accessible outside of the callback function, but is accessible to any asynchronous operations created within the callback. The optional `args` are passed to the callback function. If the callback function throws an error, the error is also thrown by `run()`.
* `exit(callback: function, ...args: any[])` : any  
   * Runs a function synchronously outside of a context and returns its return value. This method is equivalent to calling `run()` with the `store` value set to `undefined`.
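The interaction between `run()`, `exit()`, and `getStore()` can be sketched as:

```javascript
import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage();

const results = [];
als.run("outer", () => {
  results.push(als.getStore()); // "outer" -- inside the context
  als.exit(() => {
    results.push(als.getStore()); // undefined -- exited the context
  });
  results.push(als.getStore()); // "outer" again after exit() returns
});
results.push(als.getStore()); // undefined -- outside run()

console.log(results); // [ 'outer', undefined, 'outer', undefined ]
```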

## Static Methods

* `AsyncLocalStorage.bind(fn)` : function  
   * Captures the asynchronous context that is current when `bind()` is called and returns a function that enters that context before calling the passed in function.
* `AsyncLocalStorage.snapshot()` : function  
   * Captures the asynchronous context that is current when `snapshot()` is called and returns a function that enters that context before calling a given function.

## Examples

### Fetch Listener

JavaScript

```

import { AsyncLocalStorage } from 'node:async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();
let idSeq = 0;

export default {
  async fetch(req) {
    // The callback must be async so it can await inside the context.
    return asyncLocalStorage.run(idSeq++, async () => {
      // Simulate some async activity...
      await scheduler.wait(1000);
      return new Response(asyncLocalStorage.getStore());
    });
  },
};

```

### Multiple stores

The API supports using multiple `AsyncLocalStorage` instances concurrently.

JavaScript

```

import { AsyncLocalStorage } from 'node:async_hooks';

const als1 = new AsyncLocalStorage();
const als2 = new AsyncLocalStorage();

export default {
  async fetch(req) {
    return als1.run(123, () => {
      // The inner callback must be async so it can await inside the context.
      return als2.run(321, async () => {
        // Simulate some async activity...
        await scheduler.wait(1000);
        return new Response(`${als1.getStore()}-${als2.getStore()}`);
      });
    });
  },
};

```

### Unhandled Rejections

When a `Promise` rejects and the rejection is unhandled, the async context propagates to the `'unhandledrejection'` event handler:

JavaScript

```

import { AsyncLocalStorage } from "node:async_hooks";

const asyncLocalStorage = new AsyncLocalStorage();
let idSeq = 0;

addEventListener("unhandledrejection", (event) => {
  console.log(asyncLocalStorage.getStore(), "unhandled rejection!");
});

export default {
  async fetch(req) {
    return asyncLocalStorage.run(idSeq++, () => {
      // Cause an unhandled rejection!
      Promise.reject(new Error("boom"));
      return new Response("ok");
    });
  },
};

```

### `AsyncLocalStorage.bind()` and `AsyncLocalStorage.snapshot()`

JavaScript

```

import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage();

function foo() {
  console.log(als.getStore());
}

function bar() {
  console.log(als.getStore());
}

const oneFoo = als.run(123, () => AsyncLocalStorage.bind(foo));
oneFoo(); // prints 123

const snapshot = als.run("abc", () => AsyncLocalStorage.snapshot());
snapshot(foo); // prints 'abc'
snapshot(bar); // prints 'abc'

```

JavaScript

```

import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage();

class MyResource {
  #runInAsyncScope = AsyncLocalStorage.snapshot();

  doSomething() {
    return this.#runInAsyncScope(() => {
      return als.getStore();
    });
  }
}

const myResource = als.run(123, () => new MyResource());
console.log(myResource.doSomething()); // prints 123

```

## `AsyncResource`

The [AsyncResource ↗](https://nodejs.org/dist/latest-v18.x/docs/api/async%5Fcontext.html#class-asyncresource) class is a component of Node.js' async context tracking API that allows users to create their own async contexts. Objects that extend from `AsyncResource` are capable of propagating the async context in much the same way as promises.

Note that `AsyncLocalStorage.snapshot()` and `AsyncLocalStorage.bind()` provide a better approach. `AsyncResource` is provided solely for backwards compatibility with Node.js.

### Constructor

JavaScript

```

import { AsyncResource, AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage();

class MyResource extends AsyncResource {
  constructor() {
    // The type string is required by Node.js but unused in Workers.
    super("MyResource");
  }

  doSomething() {
    return this.runInAsyncScope(() => {
      return als.getStore();
    });
  }
}

const myResource = als.run(123, () => new MyResource());
console.log(myResource.doSomething()); // prints 123

```

* `new AsyncResource(type: string, options: AsyncResourceOptions)` : AsyncResource  
   * Returns a new `AsyncResource`. Importantly, while the constructor arguments are required in Node.js' implementation of `AsyncResource`, they are not used in Workers.
* `AsyncResource.bind(fn: function, type: string, thisArg: any)`  
   * Binds the given function to the current async context.

### Methods

* `asyncResource.bind(fn: function, thisArg: any)`  
   * Binds the given function to the async context associated with this `AsyncResource`.
* `asyncResource.runInAsyncScope(fn: function, thisArg: any, ...args: any[])`  
   * Calls the provided function with the given arguments in the async context associated with this `AsyncResource`.

## Caveats

* The `AsyncLocalStorage` implementation provided by Workers intentionally omits support for the [asyncLocalStorage.enterWith() ↗](https://nodejs.org/dist/latest-v18.x/docs/api/async%5Fcontext.html#asynclocalstorageenterwithstore) and [asyncLocalStorage.disable() ↗](https://nodejs.org/dist/latest-v18.x/docs/api/async%5Fcontext.html#asynclocalstoragedisable) methods.
* Workers does not implement the full [async\_hooks ↗](https://nodejs.org/dist/latest-v18.x/docs/api/async%5Fhooks.html) API upon which Node.js' implementation of `AsyncLocalStorage` is built.
* Workers does not implement the ability to create an `AsyncResource` with an explicitly identified trigger context as allowed by Node.js. This means that a new `AsyncResource` will always be bound to the async context in which it was created.
* Thenables (non-Promise objects that expose a `then()` method) are not fully supported when using `AsyncLocalStorage`. When working with thenables, instead use [AsyncLocalStorage.snapshot() ↗](https://nodejs.org/api/async%5Fcontext.html#static-method-asynclocalstoragesnapshot) to capture a snapshot of the current context.
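For example, a snapshot captured while the context is active can re-enter it later, wherever a thenable's continuation happens to run without it:

```javascript
import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage();

// Capture the context while it is active...
const restore = als.run("request-1", () => AsyncLocalStorage.snapshot());

// ...then, in code that runs outside the context (als.getStore() would
// be undefined here), re-enter it explicitly via the snapshot:
const store = restore(() => als.getStore());
console.log(store); // request-1
```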

```json
{"@context":"https://schema.org","@type":"BreadcrumbList","itemListElement":[{"@type":"ListItem","position":1,"item":{"@id":"/directory/","name":"Directory"}},{"@type":"ListItem","position":2,"item":{"@id":"/workers/","name":"Workers"}},{"@type":"ListItem","position":3,"item":{"@id":"/workers/runtime-apis/","name":"Runtime APIs"}},{"@type":"ListItem","position":4,"item":{"@id":"/workers/runtime-apis/nodejs/","name":"Node.js compatibility"}},{"@type":"ListItem","position":5,"item":{"@id":"/workers/runtime-apis/nodejs/asynclocalstorage/","name":"AsyncLocalStorage"}}]}
```

---

---
title: Buffer
description: The Buffer API in Node.js is one of the most commonly used Node.js APIs for manipulating binary data. Every Buffer instance extends from the standard Uint8Array class, but adds a range of unique capabilities such as built-in base64 and hex encoding/decoding, byte-order manipulation, and encoding-aware substring searching.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Buffer

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [Buffer ↗](https://nodejs.org/docs/latest/api/buffer.html) API in Node.js is one of the most commonly used Node.js APIs for manipulating binary data. Every `Buffer` instance extends from the standard [Uint8Array ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Uint8Array) class, but adds a range of unique capabilities such as built-in base64 and hex encoding/decoding, byte-order manipulation, and encoding-aware substring searching.

JavaScript

```

import { Buffer } from "node:buffer";

const buf = Buffer.from("hello world", "utf8");

console.log(buf.toString("hex"));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString("base64"));
// Prints: aGVsbG8gd29ybGQ=

```

A Buffer extends from `Uint8Array`. Therefore, it can be used in any Workers API that currently accepts `Uint8Array`, such as creating a new Response:

JavaScript

```
const response = new Response(Buffer.from("hello world"));
```

You can also use the `Buffer` API when interacting with streams:

JavaScript

```
const writable = getWritableStreamSomehow();
const writer = writable.getWriter();
writer.write(Buffer.from("hello world"));
```

One key difference between the Workers and Node.js implementations of `Buffer` is that in Node.js, some methods of creating a `Buffer` allocate memory from a global pool as a performance optimization. The Workers implementation does not use a memory pool; all `Buffer` instances are allocated independently.

Further, in Node.js it is possible to allocate a `Buffer` with uninitialized memory using the `Buffer.allocUnsafe()` method. This is not supported in Workers: `Buffer` instances are always initialized, so a `Buffer` is always filled with null bytes (`0x00`) when allocated.
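For example, `Buffer.alloc()` is zero-filled on every platform, and in Workers `Buffer.allocUnsafe()` behaves the same way. A minimal sketch (the zero-fill check below is guaranteed for `alloc` everywhere; for `allocUnsafe` it is only guaranteed in Workers):

JavaScript

```javascript
import { Buffer } from "node:buffer";

// Buffer.alloc is always zero-filled, in Node.js and Workers alike.
const zeroed = Buffer.alloc(4);
console.log(zeroed.every((byte) => byte === 0)); // true

// In Workers, Buffer.allocUnsafe is also zero-filled (unlike Node.js,
// where its contents are uninitialized), so it offers no speed advantage.
const unsafe = Buffer.allocUnsafe(4);
console.log(unsafe.length); // 4
```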

Refer to the [Node.js documentation for Buffer ↗](https://nodejs.org/dist/latest-v19.x/docs/api/buffer.html) for more information.


---

---
title: crypto
description: The node:crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# crypto

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [node:crypto ↗](https://nodejs.org/docs/latest/api/crypto.html) module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions.

All `node:crypto` APIs are fully supported in Workers with the following exceptions:

* The functions [generateKeyPair ↗](https://nodejs.org/api/crypto.html#cryptogeneratekeypairtype-options-callback) and [generateKeyPairSync ↗](https://nodejs.org/api/crypto.html#cryptogeneratekeypairsynctype-options) do not support DSA or DH key pairs.
* `ed448` and `x448` curves are not supported.
* It is not possible to manually enable or disable [FIPS mode ↗](https://nodejs.org/docs/latest/api/crypto.html#fips-mode).

The full `node:crypto` API is documented in the [Node.js documentation for node:crypto ↗](https://nodejs.org/api/crypto.html).
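As a quick sanity check that the wrappers behave like their Node.js counterparts, here is a small hashing and HMAC sketch (the `"my-secret"` key is an arbitrary placeholder):

JavaScript

```javascript
import { createHash, createHmac } from "node:crypto";

// SHA-256 digest of a string, hex-encoded
const digest = createHash("sha256").update("hello").digest("hex");
console.log(digest);
// 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

// HMAC-SHA256 with a shared secret ("my-secret" is a placeholder)
const signature = createHmac("sha256", "my-secret")
  .update("hello")
  .digest("hex");
console.log(signature.length); // 64 hex characters
```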

The [WebCrypto API](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/) is also available within Cloudflare Workers. This does not require the `nodejs_compat` compatibility flag.


---

---
title: Diagnostics Channel
description: The diagnostics_channel module provides an API to create named channels to report arbitrary message data for diagnostics purposes. The API is essentially a simple event pub/sub model that is specifically designed to support low-overhead diagnostics reporting.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Diagnostics Channel

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [diagnostics\_channel ↗](https://nodejs.org/dist/latest-v20.x/docs/api/diagnostics%5Fchannel.html) module provides an API to create named channels to report arbitrary message data for diagnostics purposes. The API is essentially a simple event pub/sub model that is specifically designed to support low-overhead diagnostics reporting.

JavaScript

```
import {
  channel,
  hasSubscribers,
  subscribe,
  unsubscribe,
  tracingChannel,
} from "node:diagnostics_channel";

// For publishing messages to a channel, acquire a channel object:
const myChannel = channel("my-channel");

// Any JS value can be published to a channel.
myChannel.publish({ foo: "bar" });

// For receiving messages on a channel, use subscribe:
subscribe("my-channel", (message) => {
  console.log(message);
});
```

All `Channel` instances are singletons per isolate/context (for example, per entry point). Subscribers are always invoked synchronously and in the order they were registered, much like an `EventTarget` or Node.js `EventEmitter` class.
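Both the singleton behavior and the synchronous delivery can be observed directly (the channel name `"demo-channel"` below is arbitrary):

JavaScript

```javascript
import { channel, subscribe } from "node:diagnostics_channel";

// channel() returns the same singleton object for a given name.
const a = channel("demo-channel");
const b = channel("demo-channel");
console.log(a === b); // true

// Subscribers run synchronously, in registration order.
const order = [];
subscribe("demo-channel", () => order.push("first"));
subscribe("demo-channel", () => order.push("second"));
a.publish({ ok: true });
console.log(order); // ["first", "second"]
```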

## Integration with Tail Workers

When using [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/), all messages published to any channel are also forwarded to the Tail Worker, where the diagnostics channel messages can be accessed via the `diagnosticsChannelEvents` property:

JavaScript

```
export default {
  async tail(events) {
    for (const event of events) {
      for (const messageData of event.diagnosticsChannelEvents) {
        console.log(
          messageData.timestamp,
          messageData.channel,
          messageData.message,
        );
      }
    }
  },
};
```

Note that messages published to the Tail Worker are passed through the [structured clone algorithm ↗](https://developer.mozilla.org/en-US/docs/Web/API/Web%5FWorkers%5FAPI/Structured%5Fclone%5Falgorithm) (the same mechanism as the [structuredClone() ↗](https://developer.mozilla.org/en-US/docs/Web/API/structuredClone) API), so only values that can be successfully cloned are supported.
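A quick way to check whether a value will survive the trip is to try cloning it yourself: plain data clones fine, while values like functions throw a `DataCloneError`:

JavaScript

```javascript
// Plain data survives structured cloning.
const cloned = structuredClone({ user: "alice", tags: ["a", "b"] });
console.log(cloned.tags); // ["a", "b"]

// Functions (and other non-cloneable values) do not, and would never
// reach the Tail Worker.
let cloneFailed = false;
try {
  structuredClone({ callback: () => {} });
} catch {
  cloneFailed = true;
}
console.log(cloneFailed); // true
```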

## `TracingChannel`

Per the Node.js documentation, "[TracingChannel ↗](https://nodejs.org/api/diagnostics%5Fchannel.html#class-tracingchannel) is a collection of \[Channels\] which together express a single traceable action. `TracingChannel` is used to formalize and simplify the process of producing events for tracing application flow."

JavaScript

```
import { tracingChannel } from "node:diagnostics_channel";
import { AsyncLocalStorage } from "node:async_hooks";

const channels = tracingChannel("my-channel");
const requestId = new AsyncLocalStorage();
channels.start.bindStore(requestId);

channels.subscribe({
  start(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle start message
  },
  end(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle end message
  },
  asyncStart(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle asyncStart message
  },
  asyncEnd(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle asyncEnd message
  },
  error(message) {
    console.log(requestId.getStore()); // { requestId: '123' }
    // Handle error message
  },
});

// The subscriber handlers will be invoked while tracing the execution of the
// async function passed into `channels.tracePromise`...
channels.tracePromise(
  async () => {
    // Perform some asynchronous work...
  },
  { requestId: "123" },
);
```

Refer to the [Node.js documentation for diagnostics\_channel ↗](https://nodejs.org/dist/latest-v20.x/docs/api/diagnostics%5Fchannel.html) for more information.


---

---
title: dns
description: You can use node:dns for name resolution via DNS over HTTPS using Cloudflare DNS at 1.1.1.1.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# dns

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

You can use [node:dns ↗](https://nodejs.org/api/dns.html) for name resolution via [DNS over HTTPS](https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/) using [Cloudflare DNS ↗](https://www.cloudflare.com/application-services/products/dns/) at 1.1.1.1.

index.js

```
import dns from "node:dns";

let response = await dns.promises.resolve4("cloudflare.com");
```

index.ts

```
import dns from 'node:dns';

let response = await dns.promises.resolve4('cloudflare.com');
```

All `node:dns` functions are available except `lookup`, `lookupService`, and `resolve`, which throw "Not implemented" errors when called.

Note

DNS requests execute a subrequest, which counts toward your [Worker's subrequest limit](https://developers.cloudflare.com/workers/platform/limits/#subrequests).

The full `node:dns` API is documented in the [Node.js documentation for node:dns ↗](https://nodejs.org/api/dns.html).



---

---
title: EventEmitter
description: An EventEmitter is an object that emits named events that cause listeners to be called.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# EventEmitter

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

An [EventEmitter ↗](https://nodejs.org/docs/latest/api/events.html#class-eventemitter) is an object that emits named events that cause listeners to be called.

JavaScript

```
import { EventEmitter } from "node:events";

const emitter = new EventEmitter();
emitter.on("hello", (...args) => {
  console.log(...args); // 1 2 3
});

emitter.emit("hello", 1, 2, 3);
```

The implementation in the Workers runtime supports the entire Node.js `EventEmitter` API. This includes the [captureRejections ↗](https://nodejs.org/docs/latest/api/events.html#capture-rejections-of-promises) option, which allows improved handling of async functions as event handlers:

JavaScript

```
import { EventEmitter } from "node:events";

const emitter = new EventEmitter({ captureRejections: true });
emitter.on("hello", async (...args) => {
  throw new Error("boom");
});
emitter.on("error", (err) => {
  // the async promise rejection is emitted here!
});
```

Like Node.js, when an `'error'` event is emitted on an `EventEmitter` and there is no listener for it, the error will be immediately thrown. However, in Node.js it is possible to add a handler on the `process` object for the `'uncaughtException'` event to catch globally uncaught exceptions. The `'uncaughtException'` event, however, is currently not implemented in the Workers runtime. It is strongly recommended to always add an `'error'` listener to any `EventEmitter` instance.
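For instance, registering an `'error'` listener turns what would otherwise be a synchronously thrown error into an ordinary event:

JavaScript

```javascript
import { EventEmitter } from "node:events";

const emitter = new EventEmitter();

// Without this listener, the emit below would throw immediately.
const seen = [];
emitter.on("error", (err) => seen.push(err.message));

emitter.emit("error", new Error("boom"));
console.log(seen); // ["boom"]
```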

Refer to the [Node.js documentation for EventEmitter ↗](https://nodejs.org/api/events.html#class-eventemitter) for more information.


---

---
title: fs
description: You can use node:fs to access a virtual file system in Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# fs

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

You can use [node:fs ↗](https://nodejs.org/api/fs.html) to access a virtual file system in Workers.

The `node:fs` module is available in Workers runtimes that support Node.js compatibility using the `nodejs_compat` compatibility flag. Any Worker running with `nodejs_compat` enabled and with a compatibility date of `2025-09-01` or later will have access to `node:fs` by default. It is also possible to enable `node:fs` on Workers with an earlier compatibility date using a combination of the `nodejs_compat` and `enable_nodejs_fs_module` flags. To disable `node:fs`, set the `disable_nodejs_fs_module` flag.

JavaScript

```
import { readFileSync, writeFileSync } from "node:fs";

const config = readFileSync("/bundle/config.txt", "utf8");

writeFileSync("/tmp/abc.txt", "Hello, world!");
```

The Workers Virtual File System (VFS) is a memory-based file system that allows you to read modules included in your Worker bundle as read-only files, access a directory for writing temporary files, or access common [character devices ↗](https://linux-kernel-labs.github.io/refs/heads/master/labs/device%5Fdrivers.html) like `/dev/null`, `/dev/random`, `/dev/full`, and `/dev/zero`.

The directory structure initially looks like:

```
/bundle
└── (one file for each module in your Worker bundle)
/tmp
└── (empty, but you can write files, create directories, symlinks, etc)
/dev
├── null
├── random
├── full
└── zero
```

The `/bundle` directory contains the files for all modules included in your Worker bundle, which you can read using APIs like `readFileSync` or `read(...)`. These are always read-only. Reading from the bundle can be useful when you need to read a config file or a template.

JavaScript

```
import { readFileSync } from "node:fs";

// The config.txt file would be included in your Worker bundle.
// Refer to the Wrangler documentation for details on how to
// include additional files.
const config = readFileSync("/bundle/config.txt", "utf8");

export default {
  async fetch(request) {
    return new Response(`Config contents: ${config}`);
  },
};
```

The `/tmp` directory is writable, and you can use it to create temporary files or directories. You can also create symlinks in this directory. However, the contents of `/tmp` are not persistent and are unique to each request. This means that files created in `/tmp` within the context of one request will not be available in other concurrent or subsequent requests.

JavaScript

```
import { writeFileSync, readFileSync } from "node:fs";

export default {
  fetch(request) {
    // The file `/tmp/hello.txt` will only exist for the duration
    // of this request.
    writeFileSync("/tmp/hello.txt", "Hello, world!");
    const contents = readFileSync("/tmp/hello.txt", "utf8");
    return new Response(`File contents: ${contents}`);
  },
};
```

The `/dev` directory contains common character devices:

* `/dev/null`: A null device that discards all data written to it and returns EOF on read.
* `/dev/random`: A device that provides random bytes on reads and discards all data written to it. Reading from `/dev/random` is only permitted when within the context of a request.
* `/dev/full`: A device that always returns EOF on reads and discards all data written to it.
* `/dev/zero`: A device that provides an infinite stream of zero bytes on reads and discards all data written to it.

All operations on the VFS are synchronous. You can use the synchronous, asynchronous callback, or promise-based APIs provided by the `node:fs` module, but every operation is performed synchronously under the hood.
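For illustration, both the synchronous and callback styles work interchangeably. The sketch below writes to a generic temp path (under Workers you would use `/tmp` as described above):

JavaScript

```javascript
import { writeFileSync, readFileSync, readFile } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// A generic temp path for illustration.
const filePath = join(tmpdir(), "vfs-demo.txt");

// Synchronous API
writeFileSync(filePath, "Hello, world!");
console.log(readFileSync(filePath, "utf8")); // Hello, world!

// Callback API; in Workers this also completes synchronously under the hood.
readFile(filePath, "utf8", (err, data) => {
  if (err) throw err;
  console.log(data); // Hello, world!
});
```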

Timestamps for files in the VFS are currently always set to the Unix epoch (`1970-01-01T00:00:00Z`). This means that operations that rely on timestamps, like `fs.stat`, will always return the same timestamp for all files in the VFS. This is a temporary limitation that will be addressed in a future release.

Since all temporary files are held in memory, the total size of all temporary files and directories created counts toward your Worker's memory limit. If you exceed this limit, the Worker instance will be terminated and restarted.

The file system implementation has the following limits:

* The maximum total length of a file path is 4096 characters, including path separators. Because paths are handled as file URLs internally, the limit accounts for percent-encoding of special characters, decoding characters that do not need encoding before the limit is checked. For example, the path `/tmp/abcde%66/ghi%zz` is 18 characters long because the `%66` does not need to be percent-encoded and is therefore counted as one character, while the `%zz` is an invalid percent-encoding that is counted as 3 characters.
* The maximum number of path segments is 48. For example, the path `/a/b/c` is 3 segments.
* The maximum size of an individual file is 128 MB.

The following `node:fs` APIs are not supported in Workers, or are only partially supported:

* `fs.watch` and `fs.watchFile` operations for watching for file changes.
* The `fs.globSync()` and other glob APIs have not yet been implemented.
* The `force` option in the `fs.rm` API has not yet been implemented.
* Timestamps for files are always set to the Unix epoch (`1970-01-01T00:00:00Z`).
* File permissions and ownership are not supported.

The full `node:fs` API is documented in the [Node.js documentation for node:fs ↗](https://nodejs.org/api/fs.html).


---

---
title: http
description: To use the HTTP client-side methods (http.get, http.request, etc.), you must enable the enable_nodejs_http_modules compatibility flag in addition to the nodejs_compat flag.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# http

Note

To enable built-in Node.js APIs and polyfills, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

## Compatibility flags

### Client-side methods

To use the HTTP client-side methods (`http.get`, `http.request`, etc.), you must enable the [enable\_nodejs\_http\_modules](https://developers.cloudflare.com/workers/configuration/compatibility-flags/) compatibility flag in addition to the [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) flag.

This flag is automatically enabled for Workers using a [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) of `2025-08-15` or later when `nodejs_compat` is enabled. For Workers using an earlier compatibility date, you can manually enable it by adding the flag to your Wrangler configuration file:

wrangler.jsonc

```
{
  "compatibility_flags": [
    "nodejs_compat",
    "enable_nodejs_http_modules"
  ]
}
```

wrangler.toml

```
compatibility_flags = [ "nodejs_compat", "enable_nodejs_http_modules" ]
```

### Server-side methods

To use the HTTP server-side methods (`http.createServer`, `http.Server`, `http.ServerResponse`), you must enable the `enable_nodejs_http_server_modules` compatibility flag in addition to the [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) flag.

This flag is automatically enabled for Workers using a [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) of `2025-09-01` or later when `nodejs_compat` is enabled. For Workers using an earlier compatibility date, you can manually enable it by adding the flag to your Wrangler configuration file:

wrangler.jsonc

```
{
  "compatibility_flags": [
    "nodejs_compat",
    "enable_nodejs_http_server_modules"
  ]
}
```

wrangler.toml

```
compatibility_flags = [ "nodejs_compat", "enable_nodejs_http_server_modules" ]
```

To use both client-side and server-side methods, enable both flags:

wrangler.jsonc

```
{
  "compatibility_flags": [
    "nodejs_compat",
    "enable_nodejs_http_modules",
    "enable_nodejs_http_server_modules"
  ]
}
```

wrangler.toml

```
compatibility_flags = [
  "nodejs_compat",
  "enable_nodejs_http_modules",
  "enable_nodejs_http_server_modules"
]
```

## get

An implementation of the Node.js [http.get ↗](https://nodejs.org/docs/latest/api/http.html#httpgetoptions-callback) method.

The `get` method performs a GET request to the specified URL and invokes the callback with the response. It's a convenience method that simplifies making HTTP GET requests without manually configuring request options.

Because `get` is a wrapper around `fetch(...)`, it may be used only within an exported `fetch` or similar handler. Outside of such a handler, attempts to use `get` will throw an error.

JavaScript

```
import { get } from "node:http";

export default {
  async fetch() {
    const { promise, resolve, reject } = Promise.withResolvers();
    get("http://example.org", (res) => {
      let data = "";
      res.setEncoding("utf8");
      res.on("data", (chunk) => {
        data += chunk;
      });
      res.on("end", () => {
        resolve(new Response(data));
      });
      res.on("error", reject);
    }).on("error", reject);
    return promise;
  },
};
```

The implementation of `get` in Workers is a wrapper around the global [fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) and is therefore subject to the same [limits](https://developers.cloudflare.com/workers/platform/limits/).

As shown in the example above, it is necessary to arrange for requests to be correctly awaited in the `fetch` handler using a promise, or the fetch may be canceled prematurely when the handler returns.

## request

An implementation of the Node.js [http.request ↗](https://nodejs.org/docs/latest/api/http.html#httprequesturl-options-callback) method.

The `request` method creates an HTTP request with customizable options like method, headers, and body. It provides full control over the request configuration and returns a Node.js [stream.Writable ↗](https://developers.cloudflare.com/workers/runtime-apis/nodejs/streams/) for sending request data.

Because `request` is a wrapper around `fetch(...)`, it may be used only within an exported `fetch` or similar handler. Outside of such a handler, attempts to use `request` will throw an error.

JavaScript

```
import { request } from "node:http";

export default {
  async fetch() {
    const { promise, resolve, reject } = Promise.withResolvers();
    request(
      {
        method: "GET",
        protocol: "http:",
        hostname: "example.org",
        path: "/",
      },
      (res) => {
        let data = "";
        res.setEncoding("utf8");
        res.on("data", (chunk) => {
          data += chunk;
        });
        res.on("end", () => {
          resolve(new Response(data));
        });
        res.on("error", reject);
      },
    )
      .on("error", reject)
      .end();
    return promise;
  },
};
```

The following options passed to the `request` (and `get`) methods are not supported, due to differences in the Cloudflare Workers implementation of `node:http`, which wraps the global `fetch` API:

* `maxHeaderSize`
* `insecureHTTPParser`
* `createConnection`
* `lookup`
* `socketPath`

## OutgoingMessage

The [OutgoingMessage ↗](https://nodejs.org/docs/latest/api/http.html#class-httpoutgoingmessage) class is the base class for outgoing HTTP messages, both client requests and server responses. It provides methods for writing headers and body data, as well as for ending the message. `OutgoingMessage` extends from the Node.js [stream.Writable class ↗](https://nodejs.org/docs/latest/api/stream.html#class-streamwritable) (see [Streams](https://developers.cloudflare.com/workers/runtime-apis/nodejs/streams/) for the Workers implementation).

Both `ClientRequest` and `ServerResponse` extend from `OutgoingMessage`.

## IncomingMessage

The `IncomingMessage` class represents an HTTP message being received: a request on the server side, or a response on the client side. It provides methods for reading headers and body data. `IncomingMessage` extends from the `Readable` stream class.

JavaScript

```
import { get, IncomingMessage } from "node:http";
import { ok } from "node:assert";

export default {
  async fetch() {
    // ...
    get("http://example.org", (res) => {
      ok(res instanceof IncomingMessage);
    });
    // ...
  },
};
```

The Workers implementation includes a `cloudflare` property on `IncomingMessage` objects:

JavaScript

```
import { createServer } from "node:http";
import { httpServerHandler } from "cloudflare:node";

const server = createServer((req, res) => {
  console.log(req.cloudflare.cf.country);
  console.log(req.cloudflare.cf.ray);
  res.write("Hello, World!");
  res.end();
});

server.listen(8080);

export default httpServerHandler({ port: 8080 });
```

The `cloudflare.cf` property contains [Cloudflare-specific request properties](https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties).

The following differences exist between the Workers implementation and Node.js:

* Trailer headers are not supported
* The `socket` attribute **does not extend from `net.Socket`** and only contains the following properties: `encrypted`, `remoteFamily`, `remoteAddress`, `remotePort`, `localAddress`, `localPort`, and a `destroy()` method.
* The following `socket` attributes behave differently than their Node.js counterparts:  
   * `remoteAddress` will return `127.0.0.1` when run locally  
   * `remotePort` will return a random port number between 2^15 and 2^16  
   * `localAddress` will return the value of the request's `host` header if it exists. Otherwise, it will return `127.0.0.1`  
   * `localPort` will return the port number assigned to the server instance  
   * `req.socket.destroy()` falls through to `req.destroy()`

## Agent

A partial implementation of the Node.js [`http.Agent` ↗](https://nodejs.org/docs/latest/api/http.html#class-httpagent) class.

An `Agent` manages HTTP connection reuse by maintaining request queues per host/port. In the Workers environment, however, such low-level management of network connections and ports is not relevant, because it is handled by Cloudflare's infrastructure instead. Accordingly, the Workers implementation of `Agent` is a stub that does not support connection pooling or keep-alive.

JavaScript

```

import { Agent } from "node:http";
import { strictEqual } from "node:assert";

const agent = new Agent();
strictEqual(agent.protocol, "http:");

```

## createServer

An implementation of the Node.js [http.createServer ↗](https://nodejs.org/docs/latest/api/http.html#httpcreateserveroptions-requestlistener) method.

The `createServer` method creates an HTTP server instance that can handle incoming requests.

JavaScript

```

import { createServer } from "node:http";
import { httpServerHandler } from "cloudflare:node";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node.js HTTP server!");
});

server.listen(8080);
export default httpServerHandler({ port: 8080 });

```

## Node.js integration

### httpServerHandler

The `httpServerHandler` function integrates Node.js HTTP servers with the Cloudflare Workers request model. It supports two API patterns:

JavaScript

```

import http from "node:http";
import { httpServerHandler } from "cloudflare:node";

const server = http.createServer((req, res) => {
  res.end("hello world");
});

// Option 1: pass the server directly (simplified) -
// listen() is called automatically if needed
export default httpServerHandler(server);

// Option 2: use port-based routing for multiple servers
// server.listen(8080);
// export default httpServerHandler({ port: 8080 });

```

The handler automatically routes incoming Worker requests to your Node.js server. When using port-based routing, the port number acts as a routing key to determine which server handles requests, allowing multiple servers to coexist in the same Worker.

### handleAsNodeRequest

For more direct control over request routing, you can use the `handleAsNodeRequest` function from `cloudflare:node`. This function directly routes a Worker request to a Node.js server running on a specific port:

JavaScript

```

import { createServer } from "node:http";
import { handleAsNodeRequest } from "cloudflare:node";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node.js HTTP server!");
});

server.listen(8080);

export default {
  fetch(request) {
    return handleAsNodeRequest(8080, request);
  },
};

```

This approach gives you full control over the fetch handler while still leveraging Node.js HTTP servers for request processing.

Note

Failing to call `close()` on an HTTP server may result in the server persisting until the worker is destroyed. In most cases, this is not an issue since servers typically live for the lifetime of the worker. However, if you need to create multiple servers during a worker's lifetime or want explicit lifecycle control (such as in test scenarios), call `close()` when you're done with the server, or use [explicit resource management ↗](https://v8.dev/features/explicit-resource-management).

## Server

An implementation of the Node.js [http.Server ↗](https://nodejs.org/docs/latest/api/http.html#class-httpserver) class.

The `Server` class represents an HTTP server and provides methods for handling incoming requests. It extends the Node.js `EventEmitter` class and can be used to create custom server implementations.

When using `httpServerHandler`, the port number specified in `server.listen()` acts as a routing key rather than an actual network port. The handler uses this port to determine which HTTP server instance should handle incoming requests, allowing multiple servers to coexist within the same Worker by using different port numbers for identification. Using a port value of `0` (or `null` or `undefined`) will result in a random port number being assigned.

JavaScript

```

import { Server } from "node:http";
import { httpServerHandler } from "cloudflare:node";

const server = new Server((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "Hello from HTTP Server!" }));
});

server.listen(8080);
export default httpServerHandler({ port: 8080 });

```

The following differences exist between the Workers implementation and Node.js:

* Connection management methods such as `closeAllConnections()` and `closeIdleConnections()` are not implemented
* Only `listen()` variants with a port number or no parameters are supported: `listen()`, `listen(0, callback)`, `listen(callback)`, etc. For reference, see the [Node.js documentation ↗](https://nodejs.org/docs/latest/api/net.html#serverlisten).
* The following server options are not supported: `maxHeaderSize`, `insecureHTTPParser`, `keepAliveTimeout`, `connectionsCheckingInterval`

## ServerResponse

An implementation of the Node.js [http.ServerResponse ↗](https://nodejs.org/docs/latest/api/http.html#class-httpserverresponse) class.

The `ServerResponse` class represents the server-side response object that is passed to request handlers. It provides methods for writing response headers and body data, and extends the Node.js `Writable` stream class.

JavaScript

```

import { createServer, ServerResponse } from "node:http";
import { httpServerHandler } from "cloudflare:node";
import { ok } from "node:assert";

const server = createServer((req, res) => {
  ok(res instanceof ServerResponse);

  // Set multiple headers at once
  res.writeHead(200, {
    "Content-Type": "application/json",
    "X-Custom-Header": "Workers-HTTP",
  });

  // Stream response data
  res.write('{"data": [');
  res.write('{"id": 1, "name": "Item 1"},');
  res.write('{"id": 2, "name": "Item 2"}');
  res.write("]}");

  // End the response
  res.end();
});

export default httpServerHandler(server);

```

The following methods and features are not supported in the Workers implementation:

* `assignSocket()` and `detachSocket()` methods are not available
* Trailer headers are not supported
* `writeContinue()` and `writeEarlyHints()` methods are not available
* 1xx responses in general are not supported

## Other differences between Node.js and Workers implementation of `node:http`

Because the Workers implementation of `node:http` is a wrapper around the global `fetch` API, there are some differences in behavior and limitations compared to a standard Node.js environment:

* `Connection` headers are not used. Workers will manage connections automatically.
* `Content-Length` headers will be handled the same way as in the `fetch` API. If a body is provided, the header will be set automatically and manually set values will be ignored.
* `Expect: 100-continue` headers are not supported.
* Trailing headers are not supported.
* The `'continue'` event is not supported.
* The `'information'` event is not supported.
* The `'socket'` event is not supported.
* The `'upgrade'` event is not supported.
* Gaining direct access to the underlying `socket` is not supported.


---

---
title: https
description: To use the HTTPS client-side methods (https.get, https.request, etc.), you must enable the enable_nodejs_http_modules compatibility flag in addition to the nodejs_compat flag.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# https

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

## Compatibility flags

### Client-side methods

To use the HTTPS client-side methods (`https.get`, `https.request`, etc.), you must enable the [enable\_nodejs\_http\_modules](https://developers.cloudflare.com/workers/configuration/compatibility-flags/) compatibility flag in addition to the [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) flag.

This flag is automatically enabled for Workers using a [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) of `2025-08-15` or later when `nodejs_compat` is enabled. For Workers using an earlier compatibility date, you can manually enable it by adding the flag to your `wrangler.toml`:

```

compatibility_flags = ["nodejs_compat", "enable_nodejs_http_modules"]

```

### Server-side methods

To use the HTTPS server-side methods (`https.createServer`, `https.Server`, `https.ServerResponse`), you must enable the `enable_nodejs_http_server_modules` compatibility flag in addition to the [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) flag.

This flag is automatically enabled for Workers using a [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) of `2025-09-01` or later when `nodejs_compat` is enabled. For Workers using an earlier compatibility date, you can manually enable it by adding the flag to your `wrangler.toml`:

```

compatibility_flags = ["nodejs_compat", "enable_nodejs_http_server_modules"]

```

To use both client-side and server-side methods, enable both flags:

```

compatibility_flags = ["nodejs_compat", "enable_nodejs_http_modules", "enable_nodejs_http_server_modules"]

```

## get

An implementation of the Node.js [`https.get` ↗](https://nodejs.org/docs/latest/api/https.html#httpsgetoptions-callback) method.

The `get` method performs a GET request to the specified URL and invokes the callback with the response. This is a convenience method that simplifies making HTTPS GET requests without manually configuring request options.

Because `get` is a wrapper around `fetch()`, it may only be used within an exported `fetch` handler (or a similar handler). Outside of such a handler, calls to `get` will throw an error.

JavaScript

```

import { get } from "node:https";

export default {
  async fetch() {
    const { promise, resolve, reject } = Promise.withResolvers();
    get("https://example.com", (res) => {
      let data = "";
      res.setEncoding("utf8");
      res.on("data", (chunk) => {
        data += chunk;
      });
      res.on("end", () => {
        resolve(new Response(data));
      });
      res.on("error", reject);
    }).on("error", reject);
    return promise;
  },
};

```

The implementation of `get` in Workers is a wrapper around the global [fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) and is therefore subject to the same [limits](https://developers.cloudflare.com/workers/platform/limits/).

As shown in the example above, requests must be correctly awaited in the `fetch` handler using a promise; otherwise, the fetch may be canceled prematurely when the handler returns.

## request

An implementation of the Node.js [`https.request` ↗](https://nodejs.org/docs/latest/api/https.html#httpsrequestoptions-callback) method.

The `request` method creates an HTTPS request with customizable options like method, headers, and body. It provides full control over the request configuration and returns a Node.js [stream.Writable ↗](https://developers.cloudflare.com/workers/runtime-apis/nodejs/streams/) for sending request data.

Because `request` is a wrapper around `fetch()`, it may only be used within an exported `fetch` handler (or a similar handler). Outside of such a handler, calls to `request` will throw an error.

The request method accepts all options from [http.request](https://developers.cloudflare.com/workers/runtime-apis/nodejs/http#request) with some differences in default values:

* `protocol`: default `https:`
* `port`: default `443`
* `agent`: default `https.globalAgent`

JavaScript

```

import { request } from "node:https";
import { strictEqual, ok } from "node:assert";

export default {
  async fetch() {
    const { promise, resolve, reject } = Promise.withResolvers();
    const req = request(
      "https://developers.cloudflare.com/robots.txt",
      {
        method: "GET",
      },
      (res) => {
        strictEqual(res.statusCode, 200);
        let data = "";
        res.setEncoding("utf8");
        res.on("data", (chunk) => {
          data += chunk;
        });
        res.once("error", reject);
        res.on("end", () => {
          ok(data.includes("User-agent"));
          resolve(new Response(data));
        });
      },
    );
    req.end();
    return promise;
  },
};

```

The following additional options are not supported: `ca`, `cert`, `ciphers`, `clientCertEngine` (deprecated), `crl`, `dhparam`, `ecdhCurve`, `honorCipherOrder`, `key`, `passphrase`, `pfx`, `rejectUnauthorized`, `secureOptions`, `secureProtocol`, `servername`, `sessionIdContext`, `highWaterMark`.

## createServer

An implementation of the Node.js [https.createServer ↗](https://nodejs.org/docs/latest/api/https.html#httpscreateserveroptions-requestlistener) method.

The `createServer` method creates an HTTPS server instance that can handle incoming secure requests. It's a convenience function that creates a new `Server` instance and optionally sets up a request listener callback.

JavaScript

```

import { createServer } from "node:https";
import { httpServerHandler } from "cloudflare:node";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node.js HTTPS server!");
});

server.listen(8080);
export default httpServerHandler({ port: 8080 });

```

The `httpServerHandler` function integrates Node.js HTTPS servers with the Cloudflare Workers request model. When a request arrives at your Worker, the handler automatically routes it to your Node.js server running on the specified port. This bridge allows you to use familiar Node.js server patterns while benefiting from the Workers runtime environment, including automatic scaling, edge deployment, and integration with other Cloudflare services.

Note

Failing to call `close()` on an HTTPS server may result in the server being leaked. To prevent this, call `close()` when you're done with the server, or use explicit resource management:

JavaScript

```

import { createServer } from "node:https";

await using server = createServer((req, res) => {
  res.end("Hello World");
});
// Server will be automatically closed when it goes out of scope

```

## Agent

An implementation of the Node.js [https.Agent ↗](https://nodejs.org/docs/latest/api/https.html#class-httpsagent) class.

An [Agent ↗](https://nodejs.org/docs/latest/api/https.html#class-httpsagent) manages HTTPS connection reuse by maintaining request queues per host/port. In the Workers environment, however, such low-level management of network connections and ports is not relevant, because it is handled by Cloudflare's infrastructure instead. Accordingly, the Workers implementation of `Agent` is a stub that does not support connection pooling or keep-alive.

## Server

An implementation of the Node.js [https.Server ↗](https://nodejs.org/docs/latest/api/https.html#class-httpsserver) class.

In Node.js, the `https.Server` class represents an HTTPS server and provides methods for handling incoming secure requests. In Workers, handling of secure connections is provided by the Cloudflare infrastructure, so there is little practical difference between `https.Server` and `http.Server`. The Workers runtime provides this implementation for completeness, but most Workers should simply use [http.Server](https://developers.cloudflare.com/workers/runtime-apis/nodejs/http#server).

JavaScript

```

import { Server } from "node:https";
import { httpServerHandler } from "cloudflare:node";

const server = new Server((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "Hello from HTTPS Server!" }));
});

server.listen(8080);
export default httpServerHandler({ port: 8080 });

```

The following differences exist between the Workers implementation and Node.js:

* Connection management methods such as `closeAllConnections()` and `closeIdleConnections()` are not implemented due to the nature of the Workers environment.
* Only `listen()` variants with a port number or no parameters are supported: `listen()`, `listen(0, callback)`, `listen(callback)`, etc.
* The following server options are not supported: `maxHeaderSize`, `insecureHTTPParser`, `keepAliveTimeout`, `connectionsCheckingInterval`
* TLS/SSL-specific options such as `ca`, `cert`, `key`, `pfx`, `rejectUnauthorized`, `secureProtocol` are not supported in the Workers environment. If you need to use mTLS, use the [mTLS binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls/).

## Other differences between Node.js and Workers implementation of `node:https`

Because the Workers implementation of `node:https` is a wrapper around the global `fetch` API, there are some differences in behavior compared to Node.js:

* `Connection` headers are not used. Workers will manage connections automatically.
* `Content-Length` headers will be handled the same way as in the `fetch` API. If a body is provided, the header will be set automatically and manually set values will be ignored.
* `Expect: 100-continue` headers are not supported.
* Trailing headers are not supported.
* The `'continue'` event is not supported.
* The `'information'` event is not supported.
* The `'socket'` event is not supported.
* The `'upgrade'` event is not supported.
* Gaining direct access to the underlying `socket` is not supported.
* Configuring TLS-specific options like `ca`, `cert`, `key`, `rejectUnauthorized`, etc, is not supported.


---

---
title: net
description: You can use node:net to create a direct connection to servers via TCP sockets with net.Socket.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# net

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

You can use [node:net ↗](https://nodejs.org/api/net.html) to create a direct connection to servers via TCP sockets with [net.Socket ↗](https://nodejs.org/api/net.html#class-netsocket).

These functions use [connect](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#connect) functionality from the built-in `cloudflare:sockets` module.


index.js

```

import net from "node:net";

const exampleIP = "127.0.0.1";

export default {
  async fetch(req) {
    const socket = new net.Socket();
    socket.connect(4000, exampleIP, function () {
      console.log("Connected");
    });

    socket.write("Hello, Server!");
    socket.end();

    return new Response("Wrote to server", { status: 200 });
  },
};

```

index.ts

```

import net from "node:net";

const exampleIP = "127.0.0.1";

export default {
  async fetch(req): Promise<Response> {
    const socket = new net.Socket();
    socket.connect(4000, exampleIP, function () {
      console.log("Connected");
    });

    socket.write("Hello, Server!");
    socket.end();

    return new Response("Wrote to server", { status: 200 });
  },
} satisfies ExportedHandler;

```

Additionally, other APIs such as [net.BlockList ↗](https://nodejs.org/api/net.html#class-netblocklist) and [net.SocketAddress ↗](https://nodejs.org/api/net.html#class-netsocketaddress) are available.
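For example, `net.BlockList` can be used to maintain a simple set of IP deny rules. A minimal sketch (the addresses here are illustrative documentation-range values, not real endpoints):

JavaScript

```

import net from "node:net";
import { strictEqual } from "node:assert";

// Build a deny list of individual addresses and subnets,
// then test candidate IPs against it.
const blockList = new net.BlockList();
blockList.addAddress("203.0.113.7");
blockList.addSubnet("10.0.0.0", 8);

strictEqual(blockList.check("203.0.113.7"), true); // explicitly blocked
strictEqual(blockList.check("10.1.2.3"), true); // inside 10.0.0.0/8
strictEqual(blockList.check("192.0.2.1"), false); // not listed

```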

Note that the [net.Server ↗](https://nodejs.org/api/net.html#class-netserver) class is not supported by Workers.

The full `node:net` API is documented in the [Node.js documentation for node:net ↗](https://nodejs.org/api/net.html).



---

---
title: path
description: The node:path module provides utilities for working with file and directory paths. The node:path module can be accessed using:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# path

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [node:path ↗](https://nodejs.org/api/path.html) module provides utilities for working with file and directory paths. The `node:path` module can be accessed using:

JavaScript

```

import path from "node:path";

path.join("/foo", "bar", "baz/asdf", "quux", "..");
// Returns: '/foo/bar/baz/asdf'

```
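Other `node:path` helpers behave as in Node.js. For instance, `path.parse()` splits a path into its components (the path used here is just an illustrative value):

JavaScript

```

import path from "node:path";
import { deepStrictEqual } from "node:assert";

// path.parse() breaks a path into root, dir, base, ext, and name.
deepStrictEqual(path.parse("/home/user/report.pdf"), {
  root: "/",
  dir: "/home/user",
  base: "report.pdf",
  ext: ".pdf",
  name: "report",
});

```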

Refer to the [Node.js documentation for path ↗](https://nodejs.org/api/path.html) for more information.


---

---
title: process
description: The process module in Node.js provides a number of useful APIs related to the current process.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# process

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [process ↗](https://nodejs.org/docs/latest/api/process.html) module in Node.js provides a number of useful APIs related to the current process.

Initially, Workers supported only `nextTick`, `env`, `exit`, `getBuiltinModule`, `platform`, and `features` on `process`. The [enable\_nodejs\_process\_v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#enable-process-v2-implementation) flag extends this to include most Node.js `process` features.

Refer to the [Node.js documentation for process ↗](https://nodejs.org/docs/latest/api/process.html) for more information.

Workers-specific implementation details apply when adapting Node.js `process` support to a serverless environment; these are described in more detail below.

## `process.env`

In the Node.js implementation of `process.env`, the `env` object is a copy of the environment variables at the time the process was started. In the Workers implementation, there is no process-level environment, so by default `env` is an empty object. You can still set and get values from `env`, and those will be globally persistent for all Workers running in the same isolate and context (for example, the same Workers entry point).

When [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) is enabled and the [nodejs\_compat\_populate\_process\_env](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#enable-auto-populating-processenv) compatibility flag is set (enabled by default for compatibility dates on or after 2025-04-01), `process.env` will contain any [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/), [secrets](https://developers.cloudflare.com/workers/configuration/secrets/), or [version metadata](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/) that has been configured on your Worker.

Setting any value on `process.env` will coerce that value into a string.
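This coercion can be observed directly. A small sketch, runnable in Node.js as well as Workers (the variable names `RETRY_COUNT` and `DEBUG` are illustrative, not predefined):

JavaScript

```

import process from "node:process";
import { strictEqual } from "node:assert";

// Values assigned to process.env are coerced to strings.
process.env.RETRY_COUNT = 42;
strictEqual(typeof process.env.RETRY_COUNT, "string");
strictEqual(process.env.RETRY_COUNT, "42");

process.env.DEBUG = true;
strictEqual(process.env.DEBUG, "true");

```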

### Alternative: Import `env` from `cloudflare:workers`

Instead of using `process.env`, you can [import env from cloudflare:workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global) to access environment variables and all other bindings from anywhere in your code.

JavaScript

```

import * as process from "node:process";

export default {
  fetch(req, env) {
    // Set process.env.FOO to the value of env.FOO if process.env.FOO is not
    // already set and env.FOO is a string.
    process.env.FOO ??= (() => {
      if (typeof env.FOO === "string") {
        return env.FOO;
      }
    })();
  },
};

```

It is strongly recommended that you _do not_ replace the entire `process.env` object with the Cloudflare `env` object. Doing so will cause you to lose any environment variables that were set previously and will cause unexpected behavior for other Workers running in the same isolate. Specifically, it would cause inconsistency with the `process.env` object when accessed via named imports.

JavaScript

```

import * as process from "node:process";
import { env } from "node:process";

process.env === env; // true! they are the same object
process.env = {}; // replace the object! Do not do this!
process.env === env; // false! they are no longer the same object

// From this point forward, any changes to process.env will not be
// reflected in env, and vice versa!

```

## `process.nextTick()`

The Workers implementation of `process.nextTick()` is a wrapper for the standard Web Platform API [queueMicrotask() ↗](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/queueMicrotask).

JavaScript

```

import { env, nextTick } from "node:process";

env["FOO"] = "bar";
console.log(env["FOO"]); // Prints: bar

nextTick(() => {
  console.log("next tick");
});

```

## Stdio

[process.stdout ↗](https://nodejs.org/docs/latest/api/process.html#processstdout), [process.stderr ↗](https://nodejs.org/docs/latest/api/process.html#processstderr), and [process.stdin ↗](https://nodejs.org/docs/latest/api/process.html#processstdin) are supported as streams. `stdin` is treated as an empty readable stream. `stdout` and `stderr` are non-TTY writable streams that write to the normal logging output, prefixed with `stdout: ` and `stderr: ` respectively.

The line buffer stores writes to `stdout` or `stderr` until either a newline character (`\n`) is encountered or the next microtask runs, at which point the buffered output is flushed to the log.

This ensures compatibility with inspector and structured logging outputs.

## Current Working Directory

[process.cwd() ↗](https://nodejs.org/docs/latest/api/process.html#processcwd) is the _current working directory_, used as the default path for all filesystem operations, and is initialized to `/bundle`.

[process.chdir() ↗](https://nodejs.org/docs/latest/api/process.html#processchdirdirectory) allows modifying the `cwd` and is respected by FS operations when using `enable_nodejs_fs_module`.
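A minimal sketch of how `cwd` and `chdir` interact (the `/bundle` default applies in Workers; in plain Node.js, the initial `cwd` is wherever the process was launched):

JavaScript

```

import process from "node:process";
import { strictEqual } from "node:assert";

// process.chdir() changes the current working directory, and
// filesystem operations resolve relative paths against it.
const before = process.cwd();
process.chdir("/");
strictEqual(process.cwd(), "/");
process.chdir(before); // restore the original working directory
strictEqual(process.cwd(), before);

```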

## Hrtime

The [process.hrtime ↗](https://nodejs.org/docs/latest/api/process.html#processhrtimetime) high-resolution timer is available, but it is imprecise and provided for compatibility only.
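The API shape matches Node.js even though the resolution is reduced. A minimal sketch using `hrtime.bigint()` (note that in Workers, timers may not advance during pure computation, so elapsed time can read as zero):

JavaScript

```

import { hrtime } from "node:process";
import { ok } from "node:assert";

// hrtime.bigint() returns a nanosecond reading as a BigInt;
// subtracting two readings gives elapsed time.
const start = hrtime.bigint();
for (let i = 0; i < 1e5; i++); // busy work
const end = hrtime.bigint();

ok(typeof start === "bigint");
ok(end >= start);

```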


---

---
title: Streams
description: The Node.js streams API is the original API for working with streaming data in JavaScript, predating the WHATWG ReadableStream standard. A stream is an abstract interface for working with streaming data in Node.js. Streams can be readable, writable, or both. All streams are instances of EventEmitter.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Streams

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [Node.js streams API ↗](https://nodejs.org/api/stream.html) is the original API for working with streaming data in JavaScript, predating the [WHATWG ReadableStream standard ↗](https://streams.spec.whatwg.org/). A stream is an abstract interface for working with streaming data in Node.js. Streams can be readable, writable, or both. All streams are instances of [EventEmitter](https://developers.cloudflare.com/workers/runtime-apis/nodejs/eventemitter/).

Where possible, you should use the [WHATWG standard "Web Streams" API ↗](https://streams.spec.whatwg.org/), which is [supported in Workers](https://developers.cloudflare.com/workers/runtime-apis/streams/).

JavaScript

```

import { Readable, Transform } from "node:stream";
import { text } from "node:stream/consumers";
import { pipeline } from "node:stream/promises";

// A Node.js-style Transform that converts data to uppercase
// and appends a newline to the end of the output.
class MyTransform extends Transform {
  constructor() {
    super({ encoding: "utf8" });
  }
  _transform(chunk, _, cb) {
    this.push(chunk.toString().toUpperCase());
    cb();
  }
  _flush(cb) {
    this.push("\n");
    cb();
  }
}

export default {
  async fetch() {
    const chunks = [
      "hello ",
      "from ",
      "the ",
      "wonderful ",
      "world ",
      "of ",
      "node.js ",
      "streams!",
    ];

    function nextChunk(readable) {
      readable.push(chunks.shift());
      if (chunks.length === 0) readable.push(null);
      else queueMicrotask(() => nextChunk(readable));
    }

    // A Node.js-style Readable that emits chunks from the array...
    const readable = new Readable({
      encoding: "utf8",
      read() {
        nextChunk(readable);
      },
    });

    const transform = new MyTransform();
    await pipeline(readable, transform);
    return new Response(await text(transform));
  },
};

```

Refer to the [Node.js documentation for stream ↗](https://nodejs.org/api/stream.html) for more information.


---

---
title: StringDecoder
description: The node:string_decoder is a legacy utility module that predates the WHATWG standard TextEncoder and TextDecoder API. In most cases, you should use TextEncoder and TextDecoder instead. StringDecoder is available in the Workers runtime primarily for compatibility with existing npm packages that rely on it. StringDecoder can be accessed using:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# StringDecoder

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The [node:string\_decoder ↗](https://nodejs.org/api/string%5Fdecoder.html) is a legacy utility module that predates the WHATWG standard [TextEncoder](https://developers.cloudflare.com/workers/runtime-apis/encoding/#textencoder) and [TextDecoder](https://developers.cloudflare.com/workers/runtime-apis/encoding/#textdecoder) API. In most cases, you should use `TextEncoder` and `TextDecoder` instead. `StringDecoder` is available in the Workers runtime primarily for compatibility with existing npm packages that rely on it. `StringDecoder` can be accessed using:

JavaScript

```

const { StringDecoder } = require("node:string_decoder");
const decoder = new StringDecoder("utf8");

const cent = Buffer.from([0xc2, 0xa2]);
console.log(decoder.write(cent)); // Prints: ¢

const euro = Buffer.from([0xe2, 0x82, 0xac]);
console.log(decoder.write(euro)); // Prints: €

```

Refer to the [Node.js documentation for string\_decoder ↗](https://nodejs.org/dist/latest-v20.x/docs/api/string%5Fdecoder.html) for more information.


---

---
title: test
description: The MockTracker API in Node.js provides a means of tracking and managing mock objects in a test
environment.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# test

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

## `MockTracker`

The `MockTracker` API in Node.js provides a means of tracking and managing mock objects in a test environment.

JavaScript

```

import { mock } from "node:test";

const fn = mock.fn();
fn(1, 2, 3); // does nothing... but

console.log(fn.mock.callCount()); // Records how many times it was called
console.log(fn.mock.calls[0].arguments); // Records the arguments passed on each call

```

The full `MockTracker` API is documented in the [Node.js documentation for MockTracker ↗](https://nodejs.org/docs/latest/api/test.html#class-mocktracker).

The Workers implementation of `MockTracker` currently does not include an implementation of the [Node.js mock timers API ↗](https://nodejs.org/docs/latest/api/test.html#class-mocktimers).


---

---
title: timers
description: Use node:timers APIs to schedule functions to be executed later.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# timers

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

Use [node:timers ↗](https://nodejs.org/api/timers.html) APIs to schedule functions to be executed later.

This includes [setTimeout ↗](https://nodejs.org/api/timers.html#settimeoutcallback-delay-args) for calling a function after a delay, [setInterval ↗](https://nodejs.org/api/timers.html#setintervalcallback-delay-args) for calling a function repeatedly, and [setImmediate ↗](https://nodejs.org/api/timers.html#setimmediatecallback-args) for calling a function in the next iteration of the event loop.


index.js

```

import timers from "node:timers";

export default {
  async fetch() {
    console.log("first");
    const { promise: promise1, resolve: resolve1 } = Promise.withResolvers();
    const { promise: promise2, resolve: resolve2 } = Promise.withResolvers();
    timers.setTimeout(() => {
      console.log("last");
      resolve1();
    }, 10);

    timers.setTimeout(() => {
      console.log("next");
      resolve2();
    });

    await Promise.all([promise1, promise2]);

    return new Response("ok");
  },
};

```

index.ts

```

import timers from "node:timers";

export default {
  async fetch(): Promise<Response> {
    console.log("first");
    const { promise: promise1, resolve: resolve1 } = Promise.withResolvers<void>();
    const { promise: promise2, resolve: resolve2 } = Promise.withResolvers<void>();
    timers.setTimeout(() => {
      console.log("last");
      resolve1();
    }, 10);

    timers.setTimeout(() => {
      console.log("next");
      resolve2();
    });

    await Promise.all([promise1, promise2]);

    return new Response("ok");
  }
} satisfies ExportedHandler<Env>;

```

Note

Due to [security-based restrictions on timers](https://developers.cloudflare.com/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading) in Workers, timers only advance to the time of the last I/O. This means that while `setTimeout`, `setInterval`, and `setImmediate` will defer your function's execution until after other events have run, they will not delay it for the full time specified.

Note

When called from a global level (on [globalThis ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/globalThis)), functions such as `clearTimeout` and `setTimeout` will respect web standards rather than Node.js-specific functionality. For complete Node.js compatibility, you must call functions from the `node:timers` module.

The full `node:timers` API is documented in the [Node.js documentation for node:timers ↗](https://nodejs.org/api/timers.html).


---

---
title: tls
description: You can use node:tls to create secure connections to
external services using TLS (Transport Layer Security).
image: https://developers.cloudflare.com/dev-products-preview.png
---


# tls

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

You can use [node:tls ↗](https://nodejs.org/api/tls.html) to create secure connections to external services using [TLS ↗](https://developer.mozilla.org/en-US/docs/Web/Security/Transport%5FLayer%5FSecurity) (Transport Layer Security).

JavaScript

```

import { connect } from "node:tls";

// ... in a request handler ...
// The host and port below are placeholders; env.KEY and env.CERT are
// expected to hold your client key and certificate.
const connectionOptions = {
  host: "example.com",
  port: 443,
  key: env.KEY,
  cert: env.CERT,
};
const socket = connect(connectionOptions, () => {
  if (socket.authorized) {
    console.log("Connection authorized");
  }
});

socket.on("data", (data) => {
  console.log(data);
});

socket.on("end", () => {
  console.log("server ends connection");
});

```

The following APIs are available:

* [connect ↗](https://nodejs.org/api/tls.html#tlsconnectoptions-callback)
* [TLSSocket ↗](https://nodejs.org/api/tls.html#class-tlstlssocket)
* [checkServerIdentity ↗](https://nodejs.org/api/tls.html#tlscheckserveridentityhostname-cert)
* [createSecureContext ↗](https://nodejs.org/api/tls.html#tlscreatesecurecontextoptions)

All other APIs, including [tls.Server ↗](https://nodejs.org/api/tls.html#class-tlsserver) and [tls.createServer ↗](https://nodejs.org/api/tls.html#tlscreateserveroptions-secureconnectionlistener), are not supported and will throw a `Not implemented` error when called.

The full `node:tls` API is documented in the [Node.js documentation for node:tls ↗](https://nodejs.org/api/tls.html).


---

---
title: url
description: Returns the Punycode ASCII serialization of the domain. If domain is an invalid domain, the empty string is returned.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# url

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

## domainToASCII

Returns the Punycode ASCII serialization of the domain. If domain is an invalid domain, the empty string is returned.

JavaScript

```

import { domainToASCII } from "node:url";

console.log(domainToASCII("español.com"));
// Prints xn--espaol-zwa.com
console.log(domainToASCII("中文.com"));
// Prints xn--fiq228c.com
console.log(domainToASCII("xn--iñvalid.com"));
// Prints an empty string

```

## domainToUnicode

Returns the Unicode serialization of the domain. If domain is an invalid domain, the empty string is returned.

It performs the inverse operation to `domainToASCII()`.

JavaScript

```

import { domainToUnicode } from "node:url";

console.log(domainToUnicode("xn--espaol-zwa.com"));
// Prints español.com
console.log(domainToUnicode("xn--fiq228c.com"));
// Prints 中文.com
console.log(domainToUnicode("xn--iñvalid.com"));
// Prints an empty string

```


---

---
title: util
description: The promisify and callbackify APIs in Node.js provide a means of bridging between a Promise-based programming model and a callback-based model.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# util

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

## promisify/callbackify

The `promisify` and `callbackify` APIs in Node.js provide a means of bridging between a Promise-based programming model and a callback-based model.

The `promisify` method allows taking a Node.js-style callback function and converting it into a Promise-returning async function:

JavaScript

```

import { promisify } from "node:util";

function foo(args, callback) {
  try {
    callback(null, 1);
  } catch (err) {
    // Errors are emitted to the callback via the first argument.
    callback(err);
  }
}

const promisifiedFoo = promisify(foo);
const result = await promisifiedFoo("input"); // resolves with 1

```

Similarly to `promisify`, `callbackify` converts a Promise-returning async function into a Node.js-style callback function:

JavaScript

```

import { callbackify } from "node:util";

async function foo(args) {
  throw new Error("boom");
}

const callbackifiedFoo = callbackify(foo);

callbackifiedFoo("input", (err, value) => {
  // The promise rejection arrives as the first callback argument.
  if (err) throw err;
});

```

`callbackify` and `promisify` handle the bookkeeping involved in bridging between callbacks and promises, such as propagating errors correctly.

Refer to the [Node.js documentation for callbackify ↗](https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilcallbackifyoriginal) and [Node.js documentation for promisify ↗](https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilpromisifyoriginal) for more information.

## util.types

The `util.types` API provides a reliable and efficient way of checking that values are instances of various built-in types.

JavaScript

```

import { types } from "node:util";

types.isAnyArrayBuffer(new ArrayBuffer()); // Returns true
types.isAnyArrayBuffer(new SharedArrayBuffer()); // Returns true
types.isArrayBufferView(new Int8Array()); // Returns true
types.isArrayBufferView(Buffer.from("hello world")); // Returns true
types.isArrayBufferView(new DataView(new ArrayBuffer(16))); // Returns true
types.isArrayBufferView(new ArrayBuffer()); // Returns false

function foo() {
  types.isArgumentsObject(arguments); // Returns true
}

types.isAsyncFunction(function foo() {}); // Returns false
types.isAsyncFunction(async function foo() {}); // Returns true

// ... and so on

```

Warning

The Workers implementation currently does not provide implementations of the `util.types.isExternal()`, `util.types.isProxy()`, `util.types.isKeyObject()`, or `util.types.isWebAssemblyCompiledModule()` APIs.

For more about `util.types`, refer to the [Node.js documentation for util.types ↗](https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utiltypes).

## util.MIMEType

`util.MIMEType` provides convenience methods that allow you to more easily work with and manipulate [MIME types ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics%5Fof%5FHTTP/MIME%5Ftypes). For example:

JavaScript

```

import { MIMEType } from "node:util";

const myMIME = new MIMEType("text/javascript;key=value");

console.log(myMIME.type);
// Prints: text

console.log(myMIME.essence);
// Prints: text/javascript

console.log(myMIME.subtype);
// Prints: javascript

console.log(String(myMIME));
// Prints: text/javascript;key=value

```

For more about `util.MIMEType`, refer to the [Node.js documentation for util.MIMEType ↗](https://nodejs.org/api/util.html#class-utilmimetype).


---

---
title: zlib
description: The node:zlib module provides compression functionality implemented using Gzip, Deflate/Inflate, and Brotli.
To access it:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# zlib

Note

To enable built-in Node.js APIs and polyfills, add the nodejs\_compat compatibility flag to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This also enables nodejs\_compat\_v2 as long as your compatibility date is 2024-09-23 or later. [Learn more about the Node.js compatibility flag and v2](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag).

The `node:zlib` module provides compression functionality implemented using Gzip, Deflate/Inflate, and Brotli. To access it:

JavaScript

```

import zlib from "node:zlib";

```

The full `node:zlib` API is documented in the [Node.js documentation for node:zlib ↗](https://nodejs.org/api/zlib.html).


---

---
title: Performance and timers
description: Measure timing and performance, including timing of subrequests and other operations.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Performance and timers

## Background

The Workers runtime supports a subset of the [Performance API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Performance), used to measure timing and performance, as well as timing of subrequests and other operations.

### `performance.now()`

The [performance.now() method ↗](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) returns a timestamp in milliseconds, representing the time elapsed since `performance.timeOrigin`.

When Workers are deployed to Cloudflare, as a security measure to [mitigate against Spectre attacks](https://developers.cloudflare.com/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading), APIs that return timers, including [performance.now() ↗](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) and [Date.now() ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Date/now), only advance or increment after I/O occurs. Consider the following examples:

Time is frozen — start will have the exact same value as end.

```

const start = performance.now();
for (let i = 0; i < 1e6; i++) {
  // do expensive work
}
const end = performance.now();
const timing = end - start; // 0

```

Time advances, because a subrequest has occurred between start and end.

```

const start = performance.now();
const response = await fetch("https://developers.cloudflare.com/");
const end = performance.now();
const timing = end - start; // duration of the subrequest to developers.cloudflare.com

```

By wrapping an operation in calls to `performance.now()` or `Date.now()`, you can measure the timing of a subrequest, a KV read, an R2 object fetch, or any other form of I/O in your Worker.

In local development, however, timers increment regardless of whether I/O happens. This means that if you need to measure the timing of CPU-intensive code that does not involve I/O, you can run your Worker locally via [Wrangler](https://developers.cloudflare.com/workers/wrangler/), which uses the open-source Workers runtime, [workerd ↗](https://github.com/cloudflare/workerd), the same runtime that your Worker runs in when deployed to Cloudflare.

### `performance.timeOrigin`

The [performance.timeOrigin ↗](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin) API is a read-only property that returns a baseline timestamp to base other measurements off of.

In the Workers runtime, the `timeOrigin` property returns 0.


---

---
title: Request
description: Interface that represents an HTTP request.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Request

The [Request ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request) interface represents an HTTP request and is part of the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/).

## Background

The most common way you will encounter a `Request` object is as the `request` parameter passed to your Worker's [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/):

JavaScript

```

export default {
  async fetch(request, env, ctx) {
    return new Response('Hello World!');
  },
};

```

You may also want to construct a `Request` yourself when you need to modify a request object, because the incoming `request` parameter that you receive from the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) is immutable.

JavaScript

```

export default {
  async fetch(request, env, ctx) {
    const url = "https://example.com";
    const modifiedRequest = new Request(url, request);
    // ...
  },
};

```

The [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) invokes the `Request` constructor. The [RequestInit](#options) and [RequestInitCfProperties](#the-cf-property-requestinitcfproperties) types defined below also describe the valid parameters that can be passed to the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/).

---

## Constructor

JavaScript

```

let request = new Request(input, options)

```

### Parameters

* `input` string | Request  
   * Either a string that contains a URL, or an existing `Request` object.
* `options` options optional  
   * Optional options object that contains settings to apply to the `Request`.

#### `options`

An object containing properties that you want to apply to the request.

* `cache` `undefined | 'no-store' | 'no-cache'` optional  
   * Standard HTTP `cache` header. Only `cache: 'no-store'` and `cache: 'no-cache'` are supported. Any other cache header will result in a `TypeError` with the message `Unsupported cache mode: <attempted-cache-mode>`.
* `cf` RequestInitCfProperties optional  
   * Cloudflare-specific properties that can be set on the `Request` that control how Cloudflare’s global network handles the request.
* `method` ` string ` optional  
   * The HTTP request method. The default is `GET`. In Workers, all [HTTP request methods ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods) are supported, except for [CONNECT ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods/CONNECT).
* `headers` Headers optional  
   * A [Headers object ↗](https://developer.mozilla.org/en-US/docs/Web/API/Headers).
* `body` string | ReadableStream | FormData | URLSearchParams optional  
   * The request body, if any.  
   * Note that a request using the GET or HEAD method cannot have a body.
* `redirect` ` string ` optional  
   * The redirect mode to use: `follow`, `error`, or `manual`. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`.
* `signal` AbortSignal optional  
   * If provided, the request can be canceled by triggering an abort on the corresponding `AbortController`.
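As a sketch, several of the options above combined in one constructor call (the URL and body here are illustrative):

```javascript
// Build a POST request with a JSON body, explicit headers, and
// manual redirect handling.
const request = new Request("https://example.com/api", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ hello: "world" }),
  redirect: "manual",
});

console.log(request.method); // Prints: POST
console.log(request.headers.get("Content-Type")); // Prints: application/json
```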

#### The `cf` property (`RequestInitCfProperties`)

An object containing Cloudflare-specific properties that can be set on the `Request` object. For example:

JavaScript

```

// Disable ScrapeShield for this request.

fetch(event.request, { cf: { scrapeShield: false } })


```

Invalid or incorrectly-named keys in the `cf` object will be silently ignored. Consider using TypeScript and generating types by running [wrangler types](https://developers.cloudflare.com/workers/languages/typescript/#generate-types) to ensure proper use of the `cf` object.

* `apps` ` boolean ` optional  
   * Whether [Cloudflare Apps ↗](https://www.cloudflare.com/apps/) should be enabled for this request. Defaults to `true`.
* `cacheEverything` ` boolean ` optional  
   * Treats all content as static and caches all [file types](https://developers.cloudflare.com/cache/concepts/default-cache-behavior#default-cached-file-extensions) beyond the Cloudflare default cached content. Respects cache headers from the origin web server. This is equivalent to setting the Page Rule [**Cache Level** (to **Cache Everything**)](https://developers.cloudflare.com/rules/page-rules/reference/settings/). Defaults to `false`. This option applies to `GET` and `HEAD` request methods only.
* `cacheKey` ` string ` optional  
   * A request’s cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both.
* `cacheTags` Array<string> optional  
   * This option appends additional [**Cache-Tag**](https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/) headers to the response from the origin server. This allows for purges of cached content based on tags provided by the Worker, without modifications to the origin server. This is performed using the [**Purge by Tag**](https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/#purge-using-cache-tags) feature.
* `cacheTtl` ` number ` optional  
   * This option forces Cloudflare to cache the response for this request, regardless of what headers are seen on the response. This is equivalent to setting two Page Rules: [**Edge Cache TTL**](https://developers.cloudflare.com/cache/how-to/edge-browser-cache-ttl/) and [**Cache Level** (to **Cache Everything**)](https://developers.cloudflare.com/rules/page-rules/reference/settings/). The value must be zero or a positive number. A value of `0` indicates that the cache asset expires immediately. This option applies to `GET` and `HEAD` request methods only.
* `cacheTtlByStatus` `{ [key: string]: number }` optional  
   * This option is a version of the `cacheTtl` feature which chooses a TTL based on the response’s status code. If the response to this request has a status code that matches, Cloudflare will cache for the instructed time and override cache directives sent by the origin. For example: `{ "200-299": 86400, "404": 1, "500-599": 0 }`. The value can be any integer, including zero and negative integers. A value of `0` indicates that the cache asset expires immediately. Any negative value instructs Cloudflare not to cache at all. This option applies to `GET` and `HEAD` request methods only.
* `image` Object | null optional  
   * Enables [Image Resizing](https://developers.cloudflare.com/images/transform-images/) for this request. The possible values are described in [Transform images via Workers](https://developers.cloudflare.com/images/transform-images/transform-via-workers/) documentation.
* `polish` ` string ` optional  
   * Sets [Polish ↗](https://blog.cloudflare.com/introducing-polish-automatic-image-optimizati/) mode. The possible values are `lossy`, `lossless` or `off`.
* `resolveOverride` ` string ` optional  
   * Directs the request to an alternate origin server by overriding the DNS lookup. The value of `resolveOverride` specifies an alternate hostname which will be used when determining the origin IP address, instead of using the hostname specified in the URL. The `Host` header of the request will still match what is in the URL. Thus, `resolveOverride` allows a request to be sent to a different server than the URL / `Host` header specifies. However, `resolveOverride` will only take effect if both the URL host and the host specified by `resolveOverride` are within your zone. If either specifies a host from a different zone / domain, then the option will be ignored for security reasons. If you need to direct a request to a host outside your zone (while keeping the `Host` header pointing within your zone), first create a CNAME record within your zone pointing to the outside host, and then set `resolveOverride` to point at the CNAME record. Note that, for security reasons, it is not possible to set the `Host` header to specify a host outside of your zone unless the request is actually being sent to that host.
* `scrapeShield` ` boolean ` optional  
   * Whether [ScrapeShield ↗](https://blog.cloudflare.com/introducing-scrapeshield-discover-defend-dete/) should be enabled for this request, if otherwise configured for this zone. Defaults to `true`.
* `webp` ` boolean ` optional  
   * Enables or disables [WebP ↗](https://blog.cloudflare.com/a-very-webp-new-year-from-cloudflare/) image format in [Polish](https://developers.cloudflare.com/images/polish/).
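
As a sketch of how several of the caching options above combine in a single subrequest, the helper below caches an origin response at the edge for one day, keyed on the URL with its query string stripped. The tag name and URL handling are illustrative choices, not prescribed values:

JavaScript

```
// Cache origin responses at the edge for a day, keyed on the URL
// without its query string. The "my-app" tag is a placeholder.
async function fetchCached(request) {
  const url = new URL(request.url);
  const key = url.origin + url.pathname;
  return fetch(key, {
    cf: {
      cacheEverything: true, // cache regardless of file type
      cacheTtl: 86400,       // edge TTL in seconds
      cacheKey: key,         // ignore query strings for cache purposes
      cacheTags: ["my-app"], // enables Purge by Tag later
    },
  });
}
```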

---

## Properties

All properties of an incoming `Request` object (the request you receive from the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/)) are read-only. To modify the properties of an incoming request, create a new `Request` object and pass the options to modify to its [constructor](#constructor).

* `body` ReadableStream read-only  
   * Stream of the body contents.
* `bodyUsed` Boolean read-only  
   * Declares whether the body has been used in a response yet.
* `cf` IncomingRequestCfProperties read-only  
   * An object containing properties about the incoming request provided by Cloudflare’s global network.  
   * This property is read-only (unless created from an existing `Request`). To modify its values, pass in the new values on the [cf key of the init options argument](https://developers.cloudflare.com/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties) when creating a new `Request` object.
* `headers` Headers read-only  
   * A [Headers object ↗](https://developer.mozilla.org/en-US/docs/Web/API/Headers).  
   * Compared to browsers, Cloudflare Workers imposes very few restrictions on what headers you are allowed to send. For example, a browser will not allow you to set the `Cookie` header, since the browser is responsible for handling cookies itself. Workers, however, has no special understanding of cookies, and treats the `Cookie` header like any other header.  
Warning  
If the response is a redirect and the redirect mode is set to `follow` (see below), then all headers will be forwarded to the redirect destination, even if the destination is a different hostname or domain. This includes sensitive headers like `Cookie`, `Authorization`, or any application-specific headers. If this is not the behavior you want, you should set redirect mode to `manual` and implement your own redirect policy. Note that redirect mode defaults to `manual` for requests that originated from the Worker's client, so this warning only applies to `fetch()`es made by a Worker that are not proxying the original request.
* `method` string read-only  
   * Contains the request’s method, for example, `GET`, `POST`, etc.
* `redirect` string read-only  
   * The redirect mode to use: `follow`, `error`, or `manual`. The `fetch` method will automatically follow redirects if the redirect mode is set to `follow`. If set to `manual`, the `3xx` redirect response will be returned to the caller as-is. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`.
* `signal` AbortSignal read-only  
   * The `AbortSignal` corresponding to this request. If you use the [enable\_request\_signal](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#enable-requestsignal-for-incoming-requests) compatibility flag, you can attach an event listener to the signal. This allows you to perform cleanup tasks or write to logs before your Worker's invocation ends. For example, if you run the Worker below, and then abort the request from the client, a log will be written:  
   index.js  
   ```  
   export default {  
     async fetch(request, env, ctx) {  
       // This sets up an event listener that will be called if the client disconnects from your  
       // worker.  
       request.signal.addEventListener("abort", () => {  
         console.log("The request was aborted!");  
       });  
       const { readable, writable } = new IdentityTransformStream();  
       sendPing(writable);  
       return new Response(readable, {  
         headers: { "Content-Type": "text/plain" },  
       });  
     },  
   };  
   async function sendPing(writable) {  
     const writer = writable.getWriter();  
     const enc = new TextEncoder();  
     for (;;) {  
       // Send 'ping' every second to keep the connection alive  
       await writer.write(enc.encode("ping\r\n"));  
       await scheduler.wait(1000);  
     }  
   }  
   ```  
   index.ts  
   ```  
   export default {  
     async fetch(request, env, ctx): Promise<Response> {  
       // This sets up an event listener that will be called if the client disconnects from your  
       // worker.  
       request.signal.addEventListener('abort', () => {  
         console.log('The request was aborted!');  
       });  
       const { readable, writable } = new IdentityTransformStream();  
       sendPing(writable);  
       return new Response(readable, { headers: { 'Content-Type': 'text/plain' } });  
     },  
   } satisfies ExportedHandler<Env>;  
   async function sendPing(writable: WritableStream): Promise<void> {  
     const writer = writable.getWriter();  
     const enc = new TextEncoder();  
     for (;;) {  
       // Send 'ping' every second to keep the connection alive  
       await writer.write(enc.encode('ping\r\n'));  
       await scheduler.wait(1000);  
     }  
   }  
   ```
* `url` string read-only  
   * Contains the URL of the request.

### `IncomingRequestCfProperties`

In addition to the properties on the standard [Request ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request) object, the `request.cf` object on an inbound `Request` contains information about the request provided by Cloudflare’s global network.

All plans have access to:

* `asn` Number  
   * ASN of the incoming request, for example, `395747`.
* `asOrganization` string  
   * The organization which owns the ASN of the incoming request, for example, `Google Cloud`.
* `botManagement` Object | null  
   * Only set when using Cloudflare Bot Management. Object with the following properties: `score`, `verifiedBot`, `staticResource`, `ja3Hash`, `ja4`, and `detectionIds`. Refer to [Bot Management Variables](https://developers.cloudflare.com/bots/reference/bot-management-variables/) for more details.
* `clientAcceptEncoding` string | null  
   * If Cloudflare replaces the value of the `Accept-Encoding` header, the original value is stored in the `clientAcceptEncoding` property, for example, `"gzip, deflate, br"`.
* `clientQuicRtt` number | undefined  
   * The smoothed round-trip time (RTT) between Cloudflare and the client for QUIC connections, in milliseconds. Only present when the client connected over QUIC (HTTP/3). For example, `42`.
* `clientTcpRtt` number | undefined  
   * The smoothed round-trip time (RTT) between the client and Cloudflare for TCP connections, in milliseconds. Only present when the client connected over TCP (HTTP/1 and HTTP/2). For example, `22`.
* `colo` string  
   * The three-letter [IATA ↗](https://en.wikipedia.org/wiki/IATA%5Fairport%5Fcode) airport code of the data center that the request hit, for example, `"DFW"`.
* `country` string | null  
   * Country of the incoming request. The two-letter country code in the request. This is the same value as that provided in the `CF-IPCountry` header, for example, `"US"`.
* `edgeL4` Object | undefined  
   * Layer 4 transport statistics for the connection between the client and Cloudflare. Contains the following property:  
         * `deliveryRate` number - The most recent data delivery rate estimate for the connection, in bytes per second. For example, `123456`.
* `isEUCountry` string | null  
   * If the country of the incoming request is in the EU, this will return `"1"`. Otherwise, this property is omitted.
* `httpProtocol` string  
   * HTTP Protocol, for example, `"HTTP/2"`.
* `hostMetadata` Object | undefined  
   * Only populated when the incoming request is from a zone with custom hostname metadata. Refer to the Cloudflare for Platforms documentation for more about what you can add as [custom hostname metadata](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/custom-metadata/), and how it is exposed on the `hostMetadata` field.
* `requestPriority` string | null  
   * The browser-requested prioritization information in the request object, for example, `"weight=192;exclusive=0;group=3;group-weight=127"`.
* `tlsCipher` string  
   * The cipher for the connection to Cloudflare, for example, `"AEAD-AES128-GCM-SHA256"`.
* `tlsClientAuth` Object | null  
   * Various details about the client certificate (for mTLS connections). Refer to [Client certificate variables](https://developers.cloudflare.com/ssl/client-certificates/client-certificate-variables/) for more details.
* `tlsClientCiphersSha1` string  
   * The SHA-1 hash (Base64-encoded) of the cipher suite sent by the client during the TLS handshake, encoded in big-endian format. For example, `"GXSPDLP4G3X+prK73a4wBuOaHRc="`.
* `tlsClientExtensionsSha1` string  
   * The SHA-1 hash (Base64-encoded) of the TLS client extensions sent during the handshake, encoded in big-endian format. For example, `"OWFiM2I5ZDc0YWI0YWYzZmFkMGU0ZjhlYjhiYmVkMjgxNTU5YTU2Mg=="`.
* `tlsClientExtensionsSha1Le` string  
   * The SHA-1 hash (Base64-encoded) of the TLS client extensions sent during the handshake, encoded in little-endian format. For example, `"7zIpdDU5pvFPPBI2/PCzqbaXnRA="`.
* `tlsClientHelloLength` string  
   * The length of the client hello message sent in a [TLS handshake ↗](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/). For example, `"508"`. Specifically, the length of the bytestring of the client hello.
* `tlsClientRandom` string  
   * The value of the 32-byte random value provided by the client in a [TLS handshake ↗](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/). Refer to [RFC 8446 ↗](https://datatracker.ietf.org/doc/html/rfc8446#section-4.1.2) for more details.
* `tlsVersion` string  
   * The TLS version of the connection to Cloudflare, for example, `TLSv1.3`.
* `city` string | null  
   * City of the incoming request, for example, `"Austin"`.
* `continent` string | null  
   * Continent of the incoming request, for example, `"NA"`.
* `latitude` string | null  
   * Latitude of the incoming request, for example, `"30.27130"`.
* `longitude` string | null  
   * Longitude of the incoming request, for example, `"-97.74260"`.
* `postalCode` string | null  
   * Postal code of the incoming request, for example, `"78701"`.
* `metroCode` string | null  
   * Metro code (DMA) of the incoming request, for example, `"635"`.
* `region` string | null  
   * If known, the [ISO 3166-2 ↗](https://en.wikipedia.org/wiki/ISO%5F3166-2) name for the first level region associated with the IP address of the incoming request, for example, `"Texas"`.
* `regionCode` string | null  
   * If known, the [ISO 3166-2 ↗](https://en.wikipedia.org/wiki/ISO%5F3166-2) code for the first-level region associated with the IP address of the incoming request, for example, `"TX"`.
* `timezone` string  
   * Timezone of the incoming request, for example, `"America/Chicago"`.
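
A minimal helper can summarize a few of the fields above. Note that `request.cf` is only populated on real requests through Cloudflare's network, so this sketch falls back to `"unknown"` elsewhere (for example, in local tests):

JavaScript

```
// Summarize a few geolocation and TLS fields from request.cf, with
// fallbacks for environments where cf is absent.
function summarizeCf(request) {
  const cf = request.cf ?? {};
  return {
    colo: cf.colo ?? "unknown",
    country: cf.country ?? "unknown",
    city: cf.city ?? "unknown",
    tlsVersion: cf.tlsVersion ?? "unknown",
  };
}

// In a Worker, this would typically back the fetch() handler:
//   export default {
//     async fetch(request) { return Response.json(summarizeCf(request)); },
//   };
```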

Warning

The `request.cf` object is not available in the Cloudflare Workers dashboard or Playground preview editor.

---

## Methods

### Instance methods

These methods are only available on an instance of a `Request` object or through its prototype.

* `clone()` : Request  
   * Creates a copy of the `Request` object.
* `arrayBuffer()` : Promise<ArrayBuffer>  
   * Returns a promise that resolves with an [ArrayBuffer ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/ArrayBuffer) representation of the request body.
* `formData()` : Promise<FormData>  
   * Returns a promise that resolves with a [FormData ↗](https://developer.mozilla.org/en-US/docs/Web/API/FormData) representation of the request body.
* `json()` : Promise<Object>  
   * Returns a promise that resolves with a JSON representation of the request body.
* `text()` : Promise<string>  
   * Returns a promise that resolves with a string (text) representation of the request body.
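
A request body can be read only once; `clone()` lets you inspect the payload while keeping the original request forwardable. A small sketch:

JavaScript

```
// Read the JSON payload from a clone, leaving the original request's
// body untouched so it can still be passed to fetch().
async function peekJson(request) {
  const copy = request.clone();
  const payload = await copy.json(); // consumes the clone's body only
  return { payload, original: request };
}
```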

---

## The `Request` context

Each time a Worker is invoked by an incoming HTTP request, the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch) is called on your Worker. The `Request` context starts when the `fetch()` handler is called, and asynchronous tasks (such as making a subrequest using the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/)) can only be run inside the `Request` context:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    // Request context starts here
    return new Response('Hello World!');
  },
};
```

### When passing a promise to fetch event `.respondWith()`

If you pass a Response promise to the fetch event `.respondWith()` method, the request context is active during any asynchronous tasks which run before the Response promise has settled. You can pass the event to an async handler, for example:

JavaScript

```
addEventListener("fetch", (event) => {
  event.respondWith(eventHandler(event));
});

// No request context available here

async function eventHandler(event) {
  // Request context available here
  return new Response("Hello, Workers!");
}
```

### Errors when attempting to access an inactive `Request` context

Any attempt to use APIs such as `fetch()` or access the `Request` context during script startup will throw an exception:

JavaScript

```
const promise = fetch("https://example.com/"); // Error

async function eventHandler(event) { /* ... */ }
```

This code snippet will throw during script startup, and the `"fetch"` event listener will never be registered.

---

### Set the `Content-Length` header

The `Content-Length` header will be automatically set by the runtime based on whatever the data source for the `Request` is. Any value manually set by user code in the `Headers` will be ignored. To have a `Content-Length` header with a specific value specified, the `body` of the `Request` must be either a `FixedLengthStream` or a fixed-length value such as a string or `TypedArray`.

A `FixedLengthStream` is an identity `TransformStream` that permits only a fixed number of bytes to be written to it.

JavaScript

```
const { writable, readable } = new FixedLengthStream(11);

const enc = new TextEncoder();
const writer = writable.getWriter();
writer.write(enc.encode("hello world"));
writer.close();

const req = new Request('https://example.org', { method: 'POST', body: readable });
```

Using any other type of `ReadableStream` as the body of a request will result in chunked encoding being used.

---

## Differences

The Workers implementation of the `Request` interface includes several extensions to the web standard `Request` API. These differences are intentional and provide additional functionality specific to the Workers runtime.

TypeScript users

Workers type definitions (from `@cloudflare/workers-types` or generated via [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types)) define a `Request` type that includes Workers-specific properties like `cf`. This type is not directly compatible with the standard `Request` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions or create a new `Request` object.

### The `cf` property

Workers adds a `cf` property to the `Request` object that contains Cloudflare-specific metadata about the incoming request. This property is not part of the web standard, and is only available in the Workers runtime. Refer to [IncomingRequestCfProperties](#incomingrequestcfproperties) for details.

### The `headers` property

The `headers` property returns a Workers-specific [Headers](https://developers.cloudflare.com/workers/runtime-apis/headers/) object that includes additional methods like `getAll()` for `Set-Cookie` headers. Refer to the [Headers documentation](https://developers.cloudflare.com/workers/runtime-apis/headers/#differences) for details on how the Workers `Headers` implementation differs from the web standard.

### Immutability

Incoming `Request` objects passed to the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) are immutable. To modify properties of an incoming request, you must create a new `Request` object.
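
For example, a new `Request` constructed from the incoming one carries over its method, URL, and body, while the copy's `Headers` object is mutable. The header name and value below are illustrative:

JavaScript

```
// Return a copy of the request with one header added or replaced.
function withHeader(request, name, value) {
  const updated = new Request(request); // copies method, URL, body, etc.
  updated.headers.set(name, value);
  return updated;
}
```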

---

## Related resources

* [Examples: Modify request property](https://developers.cloudflare.com/workers/examples/modify-request-property/)
* [Examples: Accessing the cf object](https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/)
* [Reference: Response](https://developers.cloudflare.com/workers/runtime-apis/response/)
* Write your Worker code in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) for an optimized experience.


---

---
title: Response
description: Interface that represents an HTTP response.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Response

The `Response` interface represents an HTTP response and is part of the Fetch API.

---

## Constructor

JavaScript

```
let response = new Response(body, init);
```

### Parameters

* `body` optional  
   * An object that defines the body text for the response. Can be `null` or any one of the following types:  
         * BufferSource  
         * FormData  
         * ReadableStream  
         * URLSearchParams  
         * USVString
* `init` optional  
   * An `options` object that contains custom settings to apply to the response.

Valid options for the `options` object include:

* `cf` any | null  
   * An object that contains Cloudflare-specific information. This object is not part of the Fetch API standard and is only available in Cloudflare Workers. This field is only used by consumers of the Response for informational purposes and does not have any impact on Workers behavior.
* `encodeBody` string  
   * Workers compress data according to the `content-encoding` header when transmitting. To serve data that is already compressed, set this property to `"manual"`; otherwise, the default is `"automatic"`.
* `headers` Headers | ByteString  
   * Any headers to add to your response that are contained within a [Headers](https://developers.cloudflare.com/workers/runtime-apis/request/#parameters) object or object literal of [ByteString ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/String) key-value pairs.
* `status` int  
   * The status code for the response, such as `200`.
* `statusText` string  
   * The status message associated with the status code, such as, `OK`.
* `webSocket` WebSocket | null  
   * This is present in successful WebSocket handshake responses. For example, if a client sends a WebSocket upgrade request to an origin and a Worker intercepts the request and then forwards it to the origin and the origin replies with a successful WebSocket upgrade response, the Worker sees `response.webSocket`. This establishes a WebSocket connection proxied through a Worker. Note that you cannot intercept data flowing over a WebSocket connection.
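
The `status` and `headers` options are the most commonly used. As a small sketch, the helper below builds a JSON error response; the payload shape is an arbitrary choice for illustration:

JavaScript

```
// Build a JSON error response with an explicit status code.
function errorResponse(message, status = 400) {
  return new Response(JSON.stringify({ error: message }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}
```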

## Properties

* `response.body` Readable Stream  
   * A getter to get the body contents.
* `response.bodyUsed` boolean  
   * A boolean indicating if the body was used in the response.
* `response.headers` Headers  
   * The headers for the response.
* `response.ok` boolean  
   * A boolean indicating if the response was successful (status in the range `200`\-`299`).
* `response.redirected` boolean  
   * A boolean indicating if the response is the result of a redirect. If so, its URL list has more than one entry.
* `response.status` int  
   * The status code of the response (for example, `200` to indicate success).
* `response.statusText` string  
   * The status message corresponding to the status code (for example, `OK` for `200`).
* `response.url` string  
   * The URL of the response. The value is the final URL obtained after any redirects.
* `response.webSocket` WebSocket?  
   * This is present in successful WebSocket handshake responses. For example, if a client sends a WebSocket upgrade request to an origin and a Worker intercepts the request and then forwards it to the origin and the origin replies with a successful WebSocket upgrade response, the Worker sees `response.webSocket`. This establishes a WebSocket connection proxied through a Worker. Note that you cannot intercept data flowing over a WebSocket connection.

## Methods

### Instance methods

* `clone()` : Response  
   * Creates a clone of a [Response](#response) object.
* `json()` : Response  
   * Creates a new response with a JSON-serialized payload. Note that this is a static method, called as `Response.json()`.
* `redirect()` : Response  
   * Creates a new response that redirects to a different URL. Note that this is a static method, called as `Response.redirect()`.
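
The two static helpers are convenience constructors. A brief sketch, with a placeholder redirect target:

JavaScript

```
// Response.json() serializes a value and sets a JSON content type;
// Response.redirect() builds a redirect with a Location header.
const data = Response.json({ ok: true });
const moved = Response.redirect("https://example.com/new", 301);
```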

### Additional instance methods

`Response` implements the [Body ↗](https://developer.mozilla.org/en-US/docs/Web/API/Fetch%5FAPI/Using%5FFetch#body) mixin of the [Fetch API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Fetch%5FAPI), and therefore `Response` instances additionally have the following methods available:

* `arrayBuffer()` : Promise<ArrayBuffer>  
   * Takes a [Response](#response) stream, reads it to completion, and returns a promise that resolves with an [ArrayBuffer ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/ArrayBuffer).
* `formData()` : Promise<FormData>  
   * Takes a [Response](#response) stream, reads it to completion, and returns a promise that resolves with a [FormData ↗](https://developer.mozilla.org/en-US/docs/Web/API/FormData) object.
* `json()` : Promise<JSON>  
   * Takes a [Response](#response) stream, reads it to completion, and returns a promise that resolves with the result of parsing the body text as [JSON ↗](https://developer.mozilla.org/en-US/docs/Web/).
* `text()` : Promise<USVString>  
   * Takes a [Response](#response) stream, reads it to completion, and returns a promise that resolves with a [USVString ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/String) (text).
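
Each of these readers consumes the body stream, so only one can be used per response. The sketch below picks `json()` or `text()` based on the `Content-Type` header:

JavaScript

```
// Read a response body as JSON when the content type says so,
// otherwise as plain text.
async function readAs(response) {
  const type = response.headers.get("Content-Type") ?? "";
  return type.includes("json") ? response.json() : response.text();
}
```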

### Set the `Content-Length` header

The `Content-Length` header will be automatically set by the runtime based on whatever the data source for the `Response` is. Any value manually set by user code in the `Headers` will be ignored. To have a `Content-Length` header with a specific value specified, the `body` of the `Response` must be either a `FixedLengthStream` or a fixed-length value such as a string or `TypedArray`.

A `FixedLengthStream` is an identity `TransformStream` that permits only a fixed number of bytes to be written to it.

JavaScript

```
const { writable, readable } = new FixedLengthStream(11);

const enc = new TextEncoder();
const writer = writable.getWriter();
writer.write(enc.encode("hello world"));
writer.close();

return new Response(readable);
```

Using any other type of `ReadableStream` as the body of a response will result in chunked encoding being used.

---

## Differences

The Workers implementation of the `Response` interface includes several extensions to the web standard `Response` API. These differences are intentional and provide additional functionality specific to the Workers runtime.

TypeScript users

Workers type definitions (from `@cloudflare/workers-types` or generated via [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types)) define a `Response` type that includes Workers-specific properties like `cf` and `webSocket`. This type is not directly compatible with the standard `Response` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions.

### The `cf` property

Workers adds an optional `cf` property to the `Response` object. This property can be set in the `ResponseInit` options and is used for informational purposes by consumers of the Response. It does not affect Workers behavior.

### The `webSocket` property

Workers adds a `webSocket` property to the `Response` object to support WebSocket connections. This property is present in successful WebSocket handshake responses. Refer to [WebSockets](https://developers.cloudflare.com/workers/runtime-apis/websockets/) for more information.

### The `encodeBody` option

Workers adds an `encodeBody` option in `ResponseInit` that controls how the response body is compressed. Set this to `"manual"` when serving pre-compressed data to prevent automatic compression.
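
A minimal sketch, assuming `precompressed` already holds gzip bytes (for example, fetched from storage):

JavaScript

```
// Serve bytes that are already gzip-compressed without re-encoding them.
function serveGzip(precompressed) {
  return new Response(precompressed, {
    encodeBody: "manual", // Workers-specific; skip automatic compression
    headers: { "Content-Encoding": "gzip" },
  });
}
```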

### The `headers` property

The `headers` property returns a Workers-specific [Headers](https://developers.cloudflare.com/workers/runtime-apis/headers/) object that includes additional methods like `getAll()` for `Set-Cookie` headers. Refer to the [Headers documentation](https://developers.cloudflare.com/workers/runtime-apis/headers/#differences) for details on how the Workers `Headers` implementation differs from the web standard.

---

## Related resources

* [Examples: Modify response](https://developers.cloudflare.com/workers/examples/modify-response/)
* [Examples: Conditional response](https://developers.cloudflare.com/workers/examples/conditional-response/)
* [Reference: Request](https://developers.cloudflare.com/workers/runtime-apis/request/)
* Write your Worker code in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) for an optimized experience.


---

---
title: Remote-procedure call (RPC)
description: The built-in, JavaScript-native RPC system built into Workers and Durable Objects.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Remote-procedure call (RPC)

Note

To use RPC, [define a compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates) of `2024-04-03` or higher, or include `rpc` in your [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/).

Workers provide a built-in, JavaScript-native [RPC (Remote Procedure Call) ↗](https://en.wikipedia.org/wiki/Remote%5Fprocedure%5Fcall) system, allowing you to:

* Define public methods on your Worker that can be called by other Workers on the same Cloudflare account, via [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc)
* Define public methods on [Durable Objects](https://developers.cloudflare.com/durable-objects) that can be called by other Workers on the same Cloudflare account that declare a binding to it.

The RPC system is designed to feel as similar as possible to calling a JavaScript function in the same Worker. In most cases, you should be able to write code the same way you would if everything were in a single Worker.

## Example

For example, if Worker B implements the public method `add(a, b)`:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_b",
  "main": "./src/workerB.js"
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "worker_b"
main = "./src/workerB.js"
```

JavaScript

```js
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch() {
    return new Response("Hello from Worker B");
  }

  add(a, b) {
    return a + b;
  }
}
```

TypeScript

```ts
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch() {
    return new Response("Hello from Worker B");
  }

  add(a: number, b: number) {
    return a + b;
  }
}
```

Python

```python
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello from Worker B")

    def add(self, a: int, b: int) -> int:
        return a + b
```

Worker A can declare a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) to Worker B:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker_a",
  "main": "./src/workerA.js",
  "services": [
    {
      "binding": "WORKER_B",
      "service": "worker_b"
    }
  ]
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "worker_a"
main = "./src/workerA.js"

[[services]]
binding = "WORKER_B"
service = "worker_b"
```

Making it possible for Worker A to call the `add()` method from Worker B:

JavaScript

```js
export default {
  async fetch(request, env) {
    const result = await env.WORKER_B.add(1, 2);
    return new Response(String(result));
  },
};
```

Python

```python
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        result = await self.env.WORKER_B.add(1, 2)
        return Response(f"Result: {result}")
```

The client, in this case Worker A, calls Worker B and tells it to execute a specific procedure using specific arguments that the client provides. This is accomplished with standard JavaScript classes.

## All calls are asynchronous

Whether or not the method you are calling was declared `async` on the server side, it behaves asynchronously on the client side: you must `await` the result.

Note that RPC calls do not actually return `Promise`s; they return a type that behaves like one. The type is a custom "thenable": it implements a `then()` method, and JavaScript supports awaiting any thenable, so for the most part you can treat the return value like a `Promise`.

(We will see why the type is not actually a `Promise` a bit later.)

## Structured clonable types, and more

Nearly all types that are [Structured Cloneable ↗](https://developer.mozilla.org/en-US/docs/Web/API/Web%5FWorkers%5FAPI/Structured%5Fclone%5Falgorithm#supported%5Ftypes) can be used as a parameter or return value of an RPC method. This includes most basic "value" types in JavaScript, such as objects, arrays, strings, and numbers.

As an exception to Structured Clone, application-defined classes (or objects with custom prototypes) cannot be passed over RPC, except as described below.
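
As a local rule of thumb, if `structuredClone()` accepts a value, it can generally cross the RPC boundary (application-defined class instances being the exception just noted):

```javascript
// Rule-of-thumb check: values that survive structured cloning can
// generally be sent as RPC parameters or return values.
const payload = {
  ids: [1, 2, 3],
  meta: new Map([["region", "eu"]]),   // Maps are Structured Cloneable
  created: new Date(0),                // so are Dates
};

// structuredClone() produces a deep copy, just as RPC serialization
// delivers an independent copy to the other Worker.
const copy = structuredClone(payload);
```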

The RPC system also supports a number of types that are not Structured Cloneable, including:

* Functions, which are replaced by stubs that call back to the sender.
* Application-defined classes that extend `RpcTarget`, which are similarly replaced by stubs.
* [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/) and [WritableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/), with automatic streaming flow control.
* [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) and [Response](https://developers.cloudflare.com/workers/runtime-apis/response/), for conveniently representing HTTP messages.
* RPC stubs themselves, even if the stub was received from a third Worker.

## Functions

You can send a function over RPC. When you do so, the function is replaced by a "stub". The recipient can call the stub like a function, but doing so makes a new RPC back to the place where the function originated.

### Return functions from RPC methods

Consider the following two Workers, connected via a [Service Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc). The counter service provides the RPC method `newCounter()`, which returns a function:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "counter-service",
  "main": "./src/counterService.js"
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "counter-service"
main = "./src/counterService.js"
```

JavaScript

```js
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch() {
    return new Response("Hello from counter-service");
  }

  async newCounter() {
    let value = 0;
    return (increment = 0) => {
      value += increment;
      return value;
    };
  }
}
```

This function can then be called by the client Worker:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "client_worker",
  "main": "./src/clientWorker.js",
  "services": [
    {
      "binding": "COUNTER_SERVICE",
      "service": "counter-service"
    }
  ]
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "client_worker"
main = "./src/clientWorker.js"

[[services]]
binding = "COUNTER_SERVICE"
service = "counter-service"
```

JavaScript

```js
export default {
  async fetch(request, env) {
    using f = await env.COUNTER_SERVICE.newCounter();
    await f(2); // returns 2
    await f(1); // returns 3
    const count = await f(-5); // returns -2

    return new Response(String(count));
  },
};
```

TypeScript

```ts
export default {
  async fetch(request: Request, env: Env) {
    using f = await env.COUNTER_SERVICE.newCounter();
    await f(2); // returns 2
    await f(1); // returns 3
    const count = await f(-5); // returns -2

    return new Response(String(count));
  },
};
```

Note

Refer to [Explicit Resource Management](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle) to learn more about the `using` declaration shown in the example above.

How is this possible? The system is not serializing the function itself. When the function returned by `CounterService` is called, it runs within `CounterService` — even if it is called by another Worker.

Under the hood, the caller is not really calling the function itself, but a "stub". A stub is a [Proxy ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Proxy) object that allows the client to call the remote service as if it were local, running in the same Worker. Behind the scenes, it calls back to the Worker that implements `CounterService` and asks it to execute the function closure that was returned earlier.

### Send functions as parameters of RPC methods

You can also send a function in the parameters of an RPC. This enables the "server" to call back to the "client", reversing the direction of the relationship.

Because of this, the words "client" and "server" can be ambiguous when talking about RPC. The "server" is a Durable Object or WorkerEntrypoint, and the "client" is the Worker that invoked the server via a binding. But, RPCs can flow both ways between the two. When talking about an individual RPC, we recommend instead using the words "caller" and "callee".
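
As a local illustration of this direction reversal (a plain class stands in for a real `WorkerEntrypoint` behind a Service Binding; `TaskService` and `runWithProgress()` are assumed names, not a real API):

```javascript
// Local sketch only: over real RPC, the callee would be a WorkerEntrypoint
// and the callback would arrive as a stub.
class TaskService {
  async runWithProgress(onProgress) {
    // The callee invokes the function it received as a parameter,
    // calling back into the caller.
    await onProgress(50);
    await onProgress(100);
    return "done";
  }
}

// The caller passes a function as an argument.
async function demo() {
  const reported = [];
  const status = await new TaskService().runWithProgress((pct) =>
    reported.push(pct),
  );
  return { status, reported };
}
```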

## Class Instances

To use an instance of a class that you define as a parameter or return value of an RPC method, you must extend the built-in `RpcTarget` class.

Consider the following example:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "counter",
  "main": "./src/counter.js"
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "counter"
main = "./src/counter.js"
```

JavaScript

```js
import { WorkerEntrypoint, RpcTarget } from "cloudflare:workers";

class Counter extends RpcTarget {
  #value = 0;

  increment(amount) {
    this.#value += amount;
    return this.#value;
  }

  get value() {
    return this.#value;
  }
}

export class CounterService extends WorkerEntrypoint {
  async newCounter() {
    return new Counter();
  }
}

export default {
  fetch() {
    return new Response("ok");
  },
};
```

TypeScript

```ts
import { WorkerEntrypoint, RpcTarget } from "cloudflare:workers";

class Counter extends RpcTarget {
  #value = 0;

  increment(amount: number) {
    this.#value += amount;
    return this.#value;
  }

  get value() {
    return this.#value;
  }
}

export class CounterService extends WorkerEntrypoint {
  async newCounter() {
    return new Counter();
  }
}

export default {
  fetch() {
    return new Response("ok");
  },
};
```

The method `increment` can be called directly by the client, as can the public property `value`:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "client-worker",
  "main": "./src/clientWorker.js",
  "services": [
    {
      "binding": "COUNTER_SERVICE",
      "service": "counter",
      "entrypoint": "CounterService"
    }
  ]
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "client-worker"
main = "./src/clientWorker.js"

[[services]]
binding = "COUNTER_SERVICE"
service = "counter"
entrypoint = "CounterService"
```

JavaScript

```js
export default {
  async fetch(request, env) {
    using counter = await env.COUNTER_SERVICE.newCounter();

    await counter.increment(2); // returns 2
    await counter.increment(1); // returns 3
    await counter.increment(-5); // returns -2

    const count = await counter.value; // returns -2

    return new Response(String(count));
  },
};
```

TypeScript

```ts
export default {
  async fetch(request: Request, env: Env) {
    using counter = await env.COUNTER_SERVICE.newCounter();

    await counter.increment(2); // returns 2
    await counter.increment(1); // returns 3
    await counter.increment(-5); // returns -2

    const count = await counter.value; // returns -2

    return new Response(String(count));
  },
};
```

Note

Refer to [Explicit Resource Management](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle) to learn more about the `using` declaration shown in the example above.

Classes that extend `RpcTarget` work a lot like functions: the object itself is not serialized, but is instead replaced by a stub. In this case, the stub itself is not callable, but its methods are. Calling any method on the stub actually makes an RPC back to the original object, where it was created.

As shown above, you can also access properties of classes. Properties behave like RPC methods that don't take any arguments — you await the property to asynchronously fetch its current value. Note that the act of awaiting the property (which, behind the scenes, calls `.then()` on it) is what causes the property to be fetched. If you do not use `await` when accessing the property, it will not be fetched.

Note

While it's possible to define a similar interface to the caller using an object that contains many functions, this is less efficient. If you return an object that contains five functions, then you are creating five stubs. If you return a class instance, where the class declares five methods, you are only returning a single stub. Returning a single stub is often more efficient and easier to reason about. Moreover, when returning a plain object (not a class), non-function properties of the object will be transmitted at the time the object itself is transmitted; they cannot be fetched asynchronously on-demand.

Note

Classes which do not inherit `RpcTarget` cannot be sent over RPC at all. This differs from Structured Clone, which defines application-defined classes as clonable. Why the difference? By default, the Structured Clone algorithm simply ignores an object's class entirely. So, the recipient receives a plain object, containing the original object's instance properties but entirely missing its original type. This behavior is rarely useful in practice, and could be confusing if the developer had intended the class to be treated as an `RpcTarget`. So, Workers RPC has chosen to disallow classes that are not `RpcTarget`s, to avoid any confusion.

### Promise pipelining

When you call an RPC method and get back an object, it's common to immediately call a method on the object:

JavaScript

```js
// Two round trips.
using counter = await env.COUNTER_SERVICE.getCounter();
await counter.increment();
```

But consider the case where the Worker service that you are calling may be far away across the network, as in the case of [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) or [Durable Objects](https://developers.cloudflare.com/durable-objects). The code above makes two round trips, once when calling `getCounter()`, and again when calling `.increment()`. We'd like to avoid this.

With most RPC systems, the only way to avoid the problem would be to combine the two calls into a single "batch" call, perhaps called `getCounterAndIncrement()`. However, this makes the interface worse. You wouldn't design a local interface this way.

Workers RPC allows a different approach: You can simply omit the first `await`:

JavaScript

```js
// Only one round trip! Note the missing `await`.
using promiseForCounter = env.COUNTER_SERVICE.getCounter();
await promiseForCounter.increment();
```

In this code, `getCounter()` returns a promise for a counter. Normally, the only thing you would do with a promise is `await` it. However, Workers RPC promises are special: they also allow you to initiate speculative calls on the future result of the promise. These calls are sent to the server immediately, without waiting for the initial call to complete. Thus, multiple chained calls can be completed in a single round trip.

How does this work? The promise returned by an RPC is not a real JavaScript `Promise`. Instead, it is a custom ["Thenable" ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Promise#thenables). It has a `.then()` method like `Promise`, which allows it to be used in all the places where you'd use a normal `Promise`. For instance, you can `await` it. But, in addition to that, an RPC promise also acts like a stub. Calling any method name on the promise forms a speculative call on the promise's eventual result. This is known as "promise pipelining".
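
To build intuition, here is a tiny local sketch of the idea. This is illustrative only, not the Workers implementation: a thenable Proxy whose property accesses schedule calls against the promise's eventual result.

```javascript
// Illustrative sketch only, NOT the actual Workers RPC implementation.
// A "pipelined promise" is a thenable Proxy: awaiting it resolves the
// underlying promise, while calling a method on it schedules that call
// against the promise's eventual result.
function pipeline(promise) {
  return new Proxy({}, {
    get(_target, prop) {
      if (prop === "then") return promise.then.bind(promise);
      // Any other property access pipelines a method call onto the result.
      return (...args) => pipeline(promise.then((obj) => obj[prop](...args)));
    },
  });
}

// Usage: no intermediate await between getCounter() and increment().
// `fakeService` is a stand-in for a service binding.
async function demo() {
  const fakeService = { getCounter: async () => ({ increment: (n) => n + 1 }) };
  const counter = pipeline(fakeService.getCounter());
  return await counter.increment(41);
}
```

In the real system, the speculative call is additionally sent over the network before the first result arrives, which is what saves the round trip.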

This works when calling properties of objects returned by RPC methods as well. For example:

JavaScript

```js
import { WorkerEntrypoint } from "cloudflare:workers";

export class MyService extends WorkerEntrypoint {
  async foo() {
    return {
      bar: {
        baz: () => "qux",
      },
    };
  }
}
```

JavaScript

```js
export default {
  async fetch(request, env) {
    using foo = env.MY_SERVICE.foo();
    let baz = await foo.bar.baz();
    return new Response(baz);
  },
};
```

If the initial RPC ends up throwing an exception, any pipelined calls will fail with the same exception.

## ReadableStream, WritableStream, Request and Response

You can send and receive [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/), [WritableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/), [Request](https://developers.cloudflare.com/workers/runtime-apis/request/), and [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) using RPC methods. When doing so, bytes in the body are automatically streamed with appropriate flow control. This allows you to send messages over RPC that are larger than [the typical 32 MiB limit](#limitations).

Only [byte-oriented streams ↗](https://developer.mozilla.org/en-US/docs/Web/API/Streams%5FAPI/Using%5Freadable%5Fbyte%5Fstreams) (streams with an underlying byte source of `type: "bytes"`) are supported.

In all cases, ownership of the stream is transferred to the recipient. The sender can no longer read/write the stream after sending it. If the sender wishes to keep its own copy, it can use the [tee() method of ReadableStream ↗](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/tee) or the [clone() method of Request or Response ↗](https://developer.mozilla.org/en-US/docs/Web/API/Response/clone). Keep in mind that doing this may force the system to buffer bytes and lose the benefits of flow control.
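
For example, a sketch of keeping a local copy with `tee()` before sending one branch over RPC. Only the standard streams API is shown here; the RPC send itself is omitted, and note that a stream sent over Workers RPC must use an underlying byte source (`type: "bytes"`):

```javascript
// Split a stream so one branch could be sent over RPC while the other is
// kept locally. tee() may buffer if the branches are read at different
// rates, which is the flow-control cost mentioned above.
const source = new ReadableStream({
  start(controller) {
    controller.enqueue(new Uint8Array([1, 2, 3]));
    controller.close();
  },
});

const [forLocalUse, forRpc] = source.tee();
// `forRpc` could now be passed as an RPC parameter; after that, only the
// recipient may read it. The local branch is still readable here:
async function readAll(stream) {
  const reader = stream.getReader();
  const bytes = [];
  while (true) {
    const { value, done } = await reader.read();
    if (done) return bytes;
    bytes.push(...value);
  }
}
```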

## Forwarding RPC stubs

A stub received over RPC from one Worker can be forwarded over RPC to another Worker.

JavaScript

```js
using counter = env.COUNTER_SERVICE.getCounter();
await env.ANOTHER_SERVICE.useCounter(counter);
```

Here, three different Workers are involved:

1. The calling Worker (we'll call this the "introducer")
2. `COUNTER_SERVICE`
3. `ANOTHER_SERVICE`

When `ANOTHER_SERVICE` calls a method on the `counter` that is passed to it, this call will automatically be proxied through the introducer and on to the [RpcTarget](https://developers.cloudflare.com/workers/runtime-apis/rpc/) class implemented by `COUNTER_SERVICE`.

In this way, the introducer Worker can connect two Workers that did not otherwise have any ability to form direct connections to each other.

Currently, this proxying only lasts until the end of the Workers' execution contexts. A proxy connection cannot be persisted for later use.

## Video Tutorial

In this video, we explore how Cloudflare Workers support Remote Procedure Calls (RPC) to simplify communication between Workers. Learn how to implement RPC in your JavaScript applications and build serverless solutions with ease. Whether you're managing microservices or optimizing web architecture, this tutorial will show you how to quickly set up and use Cloudflare Workers for RPC calls. By the end of this video, you'll understand how to call functions between Workers, pass functions as arguments, and implement user authentication with Cloudflare Workers.

## More Details

* [ Lifecycle ](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/)
* [ Reserved Methods ](https://developers.cloudflare.com/workers/runtime-apis/rpc/reserved-methods/)
* [ Visibility and Security Model ](https://developers.cloudflare.com/workers/runtime-apis/rpc/visibility/)
* [ TypeScript ](https://developers.cloudflare.com/workers/runtime-apis/rpc/typescript/)
* [ Error handling ](https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/)

## Limitations

* [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) is currently ignored when making RPC calls. If Smart Placement is enabled for Worker A, and Worker B declares a [Service Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) to it, when Worker B calls Worker A via RPC, Worker A will run locally, on the same machine.
* The maximum serialized RPC payload is 32 MiB. Consider using [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/) when returning more data:

  ```js
  import { WorkerEntrypoint } from "cloudflare:workers";

  export class MyService extends WorkerEntrypoint {
    async foo() {
      // Although this works, it puts a lot of memory pressure on the isolate.
      // If possible, streaming the data from its original source is much
      // preferred and would yield better performance. If you must buffer the
      // data into memory, consider chunking it into smaller pieces.
      const sizeInBytes = 33 * 1024 * 1024; // 33 MiB
      const arr = new Uint8Array(sizeInBytes);
      return new ReadableStream({
        start(controller) {
          controller.enqueue(arr);
          controller.close();
        },
      });
    }
  }
  ```


---

---
title: Error handling
description: How exceptions, stack traces, and logging works with the Workers RPC system.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Error handling

## Exceptions

An exception thrown by an RPC method implementation will propagate to the caller. If it is one of the standard JavaScript `Error` types, the `message` and the prototype's `name` are retained, though the stack trace is not.
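
Conceptually, the caller sees a fresh local error carrying only the remote `name` and `message`. The following is a hand-written simulation of that boundary; `rethrowLikeRpc` is not a Workers API:

```javascript
// Simulation of the RPC error boundary; rethrowLikeRpc() is NOT a Workers
// API, it only illustrates what survives propagation.
function rethrowLikeRpc(remoteError) {
  // The standard constructor is looked up from the prototype's name...
  const Ctor = globalThis[remoteError.name] ?? Error;
  // ...and a fresh error is built from the message alone: the remote stack
  // trace and own properties (such as `cause`) do not come across.
  return new Ctor(remoteError.message);
}

const remote = new TypeError("expected a number", { cause: "bad input" });
const propagated = rethrowLikeRpc(remote);
// propagated keeps the name and message, but not the cause or stack.
```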

### Unsupported error types

* If an [AggregateError ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/AggregateError) is thrown by an RPC method, it is not propagated back to the caller.
* The [SuppressedError ↗](https://github.com/tc39/proposal-explicit-resource-management?tab=readme-ov-file#the-suppressederror-error) type from the Explicit Resource Management proposal is not currently implemented or supported in Workers.
* Own properties of error objects, such as the [cause ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Error/cause) property, are not propagated back to the caller.

## Additional properties

For some remote exceptions, the runtime may set properties on the propagated exception to provide more information about the error; see [Durable Object error handling](https://developers.cloudflare.com/durable-objects/best-practices/error-handling) for more details.


---

---
title: Lifecycle
description: Memory management, resource management, and the lifecycle of RPC stubs.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Lifecycle

## Lifetimes, Memory and Resource Management

When you call another Worker over RPC using a Service binding, you are using memory in the Worker you are calling. Consider the following example:

JavaScript

```js
let user = await env.USER_SERVICE.findUser(id);
```

Assume that `findUser()` on the server side returns an object extending `RpcTarget`, thus `user` on the client side ends up being a stub pointing to that remote object.

As long as the stub still exists on the client, the corresponding object on the server cannot be garbage collected. But, each isolate has its own garbage collector which cannot see into other isolates. So, in order for the server's isolate to know that the object can be collected, the calling isolate must send it an explicit signal saying so, called "disposing" the stub.

In many cases (described below), the system will automatically realize when a stub is no longer needed, and will dispose it automatically. However, for best performance, your code should dispose stubs explicitly when it is done with them.

## Explicit Resource Management

To ensure resources are properly disposed of, you should use [Explicit Resource Management ↗](https://github.com/tc39/proposal-explicit-resource-management), a new JavaScript language feature that allows you to explicitly signal when resources can be disposed of. Explicit Resource Management is a Stage 3 TC39 proposal that is [coming to V8 soon ↗](https://bugs.chromium.org/p/v8/issues/detail?id=13559).

Explicit Resource Management adds the following language features:

* The [using declaration ↗](https://github.com/tc39/proposal-explicit-resource-management?tab=readme-ov-file#using-declarations)
* [Symbol.dispose and Symbol.asyncDispose ↗](https://github.com/tc39/proposal-explicit-resource-management?tab=readme-ov-file#additions-to-symbol)

If a variable is declared with `using`, when the variable is no longer in scope, the variable's disposer will be invoked. For example:

JavaScript

```js
async function sendEmail(id, message) {
  using user = await env.USER_SERVICE.findUser(id);
  await user.sendEmail(message);

  // user[Symbol.dispose]() is implicitly called at the end of the scope.
}
```

`using` declarations are useful to make sure you can't forget to dispose stubs — even if your code is interrupted by an exception.

### How to use the `using` declaration in your Worker

[Wrangler](https://developers.cloudflare.com/workers/wrangler/) v4+ supports the `using` keyword natively. If you are using an earlier version of Wrangler, you will need to manually dispose of resources instead.

The following code:

JavaScript

```js
{
  using counter = await env.COUNTER_SERVICE.newCounter();
  await counter.increment(2);
  await counter.increment(4);
}
```

...is equivalent to:

JavaScript

```js
{
  const counter = await env.COUNTER_SERVICE.newCounter();
  try {
    await counter.increment(2);
    await counter.increment(4);
  } finally {
    counter[Symbol.dispose]();
  }
}
```

## Automatic disposal and execution contexts

The RPC system automatically disposes of stubs in the following cases:

### End of event handler / execution context

When an event handler is "done", any stubs created as part of the event are automatically disposed.

For example, consider a [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch) which handles incoming HTTP events. The handler may make outgoing RPCs as part of handling the event, and those may return stubs. When the final HTTP response is sent, the handler is "done", and all stubs are immediately disposed.

More precisely, the event has an "execution context", which begins when the handler is first invoked, and ends when the HTTP response is sent. The execution context may also end early if the client disconnects before receiving a response, or it can be extended past its normal end point by calling [ctx.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/context).

For example, the Worker below does not make use of the `using` declaration, but stubs will be disposed of once the `fetch()` handler returns a response:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    let authResult = await env.AUTH_SERVICE.checkCookie(
      request.headers.get("Cookie"),
    );
    if (!authResult.authorized) {
      return new Response("Not authorized", { status: 403 });
    }

    let profile = await authResult.user.getProfile();

    return new Response(`Hello, ${profile.name}!`);
  },
};
```

A Worker invoked via RPC also has an execution context. The context begins when an RPC method on a `WorkerEntrypoint` is invoked. If no stubs are passed in the parameters or results of this RPC, the context ends (the event is "done") when the RPC returns. However, if any stubs are passed, then the execution context is implicitly extended until all such stubs are disposed (and all calls made through them have returned). As with HTTP, if the client disconnects, the server's execution context is canceled immediately, regardless of whether stubs still exist. A client that is itself another Worker is considered to have disconnected when its own execution context ends. Again, the context can be extended with [ctx.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/context).

### Stubs received as parameters in an RPC call

When stubs are received in the parameters of an RPC, those stubs are automatically disposed when the call returns. If you wish to keep the stubs longer than that, you must call the `dup()` method on them.
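For example, a server-side method that wants to keep a callback stub past the end of the call must duplicate it first. The following is a sketch; the `MyService` class, the `saveCallback` method, and the `#callback` field are illustrative names, not part of the API:

```
import { WorkerEntrypoint } from "cloudflare:workers";

export class MyService extends WorkerEntrypoint {
  #callback;

  // `callback` is a stub received as a parameter. It would normally be
  // disposed automatically when this call returns, so dup() it to retain
  // a copy for later use. (Remember to dispose the duplicate eventually.)
  saveCallback(callback) {
    this.#callback = callback.dup();
  }

  async notify(message) {
    if (this.#callback) {
      await this.#callback(message);
    }
  }
}
```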

### Disposing RPC objects disposes stubs that are part of that object

When an RPC returns any kind of object, that object will have a disposer added by the system. Disposing it will dispose all stubs returned by the call. For instance, if an RPC returns an array of four stubs, the array itself will have a disposer that disposes all four stubs. The only time the value returned by an RPC does not have a disposer is when it is a primitive value, such as a number or string. These types cannot have disposers added to them, but because these types cannot themselves contain stubs, there is no need for a disposer in this case.

This means you should almost always store the result of an RPC into a `using` declaration:

JavaScript

```
using result = await stub.foo();
```

This way, if the result contains any stubs, they will be disposed of. Even if you don't expect the RPC to return stubs, if it returns any kind of an object, it is a good idea to store it into a `using` declaration. This way, if the RPC is extended in the future to return stubs, your code is ready.

If you decide you want to keep a returned stub beyond the scope of the `using` declaration, you can call `dup()` on the stub before the end of the scope. (Remember to explicitly dispose the duplicate later.)
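For instance, the following sketch keeps one stub from a result past the `using` scope (`stub.foo()` and the `someStub` property are illustrative):

```
let keeper;
{
  using result = await stub.foo();
  // Duplicate the stub we want to keep before the scope ends.
  keeper = result.someStub.dup();
}
// `result` (and any stubs it contained) is disposed here, but
// `keeper` remains valid until we dispose it explicitly.
keeper[Symbol.dispose]();
```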

## Disposers and `RpcTarget` classes

A class that extends [RpcTarget](https://developers.cloudflare.com/workers/runtime-apis/rpc/) can optionally implement a disposer:

JavaScript

```
class Foo extends RpcTarget {
  [Symbol.dispose]() {
    // ...
  }
}
```

The RpcTarget's disposer runs after the last stub is disposed. Note that the client-side call to the stub's disposer does not wait for the server-side disposer to be called; the server's disposer is called later on. Because of this, any exceptions thrown by the disposer do not propagate to the client; instead, they are reported as uncaught exceptions. Note that an `RpcTarget`'s disposer must be declared as `Symbol.dispose`. `Symbol.asyncDispose` is not supported.
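A disposer is typically used to release a resource held by the target. The following is a sketch; the `Session` class and its `#socket` field are illustrative:

```
import { RpcTarget } from "cloudflare:workers";

class Session extends RpcTarget {
  #socket;

  constructor(socket) {
    super();
    this.#socket = socket;
  }

  send(msg) {
    this.#socket.send(msg);
  }

  // Runs on the server after the last client stub is disposed. Exceptions
  // thrown here surface as uncaught exceptions, not on the client.
  [Symbol.dispose]() {
    this.#socket.close();
  }
}
```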

## The `dup()` method

Sometimes, you need to pass a stub to a function which will dispose the stub when it is done, but you also want to keep the stub for later use. To solve this problem, you can "dup" the stub:

JavaScript

```
let stub = await env.SOME_SERVICE.getThing();

// Create a duplicate.
let stub2 = stub.dup();

// Call some function that will dispose the stub.
await func(stub);

// stub2 is still valid
```

You can think of `dup()` like the [Unix system call of the same name ↗](https://man7.org/linux/man-pages/man2/dup.2.html): it creates a new handle pointing at the same target, which must be independently closed (disposed).

If the instance of the [RpcTarget class](https://developers.cloudflare.com/workers/runtime-apis/rpc/) that the stubs point to has a disposer, the disposer will only be invoked when all duplicates have been disposed. However, this only applies to duplicates that originate from the same stub. If the same instance of `RpcTarget` is passed over RPC multiple times, a new stub is created each time, and these are not considered duplicates of each other. Thus, the disposer will be invoked once for each time the `RpcTarget` was sent.

In order to avoid this situation, you can manually create a stub locally, and then pass the stub across RPC multiple times. When passing a stub over RPC, ownership of the stub transfers to the recipient, so you must make a `dup()` for each time you send it:

JavaScript

```
import { RpcTarget, RpcStub } from "cloudflare:workers";

class Foo extends RpcTarget {
  // ...
}

let obj = new Foo();
let stub = new RpcStub(obj);
await rpc1(stub.dup()); // sends a dup of `stub`
await rpc2(stub.dup()); // sends another dup of `stub`
stub[Symbol.dispose](); // disposes the original stub

// obj's disposer will be called when the other two stubs
// are disposed remotely.
```


---

---
title: Reserved Methods
description: Reserved methods with special behavior that are treated differently.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Reserved Methods

Some method names are reserved or have special semantics.

## Special Methods

For backwards compatibility, when extending `WorkerEntrypoint` or `DurableObject`, the following method names have special semantics. Note that this does _not_ apply to `RpcTarget`. On `RpcTarget`, these methods work like any other RPC method.

### `fetch()`

The `fetch()` method is treated specially — it can only be used to handle an HTTP request — equivalent to the [fetch handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/).

You may implement a `fetch()` method in your class that extends `WorkerEntrypoint` — but it must accept only one parameter of type [Request ↗](https://developer.mozilla.org/en-US/docs/Web/API/Request), and must return an instance of [Response ↗](https://developer.mozilla.org/en-US/docs/Web/API/Response), or a [Promise ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Promise) of one.

On the client side, `fetch()` called on a service binding or Durable Object stub works like the standard global `fetch()`. That is, the caller may pass one or two parameters to `fetch()`. If the caller does not simply pass a single `Request` object, then a new `Request` is implicitly constructed, passing the parameters to its constructor, and that request is what is actually sent to the server.
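For example, both of these calls send the same request to the bound Worker (a sketch; `env.MY_SERVICE` is an illustrative service binding):

```
// Pass a single Request object...
let res1 = await env.MY_SERVICE.fetch(
  new Request("https://example.com/api", { method: "POST" }),
);

// ...or pass fetch()-style parameters; a Request is constructed implicitly.
let res2 = await env.MY_SERVICE.fetch("https://example.com/api", {
  method: "POST",
});
```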

Some properties of `Request` control the behavior of `fetch()` on the client side and are not actually sent to the server. For example, the property `redirect: "auto"` (which is the default) instructs `fetch()` that if the server returns a redirect response, it should automatically be followed, resulting in an HTTP request to the public internet. Again, this behavior is according to the Fetch API standard. In short, `fetch()` doesn't have RPC semantics, it has Fetch API semantics.

### `connect()`

The `connect()` method of the `WorkerEntrypoint` class is reserved for opening a socket-like connection to your Worker. This is currently not implemented or supported — though you can [open a TCP socket from a Worker](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) or connect directly to databases over a TCP socket with [Hyperdrive](https://developers.cloudflare.com/hyperdrive/get-started/).

## Disallowed Method Names

The following method (or property) names may not be used as RPC methods on any RPC type (including `WorkerEntrypoint`, `DurableObject`, and `RpcTarget`):

* `dup`: This is reserved for duplicating a stub. Refer to the [RPC Lifecycle](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle) docs to learn more about `dup()`.
* `constructor`: This name has special meaning for JavaScript classes. It is not intended to be called as a method, so it is not allowed over RPC.

The following methods are disallowed only on `WorkerEntrypoint` and `DurableObject`, but allowed on `RpcTarget`. These methods have historically had special meaning to Durable Objects, where they are used to handle certain system-generated events.

* `alarm`
* `webSocketMessage`
* `webSocketClose`
* `webSocketError`


---

---
title: TypeScript
description: How TypeScript types for your Worker or Durable Object's RPC methods are generated and exposed to clients
image: https://developers.cloudflare.com/dev-products-preview.png
---


# TypeScript

Running [wrangler types](https://developers.cloudflare.com/workers/languages/typescript/#generate-types) generates runtime types including the `Service` and `DurableObjectNamespace` types, each of which accepts a single type parameter for the [WorkerEntrypoint](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc) or [DurableObject](https://developers.cloudflare.com/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/#call-rpc-methods) types.

Using higher-order types, we automatically generate client-side stub types (e.g., forcing all methods to be async).

[wrangler types](https://developers.cloudflare.com/workers/languages/typescript/#generate-types) also generates types for the `env` object. You can pass in the path to the config files of the Worker or Durable Object being called so that the generated types include the type parameters for the `Service` and `DurableObjectNamespace` types.

For example, if your client Worker had bindings to a Worker in `../sum-worker/` and a Durable Object in `../counter/`, you should generate types for the client Worker's `env` by running:

 npm  yarn  pnpm 

```
npx wrangler types -c ./client/wrangler.jsonc -c ../sum-worker/wrangler.jsonc -c ../counter/wrangler.jsonc
```

```
yarn wrangler types -c ./client/wrangler.jsonc -c ../sum-worker/wrangler.jsonc -c ../counter/wrangler.jsonc
```

```
pnpm wrangler types -c ./client/wrangler.jsonc -c ../sum-worker/wrangler.jsonc -c ../counter/wrangler.jsonc
```

This will produce a `worker-configuration.d.ts` file that includes:

worker-configuration.d.ts

```
interface Env {
  SUM_SERVICE: Service<import("../sum-worker/src/index").SumService>;
  COUNTER_OBJECT: DurableObjectNamespace<
    import("../counter/src/index").Counter
  >;
}
```

Now, types for RPC methods such as `env.SUM_SERVICE.sum` are exposed to the client Worker.

src/index.ts

```
export default {
  async fetch(req, env, ctx): Promise<Response> {
    const result = await env.SUM_SERVICE.sum(1, 2);
    return new Response(result.toString());
  },
} satisfies ExportedHandler<Env>;
```


---

---
title: Visibility and Security Model
description: Which properties are and are not exposed to clients that communicate with your Worker or Durable Object via RPC
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Visibility and Security Model

## Security Model

The Workers RPC system is intended to allow safe communications between Workers that do not trust each other. The system does not allow either side of an RPC session to access arbitrary objects on the other side, much less invoke arbitrary code. Instead, each side can only invoke the objects and functions for which they have explicitly received stubs via previous calls.

This security model is commonly known as Object Capabilities, or Capability-Based Security. Workers RPC is built on [Cap'n Proto RPC ↗](https://capnproto.org/rpc.html), which in turn is based on CapTP, the object transport protocol used by the [distributed programming language E ↗](https://www.crockford.com/ec/etut.html).

## Visibility of Methods and Properties

### Private properties

[Private properties ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Private%5Fproperties) of classes are not directly exposed over RPC.

### Class instance properties

When you send an instance of an application-defined class, the recipient can only access methods and properties declared on the class, not properties of the instance. For example:

JavaScript

```
class Foo extends RpcTarget {
  constructor() {
    super();

    // i CANNOT be accessed over RPC
    this.i = 0;

    // funcProp CANNOT be called over RPC
    this.funcProp = () => {};
  }

  // value CAN be accessed over RPC
  get value() {
    return this.i;
  }

  // method CAN be called over RPC
  method() {}
}
```

This behavior is intentional — it is intended to protect you from accidentally exposing private class internals. Generally, instance properties should be declared private, [by prefixing them with # ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Private%5Fproperties). However, private properties are a relatively new feature of JavaScript, and are not yet widely used in the ecosystem.

Since the RPC interface between two of your Workers may be a security boundary, we need to be extra-careful, so instance properties are always private when communicating between Workers using RPC, whether or not they have the `#` prefix. You can always declare an explicit getter at the class level if you wish to expose the property, as shown above.

These visibility rules apply only to objects that extend `RpcTarget`, `WorkerEntrypoint`, or `DurableObject`, and do not apply to plain objects. Plain objects are passed "by value", sending all of their "own" properties.
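For example, a method can return a plain object, and the caller receives all of its own properties by value. The following is a sketch with illustrative names:

```
// On the server:
getProfile() {
  // A plain object: every own property is serialized and sent by value.
  return { name: "Alice", visits: 42 };
}

// On the client:
let profile = await stub.getProfile();
console.log(profile.name); // "Alice", available synchronously once awaited
```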

### "Own" properties of functions

When you pass a function over RPC, the caller can access the "own" properties of the function object itself.

JavaScript

```
someRpcMethod() {
  let func = () => {};
  func.prop = 123; // `prop` is visible over RPC
  return func;
}
```

Such properties of a function are accessed asynchronously, like class properties of an `RpcTarget`. But, unlike the `RpcTarget` example above, it is the function's own instance properties that are accessible to the caller. In practice, properties are rarely added to functions.


---

---
title: Scheduler
description: Use the scheduler.wait() API to delay execution in Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Scheduler

## Background

The `scheduler` global provides task scheduling APIs based on the [WICG Scheduling APIs proposal ↗](https://github.com/WICG/scheduling-apis). Workers currently implement the `scheduler.wait()` method.

`scheduler.wait()` returns a Promise that resolves after a given number of milliseconds. It is an `await`\-able alternative to `setTimeout()` that does not require a callback.

Like other [timers in Workers](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#timers), `scheduler.wait()` does not advance during CPU execution when deployed to Cloudflare. This is a [security measure to mitigate against Spectre attacks](https://developers.cloudflare.com/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading). In local development, timers advance regardless of whether I/O occurs.

## Syntax

JavaScript

```
await scheduler.wait(delay);
await scheduler.wait(delay, options);
```

## Parameters

* `delay` number  
   * The number of milliseconds to wait before the returned Promise resolves.
* `options` object optional  
   * Optional configuration for the wait operation.  
   * `signal` AbortSignal optional  
         * An [AbortSignal](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#abortcontroller-and-abortsignal) that cancels the wait. When the signal is aborted, the returned Promise rejects with an `AbortError`.

## Return value

A `Promise<void>` that resolves after `delay` milliseconds. If an `AbortSignal` is provided and aborted before the delay elapses, the Promise rejects with an `AbortError`.

## Examples

### Basic delay

Use `scheduler.wait()` to pause execution for a specified duration.


JavaScript

```
export default {
  async fetch(request) {
    // Wait for 1 second
    await scheduler.wait(1000);
    return new Response("Delayed response");
  },
};
```

TypeScript

```
export default {
  async fetch(request): Promise<Response> {
    // Wait for 1 second
    await scheduler.wait(1000);
    return new Response("Delayed response");
  },
} satisfies ExportedHandler;
```

### Retry with exponential backoff

Use `scheduler.wait()` to implement a delay between retry attempts. This example uses exponential backoff with jitter.


JavaScript

```
async function fetchWithRetry(url, maxAttempts = 3) {
  const baseBackoffMs = 100;
  const maxBackoffMs = 10000;

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      if (attempt + 1 >= maxAttempts) {
        throw err;
      }
      const backoffMs = Math.min(
        maxBackoffMs,
        baseBackoffMs * Math.random() * Math.pow(2, attempt),
      );
      await scheduler.wait(backoffMs);
    }
  }
  throw new Error("unreachable");
}

export default {
  async fetch(request) {
    const response = await fetchWithRetry("https://example.com/api");
    return new Response(response.body, response);
  },
};
```

TypeScript

```
async function fetchWithRetry(url: string, maxAttempts = 3): Promise<Response> {
  const baseBackoffMs = 100;
  const maxBackoffMs = 10000;

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      if (attempt + 1 >= maxAttempts) {
        throw err;
      }
      const backoffMs = Math.min(
        maxBackoffMs,
        baseBackoffMs * Math.random() * Math.pow(2, attempt),
      );
      await scheduler.wait(backoffMs);
    }
  }
  throw new Error("unreachable");
}

export default {
  async fetch(request): Promise<Response> {
    const response = await fetchWithRetry("https://example.com/api");
    return new Response(response.body, response);
  },
} satisfies ExportedHandler;
```

### Cancel with AbortSignal

Use an [AbortController](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#abortcontroller-and-abortsignal) to cancel a pending wait.


JavaScript

```
export default {
  async fetch(request) {
    const controller = new AbortController();

    // Cancel the wait after 500ms
    setTimeout(() => controller.abort(), 500);

    try {
      await scheduler.wait(5000, { signal: controller.signal });
      return new Response("Wait completed");
    } catch (err) {
      if (err instanceof DOMException && err.name === "AbortError") {
        return new Response("Wait was cancelled", { status: 408 });
      }
      throw err;
    }
  },
};
```

TypeScript

```
export default {
  async fetch(request): Promise<Response> {
    const controller = new AbortController();

    // Cancel the wait after 500ms
    setTimeout(() => controller.abort(), 500);

    try {
      await scheduler.wait(5000, { signal: controller.signal });
      return new Response("Wait completed");
    } catch (err) {
      if (err instanceof DOMException && err.name === "AbortError") {
        return new Response("Wait was cancelled", { status: 408 });
      }
      throw err;
    }
  },
} satisfies ExportedHandler;
```

## Related resources

* [Timers](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#timers) — `setTimeout()` and `setInterval()` APIs
* [Performance and timers](https://developers.cloudflare.com/workers/runtime-apis/performance/) — `performance.now()` and timer security behavior
* [AbortController and AbortSignal](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#abortcontroller-and-abortsignal) — cancel asynchronous operations
* [WICG Scheduling APIs proposal ↗](https://github.com/WICG/scheduling-apis) — the specification this API is based on


---

---
title: Streams
description: A web standard API that allows JavaScript to programmatically access and process streams of data.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Streams

The [Streams API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Streams%5FAPI) is a web standard API that allows JavaScript to programmatically access and process streams of data.

* [ ReadableStream ](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/)
* [ ReadableStream BYOBReader ](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreambyobreader/)
* [ ReadableStream DefaultReader ](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreamdefaultreader/)
* [ TransformStream ](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/)
* [ WritableStream ](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/)
* [ WritableStream DefaultWriter ](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestreamdefaultwriter/)

Use the Streams API to avoid buffering large requests or responses in memory. Because your Worker can start processing data incrementally instead of loading the entire payload first, it can parse and transform multi-gigabyte request or response bodies while staying within a Worker's 128 MB memory limit, and begin responding sooner.

Workers do not need to prepare an entire response body before returning a `Response`. You can use a [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/) to stream a response body after sending the response status line and headers.
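As a sketch of this pattern, a Worker can return a `Response` whose body is produced incrementally by a `ReadableStream` (the helper name and chunk contents here are illustrative):

```javascript
function streamedResponse() {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      // Chunks are delivered to the client as they are enqueued;
      // the status line and headers are sent before the body completes.
      for (const chunk of ["Hello, ", "streaming ", "world!"]) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "text/plain" },
  });
}
```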

Note

By default, Cloudflare Workers is capable of streaming responses using the [Streams APIs ↗](https://developer.mozilla.org/en-US/docs/Web/API/Streams%5FAPI). To maintain the streaming behavior, you should only modify the response body using the methods in the Streams APIs.

If your Worker only forwards subrequest responses to the client verbatim without reading their body text, then its body handling is already optimal and you do not have to use these APIs.

Your Worker can create a `Response` object using a `ReadableStream` as the body. Any data provided through the `ReadableStream` will be streamed to the client as it becomes available.


JavaScript

```
export default {
  async fetch(request, env, ctx) {
    // Fetch from origin server.
    const response = await fetch(request);

    // ... and deliver our Response while that’s running.
    return new Response(response.body, response);
  },
};
```

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

JavaScript

```
addEventListener("fetch", (event) => {
  event.respondWith(fetchAndStream(event.request));
});

async function fetchAndStream(request) {
  // Fetch from origin server.
  const response = await fetch(request);

  // ... and deliver our Response while that’s running.
  return new Response(response.body, response);
}
```

A [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/) and the [ReadableStream.pipeTo()](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/#methods) method can be used to modify the response body as it is being streamed:


JavaScript

```
export default {
  async fetch(request, env, ctx) {
    // Fetch from origin server.
    const response = await fetch(request);

    const { readable, writable } = new TransformStream({
      transform(chunk, controller) {
        controller.enqueue(modifyChunkSomehow(chunk));
      },
    });

    // Start pumping the body. NOTE: No await!
    response.body.pipeTo(writable);

    // ... and deliver our Response while that’s running.
    return new Response(readable, response);
  },
};
```

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

JavaScript

```
addEventListener("fetch", (event) => {
  event.respondWith(fetchAndStream(event.request));
});

async function fetchAndStream(request) {
  // Fetch from origin server.
  const response = await fetch(request);

  const { readable, writable } = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(modifyChunkSomehow(chunk));
    },
  });

  // Start pumping the body. NOTE: No await!
  response.body.pipeTo(writable);

  // ... and deliver our Response while that’s running.
  return new Response(readable, response);
}
```

This example calls `response.body.pipeTo(writable)` but does not `await` it, so that the rest of the `fetchAndStream()` function can continue. The pipe keeps running asynchronously until the response is complete or the client disconnects.

The runtime can continue running a function (here, the `pipeTo()` pump) after a response has been returned to the client. This example simply pumps the subrequest response body into the final response body, but you can apply more complicated logic, such as adding a prefix or a suffix to the body, or processing it in some other way.
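As a sketch of that idea, the following standalone helper appends a suffix to a stream of text using a `TransformStream` (the helper name and suffix are illustrative; in a Worker you would pipe `response.body` through it the same way):

```javascript
function withSuffix(readable, suffix) {
  const encoder = new TextEncoder();
  const transform = new TransformStream({
    // Pass every chunk through unchanged...
    transform(chunk, controller) {
      controller.enqueue(chunk);
    },
    // ...then append the suffix once the source stream is done.
    flush(controller) {
      controller.enqueue(encoder.encode(suffix));
    },
  });
  return readable.pipeThrough(transform);
}
```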

---

## Common issues

Warning

The Streams API is only available inside of the [Request context](https://developers.cloudflare.com/workers/runtime-apis/request/), inside the `fetch` event listener callback.

---

## Related resources

* [Stream large JSON](https://developers.cloudflare.com/workers/examples/streaming-json/) \- Parse and transform large JSON request and response bodies
* [MDN's Streams API documentation ↗](https://developer.mozilla.org/en-US/docs/Web/API/Streams%5FAPI)
* [Streams API spec ↗](https://streams.spec.whatwg.org/)
* Write your Worker code in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) for an optimized experience.


---

---
title: ReadableStream
description: A ReadableStream is returned by the readable property inside TransformStream.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# ReadableStream

## Background

A `ReadableStream` is returned by the `readable` property inside [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/).

## Properties

* `locked` boolean  
   * A Boolean value that indicates if the readable stream is locked to a reader.

## Methods

* `pipeTo(destination: WritableStream, options?: PipeToOptions)` : Promise<void>  
   * Pipes the readable stream to the given writable stream `destination` and returns a promise that is fulfilled when the write operation succeeds, or rejected if the operation fails.
* `getReader(options?: Object)` : ReadableStreamDefaultReader  
   * Gets an instance of `ReadableStreamDefaultReader` and locks the `ReadableStream` to that reader instance. This method accepts an object argument indicating options. The only supported option is `mode`, which can be set to `byob` to create a [ReadableStreamBYOBReader](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreambyobreader/), as shown here:

JavaScript

```js
let reader = readable.getReader({ mode: 'byob' });
```

### `PipeToOptions`

* `preventClose` boolean  
   * When `true`, closure of the source `ReadableStream` will not cause the destination `WritableStream` to be closed.
* `preventAbort` boolean  
   * When `true`, errors in the source `ReadableStream` will no longer abort the destination `WritableStream`. `pipeTo` will return a rejected promise with the error from the source or any error that occurred while aborting the destination.
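For example, `preventClose` lets you concatenate two sources into one destination, keeping the destination writable after the first source ends. A minimal sketch (the `pipeBoth` helper is hypothetical):

```javascript
// Sketch: pipe two readable sources into one writable destination.
// preventClose keeps the destination open after the first source ends.
async function pipeBoth(first, second, dest) {
  await first.pipeTo(dest, { preventClose: true });
  await second.pipeTo(dest); // closes the destination when done
}
```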

---

## Related resources

* [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/)
* [Readable streams in the WHATWG Streams API specification ↗](https://streams.spec.whatwg.org/#rs-model)
* [MDN’s ReadableStream documentation ↗](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream)


---

---
title: ReadableStream BYOBReader
description: BYOB is an abbreviation of bring your own buffer. A ReadableStreamBYOBReader allows reading into a developer-supplied buffer, thus minimizing copies.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# ReadableStream BYOBReader

## Background

`BYOB` is an abbreviation of bring your own buffer. A `ReadableStreamBYOBReader` allows reading into a developer-supplied buffer, thus minimizing copies.

An instance of `ReadableStreamBYOBReader` is functionally identical to [ReadableStreamDefaultReader](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreamdefaultreader/) with the exception of the `read` method.

A `ReadableStreamBYOBReader` is not instantiated via its constructor. Rather, it is retrieved from a [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/):

JavaScript

```js
const { readable, writable } = new TransformStream();
const reader = readable.getReader({ mode: 'byob' });
```

---

## Methods

* `read(buffer: ArrayBufferView)` : Promise<ReadableStreamBYOBReadResult>  
   * Returns a promise with the next available chunk of data read into the passed-in buffer.
* `readAtLeast(minBytes: number, buffer: ArrayBufferView)` : Promise<ReadableStreamBYOBReadResult>  
   * Returns a promise with the next available chunk of data read into a passed-in buffer. The promise will not resolve until at least `minBytes` bytes have been read. However, fewer than `minBytes` bytes may be returned if the end of the stream is reached or the underlying stream is closed. Specifically:  
         * If `minBytes` or more bytes are available, the promise resolves with `{ value: <buffer view sized to bytes read>, done: false }`.  
         * If the stream ends after some bytes have been read but fewer than `minBytes`, the promise resolves with the partial data: `{ value: <buffer view sized to bytes actually read>, done: false }`. The next call to `read` or `readAtLeast` will then return `{ value: undefined, done: true }`.  
         * If the stream ends with zero bytes available (that is, the stream is already at EOF), the promise resolves with `{ value: <zero-length view>, done: true }`.  
         * If the stream errors, the promise rejects.  
         * `minBytes` must be at least 1, and must not exceed the byte length of `bufferArrayBufferView`, or the promise rejects with a `TypeError`.

---

## Common issues

Warning

`read` provides no control over the minimum number of bytes that should be read into the buffer. Even if you allocate a 1 MiB buffer, the kernel is perfectly within its rights to fulfill this read with a single byte, whether or not an EOF immediately follows.

In practice, the Workers team has found that `read` typically fills only 1% of the provided buffer.

`readAtLeast` is a non-standard extension to the Streams API which allows users to specify that at least `minBytes` bytes must be read into the buffer before resolving the read. If the stream ends before `minBytes` bytes are available, the partial data that was read is still returned rather than throwing an error — refer to the [readAtLeast method documentation above](#methods) for the full details.

---

## Related resources

* [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/)
* [Background about BYOB readers in the Streams API WHATWG specification ↗](https://streams.spec.whatwg.org/#byob-readers)


---

---
title: ReadableStream DefaultReader
description: A reader is used when you want to read from a ReadableStream, rather than piping its output to a WritableStream.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# ReadableStream DefaultReader

## Background

A reader is used when you want to read from a [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/), rather than piping its output to a [WritableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/).

A `ReadableStreamDefaultReader` is not instantiated via its constructor. Rather, it is retrieved from a [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/):

JavaScript

```js
const { readable, writable } = new TransformStream();
const reader = readable.getReader();
```

---

## Properties

* `reader.closed` : Promise  
   * A promise that indicates whether the reader is closed. The promise is fulfilled when the stream closes and rejected if there is an error in the stream.

## Methods

* `read()` : Promise  
   * A promise that returns the next available chunk of data being passed through the reader queue.
* `cancel(reason?: string)` : void  
   * Cancels the stream. `reason` is an optional human-readable string indicating the reason for cancellation. `reason` will be passed to the underlying source's cancel algorithm. If this readable stream is one side of a [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/), then its cancel algorithm causes the transform's writable side to become errored with `reason`.  
Warning  
Any data not yet read is lost.
* `releaseLock()` : void  
   * Releases the lock on the readable stream. A lock cannot be released while the reader has pending read operations; attempting to release it throws a `TypeError` and the reader remains locked.
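Putting these methods together, a typical read loop with a default reader looks like the following sketch (`readAllText` is a hypothetical helper; it assumes the stream yields `Uint8Array` chunks):

```javascript
// Hypothetical helper: drain a ReadableStream with a default reader,
// decoding Uint8Array chunks into a single string.
async function readAllText(readable) {
  const reader = readable.getReader();
  const decoder = new TextDecoder();
  let result = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    result += decoder.decode(value, { stream: true });
  }
  result += decoder.decode(); // flush any buffered bytes
  reader.releaseLock();
  return result;
}
```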

---

## Related resources

* [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/)
* [Readable streams in the WHATWG Streams API specification ↗](https://streams.spec.whatwg.org/#rs-model)


---

---
title: TransformStream
description: A transform stream consists of a pair of streams: a writable stream, known as its writable side, and a readable stream, known as its readable side. Writes to the writable side result in new data being made available for reading from the readable side.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# TransformStream

## Background

A transform stream consists of a pair of streams: a writable stream, known as its writable side, and a readable stream, known as its readable side. Writes to the writable side result in new data being made available for reading from the readable side.

Workers currently only implements an identity transform stream, a type of transform stream which forwards all chunks written to its writable side to its readable side, without any changes.

---

## Constructor

JavaScript

```js
let { readable, writable } = new TransformStream();
```

* `TransformStream()` TransformStream  
   * Returns a new identity transform stream.

## Properties

* `readable` ReadableStream  
   * An instance of a `ReadableStream`.
* `writable` WritableStream  
   * An instance of a `WritableStream`.
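Because the stream is an identity transform, any actual transformation is done by code that reads from a source and writes modified chunks into the `writable` side. A minimal sketch (uppercasing text is an illustrative transform, and `uppercaseStream` is a hypothetical helper):

```javascript
// Hypothetical helper: read text chunks from `source`, uppercase them, and
// write them into the writable side of an identity TransformStream.
function uppercaseStream(source) {
  const { readable, writable } = new TransformStream();
  (async () => {
    const reader = source.getReader();
    const writer = writable.getWriter();
    const decoder = new TextDecoder();
    const encoder = new TextEncoder();
    for (;;) {
      const { value, done } = await reader.read();
      if (done) break;
      const upper = decoder.decode(value, { stream: true }).toUpperCase();
      await writer.write(encoder.encode(upper));
    }
    await writer.close();
  })();
  return readable; // for example, pass to `new Response(readable)`
}
```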

---

## `IdentityTransformStream`

The current implementation of `TransformStream` in the Workers platform is not currently compliant with the [Streams Standard ↗](https://streams.spec.whatwg.org/#transform-stream) and we will soon be making changes to the implementation to conform with the specification. In preparation for doing so, we have introduced the `IdentityTransformStream` class, which implements behavior identical to the current `TransformStream` class. This type of stream forwards all chunks of byte data (in the form of `TypedArray`s) written to its writable side to its readable side, without any changes.

The `IdentityTransformStream` readable side supports [bring your own buffer (BYOB) reads ↗](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamBYOBReader).

### Constructor

JavaScript

```js
let { readable, writable } = new IdentityTransformStream();
```

* `IdentityTransformStream()` IdentityTransformStream  
   * Returns a new identity transform stream.

### Properties

* `readable` ReadableStream  
   * An instance of a `ReadableStream`.
* `writable` WritableStream  
   * An instance of a `WritableStream`.

---

## `FixedLengthStream`

`FixedLengthStream` is a specialization of `IdentityTransformStream` that limits the total number of bytes the stream will pass through. It is useful primarily because, when you use a `FixedLengthStream` to produce either a `Response` or a `Request`, the fixed length of the stream is used as the `Content-Length` header value instead of chunked encoding, as would be used with any other type of stream. An error occurs if too many or too few bytes are written through the stream.

### Constructor

JavaScript

```js
let { readable, writable } = new FixedLengthStream(1000);
```

* `FixedLengthStream(length)` FixedLengthStream  
   * Returns a new identity transform stream.  
   * `length` may be a `number` or `bigint`, with a maximum value of `2^53 - 1`.

### Properties

* `readable` ReadableStream  
   * An instance of a `ReadableStream`.
* `writable` WritableStream  
   * An instance of a `WritableStream`.

---

## Related resources

* [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/)
* [Transform Streams in the WHATWG Streams API specification ↗](https://streams.spec.whatwg.org/#transform-stream)


---

---
title: WritableStream
description: A WritableStream is the writable property of a TransformStream. On the Workers platform, WritableStream cannot be directly created using the WritableStream constructor.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# WritableStream

## Background

A `WritableStream` is the `writable` property of a [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/). On the Workers platform, `WritableStream` cannot be directly created using the `WritableStream` constructor.

A typical way to write to a `WritableStream` is to pipe a [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/) to it.

JavaScript

```js
readableStream
  .pipeTo(writableStream)
  .then(() => console.log('All data successfully written!'))
  .catch(e => console.error('Something went wrong!', e));
```

To write to a `WritableStream` directly, you must use its writer.

JavaScript

```js
const writer = writableStream.getWriter();
writer.write(data);
```

Refer to the [WritableStreamDefaultWriter](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestreamdefaultwriter/) documentation for further detail.

## Properties

* `locked` boolean  
   * A Boolean value to indicate if the writable stream is locked to a writer.

## Methods

* `abort(reason?: string)` : Promise<void>  
   * Aborts the stream. This method returns a promise that fulfills with `undefined`. `reason` is an optional human-readable string indicating the reason for cancellation. `reason` will be passed to the underlying sink's abort algorithm. If this writable stream is one side of a [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/), then its abort algorithm causes the transform's readable side to become errored with `reason`.  
Warning  
Any data not yet written is lost upon abort.
* `getWriter()` : WritableStreamDefaultWriter  
   * Gets an instance of `WritableStreamDefaultWriter` and locks the `WritableStream` to that writer instance.

---

## Related resources

* [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/)
* [Writable streams in the WHATWG Streams API specification ↗](https://streams.spec.whatwg.org/#ws-model)


---

---
title: WritableStream DefaultWriter
description: A writer is used when you want to write directly to a WritableStream, rather than piping data to it from a ReadableStream. For example:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# WritableStream DefaultWriter

## Background

A writer is used when you want to write directly to a [WritableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/), rather than piping data to it from a [ReadableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/). For example:

JavaScript

```js
function writeArrayToStream(array, writableStream) {
  const writer = writableStream.getWriter();
  array.forEach(chunk => writer.write(chunk).catch(() => {}));

  return writer.close();
}

writeArrayToStream([1, 2, 3, 4, 5], writableStream)
  .then(() => console.log('All done!'))
  .catch(e => console.error('Error with the stream: ' + e));
```

## Properties

* `writer.desiredSize` int  
   * The size needed to fill the stream's internal queue, as an integer. Always returns 1, 0 (if the stream is closed), or `null` (if the stream is errored).
* `writer.closed` Promise<void>  
   * A promise that indicates if the writer is closed. The promise is fulfilled when the writer stream is closed and rejected if there is an error in the stream.

## Methods

* `abort(reason?: string)` : Promise<void>  
   * Aborts the stream. This method returns a promise that fulfills with `undefined`. `reason` is an optional human-readable string indicating the reason for cancellation. `reason` will be passed to the underlying sink's abort algorithm. If this writable stream is one side of a [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/), then its abort algorithm causes the transform's readable side to become errored with `reason`.  
Warning  
Any data not yet written is lost upon abort.
* `close()` : Promise<void>  
   * Attempts to close the writer. Remaining writes finish processing before the writer is closed. This method returns a promise fulfilled with `undefined` if the writer successfully closes and processes the remaining writes, or rejected on any error.
* `releaseLock()` : void  
   * Releases the writer’s lock on the stream. Once released, the writer is no longer active. You can call this method before all pending `write(chunk)` calls are resolved. This allows you to queue a `write` operation, release the lock, and begin piping into the writable stream from another source, as shown in the example below.

JavaScript

```js
let writer = writable.getWriter();

// Write a preamble.
writer.write(new TextEncoder().encode('foo bar'));

// While that's still writing, pipe the rest of the body from somewhere else.
writer.releaseLock();
await someResponse.body.pipeTo(writable);
```

* `write(chunk: any)` : Promise<void>  
   * Writes a chunk of data to the writer and returns a promise that resolves if the operation succeeds.  
   * The underlying stream may accept fewer types than `any`; it will throw an exception when it encounters an unexpected type.
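A common pattern with `write` is to respect backpressure by awaiting the writer's standard `ready` promise before each chunk. A sketch (`writeAll` is a hypothetical helper; it assumes an array of `Uint8Array` chunks):

```javascript
// Hypothetical helper: write chunks one at a time, waiting for the stream
// to signal readiness before each write, then close the writer.
async function writeAll(writable, chunks) {
  const writer = writable.getWriter();
  for (const chunk of chunks) {
    await writer.ready; // wait until the stream can accept more data
    await writer.write(chunk);
  }
  await writer.close();
}
```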

---

## Related resources

* [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/)
* [Writable streams in the WHATWG Streams API specification ↗](https://streams.spec.whatwg.org/#ws-model)


---

---
title: TCP sockets
description: Use the `connect()` API to create outbound TCP connections from Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# TCP sockets

The Workers runtime provides the `connect()` API for creating outbound [TCP connections ↗](https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/) from Workers.

Many application-layer protocols are built on top of the Transmission Control Protocol (TCP). These application-layer protocols, including SSH, MQTT, SMTP, FTP, IRC, and most database wire protocols (such as MySQL, PostgreSQL, and MongoDB), require an underlying TCP socket API in order to work.

Note

Connecting to a PostgreSQL database? You should use [Hyperdrive](https://developers.cloudflare.com/hyperdrive/), which provides the `connect()` API with built-in connection pooling and query caching.

Note

Outbound TCP connections from Workers are sourced from an IP prefix that is not part of Cloudflare's published [list of IP ranges ↗](https://www.cloudflare.com/ips/).

## `connect()`

The `connect()` function returns a TCP socket, with both a [readable](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestream/) and [writable](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/) stream of data. This allows you to read and write data on an ongoing basis, as long as the connection remains open.

`connect()` is provided as a [Runtime API](https://developers.cloudflare.com/workers/runtime-apis/), and is accessed by importing the `connect` function from `cloudflare:sockets`. This process is similar to how one imports built-in modules in Node.js. Refer to the following codeblock for an example of creating a TCP socket, writing to it, and returning the readable side of the socket as a response:

TypeScript

```ts
import { connect } from 'cloudflare:sockets';

export default {
  async fetch(req): Promise<Response> {
    const gopherAddr = { hostname: "gopher.floodgap.com", port: 70 };
    const url = new URL(req.url);

    try {
      const socket = connect(gopherAddr);

      const writer = socket.writable.getWriter();
      const encoder = new TextEncoder();
      const encoded = encoder.encode(url.pathname + "\r\n");
      await writer.write(encoded);
      await writer.close();

      return new Response(socket.readable, { headers: { "Content-Type": "text/plain" } });
    } catch (error) {
      return new Response("Socket connection failed: " + error, { status: 500 });
    }
  }
} satisfies ExportedHandler;
```

* `connect(address: SocketAddress | string, options?: SocketOptions)` : `Socket`  
   * `connect()` accepts either a URL string or a [SocketAddress](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#socketaddress) object to define the hostname and port number to connect to, and an optional configuration object, [SocketOptions](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#socketoptions). It returns an instance of a [Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#socket).

### `SocketAddress`

* `hostname` string  
   * The hostname to connect to. Example: `cloudflare.com`.
* `port` number  
   * The port number to connect to. Example: `5432`.

### `SocketOptions`

* `secureTransport` "off" | "on" | "starttls" — Defaults to `off`  
   * Specifies whether or not to use [TLS ↗](https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/) when creating the TCP socket.  
   * `off` — Do not use TLS.  
   * `on` — Use TLS.  
   * `starttls` — Do not use TLS initially, but allow the socket to be upgraded to use TLS by calling [startTls()](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#opportunistic-tls-starttls).
* `allowHalfOpen` boolean — Defaults to `false`  
   * Defines whether the writable side of the TCP socket automatically closes on end-of-file (EOF). When set to `false`, the writable side closes automatically on EOF; when set to `true`, it remains open.  
   * This option is similar to that offered by the Node.js [net module ↗](https://nodejs.org/api/net.html) and allows interoperability with code which utilizes it.

### `SocketInfo`

* `remoteAddress` string | null  
   * The address of the remote peer the socket is connected to. May not always be set.
* `localAddress` string | null  
   * The address of the local network endpoint for this socket. May not always be set.

### `Socket`

* `readable` : ReadableStream  
   * Returns the readable side of the TCP socket.
* `writable` : WritableStream  
   * Returns the writable side of the TCP socket.  
   * The `WritableStream` returned only accepts chunks of `Uint8Array` or its views.
* `opened` `Promise<SocketInfo>`  
   * This promise is resolved when the socket connection is established and is rejected if the socket encounters an error.
* `closed` `Promise<void>`  
   * This promise is resolved when the socket is closed and is rejected if the socket encounters an error.
* `close()` `Promise<void>`  
   * Closes the TCP socket. Both the readable and writable streams are forcibly closed.
* `startTls()` : Socket  
   * Upgrades an insecure socket to a secure one that uses TLS, returning a new [Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets#socket). Note that in order to call `startTls()`, you must set [secureTransport](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#socketoptions) to `starttls` when initially calling `connect()` to create the socket.

## Opportunistic TLS (StartTLS)

Many TCP-based systems, including databases and email servers, require that clients use opportunistic TLS (otherwise known as [StartTLS ↗](https://en.wikipedia.org/wiki/Opportunistic_TLS)) when connecting. In this pattern, the client first creates an insecure TCP socket without TLS, and then upgrades it to a secure TCP socket that uses TLS. The `connect()` API simplifies this by providing a method, `startTls()`, which returns a new `Socket` instance that uses TLS:

TypeScript

```ts
import { connect } from "cloudflare:sockets";

const address = {
  hostname: "example-postgres-db.com",
  port: 5432
};
const socket = connect(address, { secureTransport: "starttls" });
const secureSocket = socket.startTls();
```

* `startTls()` can only be called if `secureTransport` is set to `starttls` when creating the initial TCP socket.
* Once `startTls()` is called, the initial socket is closed and can no longer be read from or written to. In the example above, anytime after `startTls()` is called, you would use the newly created `secureSocket`. Any existing readers and writers based off the original socket will no longer work. You must create new readers and writers from the newly created `secureSocket`.
* `startTls()` should only be called once on an existing socket.

## Handle errors

To handle errors when creating a new TCP socket, reading from a socket, or writing to a socket, wrap these calls inside [try...catch ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) blocks. The following example opens a connection to Google.com, initiates an HTTP request, and returns the response. If this fails and throws an exception, it returns a [500](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-500/) response:

TypeScript

```ts
import { connect } from 'cloudflare:sockets';

const connectionUrl = { hostname: "google.com", port: 80 };
export interface Env { }

export default {
  async fetch(req, env, ctx): Promise<Response> {
    try {
      const socket = connect(connectionUrl);
      const writer = socket.writable.getWriter();
      const encoder = new TextEncoder();
      const encoded = encoder.encode("GET / HTTP/1.0\r\n\r\n");
      await writer.write(encoded);
      await writer.close();

      return new Response(socket.readable, { headers: { "Content-Type": "text/plain" } });
    } catch (error) {
      return new Response(`Socket connection failed: ${error}`, { status: 500 });
    }
  }
} satisfies ExportedHandler<Env>;
```

## Close TCP connections

You can close a TCP connection by calling `close()` on the socket. This will close both the readable and writable sides of the socket.

TypeScript

```ts
import { connect } from "cloudflare:sockets";

const socket = connect({ hostname: "my-url.com", port: 70 });
const reader = socket.readable.getReader();
socket.close();

// After close() is called, you can no longer read from the readable side of the socket
const reader2 = socket.readable.getReader(); // This fails
```

## Considerations

* Outbound TCP sockets to [Cloudflare IP ranges ↗](https://www.cloudflare.com/ips/) are blocked.
* TCP sockets cannot be created in global scope and shared across requests. You should always create TCP sockets within a handler, for example [fetch()](https://developers.cloudflare.com/workers/get-started/guide/#3-write-code), [scheduled()](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/), [queue()](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer), or [alarm()](https://developers.cloudflare.com/durable-objects/api/alarms/).
* Each open TCP socket counts towards the maximum number of [open connections](https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections) that can be simultaneously open.
* By default, Workers cannot create outbound TCP connections on port `25` to send email to SMTP mail servers. [Cloudflare Email Workers](https://developers.cloudflare.com/email-routing/email-workers/) provides APIs to process and forward email.
* Support for handling inbound TCP connections is [coming soon ↗](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/). Currently, it is not possible to make an inbound TCP connection to your Worker, for example, by using the `CONNECT` HTTP method.

## Troubleshooting

Review descriptions of common error messages you may see when working with TCP Sockets, what the error messages mean, and how to solve them.

### `proxy request failed, cannot connect to the specified address`

Your socket is connecting to an address that was disallowed. Examples of a disallowed address include Cloudflare IPs, `localhost`, and private network IPs.

If you need to connect to addresses on port `80` or `443` to make HTTP requests, use [fetch](https://developers.cloudflare.com/workers/runtime-apis/fetch/).

### `TCP Loop detected`

Your socket is connecting back to the Worker that initiated the outbound connection. In other words, the Worker is connecting back to itself. This is currently not supported.

### `Connections to port 25 are prohibited`

Your socket is connecting to an address on port `25`. This is usually the port used for SMTP mail servers. Workers cannot create outbound connections on port `25`. Consider using [Cloudflare Email Workers](https://developers.cloudflare.com/email-routing/email-workers/) instead.


---

---
title: Web Crypto
description: A set of low-level functions for common cryptographic tasks.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Web Crypto

## Background

The Web Crypto API provides a set of low-level functions for common cryptographic tasks. The Workers runtime implements the full surface of this API, but with some differences in the [supported algorithms](#supported-algorithms) compared to those implemented in most browsers.

Performing cryptographic operations using the Web Crypto API is significantly faster than performing them purely in JavaScript. If you want to perform CPU-intensive cryptographic operations, you should consider using the Web Crypto API.

The Web Crypto API is implemented through the `SubtleCrypto` interface, accessible via the global `crypto.subtle` binding. A simple example of calculating a digest (also known as a hash) is:

JavaScript

```

const myText = new TextEncoder().encode('Hello world!');

const myDigest = await crypto.subtle.digest(
  {
    name: 'SHA-256',
  },
  myText, // The data you want to hash as an ArrayBuffer
);

console.log(new Uint8Array(myDigest));


```

Some common uses include [signing requests](https://developers.cloudflare.com/workers/examples/signing-requests/).

Warning

The Web Crypto API differs significantly from the [Node.js Crypto API](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/). If you are working with code that relies on the Node.js Crypto API, you can use it by enabling the [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/).

---

## Constructors

* `crypto.DigestStream(algorithm)` DigestStream  
   * A non-standard extension to the `crypto` API that supports generating a hash digest from streaming data. The `DigestStream` itself is a [WritableStream](https://developers.cloudflare.com/workers/runtime-apis/streams/writablestream/) that does not retain the data written into it. Instead, it generates a hash digest automatically when the flow of data has ended.

### Parameters

* `algorithm`string | object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest#Syntax).

### Usage

* [  JavaScript ](#tab-panel-7656)
* [  TypeScript ](#tab-panel-7657)

JavaScript

```

export default {
  async fetch(req) {
    // Fetch from origin
    const res = await fetch(req);

    // We need to read the body twice so we `tee` it (get two instances)
    const [bodyOne, bodyTwo] = res.body.tee();
    // Make a new response so we can set the headers (responses from `fetch` are immutable)
    const newRes = new Response(bodyOne, res);
    // Create a SHA-256 digest stream and pipe the body into it
    const digestStream = new crypto.DigestStream("SHA-256");
    bodyTwo.pipeTo(digestStream);
    // Get the final result
    const digest = await digestStream.digest;
    // Turn it into a hex string
    const hexString = [...new Uint8Array(digest)]
      .map(b => b.toString(16).padStart(2, '0'))
      .join('');
    // Set a header with the SHA-256 hash and return the response
    newRes.headers.set("x-content-digest", `SHA-256=${hexString}`);
    return newRes;
  }
}


```

TypeScript

```

export default {
  async fetch(req): Promise<Response> {
    // Fetch from origin
    const res = await fetch(req);

    // We need to read the body twice so we `tee` it (get two instances)
    const [bodyOne, bodyTwo] = res.body.tee();
    // Make a new response so we can set the headers (responses from `fetch` are immutable)
    const newRes = new Response(bodyOne, res);
    // Create a SHA-256 digest stream and pipe the body into it
    const digestStream = new crypto.DigestStream("SHA-256");
    bodyTwo.pipeTo(digestStream);
    // Get the final result
    const digest = await digestStream.digest;
    // Turn it into a hex string
    const hexString = [...new Uint8Array(digest)]
      .map(b => b.toString(16).padStart(2, '0'))
      .join('');
    // Set a header with the SHA-256 hash and return the response
    newRes.headers.set("x-content-digest", `SHA-256=${hexString}`);
    return newRes;
  }
} satisfies ExportedHandler;


```

## Methods

* `crypto.randomUUID()` : string  
   * Generates a new random (version 4) UUID as defined in [RFC 4122 ↗](https://www.rfc-editor.org/rfc/rfc4122.txt).
* `crypto.getRandomValues(bufferArrayBufferView)` : ArrayBufferView  
   * Fills the passed `ArrayBufferView` with cryptographically sound random values and returns the `buffer`.

### Parameters

* `buffer`ArrayBufferView  
   * Must be an Int8Array | Uint8Array | Uint8ClampedArray | Int16Array | Uint16Array | Int32Array | Uint32Array | BigInt64Array | BigUint64Array.

## SubtleCrypto Methods

These methods are all accessed via [crypto.subtle ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto#Methods), which is also documented in detail on MDN.

### encrypt

* `encrypt(algorithm, key, data)` : Promise<ArrayBuffer>  
   * Returns a Promise that fulfills with the encrypted data corresponding to the clear text, algorithm, and key given as parameters.

#### Parameters

* `algorithm`object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/encrypt#Syntax).
* `key`CryptoKey
* `data`BufferSource

### decrypt

* `decrypt(algorithm, key, data)` : Promise<ArrayBuffer>  
   * Returns a Promise that fulfills with the clear data corresponding to the ciphertext, algorithm, and key given as parameters.

#### Parameters

* `algorithm`object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/decrypt#Syntax).
* `key`CryptoKey
* `data`BufferSource

### sign

* `sign(algorithm, key, data)` : Promise<ArrayBuffer>  
   * Returns a Promise that fulfills with the signature corresponding to the text, algorithm, and key given as parameters.

#### Parameters

* `algorithm`string | object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/sign#Syntax).
* `key`CryptoKey
* `data`ArrayBuffer

### verify

* `verify(algorithm, key, signature, data)` : Promise<boolean>  
   * Returns a Promise that fulfills with a Boolean value indicating if the signature given as a parameter matches the text, algorithm, and key that are also given as parameters.

#### Parameters

* `algorithm`string | object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/verify#Syntax).
* `key`CryptoKey
* `signature`ArrayBuffer
* `data`ArrayBuffer

### digest

* `digest(algorithm, data)` : Promise<ArrayBuffer>  
   * Returns a Promise that fulfills with a digest generated from the algorithm and text given as parameters.

#### Parameters

* `algorithm`string | object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest#Syntax).
* `data`ArrayBuffer

### generateKey

* `generateKey(algorithm, extractable, keyUsages)` : Promise<CryptoKey> | Promise<CryptoKeyPair>  
   * Returns a Promise that fulfills with a newly generated `CryptoKey`, for symmetric algorithms, or a `CryptoKeyPair`, containing two newly generated keys, for asymmetric algorithms. For example, to generate a new AES-GCM key:  
JavaScript  
```  
let key = await crypto.subtle.generateKey(  
  {  
    name: 'AES-GCM',  
    length: 256,  
  },  
  true,  
  ['encrypt', 'decrypt']  
);  
```

#### Parameters

* `algorithm`object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/generateKey#Syntax).
* `extractable`bool
* `keyUsages`Array  
   * An Array of strings indicating the [possible usages of the new key ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/generateKey#Syntax).

### deriveKey

* `deriveKey(algorithm, baseKey, derivedKeyAlgorithm, extractable, keyUsages)` : Promise<CryptoKey>  
   * Returns a Promise that fulfills with a newly generated `CryptoKey` derived from the base key and specific algorithm given as parameters.

#### Parameters

* `algorithm`object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax).
* `baseKey`CryptoKey
* `derivedKeyAlgorithm`object  
   * Defines the algorithm the derived key will be used for in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax).
* `extractable`bool
* `keyUsages`Array  
   * An Array of strings indicating the [possible usages of the new key ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax)

### deriveBits

* `deriveBits(algorithm, baseKey, length)` : Promise<ArrayBuffer>  
   * Returns a Promise that fulfills with an `ArrayBuffer` of pseudo-random bits derived from the base key and specific algorithm given as parameters. This method is very similar to `deriveKey()`, except that `deriveKey()` returns a `CryptoKey` object rather than an `ArrayBuffer`. Essentially, `deriveKey()` is composed of `deriveBits()` followed by `importKey()`.

#### Parameters

* `algorithm`object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveBits#Syntax).
* `baseKey`CryptoKey
* `length`int  
   * Length of the bit string to derive.

### importKey

* `importKey(format, keyData, algorithm, extractable, keyUsages)` : Promise<CryptoKey>  
   * Transform a key from some external, portable format into a `CryptoKey` for use with the Web Crypto API.

#### Parameters

* `format`string  
   * Describes [the format of the key to be imported ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax).
* `keyData`ArrayBuffer
* `algorithm`object  
   * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax).
* `extractable`bool
* `keyUsages`Array  
   * An Array of strings indicating the [possible usages of the new key ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax)

### exportKey

* `exportKey(format, key)` : Promise<ArrayBuffer>  
   * Transform a `CryptoKey` into a portable format, if the `CryptoKey` is `extractable`.

#### Parameters

* `format`string  
   * Describes the [format in which the key will be exported ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/exportKey#Syntax).
* `key`CryptoKey

### wrapKey

* `wrapKey(format, key, wrappingKey, wrapAlgo)` : Promise<ArrayBuffer>  
   * Transform a `CryptoKey` into a portable format, and then encrypt it with another key. This renders the `CryptoKey` suitable for storage or transmission in untrusted environments.

#### Parameters

* `format`string  
   * Describes the [format in which the key will be exported ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/wrapKey#Syntax) before being encrypted.
* `key`CryptoKey
* `wrappingKey`CryptoKey
* `wrapAlgo`object  
   * Describes the algorithm to be used to encrypt the exported key, including any required parameters, in [an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/wrapKey#Syntax).

### unwrapKey

* `unwrapKey(format, key, unwrappingKey, unwrapAlgo, unwrappedKeyAlgo, extractable, keyUsages)` : Promise<CryptoKey>  
   * Transform a key that was wrapped by `wrapKey()` back into a `CryptoKey`.

#### Parameters

* `format`string  
   * Describes the [data format of the key to be unwrapped ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax).
* `key`CryptoKey
* `unwrappingKey`CryptoKey
* `unwrapAlgo`object  
   * Describes the algorithm that was used to encrypt the wrapped key, [in an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax).
* `unwrappedKeyAlgo`object  
   * Describes the key to be unwrapped, [in an algorithm-specific format ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax).
* `extractable`bool
* `keyUsages`Array  
   * An Array of strings indicating the [possible usages of the new key ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax)

### timingSafeEqual

* `timingSafeEqual(a, b)` : bool  
   * Compare two buffers in a way that is resistant to timing attacks. This is a non-standard extension to the Web Crypto API.

#### Parameters

* `a`ArrayBuffer | TypedArray
* `b`ArrayBuffer | TypedArray

### Supported algorithms

Workers implements all operations of the [WebCrypto standard ↗](https://www.w3.org/TR/WebCryptoAPI/), as shown in the following table.

A checkmark (✓) indicates that this feature is believed to be fully supported according to the spec.  
An x (✘) indicates that this feature is part of the specification but not implemented.  
If a feature only implements the operation partially, details are listed.

| Algorithm                    | sign() / verify() | encrypt() / decrypt() | digest() | deriveBits() / deriveKey() | generateKey() | wrapKey() / unwrapKey() | exportKey() | importKey() |
| ---------------------------- | ----------------- | --------------------- | -------- | -------------------------- | ------------- | ----------------------- | ----------- | ----------- |
| RSASSA-PKCS1-v1.5            | ✓                 |                       |          |                            | ✓             |                         | ✓           | ✓           |
| RSA-PSS                      | ✓                 |                       |          |                            | ✓             |                         | ✓           | ✓           |
| RSA-OAEP                     |                   | ✓                     |          |                            | ✓             | ✓                       | ✓           | ✓           |
| ECDSA                        | ✓                 |                       |          |                            | ✓             |                         | ✓           | ✓           |
| ECDH                         |                   |                       |          | ✓                          | ✓             |                         | ✓           | ✓           |
| Ed25519[1](#footnote-1)      | ✓                 |                       |          |                            | ✓             |                         | ✓           | ✓           |
| X25519[1](#footnote-1)       |                   |                       |          | ✓                          | ✓             |                         | ✓           | ✓           |
| NODE-ED25519[2](#footnote-2) | ✓                 |                       |          |                            | ✓             |                         | ✓           | ✓           |
| AES-CTR                      |                   | ✓                     |          |                            | ✓             | ✓                       | ✓           | ✓           |
| AES-CBC                      |                   | ✓                     |          |                            | ✓             | ✓                       | ✓           | ✓           |
| AES-GCM                      |                   | ✓                     |          |                            | ✓             | ✓                       | ✓           | ✓           |
| AES-KW                       |                   |                       |          |                            | ✓             | ✓                       | ✓           | ✓           |
| HMAC                         | ✓                 |                       |          |                            | ✓             |                         | ✓           | ✓           |
| SHA-1                        |                   |                       | ✓        |                            |               |                         |             |             |
| SHA-256                      |                   |                       | ✓        |                            |               |                         |             |             |
| SHA-384                      |                   |                       | ✓        |                            |               |                         |             |             |
| SHA-512                      |                   |                       | ✓        |                            |               |                         |             |             |
| MD5[3](#footnote-3)          |                   |                       | ✓        |                            |               |                         |             |             |
| HKDF                         |                   |                       |          | ✓                          |               |                         |             | ✓           |
| PBKDF2                       |                   |                       |          | ✓                          |               |                         |             | ✓           |

**Footnotes:**

1. Algorithms as specified in the [Secure Curves API ↗](https://wicg.github.io/webcrypto-secure-curves).
2. Legacy non-standard EdDSA is supported for the Ed25519 curve in addition to the Secure Curves version. Since this algorithm is non-standard, note the following while using it:  
   * Use `NODE-ED25519` as the algorithm and `namedCurve` parameters.  
   * Unlike Node.js, Cloudflare will not support raw import of private keys.  
   * The algorithm implementation may change over time. While Cloudflare cannot guarantee it at this time, Cloudflare will strive to maintain backward compatibility and compatibility with Node.js's behavior. Any notable compatibility notes will be communicated in release notes and via this developer documentation.
3. MD5 is not part of the WebCrypto standard but is supported in Cloudflare Workers for interacting with legacy systems that require MD5. MD5 is considered a weak algorithm. Do not rely upon MD5 for security.

---

## Related resources

* [SubtleCrypto documentation on MDN ↗](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto)
* [SubtleCrypto documentation as part of the W3C Web Crypto API specification ↗](https://www.w3.org/TR/WebCryptoAPI//#subtlecrypto-interface)
* [Example: signing requests](https://developers.cloudflare.com/workers/examples/signing-requests/)


---

---
title: Web standards
description: Standardized APIs for use by Workers running on Cloudflare's global network.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Web standards

## JavaScript standards

The Cloudflare Workers runtime is [built on top of the V8 JavaScript and WebAssembly engine](https://developers.cloudflare.com/workers/reference/how-workers-works/). The Workers runtime is updated at least once a week, to at least the version of V8 that is currently used by Google Chrome's stable release. This means you can safely use the latest JavaScript features, with no need for transpilers.

All of the [standard built-in objects ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference) supported by the current Google Chrome stable release are supported, with a few notable exceptions:

* For security reasons, the following are not allowed:  
   * `eval()`  
   * `new Function`  
   * [WebAssembly.compile ↗](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript%5Finterface/compile%5Fstatic)  
   * [WebAssembly.compileStreaming ↗](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript%5Finterface/compileStreaming%5Fstatic)  
   * `WebAssembly.instantiate` with a [buffer parameter ↗](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript%5Finterface/instantiate%5Fstatic#primary%5Foverload%5F%E2%80%94%5Ftaking%5Fwasm%5Fbinary%5Fcode)  
   * [WebAssembly.instantiateStreaming ↗](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript%5Finterface/instantiateStreaming%5Fstatic)
* `Date.now()` returns the time of the last I/O; it does not advance during code execution.

---

## Web standards and global APIs

The following methods are available per the [Worker Global Scope ↗](https://developer.mozilla.org/en-US/docs/Web/API/WorkerGlobalScope):

### Base64 utility methods

* atob()  
   * Decodes a string of data which has been encoded using base-64 encoding.
* btoa()  
   * Creates a base-64 encoded ASCII string from a string of binary data.
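For example:

JavaScript

```
// Encode a string as base64, then decode it back.
// Note: btoa() only accepts Latin-1 input; encode UTF-8 text to bytes first.
const encoded = btoa("Hello, Worker!");
const decoded = atob(encoded);
```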

### Timers

* setInterval()  
   * Schedules a function to execute every time a given number of milliseconds elapses.
* clearInterval()  
   * Cancels the repeated execution set using [setInterval() ↗](https://developer.mozilla.org/en-US/docs/Web/API/setInterval).
* setTimeout()  
   * Schedules a function to execute in a given amount of time.
* clearTimeout()  
   * Cancels the delayed execution set using [setTimeout() ↗](https://developer.mozilla.org/en-US/docs/Web/API/setTimeout).
* [scheduler.wait()](https://developers.cloudflare.com/workers/runtime-apis/scheduler/)  
   * Returns a Promise that resolves after a given number of milliseconds. An `await`\-able alternative to `setTimeout()`.

Note

Timers are only available inside of [the Request Context](https://developers.cloudflare.com/workers/runtime-apis/request/#the-request-context).

### `performance.timeOrigin` and `performance.now()`

* performance.timeOrigin  
   * Returns the high resolution time origin. Workers uses the UNIX epoch as the time origin, meaning that `performance.timeOrigin` will always return `0`.
* performance.now()  
   * Returns a `DOMHighResTimeStamp` representing the number of milliseconds elapsed since `performance.timeOrigin`. Note that Workers intentionally reduces the precision of `performance.now()` such that it returns the time of the last I/O and does not advance during code execution. Because of this, and because `performance.timeOrigin` is always `0`, `performance.now()` will always equal `Date.now()`, yielding a consistent view of the passage of time within a Worker.

### `EventTarget` and `Event`

The [EventTarget ↗](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget) and [Event ↗](https://developer.mozilla.org/en-US/docs/Web/API/Event) API allow objects to publish and subscribe to events.

### `AbortController` and `AbortSignal`

The [AbortController ↗](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) and [AbortSignal ↗](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal) APIs provide a common model for canceling asynchronous operations.

### Fetch global

* fetch()  
   * Starts the process of fetching a resource from the network. Refer to [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/).

Note

The Fetch API is only available inside of [the Request Context](https://developers.cloudflare.com/workers/runtime-apis/request/#the-request-context).

---

## Encoding API

Both `TextEncoder` and `TextDecoder` support UTF-8 encoding/decoding.

[Refer to the MDN documentation for more information ↗](https://developer.mozilla.org/en-US/docs/Web/API/Encoding%5FAPI).

The [TextEncoderStream ↗](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoderStream) and [TextDecoderStream ↗](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoderStream) classes are also available.
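For example, encoding and decoding round-trips UTF-8 text losslessly:

JavaScript

```
// String to UTF-8 bytes and back
const bytes = new TextEncoder().encode("café ☕");
const text = new TextDecoder().decode(bytes);
```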

---

## URL API

The URL API supports URLs conforming to HTTP and HTTPS schemes.

[Refer to the MDN documentation for more information ↗](https://developer.mozilla.org/en-US/docs/Web/API/URL)

Note

The default URL class behavior differs from the URL Spec documented above.

A new spec-compliant implementation of the URL class can be enabled using the `url_standard` [compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/).
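For example:

JavaScript

```
const url = new URL("https://example.com/path/to/page?query=1#hash");
// Components are parsed into properties: protocol, hostname, pathname, search, hash
url.searchParams.set("page", "2"); // mutates url.search in place
```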

---

## Compression Streams

The `CompressionStream` and `DecompressionStream` classes support the `deflate`, `deflate-raw`, and `gzip` compression formats.

[Refer to the MDN documentation for more information ↗](https://developer.mozilla.org/en-US/docs/Web/API/Compression%5FStreams%5FAPI)
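A sketch of a gzip round trip through both classes (the helper name is illustrative):

JavaScript

```
// Compress a string with gzip, then decompress it back to text.
// The helper name is illustrative.
async function gzipRoundTrip(text) {
  const compressed = new Blob([text])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  const decompressed = compressed.pipeThrough(new DecompressionStream("gzip"));
  return new Response(decompressed).text();
}
```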

---

## URLPattern API

The `URLPattern` API provides a mechanism for matching URLs based on a convenient pattern syntax.

[Refer to the MDN documentation for more information ↗](https://developer.mozilla.org/en-US/docs/Web/API/URLPattern).

---

## `Intl`

The `Intl` API allows you to format dates, times, numbers, and more to the format that is used by a provided locale (language and region).

[Refer to the MDN documentation for more information ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Intl).
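For example, formatting a number as currency for the `de-DE` locale:

JavaScript

```
// Format 1234.5 as euros using German conventions
const price = new Intl.NumberFormat("de-DE", {
  style: "currency",
  currency: "EUR",
}).format(1234.5);
```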

---

## `navigator.userAgent`

When the [global\_navigator](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#global-navigator) compatibility flag is set, the [navigator.userAgent ↗](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/userAgent) property is available with the value `'Cloudflare-Workers'`. This can be used, for example, to reliably determine that code is running within the Workers environment.

## Unhandled promise rejections

The [unhandledrejection ↗](https://developer.mozilla.org/en-US/docs/Web/API/Window/unhandledrejection%5Fevent) event is emitted by the global scope when a JavaScript promise is rejected without a rejection handler attached.

The [rejectionhandled ↗](https://developer.mozilla.org/en-US/docs/Web/API/Window/rejectionhandled%5Fevent) event is emitted by the global scope when a JavaScript promise rejection is handled late (after a rejection handler is attached to the promise after an `unhandledrejection` event has already been emitted).

worker.js

```

addEventListener("unhandledrejection", (event) => {
  console.log(event.promise); // The promise that was rejected.
  console.log(event.reason); // The value or Error with which the promise was rejected.
});

addEventListener("rejectionhandled", (event) => {
  console.log(event.promise); // The promise that was rejected.
  console.log(event.reason); // The value or Error with which the promise was rejected.
});


```

---

## `navigator.sendBeacon(url[, data])`

When the [global\_navigator](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#global-navigator) compatibility flag is set, the [navigator.sendBeacon(...) ↗](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/sendBeacon) API is available to send an HTTP `POST` request containing a small amount of data to a web server. This API is intended as a means of transmitting analytics or diagnostics information asynchronously on a best-effort basis.

For example, you can replace:

JavaScript

```

const promise = fetch("https://example.com", {
  method: "POST",
  body: "hello world",
});
ctx.waitUntil(promise);


```

with `navigator.sendBeacon(...)`:

JavaScript

```

navigator.sendBeacon("https://example.com", "hello world");


```

## The Web File System Access API

When the `enable_web_file_system` compatibility flag is set, Workers supports the [Web File System Access API ↗](https://developer.mozilla.org/en-US/docs/Web/API/File%5FSystem%5FAccess%5FAPI), which allows you to read and write files and directories to a virtual file system within the Worker environment. This API provides access to the same in-memory virtual file system as the [node:fs module](https://developers.cloudflare.com/workers/runtime-apis/nodejs/fs/) but does not require Node.js compatibility to be enabled.

JavaScript

```

const root = await navigator.storage.getDirectory();

export default {
  async fetch(request) {
    const fileHandle = await root.getFileHandle("hello.txt", { create: true });
    const writable = await fileHandle.createWritable();
    await writable.write("Hello, world!");
    await writable.close();

    const file = await fileHandle.getFile();
    const contents = await file.text();

    return new Response(contents, { status: 200 });
  },
};


```

Please refer to the [MDN documentation ↗](https://developer.mozilla.org/en-US/docs/Web/API/File%5FSystem%5FAccess%5FAPI) for more information on using this API, and to the [node:fs documentation](https://developers.cloudflare.com/workers/runtime-apis/nodejs/fs/) for details on the virtual file system structure and limitations.


---

---
title: WebAssembly (Wasm)
description: Execute code written in a language other than JavaScript or write an entire Cloudflare Worker in Rust.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# WebAssembly (Wasm)

[WebAssembly ↗](https://webassembly.org/) (abbreviated Wasm) allows you to compile languages like [Rust](https://developers.cloudflare.com/workers/languages/rust/), Go, or C to a binary format that can run in a wide variety of environments, including [web browsers ↗](https://developer.mozilla.org/en-US/docs/WebAssembly#browser%5Fcompatibility), Cloudflare Workers, and other WebAssembly runtimes.

You can use WebAssembly to:

* Execute code written in a language other than JavaScript, via `WebAssembly.instantiate()`.  
Note  
`WebAssembly.instantiate()` only supports pre-compiled modules as documented in the [web-standards documentation](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#javascript-standards).
* Write an entire Cloudflare Worker in Rust, using bindings that make Workers' JavaScript APIs available directly from your Rust code.

Most programming languages can be compiled to Wasm, although support varies across languages and compilers. Guides are available for the following languages:

* [ Wasm in JavaScript ](https://developers.cloudflare.com/workers/runtime-apis/webassembly/javascript/)

## Supported proposals

WebAssembly is a rapidly evolving set of standards, with [many proposed APIs ↗](https://webassembly.org/roadmap/) which are in various stages of development. In general, Workers supports the same set of features that are available in Google Chrome.

### SIMD

SIMD is supported on Workers. For more information on using SIMD in WebAssembly, refer to [Fast, parallel applications with WebAssembly SIMD ↗](https://v8.dev/features/simd).

### Threading

Threading is not possible in Workers. Each Worker runs in a single thread, and the [Web Worker ↗](https://developer.mozilla.org/en-US/docs/Web/API/Web%5FWorkers%5FAPI) API is not supported.

## Binary size

Compiling to WebAssembly often requires including additional runtime dependencies. As a result, Workers that use WebAssembly are typically larger than an equivalent Worker written in JavaScript, and larger Workers may take longer to start. Refer to [Worker startup time](https://developers.cloudflare.com/workers/platform/limits/#worker-startup-time) for more information. We recommend using tools like [wasm-opt ↗](https://github.com/brson/wasm-opt-rs) to optimize the size of your Wasm binary.

## WebAssembly System Interface (WASI)

The [WebAssembly System Interface ↗](https://wasi.dev/) (abbreviated WASI) is a modular system interface for WebAssembly that standardizes a set of underlying system calls for networking, file system access, and more. Applications can depend on the WebAssembly System Interface to behave identically across host environments and operating systems.

WASI is at an earlier stage and evolving more rapidly than Wasm itself. WASI support is experimental on Cloudflare Workers, with only some syscalls implemented. Refer to our [open source implementation of WASI ↗](https://github.com/cloudflare/workers-wasi) and the [blog post about WASI on Workers ↗](https://blog.cloudflare.com/announcing-wasi-on-workers/) demonstrating its use.

### Resources on WebAssembly

* [Serverless Rust with Cloudflare Workers ↗](https://blog.cloudflare.com/cloudflare-workers-as-a-serverless-rust-platform/)
* [WebAssembly on Cloudflare Workers ↗](https://blog.cloudflare.com/webassembly-on-cloudflare-workers/)


---

---
title: Wasm in JavaScript
description: Wasm can be used from within a Worker written in JavaScript or TypeScript by importing a Wasm module, and instantiating an instance of this module using WebAssembly.instantiate(). This can be used to accelerate computationally intensive operations which do not involve significant I/O.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Wasm in JavaScript

Wasm can be used from within a Worker written in JavaScript or TypeScript by importing a Wasm module, and instantiating an instance of this module using [WebAssembly.instantiate() ↗](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript%5Finterface/instantiate). This can be used to accelerate computationally intensive operations which do not involve significant I/O.

This guide demonstrates the basics of Wasm and JavaScript interoperability.

## Simple Wasm Module

In this guide, you will use the WebAssembly Text Format to create a simple Wasm module to understand how imports and exports work. In practice, you would not write code in this format. You would instead use the programming language of your choice and compile directly to WebAssembly Binary Format (`.wasm`).

Review the following example module (`;;` denotes a comment):

```

;; src/simple.wat

(module

  ;; Import a function from JavaScript named `imported_func`

  ;; which takes a single i32 argument and assign to

  ;; variable $i

  (func $i (import "imports" "imported_func") (param i32))

  ;; Export a function named `exported_func` which takes a

  ;; single i32 argument and returns an i32

  (func (export "exported_func") (param $input i32) (result i32)

    ;; Invoke `imported_func` with $input as argument

    local.get $input

    call $i

    ;; Return $input

    local.get $input

    return

  )

)


```

Using [wat2wasm ↗](https://github.com/WebAssembly/wabt), convert the WAT format to WebAssembly Binary Format:

Terminal window

```

wat2wasm src/simple.wat -o src/simple.wasm


```

## Bundling

Wrangler will bundle any Wasm module that ends in `.wasm` or `.wasm?module`, so that it is available at runtime within your Worker. This is done using a default bundling rule which can be customized in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Refer to [Wrangler Bundling](https://developers.cloudflare.com/workers/wrangler/bundling/) for more information.

## Use from JavaScript

After you have converted the WAT format to WebAssembly Binary Format, import and use the Wasm module in your existing JavaScript or TypeScript Worker:

TypeScript

```

import mod from "./simple.wasm";


// Define imports available to Wasm instance.

const importObject = {

  imports: {

    imported_func: (arg: number) => {

      console.log(`Hello from JavaScript: ${arg}`);

    },

  },

};


// Create instance of WebAssembly Module `mod`, supplying

// the expected imports in `importObject`. This should be

// done at the top level of the script to avoid instantiation on every request.

const instance = await WebAssembly.instantiate(mod, importObject);


export default {

  async fetch() {

    // Invoke the `exported_func` from our Wasm Instance with

    // an argument.

    const retval = instance.exports.exported_func(42);

    // Return the return value!

    return new Response(`Success: ${retval}`);

  },

};


```

When invoked, this Worker should log `Hello from JavaScript: 42` and return `Success: 42`, demonstrating the ability to invoke Wasm methods with arguments from JavaScript and vice versa.
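For quick local experimentation outside of Workers — for example in Node.js, where `WebAssembly.instantiate()` also accepts raw bytes rather than only pre-compiled modules — the same export mechanics can be exercised with a tiny hand-assembled module. This is a sketch for experimentation, not something you would ship; in a Worker, import a compiled `.wasm` file as shown above:

```javascript
// A tiny hand-assembled Wasm module that exports `add(a, b) -> a + b`.
// (In a Worker you would import a .wasm file instead; instantiating from
// raw bytes like this is for local experimentation, e.g. in Node.js.)
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00, // function section: one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, // code section header, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0, local.get 1, i32.add, end
]);

const { instance } = await WebAssembly.instantiate(bytes);
const sum = instance.exports.add(2, 40); // 42
```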

## Next steps

In practice, you will likely compile a language of your choice (such as Rust) to WebAssembly binaries. Many languages provide a `bindgen` to simplify the interaction between JavaScript and Wasm. These tools may integrate with your JavaScript bundler, and provide an API other than the WebAssembly API for initializing and invoking your Wasm module. As an example, refer to the [Rust wasm-bindgen documentation ↗](https://rustwasm.github.io/wasm-bindgen/examples/without-a-bundler.html).

Alternatively, to write your entire Worker in Rust, Workers provides many of the same [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis) and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) when using the `workers-rs` crate. For more information, refer to the [Workers Rust guide](https://developers.cloudflare.com/workers/languages/rust/).


---

---
title: WebSockets
description: Communicate in real time with your Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# WebSockets

## Background

WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. For a complete example, refer to [Using the WebSockets API](https://developers.cloudflare.com/workers/examples/websockets/).

Note

If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, clients need to send messages to a single point of coordination. Durable Objects provide that single point of coordination for Cloudflare Workers and are often used alongside WebSockets to persist state across multiple clients and connections. In that case, refer to [Durable Objects](https://developers.cloudflare.com/durable-objects/) to get started, and prefer Durable Objects' extended [WebSockets API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/).

## Constructor

JavaScript

```

// { 0: <WebSocket>, 1: <WebSocket> }

let websocketPair = new WebSocketPair();


```

The WebSocketPair returned from this constructor is an Object, with two WebSockets at keys `0` and `1`.

These WebSockets are commonly referred to as `client` and `server`. The below example combines `Object.values` and ES6 destructuring to retrieve the WebSockets as `client` and `server`:

JavaScript

```

let [client, server] = Object.values(new WebSocketPair());


```

## Methods

### accept

* `accept()`  
   * Accepts the WebSocket connection and begins terminating requests for the WebSocket on Cloudflare's global network. This effectively enables the Workers runtime to begin responding to and handling WebSocket requests.

### addEventListener

* `addEventListener(event WebSocketEvent, callback Function)`  
   * Add callback functions to be executed when an event has occurred on the WebSocket.

#### Parameters

* `event` WebSocketEvent  
   * The WebSocket event (refer to [Events](https://developers.cloudflare.com/workers/runtime-apis/websockets/#events)) to listen to.
* `callback(message Message)` Function  
   * A function to be called when the WebSocket responds to a specific event.

### close

* `close(code number, reason string)`  
   * Close the WebSocket connection.

#### Parameters

* `code` integer optional  
   * An integer indicating the close code sent by the server. This should match an option from the [list of status codes ↗](https://developer.mozilla.org/en-US/docs/Web/API/CloseEvent#status%5Fcodes) provided by the WebSocket spec.
* `reason` string optional  
   * A human-readable string indicating why the WebSocket connection was closed.

### send

* `send(message string | ArrayBuffer | ArrayBufferView)`  
   * Send a message to the other WebSocket in this WebSocket pair.

#### Parameters

* `message` string  
   * The message to send down the WebSocket connection to the corresponding client. This should be a string or something coercible into a string; for example, strings and numbers will be simply cast into strings, but objects and arrays should be cast to JSON strings using `JSON.stringify`, and parsed in the client.
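Since `send()` accepts strings and binary buffers, structured data is usually serialized with `JSON.stringify` before sending and parsed with `JSON.parse` on the receiving end. A minimal sketch of that round trip (the message shape here is only illustrative):

```javascript
// Serialize a structured message before sending it over a WebSocket,
// then parse it back on the receiving side.
const outgoing = { type: "chat", user: "alice", text: "hello" };
const wireFormat = JSON.stringify(outgoing); // what you pass to server.send(...)

// On the client, inside the "message" event handler:
const incoming = JSON.parse(wireFormat);
// incoming.text === "hello"
```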

---

## Events

* `close`  
   * An event indicating the WebSocket has closed.
* `error`  
   * An event indicating there was an error with the WebSocket.
* `message`  
   * An event indicating a new message received from the client, including the data passed by the client.

Note

WebSocket messages received by a Worker have a size limit of 32 MiB (33,554,432 bytes). If a larger message is sent, the WebSocket will be automatically closed with a `1009` "Message is too large" response.
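One way to guard against hitting this limit is to measure a string message's UTF-8 byte length before sending it. This is a sketch; the constant simply mirrors the 32 MiB limit described above:

```javascript
// Check a string message's UTF-8 byte length against the 32 MiB limit
// before sending it over a WebSocket. Note that multi-byte characters
// make the byte length larger than the string length.
const MAX_MESSAGE_BYTES = 32 * 1024 * 1024; // 33,554,432 bytes

function fitsInWebSocketMessage(message) {
  const byteLength = new TextEncoder().encode(message).byteLength;
  return byteLength <= MAX_MESSAGE_BYTES;
}

fitsInWebSocketMessage("hello"); // true (5 bytes)
```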

## Types

### Message

* `data` any - The data passed back from the other WebSocket in your pair.
* `type` string - Defaults to `message`.

---

## Related resources

* [Mozilla Developer Network's (MDN) documentation on the WebSocket class ↗](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket)
* [Our WebSocket template for building applications on Workers using WebSockets ↗](https://github.com/cloudflare/websocket-template)


---

---
title: Static Assets
description: Create full-stack applications deployed to Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Static Assets

You can upload static assets (HTML, CSS, images and other files) as part of your Worker, and Cloudflare will handle caching and serving them to web browsers.

**Start from CLI** - Scaffold a React SPA with an API Worker, and use the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).

npm

```
npm create cloudflare@latest -- my-react-app --framework=react
```

yarn

```
yarn create cloudflare my-react-app --framework=react
```

pnpm

```
pnpm create cloudflare@latest my-react-app --framework=react
```

---

**Or just deploy to Cloudflare**

[![Deploy to Workers](https://deploy.workers.cloudflare.com/button)](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create/deploy-to-workers&repository=https://github.com/cloudflare/templates/tree/main/vite-react-template)

Learn more about supported frameworks on Workers.

[ Supported frameworks ](https://developers.cloudflare.com/workers/framework-guides/) Start building on Workers with our framework guides. 

### How it works

When you deploy your project, Cloudflare deploys both your Worker code and your static assets in a single operation. This deployment operates as a tightly integrated "unit" running across Cloudflare's network, combining static file hosting, custom logic, and global caching.

The **assets directory** specified in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/#assets) is central to this design. During deployment, Wrangler automatically uploads the files from this directory to Cloudflare's infrastructure. Once deployed, requests for these assets are routed efficiently to locations closest to your users.

wrangler.jsonc

```

{

  "$schema": "./node_modules/wrangler/config-schema.json",

  "name": "my-spa",

  "main": "src/index.js",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "assets": {

    "directory": "./dist",

    "binding": "ASSETS"

  }

}


```

wrangler.toml

```

"$schema" = "./node_modules/wrangler/config-schema.json"

name = "my-spa"

main = "src/index.js"

# Set this to today's date

compatibility_date = "2026-04-03"


[assets]

directory = "./dist"

binding = "ASSETS"


```

Note

If you are using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you do not need to specify `assets.directory`. For more information about using static assets with the Vite plugin, refer to the [plugin documentation](https://developers.cloudflare.com/workers/vite-plugin/reference/static-assets/).

By adding an [**assets binding**](https://developers.cloudflare.com/workers/static-assets/binding/#binding), you can directly fetch and serve assets within your Worker code.


JavaScript

```

// index.js


export default {

  async fetch(request, env) {

    const url = new URL(request.url);


    if (url.pathname.startsWith("/api/")) {

      return new Response(JSON.stringify({ name: "Cloudflare" }), {

        headers: { "Content-Type": "application/json" },

      });

    }


    return env.ASSETS.fetch(request);

  },

};


```

Python

```

from workers import WorkerEntrypoint, Response

from urllib.parse import urlparse


class Default(WorkerEntrypoint):

  async def fetch(self, request):

    # Example of serving static assets

    url = urlparse(request.url)

    if url.path.startswith("/api/"):

      return Response.json({"name": "Cloudflare"})


    return await self.env.ASSETS.fetch(request)


```

### Routing behavior

By default, if a requested URL matches a file in the static assets directory, that file will be served — without invoking Worker code. If no matching asset is found and a Worker script is present, the request will be processed by the Worker. The Worker can return a response or choose to defer again to static assets by using the [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) (e.g. `env.ASSETS.fetch(request)`). If no Worker script is present, a `404 Not Found` response is returned.

The default behavior for requests which don't match a static asset can be changed by setting the [not\_found\_handling option under assets](https://developers.cloudflare.com/workers/wrangler/configuration/#assets) in your Wrangler configuration file:

* [not\_found\_handling = "single-page-application"](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/): Sets your application to return a `200 OK` response with `index.html` for requests which don't match a static asset. Use this if you have a Single Page Application. We recommend pairing this with selective routing using `run_worker_first` for [advanced routing control](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/#advanced-routing-control).
* [not\_found\_handling = "404-page"](https://developers.cloudflare.com/workers/static-assets/routing/static-site-generation/#custom-404-pages): Sets your application to return a `404 Not Found` response with the nearest `404.html` for requests which don't match a static asset.

wrangler.jsonc

```

{

  "assets": {

    "directory": "./dist",

    "not_found_handling": "single-page-application"

  }

}


```

wrangler.toml

```

[assets]

directory = "./dist"

not_found_handling = "single-page-application"


```

If you want the Worker code to execute before serving assets, you can use the `run_worker_first` option. This can be set to `true` to invoke the Worker script for all requests, or configured as an array of route patterns for selective Worker-script-first routing:

**Invoking your Worker script on specific paths:**

wrangler.jsonc

```

{

  "name": "my-spa-worker",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "main": "./src/index.ts",

  "assets": {

    "directory": "./dist/",

    "not_found_handling": "single-page-application",

    "binding": "ASSETS",

    "run_worker_first": ["/api/*", "!/api/docs/*"]

  }

}


```

wrangler.toml

```

name = "my-spa-worker"

# Set this to today's date

compatibility_date = "2026-04-03"

main = "./src/index.ts"


[assets]

directory = "./dist/"

not_found_handling = "single-page-application"

binding = "ASSETS"

run_worker_first = [ "/api/*", "!/api/docs/*" ]


```

For a more advanced pattern, refer to [SPA shell with bootstrap data](https://developers.cloudflare.com/workers/examples/spa-shell/), which uses HTMLRewriter to inject prefetched API data into the HTML stream.

[ Routing options ](https://developers.cloudflare.com/workers/static-assets/routing/) Learn more about how you can customize routing behavior. 

### Caching behavior

Cloudflare provides automatic caching for static assets across its network, ensuring fast delivery to users worldwide. When a static asset is requested, it is automatically cached for future requests.

* **First Request:** When an asset is requested for the first time, it is fetched from storage and cached at the nearest Cloudflare location.
* **Subsequent Requests:** If a request for the same asset reaches a data center that does not have it cached, Cloudflare's [tiered caching system](https://developers.cloudflare.com/cache/how-to/tiered-cache/) allows it to be retrieved from a nearby cache rather than going back to storage. This improves cache hit ratio, reduces latency, and reduces unnecessary origin fetches.

## Try it out

[ Vite + React SPA tutorial ](https://developers.cloudflare.com/workers/vite-plugin/tutorial/) Learn how to build and deploy a full-stack Single Page Application with static assets and API routes. 

## Learn more

[ Supported frameworks ](https://developers.cloudflare.com/workers/framework-guides/) Start building on Workers with our framework guides. 

[ Billing and limitations ](https://developers.cloudflare.com/workers/static-assets/billing-and-limitations/) Learn more about how requests are billed, current limitations, and troubleshooting. 


---

---
title: Billing and Limitations
description: Billing, troubleshooting, and limitations for Static assets on Workers
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Billing and Limitations

## Billing

Requests to a project with static assets can either return static assets or invoke the Worker script, depending on whether the request [matches a static asset or not](https://developers.cloudflare.com/workers/static-assets/routing/).

* Requests to static assets are free and unlimited. Requests to the Worker script (for example, in the case of SSR content) are billed according to Workers pricing. Refer to [pricing](https://developers.cloudflare.com/workers/platform/pricing/#example-2) for an example.
* There is no additional cost for storing Assets.
* **Important note for free tier users**: When using [run\_worker\_first](https://developers.cloudflare.com/workers/static-assets/binding/#run%5Fworker%5Ffirst), requests matching the specified patterns will always invoke your Worker script. If you exceed your free tier request limits, these requests will receive a `429 Too Many Requests` response instead of falling back to static asset serving. Negative patterns (patterns beginning with `!/`) will continue to serve assets, since those requests are routed directly to assets without invoking your Worker script.

## Limitations

Refer to the [Platform Limits](https://developers.cloudflare.com/workers/platform/limits/#static-assets) for details.

## Troubleshooting

* `assets.bucket is a required field`: if you see this error, update Wrangler to version `3.78.10` or later. `bucket` is not a required field in current versions.


---

---
title: Configuration and Bindings
description: Details on how to configure Workers static assets and its binding.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Configuration and Bindings

Configuring a Worker with assets requires specifying a [directory](https://developers.cloudflare.com/workers/static-assets/binding/#directory) and, optionally, an [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) in your Worker's Wrangler file. The [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) allows you to dynamically fetch assets from within your Worker script (e.g. `env.ASSETS.fetch()`), similar to how you might make a `fetch()` call with a [Service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/http/).

Only one collection of static assets can be configured in each Worker.

## `directory`

The folder of static assets to be served. For many frameworks, this is the `./public/`, `./dist/`, or `./build/` folder.

wrangler.jsonc

```

{

  "$schema": "./node_modules/wrangler/config-schema.json",

  "name": "my-worker",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "assets": {

    "directory": "./public/",

  },

}


```

wrangler.toml

```

"$schema" = "./node_modules/wrangler/config-schema.json"

name = "my-worker"

# Set this to today's date

compatibility_date = "2026-04-03"


[assets]

directory = "./public/"


```

### Ignoring assets

Sometimes there are files in the assets directory that should not be uploaded.

In this case, create a `.assetsignore` file in the root of the assets directory. This file takes the same format as `.gitignore`.

Wrangler will not upload asset files that match lines in this file.

**Example**

You are migrating from a Pages project where the assets directory is `dist`. You do not want to upload the server-side Worker code nor Pages configuration files as public client-side assets. Add the following `.assetsignore` file:

```

_worker.js

_redirects

_headers


```

Now Wrangler will not upload these files as client-side assets when deploying the Worker.

## `run_worker_first`

Controls whether to invoke the Worker script for requests that would otherwise have matched a static asset. `run_worker_first = false` (default) serves any static asset matching a request, while `run_worker_first = true` unconditionally [invokes your Worker script](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run-your-worker-script-first).

wrangler.jsonc

```

{

  "$schema": "./node_modules/wrangler/config-schema.json",

  "name": "my-worker",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "main": "src/index.ts",

  // The following configuration unconditionally invokes the Worker script at

  // `src/index.ts`, which can programmatically fetch assets via the ASSETS binding

  "assets": {

    "directory": "./public/",

    "binding": "ASSETS",

    "run_worker_first": true,

  },

}


```

wrangler.toml

```

"$schema" = "./node_modules/wrangler/config-schema.json"

name = "my-worker"

# Set this to today's date

compatibility_date = "2026-04-03"

main = "src/index.ts"


[assets]

directory = "./public/"

binding = "ASSETS"

run_worker_first = true


```

You can also specify `run_worker_first` as an array of route patterns to selectively run the Worker script first only for specific routes.

The array supports glob patterns with `*` for deep matching and negative patterns with `!` prefix.

Negative patterns take precedence over non-negative patterns: the Worker runs first when a non-negative pattern matches and none of the negative patterns match.

The order in which the patterns are listed is not significant.
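The precedence rules above can be sketched as a small matcher. This is an illustrative approximation, not the runtime's actual implementation; it only handles `*` globs and `!` negations:

```javascript
// Sketch of run_worker_first route matching: the Worker runs first when
// some non-negative pattern matches and no negative pattern matches.
function globToRegExp(glob) {
  // Escape regex metacharacters, then let `*` match any sequence (deep match).
  const escaped = glob.replace(/[.+^${}()|[\]\\?]/g, "\\$&");
  return new RegExp(`^${escaped.replace(/\*/g, ".*")}$`);
}

function runsWorkerFirst(pathname, patterns) {
  const negative = patterns.filter((p) => p.startsWith("!"));
  const positive = patterns.filter((p) => !p.startsWith("!"));
  if (negative.some((p) => globToRegExp(p.slice(1)).test(pathname))) return false;
  return positive.some((p) => globToRegExp(p).test(pathname));
}

const patterns = ["/api/*", "!/api/docs/*"];
runsWorkerFirst("/api/users", patterns); // true — Worker script runs first
runsWorkerFirst("/api/docs/intro", patterns); // false — negative pattern wins
```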

`run_worker_first` is often paired with the [not\_found\_handling = "single-page-application" setting](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/#advanced-routing-control):

wrangler.jsonc

```

{

  "name": "my-spa-worker",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "main": "./src/index.ts",

  "assets": {

    "directory": "./dist/",

    "not_found_handling": "single-page-application",

    "binding": "ASSETS",

    "run_worker_first": ["/api/*", "!/api/docs/*"]

  }

}


```

wrangler.toml

```

name = "my-spa-worker"

# Set this to today's date

compatibility_date = "2026-04-03"

main = "./src/index.ts"


[assets]

directory = "./dist/"

not_found_handling = "single-page-application"

binding = "ASSETS"

run_worker_first = [ "/api/*", "!/api/docs/*" ]


```

In this configuration, requests to `/api/*` routes will invoke the Worker script first, except for `/api/docs/*` which will follow the default asset-first routing behavior.

Common uses for `run_worker_first` include authentication checks, A/B testing, and [injecting bootstrap data into your SPA shell](https://developers.cloudflare.com/workers/examples/spa-shell/).

## `binding`

Configuring the optional [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) gives you access to the collection of assets from within your Worker script.

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  "main": "./src/index.js",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "./public/",
    "binding": "ASSETS"
  }
}
```

```toml
#:schema node_modules/wrangler/config-schema.json
name = "my-worker"
main = "./src/index.js"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "./public/"
binding = "ASSETS"
```

In the example above, assets would be available through `env.ASSETS`.

### Runtime API Reference

#### `fetch()`

**Parameters**

* `request: Request | URL | string` Pass a [Request object](https://developers.cloudflare.com/workers/runtime-apis/request/), URL object, or URL string. Requests made through this method have `html_handling` and `not_found_handling` configuration applied to them.

**Response**

* `Promise<Response>` Returns a static asset response for the given request.

**Example**

Your dynamic code can make new requests, or forward incoming requests, to your project's static assets using the assets binding. For example, `env.ASSETS.fetch(request)`, `env.ASSETS.fetch(new URL('https://assets.local/my-file'))` or `env.ASSETS.fetch('https://assets.local/my-file')`. The hostname used in the URL (for example, `assets.local`) is not meaningful — any valid hostname will work. Only the URL pathname is used to match assets.

Note

If you need to fetch assets from within an [RPC method](https://developers.cloudflare.com/workers/runtime-apis/rpc/#fetching-static-assets) (where there is no incoming `request`), construct a URL using any hostname — for example, `this.env.ASSETS.fetch(new Request('https://assets.local/path/to/asset'))`.

Take the following example, which configures a Worker script to return a response for all requests to `/api/`. Otherwise, the Worker script passes the incoming request through to the assets binding. Because a Worker script is only invoked when the requested route has not matched any static assets, this will always evaluate [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/#routing-behavior) behavior.

* [  JavaScript ](#tab-panel-7666)
* [  TypeScript ](#tab-panel-7667)

JavaScript

```js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      // TODO: Add your custom /api/* logic here.
      return new Response("Ok");
    }
    // Passes the incoming request through to the assets binding.
    // No asset matched this request, so this will evaluate `not_found_handling` behavior.
    return env.ASSETS.fetch(request);
  },
};
```

TypeScript

```ts
interface Env {
  ASSETS: Fetcher;
}

export default {
  async fetch(request, env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      // TODO: Add your custom /api/* logic here.
      return new Response("Ok");
    }
    // Passes the incoming request through to the assets binding.
    // No asset matched this request, so this will evaluate `not_found_handling` behavior.
    return env.ASSETS.fetch(request);
  },
} satisfies ExportedHandler<Env>;
```

## Routing configuration

For the various static asset routing configuration options, refer to [Routing](https://developers.cloudflare.com/workers/static-assets/routing/).

## Smart Placement

[Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) can be used to place a Worker's code close to your back-end infrastructure. Smart Placement only has an effect if you have specified `main`, pointing to your Worker code.

### Smart Placement with Worker Code First

If you want to run your [Worker code ahead of assets](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run-your-worker-script-first) by setting `run_worker_first=true`, all requests must first travel to your Smart-Placed Worker. As a result, you may experience increased latency for asset requests.

Use Smart Placement with `run_worker_first=true` when you need to integrate with other backend services, authenticate requests before serving any assets, or if you want to make modifications to your assets before serving them.

If you want some assets served as quickly as possible to the user, but others served behind a smart-placed Worker, consider splitting your app into multiple Workers and [using service bindings to connect them](https://developers.cloudflare.com/workers/configuration/placement/#multiple-workers).

### Smart Placement with Assets First

Enabling Smart Placement with `run_worker_first=false` (or leaving it unset) serves assets from as close as possible to your users, while running your Worker logic where it executes most efficiently (such as near a database).

Use Smart Placement with `run_worker_first=false` (or leave it unset) when prioritizing fast asset delivery.

This will not impact the [default routing behavior](https://developers.cloudflare.com/workers/static-assets/#routing-behavior).


---

---
title: Direct Uploads
description: Upload assets through the Workers API.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Direct Uploads

Note

Directly uploading assets via the API is an advanced approach which, unless you are building a programmatic integration, most users will not need. Instead, we encourage you to deploy your Worker with [Wrangler](https://developers.cloudflare.com/workers/static-assets/get-started/#1-create-a-new-worker-project-using-the-cli).

Our API lets you upload and include static assets as part of a Worker. These static assets can be served for free, and you can also fetch assets through an optional [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) to power more advanced applications. This guide describes the process for attaching assets to your Worker directly through the API.

* [ Workers ](#tab-panel-7676)
* [ Workers for Platforms ](#tab-panel-7677)

```mermaid
sequenceDiagram
    participant User
    participant Workers API
    User<<->>Workers API: Submit manifest<br/>POST /client/v4/accounts/:accountId/workers/scripts/:scriptName/assets-upload-session
    User<<->>Workers API: Upload files<br/>POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true
    User<<->>Workers API: Upload script version<br/>PUT /client/v4/accounts/:accountId/workers/scripts/:scriptName
```

```mermaid
sequenceDiagram
    participant User
    participant Workers API
    User<<->>Workers API: Submit manifest<br/>POST /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName/assets-upload-session
    User<<->>Workers API: Upload files<br/>POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true
    User<<->>Workers API: Upload script version<br/>PUT /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName
```

The asset upload flow can be distilled into three distinct phases:

1. Registration of a manifest
2. Upload of the assets
3. Deployment of the Worker

## Upload manifest

The asset manifest is a ledger of the files you want to include in your Worker. It tracks the assets associated with each Worker version and eliminates the need to re-upload unchanged files with each new upload.

The [manifest upload request](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/assets/subresources/upload/methods/create/) describes each file you intend to upload. Each key is the file's path and name, and its value is an object containing metadata about the file.

`hash` is a 32-character hexadecimal hash of the file, while `size` is the size of the file in bytes.

* [ Workers ](#tab-panel-7678)
* [ Workers for Platforms ](#tab-panel-7679)

```sh
curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts/{script_name}/assets-upload-session \
--header 'content-type: application/json' \
--header 'Authorization: Bearer <API_TOKEN>' \
--data '{
  "manifest": {
    "/filea.html": {
      "hash": "08f1dfda4574284ab3c21666d1",
      "size": 12
    },
    "/fileb.html": {
      "hash": "4f1c1af44620d531446ceef93f",
      "size": 23
    },
    "/filec.html": {
      "hash": "54995e302614e0523757a04ec1",
      "size": 23
    }
  }
}'
```

```sh
curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/dispatch/namespaces/{dispatch_namespace}/scripts/{script_name}/assets-upload-session \
--header 'content-type: application/json' \
--header 'Authorization: Bearer <API_TOKEN>' \
--data '{
  "manifest": {
    "/filea.html": {
      "hash": "08f1dfda4574284ab3c21666d1",
      "size": 12
    },
    "/fileb.html": {
      "hash": "4f1c1af44620d531446ceef93f",
      "size": 23
    },
    "/filec.html": {
      "hash": "54995e302614e0523757a04ec1",
      "size": 23
    }
  }
}'
```

The resulting response will contain a JWT, which provides authentication during file upload. The JWT is valid for one hour.

In addition to the JWT, the response tells you how to optimally batch your file uploads. These instructions are encoded in the `buckets` field: each array in `buckets` lists file hashes that should be uploaded together. Files that were uploaded in a recent previous version of your Worker do not need to be re-uploaded and are omitted from `buckets`.

```json
{
  "result": {
    "jwt": "<UPLOAD_TOKEN>",
    "buckets": [
      ["08f1dfda4574284ab3c21666d1", "4f1c1af44620d531446ceef93f"],
      ["54995e302614e0523757a04ec1"]
    ]
  },
  "success": true,
  "errors": null,
  "messages": null
}
```

Note

If all assets have been previously uploaded, `buckets` will be empty, and `jwt` will contain a completion token. Uploading files is not necessary, and you can skip directly to [uploading a new script or version](https://developers.cloudflare.com/workers/static-assets/direct-upload/#createdeploy-new-version).

### Limitations

* Limits differ based on account plan. Refer to [Account Plan Limits](https://developers.cloudflare.com/workers/platform/limits/#account-plan-limits) for more information on limitations of static assets.

## Upload Static Assets

The [file upload API](https://developers.cloudflare.com/api/resources/workers/subresources/assets/subresources/upload/methods/create/) requires that files be uploaded using `multipart/form-data`. The contents of each file must be base64 encoded, and the `base64` query parameter in the URL must be set to `true`.

The `Content-Type` header provided for each file part is stored and attached when the file is eventually served. If you wish to avoid serving a `Content-Type` header, send `application/null` at upload time.

The `Authorization` header must be provided as a bearer token, using the JWT (upload token) from the aforementioned manifest upload call.
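The multipart payload described above can be sketched as follows, assuming Node 18+ (global `FormData`, `Blob`, and `fetch`); `buildBucketForm` is a hypothetical helper, and `accountId` and `uploadJwt` are placeholders taken from the manifest-upload response:

```javascript
// Builds one bucket's multipart/form-data payload.
// `files` maps each file hash to its raw content and the Content-Type
// that should eventually be served with the asset.
function buildBucketForm(files) {
  const form = new FormData();
  for (const [hash, { content, contentType }] of Object.entries(files)) {
    // Each part is named by the file's hash; the body is the base64-encoded
    // file content, per the `?base64=true` query parameter.
    form.append(
      hash,
      new Blob([content.toString("base64")], { type: contentType }),
      hash,
    );
  }
  return form;
}

// Usage (placeholders shown; the upload JWT goes in the bearer token):
// await fetch(
//   `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/assets/upload?base64=true`,
//   {
//     method: "POST",
//     headers: { Authorization: `Bearer ${uploadJwt}` },
//     body: buildBucketForm(files),
//   },
// );
```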

Once every file in the manifest has been uploaded, a status code of 201 will be returned, with the `jwt` field present. This JWT is a final "completion" token which can be used to create a deployment of a Worker with this set of assets. The completion token is valid for one hour.

## Create/Deploy New Version

[Script](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/), [Version](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/), and [Workers for Platforms script](https://developers.cloudflare.com/api/resources/workers%5Ffor%5Fplatforms/subresources/dispatch/subresources/namespaces/subresources/scripts/methods/update/) upload endpoints require specifying a metadata part in the form data. Here, we can provide the completion token from the previous (upload assets) step.

Example Worker Metadata Specifying Completion Token

```json
{
  "main_module": "main.js",
  "assets": {
    "jwt": "<completion_token>"
  },
  "compatibility_date": "2021-09-14"
}
```

If the Worker already has assets and you wish to re-use the existing set, you do not have to specify the completion token again. Instead, pass the boolean `keep_assets` option.

Example Worker Metadata Specifying keep\_assets

```json
{
  "main_module": "main.js",
  "keep_assets": true,
  "compatibility_date": "2021-09-14"
}
```

Asset [routing configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#assets) can be provided in the `assets` object, such as `html_handling` and `not_found_handling`.

Example Worker Metadata Specifying Asset Configuration

```json
{
  "main_module": "main.js",
  "assets": {
    "jwt": "<completion_token>",
    "config": {
      "html_handling": "auto-trailing-slash"
    }
  },
  "compatibility_date": "2021-09-14"
}
```

Optionally, an assets binding can be provided if you wish to fetch and serve assets from within your Worker code.

Example Worker Metadata Specifying Asset Binding

```json
{
  "main_module": "main.js",
  "assets": {
    ...
  },
  "bindings": [
    ...
    {
      "name": "ASSETS",
      "type": "assets"
    }
    ...
  ],
  "compatibility_date": "2021-09-14"
}
```

## Programmatic Example

This example is from [cloudflare-typescript ↗](https://github.com/cloudflare/cloudflare-typescript/blob/main/examples/workers/script-with-assets-upload.ts).

* [  JavaScript ](#tab-panel-7680)
* [  TypeScript ](#tab-panel-7681)

JavaScript

```

#!/usr/bin/env -S npm run tsn -T


/**

 * Create a Worker that serves static assets

 *

 * This example demonstrates how to:

 * - Upload static assets to Cloudflare Workers

 * - Create and deploy a Worker that serves those assets

 *

 * Docs:

 * - https://developers.cloudflare.com/workers/static-assets/direct-upload

 *

 * Prerequisites:

 * 1. Generate an API token: https://developers.cloudflare.com/fundamentals/api/get-started/create-token/

 * 2. Find your account ID: https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/

 * 3. Find your workers.dev subdomain: https://developers.cloudflare.com/workers/configuration/routing/workers-dev/

 *

 * Environment variables:

 *   - CLOUDFLARE_API_TOKEN (required)

 *   - CLOUDFLARE_ACCOUNT_ID (required)

 *   - ASSETS_DIRECTORY (required)

 *   - CLOUDFLARE_SUBDOMAIN (optional)

 *

 * Usage:

 *   Place your static files in the ASSETS_DIRECTORY, then run this script.

 *   Assets will be available at: my-script-with-assets.$subdomain.workers.dev/$filename

 */


import crypto from "crypto";

import fs from "fs";

import { readFile } from "node:fs/promises";

import { extname } from "node:path";

import path from "path";

import { exit } from "node:process";


import Cloudflare from "cloudflare";


const WORKER_NAME = "my-worker-with-assets";

const SCRIPT_FILENAME = `${WORKER_NAME}.mjs`;


function loadConfig() {

  const apiToken = process.env["CLOUDFLARE_API_TOKEN"];

  if (!apiToken) {

    throw new Error(

      "Missing required environment variable: CLOUDFLARE_API_TOKEN",

    );

  }


  const accountId = process.env["CLOUDFLARE_ACCOUNT_ID"];

  if (!accountId) {

    throw new Error(

      "Missing required environment variable: CLOUDFLARE_ACCOUNT_ID",

    );

  }


  const assetsDirectory = process.env["ASSETS_DIRECTORY"];

  if (!assetsDirectory) {

    throw new Error("Missing required environment variable: ASSETS_DIRECTORY");

  }


  if (!fs.existsSync(assetsDirectory)) {

    throw new Error(`Assets directory does not exist: ${assetsDirectory}`);

  }


  const subdomain = process.env["CLOUDFLARE_SUBDOMAIN"];


  return {

    apiToken,

    accountId,

    assetsDirectory,

    subdomain: subdomain || undefined,

    workerName: WORKER_NAME,

  };

}


const config = loadConfig();

const client = new Cloudflare({

  apiToken: config.apiToken,

});


/**

 * Recursively reads all files from a directory and creates a manifest

 * mapping file paths to their hash and size.

 */

function createManifest(directory) {

  const manifest = {};


  function processDirectory(currentDir, basePath = "") {

    try {

      const entries = fs.readdirSync(currentDir, { withFileTypes: true });


      for (const entry of entries) {

        const fullPath = path.join(currentDir, entry.name);

        const relativePath = path.join(basePath, entry.name);


        if (entry.isDirectory()) {

          processDirectory(fullPath, relativePath);

        } else if (entry.isFile()) {

          try {

            const fileContent = fs.readFileSync(fullPath);

            const extension = extname(relativePath).substring(1);


            // Generate a hash for the file

            const hash = crypto

              .createHash("sha256")

              .update(fileContent.toString("base64") + extension)

              .digest("hex")

              .slice(0, 32);


            // Normalize path separators to forward slashes

            const manifestPath = `/${relativePath.replace(/\\/g, "/")}`;


            manifest[manifestPath] = {

              hash,

              size: fileContent.length,

            };


            console.log(

              `Added to manifest: ${manifestPath} (${fileContent.length} bytes)`,

            );

          } catch (error) {

            console.warn(`Failed to process file ${fullPath}:`, error);

          }

        }

      }

    } catch (error) {

      throw new Error(`Failed to read directory ${currentDir}: ${error}`);

    }

  }


  processDirectory(directory);


  if (Object.keys(manifest).length === 0) {

    throw new Error(`No files found in assets directory: ${directory}`);

  }


  console.log(`Created manifest with ${Object.keys(manifest).length} files`);

  return manifest;

}


/**

 * Generates the Worker script content that serves static assets

 */

function generateWorkerScript(exampleFile) {

  return `

export default {

  async fetch(request, env, ctx) {

    const url = new URL(request.url);


    // Serve a simple index page at the root

    if (url.pathname === '/') {

      return new Response(

        \`<!DOCTYPE html>

<html>

<head>

  <title>Static Assets Worker</title>

  <style>

    body { font-family: Arial, sans-serif; max-width: 800px; margin: 50px auto; padding: 20px; }

    h1 { color: #f38020; }

    .asset-info { background: #f5f5f5; padding: 15px; border-radius: 5px; }

  </style>

</head>

<body>

  <h1>This Worker serves static assets!</h1>

  <div class="asset-info">

    <p><strong>To access your assets,</strong> add <code>/filename</code> to the URL.</p>

    <p>Try visiting <a href="\${url.origin}/${exampleFile}">/${exampleFile}</a></p>

  </div>

</body>

</html>\`,

        {

          status: 200,

          headers: { 'Content-Type': 'text/html' }

        }

      );

    }


    // Serve static assets for all other paths

    return env.ASSETS.fetch(request);

  }

};

  `.trim();

}


/**

 * Creates upload payloads from buckets and manifest

 */

async function createUploadPayloads(buckets, manifest, assetsDirectory) {

  const payloads = [];


  for (const bucket of buckets) {

    const payload = {};


    for (const hash of bucket) {

      // Find the file path for this hash

      const manifestEntry = Object.entries(manifest).find(

        ([_, data]) => data.hash === hash,

      );


      if (!manifestEntry) {

        throw new Error(`Could not find file for hash: ${hash}`);

      }


      const [relativePath] = manifestEntry;

      const fullPath = path.join(assetsDirectory, relativePath);


      try {

        const fileContent = await readFile(fullPath);

        payload[hash] = fileContent.toString("base64");

        console.log(`Prepared for upload: ${relativePath}`);

      } catch (error) {

        throw new Error(`Failed to read file ${fullPath}: ${error}`);

      }

    }


    payloads.push(payload);

  }


  return payloads;

}


/**

 * Uploads asset payloads

 */

async function uploadAssets(payloads, uploadJwt, accountId) {

  let completionJwt;


  console.log(`Uploading ${payloads.length} payload(s)...`);


  for (let i = 0; i < payloads.length; i++) {

    const payload = payloads[i];

    console.log(`Uploading payload ${i + 1}/${payloads.length}...`);


    try {

      const response = await client.workers.assets.upload.create(

        {

          account_id: accountId,

          base64: true,

          body: payload,

        },

        {

          headers: { Authorization: `Bearer ${uploadJwt}` },

        },

      );


      if (response?.jwt) {

        completionJwt = response.jwt;

      }

    } catch (error) {

      throw new Error(`Failed to upload payload ${i + 1}: ${error}`);

    }

  }


  if (!completionJwt) {

    throw new Error("Upload completed but no completion JWT received");

  }


  console.log("✅ All assets uploaded successfully");

  return completionJwt;

}


async function main() {

  try {

    console.log(

      "🚀 Starting Worker creation and deployment with static assets...",

    );

    console.log(`📁 Assets directory: ${config.assetsDirectory}`);


    console.log("📝 Creating asset manifest...");

    const manifest = createManifest(config.assetsDirectory);

    const exampleFile =

      Object.keys(manifest)[0]?.replace(/^\//, "") || "file.txt";


    const scriptContent = generateWorkerScript(exampleFile);


    let worker;

    try {

      worker = await client.workers.beta.workers.get(config.workerName, {

        account_id: config.accountId,

      });

      console.log(`♻️  Worker ${config.workerName} already exists. Using it.`);

    } catch (error) {

      if (!(error instanceof Cloudflare.NotFoundError)) {

        throw error;

      }

      console.log(`✏️  Creating Worker ${config.workerName}...`);

      worker = await client.workers.beta.workers.create({

        account_id: config.accountId,

        name: config.workerName,

        subdomain: {

          enabled: config.subdomain !== undefined,

        },

        observability: {

          enabled: true,

        },

      });

    }


    console.log(`⚙️  Worker id: ${worker.id}`);

    console.log("🔄 Starting asset upload session...");


    const uploadResponse = await client.workers.scripts.assets.upload.create(

      config.workerName,

      {

        account_id: config.accountId,

        manifest,

      },

    );


    const { buckets, jwt: uploadJwt } = uploadResponse;


    if (!uploadJwt || !buckets) {

      throw new Error("Failed to start asset upload session");

    }


    let completionJwt;


    if (buckets.length === 0) {

      console.log("✅ No new assets to upload!");

      // Use the initial upload JWT as completion JWT when no uploads are needed

      completionJwt = uploadJwt;

    } else {

      const payloads = await createUploadPayloads(

        buckets,

        manifest,

        config.assetsDirectory,

      );


      completionJwt = await uploadAssets(payloads, uploadJwt, config.accountId);

    }


    console.log("✏️  Creating Worker version...");


    // Create a new version with assets

    const version = await client.workers.beta.workers.versions.create(

      worker.id,

      {

        account_id: config.accountId,

        main_module: SCRIPT_FILENAME,

        compatibility_date: new Date().toISOString().split("T")[0],

        bindings: [

          {

            type: "assets",

            name: "ASSETS",

          },

        ],

        assets: {

          jwt: completionJwt,

        },

        modules: [

          {

            name: SCRIPT_FILENAME,

            content_type: "application/javascript+module",

            content_base64: Buffer.from(scriptContent).toString("base64"),

          },

        ],

      },

    );


    console.log("🚚 Creating Worker deployment...");


    // Create a deployment and point all traffic to the version we created

    await client.workers.scripts.deployments.create(config.workerName, {

      account_id: config.accountId,

      strategy: "percentage",

      versions: [

        {

          percentage: 100,

          version_id: version.id,

        },

      ],

    });


    console.log("✅ Deployment successful!");


    if (config.subdomain) {

      console.log(`

🌍 Your Worker is live!

📍 Base URL: https://${config.workerName}.${config.subdomain}.workers.dev/

📄 Try accessing: https://${config.workerName}.${config.subdomain}.workers.dev/${exampleFile}

`);

    } else {

      console.log(`

⚠️  Set up a route, custom domain, or workers.dev subdomain to access your Worker.

Add CLOUDFLARE_SUBDOMAIN to your environment variables to set one up automatically.

`);

    }

  } catch (error) {

    console.error("❌ Deployment failed:", error);

    exit(1);

  }

}


main();


```

TypeScript

```

#!/usr/bin/env -S npm run tsn -T


/**

 * Create a Worker that serves static assets

 *

 * This example demonstrates how to:

 * - Upload static assets to Cloudflare Workers

 * - Create and deploy a Worker that serves those assets

 *

 * Docs:

 * - https://developers.cloudflare.com/workers/static-assets/direct-upload

 *

 * Prerequisites:

 * 1. Generate an API token: https://developers.cloudflare.com/fundamentals/api/get-started/create-token/

 * 2. Find your account ID: https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/

 * 3. Find your workers.dev subdomain: https://developers.cloudflare.com/workers/configuration/routing/workers-dev/

 *

 * Environment variables:

 *   - CLOUDFLARE_API_TOKEN (required)

 *   - CLOUDFLARE_ACCOUNT_ID (required)

 *   - ASSETS_DIRECTORY (required)

 *   - CLOUDFLARE_SUBDOMAIN (optional)

 *

 * Usage:

 *   Place your static files in the ASSETS_DIRECTORY, then run this script.

 *   Assets will be available at: my-script-with-assets.$subdomain.workers.dev/$filename

 */


import crypto from 'crypto';

import fs from 'fs';

import { readFile } from 'node:fs/promises';

import { extname } from 'node:path';

import path from 'path';

import { exit } from 'node:process';


import Cloudflare from 'cloudflare';


interface Config {

  apiToken: string;

  accountId: string;

  assetsDirectory: string;

  subdomain: string | undefined;

  workerName: string;

}


interface AssetManifest {

  [path: string]: {

    hash: string;

    size: number;

  };

}


interface UploadPayload {

  [hash: string]: string; // base64 encoded content

}


const WORKER_NAME = 'my-worker-with-assets';

const SCRIPT_FILENAME = `${WORKER_NAME}.mjs`;


function loadConfig(): Config {

  const apiToken = process.env['CLOUDFLARE_API_TOKEN'];

  if (!apiToken) {

    throw new Error('Missing required environment variable: CLOUDFLARE_API_TOKEN');

  }


  const accountId = process.env['CLOUDFLARE_ACCOUNT_ID'];

  if (!accountId) {

    throw new Error('Missing required environment variable: CLOUDFLARE_ACCOUNT_ID');

  }


  const assetsDirectory = process.env['ASSETS_DIRECTORY'];

  if (!assetsDirectory) {

    throw new Error('Missing required environment variable: ASSETS_DIRECTORY');

  }


  if (!fs.existsSync(assetsDirectory)) {

    throw new Error(`Assets directory does not exist: ${assetsDirectory}`);

  }


  const subdomain = process.env['CLOUDFLARE_SUBDOMAIN'];


  return {

    apiToken,

    accountId,

    assetsDirectory,

    subdomain: subdomain || undefined,

    workerName: WORKER_NAME,

  };

}


const config = loadConfig();

const client = new Cloudflare({

  apiToken: config.apiToken,

});


/**

 * Recursively reads all files from a directory and creates a manifest

 * mapping file paths to their hash and size.

 */

function createManifest(directory: string): AssetManifest {

  const manifest: AssetManifest = {};


  function processDirectory(currentDir: string, basePath = ''): void {

    try {

      const entries = fs.readdirSync(currentDir, { withFileTypes: true });


      for (const entry of entries) {

        const fullPath = path.join(currentDir, entry.name);

        const relativePath = path.join(basePath, entry.name);


        if (entry.isDirectory()) {

          processDirectory(fullPath, relativePath);

        } else if (entry.isFile()) {

          try {

            const fileContent = fs.readFileSync(fullPath);

            const extension = extname(relativePath).substring(1);


            // Generate a hash for the file

            const hash = crypto

              .createHash('sha256')

              .update(fileContent.toString('base64') + extension)

              .digest('hex')

              .slice(0, 32);


            // Normalize path separators to forward slashes

            const manifestPath = `/${relativePath.replace(/\\/g, '/')}`;


            manifest[manifestPath] = {

              hash,

              size: fileContent.length,

            };


            console.log(`Added to manifest: ${manifestPath} (${fileContent.length} bytes)`);

          } catch (error) {

            console.warn(`Failed to process file ${fullPath}:`, error);

          }

        }

      }

    } catch (error) {

      throw new Error(`Failed to read directory ${currentDir}: ${error}`);

    }

  }


  processDirectory(directory);


  if (Object.keys(manifest).length === 0) {

    throw new Error(`No files found in assets directory: ${directory}`);

  }


  console.log(`Created manifest with ${Object.keys(manifest).length} files`);

  return manifest;

}


/**

 * Generates the Worker script content that serves static assets

 */

function generateWorkerScript(exampleFile: string): string {

  return `

export default {

  async fetch(request, env, ctx) {

    const url = new URL(request.url);


    // Serve a simple index page at the root

    if (url.pathname === '/') {

      return new Response(

        \`<!DOCTYPE html>

<html>

<head>

  <title>Static Assets Worker</title>

  <style>

    body { font-family: Arial, sans-serif; max-width: 800px; margin: 50px auto; padding: 20px; }

    h1 { color: #f38020; }

    .asset-info { background: #f5f5f5; padding: 15px; border-radius: 5px; }

  </style>

</head>

<body>

  <h1>This Worker serves static assets!</h1>

  <div class="asset-info">

    <p><strong>To access your assets,</strong> add <code>/filename</code> to the URL.</p>

    <p>Try visiting <a href="\${url.origin}/${exampleFile}">/${exampleFile}</a></p>

  </div>

</body>

</html>\`,

        {

          status: 200,

          headers: { 'Content-Type': 'text/html' }

        }

      );

    }


    // Serve static assets for all other paths

    return env.ASSETS.fetch(request);

  }

};

  `.trim();

}


/**

 * Creates upload payloads from buckets and manifest

 */

async function createUploadPayloads(

  buckets: string[][],

  manifest: AssetManifest,

  assetsDirectory: string

): Promise<UploadPayload[]> {

  const payloads: UploadPayload[] = [];


  for (const bucket of buckets) {

    const payload: UploadPayload = {};


    for (const hash of bucket) {

      // Find the file path for this hash

      const manifestEntry = Object.entries(manifest).find(

        ([_, data]) => data.hash === hash

      );


      if (!manifestEntry) {

        throw new Error(`Could not find file for hash: ${hash}`);

      }


      const [relativePath] = manifestEntry;

      const fullPath = path.join(assetsDirectory, relativePath);


      try {

        const fileContent = await readFile(fullPath);

        payload[hash] = fileContent.toString('base64');

        console.log(`Prepared for upload: ${relativePath}`);

      } catch (error) {

        throw new Error(`Failed to read file ${fullPath}: ${error}`);

      }

    }


    payloads.push(payload);

  }


  return payloads;

}


/**

 * Uploads asset payloads

 */

async function uploadAssets(

  payloads: UploadPayload[],

  uploadJwt: string,

  accountId: string

): Promise<string> {

  let completionJwt: string | undefined;


  console.log(`Uploading ${payloads.length} payload(s)...`);


  for (let i = 0; i < payloads.length; i++) {

    const payload = payloads[i]!;

    console.log(`Uploading payload ${i + 1}/${payloads.length}...`);


    try {

      const response = await client.workers.assets.upload.create(

        {

          account_id: accountId,

          base64: true,

          body: payload,

        },

        {

          headers: { Authorization: `Bearer ${uploadJwt}` },

        }

      );


      if (response?.jwt) {

        completionJwt = response.jwt;

      }

    } catch (error) {

      throw new Error(`Failed to upload payload ${i + 1}: ${error}`);

    }

  }


  if (!completionJwt) {

    throw new Error('Upload completed but no completion JWT received');

  }


  console.log('✅ All assets uploaded successfully');

  return completionJwt;

}


async function main(): Promise<void> {

  try {

    console.log('🚀 Starting Worker creation and deployment with static assets...');

    console.log(`📁 Assets directory: ${config.assetsDirectory}`);


    console.log('📝 Creating asset manifest...');

    const manifest = createManifest(config.assetsDirectory);

    const exampleFile = Object.keys(manifest)[0]?.replace(/^\//, '') || 'file.txt';


    const scriptContent = generateWorkerScript(exampleFile);


    let worker;

    try {

      worker = await client.workers.beta.workers.get(config.workerName, {

        account_id: config.accountId,

      });

      console.log(`♻️  Worker ${config.workerName} already exists. Using it.`);

    } catch (error) {

      if (!(error instanceof Cloudflare.NotFoundError)) { throw error; }

      console.log(`✏️  Creating Worker ${config.workerName}...`);

      worker = await client.workers.beta.workers.create({

        account_id: config.accountId,

        name: config.workerName,

        subdomain: {

          enabled: config.subdomain !== undefined,

        },

        observability: {

          enabled: true,

        },

      });

    }


    console.log(`⚙️  Worker id: ${worker.id}`);

    console.log('🔄 Starting asset upload session...');


    const uploadResponse = await client.workers.scripts.assets.upload.create(

      config.workerName,

      {

        account_id: config.accountId,

        manifest,

      }

    );


    const { buckets, jwt: uploadJwt } = uploadResponse;


    if (!uploadJwt || !buckets) {

      throw new Error('Failed to start asset upload session');

    }


    let completionJwt: string;


    if (buckets.length === 0) {

      console.log('✅ No new assets to upload!');

      // Use the initial upload JWT as completion JWT when no uploads are needed

      completionJwt = uploadJwt;

    } else {

      const payloads = await createUploadPayloads(

        buckets,

        manifest,

        config.assetsDirectory

      );


      completionJwt = await uploadAssets(

        payloads,

        uploadJwt,

        config.accountId

      );

    }


    console.log('✏️  Creating Worker version...');


    // Create a new version with assets

    const version = await client.workers.beta.workers.versions.create(worker.id, {

      account_id: config.accountId,

      main_module: SCRIPT_FILENAME,

      compatibility_date: new Date().toISOString().split('T')[0]!,

      bindings: [

        {

          type: 'assets',

          name: 'ASSETS',

        },

      ],

      assets: {

        jwt: completionJwt,

      },

      modules: [

        {

          name: SCRIPT_FILENAME,

          content_type: 'application/javascript+module',

          content_base64: Buffer.from(scriptContent).toString('base64'),

        },

      ],

    });


    console.log('🚚 Creating Worker deployment...');


    // Create a deployment and point all traffic to the version we created

    await client.workers.scripts.deployments.create(config.workerName, {

      account_id: config.accountId,

      strategy: 'percentage',

      versions: [

        {

          percentage: 100,

          version_id: version.id,

        },

      ],

    });


    console.log('✅ Deployment successful!');


    if (config.subdomain) {

      console.log(`

🌍 Your Worker is live!

📍 Base URL: https://${config.workerName}.${config.subdomain}.workers.dev/

📄 Try accessing: https://${config.workerName}.${config.subdomain}.workers.dev/${exampleFile}

`);

    } else {

      console.log(`

⚠️  Set up a route, custom domain, or workers.dev subdomain to access your Worker.

Add CLOUDFLARE_SUBDOMAIN to your environment variables to set one up automatically.

`);

    }

  } catch (error) {

    console.error('❌ Deployment failed:', error);

    exit(1);

  }

}


main();


```


---

---
title: Get Started
description: Run front-end websites — static or dynamic — directly on Cloudflare's global network.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Get Started

For most front-end applications, you'll want to use a framework. Workers supports a number of popular [frameworks](https://developers.cloudflare.com/workers/framework-guides/) that come with ready-to-use components, a pre-defined and structured architecture, and community support. View the [framework-specific guides](https://developers.cloudflare.com/workers/framework-guides/) to get started using a framework.

Alternatively, you may prefer to build your website from scratch if:

* You're interested in learning by implementing core functionalities on your own.
* You're working on a simple project where you might not need a framework.
* You want to optimize for performance by minimizing external dependencies.
* You require complete control over every aspect of the application.
* You want to build your own framework.

This guide walks you through setting up and deploying a static site or a full-stack application on Workers without a framework.

## Deploy a static site

This section walks you through setting up and deploying a static site on Workers.

### 1\. Create a new Worker project using the CLI

[C3 (create-cloudflare-cli) ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Open a terminal window and run C3 to create your Worker project:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-static-site
```

```
yarn create cloudflare my-static-site
```

```
pnpm create cloudflare@latest my-static-site
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Static site`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

After setting up your project, change your directory by running the following command:

Terminal window

```

cd my-static-site


```

### 2\. Develop locally

After you have created your Worker, run the [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development.

Terminal window

```

npx wrangler dev


```

### 3\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

Terminal window

```

npx wrangler deploy


```

Note

Learn about how assets are configured and how routing works from [Routing configuration](https://developers.cloudflare.com/workers/static-assets/routing/).

## Deploy a full-stack application

This section walks you through setting up and deploying a dynamic, interactive, server-side rendered (SSR) application on Cloudflare Workers.

When building a full-stack application, you can use any [Workers bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), [including assets' own](https://developers.cloudflare.com/workers/static-assets/binding/), to interact with resources on the Cloudflare Developer Platform.

### 1\. Create a new Worker project

[C3 (create-cloudflare-cli) ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare.

Open a terminal window and run C3 to create your Worker project:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-dynamic-site
```

```
yarn create cloudflare my-dynamic-site
```

```
pnpm create cloudflare@latest my-dynamic-site
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `SSR / full-stack app`.
* For _Which language do you want to use?_, choose `TypeScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

After setting up your project, change your directory by running the following command:

Terminal window

```

cd my-dynamic-site


```

### 2\. Develop locally

After you have created your Worker, run the [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development.

Terminal window

```

npx wrangler dev


```

### 3\. Modify your project

With your new project generated and running, you can begin to write and edit your project:

* The `src/index.ts` file is populated with sample code. Modify its content to change the server-side behavior of your Worker.
* The `public/index.html` file is populated with sample code. Modify its content, or anything else in `public/`, to change the static assets of your Worker.

Then, save the files and reload the page. Your project's output will have changed based on your modifications.

### 4\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

Terminal window

```

npx wrangler deploy


```

Note

Learn about how assets are configured and how routing works from [Routing configuration](https://developers.cloudflare.com/workers/static-assets/routing/).


---

---
title: Headers
description: When serving static assets, Workers will attach some headers to the response by default. These are:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Headers

## Default headers

When serving static assets, Workers will attach some headers to the response by default. These are:

* **`Content-Type`**  
A `Content-Type` header is attached to the response if one is provided during [the asset upload process](https://developers.cloudflare.com/workers/static-assets/direct-upload/). [Wrangler](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) automatically determines the MIME type of the file, based on its extension.
* **`Cache-Control: public, max-age=0, must-revalidate`**  
Sent when the request does not have an `Authorization` or `Range` header, this response header tells the browser that the asset can be cached, but that the browser should revalidate the freshness of the content every time before using it. This default behavior ensures good website performance for static pages, while still guaranteeing that stale content will never be served.
* **`ETag`**  
This header complements the default `Cache-Control` header. Its value is a hash of the static asset file, and browsers can use this in subsequent requests with an `If-None-Match` header to check for freshness, without needing to re-download the entire file in the case of a match.
* **`CF-Cache-Status`**  
This header indicates whether the asset was served from the cache (`HIT`) or not (`MISS`).[1](#user-content-fn-1)

Cloudflare reserves the right to attach new headers to static asset responses at any time in order to improve performance or harden the security of your Worker application.
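The `ETag`/`If-None-Match` handshake described above can be sketched as a small decision function. This is an illustrative model only, not Cloudflare's actual implementation:

```typescript
// Illustrative sketch of ETag revalidation: the server compares the asset's
// hash against the browser's If-None-Match request header.
function revalidate(assetHash: string, ifNoneMatch?: string): number {
  // A match means the cached copy is still fresh: 304 Not Modified, no body.
  if (ifNoneMatch === `"${assetHash}"`) return 304;
  // Otherwise the full asset is served again.
  return 200;
}
```

A `304` response carries no body, so a revalidation hit is far cheaper for the browser than re-downloading the asset.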

## Custom headers

The default response headers served on static asset responses can be overridden, removed, or added to by creating a plain text file called `_headers`, without a file extension, in the static asset directory of your project. This file will not itself be served as a static asset; instead, Workers parses it and applies its rules to static asset responses.

If you are using a framework, you will often have a directory named `public/` or `static/`, and this usually contains deploy-ready assets, such as favicons, `robots.txt` files, and site manifests. These files get copied over to a final output directory during the build, so this is the perfect place to author your `_headers` file. If you are not using a framework, the `_headers` file can go directly into your [static assets directory](https://developers.cloudflare.com/workers/static-assets/binding/#directory).

Headers defined in the `_headers` file override what Cloudflare ordinarily sends.

Warning

Custom headers defined in the `_headers` file are not applied to responses generated by your Worker code, even if the request URL matches a rule defined in `_headers`. If you use a server-side rendered (SSR) framework, have configured `assets.run_worker_first`, or otherwise use a Worker script, you will likely need to attach any custom headers you wish to apply directly within that Worker script.

### Attach a header

Header rules are defined in multi-line blocks. The first line of a block is the URL or URL pattern where the rule's headers should be applied. Subsequent lines contain an indented list of header names and values:

```

[url]

  [name]: [value]


```

Using absolute URLs is supported, though be aware that absolute URLs must begin with `https` and specifying a port is not supported. `_headers` rules ignore the incoming request's port and protocol when matching against an incoming request. For example, a rule like `https://example.com/path` would match against requests to `other://example.com:1234/path`.

You can define as many `[name]: [value]` pairs as you require on subsequent lines. For example:

```

# This is a comment

/secure/page

  X-Frame-Options: DENY

  X-Content-Type-Options: nosniff

  Referrer-Policy: no-referrer


/static/*

  Access-Control-Allow-Origin: *

  X-Robots-Tag: nosnippet


https://myworker.mysubdomain.workers.dev/*

  X-Robots-Tag: noindex


```

An incoming request which matches multiple rules' URL patterns will inherit all rules' headers. Using the previous `_headers` file, the following requests will have the following headers applied:

| Request URL                                                | Headers                                                                                                  |
| ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
| https://custom.domain/secure/page                          | X-Frame-Options: DENY X-Content-Type-Options: nosniff Referrer-Policy: no-referrer                       |
| https://custom.domain/static/image.jpg                     | Access-Control-Allow-Origin: \* X-Robots-Tag: nosnippet                                                  |
| https://myworker.mysubdomain.workers.dev/home              | X-Robots-Tag: noindex                                                                                    |
| https://myworker.mysubdomain.workers.dev/secure/page       | X-Frame-Options: DENY X-Content-Type-Options: nosniff Referrer-Policy: no-referrer X-Robots-Tag: noindex |
| https://myworker.mysubdomain.workers.dev/static/styles.css | Access-Control-Allow-Origin: \* X-Robots-Tag: nosnippet, noindex                                         |

You may define up to 100 header rules. Each line in the `_headers` file has a 2,000 character limit. The entire line, including spacing, header name, and value, counts towards this limit.

If a header is applied twice in the `_headers` file, the values are joined with a comma separator.
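The combining behavior described above (every matching rule contributes its headers, and a header set more than once has its values comma-joined) can be sketched as follows. This is an illustrative model, not the actual parser:

```typescript
// Illustrative model of combining matching _headers rules: each matching
// rule's headers are applied in order, and a header name set by more than
// one rule has its values joined with a comma separator.
function mergeHeaderRules(
  rules: Record<string, string>[]
): Record<string, string> {
  const merged: Record<string, string> = {};
  for (const rule of rules) {
    for (const [name, value] of Object.entries(rule)) {
      merged[name] = name in merged ? `${merged[name]}, ${value}` : value;
    }
  }
  return merged;
}
```

With the `/static/*` and workers.dev rules from the example above, a request matching both ends up with `X-Robots-Tag: nosnippet, noindex`, as shown in the table.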

### Detach a header

You may wish to remove a default header or a header which has been added by a more pervasive rule. This can be done by prepending the header name with an exclamation mark and space (`! `).

```

/*

  Content-Security-Policy: default-src 'self';


/*.jpg

  ! Content-Security-Policy


```

### Match a path

The same URL matching features that [\_redirects](https://developers.cloudflare.com/workers/static-assets/redirects/) offers are also available to the `_headers` file. Note, however, that redirects are applied before headers, so when a request matches both a redirect and a header rule, the redirect takes priority.

#### Splats

When matching, a splat pattern — signified by an asterisk (`*`) — will greedily match all characters. You may only include a single splat in the URL.

The matched value can be referenced within the header value as the `:splat` placeholder.
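Greedy splat matching against a path can be sketched like this (an illustrative model; the real matcher also handles hosts and placeholders):

```typescript
// Match a single-splat pattern like "/static/*" against a path and return
// the captured :splat value, or null if the pattern does not match.
function matchSplat(pattern: string, path: string): string | null {
  const star = pattern.indexOf("*");
  if (star === -1) return pattern === path ? "" : null;
  const prefix = pattern.slice(0, star);
  if (!path.startsWith(prefix)) return null;
  // The splat greedily captures everything after the prefix.
  return path.slice(prefix.length);
}
```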

#### Placeholders

A placeholder can be defined with `:placeholder_name`. A colon (`:`) followed by a letter indicates the start of a placeholder, and the placeholder name that follows must be composed of alphanumeric characters and underscores (`:[A-Za-z]\w*`). Every named placeholder can only be referenced once. Placeholders match all characters apart from the delimiter, which is a period (`.`) or a forward-slash (`/`) when part of the host, and may only be a forward-slash (`/`) when part of the path.

Similarly, the matched value can be used in the header values with `:placeholder_name`.

```

/movies/:title

  x-movie-name: You are watching ":title"


```
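Placeholder matching on a path, as in the `/movies/:title` rule above, can be sketched as follows (an illustrative model that assumes the pattern contains no other regex-special characters):

```typescript
// Match a pattern like "/movies/:title" against a path and extract each
// placeholder's value. Placeholders in the path stop at "/", so "/movies/a/b"
// does not match "/movies/:title".
function matchPlaceholders(
  pattern: string,
  path: string
): Record<string, string> | null {
  const names: string[] = [];
  const regexSource = pattern.replace(/:([A-Za-z]\w*)/g, (_, name) => {
    names.push(name);
    return "([^/]+)";
  });
  const m = path.match(new RegExp(`^${regexSource}$`));
  if (!m) return null;
  return Object.fromEntries(names.map((name, i) => [name, m[i + 1]!]));
}
```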

#### Examples

##### Cross-Origin Resource Sharing (CORS)

To enable other domains to fetch every static asset from your Worker, the following can be added to the `_headers` file:

```

/*

  Access-Control-Allow-Origin: *


```

This applies the `Access-Control-Allow-Origin` header to any incoming URL. Note that the CORS specification only allows `*`, `null`, or an exact origin as valid `Access-Control-Allow-Origin` values — wildcard patterns within origins are not supported. To allow CORS from specific [preview URLs](https://developers.cloudflare.com/workers/configuration/previews/), you will need to handle this dynamically in your Worker code rather than through the `_headers` file.
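One way to handle origin-specific CORS in Worker code is to echo the request's `Origin` header back when it appears in an allowlist. A hypothetical sketch (the origins below are placeholders, not real deployments):

```typescript
// Hypothetical allowlist of exact origins permitted to make CORS requests.
const ALLOWED_ORIGINS = new Set([
  "https://example.com",
  "https://preview.example.com",
]);

// Returns the value to send in Access-Control-Allow-Origin, or null to omit
// the header entirely. Echoing an exact origin satisfies the CORS spec, which
// does not accept wildcard patterns such as "https://*.example.com".
function corsAllowOrigin(requestOrigin: string | null): string | null {
  if (requestOrigin !== null && ALLOWED_ORIGINS.has(requestOrigin)) {
    return requestOrigin;
  }
  return null;
}
```

When echoing the origin like this, also send `Vary: Origin` so caches keep responses for different origins separate.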

##### Prevent your workers.dev URLs showing in search results

[Google ↗](https://developers.google.com/search/docs/advanced/robots/robots%5Fmeta%5Ftag#directives) and other search engines often support the `X-Robots-Tag` header to instruct their crawlers how your website should be indexed.

For example, to prevent your `*.*.workers.dev` URLs from being indexed, add the following to your `_headers` file:

```

https://:version.:subdomain.workers.dev/*

  X-Robots-Tag: noindex


```

##### Configure custom browser cache behavior

If you have a folder of fingerprinted assets (assets which have a hash in their filename), you can configure more aggressive caching behavior in the browser to improve performance for repeat visitors:

```

/static/*

  Cache-Control: public, max-age=31556952, immutable


```

##### Harden security for an application

Warning

If you are server-side rendering (SSR) or using a Worker to generate responses in any other way and wish to attach security headers, the headers should be sent from the Worker's `Response` instead of using a `_headers` file. For example, if you have an API endpoint and want to allow cross-origin requests, you should ensure that your Worker code attaches CORS headers to its responses, including to `OPTIONS` requests.
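In that case, a Worker can merge a baseline of security headers into each response it builds. A hypothetical helper (the header names mirror the `_headers` examples on this page):

```typescript
// Hypothetical baseline of security headers a Worker attaches to the
// responses it generates itself.
const SECURITY_HEADERS: Record<string, string> = {
  "X-Frame-Options": "DENY",
  "X-Content-Type-Options": "nosniff",
  "Referrer-Policy": "no-referrer",
};

// Merge the baseline into a response's headers. Headers passed in win, so an
// individual route can still override a default.
function withSecurityHeaders(
  headers: Record<string, string>
): Record<string, string> {
  return { ...SECURITY_HEADERS, ...headers };
}
```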

You can prevent click-jacking by informing browsers not to embed your application inside another (for example, with an `<iframe>`) with a [X-Frame-Options ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options) header.

[X-Content-Type-Options: nosniff ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options) prevents browsers from interpreting a response as any other content-type than what is defined with the `Content-Type` header.

[Referrer-Policy ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Referrer-Policy) allows you to customize how much information visitors give about where they are coming from when they navigate away from your page.

Browser features can be disabled to varying degrees with the [Permissions-Policy ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Permissions-Policy) header (recently renamed from `Feature-Policy`).

If you need fine-grained control over your application's content, the [Content-Security-Policy ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy) header allows you to configure a number of security settings, including similar controls to the `X-Frame-Options` header.

```

/app/*

  X-Frame-Options: DENY

  X-Content-Type-Options: nosniff

  Referrer-Policy: no-referrer

  Permissions-Policy: document-domain=()

  Content-Security-Policy: script-src 'self'; frame-ancestors 'none';


```

## Footnotes

1. Due to a technical limitation that we hope to address in the future, the `CF-Cache-Status` header is not always entirely accurate. It is possible for false positives and false negatives to occur. This should be rare. In the meantime, this header should be considered as returning a "probabilistic" result. [↩](#user-content-fnref-1)


---

---
title: Migrate from Pages to Workers
description: A guide for migrating from Cloudflare Pages to Cloudflare Workers. Includes a compatibility matrix for comparing the features of Cloudflare Workers and Pages.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from Pages to Workers

You can deploy full-stack applications, including front-end static assets and back-end APIs, as well as server-side rendered pages (SSR), with [Cloudflare Workers](https://developers.cloudflare.com/workers/static-assets/).

Like Pages, requests for static assets on Workers are free, and [Pages Functions](#pages-functions) invocations are charged at the same rate as Workers, so you can expect [a similar cost structure](https://developers.cloudflare.com/workers/platform/pricing/#workers).

Unlike Pages, Workers has a distinctly broader set of features available to it (including Durable Objects, Cron Triggers, and more comprehensive observability). A complete list can be found at [the bottom of this page](#compatibility-matrix).

## Migration

Migrating from Cloudflare Pages to Cloudflare Workers is often a straightforward process. The following are some of the most common steps you will need to take to migrate your project.

### Frameworks

If your Pages project uses [a popular framework](https://developers.cloudflare.com/workers/framework-guides/), most frameworks already have adapters available for Cloudflare Workers. Switch out any Pages-specific adapters for the Workers equivalent and follow any guidance that they provide.

### Project configuration

If your project doesn't already have one, create a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) (either `wrangler.jsonc`, `wrangler.json` or `wrangler.toml`) in the root of your project. The two mandatory fields are:

* [name](https://developers.cloudflare.com/workers/wrangler/configuration/#inheritable-keys)  
Set this to the name of the Worker you wish to deploy to. This can be the same as your existing Pages project name, so long as it conforms to Workers' name restrictions (e.g. max length).
* [compatibility\_date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/)  
If you were already using [Pages Functions](https://developers.cloudflare.com/pages/functions/wrangler-configuration/#inheritable-keys), set this to the same date configured there. Otherwise, set it to the current date.

#### Build output directory

Where you previously would configure a "build output directory" for Pages (in either a [Wrangler configuration file](https://developers.cloudflare.com/pages/functions/wrangler-configuration/#inheritable-keys) or in [the Cloudflare dashboard](https://developers.cloudflare.com/pages/configuration/build-configuration/#build-commands-and-directories)), you must now set the [assets.directory](https://developers.cloudflare.com/workers/static-assets/binding/#directory) value for a Worker project.

Before, with **Cloudflare Pages**:

* [  wrangler.jsonc ](#tab-panel-7682)
* [  wrangler.toml ](#tab-panel-7683)

```

{

  "name": "my-pages-project",

  "pages_build_output_dir": "./dist/client/"

}


```

```

name = "my-pages-project"

pages_build_output_dir = "./dist/client/"


```

Now, with **Cloudflare Workers**:

* [  wrangler.jsonc ](#tab-panel-7684)
* [  wrangler.toml ](#tab-panel-7685)

```

{

  "name": "my-worker",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "assets": {

    "directory": "./dist/client/"

  }

}


```

```

name = "my-worker"

# Set this to today's date

compatibility_date = "2026-04-03"


[assets]

directory = "./dist/client/"


```

Note

If your Worker will only contain assets and no Worker script, then you should remove the `"binding": "ASSETS"` field from your configuration file, since this is only valid if you have a Worker script indicated by a `"main"` property. See the [Assets binding](#assets-binding) section below.

#### Serving behavior

Pages would automatically attempt to determine the type of project you deployed. It would look for `404.html` and `index.html` files as signals for whether the project was likely a [Single Page Application (SPA)](https://developers.cloudflare.com/pages/configuration/serving-pages/#single-page-application-spa-rendering) or if it should [serve custom 404 pages](https://developers.cloudflare.com/pages/configuration/serving-pages/#not-found-behavior).

In Workers, to prevent accidental misconfiguration, this behavior is explicit and [must be set up manually](https://developers.cloudflare.com/workers/static-assets/routing/).

For a Single Page Application (SPA):

* [  wrangler.jsonc ](#tab-panel-7686)
* [  wrangler.toml ](#tab-panel-7687)

```

{

  "name": "my-worker",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "assets": {

    "directory": "./dist/client/",

    "not_found_handling": "single-page-application"

  }

}


```

```

name = "my-worker"

# Set this to today's date

compatibility_date = "2026-04-03"


[assets]

directory = "./dist/client/"

not_found_handling = "single-page-application"


```

For custom 404 pages:

* [  wrangler.jsonc ](#tab-panel-7688)
* [  wrangler.toml ](#tab-panel-7689)

```

{

  "name": "my-worker",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "assets": {

    "directory": "./dist/client/",

    "not_found_handling": "404-page"

  }

}


```

```

name = "my-worker"

# Set this to today's date

compatibility_date = "2026-04-03"


[assets]

directory = "./dist/client/"

not_found_handling = "404-page"


```

##### Ignoring assets

Pages would automatically exclude some files and folders from being uploaded as static assets such as `node_modules`, `.DS_Store`, and `.git`. If you wish to also avoid uploading these files to Workers, you can create an [.assetsignore file](https://developers.cloudflare.com/workers/static-assets/binding/#ignoring-assets) in your project's static asset directory.

dist/client/.assetsignore

```
**/node_modules
**/.DS_Store
**/.git
```

#### Pages Functions

##### Full-stack framework

If you use a full-stack framework powered by [Pages Functions](https://developers.cloudflare.com/pages/functions/), ensure you have [updated your framework](#frameworks) to target Workers instead of Pages.

##### Pages Functions with an "advanced mode" `_worker.js` file

If you use Pages Functions with an ["advanced mode" \_worker.js file](https://developers.cloudflare.com/pages/functions/advanced-mode/), you must first ensure this script doesn't get uploaded as a static asset. Either move `_worker.js` out of the static asset directory (recommended), or create [an .assetsignore file](https://developers.cloudflare.com/workers/static-assets/binding/#ignoring-assets) in the static asset directory and include `_worker.js` within it.

dist/client/.assetsignore

```
_worker.js
```

Then, update your configuration file's `main` field to point to the location of this Worker script:

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./dist/client/_worker.js", // or some other location if you moved the script out of the static asset directory
  "assets": {
    "directory": "./dist/client/"
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./dist/client/_worker.js"

[assets]
directory = "./dist/client/"
```

##### Pages Functions with a `functions/` folder

If you use **Pages Functions with a [functions/ folder](https://developers.cloudflare.com/pages/functions/)**, you must first compile these functions into a single Worker script with the [wrangler pages functions build](https://developers.cloudflare.com/workers/wrangler/commands/pages/#pages-functions-build) command.

npm

```sh
npx wrangler pages functions build --outdir=./dist/worker/
```

yarn

```sh
yarn wrangler pages functions build --outdir=./dist/worker/
```

pnpm

```sh
pnpm wrangler pages functions build --outdir=./dist/worker/
```

Although this command will remain available for you to run at any time, we do recommend considering another framework if you wish to continue using file-based routing. [HonoX ↗](https://github.com/honojs/honox) is one popular option.

Once the Worker script has been compiled, you can update your configuration file's `main` field to point to the location it was built to:

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./dist/worker/index.js",
  "assets": {
    "directory": "./dist/client/"
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./dist/worker/index.js"

[assets]
directory = "./dist/client/"
```

##### `_routes.json` and Pages Functions middleware

If you authored [a \_routes.json file](https://developers.cloudflare.com/pages/functions/routing/#create-a-%5Froutesjson-file) in your Pages project, or used [middleware](https://developers.cloudflare.com/pages/functions/middleware/) in Pages Functions, you must pay close attention to the configuration of your Worker script. Pages defaulted to serving your Pages Functions ahead of static assets; `_routes.json` and Pages Functions middleware allowed you to customize this behavior.

Workers, on the other hand, will default to serving static assets ahead of your Worker script, unless you have configured [assets.run\_worker\_first](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run-your-worker-script-first). This option is required if you are, for example, performing any authentication checks or logging requests before serving static assets.

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./dist/worker/index.js",
  "assets": {
    "directory": "./dist/client/",
    "run_worker_first": true
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./dist/worker/index.js"

[assets]
directory = "./dist/client/"
run_worker_first = true
```
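As a sketch of the kind of middleware `run_worker_first` enables, the Worker below gates part of the site before any static asset is served, then falls through to assets. It assumes an assets `binding` named `ASSETS` is also configured; the `/admin` path, `X-Api-Key` header, and `ADMIN_KEY` variable are illustrative, not part of this guide.

```javascript
// Sketch: a Worker that runs ahead of static assets when
// "run_worker_first" is enabled. The "ASSETS" binding name, the
// /admin path, and the X-Api-Key check are illustrative assumptions.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);

    // Gate a section of the site before any asset is returned.
    if (url.pathname.startsWith("/admin")) {
      if (request.headers.get("X-Api-Key") !== env.ADMIN_KEY) {
        return new Response("Unauthorized", { status: 401 });
      }
    }

    // Fall through to static assets for everything else.
    return env.ASSETS.fetch(request);
  },
};

// In a real Worker, export this object as the module default:
// export default worker;
```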

##### Starting from scratch

If you wish to, you can start a new Worker script from scratch and take advantage of Wrangler's and the runtime's latest features (e.g. [WorkerEntrypoint](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/), [TypeScript support](https://developers.cloudflare.com/workers/languages/typescript/), [bundling](https://developers.cloudflare.com/workers/wrangler/bundling), etc.):

./worker/index.js

```js
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request) {
    return new Response("Hello, world!");
  }
}
```

./worker/index.ts

```ts
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request: Request) {
    return new Response("Hello, world!");
  }
}
```

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./worker/index.ts",
  "assets": {
    "directory": "./dist/client/"
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./worker/index.ts"

[assets]
directory = "./dist/client/"
```

#### Assets binding

Pages automatically provided [an ASSETS binding](https://developers.cloudflare.com/pages/functions/api-reference/#envassetsfetch) to access static assets from Pages Functions. In Workers, the name of this binding is customizable and it must be manually configured:

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./worker/index.ts",
  "assets": {
    "directory": "./dist/client/",
    "binding": "ASSETS"
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./worker/index.ts"

[assets]
directory = "./dist/client/"
binding = "ASSETS"
```
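A minimal sketch of calling this binding from Worker code, using the `ASSETS` name configured above. The `/api/` route split is an illustrative assumption; any request not handled in code is delegated to the static assets.

```javascript
// Sketch: using the configured assets binding from Worker code.
// The "/api/" route split is an illustrative assumption.
const worker = {
  async fetch(request, env) {
    const { pathname } = new URL(request.url);

    // Handle dynamic routes in code...
    if (pathname.startsWith("/api/")) {
      return Response.json({ ok: true });
    }

    // ...and delegate everything else to static assets.
    return env.ASSETS.fetch(request);
  },
};

// In a real Worker: export default worker;
```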

#### Runtime

If you had customized [placement](https://developers.cloudflare.com/workers/configuration/placement/), or set a [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) or any [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/) in your Pages project, you can define the same in your Wrangler configuration file:

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],
  "main": "./worker/index.ts",
  "placement": {
    "mode": "smart"
  },
  "assets": {
    "directory": "./dist/client/",
    "binding": "ASSETS"
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]
main = "./worker/index.ts"

[placement]
mode = "smart"

[assets]
directory = "./dist/client/"
binding = "ASSETS"
```

### Variables, secrets and bindings

[Variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) can be set in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) and are made available in your Worker's environment (`env`). [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) can be uploaded with Wrangler or defined in the Cloudflare dashboard for [production](https://developers.cloudflare.com/workers/configuration/secrets/#adding-secrets-to-your-project), and in a [.dev.vars file for local development](https://developers.cloudflare.com/workers/configuration/secrets/#local-development-with-secrets).

If you are [using Workers Builds](#builds), ensure you also [configure any variables relevant to the build environment there](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/). Unlike Pages, Workers does not share the same set of runtime and build-time variables.
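For local development, secrets can be provided in a `.dev.vars` file at the root of your project; `wrangler dev` makes them available on `env`. It uses dotenv-style `KEY=value` lines. The variable names below are illustrative.

.dev.vars

```
API_TOKEN="example-token"
DATABASE_PASSWORD="example-password"
```

Keep this file out of version control, as it contains secrets.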

### Wrangler commands

Where previously you used [wrangler pages dev](https://developers.cloudflare.com/workers/wrangler/commands/pages/#pages-dev) and [wrangler pages deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy), now instead use [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) and [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy). Additionally, if you are using a Vite-powered framework, [our new Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) may be able to offer you an even simpler development experience.

Note

Wrangler uses a different default port for local development: `wrangler pages dev` will, by default, expose the local development server at `http://localhost:8788`, whereas `wrangler dev` will expose it at `http://localhost:8787`. You can customize the port using the `--port` flag.

### Builds

If you are using Pages' built-in CI/CD system, you can swap this for Workers Builds by first [connecting your repository to Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/#get-started) and then [disabling automatic deployments on your Pages project](https://developers.cloudflare.com/pages/configuration/git-integration/#disable-automatic-deployments).

### Preview environment

Pages automatically creates a preview environment for each project, which can be independently configured.

To get a similar experience in Workers, you must:

1. Ensure [preview URLs](https://developers.cloudflare.com/workers/configuration/previews/) are enabled (they are on by default).

   wrangler.jsonc

   ```jsonc
   {
     "name": "my-worker",
     // Set this to today's date
     "compatibility_date": "2026-04-03",
     "main": "./worker/index.ts",
     "assets": {
       "directory": "./dist/client/"
     },
     "preview_urls": true
   }
   ```

   wrangler.toml

   ```toml
   name = "my-worker"
   # Set this to today's date
   compatibility_date = "2026-04-03"
   main = "./worker/index.ts"
   preview_urls = true

   [assets]
   directory = "./dist/client/"
   ```
2. [Enable non-production branch builds](https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds) in Workers Builds.

Optionally, you can also [protect these preview URLs with Cloudflare Access](https://developers.cloudflare.com/workers/configuration/previews/#manage-access-to-preview-urls).

Note

Unlike Pages, Workers does not natively support defining different bindings in production vs. non-production builds. This is something we are actively exploring, but in the meantime, you may wish to consider using [Wrangler Environments](https://developers.cloudflare.com/workers/wrangler/environments/) and an [appropriate Workers Build configuration](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#wrangler-environments) to achieve this.

### Headers and redirects

[\_headers](https://developers.cloudflare.com/workers/static-assets/headers/) and [\_redirects](https://developers.cloudflare.com/workers/static-assets/redirects/) files are supported natively in Workers with static assets. Ensure that, just like for Pages, these files are included in the static asset directory of your project.
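For reference, both files use the same line-based formats as on Pages. The paths, destinations, and header values below are illustrative.

dist/client/_redirects

```
/old-page /new-page 301
/blog/* /articles/:splat 302
```

dist/client/_headers

```
/*
  X-Frame-Options: DENY
/static/*
  Cache-Control: public, max-age=31536000, immutable
```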

### pages.dev

Where previously you were offered a `pages.dev` subdomain for your Pages project, you can now configure a personalized `workers.dev` subdomain for all of your Worker projects. You can [configure this subdomain in the Cloudflare dashboard](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/#configure-workersdev), and opt-in to using it with the [workers\_dev option](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/#disabling-workersdev-in-the-wrangler-configuration-file) in your configuration file.

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./worker/index.ts",
  "workers_dev": true
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./worker/index.ts"
workers_dev = true
```

### Custom domains

If your domain's nameservers are managed by Cloudflare, you can, like Pages, configure a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) for your Worker. Additionally, you can configure a [route](https://developers.cloudflare.com/workers/configuration/routing/routes/) if you wish only a subset of paths to be served by your Worker.

Note

Unlike Pages, Workers does not support any domain whose nameservers are not managed by Cloudflare.

### Rollout

Once you have validated the behavior of your Worker, are satisfied with your development workflows, and have migrated all of your production traffic, you can delete your Pages project in the Cloudflare dashboard or with Wrangler:

npm

```sh
npx wrangler pages project delete
```

yarn

```sh
yarn wrangler pages project delete
```

pnpm

```sh
pnpm wrangler pages project delete
```

## Migrate your project using an AI coding assistant

You can add the following [experimental prompt ↗](https://developers.cloudflare.com/workers/prompts/pages-to-workers.txt) in your preferred coding assistant (e.g. Claude Code, Cursor) to make your project compatible with Workers:

```
https://developers.cloudflare.com/workers/prompts/pages-to-workers.txt
```

You can also use the Cloudflare Documentation [MCP server ↗](https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/docs-vectorize) in your coding assistant to provide better context to your LLM when building with Workers, which includes this prompt when you ask to migrate from Pages to Workers.

## Compatibility matrix

This compatibility matrix compares the features of Workers and Pages. Unless otherwise stated below, what works in Pages works in Workers, and what works in Workers works in Pages. Think something is missing from this list? [Open a pull request ↗](https://github.com/cloudflare/cloudflare-docs/edit/production/src/content/docs/workers/static-assets/compatibility-matrix.mdx) or [create a GitHub issue ↗](https://github.com/cloudflare/cloudflare-docs/issues/new).

**Legend**   
✅: Supported   
⏳: Coming soon   
🟡: Unsupported, workaround available   
❌: Unsupported

| Feature                                                                                                                                      | Workers                    | Pages                      |
| -------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | -------------------------- |
| **Writing, Testing, and Deploying Code**                                                                                                     |                            |                            |
| [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/)                                                             | ✅                          | ❌                          |
| [Rollbacks](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/)                                     | ✅                          | ✅                          |
| [Gradual Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/)                                     | ✅                          | ❌                          |
| [Preview URLs](https://developers.cloudflare.com/workers/configuration/previews)                                                             | ✅                          | ✅                          |
| [Testing tools](https://developers.cloudflare.com/workers/testing)                                                                           | ✅                          | ✅                          |
| [Local Development](https://developers.cloudflare.com/workers/development-testing/)                                                          | ✅                          | ✅                          |
| [Remote Development (\--remote)](https://developers.cloudflare.com/workers/wrangler/commands/)                                               | ✅                          | ❌                          |
| [Quick Editor in Dashboard ↗](https://blog.cloudflare.com/improved-quick-edit)                                                               | ✅                          | ❌                          |
| **Static Assets**                                                                                                                            |                            |                            |
| [Early Hints](https://developers.cloudflare.com/pages/configuration/early-hints/)                                                            | ❌                          | ✅                          |
| [Custom HTTP headers for static assets](https://developers.cloudflare.com/workers/static-assets/headers/)                                    | ✅                          | ✅                          |
| [Middleware](https://developers.cloudflare.com/workers/static-assets/binding/#run%5Fworker%5Ffirst)                                          | ✅ [1](#user-content-fn-1)  | ✅                          |
| [Redirects](https://developers.cloudflare.com/workers/static-assets/redirects/)                                                              | ✅                          | ✅                          |
| [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/)                                                        | ✅                          | ✅                          |
| [Serve assets on a path](https://developers.cloudflare.com/workers/static-assets/routing/advanced/serving-a-subdirectory/)                   | ✅                          | ❌                          |
| **Observability**                                                                                                                            |                            |                            |
| [Workers Logs](https://developers.cloudflare.com/workers/observability/)                                                                     | ✅                          | ❌                          |
| [Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/)                                                             | ✅                          | ❌                          |
| [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/)                                                   | ✅                          | ❌                          |
| [Real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs/)                                               | ✅                          | ✅                          |
| [Source Maps](https://developers.cloudflare.com/workers/observability/source-maps/)                                                          | ✅                          | ❌                          |
| **Runtime APIs & Compute Models**                                                                                                            |                            |                            |
| [Node.js Compatibility Mode](https://developers.cloudflare.com/workers/runtime-apis/nodejs/)                                                 | ✅                          | ✅                          |
| [Durable Objects](https://developers.cloudflare.com/durable-objects/api/)                                                                    | ✅                          | 🟡 [2](#user-content-fn-2) |
| [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/)                                                      | ✅                          | ❌                          |
| **Bindings**                                                                                                                                 |                            |                            |
| [AI](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/#2-connect-your-worker-to-workers-ai)                         | ✅                          | ✅                          |
| [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine)                                                             | ✅                          | ✅                          |
| [Assets](https://developers.cloudflare.com/workers/static-assets/binding/)                                                                   | ✅                          | ✅                          |
| [Browser Rendering](https://developers.cloudflare.com/browser-rendering)                                                                     | ✅                          | ✅                          |
| [D1](https://developers.cloudflare.com/d1/worker-api/)                                                                                       | ✅                          | ✅                          |
| [Email Workers](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/)                                           | ✅                          | ❌                          |
| [Environment Variables](https://developers.cloudflare.com/workers/configuration/environment-variables/)                                      | ✅                          | ✅                          |
| [Hyperdrive](https://developers.cloudflare.com/hyperdrive/)                                                                                  | ✅                          | ✅                          |
| [Image Resizing](https://developers.cloudflare.com/images/transform-images/bindings/)                                                        | ✅                          | ❌                          |
| [KV](https://developers.cloudflare.com/kv/)                                                                                                  | ✅                          | ✅                          |
| [mTLS](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls/)                                                                | ✅                          | ✅                          |
| [Queue Producers](https://developers.cloudflare.com/queues/configuration/configure-queues/#producer-worker-configuration)                    | ✅                          | ✅                          |
| [Queue Consumers](https://developers.cloudflare.com/queues/configuration/configure-queues/#consumer-worker-configuration)                    | ✅                          | ❌                          |
| [R2](https://developers.cloudflare.com/r2/)                                                                                                  | ✅                          | ✅                          |
| [Rate Limiting](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/)                                                 | ✅                          | ❌                          |
| [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/)                                                                  | ✅                          | ✅                          |
| [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/)                                        | ✅                          | ✅                          |
| [Vectorize](https://developers.cloudflare.com/vectorize/get-started/intro/#3-bind-your-worker-to-your-index)                                 | ✅                          | ✅                          |
| **Builds (CI/CD)**                                                                                                                           |                            |                            |
| [Monorepos](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/)                                                         | ✅                          | ✅                          |
| [Build Watch Paths](https://developers.cloudflare.com/workers/ci-cd/builds/build-watch-paths/)                                               | ✅                          | ✅                          |
| [Build Caching](https://developers.cloudflare.com/workers/ci-cd/builds/build-caching/)                                                       | ✅                          | ✅                          |
| [Deploy Hooks](https://developers.cloudflare.com/workers/ci-cd/builds/deploy-hooks/)                                                         | ✅                          | ✅                          |
| [Branch Deploy Controls](https://developers.cloudflare.com/pages/configuration/branch-build-controls/)                                       | 🟡 [3](#user-content-fn-3) | ✅                          |
| [Custom Branch Aliases](https://developers.cloudflare.com/pages/how-to/custom-branch-aliases/)                                               | ⏳                          | ✅                          |
| **Pages Functions**                                                                                                                          |                            |                            |
| [File-based Routing](https://developers.cloudflare.com/pages/functions/routing/)                                                             | 🟡 [4](#user-content-fn-4) | ✅                          |
| [Pages Plugins](https://developers.cloudflare.com/pages/functions/plugins/)                                                                  | 🟡 [5](#user-content-fn-5) | ✅                          |
| **Domain Configuration**                                                                                                                     |                            |                            |
| [Custom domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/#add-a-custom-domain)                        | ✅                          | ✅                          |
| [Custom subdomains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/#set-up-a-custom-domain-in-the-dashboard) | ✅                          | ✅                          |
| [Custom domains outside Cloudflare zones](https://developers.cloudflare.com/pages/configuration/custom-domains/#add-a-custom-cname-record)   | ❌                          | ✅                          |
| [Non-root routes](https://developers.cloudflare.com/workers/configuration/routing/routes/)                                                   | ✅                          | ❌                          |

## Footnotes

1. Middleware can be configured via the [run\_worker\_first](https://developers.cloudflare.com/workers/static-assets/binding/#run%5Fworker%5Ffirst) option, but is charged as a normal Worker invocation. We plan to explore additional related options in the future. [↩](#user-content-fnref-1)
2. To [use Durable Objects with your Cloudflare Pages project](https://developers.cloudflare.com/pages/functions/bindings/#durable-objects), you must create a separate Worker with a Durable Object and then declare a binding to it in both your Production and Preview environments. Using Durable Objects with Workers is simpler and recommended. [↩](#user-content-fnref-2)
3. Workers Builds supports enabling [non-production branch builds](https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds), though does not yet have the same level of configurability as Pages does. [↩](#user-content-fnref-3)
4. Workers [supports popular frameworks](https://developers.cloudflare.com/workers/framework-guides/), many of which implement file-based routing. Additionally, you can use Wrangler to [compile your folder of functions/](#pages-functions-with-a-functions-folder) into a Worker to help ease the migration from Pages to Workers. [↩](#user-content-fnref-4)
5. As in 4, Wrangler can [compile your Pages Functions into a Worker](#pages-functions-with-a-functions-folder). Or if you are starting from scratch, everything that is possible with Pages Functions can also be achieved by adding code to your Worker or by using framework-specific plugins for relevant third party tools. [↩](#user-content-fnref-5)


---

---
title: Migrate from Netlify to Workers
description: Migrate your Netlify application to Cloudflare Workers. You should already have an existing project deployed on Netlify that you would like to host on Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from Netlify to Workers

**Last reviewed:**  11 months ago 

In this tutorial, you will learn how to migrate your Netlify application to Cloudflare Workers.

You should already have an existing project deployed on Netlify that you would like to host on Cloudflare Workers. Netlify-specific features are not supported by Cloudflare Workers. Review the [Workers compatibility matrix](https://developers.cloudflare.com/workers/static-assets/migration-guides/migrate-from-pages/#compatibility-matrix) for more information on what is supported.

## Frameworks

Some frameworks like Next.js, Astro with on demand rendering, and others have specific guides for migrating to Cloudflare Workers. Refer to our [framework guides](https://developers.cloudflare.com/workers/framework-guides/) for more information. If your framework has a **Deploy an existing project on Workers** guide, follow that guide for specific instructions. Otherwise, continue with the steps below.

## Find your build command and build directory

To move your application to Cloudflare Workers, you will need to know your build command and build directory. Cloudflare Workers will use this information to build and deploy your application. We will cover how to find these values in the Netlify Dashboard below.

In your Netlify Dashboard, find the project you want to migrate to Workers. Go to the **Project configuration** menu for your specific project, then go into the **Build & deploy** menu item. You will find a **Build settings** card that includes the **Build command** and **Publish directory** fields. Save these for deploying to Cloudflare Workers. In the image below, the **Build command** is `npm run build`, and the **Publish directory** is `.next`.

![Finding the Build Command and publish Directory fields](https://developers.cloudflare.com/_astro/netlify-build-command.DH5kCyI8_Z14wWmF.webp) 

## Create a wrangler file

In the root of your project, create a `wrangler.jsonc` or `wrangler.toml` file (`wrangler.jsonc` is recommended). What goes in the file depends on what type of application you are deploying: an application powered by [Static Site Generation (SSG)](https://developers.cloudflare.com/workers/static-assets/routing/static-site-generation/), or a [Single Page Application (SPA)](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/).

For each case, be sure to update the `<your-project-name>` value with the name of your project and `<your-build-directory>` value with the build directory from Netlify.

For a **static site**, you will need to add the following to your wrangler file.

**wrangler.jsonc**

```
{
  "name": "<your-project-name>",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "<your-build-directory>",
  },
}
```

**wrangler.toml**

```
name = "<your-project-name>"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "<your-build-directory>"
```

For a **Single Page Application**, you will need to add the following to your Wrangler configuration file, which includes the `not_found_handling` field.

**wrangler.jsonc**

```
{
  "name": "<your-project-name>",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "<your-build-directory>",
    "not_found_handling": "single-page-application",
  },
}
```

**wrangler.toml**

```
name = "<your-project-name>"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "<your-build-directory>"
not_found_handling = "single-page-application"
```


## Create a new Workers project

Your application now has the configuration it needs to be built and deployed to Cloudflare Workers.

The [Connect a new Worker](https://developers.cloudflare.com/workers/ci-cd/builds/#connect-a-new-worker) guide walks you through connecting your GitHub repository to Cloudflare Workers. In the configuration step, ensure your build command matches the one you found on Netlify, and leave the deploy command as the default `npx wrangler deploy`.

## Add a custom domain

Workers Custom Domains only support domains that are configured as zones on your account. A zone is a domain (such as `example.com`) that Cloudflare manages for you, including its DNS and traffic.

Follow these instructions for [adding a custom domain to your Workers project](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/#add-a-custom-domain). You will also find additional information on creating a zone for your domain.

## Delete your Netlify app

Once your custom domain is set up and sending requests to Cloudflare Workers, you can safely delete your Netlify application.

## Troubleshooting

For additional migration instructions, review the [Cloudflare Pages to Workers migration guide](https://developers.cloudflare.com/workers/static-assets/migration-guides/migrate-from-pages/). While not Netlify-specific, it covers some additional steps that may be helpful.


---

---
title: Migrate from Vercel to Workers
description: Migrate your Vercel application to Cloudflare Workers. You should already have an existing project deployed on Vercel that you would like to host on Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from Vercel to Workers

**Last reviewed:**  11 months ago 

In this tutorial, you will learn how to migrate your Vercel application to Cloudflare Workers.

You should already have an existing project deployed on Vercel that you would like to host on Cloudflare Workers. Vercel-specific features are not supported by Cloudflare Workers. Review the [Workers compatibility matrix](https://developers.cloudflare.com/workers/static-assets/migration-guides/migrate-from-pages/#compatibility-matrix) for more information on what is supported.

## Frameworks

Some frameworks, such as Next.js and Astro with on-demand rendering, have specific guides for migrating to Cloudflare Workers. Refer to our [framework guides](https://developers.cloudflare.com/workers/framework-guides/) for more information. If your framework has a **Deploy an existing project on Workers** guide, follow that guide for specific instructions. Otherwise, continue with the steps below.

## Find your build command and build directory

To move your application to Cloudflare Workers, you will need to know your build command and build directory. Cloudflare Workers will use this information to build and deploy your application. We'll cover how to find these values in the Vercel Dashboard below.

In your Vercel Dashboard, find the project you want to migrate to Workers. Go to the **Settings** tab for your specific project and find the **Build & Development settings** panel. You will find the **Build Command** and **Output Directory** fields there. If you are using a framework, these values may not be filled in but will show the defaults used by the framework. Save these for deploying to Cloudflare Workers. In the below image, the **Build Command** is `npm run build`, and the **Output Directory** is `dist`.

![Finding the Build Command and Output Directory fields](https://developers.cloudflare.com/_astro/vercel-deploy-1.DrHD4fam_2hTL0B.webp) 

## Create a wrangler file

In the root of your project, create a `wrangler.jsonc` or `wrangler.toml` file (`wrangler.jsonc` is recommended). What goes in the file depends on what type of application you are deploying: static or single-page application.

For each case, be sure to update the `<your-project-name>` value with the name of your project and the `<your-build-directory>` value with the build directory from Vercel. Be sure to use the correct relative path, for example `./dist` if the build directory is `dist`, or `./build` if it is `build`.

For a **static site**, you will need to add the following to your wrangler file.

**wrangler.jsonc**

```
{
  "name": "<your-project-name>",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "<your-build-directory>",
  },
}
```

**wrangler.toml**

```
name = "<your-project-name>"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "<your-build-directory>"
```

For a **single page application**, you will need to add the following to your wrangler file, which includes the `not_found_handling` field.

**wrangler.jsonc**

```
{
  "name": "<your-project-name>",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "<your-build-directory>",
    "not_found_handling": "single-page-application",
  },
}
```

**wrangler.toml**

```
name = "<your-project-name>"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "<your-build-directory>"
not_found_handling = "single-page-application"
```


## Create a new Workers project

Your application now has the configuration it needs to be built and deployed to Cloudflare Workers.

The [Connect a new Worker](https://developers.cloudflare.com/workers/ci-cd/builds/#connect-a-new-worker) guide walks you through connecting your GitHub repository to Cloudflare Workers. In the configuration step, ensure your build command matches the one you found on Vercel, and leave the deploy command as the default `npx wrangler deploy`.

## Add a custom domain

Workers Custom Domains only support domains that are configured as zones on your account. A zone is a domain (such as `example.com`) that Cloudflare manages for you, including its DNS and traffic.

Follow these instructions for [adding a custom domain to your Workers project](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/#add-a-custom-domain). You will also find additional information on creating a zone for your domain.

## Delete your Vercel app

Once your custom domain is set up and sending requests to Cloudflare Workers, you can safely delete your Vercel application.

## Troubleshooting

For additional migration instructions, review the [Cloudflare Pages to Workers migration guide](https://developers.cloudflare.com/workers/static-assets/migration-guides/migrate-from-pages/). While not Vercel-specific, it covers some additional steps that may be helpful.


---

---
title: Redirects
description: To apply custom redirects on a Worker with static assets, declare your redirects in a plain text file called _redirects without a file extension, in the static asset directory of your project. This file will not itself be served as a static asset, but will instead be parsed by Workers and its rules will be applied to static asset responses.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Redirects

To apply custom redirects on a Worker with static assets, declare your redirects in a plain text file called `_redirects` without a file extension, in the static asset directory of your project. This file will not itself be served as a static asset, but will instead be parsed by Workers and its rules will be applied to static asset responses.

If you are using a framework, you will often have a directory named `public/` or `static/`, and this usually contains deploy-ready assets, such as favicons, `robots.txt` files, and site manifests. These files get copied over to a final output directory during the build, so this is the perfect place to author your `_redirects` file. If you are not using a framework, the `_redirects` file can go directly into your [static assets directory](https://developers.cloudflare.com/workers/static-assets/binding/#directory).

Warning

Redirects defined in the `_redirects` file are not applied to requests served by your Worker code, even if the request URL matches a rule defined in `_redirects`. You may wish to apply redirects manually in your Worker code, or explore other options such as [Bulk Redirects](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/create-dashboard/).
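If you do choose to apply redirects manually in your Worker code, a minimal sketch might look like the following. The rule map and paths here are hypothetical examples, and the fallback response stands in for your existing Worker logic:

```javascript
// A minimal sketch of applying a redirect manually in Worker code.
// The rule map and paths below are hypothetical examples.
const REDIRECTS = new Map([
  ["/old-dashboard", { to: "/dashboard", code: 301 }],
]);

const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    const rule = REDIRECTS.get(url.pathname);
    if (rule) {
      return Response.redirect(new URL(rule.to, url).toString(), rule.code);
    }
    // Fall through to your existing Worker logic here (for example,
    // env.ASSETS.fetch(request) if you use an assets binding).
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Because this check runs before any other work in the handler, the redirect applies to Worker-handled routes that the `_redirects` file cannot reach.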

## Structure

### Per line

Each redirect must be defined on its own line and follow this format; otherwise, it will be ignored.

```
[source] [destination] [code?]
```

* `source` ` string ` required  
   * A file path.  
   * Can include [wildcards (\*)](#splats) and [placeholders](#placeholders).  
   * Because fragments are evaluated by your browser and not Cloudflare's network, any fragments in the source are not evaluated.
* `destination` ` string ` required  
   * A file path or external link.  
   * Can include fragments, query strings, [splats](#splats), and [placeholders](#placeholders).
* `code` ` number ` (default: 302) optional  
   * The HTTP status code to respond with.

Lines starting with a `#` will be treated as comments.
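As an illustration of the per-line format and comment handling described above, a minimal parser might look like this (a sketch only, not Cloudflare's actual implementation):

```javascript
// Parse _redirects text into rule objects.
// Lines starting with "#" are comments; malformed lines are ignored.
// Illustrative sketch only, not Cloudflare's actual parser.
function parseRedirects(text) {
  const rules = [];
  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (line === "" || line.startsWith("#")) continue; // comment or blank
    const parts = line.split(/\s+/);
    if (parts.length < 2 || parts.length > 3) continue; // wrong format: ignored
    const [source, destination, code] = parts;
    rules.push({
      source,
      destination,
      code: code ? Number(code) : 302, // 302 is the default status code
    });
  }
  return rules;
}
```

For example, `parseRedirects("/twitch https://twitch.tv")` yields a single rule with the default status code 302.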

### Per file

A `_redirects` file is limited to 2,000 static redirects and 100 dynamic redirects, for a combined total of 2,100 redirects. Each redirect declaration has a 1,000-character limit.

In your `_redirects` file:

* The order of your redirects matters. If there are multiple redirects for the same `source` path, the top-most redirect is applied.
* Static redirects should appear before dynamic redirects.
* Redirects are always followed, regardless of whether or not an asset matches the incoming request.

A complete example with multiple redirects may look like the following:

```
/home301 / 301
/home302 / 302
/querystrings /?query=string 301
/twitch https://twitch.tv
/trailing /trailing/ 301
/notrailing/ /nottrailing 301
/page/ /page2/#fragment 301
/blog/* https://blog.my.domain/:splat
/products/:code/:name /products?code=:code&name=:name
```

## Advanced redirects

Cloudflare currently offers limited support for advanced redirects.

| Feature                             | Support | Example                                                       | Notes                                   |
| ----------------------------------- | ------- | ------------------------------------------------------------- | --------------------------------------- |
| Redirects (301, 302, 303, 307, 308) | ✅       | /home / 301                                                   | 302 is used as the default status code. |
| Rewrites (other status codes)       | ❌       | /blog/\* /blog/404.html 404                                   |                                         |
| Splats                              | ✅       | /blog/\* /posts/:splat                                        | Refer to [Splats](#splats).             |
| Placeholders                        | ✅       | /blog/:year/:month/:date/:slug /news/:year/:month/:date/:slug | Refer to [Placeholders](#placeholders). |
| Query Parameters                    | ❌       | /shop id=:id /blog/:id 301                                    |                                         |
| Proxying                            | ✅       | /blog/\* /news/:splat 200                                     | Refer to [Proxying](#proxying).         |
| Domain-level redirects              | ❌       | workers.example.com/\* workers.example.com/blog/:splat 301    |                                         |
| Redirect by country or language     | ❌       | / /us 302 Country=us                                          |                                         |
| Redirect by cookie                  | ❌       | /\* /preview/:splat 302 Cookie=preview                        |                                         |

## Redirects and header matching

Redirects execute before headers, so if a request matches rules in both files, the redirect wins.

### Splats

On matching, a splat (asterisk, `*`) will greedily match all characters. You may only include a single splat in the URL.

The matched value can be used in the redirect location with `:splat`.

### Placeholders

A placeholder can be defined with `:placeholder_name`. A colon (`:`) followed by a letter indicates the start of a placeholder, and the placeholder name that follows must be composed of alphanumeric characters and underscores (`:[A-Za-z]\w*`). Every named placeholder can only be referenced once. Placeholders match all characters except the delimiter: a period (`.`) or a forward slash (`/`) when part of the host, and only a forward slash (`/`) when part of the path.

Similarly, the matched value can be used in the redirect values with `:placeholder_name`.

```
/movies/:title /media/:title
```
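To illustrate how splats and placeholders rewrite a matching path, here is a rough model (illustrative only; it does not cover every rule Cloudflare applies, such as host-part delimiters):

```javascript
// Compile a _redirects source pattern into a matcher that rewrites a matching
// path into the destination. Illustrative sketch only.
function compileRule(source, destination) {
  const pattern = source
    .replace(/\*/g, "(?<splat>.*)") // a single splat matches greedily
    .replace(/:([A-Za-z]\w*)/g, (_, name) => `(?<${name}>[^/]+)`); // placeholders stop at "/"
  const re = new RegExp(`^${pattern}$`);
  return (path) => {
    const match = re.exec(path);
    if (match === null) return null; // rule does not apply
    let result = destination;
    for (const [name, value] of Object.entries(match.groups ?? {})) {
      result = result.replace(`:${name}`, value); // substitute :splat / :name
    }
    return result;
  };
}
```

With this sketch, `compileRule("/movies/:title", "/media/:title")` maps `/movies/inception` to `/media/inception`, while `/movies/a/b` does not match because a placeholder cannot cross a forward slash.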

### Proxying

Proxying only supports relative URLs on your site. You cannot proxy external domains.

Only one redirect is applied per request; rules do not chain. In the following example, a request to `/a` will render `/b`, and a request to `/b` will render `/c`, but `/a` will not render `/c`.

```
/a /b 200
/b /c 200
```

Note

Be aware that proxying pages can have an adverse effect on search engine optimization (SEO). Search engines often penalize websites that serve duplicate content. Consider adding a `Link` HTTP header which informs search engines of the canonical source of content.

For example, if you have added `/about/faq/* /about/faqs 200` to your `_redirects` file, you may want to add the following to your `_headers` file:

```
/about/faq/*
  Link: </about/faqs>; rel="canonical"
```

## Surpass `_redirects` limits

A [\_redirects](https://developers.cloudflare.com/workers/platform/limits/#redirects) file has a maximum of 2,000 static redirects and 100 dynamic redirects, for a combined total of 2,100 redirects. Use [Bulk Redirects](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/) to handle redirects that surpass the 2,100-rule limit of `_redirects`.

Note

The redirects defined in the `_redirects` file of your build folder can work together with your Bulk Redirects. Bulk Redirects run in front of your Worker, where your `_redirects` rules live, so in case of duplicates, Bulk Redirects are evaluated first.

For example, if you have Bulk Redirects set up to direct `abc.com` to `xyz.com` but also have `_redirects` set up to direct `xyz.com` to `foo.com`, a request for `abc.com` will eventually redirect to `foo.com`.
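Conceptually, because the browser follows each redirect with a new request that passes through Bulk Redirects first and then your Worker, the chain resolves like this (a simplified model using the hypothetical hostnames from the example above):

```javascript
// Simplified model: Bulk Redirects run at the edge before your Worker, and
// _redirects rules run in the Worker, so each followed redirect re-enters both
// layers in that order. Hostnames are the hypothetical ones from the example.
const bulkRedirects = new Map([["abc.com", "xyz.com"]]);
const fileRedirects = new Map([["xyz.com", "foo.com"]]);

function finalDestination(host) {
  for (let hops = 0; hops < 10; hops++) { // cap hops, as a browser would
    const next = bulkRedirects.get(host) ?? fileRedirects.get(host);
    if (next === undefined) return host; // no rule matched: done
    host = next; // follow the redirect with a fresh request
  }
  return host;
}
```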

To use Bulk Redirects, refer to the [Bulk Redirects dashboard documentation](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/create-dashboard/) or the [Bulk Redirects API documentation](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/create-api/).

## Related resources

* [Transform Rules](https://developers.cloudflare.com/rules/transform/)


---

---
title: Gradual rollouts
description: Provide static asset routing solutions for gradual Worker deployments.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Gradual rollouts

[Gradual deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) route requests to different Worker versions based on configured percentages. When your Worker serves static assets, this per-request routing can cause asset reference mismatches that result in 404 errors and broken user experiences.

Modern JavaScript frameworks commonly generate fingerprinted asset filenames during builds. For example, when you build a React application with Vite, your assets might look like:

```
dist/
├── index.html
├── assets/
│   ├── index-a1b2c3d4.js    # Main bundle with content hash
│   ├── index-e5f6g7h8.css   # Styles with content hash
│   └── logo-i9j0k1l2.svg    # Images with content hash
```

During a gradual rollout between two versions of your application, you might have:

**Version A (old build):**

* `index.html` references `assets/index-a1b2c3d4.js`
* `assets/index-a1b2c3d4.js` exists

**Version B (new build):**

* `index.html` references `assets/index-m3n4o5p6.js`
* `assets/index-m3n4o5p6.js` exists

If a user's initial request for `/` goes to Version A, they'll receive HTML that references `index-a1b2c3d4.js`. However, when their browser then requests `/assets/index-a1b2c3d4.js`, that request might be routed to Version B, which only contains `index-m3n4o5p6.js`, resulting in a 404 error.

This issue affects applications built with any framework that fingerprints assets, including:

* **React** (Create React App, Next.js, Vite)
* **Vue** (Vue CLI, Nuxt.js, Vite)
* **Angular** (Angular CLI)
* **Svelte** (SvelteKit, Vite)
* **Static site generators** that optimize asset loading

## Preventing asset mismatches with version affinity

[Version affinity](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#version-affinity) ensures all requests from the same user are handled by the same Worker version, preventing asset reference mismatches entirely. You can configure this using [Transform Rules](https://developers.cloudflare.com/rules/transform/request-header-modification/) to automatically set the `Cloudflare-Workers-Version-Key` header.

### Session-based affinity

For applications with user sessions, use session identifiers:

Text in **Expression Editor**:

```
http.cookie contains "session_id"
```

Selected operation under **Modify request header**: _Set dynamic_

**Header name**: `Cloudflare-Workers-Version-Key`

**Value**: `http.request.cookies["session_id"][0]`

### User-based affinity

For authenticated applications, use user identifiers stored in cookies or headers:

Text in **Expression Editor**:

```
http.cookie contains "user_id"
```

Selected operation under **Modify request header**: _Set dynamic_

**Header name**: `Cloudflare-Workers-Version-Key`

**Value**: `http.request.cookies["user_id"][0]`

## Testing and monitoring

Before rolling out to production, verify that your version affinity setup works correctly:

Terminal window

```
# Test with version affinity - both requests should hit the same version
curl -H "Cookie: session_id=test123" https://your-worker.example.com/
curl -H "Cookie: session_id=test123" https://your-worker.example.com/assets/index.js
```

During gradual rollouts, monitor your Worker's analytics for increased 404 response rates, especially for asset files (`.js`, `.css`, `.png`). Use [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) or [Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/) to track these metrics and catch asset mismatch issues early.

## Best practices

When deploying applications with fingerprinted assets using gradual rollouts:

* Use version affinity (preferably session-based) to ensure consistent asset loading
* Test asset loading using version overrides before increasing rollout percentages
* Monitor 404 rates during deployments to catch issues quickly
* Have rollback procedures ready in case asset problems arise
* Choose session-based or user-based affinity depending on your application's authentication model

With proper version affinity configuration, you can safely perform gradual deployments of applications that use modern build tools and asset optimization without worrying about broken user experiences from missing assets.
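One way to picture version affinity is as a deterministic hash of the `Cloudflare-Workers-Version-Key` value into rollout buckets: the same key always lands on the same version, so the HTML and its fingerprinted assets come from the same build. This is only an illustrative model; Cloudflare's actual assignment logic is internal:

```javascript
// Illustrative model of version affinity: deterministically map a version-key
// value to one of the versions in a gradual deployment. Not Cloudflare's
// actual algorithm.
function pickVersion(versionKey, versions) {
  // versions: e.g. [{ id: "version-a", percentage: 90 }, { id: "version-b", percentage: 10 }]
  let hash = 0;
  for (const ch of versionKey) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  const bucket = hash % 100; // place the key in a 0-99 bucket
  let cumulative = 0;
  for (const v of versions) {
    cumulative += v.percentage;
    if (bucket < cumulative) return v.id;
  }
  return versions[versions.length - 1].id;
}
```

Because the bucket depends only on the key, a user's initial HTML request and every follow-up asset request carrying the same `session_id` resolve to the same version.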


---

---
title: HTML handling
description: How to configure HTML handling and trailing slashes for the static assets of your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# HTML handling

Forcing or dropping trailing slashes on request paths (for example, `example.com/page/` vs. `example.com/page`) is often something that developers wish to control for cosmetic reasons. Additionally, it can impact SEO because search engines often treat URLs with and without trailing slashes as different, separate pages. This distinction can lead to duplicate content issues, indexing problems, and overall confusion about the correct canonical version of a page.

The [assets.html\_handling configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#assets) determines the redirects and rewrites of requests for HTML content. It specifies the canonical URL pattern (where Cloudflare serves HTML content from) and where Cloudflare redirects non-canonical URLs to.

Take the following directory structure:

```
dist/
├── file.html
└── folder/
    └── index.html
```

## Automatic trailing slashes (default)

This will usually give you the desired behavior automatically: individual files (e.g. `foo.html`) will be served _without_ a trailing slash and folder index files (e.g. `foo/index.html`) will be served _with_ a trailing slash.

**wrangler.jsonc**

```
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "./dist/",
    "html_handling": "auto-trailing-slash"
  }
}
```

**wrangler.toml**

```
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "./dist/"
html_handling = "auto-trailing-slash"
```

Based on the incoming requests, the following assets would be served:

| Incoming Request   | Response        | Asset Served            |
| ------------------ | --------------- | ----------------------- |
| /file              | 200             | /dist/file.html         |
| /file.html         | 307 to /file    | \-                      |
| /file/             | 307 to /file    | \-                      |
| /file/index        | 307 to /file    | \-                      |
| /file/index.html   | 307 to /file    | \-                      |
| /folder            | 307 to /folder/ | \-                      |
| /folder.html       | 307 to /folder  | \-                      |
| /folder/           | 200             | /dist/folder/index.html |
| /folder/index      | 307 to /folder  | \-                      |
| /folder/index.html | 307 to /folder  | \-                      |

## Force trailing slashes

Alternatively, you can force trailing slashes (`force-trailing-slash`).

**wrangler.jsonc**

```
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "./dist/",
    "html_handling": "force-trailing-slash"
  }
}
```

**wrangler.toml**

```
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "./dist/"
html_handling = "force-trailing-slash"
```

Based on the incoming requests, the following assets would be served:

| Incoming Request   | Response        | Asset Served            |
| ------------------ | --------------- | ----------------------- |
| /file              | 307 to /file/   | \-                      |
| /file.html         | 307 to /file/   | \-                      |
| /file/             | 200             | /dist/file.html         |
| /file/index        | 307 to /file/   | \-                      |
| /file/index.html   | 307 to /file/   | \-                      |
| /folder            | 307 to /folder/ | \-                      |
| /folder.html       | 307 to /folder/ | \-                      |
| /folder/           | 200             | /dist/folder/index.html |
| /folder/index      | 307 to /folder/ | \-                      |
| /folder/index.html | 307 to /folder/ | \-                      |

## Drop trailing slashes

Or you can drop trailing slashes (`drop-trailing-slash`).

**wrangler.jsonc**

```
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "./dist/",
    "html_handling": "drop-trailing-slash"
  }
}
```

**wrangler.toml**

```
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "./dist/"
html_handling = "drop-trailing-slash"
```

Based on the incoming requests, the following assets would be served:

| Incoming Request   | Response       | Asset Served            |
| ------------------ | -------------- | ----------------------- |
| /file              | 200            | /dist/file.html         |
| /file.html         | 307 to /file   | \-                      |
| /file/             | 307 to /file   | \-                      |
| /file/index        | 307 to /file   | \-                      |
| /file/index.html   | 307 to /file   | \-                      |
| /folder            | 200            | /dist/folder/index.html |
| /folder.html       | 307 to /folder | \-                      |
| /folder/           | 307 to /folder | \-                      |
| /folder/index      | 307 to /folder | \-                      |
| /folder/index.html | 307 to /folder | \-                      |

## Disable HTML handling

Alternatively, if you have bespoke needs, you can disable the built-in HTML handling entirely (`none`).

**wrangler.jsonc**

```
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "./dist/",
    "html_handling": "none"
  }
}
```

**wrangler.toml**

```
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "./dist/"
html_handling = "none"
```

Based on the incoming requests, the following assets would be served:

| Incoming Request   | Response                        | Asset Served                    |
| ------------------ | ------------------------------- | ------------------------------- |
| /file              | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /file.html         | 200                             | /dist/file.html                 |
| /file/             | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /file/index        | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /file/index.html   | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /folder            | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /folder.html       | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /folder/           | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /folder/index      | Depends on not\_found\_handling | Depends on not\_found\_handling |
| /folder/index.html | 200                             | /dist/folder/index.html         |

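With `none`, asset matching reduces to an exact path lookup; anything else falls through to `not_found_handling`. A minimal sketch (again illustrative, not the actual implementation):

```javascript
// With "html_handling": "none", only exact paths match; everything else
// falls through to not_found_handling (represented here as null).
function resolveNone(pathname, files) {
  return files.has(pathname) ? { status: 200, asset: pathname } : null;
}
```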
---

# Serving a subdirectory

Note

This feature requires Wrangler v3.98.0 or later.

Like with any other Worker, [you can configure a Worker with assets to run on a path of your domain](https://developers.cloudflare.com/workers/configuration/routing/routes/). Assets defined for a Worker must be nested in a directory structure that mirrors the desired path.

For example, to serve assets from `example.com/blog/*`, create a `blog` directory in your asset directory.

```
dist/
└── blog/
    ├── index.html
    └── posts/
        ├── post1.html
        └── post2.html
```

With a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) like so:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "assets-on-a-path-example",
  "main": "src/index.js",
  "route": "example.com/blog/*",
  "assets": {
    "directory": "dist"
  }
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "assets-on-a-path-example"
main = "src/index.js"
route = "example.com/blog/*"

[assets]
directory = "dist"
```

In this example, requests to `example.com/blog/` will serve the `index.html` file, and requests to `example.com/blog/posts/post1` will serve the `post1.html` file.

If you have a file outside the configured path, it will not be served, unless it is part of the `assets.not_found_handling` for [Single Page Applications](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/) or [custom 404 pages](https://developers.cloudflare.com/workers/static-assets/routing/static-site-generation/). For example, if you have a `home.html` file in the root of your asset directory, it will not be served when requesting `example.com/blog/home`. However, if needed, these files can still be manually fetched over [the binding](https://developers.cloudflare.com/workers/static-assets/binding/#binding).

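For example, a Worker script could serve the out-of-path `home.html` over the assets binding. This sketch assumes a binding named `ASSETS` has been configured under `assets.binding`; in a real Worker, the `worker` object would be the file's default export.

```javascript
// Hypothetical sketch: manually serving the out-of-path /home.html via the
// assets binding. Assumes `assets.binding` is configured as "ASSETS"; in a
// real Worker, this object would be the default export.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === "/blog/home") {
      // Rewrite the request to the asset's real path and let the binding serve it.
      return env.ASSETS.fetch(new URL("/home.html", request.url));
    }
    return new Response(null, { status: 404 });
  },
};
```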
---

# Full-stack application

Full-stack applications are web applications that span both the client and the server. The build process of these applications produces HTML files, accompanying client-side resources (e.g. JavaScript bundles, CSS stylesheets, images, and fonts) and a Worker script. Data is typically fetched by the Worker script at request-time, and the initial page response is usually server-side rendered (SSR). From there, the client is hydrated and a SPA-like experience ensues.

The following full-stack frameworks are natively supported by Workers:

* [ Astro ](https://developers.cloudflare.com/workers/framework-guides/web-apps/astro/)
* [ React Router (formerly Remix) ](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/)
* [ Next.js ](https://developers.cloudflare.com/workers/framework-guides/web-apps/nextjs/)
* [ RedwoodSDK ](https://developers.cloudflare.com/workers/framework-guides/web-apps/redwoodsdk/)
* [ TanStack Start ](https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack-start/)
* [ Vike ](https://developers.cloudflare.com/workers/framework-guides/web-apps/vike/)
* [ Analog ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/analog/)
* [ Angular ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/angular/)
* [ Nuxt ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/nuxt/)
* [ Qwik ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/qwik/)
* [ Solid ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/solid/)
* [ Waku ](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/waku/)

---

# Single Page Application (SPA)

Single Page Applications (SPAs) are web applications which are client-side rendered (CSR). They are often built with a framework such as [React](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/), [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/) or [Svelte](https://developers.cloudflare.com/workers/framework-guides/web-apps/sveltekit/). The build process of these frameworks will produce a single `/index.html` file and accompanying client-side resources (e.g. JavaScript bundles, CSS stylesheets, images, fonts, etc.). Typically, data is fetched by the client from an API with client-side requests.

When you configure `single-page-application` mode, Cloudflare provides default routing behavior that automatically serves your `/index.html` file for navigation requests (those with `Sec-Fetch-Mode: navigate` headers) which don't match any other asset. For more control over which paths invoke your Worker script, you can use [advanced routing control](#advanced-routing-control).

## Configuration

In order to deploy a Single Page Application to Workers, you must configure the `assets.directory` and `assets.not_found_handling` options in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/#assets):

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "./dist/",
    "not_found_handling": "single-page-application"
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "./dist/"
not_found_handling = "single-page-application"
```

Configuring `assets.not_found_handling` to `single-page-application` overrides the default serving behavior of Workers for static assets. When an incoming request does not match a file in the `assets.directory`, Workers will serve the contents of the `/index.html` file with a `200 OK` status.

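The resulting behavior can be sketched as follows (illustrative only; real asset matching also applies `html_handling`, described in the HTML handling docs):

```javascript
// Illustrative sketch of "single-page-application" not-found handling:
// unmatched requests are answered with /index.html and a 200 status.
function resolveSPA(pathname, files) {
  if (files.has(pathname)) {
    return { status: 200, asset: pathname };
  }
  return { status: 200, asset: "/index.html" };
}
```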
### Navigation requests

If you have a Worker script (`main`), have configured `assets.not_found_handling`, and use the [assets\_navigation\_prefers\_asset\_serving compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#navigation-requests-prefer-asset-serving) (or set a compatibility date of `2025-04-01` or greater), _navigation requests_ will not invoke the Worker script. A _navigation request_ is a request made with the `Sec-Fetch-Mode: navigate` header, which browsers automatically attach when navigating to a page. This reduces billable invocations of your Worker script, and is particularly useful for client-heavy applications which would otherwise invoke your Worker script very frequently and unnecessarily.

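Detection hinges on a single header; a request is treated as a navigation request roughly like this (a simplified sketch, not the exact runtime logic):

```javascript
// Simplified check: browsers set Sec-Fetch-Mode: navigate on page
// navigations, but not on fetch()/XHR calls made by client-side code.
function isNavigationRequest(request) {
  return request.headers.get("Sec-Fetch-Mode") === "navigate";
}
```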
Note

This can lead to surprising but intentional behavior. For example, if you define an API endpoint in a Worker script (e.g. `/api/date`) and then fetch it with a client-side request in your SPA (e.g. `fetch("/api/date")`), the Worker script will be invoked and your API response will be returned as expected. However, if you navigate to `/api/date` in your browser, you will be served an HTML file. Again, this is to reduce the number of billable invocations for your application while still maintaining SPA-like functionality. This behavior can be disabled by setting the [assets\_navigation\_has\_no\_effect compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#navigation-requests-prefer-asset-serving).

Note

If you wish to run the Worker script ahead of serving static assets (e.g. to log requests, or perform some authentication checks), you can additionally configure the [assets.run\_worker\_first setting](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run%5Fworker%5Ffirst). This will retain your `assets.not_found_handling` behavior when no other asset matches, while still allowing you to control access to your application with your Worker script.

#### Client-side callbacks

In some cases, you might need to pass a value from a navigation request to your Worker script. For example, if you are acting as an OAuth callback, you might expect requests to a route such as `/oauth/callback?code=...`. With the `assets_navigation_prefers_asset_serving` flag, your HTML assets will be served, rather than your Worker script being invoked. In this case, we recommend passing the value to the server with client-side JavaScript, either as part of your client application's handling of this route, or with a slimmed-down endpoint-specific HTML file.

./dist/oauth/callback.html

```html
<!DOCTYPE html>
<html>
  <head>
    <title>OAuth callback</title>
  </head>
  <body>
    <p>Loading...</p>
    <script>
      (async () => {
        const response = await fetch("/api/oauth/callback" + window.location.search);
        if (response.ok) {
          window.location.href = '/';
        } else {
          document.querySelector('p').textContent = 'Error: ' + (await response.json()).error;
        }
      })();
    </script>
  </body>
</html>
```

./worker/index.js

```javascript
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/api/oauth/callback") {
      const code = url.searchParams.get("code");

      const sessionId =
        await exchangeAuthorizationCodeForAccessAndRefreshTokensAndPersistToDatabaseAndGetSessionId(
          code,
        );

      if (sessionId) {
        return new Response(null, {
          headers: {
            "Set-Cookie": `sessionId=${sessionId}; HttpOnly; SameSite=Strict; Secure; Path=/; Max-Age=86400`,
          },
        });
      } else {
        return Response.json(
          { error: "Invalid OAuth code. Please try again." },
          { status: 400 },
        );
      }
    }

    return new Response(null, { status: 404 });
  }
}
```

./worker/index.ts

```typescript
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request: Request) {
    const url = new URL(request.url);
    if (url.pathname === "/api/oauth/callback") {
      const code = url.searchParams.get("code");

      const sessionId =
        await exchangeAuthorizationCodeForAccessAndRefreshTokensAndPersistToDatabaseAndGetSessionId(
          code,
        );

      if (sessionId) {
        return new Response(null, {
          headers: {
            "Set-Cookie": `sessionId=${sessionId}; HttpOnly; SameSite=Strict; Secure; Path=/; Max-Age=86400`,
          },
        });
      } else {
        return Response.json(
          { error: "Invalid OAuth code. Please try again." },
          { status: 400 },
        );
      }
    }

    return new Response(null, { status: 404 });
  }
}
```

## Advanced routing control

For more explicit control over SPA routing behavior, you can use `run_worker_first` with an array of route patterns. This approach disables the automatic `Sec-Fetch-Mode: navigate` detection and gives you explicit control over which requests should be handled by your Worker script vs served as static assets.

Note

Advanced routing control is supported in:

* [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/) v4.20.0 and above
* [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/get-started/) v1.7.0 and above

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./src/index.ts",
  "assets": {
    "directory": "./dist/",
    "not_found_handling": "single-page-application",
    "binding": "ASSETS",
    "run_worker_first": ["/api/*", "!/api/docs/*"]
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./src/index.ts"

[assets]
directory = "./dist/"
not_found_handling = "single-page-application"
binding = "ASSETS"
run_worker_first = ["/api/*", "!/api/docs/*"]
```

This configuration provides explicit routing control without relying on browser navigation headers, making it ideal for complex SPAs that need fine-grained routing behavior. Your Worker script can then handle the matched routes and serve dynamic content (optionally using [the assets binding](https://developers.cloudflare.com/workers/static-assets/binding/#binding)).

**For example:**

./src/index.js

```javascript
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    if (url.pathname === "/api/name") {
      return new Response(JSON.stringify({ name: "Cloudflare" }), {
        headers: { "Content-Type": "application/json" },
      });
    }

    return new Response(null, { status: 404 });
  },
};
```

./src/index.ts

```typescript
export default {
  async fetch(request, env): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/api/name") {
      return new Response(JSON.stringify({ name: "Cloudflare" }), {
        headers: { "Content-Type": "application/json" },
      });
    }

    return new Response(null, { status: 404 });
  },
} satisfies ExportedHandler;
```

You can also use `run_worker_first` to inject data into your SPA shell before it reaches the browser. For a full example using HTMLRewriter to prefetch API data and embed it in the HTML stream, refer to [SPA shell with bootstrap data](https://developers.cloudflare.com/workers/examples/spa-shell/).

## Local Development

If you are using a Vite-powered SPA framework, you might be interested in using our [Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) which offers a Vite-native developer experience.

### Reference

In most cases, configuring `assets.not_found_handling` to `single-page-application` will provide the desired behavior. If you are building your own framework, or have specialized needs, the following diagram can provide insight into exactly how the routing decisions are made.

Full routing decision diagram

```mermaid
flowchart
Request@{ shape: stadium, label: "Incoming request" }
Request-->RunWorkerFirst
RunWorkerFirst@{ shape: diamond, label: "Run Worker script first?" }
RunWorkerFirst-->|Request matches run_worker_first path|WorkerScriptInvoked
RunWorkerFirst-->|Request matches run_worker_first negative path|AssetServing
RunWorkerFirst-->|No matches|RequestMatchesAsset
RequestMatchesAsset@{ shape: diamond, label: "Request matches asset?" }
RequestMatchesAsset-->|Yes|AssetServing
RequestMatchesAsset-->|No|WorkerScriptPresent
WorkerScriptPresent@{ shape: diamond, label: "Worker script present?" }
WorkerScriptPresent-->|No|AssetServing
WorkerScriptPresent-->|Yes|RequestNavigation
RequestNavigation@{ shape: diamond, label: "Request is navigation request?" }
RequestNavigation-->|No|WorkerScriptInvoked
WorkerScriptInvoked@{ shape: rect, label: "Worker script invoked" }
WorkerScriptInvoked-.->|Asset binding|AssetServing
RequestNavigation-->|Yes|AssetServing

subgraph Asset serving
	AssetServing@{ shape: diamond, label: "Request matches asset?" }
	AssetServing-->|Yes|AssetServed
	AssetServed@{ shape: stadium, label: "**200 OK**<br />asset served" }
	AssetServing-->|No|NotFoundHandling

	subgraph single-page-application
		NotFoundHandling@{ shape: rect, label: "Request rewritten to /index.html" }
		NotFoundHandling-->SPAExists
		SPAExists@{ shape: diamond, label: "HTML Page exists?" }
		SPAExists-->|Yes|SPAServed
		SPAExists-->|No|Generic404PageServed
		Generic404PageServed@{ shape: stadium, label: "**404 Not Found**<br />null-body response served" }
		SPAServed@{ shape: stadium, label: "**200 OK**<br />/index.html page served" }
	end

end
```

Requests are only billable if a Worker script is invoked. From there, it is possible to serve assets using the assets binding (depicted as the dotted line in the diagram above).

Although unlikely to impact how a SPA is served, you can read more about how we match assets in the [HTML handling docs](https://developers.cloudflare.com/workers/static-assets/routing/advanced/html-handling/).

---

# Static Site Generation (SSG) and custom 404 pages

Static Site Generation (SSG) applications are web applications which are predominantly built or "prerendered" ahead-of-time. They are often built with a framework such as [Gatsby](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/gatsby/) or [Docusaurus](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/docusaurus/). The build process of these frameworks will produce many HTML files and accompanying client-side resources (e.g. JavaScript bundles, CSS stylesheets, images, fonts, etc.). Data is either static, fetched and compiled into the HTML at build-time, or fetched by the client from an API with client-side requests.

Often, an SSG framework will allow you to create a custom 404 page.

## Configuration

In order to deploy a Static Site Generation application to Workers, you must configure the `assets.directory`, and optionally, the `assets.not_found_handling` and `assets.html_handling` options in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/#assets):

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "./dist/",
    "not_found_handling": "404-page",
    "html_handling": "auto-trailing-slash"
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
directory = "./dist/"
not_found_handling = "404-page"
html_handling = "auto-trailing-slash"
```

`assets.html_handling` defaults to `auto-trailing-slash` and this will usually give you the desired behavior automatically: individual files (e.g. `foo.html`) will be served _without_ a trailing slash and folder index files (e.g. `foo/index.html`) will be served _with_ a trailing slash. Alternatively, you can force trailing slashes (`force-trailing-slash`) or drop trailing slashes (`drop-trailing-slash`) on requests for HTML pages.

### Custom 404 pages

Configuring `assets.not_found_handling` to `404-page` overrides the default serving behavior of Workers for static assets. When an incoming request does not match a file in the `assets.directory`, Workers will serve the contents of the nearest `404.html` file with a `404 Not Found` status.

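The "nearest `404.html`" lookup walks up the directory tree from the request path. A sketch of that walk (illustrative, not the actual implementation):

```javascript
// Illustrative sketch: walk up from the request path towards the root,
// returning the first 404.html found (or null if none exists).
function nearest404(pathname, files) {
  let dir = pathname;
  while (true) {
    dir = dir.slice(0, dir.lastIndexOf("/"));
    const candidate = dir + "/404.html";
    if (files.has(candidate)) return candidate;
    if (dir === "") return null;
  }
}
```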
### Navigation requests

If you have a Worker script (`main`), have configured `assets.not_found_handling`, and use the [assets\_navigation\_prefers\_asset\_serving compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#navigation-requests-prefer-asset-serving) (or set a compatibility date of `2025-04-01` or greater), _navigation requests_ will not invoke the Worker script. A _navigation request_ is a request made with the `Sec-Fetch-Mode: navigate` header, which browsers automatically attach when navigating to a page. This reduces billable invocations of your Worker script, and is particularly useful for client-heavy applications which would otherwise invoke your Worker script very frequently and unnecessarily.

Note

This can lead to surprising but intentional behavior. For example, if you define an API endpoint in a Worker script (e.g. `/api/date`) and then fetch it with a client-side request in your SPA (e.g. `fetch("/api/date")`), the Worker script will be invoked and your API response will be returned as expected. However, if you navigate to `/api/date` in your browser, you will be served an HTML file. Again, this is to reduce the number of billable invocations for your application while still maintaining SPA-like functionality. This behavior can be disabled by setting the [assets\_navigation\_has\_no\_effect compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#navigation-requests-prefer-asset-serving).

Note

If you wish to run the Worker script ahead of serving static assets (e.g. to log requests, or perform some authentication checks), you can additionally configure the [assets.run\_worker\_first setting](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run%5Fworker%5Ffirst). This will retain your `assets.not_found_handling` behavior when no other asset matches, while still allowing you to control access to your application with your Worker script.

#### Client-side callbacks

In some cases, you might need to pass a value from a navigation request to your Worker script. For example, if you are acting as an OAuth callback, you might expect requests to a route such as `/oauth/callback?code=...`. With the `assets_navigation_prefers_asset_serving` flag, your HTML assets will be served, rather than your Worker script being invoked. In this case, we recommend passing the value to the server with client-side JavaScript, either as part of your client application's handling of this route, or with a slimmed-down endpoint-specific HTML file.

./dist/oauth/callback.html

```html
<!DOCTYPE html>
<html>
  <head>
    <title>OAuth callback</title>
  </head>
  <body>
    <p>Loading...</p>
    <script>
      (async () => {
        const response = await fetch("/api/oauth/callback" + window.location.search);
        if (response.ok) {
          window.location.href = '/';
        } else {
          document.querySelector('p').textContent = 'Error: ' + (await response.json()).error;
        }
      })();
    </script>
  </body>
</html>
```

./worker/index.js

```javascript
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/api/oauth/callback") {
      const code = url.searchParams.get("code");

      const sessionId =
        await exchangeAuthorizationCodeForAccessAndRefreshTokensAndPersistToDatabaseAndGetSessionId(
          code,
        );

      if (sessionId) {
        return new Response(null, {
          headers: {
            "Set-Cookie": `sessionId=${sessionId}; HttpOnly; SameSite=Strict; Secure; Path=/; Max-Age=86400`,
          },
        });
      } else {
        return Response.json(
          { error: "Invalid OAuth code. Please try again." },
          { status: 400 },
        );
      }
    }

    return new Response(null, { status: 404 });
  }
}
```

./worker/index.ts

```typescript
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request: Request) {
    const url = new URL(request.url);
    if (url.pathname === "/api/oauth/callback") {
      const code = url.searchParams.get("code");

      const sessionId =
        await exchangeAuthorizationCodeForAccessAndRefreshTokensAndPersistToDatabaseAndGetSessionId(
          code,
        );

      if (sessionId) {
        return new Response(null, {
          headers: {
            "Set-Cookie": `sessionId=${sessionId}; HttpOnly; SameSite=Strict; Secure; Path=/; Max-Age=86400`,
          },
        });
      } else {
        return Response.json(
          { error: "Invalid OAuth code. Please try again." },
          { status: 400 },
        );
      }
    }

    return new Response(null, { status: 404 });
  }
}
```

## Local Development

If you are using a Vite-powered framework, you might be interested in using our [Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) which offers a Vite-native developer experience.

### Reference

In most cases, configuring `assets.not_found_handling` to `404-page` will provide the desired behavior. If you are building your own framework, or have specialized needs, the following diagram can provide insight into exactly how the routing decisions are made.

Full routing decision diagram

```mermaid
flowchart
Request@{ shape: stadium, label: "Incoming request" }
Request-->RunWorkerFirst
RunWorkerFirst@{ shape: diamond, label: "Run Worker script first?" }
RunWorkerFirst-->|Request matches run_worker_first path|WorkerScriptInvoked
RunWorkerFirst-->|Request matches run_worker_first negative path|AssetServing
RunWorkerFirst-->|No matches|RequestMatchesAsset
RequestMatchesAsset@{ shape: diamond, label: "Request matches asset?" }
RequestMatchesAsset-->|Yes|AssetServing
RequestMatchesAsset-->|No|WorkerScriptPresent
WorkerScriptPresent@{ shape: diamond, label: "Worker script present?" }
WorkerScriptPresent-->|No|AssetServing
WorkerScriptPresent-->|Yes|RequestNavigation
RequestNavigation@{ shape: diamond, label: "Request is navigation request?" }
RequestNavigation-->|No|WorkerScriptInvoked
WorkerScriptInvoked@{ shape: rect, label: "Worker script invoked" }
WorkerScriptInvoked-.->|Asset binding|AssetServing
RequestNavigation-->|Yes|AssetServing

subgraph Asset serving
	AssetServing@{ shape: diamond, label: "Request matches asset?" }
	AssetServing-->|Yes|AssetServed
	AssetServed@{ shape: stadium, label: "**200 OK**<br />asset served" }
	AssetServing-->|No|NotFoundHandling

	subgraph 404-page
		NotFoundHandling@{ shape: rect, label: "Request rewritten to ../404.html" }
		NotFoundHandling-->404PageExists
		404PageExists@{ shape: diamond, label: "HTML Page exists?" }
		404PageExists-->|Yes|404PageServed
		404PageExists-->|No|404PageAtIndex
		404PageAtIndex@{ shape: diamond, label: "Request is for root /404.html?" }
		404PageAtIndex-->|Yes|Generic404PageServed
		404PageAtIndex-->|No|NotFoundHandling
		Generic404PageServed@{ shape: stadium, label: "**404 Not Found**<br />null-body response served" }
		404PageServed@{ shape: stadium, label: "**404 Not Found**<br />404.html page served" }
	end

end
```

Requests are only billable if a Worker script is invoked. From there, it is possible to serve assets using the assets binding (depicted as the dotted line in the diagram above).

You can read more about how we match assets in the [HTML handling docs](https://developers.cloudflare.com/workers/static-assets/routing/advanced/html-handling/).


---

---
title: Worker script
description: How the presence of a Worker script influences static asset routing and the related configuration options.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Worker script

If you have both static assets and a Worker script configured, Cloudflare will first attempt to serve a static asset that matches the incoming request. You can read more about how we match assets in the [HTML handling docs](https://developers.cloudflare.com/workers/static-assets/routing/advanced/html-handling/).

If an appropriate static asset is not found, Cloudflare will invoke your Worker script.

This allows you to combine these two features to create powerful applications, such as a [full-stack application](https://developers.cloudflare.com/workers/static-assets/routing/full-stack-application/), a [Single Page Application (SPA)](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/), or a [Static Site Generation (SSG) application](https://developers.cloudflare.com/workers/static-assets/routing/static-site-generation/) with an API.

## Run your Worker script first

You can configure the [assets.run\_worker\_first setting](https://developers.cloudflare.com/workers/static-assets/binding/#run%5Fworker%5Ffirst) to control when your Worker script runs relative to static asset serving. This gives you more control over exactly how and when those assets are served and can be used to implement "middleware" for requests.

Warning

If you are using [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) in combination with `assets.run_worker_first`, you may find that placement decisions are not optimized correctly as, currently, the entire Worker script is placed as a single unit. This may not accurately reflect the desired "split" in behavior of edge-first vs. smart-placed compute for your application. This is a limitation that we are currently working to resolve.

### Run Worker before each request

If you need to always run your Worker script before serving static assets (for example, to log requests, perform authentication checks, use [HTMLRewriter](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/), or otherwise transform assets before serving), set `run_worker_first` to `true`:

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./worker/index.ts",
  "assets": {
    "directory": "./dist/",
    "binding": "ASSETS",
    "run_worker_first": true
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./worker/index.ts"

[assets]
directory = "./dist/"
binding = "ASSETS"
run_worker_first = true
```


./worker/index.js

```js
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request) {
    // You can perform checks before fetching assets
    const user = await checkIfRequestIsAuthenticated(request);

    if (!user) {
      return new Response("Unauthorized", { status: 401 });
    }

    // You can then fetch the assets as normal, or pass in a custom Request
    // object here if you want to fetch some other specific asset
    const assetResponse = await this.env.ASSETS.fetch(request);

    // You can return the static asset response as-is, or transform it with
    // something like HTMLRewriter
    return new HTMLRewriter()
      .on("#user", {
        element(element) {
          element.setInnerContent(JSON.stringify({ name: user.name }));
        },
      })
      .transform(assetResponse);
  }
}
```

./worker/index.ts

```ts
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint<Env> {
  async fetch(request: Request) {
    // You can perform checks before fetching assets
    const user = await checkIfRequestIsAuthenticated(request);

    if (!user) {
      return new Response("Unauthorized", { status: 401 });
    }

    // You can then fetch the assets as normal, or pass in a custom Request
    // object here if you want to fetch some other specific asset
    const assetResponse = await this.env.ASSETS.fetch(request);

    // You can return the static asset response as-is, or transform it with
    // something like HTMLRewriter
    return new HTMLRewriter()
      .on("#user", {
        element(element) {
          element.setInnerContent(JSON.stringify({ name: user.name }));
        },
      })
      .transform(assetResponse);
  }
}
```

### Run Worker first for selective paths

You can also configure selective Worker-first routing using an array of route patterns, often paired with the [single-page-application setting](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/#advanced-routing-control). This allows you to run the Worker first only for specific routes while letting other requests follow the default asset-first behavior:

wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./worker/index.ts",
  "assets": {
    "directory": "./dist/",
    "not_found_handling": "single-page-application",
    "binding": "ASSETS",
    "run_worker_first": ["/oauth/callback"]
  }
}
```

wrangler.toml

```toml
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./worker/index.ts"

[assets]
directory = "./dist/"
not_found_handling = "single-page-application"
binding = "ASSETS"
run_worker_first = [ "/oauth/callback" ]
```


./worker/index.js

```js
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async fetch(request) {
    // The only thing this Worker script does is handle an OAuth callback.
    // All other requests either serve a matching asset or the index.html
    // fallback, without ever hitting this code.
    const url = new URL(request.url);
    const code = url.searchParams.get("code");
    const state = url.searchParams.get("state");

    const accessToken = await exchangeCodeForToken(code, state);
    const sessionIdentifier = await storeTokenAndGenerateSession(accessToken);

    // Redirect back to the index, but set a cookie that the front-end will use.
    return new Response(null, {
      status: 302,
      headers: {
        Location: "/",
        "Set-Cookie": `session_token=${sessionIdentifier}; HttpOnly; Secure; SameSite=Lax; Path=/`,
      },
    });
  }
}
```

./worker/index.ts

```ts
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint<Env> {
  async fetch(request: Request) {
    // The only thing this Worker script does is handle an OAuth callback.
    // All other requests either serve a matching asset or the index.html
    // fallback, without ever hitting this code.
    const url = new URL(request.url);
    const code = url.searchParams.get("code");
    const state = url.searchParams.get("state");

    const accessToken = await exchangeCodeForToken(code, state);
    const sessionIdentifier = await storeTokenAndGenerateSession(accessToken);

    // Redirect back to the index, but set a cookie that the front-end will use.
    return new Response(null, {
      status: 302,
      headers: {
        Location: "/",
        "Set-Cookie": `session_token=${sessionIdentifier}; HttpOnly; Secure; SameSite=Lax; Path=/`,
      },
    });
  }
}
```


---

---
title: Testing
description: The Workers platform has a variety of ways to test your applications, depending on your requirements. We recommend using the Vitest integration, which allows you to run tests inside the Workers runtime, and unit test individual functions within your Worker.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Testing

The Workers platform has a variety of ways to test your applications, depending on your requirements. We recommend using the [Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration), which allows you to run tests _inside_ the Workers runtime, and unit test individual functions within your Worker.

[ Get started with Vitest ](https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/) 

## Testing comparison matrix

If you don't use Vitest, both [Miniflare's API](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests) and the [unstable\_startWorker()](https://developers.cloudflare.com/workers/wrangler/api/#unstable%5Fstartworker) API provide options for testing your Worker in any testing framework.

| Feature                               | [Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration) | [unstable\_startWorker()](https://developers.cloudflare.com/workers/testing/unstable%5Fstartworker/) | [Miniflare's API](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests/) |
| ------------------------------------- | ------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
| Unit testing                          | ✅                                                                                          | ❌                                                                                                    | ❌                                                                                             |
| Integration testing                   | ✅                                                                                          | ✅                                                                                                    | ✅                                                                                             |
| Loading Wrangler configuration files  | ✅                                                                                          | ✅                                                                                                    | ❌                                                                                             |
| Use bindings directly in tests        | ✅                                                                                          | ❌                                                                                                    | ✅                                                                                             |
| Isolated per-test storage             | ✅                                                                                          | ❌                                                                                                    | ❌                                                                                             |
| Outbound request mocking              | ✅                                                                                          | ❌                                                                                                    | ✅                                                                                             |
| Multiple Worker support               | ✅                                                                                          | ✅                                                                                                    | ✅                                                                                             |
| Direct access to Durable Objects      | ✅                                                                                          | ❌                                                                                                    | ❌                                                                                             |
| Run Durable Object alarms immediately | ✅                                                                                          | ❌                                                                                                    | ❌                                                                                             |
| List Durable Objects                  | ✅                                                                                          | ❌                                                                                                    | ❌                                                                                             |
| Testing service Workers               | ❌                                                                                          | ✅                                                                                                    | ✅                                                                                             |

Pages Functions

The content described on this page is also applicable to [Pages Functions](https://developers.cloudflare.com/pages/functions/). Pages Functions are Cloudflare Workers and can be thought of synonymously with Workers in this context.
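
Because a modules-format Worker is just an exported object, its `fetch` handler can also be called like any other function. The sketch below shows a framework-agnostic unit test of a handler with a stubbed `env`; all names are illustrative, and the Vitest integration additionally provides real bindings and runs tests inside the Workers runtime itself.

```javascript
// A framework-agnostic sketch: call the fetch handler directly with a
// stubbed `env`, no server required. Relies on the Request/Response
// globals available in Node 18+.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === "/api/greet") {
      return new Response(`Hello, ${env.GREETING_TARGET}!`);
    }
    return new Response("Not Found", { status: 404 });
  },
};

(async () => {
  const env = { GREETING_TARGET: "Workers" };

  const ok = await worker.fetch(new Request("http://example.com/api/greet"), env);
  console.log(ok.status, await ok.text()); // 200 "Hello, Workers!"

  const missing = await worker.fetch(new Request("http://example.com/nope"), env);
  console.log(missing.status); // 404
})();
```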


---

---
title: Miniflare
description: Miniflare is a simulator for developing and testing Cloudflare Workers. It's written in TypeScript, and runs your code in a sandbox implementing Workers' runtime APIs.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Miniflare

Warning

This documentation describes the Miniflare API, which is only relevant for advanced use cases. Most users should instead use [Wrangler](https://developers.cloudflare.com/workers/wrangler) to build, run, and deploy their Workers locally.

**Miniflare** is a simulator for developing and testing [**Cloudflare Workers** ↗](https://workers.cloudflare.com/). It's written in TypeScript, and runs your code in a sandbox implementing Workers' runtime APIs.

* 🎉 **Fun:** develop Workers easily with detailed logging, file watching and pretty error pages supporting source maps.
* 🔋 **Full-featured:** supports most Workers features, including KV, Durable Objects, WebSockets, modules and more.
* ⚡ **Fully-local:** test and develop Workers without an Internet connection. Reload code on change quickly.
[ Get Started ](https://developers.cloudflare.com/workers/testing/miniflare/get-started) [ GitHub ](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare) [ NPM ](https://npmjs.com/package/miniflare) 

---

These docs primarily cover Miniflare-specific things. For more information on runtime APIs, refer to the [Cloudflare Workers docs](https://developers.cloudflare.com/workers).

If you find something that doesn't behave as it does in the production Workers environment (and this difference isn't documented), or something's wrong in these docs, please [open a GitHub issue ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose).

* [ Get Started ](https://developers.cloudflare.com/workers/testing/miniflare/get-started/)
* [ Writing tests ](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests/) :  Write integration tests against Workers using Miniflare.
* [ Core ](https://developers.cloudflare.com/workers/testing/miniflare/core/)
* [ Developing ](https://developers.cloudflare.com/workers/testing/miniflare/developing/)
* [ Migrations ](https://developers.cloudflare.com/workers/testing/miniflare/migrations/) :  Review migration guides for specific versions of Miniflare.
* [ Storage ](https://developers.cloudflare.com/workers/testing/miniflare/storage/)


---

---
title: Compatibility Dates
description: Miniflare uses compatibility dates to opt into backwards-incompatible changes from a specific date. If one isn't set, it will default to some time far in the past.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Compatibility Dates

* [Compatibility Dates Reference](https://developers.cloudflare.com/workers/configuration/compatibility-dates)

## Compatibility Dates

Miniflare uses compatibility dates to opt into backwards-incompatible changes from a specific date. If one isn't set, it will default to some time far in the past.

JavaScript

```js
const mf = new Miniflare({
  compatibilityDate: "2021-11-12",
});
```

## Compatibility Flags

Miniflare also lets you opt-in/out of specific changes using compatibility flags:

JavaScript

```js
const mf = new Miniflare({
  compatibilityFlags: [
    "formdata_parser_supports_files",
    "durable_object_fetch_allows_relative_url",
  ],
});
```


---

---
title: Fetch Events
description: Whenever an HTTP request is made, a Request object is dispatched to your worker, then the generated Response is returned. The Request object will include a cf object. Miniflare will log the method, path, status, and the time it took to respond.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Fetch Events

* [FetchEvent Reference](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/)

## HTTP Requests

Whenever an HTTP request is made, a `Request` object is dispatched to your worker, then the generated `Response` is returned. The `Request` object will include a [cf object](https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties). Miniflare will log the method, path, status, and the time it took to respond.

If the Worker throws an error whilst generating a response, an error page containing the stack trace is returned instead.

## Dispatching Events

When using the API, the `dispatchFetch` function can be used to dispatch `fetch` events to your Worker. This can be used for testing responses. `dispatchFetch` has the same API as the regular `fetch` method: it either takes a `Request` object, or a URL and optional `RequestInit` object:

JavaScript

```js
import { Miniflare, Request } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `
  export default {
    async fetch(request, env, ctx) {
      const body = JSON.stringify({
        url: request.url,
        header: request.headers.get("X-Message"),
      });
      return new Response(body, {
        headers: { "Content-Type": "application/json" },
      });
    },
  };
  `,
});

let res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.json()); // { url: "http://localhost:8787/", header: null }

res = await mf.dispatchFetch("http://localhost:8787/1", {
  headers: { "X-Message": "1" },
});
console.log(await res.json()); // { url: "http://localhost:8787/1", header: "1" }

res = await mf.dispatchFetch(
  new Request("http://localhost:8787/2", {
    headers: { "X-Message": "2" },
  }),
);
console.log(await res.json()); // { url: "http://localhost:8787/2", header: "2" }
```

When dispatching events, you are responsible for adding [CF-\* headers](https://developers.cloudflare.com/fundamentals/reference/http-headers/) and the [cf object](https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties). This lets you control their values for testing:

JavaScript

```js
const res = await mf.dispatchFetch("http://localhost:8787", {
  headers: {
    "CF-IPCountry": "GB",
  },
  cf: {
    country: "GB",
  },
});
```

## Upstream

Miniflare will call each `fetch` listener until a response is returned. If no response is returned, or an exception is thrown and `passThroughOnException()` has been called, the response will be fetched from the specified upstream instead:

JavaScript

```js
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  script: `
  addEventListener("fetch", (event) => {
    event.passThroughOnException();
    throw new Error();
  });
  `,
  upstream: "https://miniflare.dev",
});

// If you don't use the same upstream URL when dispatching, Miniflare will
// rewrite it to match the upstream
const res = await mf.dispatchFetch("https://miniflare.dev/core/fetch");
console.log(await res.text()); // Source code of this page
```


---

---
title: Modules
description: Miniflare supports both the traditional service-worker and the newer modules formats for writing workers. To use the modules format, enable it with:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Modules

* [Modules Reference](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/)

## Enabling Modules

Miniflare supports both the traditional `service-worker` and the newer `modules` formats for writing workers. To use the `modules` format, enable it with:

JavaScript

```js
const mf = new Miniflare({
  modules: true,
});
```

You can then use `modules` worker scripts like the following:

JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    // - `request` is the incoming `Request` instance
    // - `env` contains bindings, KV namespaces, Durable Objects, etc.
    // - `ctx` contains `waitUntil` and `passThroughOnException` methods
    return new Response("Hello Miniflare!");
  },
  async scheduled(controller, env, ctx) {
    // - `controller` contains `scheduledTime` and `cron` properties
    // - `env` contains bindings, KV namespaces, Durable Objects, etc.
    // - `ctx` contains the `waitUntil` method
    console.log("Doing something scheduled...");
  },
};
```

String scripts passed via the `script` option support the `modules` format, but they cannot import other modules. To import modules, you must use a script file via the `scriptPath` option.
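
For example, a multi-module Worker can be loaded from disk (the entry path here is an illustrative assumption):

```javascript
import { Miniflare } from "miniflare";

// Unlike an inline `script` string, the file at `scriptPath` can `import`
// other modules relative to itself.
const mf = new Miniflare({
  modules: true,
  scriptPath: "./src/index.mjs",
});
```

This is a configuration sketch; the entry file and anything it imports are resolved according to the module rules described below.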

## Module Rules

Miniflare supports all module types: `ESModule`, `CommonJS`, `Text`, `Data`, and `CompiledWasm`. You can specify additional module resolution rules as follows:

JavaScript

```js
const mf = new Miniflare({
  modulesRules: [
    { type: "ESModule", include: ["**/*.js"], fallthrough: true },
    { type: "Text", include: ["**/*.txt"] },
  ],
});
```

### Default Rules

The following rules are automatically added to the end of your modules rules list. You can override them by specifying rules matching the same `globs`:

JavaScript

```js
[
  { type: "ESModule", include: ["**/*.mjs"] },
  { type: "CommonJS", include: ["**/*.js", "**/*.cjs"] },
];
```


---

---
title: Multiple Workers
description: Miniflare allows you to run multiple workers in the same instance. All Workers can be defined at the same level, using the workers option.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Multiple Workers

Miniflare allows you to run multiple workers in the same instance. All Workers can be defined at the same level, using the `workers` option.

Here's an example that uses a service binding to increment a value in a shared KV namespace:

JavaScript

```js
import { Miniflare, Response } from "miniflare";

const message = "The count is ";
const mf = new Miniflare({
  // Options shared between workers such as HTTP and persistence configuration
  // should always be defined at the top level.
  host: "0.0.0.0",
  port: 8787,
  kvPersist: true,

  workers: [
    {
      name: "worker",
      kvNamespaces: { COUNTS: "counts" },
      serviceBindings: {
        INCREMENTER: "incrementer",
        // Service bindings can also be defined as custom functions, with access
        // to anything defined outside Miniflare.
        async CUSTOM(request) {
          // `request` is the incoming `Request` object.
          return new Response(message);
        },
      },
      modules: true,
      script: `export default {
        async fetch(request, env, ctx) {
          // Get the message defined outside
          const response = await env.CUSTOM.fetch("http://host/");
          const message = await response.text();

          // Increment the count 3 times
          await env.INCREMENTER.fetch("http://host/");
          await env.INCREMENTER.fetch("http://host/");
          await env.INCREMENTER.fetch("http://host/");
          const count = await env.COUNTS.get("count");

          return new Response(message + count);
        }
      }`,
    },
    {
      name: "incrementer",
      // Note we're using the same `COUNTS` namespace as before, but binding it
      // to `NUMBERS` instead.
      kvNamespaces: { NUMBERS: "counts" },
      // Worker formats can be mixed-and-matched
      script: `addEventListener("fetch", (event) => {
        event.respondWith(handleRequest());
      })
      async function handleRequest() {
        const count = parseInt((await NUMBERS.get("count")) ?? "0") + 1;
        await NUMBERS.put("count", count.toString());
        return new Response(count.toString());
      }`,
    },
  ],
});

const res = await mf.dispatchFetch("http://localhost");
console.log(await res.text()); // "The count is 3"
await mf.dispose();
```

## Routing

You can enable routing by specifying `routes` via the API, using the [standard route syntax](https://developers.cloudflare.com/workers/configuration/routing/routes/#matching-behavior). Note that port numbers are ignored:

JavaScript

```js
const mf = new Miniflare({
  workers: [
    {
      name: "api",
      scriptPath: "./api/worker.js",
      routes: ["http://127.0.0.1/api*", "api.mf/*"],
    },
  ],
});
```

When using hostnames that aren't `localhost` or `127.0.0.1`, you may need to edit your computer's `hosts` file so those hostnames resolve to `localhost`. On Linux and macOS, this is usually at `/etc/hosts`. On Windows, it's at `C:\Windows\System32\drivers\etc\hosts`. For the `api.mf` route above, we would need to append the following entry to the file:

```
127.0.0.1 api.mf
```

Alternatively, you can customise the `Host` header when sending the request:

Terminal window

# Dispatches to the "api" worker
$ curl "http://localhost:8787/todos/update/1" -H "Host: api.mf"
```

When using the API, Miniflare will use the request's URL to determine which Worker to dispatch to.

JavaScript

```
// Dispatches to the "api" worker
const res = await mf.dispatchFetch("http://api.mf/todos/update/1", { ... });
```

## Durable Objects

Miniflare supports the `script_name` option for accessing Durable Objects exported by other scripts. See [📌 Durable Objects](https://developers.cloudflare.com/workers/testing/miniflare/storage/durable-objects#using-a-class-exported-by-another-script) for more details.


---

---
title: Queues
description: Specify Queue producers to add to your environment as follows:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Queues

* [Queues Reference](https://developers.cloudflare.com/queues/)

## Producers

Specify Queue producers to add to your environment as follows:

JavaScript

```
const mf = new Miniflare({
  queueProducers: { MY_QUEUE: "my-queue" },
  // Or, if binding and queue names are the same:
  // queueProducers: ["MY_QUEUE"],
});
```

## Consumers

Specify Workers to consume messages from your Queues as follows:

JavaScript

```
const mf = new Miniflare({
  queueConsumers: {
    "my-queue": {
      maxBatchSize: 5, // default: 5
      maxBatchTimeout: 1 /* second(s) */, // default: 1
      maxRetries: 2, // default: 2
      deadLetterQueue: "my-dead-letter-queue", // default: none
    },
  },
  // Or, if using default consumer options:
  // queueConsumers: ["my-queue"],
});
```

## Manipulating Outside Workers

For testing, it can be valuable to interact with Queues outside a Worker. You can do this by using the `workers` option to run multiple Workers in the same instance:

JavaScript

```
const mf = new Miniflare({
  workers: [
    {
      name: "a",
      modules: true,
      script: `
      export default {
        async fetch(request, env, ctx) {
          await env.QUEUE.send(await request.text());
          return new Response(null, { status: 204 });
        }
      }
      `,
      queueProducers: { QUEUE: "my-queue" },
    },
    {
      name: "b",
      modules: true,
      script: `
      export default {
        async queue(batch, env, ctx) {
          console.log(batch);
        }
      }
      `,
      queueConsumers: { "my-queue": { maxBatchTimeout: 1 } },
    },
  ],
});

const queue = await mf.getQueueProducer("QUEUE", "a"); // Get from worker "a"
await queue.send("message"); // Logs "message" 1 second later
```


---

---
title: Scheduled Events
description: scheduled events are automatically dispatched according to the specified cron triggers
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Scheduled Events

* [ScheduledEvent Reference](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/)

## Cron Triggers

`scheduled` events are automatically dispatched according to the specified cron triggers:

JavaScript

```
const mf = new Miniflare({
  crons: ["15 * * * *", "45 * * * *"],
});
```

## HTTP Triggers

Because waiting for cron triggers is annoying, you can also make HTTP requests to `/cdn-cgi/mf/scheduled` to trigger `scheduled` events:

Terminal window

```
$ curl "http://localhost:8787/cdn-cgi/mf/scheduled"
```

To simulate different values of `scheduledTime` and `cron` in the dispatched event, use the `time` and `cron` query parameters:

Terminal window

```
$ curl "http://localhost:8787/cdn-cgi/mf/scheduled?time=1000"
$ curl "http://localhost:8787/cdn-cgi/mf/scheduled?cron=*+*+*+*+*"
```

## Dispatching Events

When using the API, the Worker handle returned by `getWorker()` can dispatch `scheduled` events directly, which is useful for testing. Its `scheduled()` method takes optional `scheduledTime` and `cron` parameters, defaulting to the current time and the empty string respectively, and returns a promise which resolves to an object describing the outcome of the event:

JavaScript

```
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `
  export default {
    async scheduled(controller, env, ctx) {
      if (controller.cron === "* * * * *") controller.noRetry();
    }
  }
  `,
});

const worker = await mf.getWorker();

let scheduledResult = await worker.scheduled({
  cron: "* * * * *",
});
console.log(scheduledResult); // { outcome: 'ok', noRetry: true }

scheduledResult = await worker.scheduled({
  scheduledTime: new Date(1000),
  cron: "30 * * * *",
});
console.log(scheduledResult); // { outcome: 'ok', noRetry: false }
```


---

---
title: Web Standards
description: When using the API, Miniflare allows you to substitute custom Responses for fetch() calls using undici's MockAgent API. This is useful for testing Workers that make HTTP requests to other services. To enable fetch mocking, create a MockAgent using the createFetchMock() function, then set this using the fetchMock option.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Web Standards

* [Web Standards Reference](https://developers.cloudflare.com/workers/runtime-apis/web-standards)
* [Encoding Reference](https://developers.cloudflare.com/workers/runtime-apis/encoding)
* [Fetch Reference](https://developers.cloudflare.com/workers/runtime-apis/fetch)
* [Request Reference](https://developers.cloudflare.com/workers/runtime-apis/request)
* [Response Reference](https://developers.cloudflare.com/workers/runtime-apis/response)
* [Streams Reference](https://developers.cloudflare.com/workers/runtime-apis/streams)
* [Web Crypto Reference](https://developers.cloudflare.com/workers/runtime-apis/web-crypto)

## Mocking Outbound `fetch` Requests

When using the API, Miniflare allows you to substitute custom `Response`s for `fetch()` calls using `undici`'s [MockAgent API ↗](https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentgetorigin). This is useful for testing Workers that make HTTP requests to other services. To enable `fetch` mocking, create a [MockAgent ↗](https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentgetorigin) using the `createFetchMock()` function, then set this using the `fetchMock` option.

JavaScript

```
import { Miniflare, createFetchMock } from "miniflare";

// Create `MockAgent` and connect it to the `Miniflare` instance
const fetchMock = createFetchMock();
const mf = new Miniflare({
  modules: true,
  script: `
  export default {
    async fetch(request, env, ctx) {
      const res = await fetch("https://example.com/thing");
      const text = await res.text();
      return new Response(\`response:\${text}\`);
    }
  }
  `,
  fetchMock,
});

// Throw when no matching mocked request is found
// (see https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentdisablenetconnect)
fetchMock.disableNetConnect();

// Mock request to https://example.com/thing
// (see https://undici.nodejs.org/#/docs/api/MockAgent?id=mockagentgetorigin)
const origin = fetchMock.get("https://example.com");
// (see https://undici.nodejs.org/#/docs/api/MockPool?id=mockpoolinterceptoptions)
origin
  .intercept({ method: "GET", path: "/thing" })
  .reply(200, "Mocked response!");

const res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.text()); // "response:Mocked response!"
```

## Subrequests

Miniflare does not support limiting the number of [subrequests](https://developers.cloudflare.com/workers/platform/limits#account-plan-limits). Please keep this in mind if you make a large number of subrequests from your Worker.


---

---
title: Variables and Secrets
description: Variables and secrets are bound as follows:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Variables and Secrets

## Bindings

Variables and secrets are bound as follows:

JavaScript

```
const mf = new Miniflare({
  bindings: {
    KEY1: "value1",
    KEY2: "value2",
  },
});
```

## Text and Data Blobs

Text and data blobs can be loaded from files. File contents will be read and bound as `string`s and `ArrayBuffer`s respectively.

JavaScript

```
const mf = new Miniflare({
  textBlobBindings: { TEXT: "text.txt" },
  dataBlobBindings: { DATA: "data.bin" },
});
```

## Globals

Injecting arbitrary globals is not supported by [workerd ↗](https://github.com/cloudflare/workerd). If you're using a service Worker, bindings will be injected as globals, but these must be JSON-serializable.


---

---
title: WebSockets
description: Miniflare will always upgrade WebSocket connections. The Worker must respond with a status 101 Switching Protocols response including a webSocket. For example, the Worker below implements an echo WebSocket server.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# WebSockets

* [WebSockets Reference](https://developers.cloudflare.com/workers/runtime-apis/websockets)
* [Using WebSockets](https://developers.cloudflare.com/workers/examples/websockets/)

## Server

Miniflare will always upgrade WebSocket connections. The Worker must respond with a status `101 Switching Protocols` response that includes a `webSocket`. For example, the Worker below implements an echo WebSocket server:

JavaScript

```
export default {
  fetch(request) {
    const [client, server] = Object.values(new WebSocketPair());

    server.accept();
    server.addEventListener("message", (event) => {
      server.send(event.data);
    });

    return new Response(null, {
      status: 101,
      webSocket: client,
    });
  },
};
```

When using `dispatchFetch`, you are responsible for handling WebSockets by using the `webSocket` property on `Response`. As an example, if the above Worker script were stored in `echo.mjs`:

JavaScript

```
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  scriptPath: "echo.mjs",
});

const res = await mf.dispatchFetch("https://example.com", {
  headers: {
    Upgrade: "websocket",
  },
});
const webSocket = res.webSocket;
webSocket.accept();
webSocket.addEventListener("message", (event) => {
  console.log(event.data);
});

webSocket.send("Hello!"); // Above listener logs "Hello!"
```


---

---
title: Attaching a Debugger
description: You can use regular Node.js tools to debug your Workers. Setting breakpoints, watching values and inspecting the call stack are all examples of things you can do with a debugger.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Attaching a Debugger

Warning

This documentation describes breakpoint debugging when using Miniflare directly, which is only relevant for advanced use cases. Instead, most users should refer to the [Workers Observability documentation for how to set this up when using Wrangler](https://developers.cloudflare.com/workers/observability/dev-tools/breakpoints/).

You can use regular Node.js tools to debug your Workers. Setting breakpoints, watching values and inspecting the call stack are all examples of things you can do with a debugger.

## Visual Studio Code

### Create configuration

The easiest way to debug a Worker in VSCode is to create a new configuration.

Open the **Run and Debug** menu in the VSCode activity bar and create a `.vscode/launch.json` file that contains the following:

.vscode/launch.json

```
{
  "configurations": [
    {
      "name": "Miniflare",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "cwd": "/",
      "resolveSourceMapLocations": null,
      "attachExistingChildren": false,
      "autoAttachChildProcesses": false
    }
  ]
}
```

From the **Run and Debug** menu in the activity bar, select the `Miniflare` configuration, and click the green play button to start debugging.

## WebStorm

Create a new configuration by clicking **Add Configuration** in the top right.

![WebStorm add configuration button](https://developers.cloudflare.com/_astro/debugger-webstorm-node-add.1Aka_l-1_1vHfDB.webp) 

Click the **plus** button in the top left of the popup and create a new **Node.js/Chrome** configuration. Set the **Host** field to `localhost` and the **Port** field to `9229`. Then click **OK**.

![WebStorm Node.js debug configuration](https://developers.cloudflare.com/_astro/debugger-webstorm-settings.CxmegMYm_Z1NsdxH.webp) 

With the new configuration selected, click the green debug button to start debugging.

![WebStorm configuration debug button](https://developers.cloudflare.com/_astro/debugger-webstorm-node-run.BodpA57u_Z1SMC98.webp) 

## DevTools

Breakpoints can also be added via the Workers DevTools. For more information, [read the guide](https://developers.cloudflare.com/workers/observability/dev-tools) in the Cloudflare Workers docs.


---

---
title: Live Reload
description: Miniflare automatically refreshes your browser when your Worker script changes when liveReload is set to true.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Live Reload

When `liveReload` is set to `true`, Miniflare automatically refreshes your browser whenever your Worker script changes.

JavaScript

```
const mf = new Miniflare({
  liveReload: true,
});
```

Miniflare will only inject the `<script>` tag required for live-reload at the end of responses with the `Content-Type` header set to `text/html`:

JavaScript

```
export default {
  fetch() {
    const body = `
      <!DOCTYPE html>
      <html>
      <body>
        <p>Try update me!</p>
      </body>
      </html>
    `;

    return new Response(body, {
      headers: { "Content-Type": "text/html; charset=utf-8" },
    });
  },
};
```


---

---
title: Get Started
description: The Miniflare API allows you to dispatch events to workers without making actual HTTP requests, simulate connections between Workers, and interact with local emulations of storage products like KV, R2, and Durable Objects. This makes it great for writing tests, or other advanced use cases where you need finer-grained control.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Get Started

The Miniflare API allows you to dispatch events to workers without making actual HTTP requests, simulate connections between Workers, and interact with local emulations of storage products like [KV](https://developers.cloudflare.com/workers/testing/miniflare/storage/kv), [R2](https://developers.cloudflare.com/workers/testing/miniflare/storage/r2), and [Durable Objects](https://developers.cloudflare.com/workers/testing/miniflare/storage/durable-objects). This makes it great for writing tests, or other advanced use cases where you need finer-grained control.

## Installation

Miniflare is installed using `npm` as a dev dependency:

npm

```
npm i -D miniflare
```

yarn

```
yarn add -D miniflare
```

pnpm

```
pnpm add -D miniflare
```

bun

```
bun add -d miniflare
```

## Usage

In all future examples, we'll assume Node.js is running in ES module mode. You can do this by setting the `type` field in your `package.json`:

package.json

```
{
  ...
  "type": "module"
  ...
}
```

To initialise Miniflare, import the `Miniflare` class from `miniflare`:

JavaScript

```
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `
  export default {
    async fetch(request, env, ctx) {
      return new Response("Hello Miniflare!");
    }
  }
  `,
});

const res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.text()); // Hello Miniflare!
await mf.dispose();
```

The [rest of these docs](https://developers.cloudflare.com/workers/testing/miniflare/core/fetch) go into more detail on configuring specific features.

### String and File Scripts

Note in the above example we're specifying `script` as a string. We could've equally put the script in a file such as `worker.js`, then used the `scriptPath` property instead:

JavaScript

```
const mf = new Miniflare({
  scriptPath: "worker.js",
});
```

### Watching, Reloading and Disposing

Miniflare's API is primarily intended for testing use cases, where file watching isn't usually required. If you need to watch files, consider using a separate file watcher like [fs.watch() ↗](https://nodejs.org/api/fs.html#fswatchfilename-options-listener) or [chokidar ↗](https://github.com/paulmillr/chokidar), and calling `setOptions()` with your original configuration on change.

To clean up and stop listening for requests, you should `dispose()` your instances:

JavaScript

```
await mf.dispose();
```

You can also manually reload scripts (main and Durable Objects') and options by calling `setOptions()` with the original configuration object.

### Updating Options and the Global Scope

You can use the `setOptions` method to update the options of an existing `Miniflare` instance. This accepts the same options object as the `new Miniflare()` constructor, applies those options, then reloads the worker.

JavaScript

```
const mf = new Miniflare({
  script: "...",
  kvNamespaces: ["TEST_NAMESPACE"],
  bindings: { KEY: "value1" },
});

await mf.setOptions({
  script: "...",
  kvNamespaces: ["TEST_NAMESPACE"],
  bindings: { KEY: "value2" },
});
```

### Dispatching Events

The Worker handle returned by `getWorker()` can be used to dispatch `fetch`, `queue`, and `scheduled` events to your Worker:

JavaScript

```
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `
  let lastScheduledController;
  let lastQueueBatch;
  export default {
    async fetch(request, env, ctx) {
      const { pathname } = new URL(request.url);
      if (pathname === "/scheduled") {
        return Response.json({
          scheduledTime: lastScheduledController?.scheduledTime,
          cron: lastScheduledController?.cron,
        });
      } else if (pathname === "/queue") {
        return Response.json({
          queue: lastQueueBatch.queue,
          messages: lastQueueBatch.messages.map((message) => ({
            id: message.id,
            timestamp: message.timestamp.getTime(),
            body: message.body,
            bodyType: message.body.constructor.name,
          })),
        });
      } else if (pathname === "/get-url") {
        return new Response(request.url);
      } else {
        return new Response(null, { status: 404 });
      }
    },
    async scheduled(controller, env, ctx) {
      lastScheduledController = controller;
      if (controller.cron === "* * * * *") controller.noRetry();
    },
    async queue(batch, env, ctx) {
      lastQueueBatch = batch;
      if (batch.queue === "needy") batch.retryAll();
      for (const message of batch.messages) {
        if (message.id === "perfect") message.ack();
      }
    }
  }`,
});

const res = await mf.dispatchFetch("http://localhost:8787/get-url");
console.log(await res.text()); // http://localhost:8787/get-url

const worker = await mf.getWorker();

const scheduledResult = await worker.scheduled({
  cron: "* * * * *",
});
console.log(scheduledResult); // { outcome: "ok", noRetry: true }

const queueResult = await worker.queue("needy", [
  { id: "a", timestamp: new Date(1000), body: "a", attempts: 1 },
  { id: "b", timestamp: new Date(2000), body: { b: 1 }, attempts: 1 },
]);
console.log(queueResult); // { outcome: "ok", retryAll: true, ackAll: false, explicitRetries: [], explicitAcks: [] }
```

See [📨 Fetch Events](https://developers.cloudflare.com/workers/testing/miniflare/core/fetch) and [⏰ Scheduled Events](https://developers.cloudflare.com/workers/testing/miniflare/core/scheduled) for more details.

### HTTP Server

Miniflare starts an HTTP server automatically. To wait for it to be ready, `await` the `ready` property:

JavaScript

```
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `
  export default {
    async fetch(request, env, ctx) {
      return new Response("Hello Miniflare!");
    }
  }
  `,
  port: 5000,
});
await mf.ready;
console.log("Listening on :5000");
```

#### `Request#cf` Object

By default, Miniflare will fetch the `Request#cf` object from a trusted Cloudflare endpoint. You can disable this behaviour using the `cf` option:

JavaScript

```
const mf = new Miniflare({
  cf: false,
});
```

You can also provide a custom `cf` object via a file path:

JavaScript

```
const mf = new Miniflare({
  cf: "cf.json",
});
```

### HTTPS Server

To start an HTTPS server instead, set the `https` option. To use the [default shared self-signed certificate ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare/src/http/cert.ts), set `https` to `true`:

JavaScript

```
const mf = new Miniflare({
  https: true,
});
```

To load an existing certificate from the file system:

JavaScript

```
const mf = new Miniflare({
  // These are all optional, you don't need to include them all
  httpsKeyPath: "./key.pem",
  httpsCertPath: "./cert.pem",
});
```

To load an existing certificate from strings instead:

JavaScript

```
const mf = new Miniflare({
  // These are all optional, you don't need to include them all
  httpsKey: "-----BEGIN RSA PRIVATE KEY-----...",
  httpsCert: "-----BEGIN CERTIFICATE-----...",
});
```

If both a string and a path are specified for an option (e.g. `httpsKey` and `httpsKeyPath`), the string will be preferred.

### Logging

By default, `[mf:*]` logs are disabled when using the API. To enable these, set the `log` property to an instance of the `Log` class. Its only parameter is a log level indicating which messages should be logged:

JavaScript

```
import { Miniflare, Log, LogLevel } from "miniflare";

const mf = new Miniflare({
  scriptPath: "worker.js",
  log: new Log(LogLevel.DEBUG), // Enable debug messages
});
```

## Reference

JavaScript

```

import { Miniflare, Log, LogLevel } from "miniflare";


const mf = new Miniflare({

  // All options are optional, but one of script or scriptPath is required


  log: new Log(LogLevel.INFO), // Logger Miniflare uses for debugging


  script: `

    export default {

      async fetch(request, env, ctx) {

        return new Response("Hello Miniflare!");

      }

    }

  `,

  scriptPath: "./index.js",


  modules: true, // Enable modules

  modulesRules: [

    // Modules import rule

    { type: "ESModule", include: ["**/*.js"], fallthrough: true },

    { type: "Text", include: ["**/*.text"] },

  ],

  compatibilityDate: "2021-11-23", // Opt into backwards-incompatible changes from

  compatibilityFlags: ["formdata_parser_supports_files"], // Control specific backwards-incompatible changes

  upstream: "https://miniflare.dev", // URL of upstream origin

  workers: [{

    // reference additional named workers

    name: "worker2",

    kvNamespaces: { COUNTS: "counts" },

    serviceBindings: {

      INCREMENTER: "incrementer",

      // Service bindings can also be defined as custom functions, with access

      // to anything defined outside Miniflare.

      async CUSTOM(request) {

        // `request` is the incoming `Request` object.

        return new Response(message);

      },

    },

    modules: true,

    script: `export default {

        async fetch(request, env, ctx) {

          // Get the message defined outside

          const response = await env.CUSTOM.fetch("http://host/");

          const message = await response.text();


          // Increment the count 3 times

          await env.INCREMENTER.fetch("http://host/");

          await env.INCREMENTER.fetch("http://host/");

          await env.INCREMENTER.fetch("http://host/");

          const count = await env.COUNTS.get("count");


          return new Response(message + count);

        }

      }`,

    },

  }],

  name: "worker", // Name of service

  routes: ["*site.mf/worker"],


  host: "127.0.0.1", // Host for HTTP(S) server to listen on

  port: 8787, // Port for HTTP(S) server to listen on

  https: true, // Enable self-signed HTTPS (with optional cert path)

  httpsKey: "-----BEGIN RSA PRIVATE KEY-----...",

  httpsKeyPath: "./key.pem", // Path to PEM SSL key

  httpsCert: "-----BEGIN CERTIFICATE-----...",

  httpsCertPath: "./cert.pem", // Path to PEM SSL cert chain

  cf: "./node_modules/.mf/cf.json", // Path for cached Request cf object from Cloudflare

  liveReload: true, // Reload HTML pages whenever worker is reloaded


  kvNamespaces: ["TEST_NAMESPACE"], // KV namespace to bind

  kvPersist: "./kv-data", // Persist KV data (to optional path)


  r2Buckets: ["BUCKET"], // R2 bucket to bind

  r2Persist: "./r2-data", // Persist R2 data (to optional path)


  durableObjects: {

    // Durable Object to bind

    TEST_OBJECT: "TestObject", // className

    API_OBJECT: { className: "ApiObject", scriptName: "api" },

  },

  durableObjectsPersist: "./durable-objects-data", // Persist Durable Object data (to optional path)


  cache: false, // Enable default/named caches (enabled by default)

  cachePersist: "./cache-data", // Persist cached data (to optional path)

  cacheWarnUsage: true, // Warn on cache usage, for workers.dev subdomains


  sitePath: "./site", // Path to serve Workers Site files from

  siteInclude: ["**/*.html", "**/*.css", "**/*.js"], // Glob pattern of site files to serve

  siteExclude: ["node_modules"], // Glob pattern of site files not to serve


  bindings: { SECRET: "sssh" }, // Binds variable/secret to environment

  wasmBindings: { ADD_MODULE: "./add.wasm" }, // WASM module to bind

  textBlobBindings: { TEXT: "./text.txt" }, // Text blob to bind

  dataBlobBindings: { DATA: "./data.bin" }, // Data blob to bind

});


await mf.setOptions({ kvNamespaces: ["TEST_NAMESPACE2"] }); // Apply options and reload


const bindings = await mf.getBindings(); // Get bindings (KV/Durable Object namespaces, variables, etc)


// Dispatch "fetch" event to worker

const res = await mf.dispatchFetch("http://localhost:8787/", {

  headers: { Authorization: "Bearer ..." },

});

const text = await res.text();


const worker = await mf.getWorker();


// Dispatch "scheduled" event to worker

const scheduledResult = await worker.scheduled({ cron: "30 * * * *" });


const TEST_NAMESPACE = await mf.getKVNamespace("TEST_NAMESPACE");


const BUCKET = await mf.getR2Bucket("BUCKET");


const caches = await mf.getCaches(); // Get global `CacheStorage` instance

const defaultCache = caches.default;

const namedCache = await caches.open("name");


// Get Durable Object namespace and storage for ID

const TEST_OBJECT = await mf.getDurableObjectNamespace("TEST_OBJECT");

const id = TEST_OBJECT.newUniqueId();

const storage = await mf.getDurableObjectStorage(id);


// Get Queue Producer

const producer = await mf.getQueueProducer("QUEUE_BINDING");


// Get D1 Database

const db = await mf.getD1Database("D1_BINDING");


await mf.dispose(); // Clean up storage, database connections, and watchers


```


---

---
title: Migrating from Version 2
description: Miniflare v3 now uses workerd, the open-source Cloudflare Workers runtime. This is the same runtime that's deployed on Cloudflare's network, giving bug-for-bug compatibility and practically eliminating behavior mismatches. Refer to the Miniflare v3 and Wrangler v3 announcements for more information.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrating from Version 2

Miniflare v3 now uses [workerd ↗](https://github.com/cloudflare/workerd), the open-source Cloudflare Workers runtime. This is the same runtime that's deployed on Cloudflare's network, giving bug-for-bug compatibility and practically eliminating behavior mismatches. Refer to the [Miniflare v3 ↗](https://blog.cloudflare.com/miniflare-and-workerd/) and [Wrangler v3 ↗](https://blog.cloudflare.com/wrangler3/) announcements for more information.

## CLI Changes

Miniflare v3 no longer includes a standalone CLI. To get the same functionality, you will need to switch over to [Wrangler](https://developers.cloudflare.com/workers/wrangler/). Wrangler v3 uses Miniflare v3 by default. To start a local development server, run:

Terminal window

```

$ npx wrangler@3 dev


```

If there are features from the Miniflare CLI you would like to see in Wrangler, please open an issue on [GitHub ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose).

## API Changes

We have tried to keep Miniflare v3's API close to Miniflare v2 where possible, but many options and methods have been removed or changed with the switch to the open-source `workerd` runtime. See the [Getting Started guide](https://developers.cloudflare.com/workers/testing/miniflare/get-started) for the new API documentation.

### Updated Options

* `kvNamespaces/r2Buckets/d1Databases`  
   * In addition to `string[]`s, these options now accept `Record<string, string>`s, mapping binding names to namespace IDs/bucket names/database IDs. This means multiple Workers can bind to the same namespace/bucket/database under different names.
* `queueBindings`  
   * Renamed to `queueProducers`. This either accepts a `Record<string, string>` mapping binding names to queue names, or a `string[]` of binding names to queues of the same name.
* `queueConsumers`  
   * Either accepts a `Record<string, QueueConsumerOptions>` mapping queue names to consumer options, or a `string[]` of queue names to consume with default options. `QueueConsumerOptions` has the following type:  
   TypeScript  
   ```  
   interface QueueConsumerOptions {  
     // /queues/platform/configuration/#consumer  
     maxBatchSize?: number; // default: 5  
     maxBatchTimeout?: number /* seconds */; // default: 1  
     maxRetries?: number; // default: 2  
     deadLetterQueue?: string; // default: none  
   }  
   ```
* `cfFetch`  
   * Renamed to `cf`. Either accepts a `boolean`, `string` (as before), or an object to use as the `cf` object for incoming requests.
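Put together, a v3 configuration using these updated options might look like the following sketch (the binding names, namespace IDs, and queue names here are illustrative):

```javascript
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `export default { fetch: () => new Response("ok") }`,
  // Record form: two binding names can point at the same namespace
  kvNamespaces: { CACHE_A: "shared-namespace", CACHE_B: "shared-namespace" },
  r2Buckets: { UPLOADS: "uploads-bucket" },
  d1Databases: { DB: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" },
  queueProducers: { MY_QUEUE: "my-queue" }, // formerly `queueBindings`
  queueConsumers: { "my-queue": { maxBatchSize: 10, maxBatchTimeout: 2 } },
  cf: { colo: "SJC" }, // formerly `cfFetch`; an object is used as the request's `cf`
});

await mf.dispose();
```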

### Removed Options

* `wranglerConfigPath/wranglerConfigEnv`  
   * Miniflare no longer handles Wrangler's configuration. To programmatically start up a Worker based on Wrangler configuration, use the [unstable\_dev()](https://developers.cloudflare.com/workers/wrangler/api/#unstable%5Fdev) API.
* `packagePath`  
   * Miniflare no longer loads script paths from `package.json` files. Use the `scriptPath` option to specify your script instead.
* `watch`  
   * Miniflare's API is primarily intended for testing use cases, where file watching isn't usually required. This option was here to enable Miniflare's CLI which has now been removed. If you need to watch files, consider using a separate file watcher like [fs.watch() ↗](https://nodejs.org/api/fs.html#fswatchfilename-options-listener) or [chokidar ↗](https://github.com/paulmillr/chokidar), and calling `setOptions()` with your original configuration on change.
* `logUnhandledRejections`  
   * Unhandled rejections can be handled in Workers with [addEventListener("unhandledrejection") ↗](https://community.cloudflare.com/t/2021-10-21-workers-runtime-release-notes/318571).
* `globals`  
   * Injecting arbitrary globals is not supported by [workerd ↗](https://github.com/cloudflare/workerd). If you're using a service worker, `bindings` will be injected as globals, but these must be JSON-serialisable.
* `https/httpsKey(Path)/httpsCert(Path)/httpsPfx(Path)/httpsPassphrase`  
   * Miniflare does not support starting HTTPS servers yet. These options may be added back in a future release.
* `crons`  
   * [workerd ↗](https://github.com/cloudflare/workerd) does not support triggering scheduled events yet. This option may be added back in a future release.
* `mounts`  
   * Miniflare no longer has the concept of parent and child Workers. Instead, all Workers can be defined at the same level, using the new `workers` option. Here's an example that uses a service binding to increment a value in a shared KV namespace:  
   TypeScript  
   ```  
   import { Miniflare, Response } from "miniflare";  
   const message = "The count is ";  
   const mf = new Miniflare({  
     // Options shared between Workers such as HTTP and persistence configuration  
     // should always be defined at the top level.  
     host: "0.0.0.0",  
     port: 8787,  
     kvPersist: true,  
     workers: [  
       {  
         name: "worker",  
         kvNamespaces: { COUNTS: "counts" },  
         serviceBindings: {  
           INCREMENTER: "incrementer",  
           // Service bindings can also be defined as custom functions, with access  
           // to anything defined outside Miniflare.  
           async CUSTOM(request) {  
             // `request` is the incoming `Request` object.  
             return new Response(message);  
           },  
         },  
         modules: true,  
         script: `export default {  
           async fetch(request, env, ctx) {  
             // Get the message defined outside  
             const response = await env.CUSTOM.fetch("http://host/");  
             const message = await response.text();  
             // Increment the count 3 times  
             await env.INCREMENTER.fetch("http://host/");  
             await env.INCREMENTER.fetch("http://host/");  
             await env.INCREMENTER.fetch("http://host/");  
             const count = await env.COUNTS.get("count");  
             return new Response(message + count);  
           }  
         }`,  
       },  
       {  
         name: "incrementer",  
         // Note we're using the same `COUNTS` namespace as before, but binding it  
         // to `NUMBERS` instead.  
         kvNamespaces: { NUMBERS: "counts" },  
         // Worker formats can be mixed-and-matched  
         script: `addEventListener("fetch", (event) => {  
           event.respondWith(handleRequest());  
         })  
         async function handleRequest() {  
           const count = parseInt((await NUMBERS.get("count")) ?? "0") + 1;  
           await NUMBERS.put("count", count.toString());  
           return new Response(count.toString());  
         }`,  
       },  
     ],  
   });  
   const res = await mf.dispatchFetch("http://localhost");  
   console.log(await res.text()); // "The count is 3"  
   await mf.dispose();  
   ```
* `metaProvider`  
   * The `cf` object and `X-Forwarded-Proto`/`X-Real-IP` headers can be specified when calling `dispatchFetch()` instead. A default `cf` object can be specified using the new `cf` option too.
* `durableObjectAlarms`  
   * Miniflare now always enables Durable Object alarms.
* `globalAsyncIO/globalTimers/globalRandom`  
   * [workerd ↗](https://github.com/cloudflare/workerd) cannot support these options without fundamental changes.
* `actualTime`  
   * Miniflare now always returns the current time.
* `inaccurateCpu`  
   * Set the `inspectorPort: 9229` option to enable the V8 inspector. Visit `chrome://inspect` in Google Chrome to open DevTools and perform CPU profiling.

### Updated Methods

* `setOptions()`  
   * Miniflare v3 now requires a full configuration object to be passed, instead of a partial patch.
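In practice, this means keeping the complete configuration object around so it can be passed again. A minimal sketch:

```javascript
import { Miniflare } from "miniflare";

// Keep the full configuration in a variable so it can be re-passed later.
const options = {
  modules: true,
  script: `export default { fetch: () => new Response("v1") }`,
};
const mf = new Miniflare(options);

// Miniflare v2 accepted a partial patch here; v3 requires the whole object:
await mf.setOptions({
  ...options,
  script: `export default { fetch: () => new Response("v2") }`,
});

await mf.dispose();
```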

### Removed Methods

* `reload()`  
   * Call `setOptions()` with the original configuration object to reload Miniflare.
* `createServer()/startServer()`  
   * Miniflare now always starts a [workerd ↗](https://github.com/cloudflare/workerd) server listening on the configured `host` and `port`, so these methods are redundant.
* `dispatchScheduled()/startScheduled()`  
   * The functionality of `dispatchScheduled` can now be done via `getWorker()`. For more information read the [scheduled events documentation](https://developers.cloudflare.com/workers/testing/miniflare/core/scheduled#dispatching-events).
* `dispatchQueue()`  
   * Use the `queue()` method on [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings) or [queue producer bindings](https://developers.cloudflare.com/queues/configuration/configure-queues/#producer-worker-configuration) instead.
* `getGlobalScope()/getBindings()/getModuleExports()`  
   * These methods returned objects from inside the Workers sandbox. Since Miniflare now uses [workerd ↗](https://github.com/cloudflare/workerd), which runs in a different process, these methods can no longer be supported.
* `addEventListener()`/`removeEventListener()`  
   * Miniflare no longer emits `reload` events. As Miniflare no longer watches files, reloads are only triggered by initialisation or `setOptions()` calls. In these cases, it's possible to wait for the reload with either `await mf.ready` or `await mf.setOptions()` respectively.
* `Response#waitUntil()`  
   * [workerd ↗](https://github.com/cloudflare/workerd) does not support waiting for all `waitUntil()`ed promises yet.
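As an example of the `dispatchScheduled()` replacement described above, scheduled events are now dispatched through a worker handle (this sketch assumes `mf` is a configured `Miniflare` instance whose Worker defines a `scheduled` handler):

```javascript
// v2: await mf.dispatchScheduled(Date.now(), "30 * * * *");
// v3: get a handle to the Worker, then dispatch the event through it.
const worker = await mf.getWorker();
const scheduledResult = await worker.scheduled({ cron: "30 * * * *" });
```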

### Removed Packages

* `@miniflare/*`  
   * Miniflare is now contained within a single `miniflare` package.


---

---
title: Cache
description: Access to the default cache is enabled by default:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Cache

* [Cache Reference](https://developers.cloudflare.com/workers/runtime-apis/cache)
* [How the Cache works](https://developers.cloudflare.com/workers/reference/how-the-cache-works/#cache-api) (note that caching with `fetch` is unsupported)

## Default Cache

Access to the default cache is enabled by default:

JavaScript

```

addEventListener("fetch", (e) => {

  e.respondWith(caches.default.match("http://miniflare.dev"));

});


```

## Named Caches

You can access a namespaced cache using `open`. Note that you cannot name your cache `default`; trying to do so will throw an error:

JavaScript

```

await caches.open("cache_name");


```
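Once opened, a named cache exposes the same `put`/`match`/`delete` API as the default cache. A minimal sketch of a Worker caching its own responses in a named cache:

```javascript
export default {
  async fetch(request) {
    const cache = await caches.open("cache_name");
    // Serve from the named cache if this URL has been stored before...
    let response = await cache.match(request.url);
    if (!response) {
      // ...otherwise build a response and store a copy for next time.
      response = new Response("cached at " + Date.now(), {
        headers: { "Cache-Control": "max-age=3600" },
      });
      await cache.put(request.url, response.clone());
    }
    return response;
  },
};
```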

## Persistence

By default, cached data is stored in memory. It will persist between reloads, but not across different `Miniflare` instances. To enable persistence to the file system, specify the cache persistence option:

JavaScript

```

const mf = new Miniflare({

  cachePersist: true, // Defaults to ./.mf/cache

  // cachePersist: "./data", // Or a custom path

});


```

## Manipulating Outside Workers

For testing, it can be useful to put/match data from cache outside a Worker. You can do this with the `getCaches` method:

JavaScript

```

import { Miniflare, Response } from "miniflare";


const mf = new Miniflare({

  modules: true,

  script: `

  export default {

    async fetch(request) {

      const url = new URL(request.url);

      const cache = caches.default;

      if(url.pathname === "/put") {

        await cache.put("https://miniflare.dev/", new Response("1", {

          headers: { "Cache-Control": "max-age=3600" },

        }));

      }

      return cache.match("https://miniflare.dev/");

    }

  }

  `,

});

let res = await mf.dispatchFetch("http://localhost:8787/put");

console.log(await res.text()); // 1


const caches = await mf.getCaches(); // Gets the global caches object

const cachedRes = await caches.default.match("https://miniflare.dev/");

console.log(await cachedRes.text()); // 1


await caches.default.put(

  "https://miniflare.dev",

  new Response("2", {

    headers: { "Cache-Control": "max-age=3600" },

  }),

);

res = await mf.dispatchFetch("http://localhost:8787");

console.log(await res.text()); // 2


```

## Disabling

Both default and named caches can be disabled with the `cache` option. When disabled, the caches will still be available in the sandbox; they just won't cache anything. This may be useful during development:

JavaScript

```

const mf = new Miniflare({

  cache: false,

});


```


---

---
title: D1
description: Specify D1 Databases to add to your environment as follows:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# D1

* [D1 Reference](https://developers.cloudflare.com/d1/)

## Databases

Specify D1 Databases to add to your environment as follows:

JavaScript

```

const mf = new Miniflare({

  d1Databases: {

    DB: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",

  },

});


```

## Working with D1 Databases

For testing, it can be useful to put/get data from D1 storage bound to a Worker. You can do this with the `getD1Database` method:

JavaScript

```

const db = await mf.getD1Database("DB");

const stmt = await db.prepare("<Query>");

const returnValue = await stmt.run();


console.log(returnValue.results);


```
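A fuller sketch, mirroring the KV and R2 pages, seeds the database from outside the Worker and then queries it through a request (the database name, table, and SQL here are illustrative):

```javascript
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  script: `export default {
    async fetch(request, env) {
      const { results } = await env.DB.prepare("SELECT * FROM users").all();
      return Response.json(results);
    }
  }`,
  d1Databases: { DB: "test-db" },
});

// Seed the database from outside the Worker...
const db = await mf.getD1Database("DB");
await db.exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");
await db.prepare("INSERT INTO users (name) VALUES (?)").bind("alice").run();

// ...then query it through a request.
const res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.json()); // [ { id: 1, name: "alice" } ]

await mf.dispose();
```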


---

---
title: Durable Objects
description: Specify Durable Objects to add to your environment as follows:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Durable Objects

* [Durable Objects Reference](https://developers.cloudflare.com/durable-objects/api/)
* [Using Durable Objects](https://developers.cloudflare.com/durable-objects/)

## Objects

Specify Durable Objects to add to your environment as follows:

JavaScript

```

const mf = new Miniflare({

  modules: true,

  script: `

  export class Object1 {

    async fetch(request) {

      ...

    }

  }

  export default {

    fetch(request) {

      ...

    }

  }

  `,

  durableObjects: {

    // Note Object1 is exported from main (string) script

    OBJECT1: "Object1",

  },

});


```

## Persistence

By default, Durable Object data is stored in memory. It will persist between reloads, but not across different `Miniflare` instances. To enable persistence to the file system, specify the Durable Object persistence option:

JavaScript

```

const mf = new Miniflare({

  durableObjectsPersist: true, // Defaults to ./.mf/do

  // durableObjectsPersist: "./data", // Or a custom path

});


```

## Manipulating Outside Workers

For testing, it can be useful to make requests to your Durable Objects from outside a Worker. You can do this with the `getDurableObjectNamespace` method:

JavaScript

```

import { Miniflare } from "miniflare";


const mf = new Miniflare({

  modules: true,

  durableObjects: { TEST_OBJECT: "TestObject" },

  script: `

  export class TestObject {

    constructor(state) {

      this.storage = state.storage;

    }


    async fetch(request) {

      const url = new URL(request.url);

      if (url.pathname === "/put") await this.storage.put("key", 1);

      return new Response((await this.storage.get("key")).toString());

    }

  }


  export default {

    async fetch(request, env) {

      const stub = env.TEST_OBJECT.getByName("test");

      return stub.fetch(request);

    }

  }

  `,

});


const ns = await mf.getDurableObjectNamespace("TEST_OBJECT");

const stub = ns.getByName("test");

const doRes = await stub.fetch("http://localhost:8787/put");

console.log(await doRes.text()); // "1"


const res = await mf.dispatchFetch("http://localhost:8787/");

console.log(await res.text()); // "1"


```

## Using a Class Exported by Another Script

Miniflare supports the `script_name` option for accessing Durable Objects exported by other scripts. This requires mounting the other worker as described in [🔌 Multiple Workers](https://developers.cloudflare.com/workers/testing/miniflare/core/multiple-workers).
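With Miniflare v3, this looks roughly like the sketch below, where an entry Worker binds a Durable Object class exported by a second Worker defined through the `workers` option (the Worker and class names here are illustrative):

```javascript
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  workers: [
    {
      name: "entry",
      modules: true,
      durableObjects: {
        // `ApiObject` is exported by the Worker named "api" below
        API_OBJECT: { className: "ApiObject", scriptName: "api" },
      },
      script: `export default {
        fetch(request, env) {
          const stub = env.API_OBJECT.getByName("singleton");
          return stub.fetch(request);
        }
      }`,
    },
    {
      name: "api",
      modules: true,
      script: `
      export class ApiObject {
        fetch() {
          return new Response("from ApiObject");
        }
      }
      export default { fetch: () => new Response(null, { status: 404 }) };
      `,
    },
  ],
});

const res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.text()); // "from ApiObject"

await mf.dispose();
```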


---

---
title: KV
description: Specify KV namespaces to add to your environment as follows:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# KV

* [KV Reference](https://developers.cloudflare.com/kv/api/)

## Namespaces

Specify KV namespaces to add to your environment as follows:

JavaScript

```

const mf = new Miniflare({

  kvNamespaces: ["TEST_NAMESPACE1", "TEST_NAMESPACE2"],

});


```

You can now access KV namespaces in your workers:

JavaScript

```

export default {

  async fetch(request, env) {

    return new Response(await env.TEST_NAMESPACE1.get("key"));

  },

};


```

Miniflare supports all KV operations and data types.
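For instance, expiration and metadata options work as they do on the platform. A brief sketch (assuming `mf` is the `Miniflare` instance configured above):

```javascript
const ns = await mf.getKVNamespace("TEST_NAMESPACE1");

// Values can carry a TTL and arbitrary JSON metadata...
await ns.put("session", "token", {
  expirationTtl: 3600, // seconds
  metadata: { createdBy: "test" },
});
const { value, metadata } = await ns.getWithMetadata("session");

// ...and non-text data types can be written and read directly:
await ns.put("blob", new Uint8Array([1, 2, 3]));
const buffer = await ns.get("blob", "arrayBuffer");
```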

## Manipulating Outside Workers

For testing, it can be useful to put/get data from KV outside a worker. You can do this with the `getKVNamespace` method:

JavaScript

```

import { Miniflare } from "miniflare";


const mf = new Miniflare({

  modules: true,

  script: `

  export default {

    async fetch(request, env, ctx) {

      const value = parseInt(await env.TEST_NAMESPACE.get("count")) + 1;

      await env.TEST_NAMESPACE.put("count", value.toString());

      return new Response(value.toString());

    },

  }

  `,

  kvNamespaces: ["TEST_NAMESPACE"],

});


const ns = await mf.getKVNamespace("TEST_NAMESPACE");

await ns.put("count", "1");


const res = await mf.dispatchFetch("http://localhost:8787/");

console.log(await res.text()); // 2

console.log(await ns.get("count")); // 2


```


---

---
title: R2
description: Specify R2 Buckets to add to your environment as follows:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# R2

* [R2 Reference](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/)

## Buckets

Specify R2 Buckets to add to your environment as follows:

JavaScript

```

const mf = new Miniflare({

  r2Buckets: ["BUCKET1", "BUCKET2"],

});


```

## Manipulating Outside Workers

For testing, it can be useful to put/get data from R2 storage outside a worker. You can do this with the `getR2Bucket` method:

JavaScript

```

import { Miniflare } from "miniflare";


const mf = new Miniflare({

  modules: true,

  script: `

  export default {

    async fetch(request, env, ctx) {

      const object = await env.BUCKET.get("count");

      const value = parseInt(await object.text()) + 1;

      await env.BUCKET.put("count", value.toString());

      return new Response(value.toString());

    }

  }

  `,

  r2Buckets: ["BUCKET"],

});


const bucket = await mf.getR2Bucket("BUCKET");

await bucket.put("count", "1");


const res = await mf.dispatchFetch("http://localhost:8787/");

console.log(await res.text()); // 2

console.log(await (await bucket.get("count")).text()); // 2


```


---

---
title: Writing tests
description: Write integration tests against Workers using Miniflare.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Writing tests

Note

For most users, Cloudflare recommends using the Workers Vitest integration. If you have been using test environments from Miniflare, refer to the [Migrate from Miniflare 2 guide](https://developers.cloudflare.com/workers/testing/vitest-integration/migration-guides/migrate-from-miniflare-2/).

This guide will show you how to set up [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare) to test your Workers. Miniflare is a low-level API that allows you to fully control how your Workers are run and tested.

To use Miniflare, make sure you've installed the latest version of Miniflare v3:

npm

```
npm i -D miniflare@latest
```

yarn

```
yarn add -D miniflare@latest
```

pnpm

```
pnpm add -D miniflare@latest
```

bun

```
bun add -d miniflare@latest
```

The rest of this guide demonstrates concepts with the [node:test ↗](https://nodejs.org/api/test.html) testing framework, but any testing framework can be used.

Miniflare is a low-level API that exposes a large variety of configuration options for running your Worker. In most cases, your tests will only need a subset of the available options, but you can refer to the [full API reference](https://developers.cloudflare.com/workers/testing/miniflare/get-started/#reference) to explore what is possible with Miniflare.

Before writing a test, you will need to create a Worker. Since Miniflare is a low-level API that emulates the Cloudflare platform primitives, your Worker will need to be written in JavaScript or you'll need to [integrate your own build pipeline](#custom-builds) into your testing setup. Here's an example JavaScript-only Worker:

src/index.js

```

export default {

  async fetch(request) {

    return new Response(`Hello World`);

  },

};


```

Next, you will need to create an initial test file:

src/index.test.js

```

import assert from "node:assert";

import test, { after, before, describe } from "node:test";

import { Miniflare } from "miniflare";


describe("worker", () => {

  /**

   * @type {Miniflare}

   */

  let worker;


  before(async () => {

    worker = new Miniflare({

      modules: [

        {

          type: "ESModule",

          path: "src/index.js",

        },

      ],

    });

    await worker.ready;

  });


  test("hello world", async () => {

    assert.strictEqual(

      await (await worker.dispatchFetch("http://example.com")).text(),

      "Hello World",

    );

  });


  after(async () => {

    await worker.dispose();

  });

});


```

You should be able to run the above test via `node --test`.

The test file above demonstrates how to set up Miniflare to run a JavaScript Worker. Once Miniflare has been set up, your individual tests can send requests to the running Worker and assert against the responses. This is the main limitation of using Miniflare for testing your Worker compared to the [Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration/) — all access to your Worker must be through the `dispatchFetch()` Miniflare API, and you cannot unit test individual functions from your Worker.

What runtime are tests running in?

When using the [Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration/), your entire test suite runs in [workerd ↗](https://github.com/cloudflare/workerd), which is why it is possible to unit test individual functions. By contrast, when using a different testing framework to run tests via Miniflare, only your Worker itself is running in [workerd ↗](https://github.com/cloudflare/workerd) — your test files run in Node.js. This means that importing functions from your Worker into your test files might exhibit different behaviour than you'd see at runtime if the functions rely on `workerd`-specific behaviour.

## Interacting with Bindings

Warning

Miniflare does not read [Wrangler's config file](https://developers.cloudflare.com/workers/wrangler/configuration). All bindings that your Worker uses need to be specified in the Miniflare API options.

The `dispatchFetch()` API from Miniflare allows you to send requests to your Worker and assert that the correct response is returned, but sometimes you need to interact directly with bindings in tests. For use cases like that, Miniflare provides the [getBindings()](https://developers.cloudflare.com/workers/testing/miniflare/get-started/#reference) API. For instance, to access an environment variable in your tests, adapt the test file `src/index.test.js` as follows:

src/index.test.js

```

...

describe("worker", () => {
  ...
  before(async () => {
    worker = new Miniflare({
      ...
      bindings: {
        FOO: "Hello Bindings",
      },
    });
    ...
  });

  test("text binding", async () => {
    const bindings = await worker.getBindings();
    assert.strictEqual(bindings.FOO, "Hello Bindings");
  });
  ...
});
```

You can also interact with local resources such as KV and R2 using the same API as you would from a Worker. For example, here's how you would interact with a KV namespace:

src/index.test.js

```

...

describe("worker", () => {
  ...
  before(async () => {
    worker = new Miniflare({
      ...
      kvNamespaces: ["KV"],
    });
    ...
  });

  test("kv binding", async () => {
    const bindings = await worker.getBindings();
    await bindings.KV.put("key", "value");
    assert.strictEqual(await bindings.KV.get("key"), "value");
  });
  ...
});
```

## More complex Workers

The example given above shows how to test a simple Worker consisting of a single JavaScript file. However, most real-world Workers are more complex than that. Miniflare supports providing all constituent files of your Worker directly using the API:

JavaScript

```

new Miniflare({
  modules: [
    {
      type: "ESModule",
      path: "src/index.js",
    },
    {
      type: "ESModule",
      path: "src/imported.js",
    },
  ],
});
```

This can be a bit cumbersome as your Worker grows. To help with this, Miniflare can also crawl your module graph to automatically figure out which modules to include:

JavaScript

```

new Miniflare({
  scriptPath: "src/index-with-imports.js",
  modules: true,
  modulesRules: [{ type: "ESModule", include: ["**/*.js"] }],
});
```

## Custom builds

In many real-world cases, Workers are not written in plain JavaScript but instead consist of multiple TypeScript files that import from npm packages and other dependencies, which are then bundled by a build tool. When testing your Worker via Miniflare directly, you need to run this build tool before your tests. Exactly how this build is run will depend on the specific test framework you use, but for `node:test` it would likely be in a `before()` hook. For example, if you use [Wrangler](https://developers.cloudflare.com/workers/wrangler/) to build and deploy your Worker, you could spawn a `wrangler build` command like this:

JavaScript

```

before(() => {
  spawnSync("npx wrangler build -c wrangler-build.json", {
    shell: true,
    stdio: "pipe",
  });
});
```


---

---
title: Wrangler's unstable_startWorker()
description: Write integration tests using Wrangler's `unstable_startWorker()` API
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Wrangler's unstable\_startWorker()

Note

For most users, Cloudflare recommends using the Workers Vitest integration. If you have been using `unstable_dev()`, refer to the [Migrate from unstable\_dev() guide](https://developers.cloudflare.com/workers/testing/vitest-integration/migration-guides/migrate-from-unstable-dev/).

Warning

`unstable_startWorker()` is an experimental API subject to breaking changes.

If you do not want to use Vitest, consider using [Wrangler's unstable\_startWorker() API](https://developers.cloudflare.com/workers/wrangler/api/#unstable%5Fstartworker). This API exposes the internals of Wrangler's dev server and allows you to customize how it runs. Compared to using [Miniflare directly for testing](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests/), you can pass in a Wrangler configuration file, and it will automatically load the configuration for you.

This example uses `node:test`, but should apply to any testing framework:

TypeScript

```

import assert from "node:assert";
import test, { after, before, describe } from "node:test";
import { unstable_startWorker } from "wrangler";

describe("worker", () => {
  let worker;

  before(async () => {
    worker = await unstable_startWorker({ config: "wrangler.json" });
  });

  test("hello world", async () => {
    assert.strictEqual(
      await (await worker.fetch("http://example.com")).text(),
      "Hello world",
    );
  });

  after(async () => {
    await worker.dispose();
  });
});
```


---

---
title: Vitest integration
description: For most users, Cloudflare recommends using the Workers Vitest integration for testing Workers and Pages Functions projects. Vitest is a popular JavaScript testing framework featuring a very fast watch mode, Jest compatibility, and out-of-the-box support for TypeScript. In this integration, Cloudflare provides a custom pool that allows your Vitest tests to run inside the Workers runtime.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vitest integration

For most users, Cloudflare recommends using the Workers Vitest integration for testing Workers and [Pages Functions](https://developers.cloudflare.com/pages/functions/) projects. [Vitest ↗](https://vitest.dev/) is a popular JavaScript testing framework featuring a very fast watch mode, Jest compatibility, and out-of-the-box support for TypeScript. In this integration, Cloudflare provides a custom pool that allows your Vitest tests to run _inside_ the Workers runtime.

The Workers Vitest integration:

* Supports both **unit tests** and **integration tests**.
* Provides direct access to Workers runtime APIs and bindings.
* Implements isolated per-test-file storage.
* Runs tests fully-locally using [Miniflare ↗](https://miniflare.dev/).
* Leverages Vitest's hot-module reloading for near instant reruns.
* Supports projects with multiple Workers.

[ Write your first test ](https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/)


---

---
title: Configuration
description: Vitest configuration specific to the Workers integration.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Configuration

The Workers Vitest integration provides additional configuration on top of Vitest's usual options using the `cloudflareTest()` Vite plugin.

An example configuration would be:

TypeScript

```

import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      wrangler: {
        configPath: "./wrangler.jsonc",
      },
    }),
  ],
});
```

Warning

Custom Vitest `environment`s or `runner`s are not supported when using the Workers Vitest integration.

## APIs

The following APIs are exported from the `@cloudflare/vitest-pool-workers` package.

### `cloudflareTest(options)`

A Vite plugin that configures Vitest to use the Workers integration with the correct module resolution settings, and provides type checking for [CloudflareTestOptions](#cloudflaretestoptions). Add this to the `plugins` array in your Vitest config alongside [defineConfig() ↗](https://vitest.dev/config/file.html) from Vitest.

It also accepts a function, optionally `async`, returning `options`.

TypeScript

```

import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      // Refer to CloudflareTestOptions...
    }),
  ],
});
```

### `buildPagesASSETSBinding(assetsPath)`

Exported from `@cloudflare/vitest-pool-workers/config`. Creates a Pages ASSETS binding that serves files inside the `assetsPath`. This is required if you use `createPagesEventContext()` to test your **Pages Functions**. Refer to the [Pages recipe](https://developers.cloudflare.com/workers/testing/vitest-integration/recipes) for a full example.

TypeScript

```

import path from "node:path";
import { buildPagesASSETSBinding } from "@cloudflare/vitest-pool-workers/config";
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest(async () => {
      const assetsPath = path.join(__dirname, "public");

      return {
        miniflare: {
          serviceBindings: {
            ASSETS: await buildPagesASSETSBinding(assetsPath),
          },
        },
      };
    }),
  ],
});
```

### `readD1Migrations(migrationsPath)`

Exported from `@cloudflare/vitest-pool-workers/config`. Reads all [D1 migrations](https://developers.cloudflare.com/d1/reference/migrations/) stored at `migrationsPath` and returns them ordered by migration number. Each migration will have its contents split into an array of individual SQL queries. Call the [applyD1Migrations()](https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/#d1) function inside a test or [setup file ↗](https://vitest.dev/config/#setupfiles) to apply migrations. Refer to the [D1 recipe ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/d1) for an example project using migrations.

TypeScript

```

import path from "node:path";
import { readD1Migrations } from "@cloudflare/vitest-pool-workers/config";
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest(async () => {
      const migrationsPath = path.join(__dirname, "migrations");
      const migrations = await readD1Migrations(migrationsPath);

      return {
        miniflare: {
          // Add a test-only binding for migrations, so we can apply them in a setup file
          bindings: { TEST_MIGRATIONS: migrations },
        },
      };
    }),
  ],
  test: {
    setupFiles: ["./test/apply-migrations.ts"],
  },
});
```

## `CloudflareTestOptions`

Options passed directly to `cloudflareTest()`.

* `main`: string optional  
   * Entry point to Worker run in the same isolate/context as tests. This option is required to use Durable Objects without an explicit `scriptName` if classes are defined in the same Worker. This file goes through Vite transforms and can be TypeScript. Note that `import module from "<path-to-main>"` inside tests gives exactly the same `module` instance as is used internally for `exports` and Durable Object bindings. If `wrangler.configPath` is defined and this option is not, it will be read from the `main` field in that configuration file.
* `miniflare`: `SourcelessWorkerOptions & { workers?: WorkerOptions\[]; }` optional  
   * Use this to provide configuration information that is typically stored within the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), such as [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), [compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/), and [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/). The `WorkerOptions` interface is defined [here ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions). Use the `main` option above to configure the entry point, instead of the Miniflare `script`, `scriptPath`, or `modules` options.  
   * If your project makes use of multiple Workers, you can configure auxiliary Workers that run in the same `workerd` process as your tests and can be bound to. Auxiliary Workers are configured using the `workers` array, containing regular Miniflare [WorkerOptions ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions) objects. Note that unlike the `main` Worker, auxiliary Workers:  
         * Cannot have TypeScript entrypoints. You must compile auxiliary Workers to JavaScript first. You can use the [wrangler deploy --dry-run --outdir dist](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) command for this.  
         * Use regular Workers module resolution semantics. Refer to the [Isolation and concurrency](https://developers.cloudflare.com/workers/testing/vitest-integration/isolation-and-concurrency/#modules) page for more information.  
         * Cannot access the [cloudflare:test](https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/) module.  
         * Do not require specific compatibility dates or flags.  
         * Can be written with the [Service Worker syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/#service-worker-syntax).  
         * Are not affected by global mocks defined in your tests.
* `wrangler`: `{ configPath?: string; environment?: string; }` optional  
   * Path to [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) to load `main`, [compatibility settings](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) from. These options will be merged with the `miniflare` option above, with `miniflare` values taking precedence. For example, if your Wrangler configuration defined a [service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) named `SERVICE` to a Worker named `service`, but you included `serviceBindings: { SERVICE(request) { return new Response("body"); } }` in the `miniflare` option, all requests to `SERVICE` in tests would return `body`. Note `configPath` accepts both `.toml` and `.json` files.  
   * The `environment` option can be used to specify the [Wrangler environment](https://developers.cloudflare.com/workers/wrangler/environments/) to pick up bindings and variables from.
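As an illustration of this precedence (a sketch, assuming a `wrangler.jsonc` that declares a service binding named `SERVICE`), the override described above would look like:

TypeScript

```

import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      wrangler: { configPath: "./wrangler.jsonc" },
      miniflare: {
        // Takes precedence over the SERVICE binding from wrangler.jsonc:
        // every request to SERVICE in tests now returns "body".
        serviceBindings: {
          SERVICE(request) {
            return new Response("body");
          },
        },
      },
    }),
  ],
});
```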

## Dynamic configuration with `inject`

You can pass an `async` function to `cloudflareTest()` that receives an `inject` function. This allows you to define `miniflare` configuration based on injected values from [globalSetup ↗](https://vitest.dev/config/#globalsetup) scripts. Use this if you have a value in your configuration that is dynamically generated and only known at runtime of your tests. For example, a global setup script might start an upstream server on a random port. This port could be `provide()`d and then `inject()`ed in the configuration for an external service binding or [Hyperdrive](https://developers.cloudflare.com/hyperdrive/). Refer to the [Hyperdrive recipe ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/hyperdrive) for an example project using this provide/inject approach.

Illustrative example

TypeScript

```

// env.d.ts
declare module "vitest" {
  interface ProvidedContext {
    port: number;
  }
}

// global-setup.ts
import type { GlobalSetupContext } from "vitest/node";
export default function ({ provide }: GlobalSetupContext) {
  // Runs inside Node.js, could start server here...
  provide("port", 1337);
  return () => {
    /* ...then teardown here */
  };
}

// vitest.config.ts
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest(({ inject }) => ({
      miniflare: {
        hyperdrives: {
          DATABASE: `postgres://user:pass@example.com:${inject("port")}/db`,
        },
      },
    })),
  ],
  test: {
    globalSetup: ["./global-setup.ts"],
  },
});
```

## `SourcelessWorkerOptions`

Sourceless `WorkerOptions` type without `script`, `scriptPath`, or `modules` properties. Refer to the Miniflare [WorkerOptions ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions) type for more details.

TypeScript

```

type SourcelessWorkerOptions = Omit<
  WorkerOptions,
  "script" | "scriptPath" | "modules" | "modulesRoot"
>;
```


---

---
title: Debugging
description: Debug your Workers tests with Vitest.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Debugging

This guide shows you how to debug your Workers tests with Vitest. This is available with `@cloudflare/vitest-pool-workers` v0.7.5 or later.

## Open inspector with Vitest

To start debugging, run Vitest with the following command and attach a debugger to port `9229`:

Terminal window

```

vitest --inspect --no-file-parallelism
```

## Customize the inspector port

By default, the inspector will be opened on port `9229`. If you need to use a different port (for example, `3456`), you can run the following command:

Terminal window

```

vitest --inspect=3456 --no-file-parallelism
```

Alternatively, you can define it in your Vitest configuration file:

TypeScript

```

import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      // ...
    }),
  ],
  test: {
    inspector: {
      port: 3456,
    },
  },
});
```

## Setup VS Code to use breakpoints

To set up VS Code for breakpoint debugging in your Worker tests, create a `.vscode/launch.json` file that contains the following configuration:

```

{
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Open inspector with Vitest",
      "program": "${workspaceRoot}/node_modules/vitest/vitest.mjs",
      "console": "integratedTerminal",
      "args": ["--inspect=9229", "--no-file-parallelism"]
    },
    {
      "name": "Attach to Workers Runtime",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "cwd": "/",
      "resolveSourceMapLocations": null,
      "attachExistingChildren": false,
      "autoAttachChildProcesses": false
    }
  ],
  "compounds": [
    {
      "name": "Debug Workers tests",
      "configurations": [
        "Open inspector with Vitest",
        "Attach to Workers Runtime"
      ],
      "stopAll": true
    }
  ]
}
```

Select **Debug Workers tests** at the top of the **Run & Debug** panel to open an inspector with Vitest and attach a debugger to the Workers runtime. Then you can add breakpoints to your test files and start debugging.


---

---
title: Isolation and concurrency
description: Review how the Workers Vitest integration runs your tests, how it isolates tests from each other, and how it imports modules.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Isolation and concurrency

Review how the Workers Vitest integration runs your tests, how it isolates tests from each other, and how it imports modules.

## Run tests

When you run your tests with the Workers Vitest integration, Vitest will:

1. Read and evaluate your configuration file using Node.js.
2. Run any [globalSetup ↗](https://vitest.dev/config/#globalsetup) files using Node.js.
3. Collect and sequence test files.
4. For each Vitest project, depending on its configured isolation and concurrency, start one or more [workerd ↗](https://github.com/cloudflare/workerd) processes, each running one or more Workers.
5. Run [setupFiles ↗](https://vitest.dev/config/#setupfiles) and test files in `workerd` using the appropriate Workers.
6. Watch for changes and re-run test files using the same Workers if the configuration has not changed.

## Isolation model

Storage isolation is per test file. Each test file gets its own storage environment, and any writes to storage during a test file are not visible to other test files. The Workers Vitest integration reuses Workers and their module caches between test runs where possible. A copy of all auxiliary `workers` exists in each `workerd` process.

By default, test files run concurrently. To make test files share the same storage (for example, for integration tests that depend on shared state), use the Vitest flags `--max-workers=1 --no-isolate`.

## Modules

Each Worker has its own module cache. As Workers are reused between test runs, their module caches are also reused. Vitest invalidates parts of the module cache at the start of each test run based on changed files.

The Workers Vitest pool works by running code inside a Cloudflare Worker that Vitest would usually run inside a [Node.js Worker thread ↗](https://nodejs.org/api/worker%5Fthreads.html). To make this possible, the pool **automatically injects** the [nodejs\_compat](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag), `no_nodejs_compat_v2`, and [export\_commonjs\_default](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#commonjs-modules-do-not-export-a-module-namespace) compatibility flags. This is the minimal compatibility setup that still allows Vitest to run correctly, without pulling in polyfills and globals that are not required. If you already have a Node.js compatibility flag defined in your configuration, Vitest Pool Workers will not try to add these flags.

Warning

Using Vitest Pool Workers may cause your Worker to behave differently when deployed than during testing as the `nodejs_compat` flag is enabled by default. This means that Node.js-specific APIs and modules are available when running your tests. However, Cloudflare Workers do not support these Node.js APIs in the production environment unless you specify this flag in your Worker configuration.

If you do not have a `nodejs_compat` or `nodejs_compat_v2` flag in your configuration and you import a Node.js module in your Worker code, your tests may pass, but you will not be able to deploy the Worker: the upload call (either via the REST API or via Wrangler) will throw an error.

However, if you use Node.js globals that are not supported by the runtime, your Worker upload will succeed, but you may see errors in production code. The following contrived example illustrates the issue.

The Wrangler configuration file does not specify either `nodejs_compat` or `nodejs_compat_v2`:

* wrangler.jsonc
* wrangler.toml

```

{
  "name": "test",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03"
  // no nodejs_compat flags here
}
```

```

name = "test"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
```

In our `src/index.ts` file, we use the `process` object, which is a Node.js global that is unavailable in the `workerd` runtime:

TypeScript

```

export default {
  async fetch(request, env, ctx): Promise<Response> {
    process.env.TEST = "test";
    return new Response(process.env.TEST);
  },
} satisfies ExportedHandler<Env>;
```

The test is a simple assertion that the Worker managed to use `process`.

TypeScript

```

it('responds with "test"', async () => {
  const response = await exports.default.fetch("https://example.com/");
  expect(await response.text()).toMatchInlineSnapshot(`"test"`);
});
```
```

Now, if we run `npm run test`, we see that the tests will _pass_:

```

 ✓ test/index.spec.ts (1)
   ✓ responds with "test"

 Test Files  1 passed (1)
      Tests  1 passed (1)
```

And we can run `wrangler dev` and `wrangler deploy` without issues. It _looks like_ our code is fine. However, this code will fail in production because `process` is not available in the `workerd` runtime.

To fix the issue, we either need to avoid using Node.js APIs, or add the `nodejs_compat` flag to our Wrangler configuration.
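For example, the fixed Wrangler configuration might look like this (a sketch; only the compatibility flag is added relative to the configuration above):

```

{
  "name": "test",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"]
}
```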


---

---
title: Known issues
description: Explore the known issues associated with the Workers Vitest integration.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Known issues

The Workers Vitest pool is currently in open beta. The following are issues Cloudflare is aware of and fixing:

### Coverage

Native code coverage via [V8 ↗](https://v8.dev/blog/javascript-code-coverage) is not supported. You must use instrumented code coverage via [Istanbul ↗](https://istanbul.js.org/) instead. Refer to the [Vitest Coverage documentation ↗](https://vitest.dev/guide/coverage) for setup instructions.

### Fake timers

Vitest's [fake timers ↗](https://vitest.dev/guide/mocking.html#timers) do not apply to KV, R2 and cache simulators. For example, you cannot expire a KV key by advancing fake time.

### Dynamic `import()` statements with `exports` and Durable Objects

Dynamic `import()` statements do not work inside `export default { ... }` handlers when writing integration tests with `exports.default.fetch()`, or inside Durable Object event handlers. You must import and call your handlers directly, or use static `import` statements in the global scope.

### Durable Object alarms

Durable Object alarms are not reset between test runs and do not respect isolated storage. Ensure you delete or run all alarms scheduled in each test with [runDurableObjectAlarm()](https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/#durable-objects) before the test finishes.

### WebSockets

Using WebSockets with Durable Objects is not supported with per-file storage isolation. To work around this, run your tests with shared storage using `--max-workers=1 --no-isolate`.

### Storage isolation

Storage isolation is per test file. The test runner will undo any writes to storage at the end of each test file as detailed in the [isolation and concurrency documentation](https://developers.cloudflare.com/workers/testing/vitest-integration/isolation-and-concurrency/). Cloudflare recommends the following actions to avoid common issues:

#### Await all storage operations

Always `await` all `Promise`s that read or write to storage services.

TypeScript

```

// Example: Seed data
beforeAll(async () => {
  await env.KV.put("message", "test message");
  await env.R2.put("file", "hello-world");
});
```

#### Explicitly signal resource disposal

When calling RPC methods of a Service Worker or Durable Object that return non-primitive values (such as objects or classes extending `RpcTarget`), use the `using` keyword to explicitly signal when resources can be disposed of. See [this example test ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/rpc/test/unit.test.ts#L155) and refer to [explicit-resource-management](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle#explicit-resource-management) for more details.

TypeScript

```

using result = await stub.getCounter();
```

#### Consume response bodies

When making requests via `fetch` or `R2.get()`, consume the entire response body, even if you are not asserting its content. For example:

TypeScript

```
test("check if file exists", async () => {
  await env.R2.put("file", "hello-world");
  const response = await env.R2.get("file");

  expect(response).not.toBe(null);
  // Consume the response body even if you are not asserting it
  await response.text();
});
```
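The reason is that an unread body holds its underlying stream open, which can prevent resources from being released cleanly between tests. This standalone sketch uses the same WHATWG `Response` type (available in Node.js 18+ as well as `workerd`) to show what consuming a body does:

```javascript
// A Response body is a stream: it stays unconsumed until read to completion.
const response = new Response("hello-world");
console.log(response.bodyUsed); // false — nothing has read the stream yet

// Consuming the body (here via text()) drains and releases the stream.
const text = await response.text();
console.log(response.bodyUsed); // true
console.log(text); // "hello-world"
```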

### Missing properties on `ctx.exports`

The `ctx.exports` property provides access to the exports of the main Worker. The Workers Vitest integration attempts to automatically infer these exports by statically analyzing the Worker source code using esbuild. However, complex build setups, such as those using virtual modules or wildcard re-exports that esbuild cannot follow, may result in missing properties on the `ctx.exports` object.

For example, consider a Worker that re-exports an entrypoint from a virtual module using a wildcard export:

TypeScript

```
// index.ts
export * from "@virtual-module";
```

In this case, any exports from `@virtual-module` (such as `MyEntrypoint`) cannot be automatically inferred and will be missing from `ctx.exports`.

To work around this, add the `additionalExports` option to your Vitest configuration:

TypeScript

```
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      wrangler: { configPath: "./wrangler.jsonc" },
      additionalExports: {
        MyEntrypoint: "WorkerEntrypoint",
      },
    }),
  ],
});
```

The `additionalExports` option is a map where keys are the export names and values are the type of export (`"WorkerEntrypoint"`, `"DurableObject"`, or `"WorkflowEntrypoint"`).

### Module resolution

If you encounter module resolution issues such as: `Error: Cannot use require() to import an ES Module` or `Error: No such module`, you can bundle these dependencies using the [deps.optimizer ↗](https://vitest.dev/config/#deps-optimizer) option:

TypeScript

```
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      // ...
    }),
  ],
  test: {
    deps: {
      optimizer: {
        ssr: {
          enabled: true,
          include: ["your-package-name"],
        },
      },
    },
  },
});
```

You can find an example in the [Recipes](https://developers.cloudflare.com/workers/testing/vitest-integration/recipes) page.

### Importing modules from global setup file

Although Vitest is set up to resolve packages for the [workerd ↗](https://github.com/cloudflare/workerd) runtime, it runs your global setup file in the Node.js environment. This can cause issues when importing packages like [Postgres.js ↗](https://github.com/cloudflare/workers-sdk/issues/6465), which exports a non-Node version for `workerd`. To work around this, you can create a wrapper that uses Vite's SSR module loader to import the global setup file under the correct conditions. Then, adjust your Vitest configuration to point to this wrapper. For example:

TypeScript

```
// File: global-setup-wrapper.ts
import { createServer } from "vite";

// Import the actual global setup file with the correct setup
const mod = await viteImport("./global-setup.ts");

export default mod.default;

// Helper to import the file with default node setup
async function viteImport(file: string) {
  const server = await createServer({
    root: import.meta.dirname,
    configFile: false,
    server: { middlewareMode: true, hmr: false, watch: null, ws: false },
    optimizeDeps: { noDiscovery: true },
    clearScreen: false,
  });
  const mod = await server.ssrLoadModule(file);
  await server.close();
  return mod;
}
```

TypeScript

```

// File: vitest.config.ts

import { cloudflareTest } from "@cloudflare/vitest-pool-workers";

import { defineConfig } from "vitest/config";


export default defineConfig({

  plugins: [

    cloudflareTest({

      // ...

    }),

  ],

  test: {

    // Replace the globalSetup with the wrapper file

    globalSetup: ["./global-setup-wrapper.ts"],

  },

});


```


---

---
title: Migrate from Miniflare 2's test environments
description: Migrate from [Miniflare 2](https://github.com/cloudflare/miniflare?tab=readme-ov-file) to the Workers Vitest integration.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from Miniflare 2's test environments

[Miniflare 2 ↗](https://github.com/cloudflare/miniflare?tab=readme-ov-file) provided custom environments for Jest and Vitest in the `jest-environment-miniflare` and `vitest-environment-miniflare` packages respectively. The `@cloudflare/vitest-pool-workers` package provides similar functionality using modern Miniflare versions and the [workerd runtime ↗](https://github.com/cloudflare/workerd). `workerd` is the same JavaScript/WebAssembly runtime that powers Cloudflare Workers. Using `workerd` practically eliminates behavior mismatches between your tests and deployed code. Refer to the [Miniflare 3 announcement ↗](https://blog.cloudflare.com/miniflare-and-workerd) for more information.

Warning

Cloudflare no longer provides a Jest testing environment for Workers. If you previously used Jest, you will need to [migrate to Vitest ↗](https://vitest.dev/guide/migration.html#migrating-from-jest) first, then follow the rest of this guide. Vitest provides built-in support for TypeScript, ES modules, and hot-module reloading for tests.

Warning

The Workers Vitest integration does not support testing Workers using the service worker format. [Migrate to ES modules format](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) first.

## Install the Workers Vitest integration

First, you will need to uninstall the old environment and install the new pool. Vitest environments can only customize the global scope, whereas pools can run tests using a completely different runtime. In this case, the pool runs your tests inside [workerd ↗](https://github.com/cloudflare/workerd) instead of Node.js.

Terminal window

```
npm uninstall vitest-environment-miniflare
npm install --save-dev vitest@^4.1.0
npm install --save-dev @cloudflare/vitest-pool-workers
```

## Update your Vitest configuration file

After installing the Workers Vitest integration, update your Vitest configuration file to use the `cloudflareTest()` Vite plugin instead. Most Miniflare configuration previously specified in `environmentOptions` can be moved to the `miniflare` option in `cloudflareTest()`. Refer to [Miniflare's WorkerOptions interface ↗](https://github.com/cloudflare/workers-sdk/blob/main/packages/miniflare/README.md#interface-workeroptions) for supported options and the [Miniflare version 2 to 3 migration guide](https://developers.cloudflare.com/workers/testing/miniflare/migrations/from-v2/) for more information. If you relied on configuration stored in a Wrangler file, set `wrangler.configPath` too.

```
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
-  test: {
-    environment: "miniflare",
-    environmentOptions: { ... },
-  },
+  plugins: [
+    cloudflareTest({
+      miniflare: { ... },
+      wrangler: { configPath: "./wrangler.jsonc" },
+    }),
+  ],
});
```

## Update your TypeScript configuration file

If you are using TypeScript, update your `tsconfig.json` to include the correct ambient `types`:

```
{
  "compilerOptions": {
    ...,
    "types": [
      ...,
-      "vitest-environment-miniflare/globals"
+      "@cloudflare/vitest-pool-workers"
    ]
  }
}
```

## Access bindings

To access [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) in your tests, use the `env` helper from the `cloudflare:workers` module.

```
import { it } from "vitest";
+import { env } from "cloudflare:workers";

it("does something", () => {
-  const env = getMiniflareBindings();
  // ...
});
```

If you are using TypeScript, you need to define the type of `env` for your tests. Refer to [Define types](https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/#define-types) for setup instructions.

## Storage isolation

Storage isolation is per test file by default. You no longer need to include `setupMiniflareIsolatedStorage()` in your tests.

```
-const describe = setupMiniflareIsolatedStorage();
+import { describe } from "vitest";
```

## Work with `waitUntil()`

The `new ExecutionContext()` constructor and `getMiniflareWaitUntil()` function are now `createExecutionContext()` and `waitOnExecutionContext()` respectively. Note `waitOnExecutionContext()` now returns an empty `Promise<void>` instead of a `Promise` resolving to the results of all `waitUntil()`ed `Promise`s.

```
+import { createExecutionContext, waitOnExecutionContext } from "cloudflare:test";

it("does something", async () => {
  // ...
-  const ctx = new ExecutionContext();
+  const ctx = createExecutionContext();
  const response = await worker.fetch(request, env, ctx);
-  await getMiniflareWaitUntil(ctx);
+  await waitOnExecutionContext(ctx);
});
```
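To see why the return type changed, note that `waitUntil()` simply collects `Promise`s, and waiting on the context amounts to letting that collection settle. A minimal standalone model of this behavior (all names here are hypothetical, not the real API):

```javascript
// Hypothetical stand-in for an ExecutionContext: collects waitUntil() promises.
function makeContext() {
  const pending = [];
  return {
    waitUntil(promise) {
      pending.push(promise);
    },
    // Models waitOnExecutionContext(): settles everything, resolves to undefined.
    async waitOnAll() {
      await Promise.allSettled(pending);
    },
  };
}

const ctx = makeContext();
let sideEffect = "not yet";
ctx.waitUntil(Promise.resolve().then(() => (sideEffect = "done")));

const result = await ctx.waitOnAll();
console.log(sideEffect); // "done" — now safe to assert on side effects
console.log(result); // undefined, not the results of the waitUntil()ed promises
```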

## Mock outbound requests

The `getMiniflareFetchMock()` function is no longer available. To mock outbound `fetch()` requests, mock `globalThis.fetch` directly or use ecosystem libraries such as [MSW ↗](https://mswjs.io/). Refer to the [request mocking example ↗](https://github.com/cloudflare/workers-sdk/blob/main/fixtures/vitest-pool-workers-examples/request-mocking/test/imperative.test.ts) for a complete example.
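A minimal sketch of the direct approach, runnable in any runtime with a global `fetch` (the mocked URL and payload are illustrative only):

```javascript
// Save the real fetch so it can be restored after the test.
const realFetch = globalThis.fetch;

// Replace it with a stub that returns a canned JSON response.
globalThis.fetch = async (input) =>
  new Response(JSON.stringify({ mocked: true, url: String(input) }), {
    headers: { "Content-Type": "application/json" },
  });

const res = await fetch("https://example.com/api"); // hits the stub, not the network
const body = await res.json();
console.log(body.mocked); // true

globalThis.fetch = realFetch; // always restore, e.g. in afterEach()
```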

## Use Durable Object helpers

The `getMiniflareDurableObjectStorage()`, `getMiniflareDurableObjectState()`, `getMiniflareDurableObjectInstance()`, and `runWithMiniflareDurableObjectGates()` functions have all been replaced with a single `runInDurableObject()` function from the `cloudflare:test` module. The `runInDurableObject()` function accepts a `DurableObjectStub` with a callback accepting the Durable Object and corresponding `DurableObjectState` as arguments. Consolidating these functions into a single function simplifies the API surface, and ensures instances are accessed with the correct request context and [gating behavior ↗](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/). Refer to the [Test APIs page](https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/) for more details.

```
+import { env } from "cloudflare:workers";
+import { runInDurableObject } from "cloudflare:test";

it("does something", async () => {
-  const env = getMiniflareBindings();
  const id = env.OBJECT.newUniqueId();
  const stub = env.OBJECT.get(id);

-  const storage = await getMiniflareDurableObjectStorage(id);
-  doSomethingWith(storage);
+  await runInDurableObject(stub, async (instance, state) => {
+    doSomethingWith(state.storage);
+  });

-  const state = await getMiniflareDurableObjectState(id);
-  doSomethingWith(state);
+  await runInDurableObject(stub, async (instance, state) => {
+    doSomethingWith(state);
+  });

-  const instance = await getMiniflareDurableObjectInstance(id);
-  await runWithMiniflareDurableObjectGates(state, async () => {
-    doSomethingWith(instance);
-  });
+  await runInDurableObject(stub, async (instance) => {
+    doSomethingWith(instance);
+  });
});
```

The `flushMiniflareDurableObjectAlarms()` function has been replaced with the `runDurableObjectAlarm()` function from the `cloudflare:test` module. The `runDurableObjectAlarm()` function accepts a single `DurableObjectStub` and returns a `Promise` that resolves to `true` if an alarm was scheduled and the `alarm()` handler was executed, or `false` otherwise. To "flush" multiple instances' alarms, call `runDurableObjectAlarm()` in a loop.

```
+import { env } from "cloudflare:workers";
+import { runDurableObjectAlarm } from "cloudflare:test";

it("does something", async () => {
-  const env = getMiniflareBindings();
  const id = env.OBJECT.newUniqueId();
-  await flushMiniflareDurableObjectAlarms([id]);
+  const stub = env.OBJECT.get(id);
+  const ran = await runDurableObjectAlarm(stub);
});
```

Finally, the `getMiniflareDurableObjectIds()` function has been replaced with the `listDurableObjectIds()` function from the `cloudflare:test` module. The `listDurableObjectIds()` function now accepts a `DurableObjectNamespace` instance instead of a namespace `string` to provide stricter typing. Note the `listDurableObjectIds()` function respects storage isolation. IDs of objects created in other test files will not be returned.

```
+import { env } from "cloudflare:workers";
+import { listDurableObjectIds } from "cloudflare:test";

it("does something", async () => {
-  const ids = await getMiniflareDurableObjectIds("OBJECT");
+  const ids = await listDurableObjectIds(env.OBJECT);
});
```


---

---
title: Migrate from unstable_dev
description: Migrate from the [`unstable_dev`](/workers/wrangler/api/#unstable_dev) API to writing tests with the Workers Vitest integration.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from unstable\_dev

The [unstable\_dev](https://developers.cloudflare.com/workers/wrangler/api/#unstable%5Fdev) API was previously a recommended approach for running integration tests. The `@cloudflare/vitest-pool-workers` package integrates directly with Vitest for fast re-runs, supports both unit and integration tests, and provides isolated per-test storage.

This guide demonstrates key differences between tests written with the `unstable_dev` API and the Workers Vitest integration. For more information on writing tests with the Workers Vitest integration, refer to [Write your first test](https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/).

## Reference a Worker for integration testing

With `unstable_dev`, to trigger a `fetch` event, you would do this:

JavaScript

```
import { unstable_dev } from "wrangler";

it("dispatches fetch event", async () => {
  const worker = await unstable_dev("src/index.ts");
  const resp = await worker.fetch("http://example.com");
  ...
});
```

With the Workers Vitest integration, you can accomplish the same goal using `exports` from `cloudflare:workers`. `exports.default` refers to the default export defined by the `main` option in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This `main` Worker runs in the same isolate as tests so any global mocks will apply to it too.

JavaScript

```
import { exports } from "cloudflare:workers";
import "../src/"; // Currently required to automatically rerun tests when `main` changes

it("dispatches fetch event", async () => {
  const response = await exports.default.fetch("http://example.com");
  ...
});
```

## Stop a Worker

With the Workers Vitest integration, there is no need to stop a Worker via `worker.stop()`. This functionality is handled automatically after tests run.

## Import Wrangler configuration

Via the `unstable_dev` API, you can reference a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) by adding it as an option:

JavaScript

```
await unstable_dev("src/index.ts", {
  config: "wrangler.toml",
});
```

With the Workers Vitest integration, you can now set this reference to a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) in `vitest.config.js` for all of your tests:

JavaScript

```
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      wrangler: {
        configPath: "wrangler.jsonc",
      },
    }),
  ],
});
```

## Test service Workers

Unlike the `unstable_dev` API, the Workers Vitest integration does not support testing Workers using the service worker format. You will need to first [migrate to the ES modules format](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) in order to use the Workers Vitest integration.

## Define types

You can remove `UnstableDevWorker` imports from your code. Instead, follow the [Write your first test guide](https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/#define-types) to define types for all of your tests.

```
-import { unstable_dev } from "wrangler";
-import type { UnstableDevWorker } from "wrangler";
+import worker from "src/index.ts";

describe("Worker", () => {
-  let worker: UnstableDevWorker;
  ...
});
```

## Related resources

* [Write your first test](https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/#define-types) \- Write unit tests against Workers.


---

---
title: Recipes and examples
description: Examples that demonstrate how to write unit and integration tests with the Workers Vitest integration.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Recipes and examples

Recipes are examples that help demonstrate how to write unit tests and integration tests for Workers projects using the [@cloudflare/vitest-pool-workers ↗](https://www.npmjs.com/package/@cloudflare/vitest-pool-workers) package.

* [Basic unit and integration tests for Workers using SELF ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/basics-unit-integration-self)
* [Basic unit and integration tests for Pages Functions using SELF ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/pages-functions-unit-integration-self)
* [Basic integration tests using an auxiliary Worker ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/basics-integration-auxiliary)
* [Basic integration test for Workers with static assets ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/workers-assets)
* [Isolated tests using KV, R2 and the Cache API ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/kv-r2-caches)
* [Isolated tests using D1 with migrations ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/d1)
* [Isolated tests using Durable Objects with direct access ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/durable-objects)
* [Isolated tests using Workflows ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/workflows)
* [Tests using Queue producers and consumers ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/queues)
* [Tests using Hyperdrive with a Vitest managed TCP server ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/hyperdrive)
* [Tests using declarative/imperative outbound request mocks ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/request-mocking)
* [Tests using multiple auxiliary Workers and request mocks ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/multiple-workers)
* [Tests importing WebAssembly modules ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/web-assembly)
* [Tests using JSRPC with entrypoints and Durable Objects ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/rpc)
* [Tests using ctx.exports to access Worker exports ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/context-exports)
* [Integration test with static assets and Puppeteer ↗](https://github.com/GregBrimble/puppeteer-vitest-workers-assets)
* [Resolving modules with Vite Dependency Pre-Bundling ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/module-resolution)
* [Mocking Workers AI and Vectorize bindings in unit tests ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/ai-vectorize)


---

---
title: Test APIs
description: Runtime helpers for writing tests, exported from `cloudflare:workers` and `cloudflare:test`.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Test APIs

The Workers Vitest integration provides runtime helpers for writing tests. Some helpers are exported from the `cloudflare:workers` module, and others from the `cloudflare:test` module. Both modules are provided by the `@cloudflare/vitest-pool-workers` package, but can only be imported from test files that execute in the Workers runtime.

## `cloudflare:workers` exports

* `env`: import("cloudflare:workers").ProvidedEnv  
   * Exposes the [env object](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) for use as the second argument passed to ES modules format exported handlers. This provides access to [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) that you have defined in your [Vitest configuration file](https://developers.cloudflare.com/workers/testing/vitest-integration/configuration/).  
         
   JavaScript  
   ```  
   import { env } from "cloudflare:workers";  
   it("uses binding", async () => {  
     await env.KV_NAMESPACE.put("key", "value");  
     expect(await env.KV_NAMESPACE.get("key")).toBe("value");  
   });  
   ```  
   To configure the type of this value, use an ambient module type:  
   TypeScript  
   ```  
   declare module "cloudflare:workers" {  
     interface ProvidedEnv {  
       KV_NAMESPACE: KVNamespace;  
     }  
     // ...or if you have an existing `Env` type...  
     interface ProvidedEnv extends Env {}  
   }  
   ```
* `exports`: object  
   * Provides access to the exports of the `main` Worker. Use `exports.default.fetch()` to write integration tests against your Worker's default export handler. The `main` Worker runs in the same isolate/context as tests so any global mocks will apply to it too. Unlike the previous `SELF` binding, `exports` does not expose Assets. To test assets, use [startDevWorker()](https://developers.cloudflare.com/workers/testing/unstable%5Fstartworker/).  
         
   JavaScript  
   ```  
   import { exports } from "cloudflare:workers";  
   it("dispatches fetch event", async () => {  
     const response = await exports.default.fetch("https://example.com");  
     expect(await response.text()).toMatchInlineSnapshot(...);  
   });  
   ```

## `cloudflare:test` exports

### Events

* `createExecutionContext()`: ExecutionContext  
   * Creates an instance of the [context object](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) for use as the third argument to ES modules format exported handlers.
* `waitOnExecutionContext(ctx:ExecutionContext)`: Promise<void>  
   * Use this to wait for all Promises passed to `ctx.waitUntil()` to settle, before running test assertions on any side effects. Only accepts instances of `ExecutionContext` returned by `createExecutionContext()`.  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { createExecutionContext, waitOnExecutionContext } from "cloudflare:test";  
   import { it, expect } from "vitest";  
   import worker from "./index.mjs";  
   it("calls fetch handler", async () => {  
     const request = new Request("https://example.com");  
     const ctx = createExecutionContext();  
     const response = await worker.fetch(request, env, ctx);  
     await waitOnExecutionContext(ctx);  
     expect(await response.text()).toMatchInlineSnapshot(...);  
   });  
   ```
* `createScheduledController(options?:FetcherScheduledOptions)`: ScheduledController  
   * Creates an instance of `ScheduledController` for use as the first argument to modules-format [scheduled()](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) exported handlers.  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { createScheduledController, createExecutionContext, waitOnExecutionContext } from "cloudflare:test";  
   import { it, expect } from "vitest";  
   import worker from "./index.mjs";  
   it("calls scheduled handler", async () => {  
     const ctrl = createScheduledController({  
       scheduledTime: new Date(1000),  
       cron: "30 * * * *"  
     });  
     const ctx = createExecutionContext();  
     await worker.scheduled(ctrl, env, ctx);  
     await waitOnExecutionContext(ctx);  
   });  
   ```
* `createMessageBatch(queueName:string, messages:ServiceBindingQueueMessage[])`: MessageBatch  
   * Creates an instance of `MessageBatch` for use as the first argument to modules-format [queue()](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) exported handlers.
* `getQueueResult(batch:MessageBatch, ctx:ExecutionContext)`: Promise<FetcherQueueResult>  
   * Gets the acknowledged/retry state of messages in the `MessageBatch`, and waits for all `ExecutionContext#waitUntil()`ed `Promise`s to settle. Only accepts instances of `MessageBatch` returned by `createMessageBatch()`, and instances of `ExecutionContext` returned by `createExecutionContext()`.  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { createMessageBatch, createExecutionContext, getQueueResult } from "cloudflare:test";  
   import { it, expect } from "vitest";  
   import worker from "./index.mjs";  
   it("calls queue handler", async () => {  
     const batch = createMessageBatch("my-queue", [  
       {  
         id: "message-1",  
         timestamp: new Date(1000),  
         body: "body-1"  
       }  
     ]);  
     const ctx = createExecutionContext();  
     await worker.queue(batch, env, ctx);  
     const result = await getQueueResult(batch, ctx);  
     expect(result.ackAll).toBe(false);  
     expect(result.retryBatch).toMatchObject({ retry: false });  
     expect(result.explicitAcks).toStrictEqual(["message-1"]);  
     expect(result.retryMessages).toStrictEqual([]);  
   });  
   ```

### Durable Objects

* `runInDurableObject<O extends DurableObject, R>(stub:DurableObjectStub, callback:(instance: O, state: DurableObjectState) => R | Promise<R>)`: Promise<R>  
   * Runs the provided `callback` inside the Durable Object that corresponds to the provided `stub`.  
         
   This temporarily replaces your Durable Object's `fetch()` handler with `callback`, then sends a request to it, returning the result. This can be used to call/spy-on Durable Object methods or seed/get persisted data. Note this can only be used with `stub`s pointing to Durable Objects defined in the `main` Worker.  
         
   TypeScript  
   ```  
   export class Counter {  
     constructor(readonly state: DurableObjectState) {}  
     async fetch(request: Request): Promise<Response> {  
       let count = (await this.state.storage.get<number>("count")) ?? 0;  
       void this.state.storage.put("count", ++count);  
       return new Response(count.toString());  
     }  
   }  
   ```  
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { runInDurableObject } from "cloudflare:test";  
   import { it, expect } from "vitest";  
   import { Counter } from "./index.ts";  
   it("increments count", async () => {  
     const id = env.COUNTER.newUniqueId();  
     const stub = env.COUNTER.get(id);  
     let response = await stub.fetch("https://example.com");  
     expect(await response.text()).toBe("1");  
     response = await runInDurableObject(stub, async (instance: Counter, state) => {  
       expect(instance).toBeInstanceOf(Counter);  
       expect(await state.storage.get<number>("count")).toBe(1);  
       const request = new Request("https://example.com");  
       return instance.fetch(request);  
     });  
     expect(await response.text()).toBe("2");  
   });  
   ```
* `runDurableObjectAlarm(stub:DurableObjectStub)`: Promise<boolean>  
   * Immediately runs and removes the alarm of the Durable Object pointed to by `stub`, if one is scheduled. Returns `true` if an alarm ran, and `false` otherwise. Note this can only be used with `stub`s pointing to Durable Objects defined in the `main` Worker.
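         
   For example, a sketch assuming a `COUNTER` Durable Object namespace whose `fetch()` handler schedules an alarm via `state.storage.setAlarm()` (the `/schedule` route is made up for illustration):  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { runDurableObjectAlarm } from "cloudflare:test";  
   import { it, expect } from "vitest";  
   it("runs the scheduled alarm", async () => {  
     const id = env.COUNTER.newUniqueId();  
     const stub = env.COUNTER.get(id);  
     // Assumed: this request causes the object to call `state.storage.setAlarm()`  
     await stub.fetch("https://example.com/schedule");  
     // Runs the pending alarm immediately and clears it  
     expect(await runDurableObjectAlarm(stub)).toBe(true);  
     // No alarm is scheduled any more, so a second call returns false  
     expect(await runDurableObjectAlarm(stub)).toBe(false);  
   });  
   ```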
* `listDurableObjectIds(namespace:DurableObjectNamespace)`: Promise<DurableObjectId\[\]>  
   * Gets the IDs of all objects that have been created in the `namespace`. Respects per-file storage isolation, meaning objects created in a different test file will not be returned.  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { listDurableObjectIds } from "cloudflare:test";  
   import { it, expect } from "vitest";  
   it("increments count", async () => {  
     const id = env.COUNTER.newUniqueId();  
     const stub = env.COUNTER.get(id);  
     const response = await stub.fetch("https://example.com");  
     expect(await response.text()).toBe("1");  
     const ids = await listDurableObjectIds(env.COUNTER);  
     expect(ids.length).toBe(1);  
     expect(ids[0].equals(id)).toBe(true);  
   });  
   ```

### D1

* `applyD1Migrations(db:D1Database, migrations:D1Migration[], migrationsTableName?:string)`: Promise<void>  
   * Applies all un-applied [D1 migrations](https://developers.cloudflare.com/d1/reference/migrations/) stored in the `migrations` array to database `db`, recording migration state in the `migrationsTableName` table. `migrationsTableName` defaults to `d1_migrations`. Call the [readD1Migrations()](https://developers.cloudflare.com/workers/testing/vitest-integration/configuration/#readd1migrationsmigrationspath) function from the `@cloudflare/vitest-pool-workers/config` package inside Node.js to get the `migrations` array. Refer to the [D1 recipe ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples/d1) for an example project using migrations.
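         
   A sketch of a setup file that applies migrations before tests run, assuming the migrations array was exposed to tests as a hypothetical `TEST_MIGRATIONS` binding and the database is bound as `DB` (the linked recipe uses this pattern):  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { applyD1Migrations } from "cloudflare:test";  
   import { beforeAll } from "vitest";  
   beforeAll(async () => {  
     // TEST_MIGRATIONS carries the array produced by `readD1Migrations()`  
     await applyD1Migrations(env.DB, env.TEST_MIGRATIONS);  
   });  
   ```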

### Workflows

Workflows with storage isolation

To ensure proper test isolation in Workflows with per-file storage isolation, introspectors should be disposed at the end of each test. This is accomplished by either:

* Using an `await using` statement on the introspector.
* Explicitly calling the introspector `dispose()` method.

Version

Available in `@cloudflare/vitest-pool-workers` version **0.9.0**!

* `introspectWorkflowInstance(workflow: Workflow, instanceId: string)`: Promise<WorkflowInstanceIntrospector>  
   * Creates an **introspector** for a specific Workflow instance, used to **modify** its behavior, **await** outcomes, and **clear** its state during tests. This is the primary entry point for testing individual Workflow instances with a known ID.  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { introspectWorkflowInstance } from "cloudflare:test";  
   it("should disable all sleeps, mock an event and complete", async () => {  
     // 1. CONFIGURATION  
     await using instance = await introspectWorkflowInstance(env.MY_WORKFLOW, "123456");  
     await instance.modify(async (m) => {  
       await m.disableSleeps();  
       await m.mockEvent({  
         type: "user-approval",  
         payload: { approved: true, approverId: "user-123" },  
       });  
     });  
     // 2. EXECUTION  
     await env.MY_WORKFLOW.create({ id: "123456" });  
     // 3. ASSERTION  
     await expect(instance.waitForStatus("complete")).resolves.not.toThrow();  
     const output = await instance.getOutput();  
     expect(output).toEqual({ success: true });  
     // 4. DISPOSE: is implicit and automatic here.  
   });  
   ```  
   * The returned `WorkflowInstanceIntrospector` object has the following methods:  
         * `modify(fn: (m: WorkflowInstanceModifier) => Promise<void>): Promise<void>`: Applies modifications to the Workflow instance's behavior.  
         * `waitForStepResult(step: { name: string; index?: number }): Promise<unknown>`: Waits for a specific step to complete and returns a result. If multiple steps share the same name, use the optional `index` property (1-based, defaults to `1`) to target a specific occurrence.  
         * `waitForStatus(status: InstanceStatus["status"]): Promise<void>`: Waits for the Workflow instance to reach a specific [status](https://developers.cloudflare.com/workflows/build/workers-api/#instancestatus) (e.g., 'running', 'complete').  
          * `getOutput(): Promise<unknown>`: Returns the output value of the successfully completed Workflow instance.  
          * `getError(): Promise<{name: string, message: string}>`: Returns the error information of an errored Workflow instance, in the form `{ name: string; message: string }`.  
         * `dispose(): Promise<void>`: Disposes the Workflow instance, which is crucial for test isolation. If this function isn't called and `await using` is not used, isolated storage will fail and the instance's state will persist across subsequent tests. For example, an instance that becomes completed in one test will already be completed at the start of the next.  
         * `[Symbol.asyncDispose](): Promise<void>`: Provides automatic dispose. It's invoked by the `await using` statement, which calls `dispose()`.
* `introspectWorkflow(workflow: Workflow)`: Promise<WorkflowIntrospector>  
   * Creates an **introspector** for a Workflow where instance IDs are unknown beforehand. This allows for defining modifications that will apply to **all subsequently created instances**.  
         
   TypeScript  
   ```  
   import { env, exports } from "cloudflare:workers";  
   import { introspectWorkflow } from "cloudflare:test";  
   it("should disable all sleeps, mock an event and complete", async () => {  
     // 1. CONFIGURATION  
     await using introspector = await introspectWorkflow(env.MY_WORKFLOW);  
     await introspector.modifyAll(async (m) => {  
       await m.disableSleeps();  
       await m.mockEvent({  
         type: "user-approval",  
         payload: { approved: true, approverId: "user-123" },  
       });  
     });  
     // 2. EXECUTION  
     await env.MY_WORKFLOW.create();  
     // 3. ASSERTION  
  const instances = await introspector.get();  
  for (const instance of instances) {  
       await expect(instance.waitForStatus("complete")).resolves.not.toThrow();  
       const output = await instance.getOutput();  
       expect(output).toEqual({ success: true });  
     }  
     // 4. DISPOSE: is implicit and automatic here.  
   });  
   ```  
   The workflow instance doesn't have to be created directly inside the test. The introspector will capture **all** instances created after it is initialized. For example, you could trigger the creation of **one or multiple** instances via a single `fetch` event to your Worker:  
   JavaScript  
   ```  
   // This also works for the EXECUTION phase:  
   await exports.default.fetch("https://example.com/trigger-workflows");  
   ```  
   * The returned `WorkflowIntrospector` object has the following methods:  
         * `modifyAll(fn: (m: WorkflowInstanceModifier) => Promise<void>): Promise<void>`: Applies modifications to all Workflow instances created after calling `introspectWorkflow`.  
         * `get(): Promise<WorkflowInstanceIntrospector[]>`: Returns all `WorkflowInstanceIntrospector` objects from instances created after `introspectWorkflow` was called.  
         * `dispose(): Promise<void>`: Disposes the Workflow introspector. All `WorkflowInstanceIntrospector` from created instances will also be disposed. This is crucial to prevent modifications and captured instances from leaking between tests. After calling this method, the `WorkflowIntrospector` should not be reused.  
         * `[Symbol.asyncDispose](): Promise<void>`: Provides automatic dispose. It's invoked by the `await using` statement, which calls `dispose()`.
* `WorkflowInstanceModifier`  
   * This object is provided to the `modify` and `modifyAll` callbacks to mock or alter the behavior of a Workflow instance's steps, events, and sleeps.  
         * `disableSleeps(steps?: { name: string; index?: number }[])`: Disables sleeps, causing `step.sleep()` and `step.sleepUntil()` to resolve immediately. If `steps` is omitted, all sleeps are disabled.  
         * `mockStepResult(step: { name: string; index?: number }, stepResult: unknown)`: Mocks the result of a `step.do()`, causing it to return the specified value instantly without executing the step's implementation.  
         * `mockStepError(step: { name: string; index?: number }, error: Error, times?: number)`: Forces a `step.do()` to throw an error, simulating a failure. `times` is an optional number that sets how many times the step should error. If `times` is omitted, the step will error on every attempt, making the Workflow instance fail.  
          * `forceStepTimeout(step: { name: string; index?: number }, times?: number)`: Forces a `step.do()` to fail by timing out immediately. `times` is an optional number that sets how many times the step should time out. If `times` is omitted, the step will time out on every attempt, making the Workflow instance fail.  
         * `mockEvent(event: { type: string; payload: unknown })`: Sends a mock event to the Workflow instance, causing a `step.waitForEvent()` to resolve with the provided payload. `type` must match the `waitForEvent` type.  
         * `forceEventTimeout(step: { name: string; index?: number })`: Forces a `step.waitForEvent()` to time out instantly, causing the step to fail.  
         
   TypeScript  
   ```  
   import { env } from "cloudflare:workers";  
   import { introspectWorkflowInstance } from "cloudflare:test";  
   // This example showcases explicit disposal  
   it("should apply all modifier functions", async () => {  
     // 1. CONFIGURATION  
     const instance = await introspectWorkflowInstance(env.COMPLEX_WORKFLOW, "123456");  
     try {  
       // Modify instance behavior  
       await instance.modify(async (m) => {  
         // Disables all sleeps to make the test run instantly  
         await m.disableSleeps();  
         // Mocks the successful result of a data-fetching step  
         await m.mockStepResult(  
           { name: "get-order-details" },  
           { orderId: "abc-123", amount: 99.99 }  
         );  
         // Mocks an incoming event to satisfy a `step.waitForEvent()`  
         await m.mockEvent({  
           type: "user-approval",  
           payload: { approved: true, approverId: "user-123" },  
         });  
         // Forces a step to fail once with a specific error to test retry logic  
         await m.mockStepError(  
           { name: "process-payment" },  
           new Error("Payment gateway timeout"),  
           1 // Fail only the first time  
         );  
         // Forces a `step.do()` to time out immediately  
         await m.forceStepTimeout({ name: "notify-shipping-partner" });  
         // Forces a `step.waitForEvent()` to time out  
         await m.forceEventTimeout({ name: "wait-for-fraud-check" });  
       });  
       // 2. EXECUTION  
       await env.COMPLEX_WORKFLOW.create({ id: "123456" });  
       // 3. ASSERTION  
       expect(await instance.waitForStepResult({ name: "get-order-details" })).toEqual({  
         orderId: "abc-123",  
         amount: 99.99,  
       });  
       // Given the forced timeouts, the workflow will end in an errored state  
       await expect(instance.waitForStatus("errored")).resolves.not.toThrow();  
       const error = await instance.getError();  
       expect(error.name).toEqual("Error");  
       expect(error.message).toContain("Execution timed out");  
    } finally {  
       // 4. DISPOSE  
       await instance.dispose();  
     }  
   });  
   ```  
   When targeting a step, use its `name`. If multiple steps share the same name, use the optional `index` property (1-based, defaults to `1`) to specify the occurrence.


---

---
title: Write your first test
description: Write tests against Workers using Vitest
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Write your first test

This guide will instruct you through getting started with the `@cloudflare/vitest-pool-workers` package. For more complex examples of testing using `@cloudflare/vitest-pool-workers`, refer to [Recipes](https://developers.cloudflare.com/workers/testing/vitest-integration/recipes/).

## Prerequisites

First, make sure that:

* Your [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) is set to `2022-10-31` or later.
* Your Worker uses the ES modules format (if not, refer to the [migrate to the ES modules format](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) guide).
* Vitest and `@cloudflare/vitest-pool-workers` are installed in your project as dev dependencies:  
   npm  
   ```  
   npm i -D vitest@^4.1.0 @cloudflare/vitest-pool-workers  
   ```  
   yarn  
   ```  
   yarn add -D vitest@^4.1.0 @cloudflare/vitest-pool-workers  
   ```  
   pnpm  
   ```  
   pnpm add -D vitest@^4.1.0 @cloudflare/vitest-pool-workers  
   ```  
   bun  
   ```  
   bun add -d vitest@^4.1.0 @cloudflare/vitest-pool-workers  
   ```  
Note  
The `@cloudflare/vitest-pool-workers` package requires Vitest 4.1 or later.
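
The ES modules format means a Worker's handlers are methods on a single exported object, rather than functions registered with `addEventListener()`. The sketch below shows the shape and why it matters for testing; it uses only the standard Fetch API, so it runs in any modern JavaScript runtime (Node.js 18+):

TypeScript
```typescript
// An ES-modules-format Worker: handlers live on the default export.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    if (pathname === "/404") {
      return new Response("Not found", { status: 404 });
    }
    return new Response("Hello World!");
  },
};
export default worker;

// Because `fetch` is an ordinary method, tests can invoke it directly:
const response = await worker.fetch(new Request("http://example.com/"));
const body = await response.text();
console.log(body); // "Hello World!"
```

The older service worker format registers a global `fetch` listener instead, which is why it cannot be imported and unit tested this way.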

## Define Vitest configuration

In your `vitest.config.ts` file, use the `cloudflareTest()` plugin to configure the Workers Vitest integration.

You can use your Worker configuration from your [Wrangler config file](https://developers.cloudflare.com/workers/wrangler/configuration/) by specifying it with `wrangler.configPath`.

vitest.config.ts

```
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      wrangler: { configPath: "./wrangler.jsonc" },
    }),
  ],
});
```

You can also override or define additional configuration using the `miniflare` key. This takes precedence over values set via your Wrangler config file.

For example, this configuration would add a KV namespace `TEST_NAMESPACE` that was only accessed and modified in tests.

JavaScript

```
import { cloudflareTest } from "@cloudflare/vitest-pool-workers";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    cloudflareTest({
      wrangler: { configPath: "./wrangler.jsonc" },
      miniflare: {
        kvNamespaces: ["TEST_NAMESPACE"],
      },
    }),
  ],
});
```

For a full list of available Miniflare options, refer to the [Miniflare WorkerOptions API documentation ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#interface-workeroptions).

For a full list of available configuration options, refer to [Configuration](https://developers.cloudflare.com/workers/testing/vitest-integration/configuration/).

## Define types

If you are not using TypeScript, you can skip this section.

First make sure you have run [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/), which generates [types for the Cloudflare Workers runtime](https://developers.cloudflare.com/workers/languages/typescript/) and an `Env` type based on your Worker's bindings.

Then add a `tsconfig.json` in your tests folder and add `"@cloudflare/vitest-pool-workers"` to your types array to define types for `cloudflare:test`. You should also add the output of `wrangler types` to the `include` array so that the types for the Cloudflare Workers runtime are available.


test/tsconfig.json

```
{
  "extends": "../tsconfig.json",
  "compilerOptions": {
    "moduleResolution": "bundler",
    "types": [
      "@cloudflare/vitest-pool-workers", // provides `cloudflare:test` and `cloudflare:workers` types
    ],
  },
  "include": [
    "./**/*.ts",
    "../src/worker-configuration.d.ts", // output of `wrangler types`
  ],
}
```

You also need to define the type of the `env` object that is provided to your tests. Create an `env.d.ts` file in your tests folder, and declare the `ProvidedEnv` interface by extending the `Env` interface that is generated by `wrangler types`.

test/env.d.ts

```
declare module "cloudflare:workers" {
  // ProvidedEnv controls the type of `import("cloudflare:workers").env`
  interface ProvidedEnv extends Env {}
}
```

If your test bindings differ from the bindings in your Wrangler config, you should type them here in `ProvidedEnv`.
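
For example, the test-only `TEST_NAMESPACE` KV namespace defined earlier via the `miniflare` key does not appear in the generated `Env`, so it can be declared here (a sketch, assuming the `KVNamespace` type from the generated runtime types is in scope):

test/env.d.ts

```
declare module "cloudflare:workers" {
  interface ProvidedEnv extends Env {
    // Test-only binding defined via `miniflare.kvNamespaces` in vitest.config.ts
    TEST_NAMESPACE: KVNamespace;
  }
}
```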

## Writing tests

We will use this simple Worker as an example. It returns a 404 response for the `/404` path and `"Hello World!"` for all other paths.


src/index.js

```
export default {
  async fetch(request, env, ctx) {
    const { pathname } = new URL(request.url);
    if (pathname === "/404") {
      return new Response("Not found", { status: 404 });
    }
    return new Response("Hello World!");
  },
};
```

src/index.ts

```
export default {
  async fetch(request, env, ctx): Promise<Response> {
    const { pathname } = new URL(request.url);
    if (pathname === "/404") {
      return new Response("Not found", { status: 404 });
    }
    return new Response("Hello World!");
  },
} satisfies ExportedHandler<Env>;
```

### Unit tests

By importing the Worker we can write a unit test for its `fetch` handler.


test/unit.spec.js

```
import { env } from "cloudflare:workers";
import {
  createExecutionContext,
  waitOnExecutionContext,
} from "cloudflare:test";
import { describe, it, expect } from "vitest";
// Import your worker so you can unit test it
import worker from "../src";

// For now, you'll need to do something like this to get a correctly-typed
// `Request` to pass to `worker.fetch()`.
const IncomingRequest = Request;

describe("Hello World worker", () => {
  it("responds with not found and proper status for /404", async () => {
    const request = new IncomingRequest("http://example.com/404");
    // Create an empty context to pass to `worker.fetch()`
    const ctx = createExecutionContext();
    const response = await worker.fetch(request, env, ctx);
    // Wait for all `Promise`s passed to `ctx.waitUntil()` to settle before running test assertions
    await waitOnExecutionContext(ctx);
    expect(response.status).toBe(404);
    expect(await response.text()).toBe("Not found");
  });
});
```

test/unit.spec.ts

```
import { env } from "cloudflare:workers";
import {
  createExecutionContext,
  waitOnExecutionContext,
} from "cloudflare:test";
import { describe, it, expect } from "vitest";
// Import your worker so you can unit test it
import worker from "../src";

// For now, you'll need to do something like this to get a correctly-typed
// `Request` to pass to `worker.fetch()`.
const IncomingRequest = Request<unknown, IncomingRequestCfProperties>;

describe("Hello World worker", () => {
  it("responds with not found and proper status for /404", async () => {
    const request = new IncomingRequest("http://example.com/404");
    // Create an empty context to pass to `worker.fetch()`
    const ctx = createExecutionContext();
    const response = await worker.fetch(request, env, ctx);
    // Wait for all `Promise`s passed to `ctx.waitUntil()` to settle before running test assertions
    await waitOnExecutionContext(ctx);
    expect(response.status).toBe(404);
    expect(await response.text()).toBe("Not found");
  });
});
```

### Integration tests

You can use the `exports` object provided by `cloudflare:workers` to write an integration test. `exports.default.fetch()` calls the default export handler defined in the main Worker.


test/integration.spec.js

```
import { exports } from "cloudflare:workers";
import { describe, it, expect } from "vitest";

describe("Hello World worker", () => {
  it("responds with not found and proper status for /404", async () => {
    const response = await exports.default.fetch("http://example.com/404");
    expect(response.status).toBe(404);
    expect(await response.text()).toBe("Not found");
  });
});
```

test/integration.spec.ts

```
import { exports } from "cloudflare:workers";
import { describe, it, expect } from "vitest";

describe("Hello World worker", () => {
  it("responds with not found and proper status for /404", async () => {
    const response = await exports.default.fetch("http://example.com/404");
    expect(response.status).toBe(404);
    expect(await response.text()).toBe("Not found");
  });
});
```

When using `exports.default.fetch()` for integration tests, your Worker code runs in the same context as the test runner. This means you can use global mocks to control your Worker, but also means your Worker uses the subtly different module resolution behavior provided by Vite. Usually this is not a problem, but to run your Worker in a fresh environment that is as close to production as possible, you can use an auxiliary Worker. Refer to [this example ↗](https://github.com/cloudflare/workers-sdk/blob/main/fixtures/vitest-pool-workers-examples/basics-integration-auxiliary/vitest.config.ts) for how to set up integration tests using auxiliary Workers. However, using auxiliary Workers comes with [limitations](https://developers.cloudflare.com/workers/testing/vitest-integration/configuration/#workerspooloptions) that you should be aware of.
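
To illustrate the shared-context point: because the test and the Worker run in one global scope, stubbing a global in a test changes what the Worker sees. This is a plain-Node sketch of the mechanism, with no Workers APIs involved (the upstream URL is made up):

TypeScript
```typescript
// Save and replace the global fetch implementation.
const realFetch = globalThis.fetch;
globalThis.fetch = async () => new Response("mocked upstream response");

// Any code running in this context (including a Worker invoked via
// `exports.default.fetch()`) now observes the mock:
const body = await (await fetch("https://upstream.example.com/")).text();
console.log(body); // "mocked upstream response"

// Always restore the original to avoid leaking the mock across tests.
globalThis.fetch = realFetch;
```

In real tests, prefer Vitest utilities such as `vi.stubGlobal()`, which handle restoration for you.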

## Related resources

* For more complex examples of testing using `@cloudflare/vitest-pool-workers`, refer to [Recipes](https://developers.cloudflare.com/workers/testing/vitest-integration/recipes/).
* [Configuration API reference](https://developers.cloudflare.com/workers/testing/vitest-integration/configuration/)
* [Test APIs reference](https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/)


---

---
title: Observability
description: Understand how your Worker projects are performing via logs, traces, metrics, and other data sources.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Observability

Cloudflare Workers provides comprehensive observability tools to help you understand how your applications are performing, diagnose issues, and gain insights into request flows. Whether you want to use Cloudflare's native observability platform or export telemetry data to your existing monitoring stack, Workers has you covered.

## Logs

Logs are essential for troubleshooting and understanding your application's behavior. Cloudflare offers several ways to access and manage your Worker logs.

[ Workers Logs ](https://developers.cloudflare.com/workers/observability/logs/workers-logs/) Automatically collect, store, filter, and analyze logs in the Cloudflare dashboard. 

[ Real-time logs ](https://developers.cloudflare.com/workers/observability/logs/real-time-logs/) Access log events in near real-time for immediate feedback during development and deployments. 

[ Tail Workers ](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) Apply custom filtering, sampling, and transformation logic to your telemetry data. 

[ Workers Logpush ](https://developers.cloudflare.com/workers/observability/logs/logpush/) Send Workers Trace Event Logs to supported destinations like R2, S3, or logging providers. 

## Traces

[Tracing](https://developers.cloudflare.com/workers/observability/traces/) gives you end-to-end visibility into the life of a request as it travels through your Workers application and connected services. With automatic instrumentation, Cloudflare captures telemetry data for fetch calls, binding operations (KV, R2, Durable Objects), and handler invocations - no code changes required.

## Metrics and analytics

[Metrics and analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/) let you monitor your Worker's health with built-in metrics including request counts, error rates, CPU time, wall time, and execution duration. View metrics per Worker or aggregated across all Workers on a zone.

## Query Builder

The [Query Builder](https://developers.cloudflare.com/workers/observability/query-builder/) helps you write structured queries to investigate and visualize your telemetry data. Build queries with filters, aggregations, and groupings to analyze logs and identify patterns.

## Exporting data

[Export OpenTelemetry-compliant traces and logs](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/) from Workers to your existing observability stack. Workers supports exporting to any destination with an OTLP endpoint, including Honeycomb, Grafana Cloud, Axiom, and Sentry.

## Debugging

[ Errors and exceptions ](https://developers.cloudflare.com/workers/observability/errors/) Understand Workers error codes and debug common issues. 

[ Source maps and stack traces ](https://developers.cloudflare.com/workers/observability/source-maps/) Get readable stack traces that map back to your original source code. 

[ DevTools ](https://developers.cloudflare.com/workers/observability/dev-tools/) Use Chrome DevTools for breakpoints, CPU profiling, and memory debugging during local development. 

## Additional resources

[ MCP server ](https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/workers-observability) Query Workers observability data using the Model Context Protocol. 

[ Third-party integrations ](https://developers.cloudflare.com/workers/observability/third-party-integrations/) Integrate Workers with third-party observability platforms. 


---

---
title: DevTools
description: When running your Worker locally using the Wrangler CLI (wrangler dev) or using Vite with the Cloudflare Vite plugin, you automatically have access to Cloudflare's implementation of Chrome DevTools.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# DevTools

## Using DevTools

When running your Worker locally using the [Wrangler CLI ↗](https://developers.cloudflare.com/workers/wrangler/) (`wrangler dev`) or using [Vite ↗](https://vite.dev/) with the [Cloudflare Vite plugin ↗](https://developers.cloudflare.com/workers/vite-plugin/), you automatically have access to [Cloudflare's implementation ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/chrome-devtools-patches) of [Chrome DevTools ↗](https://developer.chrome.com/docs/devtools/overview).

You can use Chrome DevTools to:

* View logs directly in the Chrome console
* [Debug code by setting breakpoints](https://developers.cloudflare.com/workers/observability/dev-tools/breakpoints/)
* [Profile CPU usage](https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/)
* [Observe memory usage and debug memory leaks in your code that can cause out-of-memory (OOM) errors](https://developers.cloudflare.com/workers/observability/dev-tools/memory-usage/)

## Opening DevTools

### Wrangler

* Run your Worker locally, by running `wrangler dev`
* Press the `D` key from your terminal to open DevTools in a browser tab

### Vite

* Run your Worker locally by running `vite`
* In a new Chrome tab, open the debug URL that shows in your console (for example, `http://localhost:5173/__debug`)

### Dashboard editor & playground

Both the [Cloudflare dashboard ↗](https://dash.cloudflare.com/) and the [Workers Playground ↗](https://workers.cloudflare.com/playground) include DevTools in the UI.

## Related resources

* [Local development](https://developers.cloudflare.com/workers/development-testing/) - Develop your Workers and connected resources locally via Wrangler and workerd, for a fast, accurate feedback loop.


---

---
title: Breakpoints
description: Debug your local and deployed Workers using breakpoints.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Breakpoints

## Debug via breakpoints

When developing a Worker locally using Wrangler or Vite, you can debug via breakpoints in your Worker. Breakpoints provide the ability to review what is happening at a given point in the execution of your Worker. Breakpoint functionality exists in both DevTools and VS Code.

For more information on breakpoint debugging via Chrome's DevTools, refer to [Chrome's article on breakpoints ↗](https://developer.chrome.com/docs/devtools/javascript/breakpoints/).

### VS Code debug terminals

Using VS Code's built-in [JavaScript Debug Terminals ↗](https://code.visualstudio.com/docs/nodejs/nodejs-debugging#%5Fjavascript-debug-terminal), all you have to do is open a JavaScript debug terminal (`Cmd + Shift + P`, then type `javascript debug`) and run `wrangler dev` (or `vite dev`) from within the debug terminal. VS Code will automatically connect to your running Worker (even if you're running multiple Workers at once!) and start a debugging session.

### Set up VS Code to use breakpoints with `launch.json` files

To set up VS Code for breakpoint debugging in your Worker project:

1. Create a `.vscode` folder in your project's root folder if one does not exist.
2. Within that folder, create a `launch.json` file with the following content:

```
{
  "configurations": [
    {
      "name": "Wrangler",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "cwd": "/",
      "resolveSourceMapLocations": null,
      "attachExistingChildren": false,
      "autoAttachChildProcesses": false,
      "sourceMaps": true // works with or without this line
    }
  ]
}
```

3. Open your project in VS Code, open a new terminal window from VS Code, and run `npx wrangler dev` to start the local dev server.
4. At the top of the **Run & Debug** panel, you should see an option to select a configuration. Choose **Wrangler**, and select the play icon. **Wrangler: Remote Process \[0\]** should show up in the Call Stack panel on the left.
5. Go back to a `.js` or `.ts` file in your project and add at least one breakpoint.
6. Open your browser and go to the Worker's local URL (default `http://127.0.0.1:8787`). The breakpoint should be hit, and you should be able to review details about your code at the specified line.

Warning

Breakpoint debugging with `wrangler dev --remote` can extend Worker CPU time and incur additional costs, since you are testing against actual resources that count toward usage limits. It is recommended to run `wrangler dev` without the `--remote` option, which ensures you are developing locally.

If you are debugging with `--remote`, you cannot use code minification, as the debugger will be unable to find variables when stopped at a breakpoint. Do not set `minify` to `true` in your Wrangler configuration file.

Note

The `.vscode/launch.json` file only applies to a single workspace. If you prefer, you can add the above launch configuration to your User Settings (per the [official VS Code documentation ↗](https://code.visualstudio.com/docs/editor/debugging#%5Fglobal-launch-configuration)) to have it available for all your workspaces.

## Related resources

* [Local Development](https://developers.cloudflare.com/workers/development-testing/) \- Develop your Workers and connected resources locally via Wrangler and [workerd ↗](https://github.com/cloudflare/workerd), for a fast, accurate feedback loop.


---

---
title: Profiling CPU usage
description: Learn how to profile CPU usage and ensure CPU-time per request stays under Workers limits
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Profiling CPU usage

If a Worker spends too much time performing CPU-intensive tasks, responses may be slow or the Worker might fail to start up due to [time limits](https://developers.cloudflare.com/workers/platform/limits/#worker-startup-time).

Profiling in DevTools can help you identify and fix code that uses too much CPU.

Measuring the execution time of specific functions in production can be difficult because Workers [only increment timers on I/O](https://developers.cloudflare.com/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading) for security purposes. However, measuring CPU execution times is possible in local development with DevTools.

When using DevTools to monitor CPU usage, it may be difficult to replicate specific behavior you are seeing in production. To mimic production behavior, make sure the requests you send to the local Worker are similar to requests in production. This might mean sending a large volume of requests, making requests to specific routes, or using production-like data via [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings).

## Taking a profile

To generate a CPU profile:

* Run `wrangler dev` to start your Worker
* Press the `D` key from your terminal to open DevTools
* Select the "Profiler" tab
* Select `Start` to begin recording CPU usage
* Send requests to your Worker from a new tab
* Select `Stop`

You now have a CPU profile.

Note

For Rust Workers, add the following to your `Cargo.toml` to preserve [DWARF ↗](https://dwarfstd.org/) debug symbols (from [this comment ↗](https://github.com/rustwasm/wasm-pack/issues/1351#issuecomment-2100231587)):

Cargo.toml

```
[package.metadata.wasm-pack.profile.dev.wasm-bindgen]
dwarf-debug-info = true
```

Then, update your `wrangler.toml` to configure wasm-pack (via worker-build) to use the `dev` [profile ↗](https://rustwasm.github.io/docs/wasm-pack/commands/build.html#profile), which preserves debug symbols:

wrangler.toml

```
[build]
command = "cargo install -q worker-build && worker-build --dev"
```

## An Example Profile

Let's look at an example to learn how to read a CPU profile. Imagine you have the following Worker:

index.js

```
const addNumbers = (body) => {
  for (let i = 0; i < 5000; ++i) {
    body = body + " " + i;
  }
  return body;
};

const moreAdditions = (body) => {
  for (let i = 5001; i < 15000; ++i) {
    body = body + " " + i;
  }
  return body;
};

export default {
  async fetch(request, env, ctx) {
    let body = "Hello Profiler! - ";
    body = addNumbers(body);
    body = moreAdditions(body);
    return new Response(body);
  },
};
```

You want to find which part of the code causes slow response times. How do you use DevTool profiling to identify the CPU-heavy code and fix the issue?

First, as mentioned above, you open DevTools by pressing the `D` key after running `wrangler dev`. Then, you navigate to the "Profiler" tab and take a profile by pressing `Start` and sending a request.

![CPU Profile](https://developers.cloudflare.com/_astro/profile.Dz8PUp_K_Z16J4tW.webp) 

The top chart in this image shows a timeline of the profile, and you can use it to zoom in on a specific request.

The chart below shows the CPU time used for operations run during the request. In this screenshot, you can see "fetch" time at the top and the subcomponents of fetch beneath, including the two functions `addNumbers` and `moreAdditions`. By hovering over each box, you get more information, and by clicking a box, you navigate to the function's source code.

Using this graph, you can answer the question of "what is taking CPU time?". The `addNumbers` function has a very small box, representing 0.3ms of CPU time. The `moreAdditions` box is larger, representing 2.2ms of CPU time.

Therefore, if you want to make response times faster, you need to optimize `moreAdditions`.
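One way to shrink that CPU time (a sketch, not part of the original example) is to replace the repeated string concatenation with a single `join`, which avoids building thousands of intermediate strings:

```javascript
// Produces the same output as moreAdditions, but collects the pieces
// in an array and builds the final string once.
const moreAdditionsFast = (body) => {
  const parts = [body];
  for (let i = 5001; i < 15000; ++i) {
    parts.push(String(i));
  }
  return parts.join(" ");
};
```

Taking another profile after a change like this lets you confirm the optimization actually reduced the CPU time attributed to the function.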

You can also change the visualization from ‘Chart’ to ‘Heavy (Bottom Up)’ for an alternative view.

![CPU Profile](https://developers.cloudflare.com/_astro/heavy.17oO4-BN_1suv6n.webp) 

This shows the relative times allocated to each function. At the top of the list, `moreAdditions` is clearly the slowest portion of your Worker. You can see that garbage collection also represents a large percentage of time, so memory optimization could be useful.

## Additional Resources

To learn more about how to use the CPU profiler, see [Google's documentation on Profiling the CPU in DevTools ↗](https://developer.chrome.com/docs/devtools/performance/nodejs#profile).

To learn how to use DevTools to gain insight into memory, see the [Memory Usage Documentation](https://developers.cloudflare.com/workers/observability/dev-tools/memory-usage/).


---

---
title: Profiling Memory
description: Understanding Worker memory usage can help you optimize performance, avoid Out of Memory (OOM) errors when hitting Worker memory limits, and fix memory leaks.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Profiling Memory

Understanding Worker memory usage can help you optimize performance, avoid Out of Memory (OOM) errors when hitting [Worker memory limits](https://developers.cloudflare.com/workers/platform/limits/#memory), and fix memory leaks.

You can profile memory usage with snapshots in DevTools. Memory snapshots let you view a summary of memory usage, see how much memory is allocated to different data types, and get details on specific objects in memory.

When using DevTools to profile memory, it may be difficult to replicate specific behavior you are seeing in production. To mimic production behavior, make sure the requests you send to the local Worker are similar to requests in production. This might mean sending a large volume of requests, making requests to specific routes, or using production-like data with the [`--remote` flag](https://developers.cloudflare.com/workers/development-testing/#remote-bindings).

## Taking a snapshot

To generate a memory snapshot:

* Run `wrangler dev` to start your Worker
* Press the `D` key from your terminal to open DevTools
* Select the "Memory" tab
* Send requests to your Worker to start allocating memory  
   * Optionally include a `debugger` statement to make sure you can pause execution at the proper time
* Select `Take snapshot`

You can now inspect Worker memory.

## An Example Snapshot

Let's look at an example to learn how to read a memory snapshot. Imagine you have the following Worker:

index.js

```
let responseText = "Hello world!";

export default {
  async fetch(request, env, ctx) {
    let now = new Date().toISOString();
    responseText = responseText + ` (Requested at: ${now})`;
    return new Response(responseText.slice(0, 53));
  },
};
```

While this code worked well initially, over time you notice slower responses and Out of Memory errors. Using DevTools, you can find out if this is a memory leak.

First, as mentioned above, you open DevTools by pressing the `D` key after running `wrangler dev`. Then, you navigate to the "Memory" tab.

Next, generate a large volume of traffic to the Worker by sending requests. You can do this with `curl` or by repeatedly reloading the browser. Note that other Workers may require more specific requests to reproduce a memory leak.

Then, click the "Take Snapshot" button and view the results.

First, navigate to "Statistics" in the dropdown to get a general sense of what takes up memory.

![Memory Statistics](https://developers.cloudflare.com/_astro/memory-stats.BkZs-j29_IdQLM.webp) 

Looking at these statistics, you can see that a large share of memory, 67 kB, is dedicated to strings. This is likely the source of the memory leak. If you make more requests and take another snapshot, you will see this number grow.

![Memory Summary](https://developers.cloudflare.com/_astro/memory-summary.CPf4-TMr_Z16f24m.webp) 

The memory summary lists data types by the amount of memory they take up. When you click into "(string)", you can see a string that is far larger than the rest. The text shows that you are appending "Requested at" and a date repeatedly, inadvertently overwriting the global variable with an increasingly large string:

JavaScript

```
responseText = responseText + ` (Requested at: ${now})`;
```

Using Memory Snapshotting in DevTools, you've identified the object and line of code causing the memory leak. You can now fix it with a small code change.
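A minimal sketch of one possible fix: build the response text per request instead of appending to the module-level variable, so memory no longer grows across requests (the `buildResponseText` helper name is illustrative):

```javascript
const baseText = "Hello world!";

// Per-request string; nothing at module scope is ever mutated.
function buildResponseText(now) {
  return `${baseText} (Requested at: ${now})`.slice(0, 53);
}

// In a real Worker module this object would be the default export
// (`export default worker;`).
const worker = {
  async fetch(request, env, ctx) {
    return new Response(buildResponseText(new Date().toISOString()));
  },
};
```

Taking a fresh snapshot after a change like this should show string memory staying flat as requests accumulate.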

## Additional Resources

To learn more about how to use Memory Snapshotting, see [Google's documentation on Memory Heap Snapshots ↗](https://developer.chrome.com/docs/devtools/memory-problems/heap-snapshots).

To learn how to use DevTools to gain insight into CPU usage, see the [CPU Profiling Documentation](https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/).


---

---
title: Errors and exceptions
description: Review Workers errors and exceptions.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Errors and exceptions

Review Workers errors and exceptions.

## Error pages generated by Workers

When a Worker running in production has an error that prevents it from returning a response, the client will receive an error page with an error code, defined as follows:

| Error code | Meaning                                                                                                                                                                                                                                                                                                                                |
| ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1101       | Worker threw a JavaScript exception.                                                                                                                                                                                                                                                                                                   |
| 1102       | Worker exceeded [CPU time limit](https://developers.cloudflare.com/workers/platform/limits/#cpu-time).                                                                                                                                                                                                                                 |
| 1103       | The owner of this Worker needs to contact [Cloudflare Support](https://developers.cloudflare.com/support/contacting-cloudflare-support/).                                                                                                                                                                                               |
| 1015       | Worker hit the [burst rate limit](https://developers.cloudflare.com/workers/platform/limits/#daily-requests).                                                                                                                                                                                                                          |
| 1019       | Worker hit [loop limit](#loop-limit).                                                                                                                                                                                                                                                                                                  |
| 1021       | Worker has requested a host it cannot access.                                                                                                                                                                                                                                                                                          |
| 1022       | Cloudflare has failed to route the request to the Worker.                                                                                                                                                                                                                                                                              |
| 1024       | Worker cannot make a subrequest to a Cloudflare-owned IP address.                                                                                                                                                                                                                                                                      |
| 1027       | Worker exceeded free tier [daily request limit](https://developers.cloudflare.com/workers/platform/limits/#daily-requests).                                                                                                                                                                                                            |
| 1042       | Worker tried to fetch from another Worker on the same zone, which is only [supported](https://developers.cloudflare.com/workers/runtime-apis/fetch/) when the [global\_fetch\_strictly\_public compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#global-fetch-strictly-public) is used. |
| 10162      | Module has an unsupported Content-Type.                                                                                                                                                                                                                                                                                                |

Other `11xx` errors generally indicate a problem with the Workers runtime itself. Refer to the [status page ↗](https://www.cloudflarestatus.com) if you are experiencing an error.

Errors in the `11xx` range can also be related to [Snippets](https://developers.cloudflare.com/rules/snippets/).

### Loop limit

A Worker cannot call itself or another Worker more than 16 times. In order to prevent infinite loops between Workers, the [CF-EW-Via](https://developers.cloudflare.com/fundamentals/reference/http-headers/#cf-ew-via) header's value is an integer that indicates how many invocations are left. Every time a Worker is invoked, the integer will decrement by 1. If the count reaches zero, a [1019](#error-pages-generated-by-workers) error is returned.
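As an illustration, a Worker that fans out to other Workers could inspect `CF-EW-Via` itself and fail fast before hitting the limit. The header name and countdown semantics are from the docs above, but the guard logic and status code here are a hedged sketch, not an official pattern:

```javascript
// Sketch: refuse to recurse further when few invocations remain.
// In a real Worker module this object would be the default export.
const worker = {
  async fetch(request) {
    const via = request.headers.get("CF-EW-Via");
    // Per the docs, the header counts down from 16 toward zero.
    const remaining = via === null ? 16 : Number.parseInt(via, 10);
    if (remaining <= 1) {
      // Return a controlled error instead of Cloudflare's generic 1019 page.
      return new Response("Refusing to recurse further", { status: 508 });
    }
    return new Response("OK");
  },
};
```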

### "The script will never generate a response" errors

Some requests may return a 1101 error with `The script will never generate a response` in the error message. This occurs when the Workers runtime detects that all the code associated with the request has executed and no events are left in the event loop, but a Response has not been returned.

#### Cause 1: Unresolved Promises

This is most commonly caused by relying on a Promise that is never resolved or rejected, which is required to return a Response. To debug, look for Promises within your code or dependencies' code that block a Response, and ensure they are resolved or rejected.

In browsers and other JavaScript runtimes, equivalent code will hang indefinitely, leading to both bugs and memory leaks. The Workers runtime throws an explicit error to help you debug.

In the example below, the Response relies on a Promise resolution that never happens. Uncommenting the `resolve` callback solves the issue.

JavaScript

```
export default {
  fetch(req) {
    let response = new Response("Example response");
    let { promise, resolve } = Promise.withResolvers();

    // If the promise is not resolved, the Workers runtime will
    // recognize this and throw an error.
    // setTimeout(resolve, 0)

    return promise.then(() => response);
  },
};
```

You can prevent this by enforcing the [no-floating-promises eslint rule ↗](https://typescript-eslint.io/rules/no-floating-promises/), which reports when a Promise is created and not properly handled.

#### Cause 2: WebSocket connections that are never closed

If a WebSocket is missing the proper code to close its server-side connection, the Workers runtime will throw a `script will never generate a response` error. In the example below, the `'close'` event from the client is not properly handled by calling `server.close()`, and the error is thrown. In order to avoid this, ensure that the WebSocket's server-side connection is properly closed via an event listener or other server-side logic.

JavaScript

```
async function handleRequest(request) {
  let webSocketPair = new WebSocketPair();
  let [client, server] = Object.values(webSocketPair);
  server.accept();

  server.addEventListener("close", () => {
    // This missing line would keep a WebSocket connection open indefinitely
    // and results in "The script will never generate a response" errors
    // server.close();
  });

  return new Response(null, {
    status: 101,
    webSocket: client,
  });
}
```

### "Illegal invocation" errors

The error message `TypeError: Illegal invocation: function called with incorrect this reference` can be a source of confusion.

This is typically caused by calling a function that calls `this`, but the value of `this` has been lost.

For example, given an object `obj` with a method `obj.foo()` whose logic relies on `this`, calling the method as `obj.foo();` ensures that `this` properly references `obj`. However, assigning the method to a variable (`const func = obj.foo;`) and then calling that variable (`func();`) results in `this` being `undefined`, because `this` is lost when the method is called as a standalone function. This is standard JavaScript behavior.
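The rule can be reproduced with a standalone sketch:

```javascript
const obj = {
  name: "obj",
  foo() {
    return this.name; // relies on `this` being obj
  },
};

obj.foo(); // "obj": called as a method, `this` is obj

const func = obj.foo;
// func(); // `this` is no longer obj here; in strict mode this throws

const bound = obj.foo.bind(obj); // re-binding restores the original `this`
bound(); // "obj"
```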

In practice, this is often seen when destructuring runtime-provided JavaScript objects that have functions relying on the presence of `this`, such as `ctx`.

The following code will error:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    // Destructuring ctx makes waitUntil lose its 'this' reference
    const { waitUntil } = ctx;
    // waitUntil errors, as it has no 'this'
    waitUntil(somePromise);

    return fetch(request);
  },
};
```

To avoid the error, either avoid destructuring or re-bind the function to its original context.

The following code will run properly:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    // Directly calling the method on ctx avoids the error
    ctx.waitUntil(somePromise);

    // Alternatively, re-binding to ctx via apply, call, or bind avoids the error
    const { waitUntil } = ctx;
    waitUntil.apply(ctx, [somePromise]);
    waitUntil.call(ctx, somePromise);
    const reboundWaitUntil = waitUntil.bind(ctx);
    reboundWaitUntil(somePromise);

    return fetch(request);
  },
};
```

### Cannot perform I/O on behalf of a different request

```
Uncaught (in promise) Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request's handler.
```

This error occurs when you attempt to share input/output (I/O) objects (such as streams, requests, or responses) created by one invocation of your Worker in the context of a different invocation.

In Cloudflare Workers, each invocation is handled independently and has its own execution context. This design ensures optimal performance and security by isolating requests from one another. When you try to share I/O objects between different invocations, you break this isolation. Since these objects are tied to the specific request they were created in, accessing them from another request's handler is not allowed and leads to the error.

This error is most commonly caused by attempting to cache an I/O object, like a [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) in global scope, and then access it in a subsequent request. For example, if you create a Worker and run the following code in local development, and make two requests to your Worker in quick succession, you can reproduce this error:

JavaScript

```
let cachedResponse = null;

export default {
  async fetch(request, env, ctx) {
    if (cachedResponse) {
      return cachedResponse;
    }
    cachedResponse = new Response("Hello, world!");
    await new Promise((resolve) => setTimeout(resolve, 5000)); // Sleep for 5s to demonstrate this particular error case
    return cachedResponse;
  },
};
```

You can fix this by instead storing only the data in global scope, rather than the I/O object itself:

JavaScript

```
let cachedData = null;

export default {
  async fetch(request, env, ctx) {
    if (cachedData) {
      return new Response(cachedData);
    }
    const response = new Response("Hello, world!");
    cachedData = await response.text();
    return new Response(cachedData, response);
  },
};
```

If you need to share state across requests, consider using [Durable Objects](https://developers.cloudflare.com/durable-objects/). If you need to cache data across requests, consider using [Workers KV](https://developers.cloudflare.com/kv/).
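For example, the cached-string pattern above could instead store the data in a KV namespace. This is a hedged sketch: `CACHE` is a hypothetical binding name you would configure in your Wrangler config, and the TTL is illustrative. The `get`/`put` calls are the standard Workers KV API:

```javascript
// In a real Worker module this object would be the default export,
// with a KV namespace bound as env.CACHE.
const worker = {
  async fetch(request, env) {
    let data = await env.CACHE.get("greeting");
    if (data === null) {
      data = "Hello, world!";
      // Cache the plain string (not an I/O object) for 60 seconds.
      await env.CACHE.put("greeting", data, { expirationTtl: 60 });
    }
    return new Response(data);
  },
};
```

Because only plain data crosses request boundaries, no request-scoped I/O object is ever shared between invocations.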

## Errors on Worker upload

These errors occur when a Worker is uploaded or modified.

| Error code | Meaning                                                                                                                                                      |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 10006      | Could not parse your Worker's code.                                                                                                                          |
| 10007      | Worker or [workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) not found.                                   |
| 10015      | Account is not entitled to use Workers.                                                                                                                      |
| 10016      | Invalid Worker name.                                                                                                                                         |
| 10021      | Validation Error. Refer to [Validation Errors](https://developers.cloudflare.com/workers/observability/errors/#validation-errors-10021) for details.         |
| 10026      | Could not parse request body.                                                                                                                                |
| 10027      | The uploaded Worker exceeded the [Worker size limits](https://developers.cloudflare.com/workers/platform/limits/#worker-size).                               |
| 10035      | Multiple attempts to modify a resource at the same time.                                                                                                     |
| 10037      | An account has exceeded the number of [Workers allowed](https://developers.cloudflare.com/workers/platform/limits/#number-of-workers).                       |
| 10052      | A [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) is uploaded without a name.                                                    |
| 10054      | An environment variable or secret exceeds the [size limit](https://developers.cloudflare.com/workers/platform/limits/#environment-variables).                |
| 10055      | The number of environment variables or secrets exceeds the [per-Worker limit](https://developers.cloudflare.com/workers/platform/limits/#environment-variables). |
| 10056      | [Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) not found.                                                                       |
| 10068      | The uploaded Worker has no registered [event handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/).                                    |
| 10069      | The uploaded Worker contains [event handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/) unsupported by the Workers runtime.          |

### Validation Errors (10021)

The 10021 error code includes all errors that occur when you attempt to deploy a Worker and Cloudflare then attempts to load and run the top-level scope (everything that happens before your Worker's [handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/) is invoked). For example, if you attempt to deploy a broken Worker with invalid JavaScript that would throw a `SyntaxError`, Cloudflare will not deploy your Worker.

Specific error cases include but are not limited to:

#### Script startup exceeded CPU time limit

This means that you are doing work in the top-level scope of your Worker that takes more than the [startup time limit (1s)](https://developers.cloudflare.com/workers/platform/limits/#worker-startup-time) of CPU time.
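A common remedy is to defer the work from module scope into the first request. This is a sketch; the table-building code is a stand-in for whatever your expensive initialization actually is:

```javascript
// Anti-pattern: running expensive work at module scope counts against
// the 1 s startup CPU limit:
//   const table = buildExpensiveTable();

// Instead, compute lazily on first use and reuse the result,
// so startup does essentially no work.
let table = null;
function getTable() {
  if (table === null) {
    // Stand-in for expensive initialization work.
    table = Array.from({ length: 1000 }, (_, i) => i * i);
  }
  return table;
}
```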

#### Script startup exceeded memory limit

This means that you are doing work in the top-level scope of your Worker that allocates more than the [memory limit (128 MB)](https://developers.cloudflare.com/workers/platform/limits/#memory) of memory.

## Runtime errors

Runtime errors occur within the runtime, do not generate an error page, and are not visible to the end user. You can detect runtime errors by reviewing your logs.

| Error message                            | Meaning                                                                                                                                           |
| ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| Network connection lost                  | Connection failure. Catch a fetch or binding invocation and retry it.                                                                             |
| Memory limit would be exceeded before EOF | Trying to read a stream or buffer that would take you over the [memory limit](https://developers.cloudflare.com/workers/platform/limits/#memory). |
| daemonDown                               | A temporary problem invoking the Worker.                                                                                                          |

## Identify errors: Workers Metrics

To review whether your application is experiencing any downtime or returning any errors:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker and review your Worker's metrics.

### Worker Errors

The **Errors by invocation status** chart shows the number of errors broken down into the following categories:

| Error                    | Meaning                                                         |
| ------------------------ | --------------------------------------------------------------- |
| Uncaught Exception       | Your Worker code threw a JavaScript exception during execution. |
| Exceeded CPU Time Limits | Worker exceeded CPU time limit or other resource constraints.   |
| Exceeded Memory          | Worker exceeded the memory limit during execution.              |
| Internal                 | An internal error occurred in the Workers runtime.              |

The **Client disconnected by type** chart shows the number of client disconnect errors broken down into the following categories:

| Client Disconnects           | Meaning                                                                                                                                                                                                                           |
| ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Response Stream Disconnected | Connection was terminated during the deferred proxying stage of a Worker request flow. It commonly appears for longer lived connections such as [WebSockets](https://developers.cloudflare.com/workers/runtime-apis/websockets/). |
| Cancelled                    | The Client disconnected before the Worker completed its response.                                                                                                                                                                 |

## Debug exceptions with Workers Logs

[Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs) is a powerful tool for debugging your Workers. It shows all the historic logs generated by your Worker, including any uncaught exceptions that occur during execution.

To find all your errors in Workers Logs, you can use the following filter: `$metadata.error EXISTS`. This will show all the logs that have an error associated with them. You can also filter by `$workers.outcome` to find the requests that resulted in an error. For example, you can filter by `$workers.outcome = "exception"` to find all the requests that resulted in an uncaught exception.

All the possible outcome values can be found in the [Workers Trace Event](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/workers%5Ftrace%5Fevents/#outcome) reference.

## Debug exceptions from `Wrangler`

To debug your Worker with Wrangler, use `wrangler tail` to inspect and fix exceptions as they occur.

Exceptions will show up under the `exceptions` field in the JSON returned by `wrangler tail`. After you have identified the exception that is causing errors, redeploy your code with a fix, and continue tailing the logs to confirm that it is fixed.
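
If you capture the output of `wrangler tail --format json`, a small helper can pull out just the exception summaries. A sketch (the source confirms the `exceptions` field; the `name` and `message` properties assumed here may vary by Wrangler version):

```js
// Extract exception summaries from one JSON event emitted by
// `wrangler tail --format json`.
function extractExceptions(jsonLine) {
  let event;
  try {
    event = JSON.parse(jsonLine);
  } catch {
    return []; // skip lines that are not JSON events
  }
  return (event.exceptions ?? []).map((e) => `${e.name}: ${e.message}`);
}
```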

## Set up a 3rd party logging service

A Worker can make HTTP requests to any HTTP service on the public Internet. You can use a service like [Sentry ↗](https://sentry.io) to collect error logs from your Worker, by making an HTTP request to the service to report the error. Refer to your service’s API documentation for details on what kind of request to make.

When using an external logging strategy, remember that outstanding asynchronous tasks are canceled as soon as a Worker finishes sending its main response body to the client. To ensure that a logging subrequest completes, pass the request promise to [event.waitUntil() ↗](https://developer.mozilla.org/en-US/docs/Web/API/ExtendableEvent/waitUntil). For example:

Module Worker

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    function postLog(data) {
      return fetch("https://log-service.example.com/", {
        method: "POST",
        body: data,
      });
    }

    try {
      return await fetch(request);
    } catch (err) {
      // Without ctx.waitUntil(), the `postLog` subrequest may be
      // canceled before it completes.
      ctx.waitUntil(postLog(err.stack));
      throw err;
    }
  },
};
```

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

Service Worker

JavaScript

```
addEventListener("fetch", (event) => {
  event.respondWith(handleEvent(event));
});

async function handleEvent(event) {
  try {
    return await fetch(event.request);
  } catch (err) {
    // Without event.waitUntil(), the `postLog` subrequest may be
    // canceled before it completes.
    event.waitUntil(postLog(err.stack));
    throw err;
  }
}

function postLog(data) {
  return fetch("https://log-service.example.com/", {
    method: "POST",
    body: data,
  });
}
```

## Collect and persist Wasm core dumps

Configure the [Wasm Coredump Service ↗](https://github.com/cloudflare/wasm-coredump) to collect coredumps from your Rust Workers applications and persist them to logs, Sentry, or R2 for analysis with [wasmgdb ↗](https://github.com/xtuc/wasm-coredump/tree/main/bin/wasmgdb). Read the [blog post ↗](https://blog.cloudflare.com/wasm-coredumps/) for more details.

## Go to origin on error

By using [event.passThroughOnException](https://developers.cloudflare.com/workers/runtime-apis/context/#passthroughonexception), a Workers application will forward requests to your origin if an exception is thrown during the Worker's execution. This allows you to add logging, tracking, or other features with Workers, without degrading your application's functionality.

Module Worker

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    ctx.passThroughOnException();
    // An error here will return the origin response, as if the Worker wasn't present.
    return fetch(request);
  },
};
```

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

Service Worker

JavaScript

```
addEventListener("fetch", (event) => {
  event.passThroughOnException();
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // An error here will return the origin response, as if the Worker wasn't present.
  // ...
  return fetch(request);
}
```

## Related resources

* [Log from Workers](https://developers.cloudflare.com/workers/observability/logs/) \- Learn how to log your Workers.
* [Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/) \- Learn how to push Workers Trace Event Logs to supported destinations.
* [RPC error handling](https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/) \- Learn how to handle errors from remote-procedure calls.


---

---
title: Exporting OpenTelemetry Data
description: Cloudflare Workers supports exporting OpenTelemetry (OTel)-compliant telemetry data to any destination with an available OTel endpoint, allowing you to integrate with your existing monitoring and observability stack.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Exporting OpenTelemetry Data

Cloudflare Workers supports exporting OpenTelemetry (OTel)-compliant telemetry data to any destination with an available OTel endpoint, allowing you to integrate with your existing monitoring and observability stack.

### Supported telemetry types

You can export the following types of telemetry data:

* **Traces** \- Traces showing request flows through your Worker and connected services
* **Logs** \- Application logs including `console.log()` output and system-generated logs

**Note**: Exporting Worker metrics and custom metrics is not yet supported.

### Available OpenTelemetry destinations

Below are common OTLP endpoint formats for popular observability providers. Refer to your provider's documentation for specific details and authentication requirements.

| Provider                                                                                                                 | Traces Endpoint                                             | Logs Endpoint                                                |
| ------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------- | ------------------------------------------------------------ |
| [**Honeycomb**](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/honeycomb/)         | https://api.honeycomb.io/v1/traces                          | https://api.honeycomb.io/v1/logs                             |
| [**Grafana Cloud**](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/grafana-cloud/) | https://otlp-gateway-{region}.grafana.net/otlp/v1/traces    | https://otlp-gateway-{region}.grafana.net/otlp/v1/logs       |
| [**Firetiger** ↗](https://docs.firetiger.com/ingest/cloudflare-workers.html)                                             | https://ingest.cloud.firetiger.com/v1/traces                | https://ingest.cloud.firetiger.com/v1/logs                   |
| [**Axiom**](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/axiom/)                 | https://api.axiom.co/v1/traces                              | https://api.axiom.co/v1/logs                                 |
| [**Sentry**](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/sentry/)               | https://{HOST}/api/{PROJECT\_ID}/integration/otlp/v1/traces | https://{HOST}/api/{PROJECT\_ID}/integration/otlp/v1/logs    |
| [**Datadog** ↗](https://docs.datadoghq.com/opentelemetry/setup/otlp%5Fingest/)                                           | Coming soon, pending release from Datadog                   | https://otlp.{SITE}.datadoghq.com/v1/logs                    |
| [**Splunk Observability** ↗](https://dev.splunk.com/observability/reference/api/ingest%5Fdata/latest)                    | https://ingest.{REALM}.signalfx.com/v2/trace/otlp           | N/A                                                          |
| [**Splunk Platform** ↗](https://github.com/splunk/splunk-connect-for-otlp)                                               | http://splunk.internal:4318/v1/traces                       | http://splunk.internal:4318/v1/logs                          |

Authentication

Most providers require authentication headers. Refer to your provider's documentation for specific authentication requirements.

## Setting up OpenTelemetry-compatible destinations

To start sending data to your destination, you'll need to create a destination in the Cloudflare dashboard.

### Creating a destination

![Observability Destinations dashboard showing configured destinations for Grafana and Honeycomb with their respective endpoints and status](https://developers.cloudflare.com/_astro/destinations.B-CW_OSI_Z1IImW8.webp) 
1. In the Cloudflare dashboard, go to your account's [Workers Observability ↗](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section
2. Click **Add destination**.
3. Configure your destination:  
   * **Destination Name** \- A descriptive name (e.g., "Grafana-tracing", "Honeycomb-Logs")  
   * **Destination Type** \- Choose between "Traces" or "Logs"  
   * **OTLP Endpoint** \- The URL where your observability platform accepts OTLP data.  
   * **Custom Headers** (Optional) - Any authentication headers or other provider-required headers
4. Save your destination
![Edit Destination dialog showing configuration for Honeycomb tracing with destination name, type selection, OTLP endpoint, and custom headers](https://developers.cloudflare.com/_astro/destination-setup.B8cxx8yd_Z127o0L.webp) 

## Enabling OpenTelemetry export for your Worker

After setting up destinations in the dashboard, configure your Worker to export telemetry data by updating your Wrangler configuration. The destination name in your configuration file must match the destination name configured in the dashboard.

wrangler.jsonc

```
{
  "observability": {
    "traces": {
      "enabled": true,
      "destinations": ["tracing-destination-name"],

      // traces sample rate of 5%
      "head_sampling_rate": 0.05,

      // (optional) set to false to only export traces to your
      // destination without persisting them in the Cloudflare dashboard
      "persist": false
    },
    "logs": {
      "enabled": true,
      "destinations": ["logs-destination-name"],

      // logs sample rate of 60%
      "head_sampling_rate": 0.6,

      // (optional) set to false to only export logs to your
      // destination without persisting them in the Cloudflare dashboard
      "persist": false
    }
  }
}
```

wrangler.toml

```
[observability.traces]
enabled = true
destinations = [ "tracing-destination-name" ]
head_sampling_rate = 0.05
persist = false

[observability.logs]
enabled = true
destinations = [ "logs-destination-name" ]
head_sampling_rate = 0.6
persist = false
```

`persist` and pricing

By default, `persist` is `true`, which means logs and traces are both exported to your destination and stored in the Cloudflare dashboard. Dashboard storage is billed [separately](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#pricing). Set `persist` to `false` if you only need data in your external destination.

Once you have updated your Wrangler configuration file, redeploy your Worker for the new configuration to take effect. Note that it may take a few minutes for events to reach your destination.

## Destination status

After creating a destination, you can monitor its health and delivery status in the Cloudflare dashboard. Each destination displays a status indicator that shows how recently data was successfully delivered.

### Status indicators

| Status                  | Description                                                             | Troubleshooting                                                                                   |
| ----------------------- | ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| **Last: n minutes ago** | Data was recently delivered successfully.                               |                                                                                                   |
| **Never run**           | No data has been delivered to this destination.                         | • Check that your Worker is receiving traffic • Review sampling rates (low rates generate less data) |
| **Error**               | An error occurred while attempting to deliver data to this destination. | • Verify that the OTLP endpoint URL is correct • Check that authentication headers are valid       |

## Limits and pricing

Exporting OTel data is currently **free** for accounts on a Workers Paid subscription or higher during the early beta period. However, starting on **March 1, 2026**, tracing will be billed as part of your usage on the Workers Paid plan or contract.

This includes the following limits and pricing:

| Plan             | Traces                               | Logs                                 | Pricing                             |
| ---------------- | ------------------------------------ | ------------------------------------ | ----------------------------------- |
| **Workers Free** | Not available                        | Not available                        | \-                                  |
| **Workers Paid** | 10 million events per month included | 10 million events per month included | $0.05 per million additional events |
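
As a worked example of how the overage math plays out under this pricing (a sketch based on the figures above; confirm against the pricing page before budgeting):

```js
// Estimate the monthly cost of exported events on Workers Paid,
// given 10 million included events and $0.05 per additional million.
function estimateEventCost(eventsPerMonth) {
  const includedEvents = 10_000_000;
  const ratePerMillion = 0.05;
  const billableEvents = Math.max(0, eventsPerMonth - includedEvents);
  return (billableEvents / 1_000_000) * ratePerMillion;
}

// e.g. 25 million events a month leaves 15 million billable,
// or roughly $0.75.
```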

## Known limitations

OpenTelemetry data export is currently in beta. Please be aware of the following limitations:

* **Metrics export not yet supported**: Exporting Worker infrastructure metrics and custom metrics via OpenTelemetry is not currently available. We are actively working to add metrics support in the future.
* **Limited OTLP support from some providers**: Some observability providers are still rolling out OTLP endpoint support. Check the [Available OpenTelemetry destinations](#available-opentelemetry-destinations) table above for current availability.


---

---
title: Export to Axiom
description: Axiom is a serverless log analytics platform that helps you store, search, and analyze massive amounts of data. By exporting your Cloudflare Workers application telemetry to Axiom, you can:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Export to Axiom

Axiom is a serverless log analytics platform that helps you store, search, and analyze massive amounts of data. By exporting your Cloudflare Workers application telemetry to Axiom, you can:

* Store and query logs and traces at scale
* Create dashboards and alerts to monitor your Workers
![Trace view with timing information displayed on a timeline](https://developers.cloudflare.com/_astro/axiom-example.BRPbEoGh_IlBGJ.webp) 

This guide will walk you through exporting OpenTelemetry-compliant traces and logs from your Cloudflare Worker application to Axiom.

## Prerequisites

Before you begin, ensure you have:

* An active [Axiom account ↗](https://app.axiom.co/register) (free tier available)
* A deployed Worker that you want to monitor
* An Axiom dataset to send data to

## Step 1: Create a dataset

If you don't already have a dataset to send data to:

1. Log in to your [Axiom account ↗](https://app.axiom.co/)
2. Navigate to **Datasets** in the left sidebar
3. Click **New Dataset**
4. Enter a name (e.g. `cloudflare-workers-otel`)
5. Click **Create Dataset**

## Step 2: Get your Axiom API token and dataset

1. Navigate to **Settings** in the left sidebar
2. Click on **API Tokens**
3. Click **Create API Token**
4. Configure your API token:  
   * **Name**: Enter a descriptive name (e.g., `cloudflare-workers-otel`)  
   * **Permissions**: Select **Ingest** permission (required for sending telemetry data)  
   * **Datasets**: Choose which datasets this token can write to, or select **All Datasets**
5. Click **Create**
6. **Important**: Copy the API token immediately and store it securely; you won't be able to see it again

The API token will look something like: `xaat-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`

## Step 3: Configure Cloudflare destinations

Now you'll create destinations in the Cloudflare dashboard that point to Axiom.

### Axiom OTLP endpoints

Axiom provides separate OTLP endpoints for traces and logs:

* **Traces**: `https://api.axiom.co/v1/traces`
* **Logs**: `https://api.axiom.co/v1/logs`

### Configure trace or logs destination

1. Navigate to your Cloudflare account's [Workers Observability ↗](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section
2. Click **Add destination**
3. Configure your trace destination:  
   * **Destination Name**: `axiom-traces` (or any descriptive name)  
   * **Destination Type**: Select **Traces**  
   * **OTLP Endpoint**: `https://api.axiom.co/v1/traces` (or `/v1/logs`)  
   * **Custom Headers**: Add two required headers:  
         * Authentication header  
                  * Header name: `Authorization`  
                  * Header value: `Bearer <your-api-token>`  
         * Dataset header:  
                  * Header name: `X-Axiom-Dataset`  
                  * Header value: Your dataset name (e.g., `cloudflare-workers-otel`)
4. Click **Save**
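
The two custom headers above are the same ones any OTLP client would send to Axiom. A small helper that assembles them (a sketch; the token and dataset values are placeholders, and the `Content-Type` header is an assumption appropriate for OTLP/JSON requests):

```js
// Build the headers Axiom's OTLP endpoints expect
// (apiToken and dataset are placeholders for your own values).
function axiomHeaders(apiToken, dataset) {
  return {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${apiToken}`,
    "X-Axiom-Dataset": dataset,
  };
}
```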

## Step 4: Configure your Worker

With your destinations created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export.

wrangler.jsonc

```
{
  "observability": {
    "traces": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["axiom-traces"]
    },
    "logs": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["axiom-logs"]
    }
  }
}
```

wrangler.toml

```
[observability.traces]
enabled = true
destinations = [ "axiom-traces" ]

[observability.logs]
enabled = true
destinations = [ "axiom-logs" ]
```

After updating your configuration, deploy your Worker for the changes to take effect.

Note

It may take a few minutes after deployment for data to appear in Axiom.


---

---
title: Export to Grafana Cloud
description: Grafana Cloud is a fully managed observability platform that provides visualization, alerting, and analytics for your telemetry data. By exporting your Cloudflare Workers telemetry to Grafana Cloud, you can:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Export to Grafana Cloud

Grafana Cloud is a fully managed observability platform that provides visualization, alerting, and analytics for your telemetry data. By exporting your Cloudflare Workers telemetry to Grafana Cloud, you can:

* Visualize distributed traces in **Grafana Tempo** to understand request flows and performance bottlenecks
* Query and analyze logs in **Grafana Loki** alongside your traces

This guide will walk you through configuring Cloudflare Workers to export OpenTelemetry-compliant traces and logs to your Grafana Cloud stack.

![Grafana Tempo trace view showing a distributed trace for a service with multiple spans including fetch requests, durable object subrequests, and queue operations, with timing information displayed on a timeline](https://developers.cloudflare.com/_astro/grafana-traces.CuFntNVO_1VEu9k.webp) 

## Prerequisites

Before you begin, ensure you have:

* An active [Grafana Cloud account ↗](https://grafana.com/auth/sign-up/create-user) (free tier available)
* A deployed Worker that you want to monitor

## Step 1: Access the OpenTelemetry setup guide

1. Log in to your [Grafana Cloud portal ↗](https://grafana.com/)
2. From your organization's home page, navigate to **Connections** → **Add new connection**
3. Search for "OpenTelemetry" and select **OpenTelemetry (OTLP)**
4. Select **Quickstart** then select **JavaScript**
5. Click **Create a new token**
6. Enter a name for your token (e.g., `cloudflare-workers-otel`) and click **create token**
7. Click on **Close** without copying the token
8. Copy and save the values of `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` from the `Environment variables` code block; you will use them as the OTLP endpoint and the auth header value, respectively

## Step 2: Set up destination

1. Navigate to your Cloudflare account's [Workers Observability ↗](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section
2. Click **Add destination** and configure a destination name (e.g. `grafana-traces`)
3. From Grafana, copy your OTel endpoint, auth header name, and auth header value
* Your OTel endpoint will look like `https://otlp-gateway-prod-us-east-2.grafana.net/otlp` (append `/v1/traces` for traces and `/v1/logs` for logs)
* Your custom header should include:  
   * Your auth header name `Authorization`  
   * Your auth header value `Basic MTMxxx...`
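
The `Basic` value Grafana generates is a base64 encoding of `instanceId:token`. If you ever need to reconstruct it yourself, a sketch (the instance ID and token are placeholders):

```js
// Rebuild the Grafana Cloud OTLP auth header value from an
// instance ID and API token (both placeholders here).
function grafanaBasicAuth(instanceId, apiToken) {
  // btoa is available in Workers and in modern Node.
  return `Basic ${btoa(`${instanceId}:${apiToken}`)}`;
}
```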

## Step 3: Configure your Worker

With your destination created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export.

wrangler.jsonc

```
{
  "observability": {
    "traces": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["grafana-traces"]
    },
    "logs": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["grafana-logs"]
    }
  }
}
```

wrangler.toml

```
[observability.traces]
enabled = true
destinations = [ "grafana-traces" ]

[observability.logs]
enabled = true
destinations = [ "grafana-logs" ]
```

After updating your configuration, deploy your Worker for the changes to take effect.

Note

It may take a few minutes after deployment for data to appear in Grafana Cloud.


---

---
title: Export to Honeycomb
description: Honeycomb is an observability platform built for high-cardinality data that helps you understand and debug your applications. By exporting your Cloudflare Workers application telemetry to Honeycomb, you can:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Export to Honeycomb

Honeycomb is an observability platform built for high-cardinality data that helps you understand and debug your applications. By exporting your Cloudflare Workers application telemetry to Honeycomb, you can:

* Visualize traces to understand request flows and identify performance bottlenecks
* Query and analyze logs with unlimited dimensionality across any attribute
* Create custom queries and dashboards to monitor your Workers
![Trace view including POST request, fetch operations, durable object subrequest, and queue send, with timing information displayed on a timeline](https://developers.cloudflare.com/_astro/honeycomb-example.cEkEF1c4_Z52f1D.webp) 

This guide will walk you through configuring your Cloudflare Worker application to export OpenTelemetry-compliant traces and logs to Honeycomb.

## Prerequisites

Before you begin, ensure you have:

* An active [Honeycomb account ↗](https://ui.honeycomb.io/signup) (free tier available)
* A deployed Worker that you want to monitor

## Step 1: Get your Honeycomb API key

1. Log in to your [Honeycomb account ↗](https://ui.honeycomb.io/)
2. Navigate to your account settings by clicking on your profile icon in the top right
3. Select **Team Settings**
4. In the left sidebar, click **Environments** and click the gear icon
5. Find your environment (e.g., `production`, `test`) or create a new one
6. Under **API Keys**, click **Create Ingest API Key**
7. Configure your API key:  
   * **Name**: Enter a descriptive name (e.g., `cloudflare-workers-otel`)  
   * **Permissions**: Select **Can create services/datasets** (required for OTLP ingestion)
8. Click **Create**
9. **Important**: Copy the API key immediately and store it securely; you won't be able to see it again

The API key will look something like: `hcaik_01hq...`

## Step 2: Configure Cloudflare destinations

Now you'll create destinations in the Cloudflare dashboard that point to Honeycomb.

### Honeycomb OTLP endpoints

Honeycomb provides separate OTLP endpoints for traces and logs:

* **Traces**: `https://api.honeycomb.io/v1/traces`
* **Logs**: `https://api.honeycomb.io/v1/logs`

### Configure trace destination

1. Navigate to your Cloudflare account's [Workers Observability ↗](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section
2. Click **Add destination**
3. Configure your trace destination:  
   * **Destination Name**: `honeycomb-traces` (or any descriptive name)  
   * **Destination Type**: Select **Traces**  
   * **OTLP Endpoint**: `https://api.honeycomb.io/v1/traces`  
   * **Custom Headers**: Add the authentication header:  
         * Header name: `x-honeycomb-team`  
         * Header value: Your Honeycomb API key (e.g., `hcaik_01hq...`)
4. Click **Save**

### Configure logs destination

Repeat the process for logs:

1. Click **Add destination** again
2. Configure your logs destination:  
   * **Destination Name**: `honeycomb-logs` (or any descriptive name)  
   * **Destination Type**: Select **Logs**  
   * **OTLP Endpoint**: `https://api.honeycomb.io/v1/logs`  
   * **Custom Headers**: Add the authentication header:  
         * Header name: `x-honeycomb-team`  
         * Header value: Your Honeycomb API key (same as above)
3. Click **Save**

## Step 3: Configure your Worker

With your destinations created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export.

wrangler.jsonc

```
{
  "observability": {
    "traces": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["honeycomb-traces"]
    },
    "logs": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["honeycomb-logs"]
    }
  }
}
```

wrangler.toml

```
[observability.traces]
enabled = true
destinations = [ "honeycomb-traces" ]

[observability.logs]
enabled = true
destinations = [ "honeycomb-logs" ]
```

After updating your configuration, deploy your Worker for the changes to take effect.

Note

It may take a few minutes after deployment for data to appear in Honeycomb.
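Assuming custom `console.log()` output is captured like other Workers logs, a minimal producer Worker for generating test telemetry could look like the sketch below (the route and messages are illustrative):

```js
// Minimal sketch of a Worker that emits a custom log on every request.
// Once deployed with the observability settings above, these messages
// should appear in the configured logs destination.
const worker = {
  fetch(request) {
    const path = new URL(request.url).pathname;
    console.log("telemetry test", { path }); // exported as a log record
    return new Response("ok");
  },
};
// export default worker; // uncomment in your Worker module
```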


---

---
title: Export to Sentry
description: Sentry is a software monitoring tool that helps developers identify and debug performance issues and errors. From end-to-end distributed tracing to performance monitoring, Sentry provides code-level observability that makes it easy to diagnose issues and learn continuously about your application code health across systems and services. By exporting your Cloudflare Workers application telemetry to Sentry, you can:
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Export to Sentry

Sentry is a software monitoring tool that helps developers identify and debug performance issues and errors. From end-to-end distributed tracing to performance monitoring, Sentry provides code-level observability that makes it easy to diagnose issues and learn continuously about your application code health across systems and services. By exporting your Cloudflare Workers application telemetry to Sentry, you can:

* Query logs and traces in Sentry
* Create custom alerts and dashboards to monitor your Workers
![Sentry trace view with timing information displayed on a timeline](https://developers.cloudflare.com/_astro/sentry-example.DU-HO2rh_20ehfq.webp) 

This guide walks you through exporting OpenTelemetry-compliant traces and logs to Sentry from your Cloudflare Workers application.

## Prerequisites

Before you begin, ensure you have:

* A [Sentry account ↗](https://sentry.io/signup/) (free tier available)
* A deployed Worker that you want to monitor

## Step 1: Create a Sentry project

If you don't already have a Sentry project to send data to, you'll need to create one to start sending Cloudflare Workers application telemetry to Sentry.

1. Log in to your [Sentry account ↗](https://sentry.io/)
2. Navigate to **Insights** > **Projects** in the navigation sidebar to open a list of your projects.
3. Click [**New Project** ↗](https://sentry.io/orgredirect/organizations/:orgslug/insights/projects/new/)
4. Fill out the project creation form and click **Create Project** to complete the process.

## Step 2: Get your Sentry OTLP endpoints

Sentry provides separate OTLP endpoints for traces and logs, which you can use to send your telemetry data to Sentry:

* **Traces**: `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces`
* **Logs**: `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs`

You can find your OTLP endpoints in your project settings:

1. Go to the [Settings > Projects ↗](https://sentry.io/orgredirect/organizations/:orgslug/settings/projects/) page in Sentry.
2. Select your project from the list and click on the project name to open the project settings.
3. Go to the "Client Keys (DSN)" sub-page for this project under the "SDK Setup" heading.

There you'll find your Sentry project's OTLP logs and OTLP traces endpoints, as well as authentication headers for the endpoints. Make sure to copy the endpoints and authentication headers.

For more details on how to use Sentry's OTLP endpoints, refer to [Sentry's OTLP documentation ↗](https://docs.sentry.io/concepts/otlp/).

## Step 3: Set up destination in the Cloudflare dashboard

To set up a destination in the Cloudflare dashboard, navigate to your Cloudflare account's [Workers Observability ↗](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section. Then click **Add destination** and configure either a traces or logs destination.

### Traces Destination

To configure your traces destination, click **Add destination** and configure the following:

* **Destination Name**: `sentry-traces` (or any descriptive name)
* **Destination Type**: Select **Traces**
* **OTLP Endpoint**: Your Sentry OTLP traces endpoint (e.g., `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces`)
* **Custom Headers**: Add the Sentry authentication header:  
   * Header name: `x-sentry-auth`  
   * Header value: `sentry sentry_key={SENTRY_PUBLIC_KEY}` where `{SENTRY_PUBLIC_KEY}` is your Sentry project's public key

### Logs destination

To configure your logs destination, click **Add destination** and configure the following:

* **Destination Name**: `sentry-logs` (or any descriptive name)
* **Destination Type**: Select **Logs**
* **OTLP Endpoint**: Your Sentry OTLP logs endpoint (e.g., `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs`)
* **Custom Headers**: Add the Sentry authentication header:  
   * Header name: `x-sentry-auth`  
   * Header value: `sentry sentry_key={SENTRY_PUBLIC_KEY}` where `{SENTRY_PUBLIC_KEY}` is your Sentry project's public key
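Both destinations can be derived from a project DSN, which by Sentry convention has the form `https://{PUBLIC_KEY}@{HOST}/{PROJECT_ID}`. A hypothetical helper to compute the endpoint and header:

```js
// Hypothetical helper: derive destination settings from a Sentry DSN of the
// form https://{PUBLIC_KEY}@{HOST}/{PROJECT_ID}. `signal` is "traces" or "logs".
function destinationFromDsn(dsn, signal) {
  const url = new URL(dsn);
  const projectId = url.pathname.replace(/^\//, "");
  return {
    endpoint: `https://${url.host}/api/${projectId}/integration/otlp/v1/${signal}`,
    header: {
      name: "x-sentry-auth",
      value: `sentry sentry_key=${url.username}`, // the DSN's public key
    },
  };
}
```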

## Step 4: Configure your Worker

With your destinations created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export.

**wrangler.jsonc**

```jsonc
{
  "observability": {
    "traces": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["sentry-traces"]
    },
    "logs": {
      "enabled": true,
      // Must match the destination name in the dashboard
      "destinations": ["sentry-logs"]
    }
  }
}
```

**wrangler.toml**

```toml
[observability.traces]
enabled = true
destinations = [ "sentry-traces" ]

[observability.logs]
enabled = true
destinations = [ "sentry-logs" ]
```

After updating your configuration, deploy your Worker for the changes to take effect.

Note

It may take a few minutes after deployment for data to appear in Sentry.


---

---
title: Logs
description: Logs are an important part of a developer's toolkit for troubleshooting and diagnosing application issues and maintaining system health. The Cloudflare Developer Platform offers many tools to help developers manage their application's logs.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Logs

Logs are an important part of a developer's toolkit for troubleshooting and diagnosing application issues and maintaining system health. The Cloudflare Developer Platform offers many tools to help developers manage their application's logs.

## [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs)

Automatically ingest, filter, and analyze logs emitted from Cloudflare Workers in the Cloudflare dashboard.

## [Real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs)

Access log events in near real-time. Real-time logs provide immediate feedback and visibility into the health of your Cloudflare Worker.

## [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers) Beta

Tail Workers allow developers to apply custom filtering, sampling, and transformation logic to telemetry data.

## [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush)

Send Workers Trace Event Logs to a supported destination. Workers Logpush includes metadata about requests and responses, unstructured `console.log()` messages and any uncaught exceptions.

## Video Tutorial


---

---
title: Workers Logpush
description: Send Workers Trace Event Logs to a supported third party, such as a storage or logging provider.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Workers Logpush

[Cloudflare Logpush](https://developers.cloudflare.com/logs/logpush/) supports the ability to send [Workers Trace Event Logs](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/workers%5Ftrace%5Fevents/) to a [supported destination](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/). Workers Trace Events Logpush includes metadata about requests and responses, unstructured `console.log()` messages, and any uncaught exceptions. This product is available on the Workers Paid plan. For pricing information, refer to [Pricing](https://developers.cloudflare.com/workers/platform/pricing/#workers-trace-events-logpush).

Warning

Workers Trace Events Logpush is not available for zones on the [Cloudflare China Network](https://developers.cloudflare.com/china-network/).

## Verify your Logpush access

Wrangler version

Minimum required Wrangler version: 2.2.0. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

To configure a Logpush job, verify that your Cloudflare account role can use Logpush. To check your role:

1. In the Cloudflare dashboard, go to the **Members** page.  
[ Go to **Members** ](https://dash.cloudflare.com/?to=/:account/members)
2. Check your account permissions. Roles with Logpush configuration access are different from Workers permissions. Super Administrators, Administrators, and the Log Share roles have full access to Logpush.

Alternatively, create a new [API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) scoped at the Account level with Logs Edit permissions.

## Create a Logpush job

### Via the Cloudflare dashboard

To create a Logpush job in the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Logpush** page.  
[ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)
2. Select **Create a Logpush job**.
3. Select a destination and configure it, if needed.
4. Select **Workers trace events** as the data set > **Next**.
5. If needed, customize your data fields. Otherwise, select **Next**.
6. Follow the instructions on the dashboard to verify ownership of your data's destination and complete job creation.

### Via cURL

The following example sends Workers logs to R2. For more configuration options, refer to [Enable destinations](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/) and [API configuration](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/) in the Logs documentation.

Terminal window

```sh
curl "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logpush/jobs" \
--header 'X-Auth-Key: <API_KEY>' \
--header 'X-Auth-Email: <EMAIL>' \
--header 'Content-Type: application/json' \
--data '{
  "name": "workers-logpush",
  "output_options": {
    "field_names": ["Event", "EventTimestampMs", "Outcome", "Exceptions", "Logs", "ScriptName"]
  },
  "destination_conf": "r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>",
  "dataset": "workers_trace_events",
  "enabled": true
}' | jq .
```

In Logpush, you can configure [filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) and a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) to have more control of the volume of data that is sent to your configured destination. For example, if you only want to receive logs for requests that did not result in an exception, add the following `filter` JSON property below `output_options`:

`"filter":"{\"where\": {\"key\":\"Outcome\",\"operator\":\"!eq\",\"value\":\"exception\"}}"`
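The escaped `filter` value is just a serialized JSON object. A small sketch (the helper name is hypothetical) showing how it is produced:

```js
// Hypothetical helper: build the Logpush "filter" property from a single
// condition, producing the escaped-JSON string the API expects.
function buildFilter(key, operator, value) {
  return JSON.stringify({ where: { key, operator, value } });
}

// E.g. keep only events that did not end in an exception:
const filter = buildFilter("Outcome", "!eq", "exception");
```

The resulting string is placed as the value of the `"filter"` property in the job payload.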

## Enable logging on your Worker

### Local development

Enable logging on your Worker by adding a new property, `logpush = true`, to your Wrangler file. This can be added either in the top-level configuration or under an [environment](https://developers.cloudflare.com/workers/wrangler/environments/). Any new Workers with this property will automatically get picked up by the Logpush job.

**wrangler.jsonc**

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  // Top-level configuration
  "name": "my-worker",
  "main": "src/index.js",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "workers_dev": false,
  "logpush": true,
  "route": {
    "pattern": "example.org/*",
    "zone_name": "example.org"
  }
}
```

**wrangler.toml**

```toml
#:schema ./node_modules/wrangler/config-schema.json
name = "my-worker"
main = "src/index.js"
# Set this to today's date
compatibility_date = "2026-04-03"
workers_dev = false
logpush = true

[route]
pattern = "example.org/*"
zone_name = "example.org"
```

Alternatively, configure `logpush` via the multipart script upload API:

Terminal window

```sh
curl --request PUT \
"https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts/{script_name}" \
--header "Authorization: Bearer <API_TOKEN>" \
--form 'metadata={"main_module": "my-worker.js", "logpush": true}' \
--form '"my-worker.js"=@./my-worker.js;type=application/javascript+module'
```

### Dashboard

To enable Logpush logging via the dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker.
3. Go to **Settings** \> **Observability**.
4. For **Logpush**, select **Enable** (this is only available if you have already [created a logpush job](https://developers.cloudflare.com/workers/observability/logs/logpush/#create-a-logpush-job)).

## Limits

The `logs` and `exceptions` fields have a combined limit of 16,384 characters before fields start being truncated. Characters are counted in the order of all `exception.name`s, `exception.message`s, and then `log.message`s.

Once that character limit is reached, the current field is truncated with `"<<<Logpush: *field* truncated>>>"` for one message, and any remaining logs or exceptions are dropped.

### Example

To illustrate this, suppose our Logpush event looks like the JSON below and the limit is 50 characters (rather than the actual limit of 16,384). The algorithm will:

1. Count the characters in `exception.name`s:  
   1. `"SampleError"` and `"AuthError"` are counted as 20 characters.
2. Count the characters in `exception.message`s:  
   1. `"something went wrong"` is counted as 20 characters, leaving 10 characters remaining.  
   2. The first 10 characters of `"unable to process request authentication from client"` are taken and counted before the message is truncated to `"unable to <<<Logpush: exception messages truncated>>>"`.
3. Count the characters in `log.message`s:  
   1. Truncation has already begun, so `"Hello "` is replaced with `"<<<Logpush: messages truncated>>>"` and `"World!"` is dropped.
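The counting above can be sketched in JavaScript (an illustration of the documented behavior, not Cloudflare's actual implementation):

```js
// Illustration of the documented truncation order: exception names, then
// exception messages, then log messages, against one shared character budget.
function truncateEvent(event, limit = 16384) {
  let remaining = limit;
  let truncated = false; // flips once any field has been cut

  // 1. Exception names are counted first.
  for (const ex of event.Exceptions) remaining -= ex.Name.length;

  // 2. Exception messages: the first over-budget message is cut and marked;
  //    anything after that is dropped.
  const Exceptions = [];
  for (const ex of event.Exceptions) {
    if (truncated) break;
    if (ex.Message.length <= remaining) {
      remaining -= ex.Message.length;
      Exceptions.push({ ...ex });
    } else {
      truncated = true;
      Exceptions.push({
        ...ex,
        Message:
          ex.Message.slice(0, Math.max(remaining, 0)) +
          "<<<Logpush: exception messages truncated>>>",
      });
    }
  }

  // 3. Log messages: once truncation has begun, the next message becomes a
  //    marker and the rest are dropped.
  const Logs = [];
  let markerEmitted = false;
  for (const log of event.Logs) {
    const text = log.Message.join("");
    if (!truncated && text.length <= remaining) {
      remaining -= text.length;
      Logs.push({ ...log });
    } else if (!markerEmitted) {
      truncated = true;
      markerEmitted = true;
      Logs.push({ ...log, Message: ["<<<Logpush: messages truncated>>>"] });
    }
  }

  return { ...event, Exceptions, Logs };
}
```

Running this over the sample input below with `limit = 50` reproduces the sample output.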

#### Sample Input

```json
{
  "Exceptions": [
    {
      "Name": "SampleError",
      "Message": "something went wrong",
      "TimestampMs": 0
    },
    {
      "Name": "AuthError",
      "Message": "unable to process request authentication from client",
      "TimestampMs": 1
    }
  ],
  "Logs": [
    {
      "Level": "log",
      "Message": ["Hello "],
      "TimestampMs": 0
    },
    {
      "Level": "log",
      "Message": ["World!"],
      "TimestampMs": 0
    }
  ]
}
```

#### Sample Output

```json
{
  "Exceptions": [
    {
      "Name": "SampleError",
      "Message": "something went wrong",
      "TimestampMs": 0
    },
    {
      "Name": "AuthError",
      "Message": "unable to <<<Logpush: exception messages truncated>>>",
      "TimestampMs": 1
    }
  ],
  "Logs": [
    {
      "Level": "log",
      "Message": ["<<<Logpush: messages truncated>>>"],
      "TimestampMs": 0
    }
  ]
}
```


---

---
title: Real-time logs
description: Debug your Worker application by accessing logs and exceptions through the Cloudflare dashboard or `wrangler tail`.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Real-time logs

With Real-time logs, you can access log events from across the globe in near real-time. Real-time logs are helpful for immediate feedback, such as verifying the status of a new deployment.

Real-time logs captures [invocation logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#invocation-logs), [custom logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#custom-logs), errors, and uncaught exceptions. For high-traffic applications, real-time logs may enter sampling mode, which means some messages will be dropped and a warning will appear in your logs.

Warning

Real-time logs are not available for zones on the [Cloudflare China Network](https://developers.cloudflare.com/china-network/).

## View logs from the dashboard

To view real-time logs associated with any deployed Worker using the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your **Worker**.
3. Select **Logs**.
4. In the right-hand navigation bar, select **Live**.

## View logs using `wrangler tail`

To view real-time logs associated with any deployed Worker using Wrangler:

1. Go to your Worker project directory.
2. Run [npx wrangler tail](https://developers.cloudflare.com/workers/wrangler/commands/general/#tail).

This will stream log events for any incoming requests to your application directly in your local terminal.

The output of each `wrangler tail` log is a structured JSON object:

```json
{
  "outcome": "ok",
  "scriptName": null,
  "exceptions": [],
  "logs": [],
  "eventTimestamp": 1590680082349,
  "event": {
    "request": {
      "url": "https://www.bytesized.xyz/",
      "method": "GET",
      "headers": {},
      "cf": {}
    }
  }
}
```

By piping the output to tools like [jq ↗](https://stedolan.github.io/jq/), you can query and manipulate the requests to look for specific information:

Terminal window

```sh
npx wrangler tail | jq .event.request.url
```

```
"https://www.bytesized.xyz/"
"https://www.bytesized.xyz/component---src-pages-index-js-a77e385e3bde5b78dbf6.js"
"https://www.bytesized.xyz/page-data/app-data.json"
```

You can customize how `wrangler tail` works to fit your needs. Refer to [the wrangler tail documentation](https://developers.cloudflare.com/workers/wrangler/commands/general/#tail) for available configuration options.

## Limits

Note

You can filter real-time logs in the dashboard or using [wrangler tail](https://developers.cloudflare.com/workers/wrangler/commands/general/#tail). If your Worker has a high volume of messages, filtering real-time logs can help prevent messages from being dropped.

* Real-time logs are not stored. To store logs, use [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs).
* If your Worker has a high volume of traffic, the real-time logs might enter sampling mode. This will cause some of your messages to be dropped and a warning to appear in your logs.
* Logs from any [Durable Objects](https://developers.cloudflare.com/durable-objects/) your Worker is using will show up in the dashboard.
* A maximum of 10 clients can view a Worker's logs at one time. This can be a combination of either dashboard sessions or `wrangler tail` calls.
* When using `wrangler tail` with [WebSocket event handlers](https://developers.cloudflare.com/workers/runtime-apis/websockets/), any `console.log` statements within those handlers are hidden until the WebSocket client closes the connection. Once the `close` is received, all messages are flushed, printing everything to the terminal at once.

## Persist logs

Logs can be persisted, filtered, and analyzed with [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs). To send logs to a third party, use [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/) or [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/).

## Related resources

* [Errors and exceptions](https://developers.cloudflare.com/workers/observability/errors/) \- Review common Workers errors.
* [Local development and testing](https://developers.cloudflare.com/workers/development-testing/) \- Develop and test your Workers locally.
* [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs) \- Collect, store, filter and analyze logging data emitted from Cloudflare Workers.
* [Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/) \- Learn how to push Workers Trace Event Logs to supported destinations.
* [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) \- Learn how to attach Tail Workers to transform your logs and send them to HTTP endpoints.
* [Source maps and stack traces](https://developers.cloudflare.com/workers/observability/source-maps) \- Learn how to enable source maps and generate stack traces for Workers.


---

---
title: Tail Workers
description: Track and log Workers on invocation by assigning a Tail Worker to your projects.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Tail Workers

A Tail Worker receives information about the execution of other Workers (known as producer Workers), such as HTTP statuses, data passed to `console.log()` or uncaught exceptions. Tail Workers can process logs for alerts, debugging, or analytics.

Tail Workers are available to all customers on the Workers Paid and Enterprise tiers. Tail Workers are billed by [CPU time](https://developers.cloudflare.com/workers/platform/pricing/#workers), not by the number of requests.

![Tail Worker diagram](https://developers.cloudflare.com/_astro/tail-workers.CaYo-ajt_1Mwmpt.webp) 

A Tail Worker is automatically invoked after each invocation of a producer Worker (the Worker containing your application logic that the Tail Worker tracks). It captures events after the producer has finished executing. Events throughout the request lifecycle, including potential sub-requests via [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) and [Dynamic Dispatch](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/), are included. You can filter events, change the format of the data, and send events to any HTTP endpoint. For quick debugging, Tail Workers can be used to send logs to [KV](https://developers.cloudflare.com/kv/api/) or any database.

Export batches of logs and traces to Sentry, Grafana, Honeycomb and more

If you are only exporting logs and errors to observability tools like Sentry, Grafana, or Honeycomb, you may not need Tail Workers at all.

Instead, you can configure your Worker to [export OpenTelemetry (OTEL) format logs and traces](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/) to these tools. Unlike Tail Workers, when you configure an OTEL destination, logs and traces are sent in batches to your destination, rather than sent after each invocation of the Worker.

You should think of Tail Workers as the advanced-mode option, for when you need to do something custom that is not built into the Workers observability platform.

## Configure Tail Workers

To configure a Tail Worker:

1. [Create a Worker](https://developers.cloudflare.com/workers/get-started/guide) to serve as the Tail Worker.
2. Add a [tail()](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/) handler to your Worker. The `tail()` handler is invoked every time the producer Worker to which a Tail Worker is connected is invoked. The following Worker code is a Tail Worker that sends its data to an HTTP endpoint:

JavaScript

```js
export default {
  async tail(events, env, ctx) {
    // waitUntil() keeps the export request alive after the handler returns.
    ctx.waitUntil(
      fetch("https://example.com/endpoint", {
        method: "POST",
        body: JSON.stringify(events),
      }),
    );
  },
};
```

The following is an example of what the `events` object may look like:

```json
[
  {
    "scriptName": "Example script",
    "outcome": "exception",
    "eventTimestamp": 1587058642005,
    "event": {
      "request": {
        "url": "https://example.com/some/requested/url",
        "method": "GET",
        "headers": {
          "cf-ray": "57d55f210d7b95f3",
          "x-custom-header-name": "my-header-value"
        },
        "cf": {
          "colo": "SJC"
        }
      }
    },
    "logs": [
      {
        "message": ["string passed to console.log()"],
        "level": "log",
        "timestamp": 1587058642005
      }
    ],
    "exceptions": [
      {
        "name": "Error",
        "message": "Threw a sample exception",
        "timestamp": 1587058642005
      }
    ],
    "diagnosticsChannelEvents": [
      {
        "channel": "foo",
        "message": "The diagnostic channel message",
        "timestamp": 1587058642005
      }
    ]
  }
]
```
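For example, to forward only failed invocations, the `tail()` handler can filter on `outcome` before exporting (a sketch; the endpoint is hypothetical):

```js
// Keep only events whose invocation did not complete successfully.
function failedEvents(events) {
  return events.filter((event) => event.outcome !== "ok");
}

const tailWorker = {
  async tail(events, env, ctx) {
    const failed = failedEvents(events);
    if (failed.length === 0) return; // nothing worth forwarding
    ctx.waitUntil(
      fetch("https://example.com/endpoint", {
        method: "POST",
        body: JSON.stringify(failed),
      }),
    );
  },
};
// export default tailWorker; // uncomment in your Worker module
```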

3. Add the following to the Wrangler file of the producer Worker:

**wrangler.jsonc**

```jsonc
{
  "tail_consumers": [
    {
      "service": "<TAIL_WORKER_NAME>"
    }
  ]
}
```

**wrangler.toml**

```toml
[[tail_consumers]]
service = "<TAIL_WORKER_NAME>"
```

Note

Workers added to the `tail_consumers` array must have a `tail()` handler defined.

## Use Analytics Engine for aggregated metrics

If you need aggregated analytics rather than individual log events, consider writing to [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) from your Tail Worker. Analytics Engine is optimized for high-cardinality, time-series data that you can query with SQL.

For example, you can use a Tail Worker to count errors by endpoint, track response times by customer, or build usage metrics, then write those aggregates to Analytics Engine for querying and visualization.

JavaScript

```js
export default {
  async tail(events, env) {
    for (const event of events) {
      env.ANALYTICS.writeDataPoint({
        blobs: [event.scriptName, event.outcome],
        doubles: [1],
        indexes: [event.event?.request?.cf?.colo ?? "unknown"],
      });
    }
  },
};
```

Refer to the [Analytics Engine documentation](https://developers.cloudflare.com/analytics/analytics-engine/) for more details on writing and querying data.

## Related resources

* [tail()](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/) Handler API docs - Learn how to set up a `tail()` handler in your Worker.
* [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) \- Write custom analytics from your Worker for high-cardinality, time-series queries.
* [Errors and exceptions](https://developers.cloudflare.com/workers/observability/errors/) \- Review common Workers errors.
* [Local development and testing](https://developers.cloudflare.com/workers/development-testing/) \- Develop and test your Workers locally.
* [Source maps and stack traces](https://developers.cloudflare.com/workers/observability/source-maps) \- Learn how to enable source maps and generate stack traces for Workers.


---

---
title: Workers Logs
description: Store, filter, and analyze log data emitted from Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers Logs

Workers Logs lets you automatically collect, store, filter, and analyze logging data emitted from Cloudflare Workers. Data is written to your Cloudflare Account, and you can query it in the dashboard for each of your Workers. All newly created Workers will come with the observability setting enabled by default.

Logs include [invocation logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#invocation-logs), [custom logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#custom-logs), errors, and uncaught exceptions.

![Example showing the Workers Logs Dashboard](https://developers.cloudflare.com/_astro/wobs_workers_events_122.DvoADmO-_Z1V047w.webp) 

To send logs to a third party, use [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/) or [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/).

## Enable Workers Logs

Wrangler version

Minimum required Wrangler version: 3.78.6. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

You must add the observability setting for your Worker to write logs to Workers Logs. Add the following setting to your Worker's Wrangler file and redeploy your Worker.

**wrangler.jsonc**

```jsonc
{
  "observability": {
    "enabled": true,
    "head_sampling_rate": 1 // optional. default = 1.
  }
}
```

**wrangler.toml**

```toml
[observability]
enabled = true
head_sampling_rate = 1
```

[Head-based sampling](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#head-based-sampling) allows you to set the percentage of Workers requests that are logged.

### Enabling with environments

[Environments](https://developers.cloudflare.com/workers/wrangler/environments/) allow you to deploy the same Worker application with different configurations. For example, you may want to configure a different `head_sampling_rate` for staging and production. To configure observability for an environment named `staging`:

1. Add the following configuration below `[env.staging]`:

**wrangler.jsonc**

```jsonc
{
  "env": {
    "staging": {
      "observability": {
        "enabled": true,
        "head_sampling_rate": 1 // optional
      }
    }
  }
}
```

**wrangler.toml**

```toml
[env.staging.observability]
enabled = true
head_sampling_rate = 1
```

2. Deploy your Worker with `npx wrangler deploy -e staging`.
3. Repeat steps 1 and 2 for each environment.

## View logs from the dashboard

Access logs for your Worker from the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your **Worker**.
3. Select **Observability**.

## Best Practices

### Logging structured JSON objects

To get the most out of Workers Logs, log in JSON format. Workers Logs automatically extracts JSON fields and indexes them in the database. This structured logging technique lets you filter and segment your data on any field, even fields with unlimited cardinality. Consider the following scenarios:

| Scenario | Logging Code                                               | Event Log (Partial)                           |
| -------- | ---------------------------------------------------------- | --------------------------------------------- |
| 1        | `console.log("user_id: " + 123)`                           | `{message: "user_id: 123"}`                   |
| 2        | `console.log({user_id: 123})`                              | `{user_id: 123}`                              |
| 3        | `console.log({user_id: 123, user_email: "a@example.com"})` | `{user_id: 123, user_email: "a@example.com"}` |

The difference between these scenarios is how your logs are indexed, which determines how quickly you can query them. In scenario 1, the `user_id` is embedded within a message, so finding all logs for a particular `user_id` requires a text match. In scenarios 2 and 3, your logs can be filtered directly against the keys `user_id` and `user_email`.
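One way to make structured logging the default is a small wrapper around `console.log`. The helper below is a sketch (`buildLogEntry` and `logEvent` are hypothetical names, not part of the Workers API); it always emits one object per log line so every field becomes a filterable key:

```js
// Hypothetical helper: always emit one structured object per log line,
// so fields like user_id become filterable keys instead of message text.
function buildLogEntry(message, fields = {}) {
  // Spread the structured fields first so `message` cannot be clobbered.
  return { ...fields, message };
}

function logEvent(message, fields) {
  // On Workers, logging an object lets Workers Logs index its fields.
  console.log(buildLogEntry(message, fields));
}

// Scenario 1 (avoid): "user_id: 123" is only text-searchable.
// Scenarios 2/3 (prefer): user_id and user_email are filterable keys.
logEvent("user signed in", { user_id: 123, user_email: "a@example.com" });
```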

## Features

### Invocation Logs

Each Workers invocation returns a single invocation log that contains details such as the Request, Response, and related metadata. These invocation logs can be identified by the field `$cloudflare.$metadata.type = "cf-worker-event"`. Each invocation log is enriched with information available to Cloudflare in the context of the invocation.

In the Workers Logs UI, logs are presented with a localized timestamp and a message. The message depends on the invocation handler. For example, Fetch requests have a message describing the request method and the request URL, while Cron events show the cron schedule. Below is a list of invocation handlers along with their invocation message.

Invocation logs can be disabled in Wrangler by adding `invocation_logs = false` to your observability configuration.

**wrangler.jsonc**

```jsonc
{
  "observability": {
    "logs": {
      "invocation_logs": false
    }
  }
}
```

**wrangler.toml**

```toml
[observability.logs]
invocation_logs = false
```

| Invocation Handler                                                                        | Invocation Message     |
| ----------------------------------------------------------------------------------------- | ---------------------- |
| [Alarm](https://developers.cloudflare.com/durable-objects/api/alarms/)                    | <Scheduled Time>       |
| [Email](https://developers.cloudflare.com/email-routing/email-workers/runtime-api/)       | <Email Recipient>      |
| [Fetch](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/)           | <Method> <URL>         |
| [Queue](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) | <Queue Name>           |
| [Cron](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/)        | <UNIX-cron schedule>   |
| [Tail](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/)             | tail                   |
| [RPC](https://developers.cloudflare.com/workers/runtime-apis/rpc/)                        | <RPC method>           |
| [WebSocket](https://developers.cloudflare.com/workers/examples/websockets/)               | <WebSocket Event Type> |

### Custom logs

By default a Worker will emit [invocation logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#invocation-logs) containing details about the request, response and related metadata.

You can also add custom logs throughout your code. Any `console.log` statements within your Worker will be visible in Workers Logs. The following example demonstrates a custom `console.log` within a Worker request handler.

**Module Worker**

JavaScript

```js
export default {
  async fetch(request) {
    const { cf } = request;
    const { city, country } = cf;

    console.log(`Request came from city: ${city} in country: ${country}`);

    return new Response("Hello worker!", {
      headers: { "content-type": "text/plain" },
    });
  },
};
```

**Service Worker**

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

JavaScript

```js
addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

/**
 * Respond with hello worker text
 * @param {Request} request
 */
async function handleRequest(request) {
  const { cf } = request;
  const { city, country } = cf;

  console.log(`Request came from city: ${city} in country: ${country}`);

  return new Response("Hello worker!", {
    headers: { "content-type": "text/plain" },
  });
}
```

After you deploy the code above, view your Worker's logs in [the dashboard](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#view-logs-from-the-dashboard) or with [real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs/).

### Head-based sampling

Head-based sampling allows you to log a percentage of incoming requests to your Cloudflare Worker. Especially for high-traffic applications, this helps reduce log volume and manage costs, while still providing meaningful insights into your application's performance. When you configure a head-based sampling rate, you can control the percentage of requests that get logged. All logs within the context of the request are collected.

To enable head-based sampling, set `head_sampling_rate` within the observability configuration. The valid range is from 0 to 1, where 0 indicates zero out of one hundred requests are logged, and 1 indicates every request is logged. If `head_sampling_rate` is unspecified, it is configured to a default value of 1 (100%). In the example below, `head_sampling_rate` is set to 0.01, which means one out of every one hundred requests is logged.

**wrangler.jsonc**

```jsonc
{
  "observability": {
    "enabled": true,
    "head_sampling_rate": 0.01 // 1% sampling rate
  }
}
```

**wrangler.toml**

```toml
[observability]
enabled = true
head_sampling_rate = 0.01 # 1% sampling rate
```
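Head-based sampling can be pictured as a single coin flip at the "head" of each request: the keep-or-drop decision is made once, so all logs within one request are kept or dropped together. The sketch below is illustrative only; the real decision happens inside the Workers runtime, not in your code:

```js
// Illustrative model of head-based sampling (the runtime does this, not you).
// The decision is made once per request, so a request's logs stay together.
function shouldLogRequest(headSamplingRate, random = Math.random) {
  if (headSamplingRate <= 0) return false; // 0: log nothing
  if (headSamplingRate >= 1) return true;  // 1: log every request
  return random() < headSamplingRate;      // e.g. 0.01 keeps ~1 in 100 requests
}
```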

## Limits

| Description                       | Limit     |
| --------------------------------- | --------- |
| Maximum log retention period      | 7 Days    |
| Maximum logs per account per day¹ | 5 Billion |
| Maximum log size²                 | 256 KB    |

¹ There is a daily limit of 5 billion logs per account. After the limit is exceeded, a 1% head-based sample is applied for the remainder of the day.

² A single log has a maximum size of [256 KB](https://developers.cloudflare.com/workers/platform/limits/#log-size). Logs exceeding that size are truncated, and the log's `$cloudflare.truncated` field is set to `true`.
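If individual logs in your application can approach the 256 KB cap, you can trim oversized fields yourself before logging, so the cut happens where you choose rather than at the platform limit. A minimal sketch, with an illustrative per-field budget (`truncateField` and the 4 KB figure are assumptions, not part of the platform):

```js
// Trim an oversized string field before logging it, so truncation happens
// on your terms rather than at the platform's 256 KB per-log cap.
const MAX_FIELD_BYTES = 4096; // illustrative per-field budget, well under 256 KB

function truncateField(value, maxBytes = MAX_FIELD_BYTES) {
  const bytes = new TextEncoder().encode(value);
  if (bytes.length <= maxBytes) return value;
  // Decoding a byte slice may leave a replacement character if a
  // multi-byte character is split at the boundary; fine for log data.
  return new TextDecoder().decode(bytes.slice(0, maxBytes)) + "…[truncated]";
}
```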

## Pricing

Billing start date

Workers Logs billing will begin on April 21, 2025.

Workers Logs is included in both the Free and Paid [Workers plans](https://developers.cloudflare.com/workers/platform/pricing/).

| Plan             | Log Events Written                                            | Retention |
| ---------------- | ------------------------------------------------------------- | --------- |
| **Workers Free** | 200,000 per day                                               | 3 Days    |
| **Workers Paid** | 20 million included per month, +$0.60 per additional million  | 7 Days    |

### Examples

#### Example 1

A Worker serves 15 million requests per month. Each request emits 1 invocation log and 1 `console.log`. `head_sampling_rate` is configured to 1.

|           | Monthly Cost | Formula                                                                                                               |
| --------- | ------------ | --------------------------------------------------------------------------------------------------------------------- |
| **Logs**  | $6.00        | ((15,000,000 requests per month × 2 logs per request × 100% sample) − 20,000,000 included logs) ÷ 1,000,000 × $0.60   |
| **Total** | $6.00        |                                                                                                                        |

#### Example 2

A Worker serves 1 billion requests per month. Each request emits 1 invocation log and 1 `console.log`. `head_sampling_rate` is configured to 0.1.

|           | Monthly Cost | Formula                                                                                                                 |
| --------- | ------------ | ------------------------------------------------------------------------------------------------------------------------ |
| **Logs**  | $108.00      | ((1,000,000,000 requests per month × 2 logs per request × 10% sample) − 20,000,000 included logs) ÷ 1,000,000 × $0.60   |
| **Total** | $108.00      |                                                                                                                          |


---

---
title: MCP server
image: https://developers.cloudflare.com/dev-products-preview.png
---


# MCP server


---

---
title: Metrics and analytics
description: Diagnose issues with Workers metrics, and review request data for a zone with Workers analytics.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Metrics and analytics

There are two graphical sources of information about your Workers traffic at a given time: Workers metrics and zone-based Workers analytics.

Workers metrics can help you diagnose issues and understand your Workers' workloads by showing performance and usage of your Workers. If your Worker runs on a route on a zone, or on a few zones, Workers metrics will show how much traffic your Worker is handling on a per-zone basis, and how many requests your site is getting.

Zone analytics show how much traffic all Workers assigned to a zone are handling.

## Workers metrics

Workers metrics aggregate request data for an individual Worker (if your Worker is running across multiple domains, and on `*.workers.dev`, metrics will aggregate requests across them). To view your Worker's metrics:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker to view its metrics.

There are two metrics that can help you understand the health of your Worker in a given moment: requests success and error metrics, and invocation statuses.

### Requests

The first graph shows historical request counts from the Workers runtime broken down into successful requests, errored requests, and subrequests.

* **Total**: All incoming requests registered by a Worker. Requests blocked by [WAF ↗](https://www.cloudflare.com/waf/) or other security features will not count.
* **Success**: Requests that returned a Success or Client Disconnected invocation status.
* **Errors**: Requests that returned a Script Threw Exception, Exceeded Resources, or Internal Error invocation status — refer to [Invocation Statuses](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#invocation-statuses) for a breakdown of where your errors are coming from.

Request traffic data may display a drop off near the last few minutes displayed in the graph for time ranges less than six hours. This does not reflect a drop in traffic, but a slight delay in aggregation and metrics delivery.

### Subrequests

Subrequests are requests triggered by calling `fetch` from within a Worker. A subrequest that throws an uncaught error will not be counted.

* **Total**: All subrequests triggered by calling `fetch` from within a Worker.
* **Cached**: The number of cached responses returned.
* **Uncached**: The number of uncached responses returned.

### Wall time per execution

Wall time represents the elapsed time in milliseconds between the start of a Worker invocation and the point at which the Workers runtime determines that no more JavaScript needs to run. Specifically, the wall time per execution chart measures how long the JavaScript context remained open, including time spent waiting on I/O and time spent executing in your Worker's [waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil) handler. Wall time is not the same as the time it takes your Worker to send the final byte of a response back to the client. Wall time can be higher, if tasks within `waitUntil()` are still running after the response has been sent, or it can be lower: for example, when returning a response with a large body, the Workers runtime can, in some cases, determine that no more JavaScript needs to run and close the JavaScript context before all the bytes have been sent.

The Wall Time per execution chart shows historical wall time data broken down into relevant quantiles using [reservoir sampling ↗](https://en.wikipedia.org/wiki/Reservoir%5Fsampling). Learn more about [interpreting quantiles ↗](https://www.statisticshowto.com/quantile-definition-find-easy-steps/).

### CPU Time per execution

The CPU Time per execution chart shows historical CPU time data broken down into relevant quantiles using [reservoir sampling ↗](https://en.wikipedia.org/wiki/Reservoir%5Fsampling). Learn more about [interpreting quantiles ↗](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). In some cases, higher quantiles may appear to exceed [CPU time limits](https://developers.cloudflare.com/workers/platform/limits/#cpu-time) without generating invocation errors because of a mechanism in the Workers runtime that allows rollover CPU time for requests below the CPU limit.

### Execution duration (GB-seconds)

The Duration per request chart shows historical [duration](https://developers.cloudflare.com/workers/platform/limits/#duration) per Worker invocation. The data is broken down into relevant quantiles, similar to the CPU time chart. Learn more about [interpreting quantiles ↗](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). Understanding duration on your Worker is especially useful when you are intending to do a significant amount of computation on the Worker itself.

### Invocation statuses

To review invocation statuses:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker.
3. Find the **Summary** graph in **Metrics**.
4. Select **Errors**.

Worker invocation statuses indicate whether a Worker executed successfully or failed to generate a response in the Workers runtime. Invocation statuses differ from HTTP status codes. In some cases, a Worker invocation succeeds but does not generate a successful HTTP status because of another error encountered outside of the Workers runtime. Some invocation statuses result in a [Workers error code](https://developers.cloudflare.com/workers/observability/errors/#error-pages-generated-by-workers) being returned to the client.

| Invocation status      | Definition                                                                   | Workers error code | GraphQL field        |
| ---------------------- | ---------------------------------------------------------------------------- | ------------------ | -------------------- |
| Success                | Worker executed successfully                                                 |                    | success              |
| Client disconnected    | HTTP client (that is, the browser) disconnected before the request completed |                    | clientDisconnected   |
| Worker threw exception | Worker threw an unhandled JavaScript exception                               | 1101               | scriptThrewException |
| Exceeded resources¹    | Worker exceeded runtime limits                                               | 1102, 1027         | exceededResources    |
| Internal error²        | Workers runtime encountered an error                                         |                    | internalError        |

¹ The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](https://developers.cloudflare.com/workers/platform/limits/#request-and-response-limits). The most common cause is excessive CPU time, but it can also be caused by a Worker exceeding its startup time or free tier limits.

² The Internal Error status may appear when the Workers runtime fails to process a request due to an internal failure in our system. These errors are not caused by any issue with the Worker code nor any resource limit. While requests with Internal Error status are rare, some may appear during normal operation. These requests are not counted towards usage for billing purposes. If you notice an elevated rate of requests with Internal Error status, review [www.cloudflarestatus.com ↗](https://www.cloudflarestatus.com/).

To further investigate exceptions, use [wrangler tail](https://developers.cloudflare.com/workers/wrangler/commands/general/#tail).

### Request duration

The request duration chart shows how long it took your Worker to respond to requests, including code execution and time spent waiting on I/O. The request duration chart is currently only available when your Worker has [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) enabled.

In contrast to [execution duration](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#execution-duration-gb-seconds), which measures only the time a Worker is active, request duration measures from the time a request comes into a data center until a response is delivered.

The data shows the duration for requests with Smart Placement enabled compared to those with Smart Placement disabled (by default, 1% of requests are routed with Smart Placement disabled). The chart shows a histogram with duration across the x-axis and the percentage of requests that fall into the corresponding duration on the y-axis.

### Metrics retention

Worker metrics can be inspected for up to three months in the past in maximum increments of one week.

## Zone analytics

Zone analytics aggregate request data for all Workers assigned to any [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) defined for a zone.

To review zone metrics:

In the Cloudflare dashboard, go to the **Workers Analytics** page for your zone.

[ Go to **Workers** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/workers) 

Zone data can be scoped by time range within the last 30 days. The dashboard includes charts and information described below.

### Subrequests

This chart shows subrequests — requests triggered by calling `fetch` from within a Worker — broken down by cache status.

* **Uncached**: Requests answered directly by your origin server or other servers responding to subrequests.
* **Cached**: Requests answered by Cloudflare’s [cache ↗](https://www.cloudflare.com/learning/cdn/what-is-caching/). As Cloudflare caches more of your content, it accelerates content delivery and reduces load on your origin.

### Bandwidth

This chart shows historical bandwidth usage for all Workers on a zone broken down by cache status.

### Status codes

This chart shows historical requests for all Workers on a zone broken down by HTTP status code.

### Total requests

This chart shows historical data for all Workers on a zone broken down by successful requests, failed requests, and subrequests. These request types are categorized by HTTP status code, where `200`-level requests are successful and `400` to `500`-level requests are failed.

## GraphQL

Worker metrics are powered by GraphQL. Learn more about querying our data sets in the [Querying Workers Metrics with GraphQL tutorial](https://developers.cloudflare.com/analytics/graphql-api/tutorials/querying-workers-metrics/).

## Custom analytics with Analytics Engine

The metrics described above provide insight into Worker performance and runtime behavior. For custom, application-specific analytics, use [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/).

Analytics Engine is useful for:

* **Custom business metrics** \- Track events specific to your application, such as signups, purchases, or feature usage.
* **Per-customer analytics** \- Record data with high-cardinality dimensions like customer IDs or API keys.
* **Usage-based billing** \- Count API calls, compute units, or other billable events per customer.
* **Performance tracking** \- Measure response times, cache hit rates, or error rates with custom dimensions.

Writes to Analytics Engine are non-blocking and do not add latency to your Worker. Query your data using SQL through the [Analytics Engine SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/) or visualize it in [Grafana](https://developers.cloudflare.com/analytics/analytics-engine/grafana/).
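For example, a fetch handler can record one data point per request for per-customer analytics. The helper below only shapes the data point; the field layout follows the Analytics Engine `writeDataPoint` format (`blobs` for string dimensions, `doubles` for numeric values, `indexes` for the sampling key), while the helper name and the `ANALYTICS` binding name are assumptions — the binding is whatever you configure under `[[analytics_engine_datasets]]` in your Wrangler file:

```js
// Shape one Analytics Engine data point per request.
// blobs: string dimensions; doubles: numeric values; indexes: sampling key.
function toDataPoint({ customerId, endpoint, status, durationMs }) {
  return {
    blobs: [endpoint, String(status)], // e.g. "/api/items", "200"
    doubles: [durationMs, 1],          // duration, plus a 1 to sum as a count
    indexes: [customerId],             // high-cardinality key, e.g. per customer
  };
}

// Inside a fetch handler (assumed binding name ANALYTICS):
//   env.ANALYTICS.writeDataPoint(
//     toDataPoint({ customerId, endpoint, status, durationMs }),
//   );
```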

Refer to the [Analytics Engine example](https://developers.cloudflare.com/workers/examples/analytics-engine/) to get started.


---

---
title: Query Builder
description: Write structured queries to investigate and visualize your telemetry data.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Query Builder

The Query Builder helps you write structured queries to investigate and visualize your telemetry data. The Query Builder searches the Workers Observability dataset, which currently includes all logs stored by [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/).

The Query Builder can be found in the **Observability** page of the Cloudflare dashboard:

[ Go to **Observability** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability) 

## Enable Query Builder

The Query Builder is available to all developers and requires no enablement. Queries search all Workers Logs stored by Cloudflare. If you have not yet enabled Workers Logs, you can do so by adding the following setting to your [Worker's Wrangler file](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#enable-workers-logs) and redeploying your Worker.

**wrangler.jsonc**

```jsonc
{
  "observability": {
    "enabled": true,
    "logs": {
      "invocation_logs": true,
      "head_sampling_rate": 1 // optional. default = 1.
    }
  }
}
```

**wrangler.toml**

```toml
[observability]
enabled = true

  [observability.logs]
  invocation_logs = true
  head_sampling_rate = 1
```

## Write a query in the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select your Worker.
3. Select **Observability** in the left-hand navigation panel, and then the **Overview** tab.
4. Select a **Visualization**.
5. Optional: Add fields to Filter, Group By, Order By, and Limit. For more information, see what [composes a query](https://developers.cloudflare.com/workers/observability/query-builder/#query-composition).
6. Optional: Select the appropriate time range.
7. Select **Run**. The query will automatically run whenever changes are made.

## Query composition

### Visualization

The Query Builder supports many visualization operators, including:

| Function               | Arguments     | Description                                                   |
| ---------------------- | ------------- | ------------------------------------------------------------- |
| **Count**              | n/a           | The total number of rows matching the query conditions        |
| **Count Distinct**     | any field     | The number of unique values of the field in the dataset       |
| **Min**                | numeric field | The smallest value for the field in the dataset               |
| **Max**                | numeric field | The largest value for the field in the dataset                |
| **Sum**                | numeric field | The total of all of the values for the field in the dataset   |
| **Average**            | numeric field | The average of the field in the dataset                       |
| **Standard Deviation** | numeric field | The standard deviation of the field in the dataset            |
| **Variance**           | numeric field | The variance of the field in the dataset                      |
| **P001**               | numeric field | The value of the field below which 0.1% of the data falls     |
| **P01**                | numeric field | The value of the field below which 1% of the data falls       |
| **P05**                | numeric field | The value of the field below which 5% of the data falls       |
| **P10**                | numeric field | The value of the field below which 10% of the data falls      |
| **P25**                | numeric field | The value of the field below which 25% of the data falls      |
| **Median (P50)**       | numeric field | The value of the field below which 50% of the data falls      |
| **P75**                | numeric field | The value of the field below which 75% of the data falls      |
| **P90**                | numeric field | The value of the field below which 90% of the data falls      |
| **P95**                | numeric field | The value of the field below which 95% of the data falls      |
| **P99**                | numeric field | The value of the field below which 99% of the data falls      |
| **P999**               | numeric field | The value of the field below which 99.9% of the data falls    |

You can add multiple visualizations in a single query. Each visualization renders a graph. A single summary table is also returned, which shows the raw query results.

![Example showing the Query Builder with multiple visualizations](https://developers.cloudflare.com/_astro/wobs_QB_visualization_122.DhDuHs4F_Z2uqhiM.webp)

All methods are aggregate functions. Most methods operate on a specific field in the log event. `Count` is an exception, and is an aggregate function that returns the number of log events matching the filter conditions.
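The percentile rows in the table above all follow the same definition. As an illustrative sketch (using one common nearest-rank convention; the Query Builder's exact method may differ), here is how `Median (P50)` and `P95` map onto a set of numeric field values:

```javascript
// Illustrative only: "the value of the field below which p% of the data falls",
// computed with a simple nearest-rank rule over a small sample.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

const cpuTimeMs = [5, 7, 8, 9, 12, 15, 20, 40, 80, 300];
console.log(percentile(cpuTimeMs, 50)); // 15  (Median / P50)
console.log(percentile(cpuTimeMs, 95)); // 300 (P95)
```

Note how a single large outlier (`300`) dominates the high percentiles while leaving the median untouched, which is why P95/P99 are useful for spotting tail latency.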

### Filter

Filters narrow results to only the events that match the specified conditions. Filters have three components: a key, an operator, and a value.

The key is any field in a log event. For example, you may choose `$workers.cpuTimeMs` or `$metadata.message`.

The operator is a logical condition that evaluates to true or false. See the table below for supported conditions:

| Data Type | Valid Conditions (Operators)                                                                     |
| --------- | ------------------------------------------------------------------------------------------------ |
| Numeric   | Equals, Does not equal, Greater, Greater or equals, Less, Less or equals, Exists, Does not exist |
| String    | Equals, Does not equal, Includes, Does not include, Regex, Exists, Does not exist, Starts with   |

The value for a numeric field is an integer. The value for a string field is any string.

To add a filter:

1. Select **+** in the **Filter** section.
2. Select **Select key...** and input a key name. For example, `$workers.cpuTimeMs`.
3. Select the operator and change it to the one best suited. For example, `Greater`.
4. Select **Select value...** and input a value. For example, `100`.

When you run the query with the filter specified above, only log events where `$workers.cpuTimeMs > 100` will be returned.

Adding multiple filters combines them with an AND operator, meaning that only events matching all the filters will be returned.
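The AND semantics of multiple filters can be modeled as a set of predicates over log events (an illustrative sketch only; the field names mirror the examples above):

```javascript
// Illustrative only: how AND-combined filters behave, modeled in plain JS.
const events = [
  { "$workers.cpuTimeMs": 150, "$metadata.message": "slow request" },
  { "$workers.cpuTimeMs": 40,  "$metadata.message": "ok" },
];

const filters = [
  (e) => e["$workers.cpuTimeMs"] > 100,            // Greater
  (e) => e["$metadata.message"].includes("slow"),  // Includes
];

// Only events matching ALL filters are returned:
const matched = events.filter((e) => filters.every((f) => f(e)));
console.log(matched.length); // 1
```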

### Search

Search is a text filter that returns only events containing the specified text. Search can be helpful as a quick filtering mechanism, or to search for unique identifiable values in your logs.

### Group By

Group By combines rows that have the same value into summary rows. For example, if a query adds `$workers.event.request.cf.country` as a Group By field, then the summary table will group by country.

### Order By

Order By affects how the results are sorted in the summary table. If `asc` is selected, the results are sorted in ascending order - from least to greatest. If `desc` is selected, the results are sorted in descending order - from greatest to least.

### Limit

Limit restricts the number of results returned. When paired with [Order By](https://developers.cloudflare.com/workers/observability/query-builder/#order-by), it can be used to return the "top" or "first" N results.
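Taken together, Group By, Order By, and Limit can answer questions such as "which countries send the most requests?". A minimal sketch of those semantics in plain JavaScript (illustrative only; not how the Query Builder is implemented):

```javascript
// Illustrative only: Group By a field, visualize by Count,
// Order By count descending, then Limit to the top 2.
const events = [
  { "$workers.event.request.cf.country": "US" },
  { "$workers.event.request.cf.country": "DE" },
  { "$workers.event.request.cf.country": "US" },
  { "$workers.event.request.cf.country": "FR" },
  { "$workers.event.request.cf.country": "US" },
  { "$workers.event.request.cf.country": "DE" },
];

// Group By country, counting events per group:
const counts = new Map();
for (const e of events) {
  const country = e["$workers.event.request.cf.country"];
  counts.set(country, (counts.get(country) ?? 0) + 1);
}

// Order By count desc, Limit 2 -> the "top 2" countries:
const top2 = [...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 2);
console.log(top2); // [["US", 3], ["DE", 2]]
```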

### Select time range

When you select a time range, you specify the time interval where you want to look for matching events. The retention period is dependent on your [plan type](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#pricing).

## Viewing query results

There are three views for queries: Visualizations, Invocations, and Events.

### Visualizations tab

The **Visualizations** tab shows graphs and a summary table for the query.

![Visualization Overview](https://developers.cloudflare.com/_astro/wobs_visualizations_tab_122.dttsF_Ab_1NkPwo.webp) 

### Invocations tab

The **Invocations** tab shows all logs, grouped by invocation, and ordered by timestamp. Only invocations matching the query criteria are returned.

![Invocations Overview](https://developers.cloudflare.com/_astro/wobs_invocation_logs_full_list_122.BDOkV-CS_1SqSVt.webp) 

### Events tab

The **Events** tab shows all logs, ordered by timestamp. Only events matching the query criteria are returned. The Events tab can be customized to add additional fields in the view.

![Overview](https://developers.cloudflare.com/_astro/wobs_events_dropdown_122.BxN7hYlH_1mkKBy.webp) 

## Save queries

It is recommended to save queries that may be reused for future investigations. You can save a query with a name, description, and custom tags by selecting **Save Query**. Queries are saved at the account-level and are accessible to all users in the account.

Saved queries can be re-run by selecting the relevant query from the **Queries** tab. You can edit the query and save edits.

Queries can be starred by users. Starred queries are unique to the user, and not to the account.

## Delete queries

Saved queries can be deleted from the **Queries** tab. If you delete a query, the query is deleted for all users in the account.

1. In the Cloudflare dashboard, go to the **Observability** page.  
[ Go to **Observability** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability)
2. Select the **Queries** tab.
3. On the right-hand side, select the three dots for additional actions.
4. Select **Delete Query** and follow the instructions.

## Share queries

Saved queries are assigned a unique URL and can be shared with any user in the account.

## Example: Composing a query

In this example, we will construct a query to find and debug all paths that respond with 5xx errors. First, we create a base query. In this base query, we want to visualize the raw event count. We add a filter for `$workers.event.response.status` greater than or equal to `500`. Then, we group by `$workers.event.request.path` and `$workers.event.response.status` to identify the number of requests affected by this behavior.

![Constructing a query](https://developers.cloudflare.com/_astro/wobs_QB_visualization_122.DhDuHs4F_Z2uqhiM.webp) 

The results show that the `/agents/chat/default` path has been returning 500-level errors. Now, we can apply a filter for this path and investigate.

![Adding an additional field to the query](https://developers.cloudflare.com/_astro/wobs_QB_visualization_filter_122.DRsPzi0e_12UePv.webp) 

Now, we can investigate by selecting the **Invocations** tab. We can see that there were two logged invocations of this error.

![Examining the Invocations tab in the Query Builder](https://developers.cloudflare.com/_astro/wobs_invocation_logs_full_list_122.BDOkV-CS_1SqSVt.webp) 

We can expand a single invocation to view the relevant logs, and continue to debug.

![Viewing the logs for a single Invocation](https://developers.cloudflare.com/_astro/wobs_invocation_logs_122.Bno9WyO1_9W3QT.webp) 


---

---
title: Source maps and stack traces
description: Adding source maps and generating stack traces for Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Source maps and stack traces

[Stack traces ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Error/stack) help with debugging your code when your application encounters an unhandled exception. Stack traces show you the specific functions that were called, in what order, from which line and file, and with what arguments.

Most JavaScript code is first bundled, often transpiled, and then minified before being deployed to production. This process creates smaller bundles to optimize performance and converts code from TypeScript to JavaScript if needed.

Source maps translate compiled and minified code back to the original code that you wrote. Source maps are combined with the stack trace returned by the JavaScript runtime to present you with a stack trace.

## Source Maps

To enable source maps, add the following to your Worker's [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/):

**wrangler.jsonc**

```jsonc
{
  "upload_source_maps": true
}
```

**wrangler.toml**

```toml
upload_source_maps = true
```

When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) or [wrangler versions deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-deploy).

Note

Miniflare can also [output source maps ↗](https://miniflare.dev/developing/source-maps) for use in local development or [testing](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests).

## Stack traces

When your Worker throws an uncaught exception, we fetch the source map and use it to map the stack trace of the exception back to lines of your Worker's original source code.

You can then view the stack trace when streaming [real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs/) or in [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/).

Note

The source map is retrieved after your Worker invocation completes — it's an asynchronous process that does not impact your Worker's CPU utilization or performance. Source maps are not accessible inside the Worker at runtime; if you `console.log()` the [stack property ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Error/stack) within a Worker, you will not get a deobfuscated stack trace.

When Cloudflare attempts to remap a stack trace to the Worker's source map, it does so line-by-line, remapping as much as possible. If a line of the stack trace cannot be remapped for any reason, Cloudflare will leave that line of the stack trace unchanged, and continue to the next line of the stack trace.

## Limits

Wrangler version

Minimum required Wrangler version for source maps: 3.46.0. Check your version by running `wrangler --version`.

| Description             | Limit         |
| ----------------------- | ------------- |
| Maximum Source Map Size | 15 MB gzipped |

## Example

Consider a simple project. `src/index.ts` serves as the entrypoint of the application, and `src/calculator.ts` defines a `ComplexCalculator` class that supports basic arithmetic.

* wrangler.jsonc
* tsconfig.json
* src/  
   * calculator.ts  
   * index.ts

Let's see how source maps can simplify debugging an error in the `ComplexCalculator` class.

![Stack Trace without Source Map remapping](https://developers.cloudflare.com/_astro/without-source-map.ByYR83oU_Z1q7wOD.webp) 

With **no source maps uploaded**: notice how all the JavaScript has been minified into one file, so the stack trace is missing file names, shows incorrect line numbers, and incorrectly references `js` instead of `ts`.

![Stack Trace with Source Map remapping](https://developers.cloudflare.com/_astro/with-source-map.PipytmVe_2dYiLI.webp) 

With **source maps uploaded**: all methods reference the correct files and line numbers.

## Related resources

* [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) \- Learn how to attach Tail Workers to transform your logs and send them to HTTP endpoints.
* [Real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs/) \- Learn how to capture Workers logs in real-time.
* [RPC error handling](https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/) \- Learn how exceptions are handled over RPC (Remote Procedure Call).


---

---
title: Sentry
description: Connect to a Sentry project from your Worker to automatically send errors and uncaught exceptions to Sentry.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Sentry

Connect to a Sentry project from your Worker to automatically send errors and uncaught exceptions to Sentry.


---

---
title: Traces
description: Tracing gives you end-to-end visibility into the life of a request as it travels through your Workers application and connected services. This helps you identify performance bottlenecks, debug issues, and understand complex request flows. With tracing you can answer questions such as:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Traces

### What is Workers tracing?

Tracing gives you end-to-end visibility into the life of a request as it travels through your Workers application and connected services. This helps you identify performance bottlenecks, debug issues, and understand complex request flows. With tracing you can answer questions such as:

* What is the cause of a long-running request?
* How long do subrequests from my Worker take?
* How long are my calls to my KV Namespace or R2 bucket taking?

![Example trace showing a POST request to a cake shop with multiple spans including fetch requests and durable object operations](https://developers.cloudflare.com/_astro/wobs_waterfall_trace_122.BveqL__z_Q1Dwz.webp)

### Automatic instrumentation

Cloudflare Workers provides tracing instrumentation **out of the box**: no code changes or SDK are required. Simply enable tracing on your Worker and Cloudflare automatically captures telemetry data for:

* **Fetch calls** \- All outbound HTTP requests, capturing timing, status codes, and request metadata. This enables you to quickly identify how external dependencies affect your application's performance.
* **Binding calls** \- Interactions with various Worker bindings such as KV reads and writes, R2 object storage operations and Durable Object invocations.
* **Handler calls** \- The complete lifecycle of each Worker invocation, including triggers such as [fetch handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/), [scheduled handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/), and [queue handlers](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer).

For a full list of instrumented operations, see the [spans and attributes documentation](https://developers.cloudflare.com/workers/observability/traces/spans-and-attributes).

### How to enable tracing

You can configure tracing by setting `observability.traces.enabled = true` in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/#observability).

**wrangler.jsonc**

```jsonc
{
  "observability": {
    "traces": {
      "enabled": true,
      // optional sampling rate (recommended for high-traffic workloads)
      "head_sampling_rate": 0.05
    }
  }
}
```

**wrangler.toml**

```toml
[observability.traces]
enabled = true
head_sampling_rate = 0.05
```

Note

In the future, Cloudflare plans to enable automatic tracing in addition to logs when you set `observability.enabled = true` in your Wrangler configuration.

While automatic tracing is in early beta, this setting will not enable tracing by default, and will only enable logs.

An updated [compatibility\_date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) will be required for this change to take effect.

### Exporting OpenTelemetry traces to a 3rd party destination

Workers tracing follows [OpenTelemetry (OTel) standards ↗](https://opentelemetry.io/). This makes it compatible with popular observability platforms, such as [Honeycomb](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/honeycomb/), [Grafana Cloud](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/grafana-cloud/), and [Axiom](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/axiom/), while requiring zero development effort from you. If your observability provider has an OpenTelemetry endpoint, you can export traces (and logs) to it.

Learn more about exporting OpenTelemetry data from Workers [here](https://developers.cloudflare.com/workers/observability/exporting-opentelemetry-data/).

### Sampling

Default Sampling Rate

The default sampling rate is `1`, meaning 100% of requests will be traced if tracing is enabled. Set `head_sampling_rate` if you want to trace fewer requests.

With sampling, you can trace a percentage of incoming requests in your Cloudflare Worker. This allows you to manage volume and costs, while still providing meaningful insights into your application.

The valid sampling range is from `0` to `1`: a rate of `0` means no requests are traced, `1` means every request is traced, and a value such as `0.05` means five out of one hundred requests are traced.

**wrangler.jsonc**

```jsonc
{
  "observability": {
    "traces": {
      "enabled": true,
      // set tracing sampling rate to 5%
      "head_sampling_rate": 0.05
    },
    "logs": {
      "enabled": true,
      // set logging sampling rate to 60%
      "head_sampling_rate": 0.6
    }
  }
}
```

**wrangler.toml**

```toml
[observability.traces]
enabled = true
head_sampling_rate = 0.05

[observability.logs]
enabled = true
head_sampling_rate = 0.6
```

If you have `head_sampling_rate` configured for logs, you can also create a separate rate for traces.

Sampling is [head-based ↗](https://opentelemetry.io/docs/concepts/sampling/#head-sampling), meaning that non-traced requests do not incur any tracing overhead.
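A sketch of what head-based sampling means in practice (illustrative only): the keep/drop decision is made once, up front, using the configured rate, so a non-traced request never does any tracing work.

```javascript
// Illustrative only: a head-sampling decision made per request.
function shouldTrace(headSamplingRate) {
  return Math.random() < headSamplingRate;
}

// head_sampling_rate = 1 traces every request; 0 traces none.
console.log(shouldTrace(1)); // true
console.log(shouldTrace(0)); // false
```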

### Limits & Pricing

Workers tracing is currently **free** during the initial beta period. This includes all tracing functionality such as collecting traces, storing them, and viewing them in the Cloudflare dashboard.

Starting on March 1, 2026, tracing will be billed as part of your usage on the Workers Free, Paid, and Enterprise plans. Each span in a trace represents one observability event, sharing the same monthly quota and pricing as [Workers logs](https://developers.cloudflare.com/workers/platform/pricing/#workers-logs):

| Plan             | Events (trace spans or log events)                                 | Retention |
| ---------------- | ------------------------------------------------------------------ | --------- |
| **Workers Free** | 200,000 per day                                                    | 3 days    |
| **Workers Paid** | 10 million included per month, +$0.60 per additional million events | 7 days    |
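Based on the Workers Paid row above, the usage-based portion of the bill can be estimated as follows (a sketch only; it excludes the base subscription and any other usage):

```javascript
// Illustrative only: Workers Paid event cost from the table above.
// 10 million events included per month, then $0.60 per additional million.
function monthlyEventCost(totalEvents) {
  const included = 10_000_000;
  const extra = Math.max(0, totalEvents - included);
  return (extra / 1_000_000) * 0.60;
}

console.log(monthlyEventCost(8_000_000));  // 0 (within the included quota)
console.log(monthlyEventCost(25_000_000)); // 9 (15 extra million * $0.60)
```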


---

---
title: Known limitations
description: Workers tracing is currently in open beta. This page documents current limitations and any upcoming features on our roadmap.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Known limitations

Workers tracing is currently in open beta. This page documents current limitations and any upcoming features on our roadmap.

To provide more feedback and send feature requests, head to the [Workers tracing GitHub discussion ↗](https://github.com/cloudflare/workers-sdk/discussions/11062).

### Non-I/O operations may report time of 0 ms

Due to [security measures put in place to prevent Spectre attacks](https://developers.cloudflare.com/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading), the Workers Runtime does not update time until I/O events take place. This means that some spans will report a duration of `0 ms` even when the operation took longer.

The Cloudflare Workers team is exploring security measures that would allow exposing time lengths at millisecond-level granularity in these cases.

### Trace context propagation

When exporting traces to external platforms, trace IDs are not propagated. This means traces from your Workers won't link with traces from other services in your observability tools.

We're working on automatic trace context propagation using [W3C Trace Context standards ↗](https://www.w3.org/TR/trace-context/), which will enable complete end-to-end visibility across your existing tools and services.

### Service bindings and Durable Objects appear as separate traces

Calls to other Workers via service bindings or to Durable Objects create separate traces rather than nested spans. This means you'll see multiple independent traces in your dashboard instead of a single unified trace showing the full request flow.

We're working on connecting these traces automatically.

### Incomplete span attributes

We are planning to add more detailed attributes on each span. You can find a complete list of what is already instrumented [here](https://developers.cloudflare.com/workers/observability/traces/spans-and-attributes).

Your feedback on any missing information will help us prioritize additions and changes. Please comment on the [Workers tracing GitHub discussion ↗](https://github.com/cloudflare/workers-sdk/discussions/11062) if specific attributes would help you use tracing effectively.

### Support for custom spans and attributes

Automatic instrumentation covers many platform interactions, but we know you need visibility into your own application logic too. We're working to support the [OpenTelemetry API ↗](https://www.npmjs.com/package/@opentelemetry/api) to make it easier for you to instrument custom spans within your application.

### Span and attribute names subject to change

As Workers tracing is currently in beta, span names and attribute names are not yet finalized. We may refine these names during the beta period to improve clarity and align with OpenTelemetry semantic conventions. We recommend reviewing the [spans and attributes documentation](https://developers.cloudflare.com/workers/observability/traces/spans-and-attributes) periodically for updates.

### Known bugs and other call outs

* There are currently a few attributes that only apply to some spans (e.g., `service.name`, `faas.name`). When filtering or grouping by the Worker name across traces and logs, use `$metadata.service` instead, as it applies consistently across all event types.
* While a trace is in progress, the event will show `Trace in Progress` on the root span. Wait a few moments for the full trace to become available.


---

---
title: Spans and attributes
description: Cloudflare Workers provides automatic tracing instrumentation out of the box - no code changes or SDK are required.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Spans and attributes

Cloudflare Workers provides automatic tracing instrumentation **out of the box**: no code changes or SDK are required.

## Currently supported spans and attributes

### Attributes available on all spans

* `cloud.provider` \- Always set to `cloudflare`
* `cloud.platform` \- Always set to `cloudflare.workers`
* `faas.name` \- The name of your Worker
* `faas.invocation_id` \- A unique identifier for this specific Worker invocation
* `faas.version` \- The deployed version tag of your Worker
* `faas.invoked_region` \- The region where the Worker was invoked
* `service.name` \- The name of your Worker
* `cloudflare.colo` \- The three-letter IATA airport code of the Cloudflare data center that processed the request (e.g., `SFO`, `LHR`)
* `cloudflare.script_name` \- The name of your Worker
* `cloudflare.script_tags` \- Tags associated with your Worker deployment
* `cloudflare.script_version.id` \- The version identifier of your deployed Worker
* `cloudflare.invocation.sequence.number` \- A counter added to every emitted span and log that can be used to distinguish which was emitted first when the timestamps are the same
* `telemetry.sdk.language` \- The programming language used, set to `javascript`
* `telemetry.sdk.name` \- The telemetry SDK name, set to `cloudflare`

---

### Attributes available on all root spans

* `faas.trigger` \- The trigger that your Worker was invoked by (e.g., `http`, `cron`, `queue`, `email`)
* `cloudflare.ray_id` \- A [unique identifier](https://developers.cloudflare.com/fundamentals/reference/cloudflare-ray-id/) for every request that goes through Cloudflare
* `cloudflare.handler_type` \- The type of handler that processed the request (e.g., `fetch`, `scheduled`, `queue`, `email`, `alarm`)
* `cloudflare.entrypoint` \- The entrypoint that was invoked in your Worker (e.g. the name of your Durable Object)
* `cloudflare.execution_model` \- The execution model of the Worker (e.g., `stateless`, `stateful` for Durable Objects)
* `cloudflare.outcome` \- The outcome of the Worker invocation (e.g., `ok`, `exception`, `exceededCpu`, `exceededMemory`)
* `cloudflare.cpu_time_ms` \- The CPU time used by the Worker invocation, in milliseconds
* `cloudflare.wall_time_ms` \- The wall time used by the Worker invocation, in milliseconds

---

### [Runtime API](https://developers.cloudflare.com/workers/runtime-apis/)

#### [fetch](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/)

* `network.protocol.name`
* `network.protocol.version`
* `url.full`
* `url.scheme`
* `url.path`
* `url.query`
* `server.port`
* `server.address`
* `user_agent.original`
* `http.request.method`
* `http.request.header.content-type`
* `http.request.header.content-length`
* `http.request.header.accept`
* `http.request.header.accept-encoding`
* `http.request.body.size`
* `http.response.status_code`
* `http.response.body.size`

#### [cache\_put](https://developers.cloudflare.com/workers/runtime-apis/cache/#put)

* `cache.request.url`
* `cache.request.method`
* `cache.request.payload.status_code`
* `cache.request.payload.header.cache_control`
* `cache.request.payload.header.cache_tag`
* `cache.request.payload.header.etag`
* `cache.request.payload.header.expires`
* `cache.request.payload.header.last_modified`
* `cache.request.payload.size`
* `cache.response.success`

#### [cache\_match](https://developers.cloudflare.com/workers/runtime-apis/cache/#match)

* `cache.request.ignore_method`
* `cache.request.url`
* `cache.request.method`
* `cache.request.header.range`
* `cache.request.header.if_modified_since`
* `cache.request.header.if_none_match`
* `cache.response.status_code`
* `cache.response.body.size`
* `cache.response.cache_status`
* `cache.response.success`

#### [cache\_delete](https://developers.cloudflare.com/workers/runtime-apis/cache/#delete)

* `cache.request.ignore_method`
* `cache.request.url`
* `cache.request.method`
* `cache.response.status_code`
* `cache.response.success`

---

### [Handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/)

#### [Fetch Handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/)

* `cloudflare.verified_bot_category`
* `cloudflare.asn`
* `cloudflare.response.time_to_first_byte_ms`
* `geo.timezone`
* `geo.continent.code`
* `geo.country.code`
* `geo.locality.name`
* `geo.locality.region`
* `user_agent.original`
* `user_agent.os.name`
* `user_agent.os.version`
* `user_agent.browser.name`
* `user_agent.browser.major_version`
* `user_agent.browser.version`
* `user_agent.engine.name`
* `user_agent.engine.version`
* `user_agent.device.type`
* `user_agent.device.vendor`
* `user_agent.device.model`
* `http.request.method`
* `http.request.header.accept`
* `http.request.header.accept-encoding`
* `http.request.header.accept-language`
* `url.full`
* `url.path`
* `network.protocol.name`

#### [Scheduled Handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/)

* `faas.cron`
* `cloudflare.scheduled_time`

#### [QueueHandler](https://developers.cloudflare.com/workers/runtime-apis/handlers/queue/)

* `cloudflare.queue.name`
* `cloudflare.queue.batch_size`

#### [RPC Handler](https://developers.cloudflare.com/workers/runtime-apis/rpc/)

* `cloudflare.jsrpc.method`

#### [Email Handler](https://developers.cloudflare.com/email-routing/email-workers/runtime-api/)

* `cloudflare.email.from`
* `cloudflare.email.to`
* `cloudflare.email.size`

#### [Tail Handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/tail/)

* `cloudflare.trace.count`

#### [Alarm Handler](https://developers.cloudflare.com/durable-objects/api/alarms/#alarm)

* `cloudflare.scheduled_time`

---

### [D1](https://developers.cloudflare.com/d1/)

#### Attributes available on all D1 spans

* `db.system.name`
* `db.operation.name`
* `db.query.text`
* `cloudflare.binding.type`
* `cloudflare.d1.response.size_after`
* `cloudflare.d1.response.rows_read`
* `cloudflare.d1.response.rows_written`
* `cloudflare.d1.response.last_row_id`
* `cloudflare.d1.response.changed_db`
* `cloudflare.d1.response.changes`
* `cloudflare.d1.response.served_by_region`
* `cloudflare.d1.response.served_by_primary`
* `cloudflare.d1.response.sql_duration_ms`
* `cloudflare.d1.response.total_attempts`

#### [d1\_batch](https://developers.cloudflare.com/d1/worker-api/d1-database/#batch)

* `db.operation.batch.size`
* `cloudflare.d1.query.bookmark`
* `cloudflare.d1.response.bookmark`

#### [d1\_exec](https://developers.cloudflare.com/d1/worker-api/d1-database/#exec)

#### [d1\_first](https://developers.cloudflare.com/d1/worker-api/prepared-statements/#first)

* `cloudflare.d1.query.bookmark`
* `cloudflare.d1.response.bookmark`

#### [d1\_run](https://developers.cloudflare.com/d1/worker-api/prepared-statements/#run)

* `cloudflare.d1.query.bookmark`
* `cloudflare.d1.response.bookmark`

#### [d1\_all](https://developers.cloudflare.com/d1/worker-api/prepared-statements/#all)

* `cloudflare.d1.query.bookmark`
* `cloudflare.d1.response.bookmark`

#### [d1\_raw](https://developers.cloudflare.com/d1/worker-api/prepared-statements/#raw)

* `cloudflare.d1.query.bookmark`
* `cloudflare.d1.response.bookmark`

---

### [Browser Rendering](https://developers.cloudflare.com/browser-rendering/)

#### `browser_rendering_fetch`

---

### [Workers KV](https://developers.cloudflare.com/kv/)

#### Attributes available on all KV spans

* `db.system.name`
* `db.operation.name`
* `cloudflare.binding.name`
* `cloudflare.binding.type`

#### [kv\_get](https://developers.cloudflare.com/kv/api/read-key-value-pairs/#get-method)

* `cloudflare.kv.query.keys`
* `cloudflare.kv.query.keys.count`
* `cloudflare.kv.query.type`
* `cloudflare.kv.query.cache_ttl`
* `cloudflare.kv.response.size`
* `cloudflare.kv.response.returned_rows`
* `cloudflare.kv.response.metadata`
* `cloudflare.kv.response.cache_status`

#### [kv\_getWithMetadata](https://developers.cloudflare.com/kv/api/read-key-value-pairs/#getwithmetadata-method)

* `cloudflare.kv.query.keys`
* `cloudflare.kv.query.keys.count`
* `cloudflare.kv.query.type`
* `cloudflare.kv.query.cache_ttl`
* `cloudflare.kv.response.size`
* `cloudflare.kv.response.returned_rows`
* `cloudflare.kv.response.metadata`
* `cloudflare.kv.response.cache_status`

#### [kv\_put](https://developers.cloudflare.com/kv/api/write-key-value-pairs/#put-method)

* `cloudflare.kv.query.keys`
* `cloudflare.kv.query.keys.count`
* `cloudflare.kv.query.value_type`
* `cloudflare.kv.query.expiration`
* `cloudflare.kv.query.expiration_ttl`
* `cloudflare.kv.query.metadata`
* `cloudflare.kv.query.payload.size`

#### [kv\_delete](https://developers.cloudflare.com/kv/api/delete-key-value-pairs/#delete-method)

* `cloudflare.kv.query.keys`
* `cloudflare.kv.query.keys.count`

#### [kv\_list](https://developers.cloudflare.com/kv/api/list-keys/#list-method)

* `cloudflare.kv.query.prefix`
* `cloudflare.kv.query.limit`
* `cloudflare.kv.query.cursor`
* `cloudflare.kv.response.size`
* `cloudflare.kv.response.returned_rows`
* `cloudflare.kv.response.list_complete`
* `cloudflare.kv.response.cursor`
* `cloudflare.kv.response.cache_status`
* `cloudflare.kv.response.expiration`

---

### [R2](https://developers.cloudflare.com/r2/)

#### Attributes available on all R2 spans

* `cloudflare.binding.type`
* `cloudflare.binding.name`
* `cloudflare.r2.bucket`
* `cloudflare.r2.operation`
* `cloudflare.r2.response.success`
* `cloudflare.r2.error.message`
* `cloudflare.r2.error.code`

#### [r2\_head](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/#bucket-method-definitions)

* `cloudflare.r2.request.key`
* `cloudflare.r2.response.etag`
* `cloudflare.r2.response.size`
* `cloudflare.r2.response.uploaded`
* `cloudflare.r2.response.checksum.value`
* `cloudflare.r2.response.checksum.type`
* `cloudflare.r2.response.storage_class`
* `cloudflare.r2.response.ssec_key`
* `cloudflare.r2.response.content_type`
* `cloudflare.r2.response.content_encoding`
* `cloudflare.r2.response.content_disposition`
* `cloudflare.r2.response.content_language`
* `cloudflare.r2.response.cache_control`
* `cloudflare.r2.response.cache_expiry`
* `cloudflare.r2.response.custom_metadata`

#### [r2\_get](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/#r2getoptions)

* `cloudflare.r2.request.key`
* `cloudflare.r2.request.range.offset`
* `cloudflare.r2.request.range.length`
* `cloudflare.r2.request.range.suffix`
* `cloudflare.r2.request.range`
* `cloudflare.r2.request.ssec_key`
* `cloudflare.r2.request.only_if.etag_matches`
* `cloudflare.r2.request.only_if.etag_does_not_match`
* `cloudflare.r2.request.only_if.uploaded_before`
* `cloudflare.r2.request.only_if.uploaded_after`
* `cloudflare.r2.response.etag`
* `cloudflare.r2.response.size`
* `cloudflare.r2.response.uploaded`
* `cloudflare.r2.response.checksum.value`
* `cloudflare.r2.response.checksum.type`
* `cloudflare.r2.response.storage_class`
* `cloudflare.r2.response.ssec_key`
* `cloudflare.r2.response.content_type`
* `cloudflare.r2.response.content_encoding`
* `cloudflare.r2.response.content_disposition`
* `cloudflare.r2.response.content_language`
* `cloudflare.r2.response.cache_control`
* `cloudflare.r2.response.cache_expiry`
* `cloudflare.r2.response.custom_metadata`

#### [r2\_put](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/#r2putoptions)

* `cloudflare.r2.request.key`
* `cloudflare.r2.request.size`
* `cloudflare.r2.request.checksum.type`
* `cloudflare.r2.request.checksum.value`
* `cloudflare.r2.request.custom_metadata`
* `cloudflare.r2.request.http_metadata.content_type`
* `cloudflare.r2.request.http_metadata.content_encoding`
* `cloudflare.r2.request.http_metadata.content_disposition`
* `cloudflare.r2.request.http_metadata.content_language`
* `cloudflare.r2.request.http_metadata.cache_control`
* `cloudflare.r2.request.http_metadata.cache_expiry`
* `cloudflare.r2.request.storage_class`
* `cloudflare.r2.request.ssec_key`
* `cloudflare.r2.request.only_if.etag_matches`
* `cloudflare.r2.request.only_if.etag_does_not_match`
* `cloudflare.r2.request.only_if.uploaded_before`
* `cloudflare.r2.request.only_if.uploaded_after`
* `cloudflare.r2.response.etag`
* `cloudflare.r2.response.size`
* `cloudflare.r2.response.uploaded`
* `cloudflare.r2.response.checksum.value`
* `cloudflare.r2.response.checksum.type`
* `cloudflare.r2.response.storage_class`
* `cloudflare.r2.response.ssec_key`
* `cloudflare.r2.response.content_type`
* `cloudflare.r2.response.content_encoding`
* `cloudflare.r2.response.content_disposition`
* `cloudflare.r2.response.content_language`
* `cloudflare.r2.response.cache_control`
* `cloudflare.r2.response.cache_expiry`
* `cloudflare.r2.response.custom_metadata`

#### [r2\_list](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/#r2listoptions)

* `cloudflare.r2.request.limit`
* `cloudflare.r2.request.prefix`
* `cloudflare.r2.request.cursor`
* `cloudflare.r2.request.delimiter`
* `cloudflare.r2.request.start_after`
* `cloudflare.r2.request.include.http_metadata`
* `cloudflare.r2.request.include.custom_metadata`
* `cloudflare.r2.response.returned_objects`
* `cloudflare.r2.response.delimited_prefixes`
* `cloudflare.r2.response.truncated`
* `cloudflare.r2.response.cursor`

#### [r2\_delete](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/#bucket-method-definitions)

* `cloudflare.r2.request.keys`

#### [r2\_createMultipartUpload](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/#r2multipartoptions)

* `cloudflare.r2.request.key`
* `cloudflare.r2.request.custom_metadata`
* `cloudflare.r2.request.http_metadata.content_type`
* `cloudflare.r2.request.http_metadata.content_encoding`
* `cloudflare.r2.request.http_metadata.content_disposition`
* `cloudflare.r2.request.http_metadata.content_language`
* `cloudflare.r2.request.http_metadata.cache_control`
* `cloudflare.r2.request.http_metadata.cache_expiry`
* `cloudflare.r2.request.storage_class`
* `cloudflare.r2.request.ssec_key`
* `cloudflare.r2.response.upload_id`

#### [r2\_uploadPart](https://developers.cloudflare.com/r2/api/workers/workers-multipart-usage/)

* `cloudflare.r2.request.key`
* `cloudflare.r2.request.upload_id`
* `cloudflare.r2.request.part_number`
* `cloudflare.r2.request.ssec_key`
* `cloudflare.r2.request.size`
* `cloudflare.r2.response.etag`

#### [r2\_abortMultipartUpload](https://developers.cloudflare.com/r2/api/workers/workers-multipart-usage/)

* `cloudflare.r2.request.key`
* `cloudflare.r2.request.upload_id`

#### [r2\_completeMultipartUpload](https://developers.cloudflare.com/r2/api/workers/workers-multipart-usage/)

* `cloudflare.r2.request.key`
* `cloudflare.r2.request.upload_id`
* `cloudflare.r2.request.uploaded_parts`
* `cloudflare.r2.response.etag`
* `cloudflare.r2.response.size`
* `cloudflare.r2.response.uploaded`
* `cloudflare.r2.response.checksum.value`
* `cloudflare.r2.response.checksum.type`
* `cloudflare.r2.response.storage_class`
* `cloudflare.r2.response.ssec_key`
* `cloudflare.r2.response.content_type`
* `cloudflare.r2.response.content_encoding`
* `cloudflare.r2.response.content_disposition`
* `cloudflare.r2.response.content_language`
* `cloudflare.r2.response.cache_control`
* `cloudflare.r2.response.cache_expiry`
* `cloudflare.r2.response.custom_metadata`

---

### [Durable Object API](https://developers.cloudflare.com/durable-objects/)

#### `durable_object_subrequest`

---

### [Durable Object Storage SQL API](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api)

The SQL API allows you to modify the SQLite database embedded within a Durable Object.

#### [durable\_object\_storage\_exec](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#exec)

* `db.system.name`
* `db.operation.name`
* `db.query.text`
* `cloudflare.durable_object.query.bindings`
* `cloudflare.durable_object.response.rows_read`
* `cloudflare.durable_object.response.rows_written`

#### [durable\_object\_storage\_getDatabaseSize](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#databasesize)

* `db.operation.name`
* `cloudflare.durable_object.response.db_size`

#### [durable\_object\_storage\_kv\_get](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#get)

* `cloudflare.durable_object.kv.query.keys`
* `cloudflare.durable_object.kv.query.keys.count`

#### [durable\_object\_storage\_kv\_put](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#put)

* `cloudflare.durable_object.kv.query.keys`
* `cloudflare.durable_object.kv.query.keys.count`

#### [durable\_object\_storage\_kv\_delete](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#delete)

* `cloudflare.durable_object.kv.query.keys`
* `cloudflare.durable_object.kv.query.keys.count`
* `cloudflare.durable_object.kv.response.deleted_count`

#### [durable\_object\_storage\_kv\_list](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/#list)

* `cloudflare.durable_object.kv.query.start`
* `cloudflare.durable_object.kv.query.startAfter`
* `cloudflare.durable_object.kv.query.end`
* `cloudflare.durable_object.kv.query.prefix`
* `cloudflare.durable_object.kv.query.reverse`
* `cloudflare.durable_object.kv.query.limit`

---

### [Durable Object Storage KV API](https://developers.cloudflare.com/durable-objects/api/legacy-kv-storage-api)

The legacy KV-backed API allows you to modify embedded storage within a Durable Object.

#### [durable\_object\_storage\_get](https://developers.cloudflare.com/durable-objects/api/legacy-kv-storage-api/#do-kv-async-get)

#### [durable\_object\_storage\_put](https://developers.cloudflare.com/durable-objects/api/legacy-kv-storage-api/#do-kv-async-put)

#### [durable\_object\_storage\_delete](https://developers.cloudflare.com/durable-objects/api/legacy-kv-storage-api/#do-kv-async-delete)

#### [durable\_object\_storage\_list](https://developers.cloudflare.com/durable-objects/api/legacy-kv-storage-api/#do-kv-async-list)

#### [durable\_object\_storage\_deleteAll](https://developers.cloudflare.com/durable-objects/api/legacy-kv-storage-api/#deleteall)

---

### [Durable Object Storage Alarms API](https://developers.cloudflare.com/durable-objects/api/alarms/)

#### [durable\_object\_alarms\_getAlarm](https://developers.cloudflare.com/durable-objects/api/alarms/#getalarm)

#### [durable\_object\_alarms\_setAlarm](https://developers.cloudflare.com/durable-objects/api/alarms/#setalarm)

#### [durable\_object\_alarms\_deleteAlarm](https://developers.cloudflare.com/durable-objects/api/alarms/#deletealarm)

---

### [Images](https://developers.cloudflare.com/images/transform-images/bindings/)

#### [images\_output](https://developers.cloudflare.com/images/transform-images/bindings/#output)

* `cloudflare.binding.type`
* `cloudflare.images.options.format`
* `cloudflare.images.options.quality`
* `cloudflare.images.options.background`
* `cloudflare.images.options.anim`
* `cloudflare.images.options.transforms`
* `cloudflare.images.error.code`

#### [images\_info](https://developers.cloudflare.com/images/transform-images/bindings/#info)

* `cloudflare.binding.type`
* `cloudflare.images.options.encoding`
* `cloudflare.images.result.format`
* `cloudflare.images.result.file_size`
* `cloudflare.images.result.width`
* `cloudflare.images.result.height`
* `cloudflare.images.error.code`

---

### [Email](https://developers.cloudflare.com/email-routing/)

#### [reply\_email](https://developers.cloudflare.com/email-routing/email-workers/reply-email-workers/)

#### [forward\_email](https://developers.cloudflare.com/email-routing/email-workers/runtime-api/)

#### [send\_email](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/)

---

### [Queues](https://developers.cloudflare.com/queues/)

#### [queue\_send](https://developers.cloudflare.com/queues/configuration/javascript-apis/#queue)

#### [queue\_sendBatch](https://developers.cloudflare.com/queues/configuration/javascript-apis/#queue)

---

### [Rate limiting](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/)

#### [ratelimit\_run](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/#best-practices)

---


---

---
title: Vite plugin
description: A full-featured integration between Vite and the Workers runtime
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vite plugin

The Cloudflare Vite plugin enables a full-featured integration between [Vite ↗](https://vite.dev/) and the [Workers runtime](https://developers.cloudflare.com/workers/runtime-apis/). Your Worker code runs inside [workerd ↗](https://github.com/cloudflare/workerd), matching the production behavior as closely as possible and providing confidence as you develop and deploy your applications.

## Features

* Uses the Vite [Environment API ↗](https://vite.dev/guide/api-environment) to integrate Vite with the Workers runtime
* Provides direct access to [Workers runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/) and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/)
* Builds your front-end assets for deployment to Cloudflare, enabling you to build static sites, SPAs, and full-stack applications
* Official support for [TanStack Start ↗](https://tanstack.com/start/) and [React Router v7 ↗](https://reactrouter.com/) with server-side rendering
* Leverages Vite's hot module replacement for consistently fast updates
* Supports `vite preview` for previewing your build output in the Workers runtime prior to deployment

## Use cases

* [TanStack Start ↗](https://tanstack.com/start/)
* [React Router v7 ↗](https://reactrouter.com/)
* Static sites, such as single-page applications, with or without an integrated backend API
* Standalone Workers
* Multi-Worker applications

## Get started

To create a new application from a ready-to-go template, refer to the [TanStack Start](https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack-start/), [React Router](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/), [React](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/) or [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/) framework guides.

To create a standalone Worker from scratch, refer to [Get started](https://developers.cloudflare.com/workers/vite-plugin/get-started/).

For a more in-depth look at adapting an existing Vite project and an introduction to key concepts, refer to the [Tutorial](https://developers.cloudflare.com/workers/vite-plugin/tutorial/).

```json
{"@context":"https://schema.org","@type":"BreadcrumbList","itemListElement":[{"@type":"ListItem","position":1,"item":{"@id":"/directory/","name":"Directory"}},{"@type":"ListItem","position":2,"item":{"@id":"/workers/","name":"Workers"}},{"@type":"ListItem","position":3,"item":{"@id":"/workers/vite-plugin/","name":"Vite plugin"}}]}
```

---

---
title: Get started
description: Get started with the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Get started

Note

This guide demonstrates creating a standalone Worker from scratch. If you would instead like to create a new application from a ready-to-go template, refer to the [TanStack Start](https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack-start/), [React Router](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/), [React](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/) or [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/) framework guides.

## Start with a basic `package.json`

package.json

```json
{
  "name": "cloudflare-vite-get-started",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite dev",
    "build": "vite build",
    "preview": "npm run build && vite preview",
    "deploy": "npm run build && wrangler deploy"
  }
}
```

Note

Ensure that you include `"type": "module"` in order to use ES modules by default.

## Install the dependencies

```sh
# npm
npm i -D vite @cloudflare/vite-plugin wrangler

# yarn
yarn add -D vite @cloudflare/vite-plugin wrangler

# pnpm
pnpm add -D vite @cloudflare/vite-plugin wrangler

# bun
bun add -d vite @cloudflare/vite-plugin wrangler
```

## Create your Vite config file and include the Cloudflare plugin

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [cloudflare()],
});
```

The Cloudflare Vite plugin doesn't require any configuration by default and will look for a `wrangler.jsonc`, `wrangler.json` or `wrangler.toml` in the root of your application.

Refer to the [API reference](https://developers.cloudflare.com/workers/vite-plugin/reference/api/) for configuration options.

## Create your Worker config file

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "cloudflare-vite-get-started",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./src/index.ts"
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "cloudflare-vite-get-started"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./src/index.ts"
```

The `name` field specifies the name of your Worker. By default, this is also used as the name of the Worker's Vite Environment (see [Vite Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/vite-environments/) for more information). The `main` field specifies the entry file for your Worker code.

For more information about the Worker configuration, see [Configuration](https://developers.cloudflare.com/workers/wrangler/configuration/).

## Create your Worker entry file

src/index.ts

```ts
export default {
  fetch() {
    return new Response(`Running in ${navigator.userAgent}!`);
  },
};
```

A request to this Worker will return **'Running in Cloudflare-Workers!'**, demonstrating that the code is running inside the Workers runtime.
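From here, the entry file can be extended with simple routing. The following is a minimal sketch (the `/api/ping` path and JSON body are illustrative, not part of this guide):

```typescript
// Minimal routing sketch for a Worker entry file.
// The /api/ping path and response body are illustrative examples.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/ping") {
      // Response.json serializes the body and sets the JSON content type.
      return Response.json({ ok: true });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

During `vite dev`, this handler runs inside workerd, so the routing behaves the same in local development as it does after deployment.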

## Dev, build, preview and deploy

You can now start the Vite development server (`npm run dev`), build the application (`npm run build`), preview the built application (`npm run preview`), and deploy to Cloudflare (`npm run deploy`).


---

---
title: API
description: Vite plugin API
image: https://developers.cloudflare.com/dev-products-preview.png
---


# API

## `cloudflare()`

The `cloudflare` plugin should be included in the Vite `plugins` array:

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [cloudflare()],
});
```

It accepts an optional `PluginConfig` parameter.

## `interface PluginConfig`

* `configPath` ` string ` optional  
An optional path to your Worker config file. By default, a `wrangler.jsonc`, `wrangler.json`, or `wrangler.toml` file in the root of your application will be used as the Worker config.  
For more information about the Worker configuration, see [Configuration](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `config` ` WorkerConfigCustomizer<true> ` optional  
Customize or override Worker configuration programmatically. Accepts a partial configuration object or a function that receives the current config.  
Applied after any config file loads. Use it to override values, modify the existing config, or define Workers entirely in code.  
See [Programmatic configuration](https://developers.cloudflare.com/workers/vite-plugin/reference/programmatic-configuration/) for details.
* `viteEnvironment` ` { name?: string; childEnvironments?: string[] } ` optional  
Optional Vite environment options. By default, the environment name is the Worker name with `-` characters replaced with `_`. Setting the name here will override this. A typical use case is setting `viteEnvironment: { name: "ssr" }` to apply the Worker to the SSR environment.  
The `childEnvironments` option is for supporting React Server Components via [@vitejs/plugin-rsc ↗](https://github.com/vitejs/vite-plugin-react/tree/main/packages/plugin-rsc) and frameworks that build on top of it. This enables embedding additional environments with separate module graphs inside a single Worker.  
See [Vite Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/vite-environments/) for more information.
* `persistState` ` boolean | { path: string } ` optional  
An optional override for state persistence. By default, state is persisted to `.wrangler/state`. A custom `path` can be provided or, alternatively, persistence can be disabled by setting the value to `false`.
* `inspectorPort` ` number | false ` optional  
An optional override for debugging your Workers. By default, the debugging inspector is enabled and listens on port `9229`. A custom port can be provided or, alternatively, setting this to `false` will disable the debugging inspector.  
See [Debugging](https://developers.cloudflare.com/workers/vite-plugin/reference/debugging/) for more information.
* `auxiliaryWorkers` ` Array<AuxiliaryWorkerConfig> ` optional  
An optional array of auxiliary Workers. Auxiliary Workers are additional Workers that are used as part of your application. You can use [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to call auxiliary Workers from your main (entry) Worker. All requests are routed through your entry Worker. During the build, each Worker is output to a separate subdirectory of `dist`.  
Note  
When running `wrangler deploy`, only your main (entry) Worker will be deployed. If using multiple Workers, each auxiliary Worker must be deployed individually. You can inspect the `dist` directory and then run `wrangler deploy -c dist/<auxiliary-worker>/wrangler.json` for each.
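As a rough sketch, several of these options can be combined in a single plugin invocation. The specific values below (config path, state directory, disabled inspector) are illustrative choices, not defaults:

```typescript
// vite.config.ts — illustrative PluginConfig values, not required defaults.
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      // Explicit Worker config location (by default, a wrangler.jsonc,
      // wrangler.json, or wrangler.toml in the application root is used).
      configPath: "./wrangler.jsonc",
      // Persist local state to a custom directory instead of .wrangler/state.
      persistState: { path: "./.local-state" },
      // Disable the debugging inspector (it listens on port 9229 by default).
      inspectorPort: false,
    }),
  ],
});
```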

## `interface AuxiliaryWorkerConfig`

Auxiliary Workers require a `configPath`, a `config` option, or both.

* `configPath` ` string ` optional  
The path to your Worker config file. This field is required unless `config` is provided.  
For more information about the Worker configuration, see [Configuration](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `config` ` WorkerConfigCustomizer<false> ` optional  
Customize or override Worker configuration programmatically. When used without `configPath`, this allows defining auxiliary Workers entirely in code.  
See [Programmatic configuration](https://developers.cloudflare.com/workers/vite-plugin/reference/programmatic-configuration/) for usage examples.
* `viteEnvironment` ` { name?: string; childEnvironments?: string[] } ` optional  
Optional Vite environment options. By default, the environment name is the Worker name with `-` characters replaced with `_`. Setting the name here will override this.  
The `childEnvironments` option is for supporting React Server Components via [@vitejs/plugin-rsc ↗](https://github.com/vitejs/vite-plugin-react/tree/main/packages/plugin-rsc) and frameworks that build on top of it. This enables embedding additional environments with separate module graphs inside a single Worker.  
See [Vite Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/vite-environments/) for more information.
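For example, an entry Worker with one auxiliary Worker might be wired up as follows (the `./api/wrangler.jsonc` path is hypothetical):

```typescript
// vite.config.ts — entry Worker plus one auxiliary Worker.
// The ./api/wrangler.jsonc path is a hypothetical example.
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      configPath: "./wrangler.jsonc",
      auxiliaryWorkers: [
        // Callable from the entry Worker via a service binding declared in its config.
        { configPath: "./api/wrangler.jsonc" },
      ],
    }),
  ],
});
```

After the build, the auxiliary Worker is emitted to its own subdirectory of `dist` and must be deployed separately from the entry Worker.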


---

---
title: Cloudflare Environments
description: Using Cloudflare environments with the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Cloudflare Environments

A Worker config file may contain configuration for multiple [Cloudflare environments](https://developers.cloudflare.com/workers/wrangler/environments/). With the Cloudflare Vite plugin, you select a Cloudflare environment at dev or build time by providing the `CLOUDFLARE_ENV` environment variable. Consider the following example Worker config file:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./src/index.ts",
  "vars": {
    "MY_VAR": "Top-level var"
  },
  "env": {
    "staging": {
      "vars": {
        "MY_VAR": "Staging var"
      }
    },
    "production": {
      "vars": {
        "MY_VAR": "Production var"
      }
    }
  }
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./src/index.ts"

[vars]
MY_VAR = "Top-level var"

[env.staging.vars]
MY_VAR = "Staging var"

[env.production.vars]
MY_VAR = "Production var"
```

If you run `CLOUDFLARE_ENV=production vite build` then the output `wrangler.json` file generated by the build will be a flattened configuration for the 'production' Cloudflare environment, as shown in the following example:

dist/wrangler.json

```json
{
  "name": "my-worker",
  "compatibility_date": "2026-04-03",
  "main": "index.js",
  "vars": { "MY_VAR": "Production var" }
}
```

Notice that the value of `MY_VAR` is `Production var`. This flattened configuration combines [top-level only](https://developers.cloudflare.com/workers/wrangler/configuration/#top-level-only-keys), [inheritable](https://developers.cloudflare.com/workers/wrangler/configuration/#inheritable-keys), and [non-inheritable](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) keys.

Note

The default Vite environment name for a Worker is always the top-level Worker name. This enables you to reference the Worker consistently in your Vite config when using multiple Cloudflare environments. See [Vite Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/vite-environments/) for more information.

Cloudflare environments can also be used in development. For example, you could run `CLOUDFLARE_ENV=development vite dev`. It is common to use the default top-level environment as the development environment and then add additional environments as necessary.

Note

Running `vite dev` or `vite build` without providing `CLOUDFLARE_ENV` will use the default top-level Cloudflare environment. As Cloudflare environments are applied at dev and build time, specifying `CLOUDFLARE_ENV` when running `vite preview` or `wrangler deploy` will have no effect.

## Secrets in local development

Warning

Do not use `vars` to store sensitive information in your Worker's Wrangler configuration file. Use secrets instead.

Put secrets for use in local development in either a `.dev.vars` file or a `.env` file, in the same directory as the Wrangler configuration file.

Note

You can use the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) to declare which secret names your Worker requires. When defined, only the keys listed in `secrets.required` are loaded from `.dev.vars` or `.env`. Additional keys are excluded and missing keys produce a warning.

Choose to use either `.dev.vars` or `.env` but not both. If you define a `.dev.vars` file, then values in `.env` files will not be included in the `env` object during local development.

These files should be formatted using the [dotenv ↗](https://hexdocs.pm/dotenvy/dotenv-file-format.html) syntax. For example:

.dev.vars / .env

```
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
```

Do not commit secrets to git

The `.dev.vars` and `.env` files should not be committed to git. Add `.dev.vars*` and `.env*` to your project's `.gitignore` file.

To set different secrets for each Cloudflare environment, create files named `.dev.vars.<environment-name>` or `.env.<environment-name>`.

When you select a Cloudflare environment in your local development, the corresponding environment-specific file will be loaded ahead of the generic `.dev.vars` (or `.env`) file.

* When using `.dev.vars.<environment-name>` files, all secrets must be defined per environment. If `.dev.vars.<environment-name>` exists then only this will be loaded; the `.dev.vars` file will not be loaded.
* In contrast, all matching `.env` files are loaded and the values are merged. For each variable, the value from the most specific file is used, with the following precedence:  
   * `.env.<environment-name>.local` (most specific)  
   * `.env.local`  
   * `.env.<environment-name>`  
   * `.env` (least specific)
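As a sketch of this merge behavior, consider hypothetical file contents for `CLOUDFLARE_ENV=staging`; for each variable, the most specific file wins (this is an illustration of the precedence above, not the plugin's actual loader):

```ts
// Hypothetical contents of each .env file, listed from least to most specific.
type Env = Record<string, string>;

const dotEnv: Env = { API_URL: "http://localhost:8787", LOG_LEVEL: "debug" }; // .env
const dotEnvStaging: Env = { API_URL: "https://staging.example.com" }; // .env.staging
const dotEnvLocal: Env = { LOG_LEVEL: "warn" }; // .env.local
const dotEnvStagingLocal: Env = { API_URL: "https://staging.internal" }; // .env.staging.local

// Later (more specific) files override earlier ones, per variable.
const resolved: Env = {
  ...dotEnv,
  ...dotEnvStaging,
  ...dotEnvLocal,
  ...dotEnvStagingLocal,
};
// resolved.API_URL comes from .env.staging.local; resolved.LOG_LEVEL from .env.local
```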

Controlling `.env` handling

It is possible to control how `.env` files are loaded in local development by setting environment variables on the process running the tools.

* To disable loading local dev vars from `.env` files without providing a `.dev.vars` file, set the `CLOUDFLARE_LOAD_DEV_VARS_FROM_DOT_ENV` environment variable to `"false"`.
* To include every environment variable defined in your system's process environment as a local development variable, ensure there is no `.dev.vars` and then set the `CLOUDFLARE_INCLUDE_PROCESS_ENV` environment variable to `"true"`. This is not needed when using the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property), which loads from `process.env` automatically.

## Combining Cloudflare environments and Vite modes

You may wish to combine the concepts of [Cloudflare environments](https://developers.cloudflare.com/workers/wrangler/environments/) and [Vite modes ↗](https://vite.dev/guide/env-and-mode.html#modes). With this approach, the Vite mode can be used to select the Cloudflare environment and a single method can be used to determine environment specific configuration and code. Consider again the previous example:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./src/index.ts",
  "vars": {
    "MY_VAR": "Top-level var"
  },
  "env": {
    "staging": {
      "vars": {
        "MY_VAR": "Staging var"
      }
    },
    "production": {
      "vars": {
        "MY_VAR": "Production var"
      }
    }
  }
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./src/index.ts"

[vars]
MY_VAR = "Top-level var"

[env.staging.vars]
MY_VAR = "Staging var"

[env.production.vars]
MY_VAR = "Production var"
```

Next, provide `.env.staging` and `.env.production` files:

.env.staging

```
CLOUDFLARE_ENV=staging
```

.env.production

```
CLOUDFLARE_ENV=production
```

By default, `vite build` uses the 'production' Vite mode. Vite will therefore load the `.env.production` file to get the environment variables that are used in the build. Since the `.env.production` file contains `CLOUDFLARE_ENV=production`, the Cloudflare Vite plugin will select the 'production' Cloudflare environment. The value of `MY_VAR` will therefore be `'Production var'`. If you run `vite build --mode staging` then the 'staging' Vite mode will be used and the 'staging' Cloudflare environment will be selected. The value of `MY_VAR` will therefore be `'Staging var'`.
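One common pattern is to encode these modes as package scripts (the script names here are illustrative):

```json
{
  "scripts": {
    "build": "vite build",
    "build:staging": "vite build --mode staging"
  }
}
```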

For more information about using `.env` files with Vite, see the [relevant documentation ↗](https://vite.dev/guide/env-and-mode#env-files).


---

---
title: Debugging
description: Debugging with the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Debugging

The Cloudflare Vite plugin has debugging enabled by default and listens on port `9229`. You may choose a custom port or disable debugging by setting the `inspectorPort` option in the [plugin config](https://developers.cloudflare.com/workers/vite-plugin/reference/api#interface-pluginconfig). There are two recommended methods for debugging your Workers during local development:

## DevTools

When running `vite dev` or `vite preview`, a `/__debug` route is added that provides access to [Cloudflare's implementation ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/chrome-devtools-patches) of [Chrome's DevTools ↗](https://developer.chrome.com/docs/devtools/overview). Navigating to this route will open a DevTools tab for each of the Workers in your application.

Once the tab(s) are open, you can make a request to your application and start debugging your Worker code.

Note

When debugging multiple Workers, you may need to allow your browser to open pop-ups.

## VS Code

To set up [VS Code ↗](https://code.visualstudio.com/) to support breakpoint debugging in your application, you should create a `.vscode/launch.json` file that contains the following configuration:

.vscode/launch.json

```json
{
  "configurations": [
    {
      "name": "<NAME_OF_WORKER>",
      "type": "node",
      "request": "attach",
      "websocketAddress": "ws://localhost:9229/<NAME_OF_WORKER>",
      "resolveSourceMapLocations": null,
      "attachExistingChildren": false,
      "autoAttachChildProcesses": false,
      "sourceMaps": true
    }
  ],
  "compounds": [
    {
      "name": "Debug Workers",
      "configurations": ["<NAME_OF_WORKER>"],
      "stopAll": true
    }
  ]
}
```

Here, `<NAME_OF_WORKER>` is the name of the Worker as specified in your Worker config file. If you have used the `inspectorPort` option to set a custom port, then that port should be used in the `websocketAddress` field.
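For example, setting a custom port might look like the following sketch (the `inspectorPort` option name comes from the plugin config reference linked above; the port value is illustrative):

```ts
// vite.config.ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  // Debugging now listens on port 9230 instead of the default 9229, so
  // launch.json should use "ws://localhost:9230/<NAME_OF_WORKER>".
  plugins: [cloudflare({ inspectorPort: 9230 })],
});
```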

Note

If you have more than one Worker in your application, you should add a configuration in the `configurations` field for each and include the configuration name in the `compounds` `configurations` array.

With this setup, you can run `vite dev` or `vite preview` and then select **Debug Workers** at the top of the **Run & Debug** panel to start debugging.


---

---
title: Migrating from wrangler dev
description: Migrating from wrangler dev to the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrating from wrangler dev

In most cases, migrating from [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) is straightforward and you can follow the instructions in [Get started](https://developers.cloudflare.com/workers/vite-plugin/get-started/). There are a few key differences to highlight:

## Input and output Worker config files

With the Cloudflare Vite plugin, your [Worker config file](https://developers.cloudflare.com/workers/wrangler/configuration/) (for example, `wrangler.jsonc`) is the input configuration and a separate output configuration is created as part of the build. This output file is a snapshot of your configuration at the time of the build and is modified to reference your build artifacts. It is the configuration that is used for preview and deployment. Once you have run `vite build`, running `wrangler deploy` or `vite preview` will automatically locate this output configuration file.

## Cloudflare Environments

With the Cloudflare Vite plugin, [Cloudflare Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/) are applied at dev and build time. Running `wrangler deploy --env some-env` is therefore not applicable and the environment to deploy should instead be set by running `CLOUDFLARE_ENV=some-env vite build`.

## Redundant fields in the Wrangler config file

There are various options in the [Worker config file](https://developers.cloudflare.com/workers/wrangler/configuration/) that are ignored when using Vite, as they are either no longer applicable or are replaced by Vite equivalents. If these options are provided, then warnings will be printed to the console with suggestions for how to proceed.

### Not applicable

The following build-related options are handled by Vite and are not applicable when using the Cloudflare Vite plugin:

* `tsconfig`
* `rules`
* `build`
* `no_bundle`
* `find_additional_modules`
* `base_dir`
* `preserve_file_names`

### Not supported

* `site` — Use [Workers Assets](https://developers.cloudflare.com/workers/static-assets/) instead.

### Replaced by Vite equivalents

The following options have Vite equivalents that should be used instead:

| Wrangler option                                      | Vite equivalent                                                              |
| ---------------------------------------------------- | ---------------------------------------------------------------------------- |
| define                                               | [define ↗](https://vite.dev/config/shared-options.html#define)               |
| alias                                                | [resolve.alias ↗](https://vite.dev/config/shared-options.html#resolve-alias) |
| minify                                               | [build.minify ↗](https://vite.dev/config/build-options.html#build-minify)    |
| Local dev settings (ip, port, local\_protocol, etc.) | [Server options ↗](https://vite.dev/config/server-options.html)              |

See [Vite Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/vite-environments/) for more information about configuring your Worker environments in Vite.

### Inferred

If [build.sourcemap ↗](https://vite.dev/config/build-options#build-sourcemap) is enabled for a given Worker environment in the Vite config, `"upload_source_maps": true` is automatically added to the output Wrangler configuration file. This means that generated sourcemaps are uploaded by default. To override this setting, you can set the value of `upload_source_maps` explicitly in the input Worker config.
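For example, a sketch of enabling sourcemaps for a Worker's Vite environment (assuming a Worker named `my-worker`, whose default environment name is `my_worker`):

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  environments: {
    my_worker: {
      // Sourcemaps are generated for this Worker and, per the inference
      // described above, uploaded by default on deploy.
      build: { sourcemap: true },
    },
  },
  plugins: [cloudflare()],
});
```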


---

---
title: Non-JavaScript modules
description: Additional module types that can be imported in your Worker
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Non-JavaScript modules

In addition to TypeScript and JavaScript, the following module types are automatically configured to be importable in your Worker code.

| Module extension    | Imported type      |
| ------------------- | ------------------ |
| .txt                | string             |
| .html               | string             |
| .sql                | string             |
| .bin                | ArrayBuffer        |
| .wasm, .wasm?module | WebAssembly.Module |

For example, with the following import, `text` will be a string containing the contents of `example.txt`:

JavaScript

```js
import text from "./example.txt";
```

This is also the basis for importing Wasm, as in the following example:

TypeScript

```ts
import wasm from "./example.wasm";

// Instantiate Wasm modules in the module scope
const instance = await WebAssembly.instantiate(wasm);

export default {
  fetch() {
    const result = instance.exports.exported_func();

    return new Response(result);
  },
};
```

Note

Cloudflare Workers does not support `WebAssembly.instantiateStreaming()`.


---

---
title: Programmatic configuration
description: Configure Workers programmatically using the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Programmatic configuration

The Wrangler configuration file is optional when using the Cloudflare Vite plugin. Without one, the plugin uses default values. You can customize Worker configuration programmatically with the `config` option. This is useful when the Cloudflare plugin runs inside another plugin or framework.

Note

Programmatic configuration is primarily designed for use by frameworks and plugin developers. Users should normally use Wrangler config files instead. Configuration set via the `config` option will not be included when running `wrangler types` or resource based Wrangler CLI commands such as `wrangler kv` or `wrangler d1`.

## Default configuration

Without a configuration file, the plugin generates sensible defaults for an assets-only Worker. The `name` comes from `package.json` or the project directory name. The `compatibility_date` uses the latest date supported by your installed Miniflare version.

## The `config` option

The `config` option offers three ways to programmatically configure your Worker. You can set any property from the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), though some options are [ignored or replaced by Vite equivalents](https://developers.cloudflare.com/workers/vite-plugin/reference/migrating-from-wrangler-dev/#redundant-fields-in-the-wrangler-config-file).

Note

You cannot define [Cloudflare environments](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/) via `config`, as they are resolved before this option is applied.

### Configuration object

Set `config` to an object to provide values that merge with defaults and Wrangler config file settings:

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: {
        compatibility_date: "2025-01-01",
        vars: {
          API_URL: "https://api.example.com",
        },
      },
    }),
  ],
});
```

These values merge with Wrangler config file values, with the `config` values taking precedence.

### Dynamic configuration function

Use a function when configuration depends on existing config values or external data, or if you need to compute or conditionally set values:

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: (userConfig) => ({
        vars: {
          WORKER_NAME: userConfig.name,
          BUILD_TIME: new Date().toISOString(),
        },
      }),
    }),
  ],
});
```

The function receives the current configuration (defaults or loaded config file). Return an object with values to merge.

### In-place editing

A `config` function can mutate the config object directly instead of returning overrides. This is useful for deleting properties or removing array items:

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: (userConfig) => {
        // Replace all existing compatibility flags
        userConfig.compatibility_flags = ["nodejs_compat"];
      },
    }),
  ],
});
```

Note

When editing in place, do not return a value from the function.

## Auxiliary Workers

Auxiliary Workers also support the `config` option, enabling multi-Worker architectures without config files. Define them using `config` inside the `auxiliaryWorkers` array:

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      config: {
        name: "entry-worker",
        main: "./src/entry.ts",
        compatibility_date: "2025-01-01",
        services: [{ binding: "API", service: "api-worker" }],
      },
      auxiliaryWorkers: [
        {
          config: {
            name: "api-worker",
            main: "./src/api.ts",
            compatibility_date: "2025-01-01",
          },
        },
      ],
    }),
  ],
});
```

### Configuration overrides

Combine a config file with `config` to override specific values:

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      configPath: "./wrangler.jsonc",
      auxiliaryWorkers: [
        {
          configPath: "./workers/api/wrangler.jsonc",
          config: {
            vars: {
              ENDPOINT: "https://api.example.com/v2",
            },
          },
        },
      ],
    }),
  ],
});
```

### Configuration inheritance

Auxiliary Workers receive the resolved entry Worker config in the second parameter to the `config` function. This makes it straightforward to inherit configuration from the entry Worker in auxiliary Workers.

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      auxiliaryWorkers: [
        {
          config: (_, { entryWorkerConfig }) => ({
            name: "auxiliary-worker",
            main: "./src/auxiliary-worker.ts",
            // Inherit compatibility settings from entry Worker
            compatibility_date: entryWorkerConfig.compatibility_date,
            compatibility_flags: entryWorkerConfig.compatibility_flags,
          }),
        },
      ],
    }),
  ],
});
```

## Configuration merging behavior

The `config` option uses [defu ↗](https://github.com/unjs/defu) for merging configuration objects.

* Object properties are recursively merged
* Arrays are concatenated (`config` values first, then existing values)
* Primitive values from `config` override existing values
* `undefined` values in `config` do not override existing values
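These rules can be sketched as a small merge function (an illustration of the listed behavior, not the plugin's or defu's actual implementation):

```ts
type Obj = Record<string, unknown>;

function isPlainObject(v: unknown): v is Obj {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

// Merge `config` into `existing` following the rules above.
function merge(config: Obj, existing: Obj): Obj {
  const result: Obj = { ...existing };
  for (const [key, value] of Object.entries(config)) {
    const current = result[key];
    if (value === undefined) continue; // undefined never overrides
    if (Array.isArray(value) && Array.isArray(current)) {
      result[key] = [...value, ...current]; // arrays concatenated, config first
    } else if (isPlainObject(value) && isPlainObject(current)) {
      result[key] = merge(value, current); // objects merged recursively
    } else {
      result[key] = value; // primitives from config win
    }
  }
  return result;
}

const merged: any = merge(
  { compatibility_flags: ["nodejs_compat"], vars: { A: "new" }, name: undefined },
  { compatibility_flags: ["flag_from_file"], vars: { A: "old", B: "kept" }, name: "my-worker" },
);
// merged.compatibility_flags: ["nodejs_compat", "flag_from_file"]
// merged.vars: { A: "new", B: "kept" }
// merged.name: "my-worker"
```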


---

---
title: Secrets
description: Using secrets with the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Secrets

[Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are typically used for storing sensitive information such as API keys and auth tokens. For deployed Workers, they are set via the dashboard or Wrangler CLI.

In local development, secrets can be provided to your Worker by using a [.dev.vars](https://developers.cloudflare.com/workers/configuration/secrets/#local-development-with-secrets) file. If you are using [Cloudflare Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/) then the relevant `.dev.vars` file will be selected. For example, `CLOUDFLARE_ENV=staging vite dev` will load `.dev.vars.staging` if it exists and fall back to `.dev.vars`.

Note

The `vite build` command copies the relevant `.dev.vars` file to the output directory. This is only used when running `vite preview` and is not deployed with your Worker.


---

---
title: Static Assets
description: Static assets and the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Static Assets

This guide focuses on the areas of working with static assets that are unique to the Vite plugin. For more general documentation, see [Static Assets](https://developers.cloudflare.com/workers/static-assets/).

## Configuration

The Vite plugin does not require the `assets` field in order to enable assets; instead, it determines whether assets should be included based on whether the `client` environment has been built. By default, the `client` environment is built if any of the following conditions are met:

* There is an `index.html` file in the root of your project
* `build.rollupOptions.input` or `environments.client.build.rollupOptions.input` is specified in your Vite config
* You have a non-empty [public directory ↗](https://vite.dev/guide/assets#the-public-directory)
* Your Worker [imports assets as URLs ↗](https://vite.dev/guide/assets#importing-asset-as-url)

On running `vite build`, an output `wrangler.json` configuration file is generated as part of the build output. The `assets.directory` field in this file is automatically populated with the path to your `client` build output. It is therefore not necessary to provide the `assets.directory` field in your input Worker configuration.

The `assets` configuration should be used, however, if you wish to set [routing configuration](https://developers.cloudflare.com/workers/static-assets/routing/) or enable the [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/#binding). The following example configures the `not_found_handling` for a single-page application so that the fallback will always be the root `index.html` file.

wrangler.jsonc

```jsonc
{
  "assets": {
    "not_found_handling": "single-page-application"
  }
}
```

wrangler.toml

```toml
[assets]
not_found_handling = "single-page-application"
```

## Features

The Vite plugin ensures that all of Vite's [static asset handling ↗](https://vite.dev/guide/assets) features are supported in your Worker as well as in your frontend. These include importing assets as URLs, importing as strings and importing from the `public` directory as well as inlining assets.

Assets [imported as URLs ↗](https://vite.dev/guide/assets#importing-asset-as-url) can be fetched via the [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/#binding). As the binding's `fetch` method requires a full URL, we recommend using the request URL as the `base`. This is demonstrated in the following example:

TypeScript

```ts
import myImage from "./my-image.png";

export default {
  fetch(request, env) {
    return env.ASSETS.fetch(new URL(myImage, request.url));
  },
};
```
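The `base` argument works via standard `new URL(path, base)` resolution: the imported asset URL is a root-relative path, and resolving it against the incoming request URL yields an absolute URL on the same origin. For example (with illustrative values):

```ts
const assetPath = "/assets/my-image-BmOnNJt4.png"; // illustrative hashed filename
const requestUrl = "https://example.com/some/page?query=1";

const resolved = new URL(assetPath, requestUrl);
// resolved.href: "https://example.com/assets/my-image-BmOnNJt4.png"
```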

Assets imported as URLs in your Worker will automatically be moved to the client build output. When running `vite build` the paths of any moved assets will be displayed in the console.

Note

If you are developing a multi-Worker application, assets can only be accessed on the client and in your entry Worker.

## Headers and redirects

Custom [headers](https://developers.cloudflare.com/workers/static-assets/headers/) and [redirects](https://developers.cloudflare.com/workers/static-assets/redirects/) are supported at build, preview and deploy time by adding `_headers` and `_redirects` files to your [public directory ↗](https://vite.dev/guide/assets#the-public-directory). The paths in these files should reflect the structure of your client build output. For example, generated assets are typically located in an [assets subdirectory ↗](https://vite.dev/config/build-options#build-assetsdir).
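For example, a minimal sketch of each file (the paths and values here are illustrative):

public/\_redirects

```
/old-page /new-page 301
```

public/\_headers

```
/assets/*
  Cache-Control: public, max-age=31536000, immutable
```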


---

---
title: Vite Environments
description: Vite environments and the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vite Environments

The [Vite Environment API ↗](https://vite.dev/guide/api-environment), released in Vite 6, is the key feature that enables the Cloudflare Vite plugin to integrate Vite directly with the Workers runtime. It is not necessary to understand all the intricacies of the Environment API as an end user, but it is useful to have a high-level understanding.

## Default behavior

Vite creates two environments by default: `client` and `ssr`. A front-end only application uses the `client` environment, whereas a full-stack application created with a framework typically uses the `client` environment for front-end code and the `ssr` environment for server-side rendering.

By default, when you add a Worker using the Cloudflare Vite plugin, an additional environment is created. Its name is derived from the Worker name, with any dashes replaced with underscores. This name can be used to reference the environment in your Vite config in order to apply environment specific configuration.

Note

The default Vite environment name for a Worker is always the top-level Worker name. This enables you to reference the Worker consistently in your Vite config when using multiple [Cloudflare Environments](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/).

## Environment configuration

In the following example we have a Worker named `my-worker` that is associated with a Vite environment named `my_worker`. We use the Vite config to set global constant replacements for this environment:

wrangler.jsonc

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "main": "./src/index.ts"
}
```

wrangler.toml

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./src/index.ts"
```

vite.config.ts

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  environments: {
    my_worker: {
      define: {
        __APP_VERSION__: JSON.stringify("v1.0.0"),
      },
    },
  },
  plugins: [cloudflare()],
});
```

For more information about Vite's configuration options, see [Configuring Vite ↗](https://vite.dev/config/).

The default behavior of using the Worker name as the environment name is appropriate when you have a standalone Worker, such as an API that is accessed from your front-end application, or an [auxiliary Worker](https://developers.cloudflare.com/workers/vite-plugin/reference/api/#interface-pluginconfig) that is accessed via service bindings.

## Full-stack frameworks

If you are using the Cloudflare Vite plugin with [TanStack Start ↗](https://tanstack.com/start/) or [React Router v7 ↗](https://reactrouter.com/), then your Worker is used for server-side rendering and tightly integrated with the framework. To support this, you should assign it to the `ssr` environment by setting `viteEnvironment.name` in the plugin config.

vite.config.ts

```
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";
import { reactRouter } from "@react-router/dev/vite";

export default defineConfig({
  plugins: [cloudflare({ viteEnvironment: { name: "ssr" } }), reactRouter()],
});
```

This merges the Worker's environment configuration with the framework's SSR configuration and ensures that the Worker is included as part of the framework's build output.


---

---
title: Tutorial - React SPA with an API
description: Create a React SPA with an API Worker using the Vite plugin
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Tutorial - React SPA with an API

**Last reviewed:**  12 months ago 

This tutorial takes you through the steps needed to adapt a Vite project to use the Cloudflare Vite plugin. Much of the content can also be applied to adapting existing Vite projects and to front-end frameworks other than React.

Note

If you want to start a new app with a template already set up with Vite, React and the Cloudflare Vite plugin, refer to the [React framework guide](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/). To create a standalone Worker, refer to [Get started](https://developers.cloudflare.com/workers/vite-plugin/get-started/).

## Introduction

In this tutorial, you will create a React SPA that can be deployed as a Worker with static assets. You will then add an API Worker that can be accessed from the front-end code. You will develop, build, and preview the application using Vite before finally deploying to Cloudflare.

## Set up and configure the React SPA

### Scaffold a Vite project

Start by creating a React TypeScript project with Vite.

npm

```
npm create vite@latest -- cloudflare-vite-tutorial --template react-ts
```

yarn

```
yarn create vite cloudflare-vite-tutorial --template react-ts
```

pnpm

```
pnpm create vite@latest cloudflare-vite-tutorial --template react-ts
```

Next, open the `cloudflare-vite-tutorial` directory in your editor of choice.

### Add the Cloudflare dependencies

npm

```
npm i -D @cloudflare/vite-plugin wrangler
```

yarn

```
yarn add -D @cloudflare/vite-plugin wrangler
```

pnpm

```
pnpm add -D @cloudflare/vite-plugin wrangler
```

bun

```
bun add -d @cloudflare/vite-plugin wrangler
```

### Add the plugin to your Vite config

vite.config.ts

```
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [react(), cloudflare()],
});
```

The Cloudflare Vite plugin doesn't require any configuration by default and will look for a `wrangler.jsonc`, `wrangler.json` or `wrangler.toml` in the root of your application.

Refer to the [API reference](https://developers.cloudflare.com/workers/vite-plugin/reference/api/) for configuration options.

### Create your Worker config file

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "cloudflare-vite-tutorial",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "not_found_handling": "single-page-application"
  }
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "cloudflare-vite-tutorial"
# Set this to today's date
compatibility_date = "2026-04-03"

[assets]
not_found_handling = "single-page-application"
```

The [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/) value has been set to `single-page-application`. This means that all not found requests will serve the `index.html` file. With the Cloudflare plugin, the `assets` routing configuration is used in place of Vite's default behavior. This ensures that your application's [routing configuration](https://developers.cloudflare.com/workers/static-assets/routing/) works the same way while developing as it does when deployed to production.

Note that the [directory](https://developers.cloudflare.com/workers/static-assets/binding/#directory) field is not used when configuring assets with Vite. The `directory` in the output configuration will automatically point to the client build output. See [Static Assets](https://developers.cloudflare.com/workers/vite-plugin/reference/static-assets/) for more information.

Note

When using the Cloudflare Vite plugin, the Worker config (for example, `wrangler.jsonc`) that you provide is the input configuration file. A separate output `wrangler.json` file is created when you run `vite build`. This output file is a snapshot of your configuration at the time of the build and is modified to reference your build artifacts. It is the configuration that is used for preview and deployment.

### Update the .gitignore file

When developing Workers, additional files are used and/or generated that should not be stored in git. Add the following lines to your `.gitignore` file:

.gitignore

```
.wrangler
.dev.vars*
```

### Run the development server

Run `npm run dev` to start the Vite development server and verify that your application is working as expected.

For a purely front-end application, you could now build (`npm run build`), preview (`npm run preview`), and deploy (`npm exec wrangler deploy`) your application. This tutorial, however, will show you how to go a step further and add an API Worker.

## Add an API Worker

### Configure TypeScript for your Worker code

npm

```
npm i -D @cloudflare/workers-types
```

yarn

```
yarn add -D @cloudflare/workers-types
```

pnpm

```
pnpm add -D @cloudflare/workers-types
```

bun

```
bun add -d @cloudflare/workers-types
```

tsconfig.worker.json

```
{
  "extends": "./tsconfig.node.json",
  "compilerOptions": {
    "tsBuildInfoFile": "./node_modules/.tmp/tsconfig.worker.tsbuildinfo",
    "types": ["@cloudflare/workers-types/2023-07-01", "vite/client"],
  },
  "include": ["worker"],
}
```

tsconfig.json

```
{
  "files": [],
  "references": [
    { "path": "./tsconfig.app.json" },
    { "path": "./tsconfig.node.json" },
    { "path": "./tsconfig.worker.json" },
  ],
}
```

### Add to your Worker configuration

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "cloudflare-vite-tutorial",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "not_found_handling": "single-page-application"
  },
  "main": "./worker/index.ts"
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "cloudflare-vite-tutorial"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./worker/index.ts"

[assets]
not_found_handling = "single-page-application"
```

The `main` field specifies the entry file for your Worker code.

### Add your API Worker

worker/index.ts

```
export default {
  fetch(request) {
    const url = new URL(request.url);

    if (url.pathname.startsWith("/api/")) {
      return Response.json({
        name: "Cloudflare",
      });
    }

    return new Response(null, { status: 404 });
  },
} satisfies ExportedHandler;
```

The Worker above will be invoked for any non-navigation request that does not match a static asset. It returns a JSON response if the `pathname` starts with `/api/` and otherwise returns a `404` response.

Note

For top-level navigation requests, browsers send a `Sec-Fetch-Mode: navigate` header. If this is present and the URL does not match a static asset, the `not_found_handling` behavior will be invoked rather than the Worker. This implicit routing is the default behavior.

If you would instead like to define the routes that invoke your Worker explicitly, you can provide an array of route patterns to [run\_worker\_first](https://developers.cloudflare.com/workers/static-assets/binding/#run%5Fworker%5Ffirst). This opts out of interpreting the `Sec-Fetch-Mode` header.

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "cloudflare-vite-tutorial",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "assets": {
    "not_found_handling": "single-page-application",
    "run_worker_first": [
      "/api/*"
    ]
  },
  "main": "./worker/index.ts"
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "cloudflare-vite-tutorial"
# Set this to today's date
compatibility_date = "2026-04-03"
main = "./worker/index.ts"

[assets]
not_found_handling = "single-page-application"
run_worker_first = [ "/api/*" ]
```

### Call the API from the client

Edit `src/App.tsx` so that it includes an additional button that calls the API and sets some state:

src/App.tsx

```
import { useState } from "react";
import reactLogo from "./assets/react.svg";
import viteLogo from "/vite.svg";
import "./App.css";

function App() {
  const [count, setCount] = useState(0);
  const [name, setName] = useState("unknown");

  return (
    <>
      <div>
        <a href="https://vite.dev" target="_blank">
          <img src={viteLogo} className="logo" alt="Vite logo" />
        </a>
        <a href="https://react.dev" target="_blank">
          <img src={reactLogo} className="logo react" alt="React logo" />
        </a>
      </div>
      <h1>Vite + React</h1>
      <div className="card">
        <button
          onClick={() => setCount((count) => count + 1)}
          aria-label="increment"
        >
          count is {count}
        </button>
        <p>
          Edit <code>src/App.tsx</code> and save to test HMR
        </p>
      </div>
      <div className="card">
        <button
          onClick={() => {
            fetch("/api/")
              .then((res) => res.json() as Promise<{ name: string }>)
              .then((data) => setName(data.name));
          }}
          aria-label="get name"
        >
          Name from API is: {name}
        </button>
        <p>
          Edit <code>worker/index.ts</code> to change the name
        </p>
      </div>
      <p className="read-the-docs">
        Click on the Vite and React logos to learn more
      </p>
    </>
  );
}

export default App;
```

Now, if you click the button, it will display 'Name from API is: Cloudflare'.

Increment the counter to update the application state in the browser. Next, edit `worker/index.ts` by changing the `name` it returns to `'Cloudflare Workers'`. If you click the button again, it will display the new `name` while preserving the previously set counter value.

With Vite and the Cloudflare plugin, you can iterate on the client and server parts of your app together, without losing UI state between edits.

### Build your application

Run `npm run build` to build your application.

Terminal window

```
npm run build
```

If you inspect the `dist` directory, you will see that it contains two subdirectories:

* `client` \- the client code that runs in the browser
* `cloudflare_vite_tutorial` \- the Worker code alongside the output `wrangler.json` configuration file
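
As a sketch, the layout looks roughly like this (exact file names will vary with your build):

```
dist/
├── client/                     # static assets served to the browser
│   ├── index.html
│   └── assets/
└── cloudflare_vite_tutorial/
    ├── index.js                # the Worker bundle
    └── wrangler.json           # the output configuration
```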

### Preview your application

Run `npm run preview` to validate that your application runs as expected.

Terminal window

```
npm run preview
```

This command will run your build output locally in the Workers runtime, closely matching its behavior in production.

### Deploy to Cloudflare

Run `npm exec wrangler deploy` to deploy your application to Cloudflare.

Terminal window

```
npm exec wrangler deploy
```

This command will automatically use the output `wrangler.json` that was included in the build output.

## Next steps

In this tutorial, we created an SPA that could be deployed as a Worker with static assets. We then added an API Worker that could be accessed from the front-end code. Finally, we deployed both the client and server-side parts of the application to Cloudflare.

Possible next steps include:

* Adding a binding to another Cloudflare service such as a [KV namespace](https://developers.cloudflare.com/kv/) or [D1 database](https://developers.cloudflare.com/d1/)
* Expanding the API to include additional routes
* Using a library, such as [Hono ↗](https://hono.dev/) or [tRPC ↗](https://trpc.io/), in your API Worker


---

---
title: Languages
description: Languages supported on Workers, a polyglot platform.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Languages

Workers is a polyglot platform, and provides first-class support for the following programming languages:

* [ JavaScript ](https://developers.cloudflare.com/workers/languages/javascript/)
* [ TypeScript ](https://developers.cloudflare.com/workers/languages/typescript/)
* [ Python Workers ](https://developers.cloudflare.com/workers/languages/python/)
* [ Rust ](https://developers.cloudflare.com/workers/languages/rust/)

Workers also supports [WebAssembly](https://developers.cloudflare.com/workers/runtime-apis/webassembly/) (abbreviated as "Wasm") — a binary format that many languages can be compiled to. This allows you to write Workers in programming languages beyond those listed above, including C, C++, Kotlin, Go, and more.


---

---
title: JavaScript
description: The Workers platform is designed to be JavaScript standards compliant and web-interoperable, and supports JavaScript standards, as defined by TC39 (ECMAScript). Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across WinterCG JavaScript runtimes.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# JavaScript

The Workers platform is designed to be [JavaScript standards compliant ↗](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable, and supports JavaScript standards, as defined by [TC39 ↗](https://tc39.es/) (ECMAScript). Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG ↗](https://wintercg.org/) JavaScript runtimes.

Refer to [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/) for more information on specific JavaScript APIs available in Workers.

### Resources

* [Getting Started](https://developers.cloudflare.com/workers/get-started/guide/)
* [Quickstarts](https://developers.cloudflare.com/workers/get-started/quickstarts/) – More example repos to use as a basis for your projects
* [TypeScript type definitions ↗](https://github.com/cloudflare/workers-types)
* [JavaScript and web standard APIs](https://developers.cloudflare.com/workers/runtime-apis/web-standards/)
* [Tutorials](https://developers.cloudflare.com/workers/tutorials/)
* [Examples](https://developers.cloudflare.com/workers/examples/?languages=JavaScript)


---

---
title: Examples
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Examples


---

---
title: Python Workers
description: Write Workers in 100% Python
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Python Workers

Cloudflare Workers provides a first-class Python experience, including support for:

* Easy to install and fast-booting [Packages](https://developers.cloudflare.com/workers/languages/python/packages), including [FastAPI ↗](https://fastapi.tiangolo.com/), [Langchain ↗](https://pypi.org/project/langchain/), [httpx ↗](https://www.python-httpx.org/), [Pydantic ↗](https://docs.pydantic.dev/latest/) and more.
* A robust [foreign function interface (FFI)](https://developers.cloudflare.com/workers/languages/python/ffi) that lets you use JavaScript objects and functions directly from Python — including all [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/)
* An ecosystem of services on the Workers Platform accessible via [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), including:  
   * State storage and databases like [KV](https://developers.cloudflare.com/kv), [D1](https://developers.cloudflare.com/d1), [Durable Objects](https://developers.cloudflare.com/durable-objects/)  
   * Access to [Environment Variables](https://developers.cloudflare.com/workers/configuration/environment-variables/), [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/), and other Workers using [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/)  
   * AI capabilities with [Workers AI](https://developers.cloudflare.com/workers-ai/), [Vectorize](https://developers.cloudflare.com/vectorize)  
   * File storage with [R2](https://developers.cloudflare.com/r2)  
   * [Durable Workflows](https://developers.cloudflare.com/workflows/), [Queues](https://developers.cloudflare.com/queues/), and [ more](https://developers.cloudflare.com/workers/runtime-apis/bindings/)

## Introduction

A Python Worker can be as simple as four lines of code:

Python

```
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello World!")
```

Similar to other Workers, the main entry point for a Python worker is the [fetch handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch) which handles incoming requests sent to the Worker.

In a Python Worker, this handler is placed in a `Default` class that extends the `WorkerEntrypoint` class (which you can import from the `workers` SDK module).

Python Workers are in beta.

While Python Workers are in open beta, you must add the `python_workers` compatibility flag to your Worker. Packages are supported using the [pywrangler](https://developers.cloudflare.com/workers/languages/python/packages) tool.
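
A minimal `wrangler.jsonc` with the flag set might look like this (a sketch; adjust the name, entry point, and date for your project):

```
{
  "name": "my-python-worker",
  "main": "src/entry.py",
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["python_workers"]
}
```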

We'd love your feedback. Join the #python-workers channel in the [Cloudflare Developers Discord ↗](https://discord.cloudflare.com/) and let us know what you'd like to see next.

### The `pywrangler` CLI tool

To run a Python Worker locally, install packages, and deploy it to Cloudflare, you use [pywrangler ↗](https://github.com/cloudflare/workers-py), the CLI for Python Workers.

To set it up, first, ensure [uv ↗](https://docs.astral.sh/uv/#installation) and [Node ↗](https://nodejs.org/en) are installed.

Then set up your development environment:

Terminal window

```
uvx --from workers-py pywrangler init
```

This will create a `pyproject.toml` file with `workers-py` as a development dependency. `pywrangler init` will create a wrangler config file. You can then run `pywrangler` with:

Terminal window

```
uv run pywrangler dev
```

To deploy a Python Worker to Cloudflare, run `pywrangler deploy`:

Terminal window

```
uv run pywrangler deploy
```

### Python Worker Templates

When you initialize a new Python Worker project, you can select from one of many templates:

Terminal window

```
uv run pywrangler init
```

Or you can clone the examples repository to explore more options:

Terminal window

```
git clone https://github.com/cloudflare/python-workers-examples
cd python-workers-examples/01-hello
```

## Next Up

* Learn more about [the basics of Python Workers](https://developers.cloudflare.com/workers/languages/python/basics)
* Learn details about local development, deployment, and [how Python Workers work](https://developers.cloudflare.com/workers/languages/python/how-python-workers-work).
* Explore the [package](https://developers.cloudflare.com/workers/languages/python/packages) docs for instructions on how to use packages with Python Workers.
* Understand which parts of the [Python Standard Library](https://developers.cloudflare.com/workers/languages/python/stdlib) are supported in Python Workers.
* Learn about Python Workers' [foreign function interface (FFI)](https://developers.cloudflare.com/workers/languages/python/ffi), and how to use it to work with [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings) and [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/).


---

---
title: The Basics
description: Learn the basics of Python Workers
image: https://developers.cloudflare.com/dev-products-preview.png
---


# The Basics

## Fetch Handler

As mentioned in the [introduction to Python Workers](https://developers.cloudflare.com/workers/languages/python/), a Python Worker can be as simple as four lines of code:

Python

```
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello World!")
```

Similar to other Workers, the main entry point for a Python worker is the [fetch handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch) which handles incoming requests sent to the Worker.

In a Python Worker, this handler is placed in a `Default` class that extends the `WorkerEntrypoint` class (which you can import from the `workers` SDK module).

## The `Request` Interface

The `request` parameter passed to your `fetch` handler is a JavaScript Request object, exposed via the [foreign function interface (FFI)](https://developers.cloudflare.com/workers/languages/python/ffi), allowing you to access it directly from your Python code.

Let's try editing the worker to accept a POST request. We know from the [documentation for Request](https://developers.cloudflare.com/workers/runtime-apis/request) that we can call `await request.json()` within an `async` function to parse the request body as JSON.

In a Python Worker, you would write:

Python

```
from workers import WorkerEntrypoint, Response
from hello import hello

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        name = (await request.json()).name
        return Response(hello(name))
```

Many other JavaScript APIs are available in Python Workers via the FFI, so you can call other methods in a similar way.
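
As a further sketch (this assumes the Python Workers runtime and its `workers` module; `request.headers` is the JS `Headers` object exposed through the FFI, so `get` here is a JavaScript method called directly from Python):

```
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Headers.get() is a JavaScript method, invoked through the FFI
        content_type = request.headers.get("Content-Type")
        return Response(f"Content-Type was: {content_type}")
```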

Once you edit `src/entry.py`, pywrangler will automatically restart the local development server.

Now, if you send a POST request with the appropriate body, your Worker will respond with a personalized message.

Terminal window

```
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"name": "Python"}' http://localhost:8787
```

```
Hello, Python!
```

## The `env` Attribute

The `env` attribute on the `WorkerEntrypoint` can be used to access [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/), [secrets](https://developers.cloudflare.com/workers/configuration/secrets/), and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).

For example, let us try setting and using an environment variable in a Python Worker. First, add the environment variable to your Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "hello-python-worker",
  "main": "src/entry.py",
  "compatibility_flags": [
    "python_workers"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "vars": {
    "API_HOST": "example.com"
  }
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "hello-python-worker"
main = "src/entry.py"
compatibility_flags = [ "python_workers" ]
# Set this to today's date
compatibility_date = "2026-04-03"

[vars]
API_HOST = "example.com"
```

Then, you can access the `API_HOST` environment variable via the `env` attribute:

Python

```
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response(self.env.API_HOST)
```

## Modules

Python workers can be split across multiple files.

Let's create a new Python file, called `src/hello.py`:

Python

```
def hello(name):
    return "Hello, " + name + "!"
```

Now, we can modify `src/entry.py` to make use of the new module.

Python

```
from hello import hello
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response(hello("World"))
```

Once you edit `src/entry.py`, [pywrangler](https://developers.cloudflare.com/workers/languages/python/#the-pywrangler-cli-tool) will automatically detect the change and reload your Worker.

## Types and Autocompletion

When developing Python Workers, you can take advantage of type hints and autocompletion in your IDE.

To enable them, add the `workers-runtime-sdk` package to the development dependencies in your `pyproject.toml` file:

```
[dependency-groups]
dev = [
    "workers-py",
    "workers-runtime-sdk"
]
```

Additionally, you can generate types based on your Worker configuration by running `uv run pywrangler types`.

This generates `Env` types based on your bindings and module rules, as well as runtime types based on the `compatibility_date` and `compatibility_flags` in your config file.

## Upgrading `pywrangler`

To upgrade to the latest version of [pywrangler](https://developers.cloudflare.com/workers/languages/python/#the-pywrangler-cli-tool) globally, run the following command:

Terminal window

```
uv tool upgrade workers-py
```

To upgrade to the latest version of `pywrangler` in a specific project, run the following command:

Terminal window

```
uv lock --upgrade-package workers-py
```

## Next Up

* Learn details about local development, deployment, and [how Python Workers work](https://developers.cloudflare.com/workers/languages/python/how-python-workers-work).
* Explore the [package](https://developers.cloudflare.com/workers/languages/python/packages) docs for instructions on how to use packages with Python Workers.
* Understand which parts of the [Python Standard Library](https://developers.cloudflare.com/workers/languages/python/stdlib) are supported in Python Workers.
* Learn about Python Workers' [foreign function interface (FFI)](https://developers.cloudflare.com/workers/languages/python/ffi), and how to use it to work with [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings) and [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/).


---

---
title: Examples
description: Cloudflare has a wide range of Python examples in the Workers Example gallery.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Examples

**Last reviewed:**  about 2 years ago 

Cloudflare has a wide range of Python examples in the [Workers Example gallery](https://developers.cloudflare.com/workers/examples/?languages=Python).

In addition to those examples, consider the following ones that illustrate Python-specific behavior.

## Modules in your Worker

Let's say your Worker has the following structure:

```
├── src
│   ├── module.py
│   └── main.py
├── uv.lock
├── pyproject.toml
└── wrangler.toml
```

In order to import `module.py` in `main.py`, you would use the following import statement:

```python
import module
```

In this case, the main module is set to `src/main.py` in the wrangler.toml file like so:

```toml
main = "src/main.py"
```

This means that the `src` directory does not need to be specified in the import statement.

## Parse an incoming request URL

```python
from workers import WorkerEntrypoint, Response
from urllib.parse import urlparse, parse_qs


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Parse the incoming request URL
        url = urlparse(request.url)
        # Parse the query parameters into a Python dictionary
        params = parse_qs(url.query)

        if "name" in params:
            greeting = "Hello there, {name}".format(name=params["name"][0])
            return Response(greeting)

        if url.path == "/favicon.ico":
            return Response("")

        return Response("Hello world!")
```

## Parse JSON from the incoming request

```python
from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        name = (await request.json()).name
        return Response("Hello, {name}".format(name=name))
```
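The attribute access above relies on Pyodide's FFI. If you prefer working with plain Python dictionaries, the same parsing can be sketched with the standard `json` module (the request body shown here is a hypothetical example; in a Worker you would read it with `await request.text()` before parsing):

```python
import json

# Hypothetical request body for illustration
raw_body = '{"name": "Ada"}'

data = json.loads(raw_body)
# dict-style access with a fallback avoids an error when the field is missing
name = data.get("name", "world")
greeting = "Hello, {name}".format(name=name)  # → "Hello, Ada"
```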

## Read bundled asset files in your Worker

Let's say your Worker has the following structure:

```
├── src
│   ├── file.html
│   └── main.py
└── wrangler.jsonc
```

In order to read a file in your Worker, you would do the following:

```python
from pathlib import Path

from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        html_file = Path(__file__).parent / "file.html"
        return Response(html_file.read_text(), headers={"Content-Type": "text/html"})
```

## Emit logs from your Python Worker

```python
# To use the JavaScript console APIs
from js import console
from workers import WorkerEntrypoint, Response

# To use native Python logging
import logging


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Use the console APIs from JavaScript
        # https://developer.mozilla.org/en-US/docs/Web/API/console
        console.log("console.log from Python!")

        # Alternatively, use the native Python logger
        logger = logging.getLogger(__name__)

        # The default level is warning. We can change that to info.
        logging.basicConfig(level=logging.INFO)

        logger.error("error from Python!")
        logger.info("info log from Python!")

        # Or just use print()
        print("print() from Python!")

        return Response("We're testing logging!")
```

## Publish to a Queue

```python
from js import Object
from pyodide.ffi import to_js as _to_js

from workers import WorkerEntrypoint, Response


# to_js converts between Python dictionaries and JavaScript objects
def to_js(obj):
    return _to_js(obj, dict_converter=Object.fromEntries)


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Bindings are available on the 'env' attribute
        # https://developers.cloudflare.com/queues/

        # The default contentType is "json";
        # we can also pass plain text strings
        await self.env.QUEUE.send("hello", contentType="text")
        # Send a JSON payload
        await self.env.QUEUE.send(to_js({"hello": "world"}))

        # Return a response
        return Response.json({"write": "success"})
```

## Query a D1 Database

```python
from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        results = await self.env.DB.prepare("PRAGMA table_list").run()
        # Return a JSON response
        return Response.json(results)
```

Refer to [Query D1 from Python Workers](https://developers.cloudflare.com/d1/examples/query-d1-from-python-workers/) for a more in-depth tutorial that covers how to create a new D1 database and configure bindings to D1.

## Durable Object

```python
from workers import WorkerEntrypoint, Response, DurableObject
from pyodide.ffi import to_js


class List(DurableObject):
    async def get_messages(self):
        messages = await self.ctx.storage.get("messages")
        return messages if messages else []

    async def add_message(self, message):
        messages = await self.get_messages()
        messages.append(message)
        await self.ctx.storage.put("messages", to_js(messages))

    async def say_hello(self):
        result = self.ctx.storage.sql.exec(
            "SELECT 'Hello, World!' as greeting"
        ).one()
        return result.greeting
```

Refer to [Durable Objects documentation](https://developers.cloudflare.com/durable-objects/get-started/) for more information.

## Cron Trigger

```python
from workers import WorkerEntrypoint


class Default(WorkerEntrypoint):
    async def scheduled(self, controller, env, ctx):
        print("cron processed")
```

Refer to [Cron Triggers documentation](https://developers.cloudflare.com/workers/configuration/cron-triggers/) for more information.

## Workflows

```python
from workers import WorkflowEntrypoint


class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do()
        async def step_a():
            # do some work
            return 10

        @step.do()
        async def step_b():
            # do some work
            return 20

        @step.do(concurrent=True)
        async def my_final_step(step_a, step_b):
            # should return 30
            return step_a + step_b

        await my_final_step()
```

Refer to the [Python Workflows documentation](https://developers.cloudflare.com/workflows/python/) for more information.
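The concurrency in `my_final_step` can be pictured with plain `asyncio`. This is a conceptual sketch only — the Workflows engine additionally persists and retries each step — but the data flow is the same: the final step awaits the results of the two earlier steps run concurrently.

```python
import asyncio

async def step_a():
    return 10

async def step_b():
    return 20

async def my_final_step():
    # concurrent=True corresponds roughly to gathering both steps at once
    a, b = await asyncio.gather(step_a(), step_b())
    return a + b

result = asyncio.run(my_final_step())  # → 30
```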

## More Examples

You can also clone [the examples repository ↗](https://github.com/cloudflare/python-workers-examples) to explore even more examples:

```sh
git clone https://github.com/cloudflare/python-workers-examples
```


---

---
title: Foreign Function Interface (FFI)
description: Via Pyodide, Python Workers provide a Foreign Function Interface (FFI) to JavaScript. This allows you to:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Foreign Function Interface (FFI)

Via [Pyodide ↗](https://pyodide.org/en/stable/), Python Workers provide a [Foreign Function Interface (FFI) ↗](https://en.wikipedia.org/wiki/Foreign%5Ffunction%5Finterface) to JavaScript. This allows you to:

* Use [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to resources on Cloudflare, including [Workers AI](https://developers.cloudflare.com/workers-ai/), [Vectorize](https://developers.cloudflare.com/vectorize/), [R2](https://developers.cloudflare.com/r2/), [KV](https://developers.cloudflare.com/kv/), [D1](https://developers.cloudflare.com/d1/), [Queues](https://developers.cloudflare.com/queues/), [Durable Objects](https://developers.cloudflare.com/durable-objects/), [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) and more.
* Use JavaScript globals, like [Request](https://developers.cloudflare.com/workers/runtime-apis/request/), [Response](https://developers.cloudflare.com/workers/runtime-apis/response/), and [fetch()](https://developers.cloudflare.com/workers/runtime-apis/fetch/).
* Use the full feature set of Cloudflare Workers — if an API is accessible in JavaScript, you can also access it in a Python Worker, writing exclusively Python code.

The details of Pyodide's Foreign Function Interface are documented [here ↗](https://pyodide.org/en/stable/usage/type-conversions.html), and Workers written in Python are able to take full advantage of this.

## Using Bindings from Python Workers

Bindings allow your Worker to interact with resources on the Cloudflare Developer Platform. When you declare a binding on your Worker, you grant it a specific capability, such as being able to read and write files to an [R2](https://developers.cloudflare.com/r2/) bucket.

For example, to access a [KV](https://developers.cloudflare.com/kv) namespace from a Python Worker, you would declare the following in your Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

```jsonc
// wrangler.jsonc
{
  "main": "./src/index.py",
  "kv_namespaces": [
    {
      "binding": "FOO",
      "id": "<YOUR_KV_NAMESPACE_ID>"
    }
  ]
}
```

```toml
# wrangler.toml
main = "./src/index.py"

[[kv_namespaces]]
binding = "FOO"
id = "<YOUR_KV_NAMESPACE_ID>"
```

...and then call `.get()` on the binding object that is exposed on `env`:

```python
from workers import WorkerEntrypoint, Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        await self.env.FOO.put("bar", "baz")
        bar = await self.env.FOO.get("bar")
        return Response(bar)  # returns "baz"
```

Under the hood, `env` is actually a JavaScript object. When you call `.FOO`, you are accessing this property via a [JsProxy ↗](https://pyodide.org/en/stable/usage/api/python-api/ffi.html#pyodide.ffi.JsProxy), a special proxy object that makes a JavaScript object behave like a Python object.
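Conceptually, a JsProxy forwards attribute lookups to the underlying JavaScript object. The plain-Python class below imitates that behavior; it is an illustration only, not how Pyodide is actually implemented:

```python
class AttrProxy:
    """Illustrative stand-in for a JsProxy: forwards attribute access
    to entries of a wrapped mapping."""

    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        try:
            return self._target[name]
        except KeyError:
            raise AttributeError(name) from None


# `env.FOO` on a real Worker resolves the binding much like this proxy
# resolves a dictionary key.
env = AttrProxy({"FOO": "my-kv-namespace"})
assert env.FOO == "my-kv-namespace"
```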

### Converting Python to JavaScript

Occasionally, to interoperate with JavaScript APIs, you may need to convert a Python object to JavaScript. Pyodide provides a `to_js` function to facilitate this conversion.

```python
from js import Object
from pyodide.ffi import to_js as _to_js


# to_js converts between Python dictionaries and JavaScript objects
def to_js(obj):
    return _to_js(obj, dict_converter=Object.fromEntries)
```

For more details, see the [documentation on pyodide.ffi.to\_js ↗](https://pyodide.org/en/stable/usage/api/python-api/ffi.html#pyodide.ffi.to%5Fjs).

## Using JavaScript globals from Python Workers

When writing Workers in Python, you can access JavaScript globals by importing them from the `js` module. For example, note how `Response` is imported from `js` in the example below:

```python
from workers import WorkerEntrypoint
from js import Response


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response.new("Hello World!")
```

Refer to the [Python examples](https://developers.cloudflare.com/workers/languages/python/examples/) to learn how to call into JavaScript functions from Python, including `console.log` and logging, providing options to `Response`, and parsing JSON.


---

---
title: How Python Workers Work
description: Workers written in Python are executed by Pyodide. Pyodide is a port of CPython (the reference implementation of Python — commonly referred to as just &#34;Python&#34;) to WebAssembly.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# How Python Workers Work

Workers written in Python are executed by [Pyodide ↗](https://pyodide.org/en/stable/index.html). Pyodide is a port of [CPython ↗](https://github.com/python) (the reference implementation of Python — commonly referred to as just "Python") to WebAssembly.

When you write a Python Worker, your code is interpreted directly by Pyodide, within a V8 isolate. Refer to [How Workers works](https://developers.cloudflare.com/workers/reference/how-workers-works/) to learn more.

## Local Development

A basic Python Worker includes a Python file with a `Default` class extending `WorkerEntrypoint`, such as:

```python
from workers import Response, WorkerEntrypoint


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello world!")
```

...and a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) that points to this `.py` file:

```jsonc
// wrangler.jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "hello-world-python-worker",
  "main": "src/entry.py",
  // Set this to today's date
  "compatibility_date": "2026-04-03"
}
```

```toml
# wrangler.toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "hello-world-python-worker"
main = "src/entry.py"
# Set this to today's date
compatibility_date = "2026-04-03"
```

When you run `uv run pywrangler dev` to start local development, the Workers runtime will:

1. Determine which version of Pyodide is required, based on your compatibility date
2. Install any packages necessary, based on your `pyproject.toml` file
3. Create a new V8 isolate for your Worker, and automatically inject Pyodide
4. Serve your Python code using Pyodide

There are no extra toolchain or precompilation steps needed. The Python execution environment is provided directly by the Workers runtime, mirroring how Workers written in JavaScript work.

Refer to the [Python examples](https://developers.cloudflare.com/workers/languages/python/examples/) to learn how to use Python within Workers.

## Deployment Lifecycle and Cold Start Optimizations

To reduce cold start times, when you deploy a Python Worker, Cloudflare performs as much of the expensive work as possible upfront, at deploy time. When you run `uv run pywrangler deploy`, the following happens:

1. Wrangler uploads your Python code and any packages included in your `pyproject.toml` to the Workers API.
2. Cloudflare sends your Python code to the Workers runtime to be validated.
3. Cloudflare creates a new V8 isolate for your Worker, automatically injecting Pyodide.
4. Cloudflare scans the Worker’s code for import statements, executes them, and then takes a snapshot of the Worker’s WebAssembly linear memory. Effectively, the expensive work of importing packages is performed at deploy time, rather than at runtime.
5. Cloudflare deploys this snapshot alongside your Worker’s Python code to the Cloudflare network.

When a request comes in to your Worker, Cloudflare loads this snapshot and uses it to bootstrap your Worker in an isolate, avoiding expensive initialization time:

![Diagram of how Python Workers are deployed to Cloudflare](https://developers.cloudflare.com/_astro/python-workers-deployment.B83dgcK7_2nS876.webp) 

Refer to the [blog post introducing Python Workers ↗](https://blog.cloudflare.com/python-workers) for more detail about performance optimizations and how the Workers runtime will reduce cold starts for Python Workers.

## Pyodide and Python versions

A new version of Python is released every year in August, and a new version of Pyodide is released six (6) months later. When this new version of Pyodide is published, we will add it to Workers by gating it behind a Compatibility Flag, which is only enabled after a specified Compatibility Date. This lets us continually provide updates, without risk of breaking changes, extending the commitment we’ve made for JavaScript to Python.

Each Python release has a [five (5) year support window ↗](https://devguide.python.org/versions/). Once this support window has passed for a given version of Python, security patches are no longer applied, making this version unsafe to rely on. To mitigate this risk, while still trying to hold as true as possible to our commitment of stability and long-term support, after five years any Python Worker still on a Python release that is outside of the support window will be automatically moved forward to the next oldest Python release. Python is a mature and stable language, so we expect that in most cases, your Python Worker will continue running without issue. But we recommend updating the compatibility date of your Worker regularly, to stay within the support window.


---

---
title: Packages
description: Pywrangler is a CLI tool for managing packages and Python Workers.
It is meant as a wrapper for wrangler that sets up a full environment for you, including bundling your packages into
your worker bundle on deployment.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Packages

[Pywrangler ↗](https://github.com/cloudflare/workers-py?tab=readme-ov-file#pywrangler) is a CLI tool for managing packages and Python Workers. It is a wrapper around `wrangler` that sets up a full environment for you, including bundling your packages into your Worker bundle on deployment.

To get started, create a `pyproject.toml` file with the following contents:

```toml
[project]
name = "YourProjectName"
version = "0.1.0"
description = "Add your description here"
requires-python = ">=3.12"
dependencies = [
    "fastapi"
]

[dependency-groups]
dev = [
    "workers-py",
    "workers-runtime-sdk"
]
```

The above allows your Worker to depend on the [FastAPI ↗](https://fastapi.tiangolo.com/) package.

To run the Worker locally:

```sh
uv run pywrangler dev
```

To deploy your Worker:

```sh
uv run pywrangler deploy
```

Your dependencies will be bundled with your Worker automatically on deployment.

The `pywrangler` CLI also supports all commands supported by the `wrangler` tool. For the full list of commands, run `uv run pywrangler --help`.

## Supported Libraries

Python Workers support pure Python packages on [PyPI ↗](https://pypi.org/), as well as [packages that are included in Pyodide ↗](https://pyodide.org/en/stable/usage/packages-in-pyodide.html).

If you would like to use a package that is not pure Python and not yet supported in Pyodide, request support via the [Python Packages Discussions ↗](https://github.com/cloudflare/workerd/discussions/categories/python-packages) on the Cloudflare Workers Runtime GitHub repository.

## HTTP Client Libraries

Only HTTP libraries that are able to make requests asynchronously are supported. Currently, these include [aiohttp ↗](https://docs.aiohttp.org/en/stable/index.html) and [httpx ↗](https://www.python-httpx.org/). You can also use the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) from JavaScript, using Python Workers' [foreign function interface](https://developers.cloudflare.com/workers/languages/python/ffi) to make HTTP requests.
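The key requirement is that the client yields to the event loop while a request is in flight. The sketch below illustrates that pattern with a stand-in coroutine in place of a real client call (e.g. `httpx.AsyncClient.get`), since an actual network request is outside the scope of this snippet:

```python
import asyncio

# Stand-in for an async HTTP call; a real client such as httpx or
# aiohttp awaits the network in the same way.
async def fake_get(url):
    await asyncio.sleep(0)  # yield to the event loop, as a real client would
    return {"status": 200, "url": url}

async def handler():
    resp = await fake_get("https://example.com")
    return resp["status"]

# asyncio.run is only for this standalone sketch; inside a Worker's
# fetch handler you simply `await` the call.
status = asyncio.run(handler())  # → 200
```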


---

---
title: FastAPI
description: The FastAPI package is supported in Python Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# FastAPI

The FastAPI package is supported in Python Workers.

FastAPI applications use a protocol called the [Asynchronous Server Gateway Interface (ASGI) ↗](https://asgi.readthedocs.io/en/latest/). This means that FastAPI never reads from or writes to a socket itself. An ASGI application expects to be hooked up to an ASGI server, typically [uvicorn ↗](https://www.uvicorn.org/). The ASGI server handles all of the raw sockets on the application’s behalf.

The Workers runtime provides [an ASGI server ↗](https://github.com/cloudflare/workerd/blob/main/src/pyodide/internal/workers-api/src/asgi.py) directly to your Python Worker, which lets you use FastAPI in Python Workers.

## Get Started

Clone the `cloudflare/python-workers-examples` repository and run the FastAPI example:

```sh
git clone https://github.com/cloudflare/python-workers-examples
cd python-workers-examples/03-fastapi
uv run pywrangler dev
```

### Example code

```python
from workers import WorkerEntrypoint
from fastapi import FastAPI, Request
from pydantic import BaseModel
import asgi


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return await asgi.fetch(app, request, self.env)


app = FastAPI()


@app.get("/")
async def root():
    return {"message": "Hello, World!"}


@app.get("/env")
async def env_message(req: Request):
    env = req.scope["env"]
    return {"message": "Here is an example of getting an environment variable: " + env.MESSAGE}


class Item(BaseModel):
    name: str
    description: str | None = None
    price: float
    tax: float | None = None


@app.post("/items/")
async def create_item(item: Item):
    return item


@app.put("/items/{item_id}")
async def update_item(item_id: int, item: Item, q: str | None = None):
    result = {"item_id": item_id, **item.dict()}
    if q:
        result.update({"q": q})
    return result


@app.get("/items/{item_id}")
async def read_item(item_id: int):
    return {"item_id": item_id}
```


---

---
title: Langchain
description: LangChain is the most popular framework for building AI applications powered by large language models (LLMs).
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Langchain

[LangChain ↗](https://www.langchain.com/) is the most popular framework for building AI applications powered by large language models (LLMs).

LangChain publishes multiple Python packages. The following are provided by the Workers runtime:

* [langchain ↗](https://pypi.org/project/langchain/) (version `0.1.8`)
* [langchain-core ↗](https://pypi.org/project/langchain-core/) (version `0.1.25`)
* [langchain-openai ↗](https://pypi.org/project/langchain-openai/) (version `0.0.6`)

## Get Started

Clone the `cloudflare/python-workers-examples` repository and run the LangChain example:

```sh
git clone https://github.com/cloudflare/python-workers-examples
cd python-workers-examples/05-langchain
uv run pywrangler dev
```

### Example code

```python
from workers import WorkerEntrypoint, Response
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        prompt = PromptTemplate.from_template("Complete the following sentence: I am a {profession} and ")
        llm = OpenAI(api_key=self.env.API_KEY)
        chain = prompt | llm

        res = await chain.ainvoke({"profession": "electrician"})
        return Response(res.split(".")[0].strip())
```


---

---
title: Standard Library
description: Workers written in Python are executed by Pyodide.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Standard Library

Workers written in Python are executed by [Pyodide ↗](https://pyodide.org/en/stable/index.html).

Pyodide is a port of CPython to WebAssembly — for the most part, it behaves identically to [CPython ↗](https://github.com/python) (the reference implementation of Python, commonly referred to as just "Python"). The majority of the CPython test suite passes when run against Pyodide, so you generally should not need to worry about differences in behavior.

The full [Python Standard Library ↗](https://docs.python.org/3/library/index.html) is available in Python Workers, with the following exceptions:

## Modules with limited functionality

* `decimal`: The `decimal` module has C (`_decimal`) and Python (`_pydecimal`) implementations with the same functionality. Only the C implementation is available, compiled to WebAssembly.
* `pydoc`: Help messages for Python builtins are not available.
* `webbrowser`: The original `webbrowser` module is not available.

## Excluded modules

The following modules are not available in Python Workers:

* `curses`
* `dbm`
* `ensurepip`
* `fcntl`
* `grp`
* `idlelib`
* `lib2to3`
* `msvcrt`
* `pwd`
* `resource`
* `syslog`
* `termios`
* `tkinter`
* `turtle`
* `turtledemo`
* `venv`
* `winreg`
* `winsound`

The following modules can be imported, but are not functional due to the limitations of the WebAssembly VM:

* `multiprocessing`
* `threading`
* `socket`

The following modules are present but cannot be imported due to a dependency on the removed `termios` module:

* `pty`
* `tty`
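Outside these exceptions, the standard library behaves as it does in CPython. For example, commonly used modules such as `json`, `re`, `hashlib`, and `datetime` work unchanged:

```python
import hashlib
import json
import re
from datetime import datetime, timezone

payload = json.dumps({"name": "worker"})
digest = hashlib.sha256(payload.encode()).hexdigest()
match = re.search(r'"name": "(\w+)"', payload)
timestamp = datetime.now(timezone.utc).isoformat()

assert match.group(1) == "worker"
assert len(digest) == 64  # SHA-256 hex digest is always 64 characters
```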


---

---
title: Rust
description: Write Workers in 100% Rust using the [`workers-rs` crate](https://github.com/cloudflare/workers-rs)
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Rust

Cloudflare Workers provides support for Rust via the [workers-rs crate ↗](https://github.com/cloudflare/workers-rs), which makes [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis) and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to developer platform products, such as [Workers KV](https://developers.cloudflare.com/kv/concepts/how-kv-works/), [R2](https://developers.cloudflare.com/r2/), and [Queues](https://developers.cloudflare.com/queues/), available directly from your Rust code.

By following this guide, you will learn how to build a Worker entirely in the Rust programming language.

## Prerequisites

Before starting this guide, make sure you have:

* A recent version of [Rust ↗](https://rustup.rs/)
* [npm ↗](https://docs.npmjs.com/getting-started)
* The Rust `wasm32-unknown-unknown` compilation target, added by running:

```sh
rustup target add wasm32-unknown-unknown
```

* The `cargo-generate` subcommand, installed by running:

```sh
cargo install cargo-generate
```

## 1. Create a new project with Wrangler

Open a terminal window, and run the following command to generate a Worker project template in Rust:

```sh
cargo generate cloudflare/workers-rs
```

Your project will be created in a new directory with the name you provided, containing the following files and folders:

* `Cargo.toml` \- The standard project configuration file for Rust's [Cargo ↗](https://doc.rust-lang.org/cargo/) package manager. The template pre-populates some best-practice settings for building for Wasm on Workers.
* `wrangler.toml` \- Wrangler configuration, pre-populated with a custom build command to invoke `worker-build` (Refer to [Wrangler Bundling](https://developers.cloudflare.com/workers/languages/rust/#bundling-worker-build)).
* `src` \- Rust source directory, pre-populated with a Hello World Worker.
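Assuming you named the project `my-worker` at the prompt (the name here is illustrative), the generated layout typically looks like:

```
my-worker/
├── Cargo.toml
├── wrangler.toml
└── src/
    └── lib.rs
```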

## 2\. Develop locally

After you have created your first Worker, run the [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) command to start a local server for developing your Worker. This will allow you to test your Worker in development.

Terminal window

```
npx wrangler dev
```

If you have not used Wrangler before, it will try to open your web browser so you can log in with your Cloudflare account.

Note

If you have issues with this step or you do not have access to a browser interface, refer to the [wrangler login](https://developers.cloudflare.com/workers/wrangler/commands/general/#login) documentation for more information.

Go to [http://localhost:8787 ↗](http://localhost:8787) to see your Worker running. Any changes you make to your code will trigger a rebuild, and reloading the page will show you the up-to-date output of your Worker.

## 3\. Write your Worker code

With your new project generated, write your Worker code. Find the entrypoint to your Worker in `src/lib.rs`:

```
use worker::*;

#[event(fetch)]
async fn main(req: Request, env: Env, ctx: Context) -> Result<Response> {
    Response::ok("Hello, World!")
}
```

Note

There is some counterintuitive behavior going on here:

1. `workers-rs` provides an `event` macro which expects a handler function signature identical to those seen in JavaScript Workers.
2. `async` is not generally supported by Wasm, but you are able to use `async` in a `workers-rs` project (refer to [async](https://developers.cloudflare.com/workers/languages/rust/#async-wasm-bindgen-futures)).

### Related runtime APIs

`workers-rs` provides a runtime API which closely matches Worker's JavaScript API, and enables integration with Worker's platform features. For detailed documentation of the API, refer to [docs.rs/worker ↗](https://docs.rs/worker/latest/worker/).

#### `event` macro

This macro allows you to define entrypoints to your Worker. The `event` macro supports the following events:

* `fetch` \- Invoked by an incoming HTTP request.
* `scheduled` \- Invoked by [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/).
* `queue` \- Invoked by incoming message batches from [Queues](https://developers.cloudflare.com/queues/) (Requires `queue` feature in `Cargo.toml`, refer to the [workers-rs GitHub repository and queues feature flag ↗](https://github.com/cloudflare/workers-rs#queues)).
* `start` \- Invoked when the Worker is first launched (for example, to install panic hooks).
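For instance, opting into the `queue` handler means enabling the corresponding feature on the `worker` dependency in `Cargo.toml`. A minimal sketch (the version number is illustrative):

```
[dependencies]
worker = { version = "0.6", features = ["queue"] }
```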

#### `fetch` parameters

The `fetch` handler provides three arguments which match the JavaScript API:

1. **[Request ↗](https://docs.rs/worker/latest/worker/struct.Request.html)**

An object representing the incoming request. This includes methods for accessing headers, method, path, Cloudflare properties, and body (with support for asynchronous streaming and JSON deserialization with [Serde ↗](https://serde.rs/)).

2. **[Env ↗](https://docs.rs/worker/latest/worker/struct.Env.html)**

Provides access to Worker [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).

* [Secret ↗](https://docs.rs/worker/latest/worker/struct.Secret.html) \- Secret value configured in Cloudflare dashboard or using `wrangler secret put`.
* [Var ↗](https://docs.rs/worker/latest/worker/type.Var.html) \- Environment variable defined in `wrangler.toml`.
* [KvStore ↗](https://docs.rs/worker/latest/worker/kv/struct.KvStore.html) \- Workers [KV](https://developers.cloudflare.com/kv/api/) namespace binding.
* [ObjectNamespace ↗](https://docs.rs/worker/latest/worker/durable/struct.ObjectNamespace.html) \- [Durable Object](https://developers.cloudflare.com/durable-objects/) binding.
* [Fetcher ↗](https://docs.rs/worker/latest/worker/struct.Fetcher.html) \- [Service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to another Worker.
* [Bucket ↗](https://docs.rs/worker/latest/worker/struct.Bucket.html) \- [R2](https://developers.cloudflare.com/r2/) Bucket binding.
* [D1Database ↗](https://docs.rs/worker/latest/worker/d1/struct.D1Database.html) \- [D1](https://developers.cloudflare.com/d1/) database binding.
* [Queue ↗](https://docs.rs/worker/latest/worker/struct.Queue.html) \- [Queues](https://developers.cloudflare.com/queues/) producer binding.
* [Ai ↗](https://docs.rs/worker/latest/worker/struct.Ai.html) \- [Workers AI](https://developers.cloudflare.com/workers-ai/) binding.
* [Hyperdrive ↗](https://docs.rs/worker/latest/worker/struct.Hyperdrive.html) \- [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) binding.
* [AnalyticsEngineDataset ↗](https://docs.rs/worker/latest/worker/struct.AnalyticsEngineDataset.html) \- [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) binding.
* [DynamicDispatcher ↗](https://docs.rs/worker/latest/worker/struct.DynamicDispatcher.html) \- [Dynamic Dispatch](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/) binding.
* [SecretStore ↗](https://docs.rs/worker/latest/worker/struct.SecretStore.html) \- [Secrets Store](https://developers.cloudflare.com/secrets-store/) binding.
* [RateLimiter ↗](https://docs.rs/worker/latest/worker/struct.RateLimiter.html) \- [Rate Limiting](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/) binding.
3. **[Context ↗](https://docs.rs/worker/latest/worker/struct.Context.html)**

Provides access to [waitUntil](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil) (deferred asynchronous tasks) and [passThroughOnException](https://developers.cloudflare.com/workers/runtime-apis/context/#passthroughonexception) (fail open) functionality.

#### [Response ↗](https://docs.rs/worker/latest/worker/struct.Response.html)

The `fetch` handler expects a [Response ↗](https://docs.rs/worker/latest/worker/struct.Response.html) return type, which includes support for streaming responses to the client asynchronously. This is also the return type of any subrequests made from your Worker. There are methods for accessing status code and headers, as well as streaming the body asynchronously or deserializing from JSON using [Serde ↗](https://serde.rs/).

#### `Router`

Implements convenient [routing API ↗](https://docs.rs/worker/latest/worker/struct.Router.html) to serve multiple paths from one Worker. Refer to the [Router example in the worker-rs GitHub repository ↗](https://github.com/cloudflare/workers-rs#or-use-the-router).

## 4\. Deploy your Worker project

With your project configured, you can now deploy your Worker to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), if you have one configured. If you have not configured a subdomain or domain, Wrangler will prompt you during deployment to set one up.

Terminal window

```
npx wrangler deploy
```

Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

Note

When pushing to your `*.workers.dev` subdomain for the first time, you may see [523 errors](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-523/) while DNS is propagating. These errors should resolve themselves after a minute or so.

After completing these steps, you will have a basic Rust-based Worker deployed. From here, you can add crate dependencies and write code in Rust to implement your Worker application. If you would like to know more about the inner workings of how Rust compiled to Wasm is supported by Workers, the next section outlines the libraries and tools involved.

## How this deployment works

Wasm Workers are invoked from a JavaScript entrypoint script which is created automatically for you when using `workers-rs`.

### JavaScript Plumbing (`wasm-bindgen`)

To access platform features such as bindings, Wasm Workers must be able to access methods from the JavaScript runtime API.

This interoperability is achieved using [wasm-bindgen ↗](https://rustwasm.github.io/wasm-bindgen/), which provides the glue code needed to import runtime APIs to, and export event handlers from, the Wasm module. `wasm-bindgen` also provides [js-sys ↗](https://docs.rs/js-sys/latest/js%5Fsys/), which implements types for interacting with JavaScript objects. In practice, this is an implementation detail, as `workers-rs`'s API handles conversion to and from JavaScript objects, and interaction with imported JavaScript runtime APIs for you.

Note

If you are using `wasm-bindgen` without `workers-rs` / `worker-build`, then you will need to patch the JavaScript that it emits. This is because when you import a `wasm` file in Workers, you get a `WebAssembly.Module` instead of a `WebAssembly.Instance` for performance and security reasons.

To patch the JavaScript that `wasm-bindgen` emits:

1. Run `wasm-pack build --target bundler` as you normally would.
2. Patch the JavaScript file that it produces (the following code block assumes the file is called `mywasmlib.js`):

JavaScript

```
import * as imports from "./mywasmlib_bg.js";

// switch between the syntaxes for Node and for workerd
import wkmod from "./mywasmlib_bg.wasm";
import * as nodemod from "./mywasmlib_bg.wasm";

if (typeof process !== "undefined" && process.release.name === "node") {
  imports.__wbg_set_wasm(nodemod);
} else {
  const instance = new WebAssembly.Instance(wkmod, {
    "./mywasmlib_bg.js": imports,
  });
  imports.__wbg_set_wasm(instance.exports);
}

export * from "./mywasmlib_bg.js";
```

3. In your Worker entrypoint, import the function and use it directly:

JavaScript

```
import { myFunction } from "path/to/mylib.js";
```

### Async (`wasm-bindgen-futures`)

[wasm-bindgen-futures ↗](https://rustwasm.github.io/wasm-bindgen/api/wasm%5Fbindgen%5Ffutures/) (part of the `wasm-bindgen` project) provides interoperability between Rust Futures and JavaScript Promises. `workers-rs` invokes the entire event handler function using `spawn_local`, meaning that you can program using async Rust, which is turned into a single JavaScript Promise and run on the JavaScript event loop. Calls to imported JavaScript runtime APIs are automatically converted to Rust Futures that can be invoked from async Rust functions.

### Bundling (`worker-build`)

To run the resulting Wasm binary on Workers, `workers-rs` includes a build tool called [worker-build ↗](https://github.com/cloudflare/workers-rs/tree/main/worker-build) which:

1. Creates a JavaScript entrypoint script that properly invokes the module using `wasm-bindgen`'s JavaScript API.
2. Invokes a JavaScript bundler to minify and bundle the JavaScript code.
3. Outputs a directory structure that Wrangler can use to bundle and deploy the final Worker.

`worker-build` is invoked by default in the template project using a custom build command specified in the `wrangler.toml` file.
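The custom build command in the template's `wrangler.toml` typically resembles the following sketch (the exact command may differ between template versions):

```
[build]
command = "cargo install -q worker-build && worker-build --release"
```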

### Binary Size (`wasm-opt`)

Unoptimized Rust Wasm binaries can be large and may exceed Worker bundle size limits or experience long startup times. The template project pre-configures several useful size optimizations in your `Cargo.toml` file:

```
[profile.release]
lto = true
strip = true
codegen-units = 1
```

Finally, `worker-build` automatically invokes [wasm-opt ↗](https://github.com/brson/wasm-opt-rs) to further optimize binary size before upload.

## Related resources

* [Rust Wasm Book ↗](https://rustwasm.github.io/docs/book/)


---

---
title: Supported crates
description: Learn about popular Rust crates which have been confirmed to work on Workers when using workers-rs (or in some cases just wasm-bindgen), to write Workers in WebAssembly. Each Rust crate example includes any custom configuration that is required.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Supported crates

## Background

Learn about popular Rust crates which have been confirmed to work on Workers when using [workers-rs ↗](https://github.com/cloudflare/workers-rs) (or in some cases just `wasm-bindgen`), to write Workers in WebAssembly. Each Rust crate example includes any custom configuration that is required.

This is not an exhaustive list; many Rust crates can be compiled to the [wasm32-unknown-unknown ↗](https://doc.rust-lang.org/rustc/platform-support/wasm32-unknown-unknown.html) target that is supported by Workers. In some cases, this may require disabling default features or enabling a Wasm-specific feature. Weigh the addition of new dependencies carefully, as they can significantly increase the [size](https://developers.cloudflare.com/workers/platform/limits/#worker-size) of your Worker.

## `time`

Many crates that have been made Wasm-friendly use the `time` crate instead of `std::time`. For the `time` crate to work in Wasm, its `wasm-bindgen` feature must be enabled so that timing information can be obtained from JavaScript.
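A minimal sketch of the required `Cargo.toml` entry (the version number is illustrative):

```
[dependencies]
time = { version = "0.3", features = ["wasm-bindgen"] }
```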

## `tracing`

Tracing can be enabled by using the `tracing-web` crate and the `time` feature for `tracing-subscriber`. Due to [timing limitations](https://developers.cloudflare.com/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading) on Workers, spans will have identical start and end times unless they encompass I/O.
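A sketch of the dependencies this setup implies (version numbers are illustrative):

```
[dependencies]
tracing = "0.1"
tracing-subscriber = { version = "0.3", default-features = false, features = ["time"] }
tracing-web = "0.1"
```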

[Refer to the tracing example ↗](https://github.com/cloudflare/workers-rs/tree/main/examples/tracing) for more information.

## `reqwest`

The [reqwest library ↗](https://docs.rs/reqwest/latest/reqwest/) can be compiled to Wasm, and hooks into the JavaScript `fetch` API automatically using `wasm-bindgen`.

## `tokio-postgres`

`tokio-postgres` can be compiled to Wasm. It must be configured to use a `Socket` from `workers-rs`.

[Refer to the tokio-postgres example ↗](https://github.com/cloudflare/workers-rs/tree/main/examples/tokio-postgres) for more information.

## `hyper`

The `hyper` crate contains two HTTP clients: the lower-level `conn` module and the higher-level `Client`. The `conn` module can be used with a Workers `Socket`; however, `Client` requires timing dependencies which are not yet Wasm-friendly.

[Refer to the hyper example ↗](https://github.com/cloudflare/workers-rs/tree/main/examples/hyper) for more information.


---

---
title: TypeScript
description: TypeScript is a first-class language on Cloudflare Workers. All APIs provided in Workers are fully typed, and type definitions are generated directly from workerd, the open-source Workers runtime.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# TypeScript

TypeScript is a first-class language on Cloudflare Workers. All APIs provided in Workers are fully typed, and type definitions are generated directly from [workerd ↗](https://github.com/cloudflare/workerd), the open-source Workers runtime.

We recommend you generate types for your Worker by running [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types). Cloudflare also publishes type definitions to [GitHub ↗](https://github.com/cloudflare/workers-types) and [npm ↗](https://www.npmjs.com/package/@cloudflare/workers-types) (`npm install -D @cloudflare/workers-types`).

### Generate types that match your Worker's configuration

Cloudflare continuously improves [workerd ↗](https://github.com/cloudflare/workerd), the open-source Workers runtime. Changes in workerd can introduce JavaScript API changes, thus changing the respective TypeScript types.

This means the correct types for your Worker depend on:

1. Your Worker's [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/).
2. Your Worker's [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/).
3. Your Worker's bindings, which are defined in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration).
4. Any [module rules](https://developers.cloudflare.com/workers/wrangler/configuration/#bundling) you have specified in your Wrangler configuration file under `rules`.

For example, the runtime will only allow you to use the [AsyncLocalStorage ↗](https://nodejs.org/api/async%5Fcontext.html#class-asynclocalstorage) class if you have `compatibility_flags = ["nodejs_als"]` in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This should be reflected in the type definitions.
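For instance, enabling `nodejs_als` in a Wrangler configuration file looks like this sketch (the date is illustrative):

```
compatibility_date = "2024-01-01"
compatibility_flags = ["nodejs_als"]
```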

To ensure that your type definitions always match your Worker's configuration, you can dynamically generate types by running:

 npm  yarn  pnpm 

```
npx wrangler types
```

```
yarn wrangler types
```

```
pnpm wrangler types
```

See [the wrangler types command docs](https://developers.cloudflare.com/workers/wrangler/commands/general/#types) for more details.

Note

If you are running a version of Wrangler that is greater than `3.66.0` but below `4.0.0`, you will need to include the `--experimental-include-runtime` flag. During its experimental release, runtime types were output to a separate file (`.wrangler/types/runtime.d.ts` by default). If you have an older version of Wrangler, you can access runtime types through the `@cloudflare/workers-types` package.

This will generate a `.d.ts` file and (by default) save it to `worker-configuration.d.ts`. It will include `Env` types based on your Worker bindings _and_ runtime types based on your Worker's compatibility date and flags.
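As an illustration, a Worker with a KV namespace binding named `MY_KV` and a plain-text variable `API_HOST` (hypothetical names) would produce an `Env` interface along these lines:

```
// worker-configuration.d.ts (excerpt, illustrative)
interface Env {
  MY_KV: KVNamespace;
  API_HOST: string;
}
```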

You should then add that file to your `tsconfig.json`'s `compilerOptions.types` array. If you have the `nodejs_compat` compatibility flag, you should also install `@types/node`.

You can commit your types file to git if you wish.

Note

To ensure that your types are always up-to-date, make sure to run `wrangler types` after any changes to your config file.

### Migrating from `@cloudflare/workers-types` to `wrangler types`

We recommend using `wrangler types` to generate runtime types, rather than the `@cloudflare/workers-types` package, because it generates types based on your Worker's [compatibility date ↗](https://github.com/cloudflare/workerd/tree/main/npm/workers-types#compatibility-dates) and compatibility flags, ensuring that types match the exact runtime APIs made available to your Worker.

Note

There are no plans to stop publishing the `@cloudflare/workers-types` package, which will still be the recommended way to type libraries and shared packages in the workers environment.

#### 1\. Uninstall `@cloudflare/workers-types`

 npm  yarn  pnpm  bun 

```
npm uninstall @cloudflare/workers-types
```

```
yarn remove @cloudflare/workers-types
```

```
pnpm remove @cloudflare/workers-types
```

```
bun remove @cloudflare/workers-types
```

#### 2\. Generate runtime types using Wrangler

 npm  yarn  pnpm 

```
npx wrangler types
```

```
yarn wrangler types
```

```
pnpm wrangler types
```

This will generate a `.d.ts` file, saved to `worker-configuration.d.ts` by default. This will also generate `Env` types. If for some reason you do not want to include those, you can set `--include-env=false`.

You can now remove any imports from `@cloudflare/workers-types` in your Worker code.

Note

If you are running a version of Wrangler that is greater than `3.66.0` but below `4.0.0`, you will need to include the `--experimental-include-runtime` flag. During its experimental release, runtime types were output to a separate file (`.wrangler/types/runtime.d.ts` by default). If you have an older version of Wrangler, you can access runtime types through the `@cloudflare/workers-types` package.

#### 3\. Make sure your `tsconfig.json` includes the generated types

```
{
  "compilerOptions": {
    "types": ["./worker-configuration.d.ts"]
  }
}
```

Note that if you have specified a custom path for the runtime types file, you should use that in your `compilerOptions.types` array instead of the default path.

#### 4\. Add @types/node if you are using [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) (Optional)

If you are using the `nodejs_compat` compatibility flag, you should also install `@types/node`.

 npm  yarn  pnpm  bun 

```
npm i @types/node
```

```
yarn add @types/node
```

```
pnpm add @types/node
```

```
bun add @types/node
```

Then add this to your `tsconfig.json`.

```
{
  "compilerOptions": {
    "types": ["./worker-configuration.d.ts", "node"]
  }
}
```

#### 5\. Update your scripts and CI pipelines

Regardless of your specific framework or build tools, you should run the `wrangler types` command before any tasks that rely on TypeScript.

Most projects will have existing build and development scripts, as well as some type-checking. In the example below, we're adding the `wrangler types` before the type-checking script in the project:

```
{
  "scripts": {
    "dev": "existing-dev-command",
    "build": "existing-build-command",
    "generate-types": "wrangler types",
    "type-check": "npm run generate-types && tsc"
  }
}
```

We recommend you commit your generated types file for use in CI. You can run `wrangler types` before other CI commands, as it should not take more than a few seconds. For example:

 npm  yarn  pnpm 

```
- run: npm run generate-types
- run: npm run build
- run: npm test
```

```
- run: yarn generate-types
- run: yarn build
- run: yarn test
```

```
- run: pnpm run generate-types
- run: pnpm run build
- run: pnpm test
```

Alternatively, if you commit your generated types file and want to verify it stays up-to-date in CI, you can use the `--check` flag:

 npm  yarn  pnpm 

```
- run: npx wrangler types --check
- run: npm run build
- run: npm test
```

```
- run: yarn wrangler types --check
- run: yarn build
- run: yarn test
```

```
- run: pnpm wrangler types --check
- run: pnpm run build
- run: pnpm test
```

This fails the CI job if the committed types file is out-of-date, prompting developers to regenerate and commit the updated types.

### Resources

* [TypeScript template ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare/templates/hello-world/ts)
* [@cloudflare/workers-types ↗](https://github.com/cloudflare/workers-types)
* [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/)
* [TypeScript Examples](https://developers.cloudflare.com/workers/examples/?languages=TypeScript)


---

---
title: Examples
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Examples


---

---
title: Glossary
description: Review the definitions for terms used across Cloudflare's Workers documentation.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Glossary

Review the definitions for terms used across Cloudflare's Workers documentation.

| Term                                           | Definition                                                                                                                                                                                                                                                                                                                                                |
| ---------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Auxiliary Worker                               | A Worker created locally via the [Workers Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration/) that runs in a separate isolate to the test runner, with a different global scope.                                                                                                                                   |
| binding                                        | [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare Developer Platform.                                                                                                                                                                                          |
| C3                                             | [C3](https://developers.cloudflare.com/learning-paths/workers/get-started/c3-and-wrangler/) is a command-line tool designed to help you set up and deploy new applications to Cloudflare.                                                                                                                                                                 |
| CPU time                                       | [CPU time](https://developers.cloudflare.com/workers/platform/limits/#cpu-time) is the amount of time the central processing unit (CPU) actually spends doing work, during a given request.                                                                                                                                                               |
| Cron Triggers                                  | [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) allow users to map a cron expression to a Worker using a [scheduled() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) that enables Workers to be executed on a schedule.                                                     |
| D1                                             | [D1](https://developers.cloudflare.com/d1/) is Cloudflare's native serverless database.                                                                                                                                                                                                                                                                   |
| deployment                                     | [Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments) track the version(s) of your Worker that are actively serving traffic.                                                                                                                                                                       |
| Durable Objects                                | [Durable Objects](https://developers.cloudflare.com/durable-objects/) is a globally distributed coordination API with strongly consistent storage.                                                                                                                                                                                                        |
| duration                                       | [Duration](https://developers.cloudflare.com/workers/platform/limits/#duration) is a measurement of wall-clock time — the total amount of time from the start to end of an invocation of a Worker.                                                                                                                                                        |
| environment                                    | [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) allow you to deploy the same Worker application with different configuration for each environment. Only available for use with a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).                                       |
| environment variable                           | [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) are a type of binding that allow you to attach text strings or JSON values to your Worker.                                                                                                                                                        |
| handler                                        | [Handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/) are methods on Workers that can receive and process external inputs, and can be invoked from outside your Worker.                                                                                                                                                            |
| isolate                                        | [Isolates](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) are lightweight contexts that provide your code with variables it can access and a safe environment to be executed within.                                                                                                                                    |
| KV                                             | [Workers KV](https://developers.cloudflare.com/kv/) is Cloudflare's key-value data storage.                                                                                                                                                                                                                                                               |
| module Worker                                  | Refers to a Worker written in [module syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/).                                                                                                                                                                                                                            |
| origin                                         | [Origin](https://www.cloudflare.com/learning/cdn/glossary/origin-server/) generally refers to the web server behind Cloudflare where your application is hosted.                                                                                                                                                                                          |
| Pages                                          | [Cloudflare Pages](https://developers.cloudflare.com/pages/) is Cloudflare's product offering for building and deploying full-stack applications.                                                                                                                                                                                                         |
| Queues                                         | [Queues](https://developers.cloudflare.com/queues/) integrates with Cloudflare Workers and enables you to build applications that can guarantee delivery.                                                                                                                                                                                                 |
| R2                                             | [R2](https://developers.cloudflare.com/r2/) is an S3-compatible distributed object storage designed to eliminate the obstacles of sharing data across clouds.                                                                                                                                                                                             |
| rollback                                       | [Rollbacks](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/) are a way to deploy an older deployment to the Cloudflare global network.                                                                                                                                                                        |
| secret                                         | [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are a type of binding that allow you to attach encrypted text values to your Worker.                                                                                                                                                                                          |
| service Worker                                 | Refers to a Worker written in [service worker](https://developer.mozilla.org/en-US/docs/Web/API/Service%5FWorker%5FAPI) [syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/).                                                                                                                                         |
| subrequest                                     | A subrequest is any request that a Worker makes to either Internet resources using the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) or requests to other Cloudflare services like [R2](https://developers.cloudflare.com/r2/), [KV](https://developers.cloudflare.com/kv/), or [D1](https://developers.cloudflare.com/d1/). |
| Tail Worker                                    | A [Tail Worker](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) receives information about the execution of other Workers (known as producer Workers), such as HTTP statuses, data passed to console.log() or uncaught exceptions.                                                                                            |
| V8                                             | [V8](https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/) is the JavaScript engine that powers Chrome and the Workers runtime; it [executes your Worker's JavaScript code](https://developers.cloudflare.com/workers/reference/how-workers-works/).                                                                                 |
| version                                        | A [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) is defined by the state of code as well as the state of configuration in a Worker's Wrangler file.                                                                                                                                                |
| wall-clock time                                | [Wall-clock time](https://developers.cloudflare.com/workers/platform/limits/#duration) is the total amount of time from the start to end of an invocation of a Worker.                                                                                                                                                                                    |
| workerd                                        | [workerd](https://github.com/cloudflare/workerd) is a JavaScript / Wasm server runtime based on the same code that powers Cloudflare Workers.                                                                                                                                                                                                             |
| Wrangler                                       | [Wrangler](https://developers.cloudflare.com/learning-paths/workers/get-started/c3-and-wrangler/) is the Cloudflare Developer Platform command-line interface (CLI) that allows you to manage projects, such as Workers, created from the Cloudflare Developer Platform product offering.                                                                 |
| wrangler.toml / wrangler.json / wrangler.jsonc | The [configuration](https://developers.cloudflare.com/workers/wrangler/configuration/) used to customize the development and deployment setup for a Worker or a Pages Function.                                                                                                                                                                           |
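
Several of these terms meet in even the smallest Worker. A minimal sketch in module syntax, with a `fetch` handler and a hypothetical `MY_VAR` environment variable:

```js
// Module-syntax Worker: the default export's methods are its handlers
const worker = {
  // The fetch handler receives the request, the env of bindings
  // (environment variables, secrets, KV, R2, ...), and the execution context
  async fetch(request, env, ctx) {
    // MY_VAR is a hypothetical environment variable binding
    return new Response(`Hello from ${env.MY_VAR ?? "a Worker"}`);
  },
};

export default worker;
```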


---

---
title: Workers Best Practices
description: Code patterns and configuration guidance for building fast, reliable, observable, and secure Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers Best Practices

Best practices for Workers based on production patterns, Cloudflare's own internal usage, and common issues seen across the developer community.

## Configuration

### Keep your compatibility date current

The [compatibility\_date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) controls which runtime features and bug fixes are available to your Worker. Setting it to today's date on new projects ensures you get the latest behavior. Periodically updating it on existing projects gives you access to new APIs and fixes without changing your code.

wrangler.jsonc:

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],
}
```

wrangler.toml:

```toml
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]
```

For more information, refer to [Compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/).

### Enable nodejs\_compat

The [nodejs\_compat](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) compatibility flag gives your Worker access to Node.js built-in modules like `node:crypto`, `node:buffer`, `node:stream`, and others. Many libraries depend on these modules, and enabling this flag avoids cryptic import errors at runtime.

wrangler.jsonc:

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],
}
```

wrangler.toml:

```toml
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]
```
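
With the flag enabled, Node.js built-ins can be imported directly. A minimal sketch hashing a request body with `node:crypto` (the surrounding handler is illustrative):

```js
import { createHash } from "node:crypto";

// Node's crypto module works inside a Worker once nodejs_compat is enabled
export function sha256Hex(text) {
  return createHash("sha256").update(text).digest("hex");
}

const worker = {
  async fetch(request) {
    const body = await request.text();
    return new Response(sha256Hex(body));
  },
};

export default worker;
```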

For more information, refer to [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/).

### Generate binding types with wrangler types

Do not hand-write your `Env` interface. Run [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types) to generate a type definition file that matches your actual Wrangler configuration. This catches mismatches between your config and code at compile time instead of at deploy time.

Re-run `wrangler types` whenever you add or rename a binding.

npm:

```sh
npx wrangler types
```

yarn:

```sh
yarn wrangler types
```

pnpm:

```sh
pnpm wrangler types
```

JavaScript (src/index.js):

```js
// ✅ Good: Env is generated by wrangler types and always matches your config
// Do not manually define Env — it drifts from your actual bindings

export default {
  async fetch(request, env) {
    // env.MY_KV, env.MY_BUCKET, etc. are all correctly typed
    const value = await env.MY_KV.get("key");
    return new Response(value);
  },
};
```

TypeScript (src/index.ts):

```ts
// ✅ Good: Env is generated by wrangler types and always matches your config
// Do not manually define Env — it drifts from your actual bindings

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // env.MY_KV, env.MY_BUCKET, etc. are all correctly typed
    const value = await env.MY_KV.get("key");
    return new Response(value);
  },
} satisfies ExportedHandler<Env>;
```

For more information, refer to [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types).

### Store secrets with wrangler secret, not in source

Secrets (API keys, tokens, database credentials) must never appear in your Wrangler configuration or source code. Use [wrangler secret put](https://developers.cloudflare.com/workers/configuration/secrets/) to store them securely, and access them through `env` at runtime. For local development, use a `.env` file (and make sure it is in your `.gitignore`). For more information, refer to [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/).

wrangler.jsonc:

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],

  // ✅ Good: non-secret configuration lives in version control
  "vars": {
    "API_BASE_URL": "https://api.example.com",
  },

  // 🔴 Bad: never put secrets here
  // "API_KEY": "sk-live-abc123..."
}
```

wrangler.toml:

```toml
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]

[vars]
API_BASE_URL = "https://api.example.com"
```

To add a secret, run the following command and provide the secret interactively when prompted:

npm:

```sh
npx wrangler secret put API_KEY
```

yarn:

```sh
yarn wrangler secret put API_KEY
```

pnpm:

```sh
pnpm wrangler secret put API_KEY
```

You can also pipe secrets from other tools or environment variables:

```sh
# Pipe from another CLI tool
npx some-cli-tool --get-secret | npx wrangler secret put API_KEY

# Pipe from an environment variable or .env file
echo "$API_KEY" | npx wrangler secret put API_KEY
```
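
At runtime the secret surfaces on `env` like any other binding. A sketch, assuming the `API_KEY` secret from above and a hypothetical upstream API:

```js
// Small helper so the auth header is built in one place
export function authHeaders(apiKey) {
  return { Authorization: `Bearer ${apiKey}` };
}

const worker = {
  async fetch(request, env) {
    // env.API_KEY is the secret stored with `wrangler secret put API_KEY`
    const response = await fetch("https://api.example.com/v1/data", {
      headers: authHeaders(env.API_KEY),
    });
    return new Response(response.body, response);
  },
};

export default worker;
```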

For more information, refer to [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/).

### Configure environments deliberately

[Wrangler environments](https://developers.cloudflare.com/workers/wrangler/environments/) let you deploy the same code to separate Workers for production, staging, and development. Each environment creates a distinct Worker named `{name}-{env}` (for example, `my-api-production` and `my-api-staging`).

Each environment is treated separately. Bindings and vars need to be declared per environment and are not inherited. Refer to [non-inheritable keys](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys). The root Worker (without an environment suffix) is a separate deployment. If you do not intend to use it, do not deploy without specifying an environment using `--env`.

wrangler.jsonc:

```jsonc
{
  "name": "my-api",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],

  // This binding only applies to the root Worker
  "kv_namespaces": [{ "binding": "CACHE", "id": "dev-kv-id" }],

  "env": {
    // Production environment: deploys as "my-api-production"
    "production": {
      "kv_namespaces": [{ "binding": "CACHE", "id": "prod-kv-id" }],
      "routes": [
        { "pattern": "api.example.com/*", "zone_name": "example.com" },
      ],
    },
    // Staging environment: deploys as "my-api-staging"
    "staging": {
      "kv_namespaces": [{ "binding": "CACHE", "id": "staging-kv-id" }],
      "routes": [
        { "pattern": "api-staging.example.com/*", "zone_name": "example.com" },
      ],
    },
  },
}
```

wrangler.toml:

```toml
name = "my-api"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]

[[kv_namespaces]]
binding = "CACHE"
id = "dev-kv-id"

[[env.production.kv_namespaces]]
binding = "CACHE"
id = "prod-kv-id"

[[env.production.routes]]
pattern = "api.example.com/*"
zone_name = "example.com"

[[env.staging.kv_namespaces]]
binding = "CACHE"
id = "staging-kv-id"

[[env.staging.routes]]
pattern = "api-staging.example.com/*"
zone_name = "example.com"
```

With this configuration file, to deploy to staging:

npm:

```sh
npx wrangler deploy --env staging
```

yarn:

```sh
yarn wrangler deploy --env staging
```

pnpm:

```sh
pnpm wrangler deploy --env staging
```

For more information, refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/).

### Set up custom domains or routes correctly

Workers support two routing mechanisms, and they serve different purposes:

* **[Custom domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/)**: The Worker **is** the origin. Cloudflare creates DNS records and SSL certificates automatically. Use this when your Worker handles all traffic for a hostname.
* **[Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/)**: The Worker runs **in front of** an existing origin server. You must have a Cloudflare proxied (orange-clouded) DNS record for the hostname before adding a route.

The most common mistake with routes is missing the DNS record. Without a proxied DNS record, requests to the hostname return `ERR_NAME_NOT_RESOLVED` and never reach your Worker. If you do not have a real origin, add a proxied `AAAA` record pointing to `100::` as a placeholder.

wrangler.jsonc:

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],

  // Option 1: Custom domain — Worker is the origin, DNS is managed automatically
  "routes": [{ "pattern": "api.example.com", "custom_domain": true }],

  // Option 2: Route — Worker runs in front of an existing origin
  // Requires a proxied DNS record for shop.example.com
  // "routes": [
  //   { "pattern": "shop.example.com/*", "zone_name": "example.com" }
  // ]
}
```

wrangler.toml:

```toml
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]

[[routes]]
pattern = "api.example.com"
custom_domain = true
```

For more information, refer to [Routing](https://developers.cloudflare.com/workers/configuration/routing/).

## Request and response handling

### Stream request and response bodies

Streaming large request and response bodies is a best practice in any language: it reduces peak memory usage and improves time-to-first-byte. It matters even more on Workers, which have a [128 MB memory limit](https://developers.cloudflare.com/workers/platform/limits/), so buffering an entire body with `await response.text()` or `await request.arrayBuffer()` can exceed that limit and terminate your Worker on large payloads.

For request bodies you do consume entirely (JSON payloads, file uploads), enforce a maximum size before reading. This prevents clients from sending data you do not want to process.
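
A sketch of such a cap, using a hypothetical `MAX_BODY_BYTES` limit and checking the `Content-Length` header before parsing (clients sending chunked bodies may omit the header, so a stream-side byte count is a stricter complement):

```js
const MAX_BODY_BYTES = 1 * 1024 * 1024; // 1 MiB, adjust to your needs

// Returns true if the declared body size exceeds the cap
export function exceedsLimit(contentLength, max = MAX_BODY_BYTES) {
  return Number(contentLength ?? 0) > max;
}

const worker = {
  async fetch(request, env) {
    if (exceedsLimit(request.headers.get("Content-Length"))) {
      return new Response("Payload too large", { status: 413 });
    }
    const payload = await request.json();
    return Response.json({ received: payload });
  },
};

export default worker;
```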

Stream data through your Worker using `TransformStream` to pipe from a source to a destination without holding it all in memory.

JavaScript (src/index.js):

```js
// 🔴 Bad: buffers the entire response body in memory
const badHandler = {
  async fetch(request, env) {
    const response = await fetch("https://api.example.com/large-dataset");
    const text = await response.text();
    return new Response(text);
  },
};

// ✅ Good: stream the response body through without buffering
export default {
  async fetch(request, env) {
    const response = await fetch("https://api.example.com/large-dataset");
    return new Response(response.body, response);
  },
};
```

TypeScript (src/index.ts):

```ts
// 🔴 Bad: buffers the entire response body in memory
const badHandler = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const response = await fetch("https://api.example.com/large-dataset");
    const text = await response.text();
    return new Response(text);
  },
} satisfies ExportedHandler<Env>;

// ✅ Good: stream the response body through without buffering
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const response = await fetch("https://api.example.com/large-dataset");
    return new Response(response.body, response);
  },
} satisfies ExportedHandler<Env>;
```

When you need to concatenate multiple responses (for example, fetching data from several upstream APIs), pipe each body sequentially into a single writable stream. This avoids buffering any of the responses in memory.

JavaScript (src/concat.js):

```js
export default {
  async fetch(request, env) {
    const urls = [
      "https://api.example.com/part-1",
      "https://api.example.com/part-2",
      "https://api.example.com/part-3",
    ];

    const { readable, writable } = new TransformStream();

    // ✅ Good: pipe each response body sequentially without buffering
    const pipeline = (async () => {
      for (const url of urls) {
        const response = await fetch(url);
        if (response.body) {
          // pipeTo with preventClose keeps the writable open for the next response
          await response.body.pipeTo(writable, {
            preventClose: true,
          });
        }
      }
      await writable.close();
    })();

    // Return the readable side immediately — data streams as it arrives
    return new Response(readable, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  },
};
```

TypeScript (src/concat.ts):

```ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const urls = [
      "https://api.example.com/part-1",
      "https://api.example.com/part-2",
      "https://api.example.com/part-3",
    ];

    const { readable, writable } = new TransformStream();

    // ✅ Good: pipe each response body sequentially without buffering
    const pipeline = (async () => {
      for (const url of urls) {
        const response = await fetch(url);
        if (response.body) {
          // pipeTo with preventClose keeps the writable open for the next response
          await response.body.pipeTo(writable, {
            preventClose: true,
          });
        }
      }
      await writable.close();
    })();

    // Return the readable side immediately — data streams as it arrives
    return new Response(readable, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  },
} satisfies ExportedHandler<Env>;
```

For more information, refer to [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).

### Use waitUntil for work after the response

[ctx.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/context/) lets you perform work after the response is sent to the client, such as analytics, cache writes, non-critical logging, or webhook notifications. This keeps your response fast while still completing background tasks.

There are two common pitfalls: destructuring `ctx` (which loses the `this` binding and throws "Illegal invocation"), and exceeding the 30-second `waitUntil` time limit after the response is sent.

JavaScript (src/index.js):

```js
// 🔴 Bad: destructuring ctx loses the `this` binding
const badHandler = {
  async fetch(request, env, ctx) {
    const { waitUntil } = ctx; // "Illegal invocation" at runtime
    waitUntil(fetch("https://analytics.example.com/events"));
    return new Response("OK");
  },
};

// ✅ Good: send the response immediately, do background work after
export default {
  async fetch(request, env, ctx) {
    const data = await processRequest(request);

    ctx.waitUntil(logToAnalytics(env, data));
    ctx.waitUntil(updateCache(env, data));

    return Response.json(data);
  },
};

async function logToAnalytics(env, data) {
  await fetch("https://analytics.example.com/events", {
    method: "POST",
    body: JSON.stringify(data),
  });
}

async function updateCache(env, data) {
  await env.CACHE.put("latest", JSON.stringify(data));
}
```

TypeScript (src/index.ts):

```ts
// 🔴 Bad: destructuring ctx loses the `this` binding
const badHandler = {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    const { waitUntil } = ctx; // "Illegal invocation" at runtime
    waitUntil(fetch("https://analytics.example.com/events"));
    return new Response("OK");
  },
} satisfies ExportedHandler<Env>;

// ✅ Good: send the response immediately, do background work after
export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    const data = await processRequest(request);

    ctx.waitUntil(logToAnalytics(env, data));
    ctx.waitUntil(updateCache(env, data));

    return Response.json(data);
  },
} satisfies ExportedHandler<Env>;

async function logToAnalytics(env: Env, data: unknown): Promise<void> {
  await fetch("https://analytics.example.com/events", {
    method: "POST",
    body: JSON.stringify(data),
  });
}

async function updateCache(env: Env, data: unknown): Promise<void> {
  await env.CACHE.put("latest", JSON.stringify(data));
}
```

For more information, refer to [Context](https://developers.cloudflare.com/workers/runtime-apis/context/).

## Architecture

### Use bindings for Cloudflare services, not REST APIs

Cloudflare services such as R2, KV, D1, Queues, and Workflows are available as [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/). Bindings are direct, in-process references that require no network hop, no authentication, and no extra latency. Calling the Cloudflare REST API from inside a Worker instead adds a round trip, an API token to manage, and unnecessary complexity.

JavaScript (src/index.js):

```js
// 🔴 Bad: calling the REST API from a Worker
const badHandler = {
  async fetch(request, env) {
    const response = await fetch(
      "https://api.cloudflare.com/client/v4/accounts/ACCOUNT_ID/r2/buckets/BUCKET_NAME/objects/my-file",
      { headers: { Authorization: `Bearer ${env.CF_API_TOKEN}` } },
    );
    return new Response(response.body);
  },
};

// ✅ Good: use the binding directly — no network hop, no auth needed
export default {
  async fetch(request, env) {
    const object = await env.MY_BUCKET.get("my-file");

    if (!object) {
      return new Response("Not found", { status: 404 });
    }

    return new Response(object.body, {
      headers: {
        "Content-Type":
          object.httpMetadata?.contentType ?? "application/octet-stream",
      },
    });
  },
};
```

TypeScript (src/index.ts):

```ts
// 🔴 Bad: calling the REST API from a Worker
const badHandler = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const response = await fetch(
      "https://api.cloudflare.com/client/v4/accounts/ACCOUNT_ID/r2/buckets/BUCKET_NAME/objects/my-file",
      { headers: { Authorization: `Bearer ${env.CF_API_TOKEN}` } },
    );
    return new Response(response.body);
  },
} satisfies ExportedHandler<Env>;

// ✅ Good: use the binding directly — no network hop, no auth needed
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const object = await env.MY_BUCKET.get("my-file");

    if (!object) {
      return new Response("Not found", { status: 404 });
    }

    return new Response(object.body, {
      headers: {
        "Content-Type":
          object.httpMetadata?.contentType ?? "application/octet-stream",
      },
    });
  },
} satisfies ExportedHandler<Env>;
```

### Use Queues and Workflows for async and background work

Long-running, retryable, or non-urgent tasks should not block a request. Use [Queues](https://developers.cloudflare.com/queues/) and [Workflows](https://developers.cloudflare.com/workflows/) to move work out of the critical path. They serve different purposes:

**Use Queues when** you need to decouple a producer from a consumer. Queues are a message broker: one Worker sends a message, another Worker processes it later. They are the right choice for fan-out (one event triggers many consumers), buffering and batching (aggregate messages before writing to a downstream service), and simple single-step background jobs (send an email, fire a webhook, write a log). Queues provide at-least-once delivery with configurable retries per message.

**Use Workflows when** the background work has multiple steps that depend on each other. Workflows are a durable execution engine: each step's return value is persisted, and if a step fails, only that step is retried — not the entire job. They are the right choice for multi-step processes (charge a card, then create a shipment, then send a confirmation), long-running tasks that need to pause and resume (wait hours or days for an external event or human approval via `step.waitForEvent()`), and complex conditional logic where later steps depend on earlier results. Workflows can run for hours, days, or weeks.

**Use both together** when a high-throughput entry point feeds into complex processing. For example, a Queue can buffer incoming orders, and the consumer can create a Workflow instance for each order that requires multi-step fulfillment.

JavaScript (src/index.js):

```js
export default {
  async fetch(request, env) {
    const order = await request.json();

    if (order.type === "simple") {
      // ✅ Queue: single-step background job — send a message for async processing
      await env.ORDER_QUEUE.send({
        orderId: order.id,
        action: "send-confirmation-email",
      });
    } else {
      // ✅ Workflow: multi-step durable process — payment, fulfillment, notification
      const instance = await env.FULFILLMENT_WORKFLOW.create({
        params: { orderId: order.id },
      });
    }

    return Response.json({ status: "accepted" }, { status: 202 });
  },
};
```

TypeScript (src/index.ts):

```ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const order = await request.json<{ id: string; type: string }>();

    if (order.type === "simple") {
      // ✅ Queue: single-step background job — send a message for async processing
      await env.ORDER_QUEUE.send({
        orderId: order.id,
        action: "send-confirmation-email",
      });
    } else {
      // ✅ Workflow: multi-step durable process — payment, fulfillment, notification
      const instance = await env.FULFILLMENT_WORKFLOW.create({
        params: { orderId: order.id },
      });
    }

    return Response.json({ status: "accepted" }, { status: 202 });
  },
} satisfies ExportedHandler<Env>;
```
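
On the consuming side, a Worker exports a `queue()` handler that receives batches of messages. A sketch, assuming the message shape sent above (the email helper is hypothetical):

```js
const worker = {
  // Invoked by Queues with a batch of messages from the queue
  async queue(batch, env) {
    for (const message of batch.messages) {
      const { orderId, action } = message.body;
      try {
        if (action === "send-confirmation-email") {
          await sendConfirmationEmail(env, orderId);
        }
        // Acknowledge so the message is not redelivered
        message.ack();
      } catch {
        // Failed messages are redelivered per your queue's retry settings
        message.retry();
      }
    }
  },
};

export default worker;

async function sendConfirmationEmail(env, orderId) {
  // Illustrative placeholder: call your email provider here
}
```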

For more information, refer to [Queues](https://developers.cloudflare.com/queues/) and [Workflows](https://developers.cloudflare.com/workflows/).

### Use service bindings for Worker-to-Worker communication

When one Worker needs to call another, use [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) instead of making an HTTP request to a public URL. Service bindings are zero-cost, bypass the public internet, and support type-safe RPC.

JavaScript (src/index.js):

```js
import { WorkerEntrypoint } from "cloudflare:workers";

// The "auth" Worker exposes RPC methods
export class AuthService extends WorkerEntrypoint {
  async verifyToken(token) {
    // Token verification logic
    return { userId: "user-123", valid: true };
  }
}

// The "api" Worker calls the auth Worker via a service binding
export default {
  async fetch(request, env) {
    const token = request.headers.get("Authorization")?.replace("Bearer ", "");

    if (!token) {
      return new Response("Unauthorized", { status: 401 });
    }

    // ✅ Good: call another Worker via service binding RPC — no network hop
    const auth = await env.AUTH_SERVICE.verifyToken(token);

    if (!auth.valid) {
      return new Response("Invalid token", { status: 403 });
    }

    return Response.json({ userId: auth.userId });
  },
};
```

TypeScript (src/index.ts):

```ts
import { WorkerEntrypoint } from "cloudflare:workers";

// The "auth" Worker exposes RPC methods
export class AuthService extends WorkerEntrypoint {
  async verifyToken(
    token: string,
  ): Promise<{ userId: string; valid: boolean }> {
    // Token verification logic
    return { userId: "user-123", valid: true };
  }
}

// The "api" Worker calls the auth Worker via a service binding
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const token = request.headers.get("Authorization")?.replace("Bearer ", "");

    if (!token) {
      return new Response("Unauthorized", { status: 401 });
    }

    // ✅ Good: call another Worker via service binding RPC — no network hop
    const auth = await env.AUTH_SERVICE.verifyToken(token);

    if (!auth.valid) {
      return new Response("Invalid token", { status: 403 });
    }

    return Response.json({ userId: auth.userId });
  },
} satisfies ExportedHandler<Env>;
```

### Use Hyperdrive for external database connections

Always use [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) when connecting to a remote PostgreSQL or MySQL database from a Worker. Hyperdrive maintains a regional connection pool close to your database, eliminating the per-request cost of TCP handshake, TLS negotiation, and connection setup. It also caches query results where possible.

Create a new `Client` on each request; Hyperdrive manages the underlying pool, so client creation is fast. Database drivers require the `nodejs_compat` compatibility flag.

wrangler.jsonc

```

{

  "name": "my-worker",

  "main": "src/index.ts",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "compatibility_flags": ["nodejs_compat"],


  "hyperdrive": [{ "binding": "HYPERDRIVE", "id": "<YOUR_HYPERDRIVE_ID>" }],

}


```

wrangler.toml

```

name = "my-worker"

main = "src/index.ts"

# Set this to today's date

compatibility_date = "2026-04-03"

compatibility_flags = [ "nodejs_compat" ]


[[hyperdrive]]

binding = "HYPERDRIVE"

id = "<YOUR_HYPERDRIVE_ID>"


```


src/index.js

```

import { Client } from "pg";


export default {

  async fetch(request, env) {

    // ✅ Good: create a new client per request — Hyperdrive pools the underlying connection

    const client = new Client({

      connectionString: env.HYPERDRIVE.connectionString,

    });


    try {

      await client.connect();

      const result = await client.query("SELECT id, name FROM users LIMIT 10");

      return Response.json(result.rows);

    } catch (e) {

      console.error(

        JSON.stringify({ message: "database query failed", error: String(e) }),

      );

      return Response.json({ error: "Database error" }, { status: 500 });

    }

  },

};


// 🔴 Bad: connecting directly to a remote database without Hyperdrive

// Every request pays the full TCP + TLS + auth cost (often 300-500ms)

const badHandler = {

  async fetch(request, env) {

    const client = new Client({

      connectionString: "postgres://user:pass@db.example.com:5432/mydb",

    });

    await client.connect();

    const result = await client.query("SELECT id, name FROM users LIMIT 10");

    return Response.json(result.rows);

  },

};


```

src/index.ts

```

import { Client } from "pg";


export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    // ✅ Good: create a new client per request — Hyperdrive pools the underlying connection

    const client = new Client({

      connectionString: env.HYPERDRIVE.connectionString,

    });


    try {

      await client.connect();

      const result = await client.query("SELECT id, name FROM users LIMIT 10");

      return Response.json(result.rows);

    } catch (e) {

      console.error(

        JSON.stringify({ message: "database query failed", error: String(e) }),

      );

      return Response.json({ error: "Database error" }, { status: 500 });

    }

  },

} satisfies ExportedHandler<Env>;


// 🔴 Bad: connecting directly to a remote database without Hyperdrive

// Every request pays the full TCP + TLS + auth cost (often 300-500ms)

const badHandler = {

  async fetch(request: Request, env: Env): Promise<Response> {

    const client = new Client({

      connectionString: "postgres://user:pass@db.example.com:5432/mydb",

    });

    await client.connect();

    const result = await client.query("SELECT id, name FROM users LIMIT 10");

    return Response.json(result.rows);

  },

} satisfies ExportedHandler<Env>;


```

For more information, refer to [Hyperdrive](https://developers.cloudflare.com/hyperdrive/).

### Use Durable Objects for WebSockets

Plain Workers can upgrade HTTP connections to WebSockets, but they lack persistent state and hibernation. If the isolate is evicted, the connection is lost because there is no persistent actor to hold it. For reliable, long-lived WebSocket connections, use [Durable Objects](https://developers.cloudflare.com/durable-objects/) with the [Hibernation API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/). Durable Objects keep WebSocket connections open even while the object is evicted from memory, and automatically wake up when a message arrives.

Use `this.ctx.acceptWebSocket()` instead of `ws.accept()` to enable hibernation. Use `setWebSocketAutoResponse` for ping/pong heartbeats that do not wake the object.


src/index.js

```

import { DurableObject } from "cloudflare:workers";


// Parent Worker: upgrades HTTP to WebSocket and routes to a Durable Object

export default {

  async fetch(request, env) {

    if (request.headers.get("Upgrade") !== "websocket") {

      return new Response("Expected WebSocket", { status: 426 });

    }


    const stub = env.CHAT_ROOM.getByName("default-room");

    return stub.fetch(request);

  },

};


// Durable Object: manages WebSocket connections with hibernation

export class ChatRoom extends DurableObject {

  constructor(ctx, env) {

    super(ctx, env);

    // Auto ping/pong without waking the object

    this.ctx.setWebSocketAutoResponse(

      new WebSocketRequestResponsePair("ping", "pong"),

    );

  }


  async fetch(request) {

    const pair = new WebSocketPair();

    const [client, server] = Object.values(pair);


    // ✅ Good: acceptWebSocket enables hibernation

    this.ctx.acceptWebSocket(server);


    return new Response(null, { status: 101, webSocket: client });

  }


  // Called when a message arrives — the object wakes from hibernation if needed

  async webSocketMessage(ws, message) {

    for (const conn of this.ctx.getWebSockets()) {

      conn.send(typeof message === "string" ? message : "binary");

    }

  }


  async webSocketClose(ws, code, reason, wasClean) {

    ws.close(code, reason);

  }

}


```

src/index.ts

```

import { DurableObject } from "cloudflare:workers";


// Parent Worker: upgrades HTTP to WebSocket and routes to a Durable Object

export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    if (request.headers.get("Upgrade") !== "websocket") {

      return new Response("Expected WebSocket", { status: 426 });

    }


    const stub = env.CHAT_ROOM.getByName("default-room");

    return stub.fetch(request);

  },

} satisfies ExportedHandler<Env>;


// Durable Object: manages WebSocket connections with hibernation

export class ChatRoom extends DurableObject {

  constructor(ctx: DurableObjectState, env: Env) {

    super(ctx, env);

    // Auto ping/pong without waking the object

    this.ctx.setWebSocketAutoResponse(

      new WebSocketRequestResponsePair("ping", "pong"),

    );

  }


  async fetch(request: Request): Promise<Response> {

    const pair = new WebSocketPair();

    const [client, server] = Object.values(pair);


    // ✅ Good: acceptWebSocket enables hibernation

    this.ctx.acceptWebSocket(server);


    return new Response(null, { status: 101, webSocket: client });

  }


  // Called when a message arrives — the object wakes from hibernation if needed

  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {

    for (const conn of this.ctx.getWebSockets()) {

      conn.send(typeof message === "string" ? message : "binary");

    }

  }


  async webSocketClose(

    ws: WebSocket,

    code: number,

    reason: string,

    wasClean: boolean,

  ) {

    ws.close(code, reason);

  }

}


```

For more information, refer to [Durable Objects WebSocket best practices](https://developers.cloudflare.com/durable-objects/best-practices/websockets/).

### Use Workers Static Assets for new projects

[Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) is the recommended way to deploy static sites, single-page applications, and full-stack apps on Cloudflare. If you are starting a new project, use Workers instead of Pages. Pages continues to work, but new features and optimizations are focused on Workers.

For a purely static site, point `assets.directory` at your build output. No Worker script is needed. For a full-stack app, add a `main` entry point and an `ASSETS` binding to serve static files alongside your API.

wrangler.jsonc

```

{

  // Static site — no Worker script needed

  "name": "my-static-site",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "compatibility_flags": ["nodejs_compat"],


  "assets": {

    "directory": "./dist",

  },

}


```

wrangler.toml

```

name = "my-static-site"

# Set this to today's date

compatibility_date = "2026-04-03"

compatibility_flags = [ "nodejs_compat" ]


[assets]

directory = "./dist"


```
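
For the full-stack case described above, the same configuration gains a `main` entry point and a named assets binding so Worker code can serve files via `env.ASSETS.fetch()`. A minimal sketch (the name and paths are illustrative):

```
{
  // Full-stack app — Worker script plus static assets
  "name": "my-full-stack-app",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",

  "assets": {
    "directory": "./dist",
    // Exposes the static assets as env.ASSETS in your Worker code
    "binding": "ASSETS",
  },
}
```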

For more information, refer to [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/).

## Observability

### Enable Workers Logs and Traces

Production Workers without observability are a black box. Enable logs and traces before you deploy to production. When an intermittent error appears, you need data already being collected to diagnose it.

Enable them in your Wrangler configuration and use `head_sampling_rate` to control volume and manage costs. A sampling rate of `1` captures everything; lower it for high-traffic Workers.

Use structured JSON logging with `console.log` so logs are searchable and filterable. Use `console.error` for errors and `console.warn` for warnings. These appear at the correct severity level in the Workers Observability dashboard.

wrangler.jsonc

```

{

  "name": "my-worker",

  "main": "src/index.ts",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "compatibility_flags": ["nodejs_compat"],


  "observability": {

    "enabled": true,

    "logs": {

      // Capture 100% of logs — lower this for high-traffic Workers

      "head_sampling_rate": 1,

    },

    "traces": {

      "enabled": true,

      "head_sampling_rate": 0.01, // Sample 1% of traces

    },

  },

}


```

wrangler.toml

```

name = "my-worker"

main = "src/index.ts"

# Set this to today's date

compatibility_date = "2026-04-03"

compatibility_flags = [ "nodejs_compat" ]


[observability]

enabled = true


  [observability.logs]

  head_sampling_rate = 1


  [observability.traces]

  enabled = true

  head_sampling_rate = 0.01


```


src/index.js

```

export default {

  async fetch(request, env) {

    const url = new URL(request.url);


    try {

      // ✅ Good: structured JSON — searchable and filterable in the dashboard

      console.log(

        JSON.stringify({

          message: "incoming request",

          method: request.method,

          path: url.pathname,

        }),

      );


      const result = await env.MY_KV.get(url.pathname);

      return new Response(result ?? "Not found", {

        status: result ? 200 : 404,

      });

    } catch (e) {

      // ✅ Good: console.error appears as "error" severity in Workers Observability

      console.error(

        JSON.stringify({

          message: "request failed",

          error: e instanceof Error ? e.message : String(e),

          path: url.pathname,

        }),

      );

      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

};


// 🔴 Bad: unstructured string logs are hard to query

const badHandler = {

  async fetch(request, env) {

    const url = new URL(request.url);

    console.log("Got a request to " + url.pathname);

    return new Response("OK");

  },

};


```

src/index.ts

```

export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    const url = new URL(request.url);


    try {

      // ✅ Good: structured JSON — searchable and filterable in the dashboard

      console.log(

        JSON.stringify({

          message: "incoming request",

          method: request.method,

          path: url.pathname,

        }),

      );


      const result = await env.MY_KV.get(url.pathname);

      return new Response(result ?? "Not found", {

        status: result ? 200 : 404,

      });

    } catch (e) {

      // ✅ Good: console.error appears as "error" severity in Workers Observability

      console.error(

        JSON.stringify({

          message: "request failed",

          error: e instanceof Error ? e.message : String(e),

          path: url.pathname,

        }),

      );

      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

} satisfies ExportedHandler<Env>;


// 🔴 Bad: unstructured string logs are hard to query

const badHandler = {

  async fetch(request: Request, env: Env): Promise<Response> {

    const url = new URL(request.url);

    console.log("Got a request to " + url.pathname);

    return new Response("OK");

  },

} satisfies ExportedHandler<Env>;


```

For more information, refer to [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/) and [Traces](https://developers.cloudflare.com/workers/observability/traces/).

For more information on all available observability tools, refer to [Workers Observability](https://developers.cloudflare.com/workers/observability/).

## Code patterns

### Do not store request-scoped state in global scope

Workers reuse isolates across requests, so a variable set during one request may still be present when the next request runs. This causes cross-request data leaks, stale state, and "Cannot perform I/O on behalf of a different request" errors.

Pass request-scoped state through function arguments or store it on `env` bindings, never in module-level variables.


src/index.js

```

// 🔴 Bad: global mutable state leaks between requests

let currentUser = null;


const badHandler = {

  async fetch(request, env, ctx) {

    // Storing request-scoped data globally means the next request sees stale data

    currentUser = request.headers.get("X-User-Id");

    const result = await handleRequest(currentUser, env);

    return Response.json(result);

  },

};


// ✅ Good: pass request-scoped data through function arguments

export default {

  async fetch(request, env, ctx) {

    const userId = request.headers.get("X-User-Id");

    const result = await handleRequest(userId, env);


    return Response.json(result);

  },

};


async function handleRequest(userId, env) {

  return { userId };

}


```

src/index.ts

```

// 🔴 Bad: global mutable state leaks between requests

let currentUser: string | null = null;


const badHandler = {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    // Storing request-scoped data globally means the next request sees stale data

    currentUser = request.headers.get("X-User-Id");

    const result = await handleRequest(currentUser, env);

    return Response.json(result);

  },

} satisfies ExportedHandler<Env>;


// ✅ Good: pass request-scoped data through function arguments

export default {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    const userId = request.headers.get("X-User-Id");

    const result = await handleRequest(userId, env);


    return Response.json(result);

  },

} satisfies ExportedHandler<Env>;


async function handleRequest(userId: string | null, env: Env): Promise<object> {

  return { userId };

}


```

For more information, refer to [Workers errors](https://developers.cloudflare.com/workers/observability/errors/#cannot-perform-io-on-behalf-of-a-different-request).

### Always await or waitUntil your Promises

A `Promise` that is not `await`ed, `return`ed, or passed to `ctx.waitUntil()` is a floating promise. Floating promises cause silent bugs: dropped results, swallowed errors, and unfinished work. The Workers runtime may terminate your isolate before a floating promise completes.

Enable the `no-floating-promises` lint rule to catch these at development time. If you use ESLint, enable [@typescript-eslint/no-floating-promises ↗](https://typescript-eslint.io/rules/no-floating-promises/). If you use oxlint, enable [typescript/no-floating-promises ↗](https://oxc.rs/docs/guide/usage/linter/rules/typescript/no-floating-promises.html).

Terminal window

```

# ESLint (typescript-eslint)

npx eslint --rule '{"@typescript-eslint/no-floating-promises": "error"}' src/


# oxlint

npx oxlint --deny typescript/no-floating-promises src/


```


src/index.js

```

export default {

  async fetch(request, env, ctx) {

    const data = await request.json();


    // 🔴 Bad: floating promise — result is dropped, errors are swallowed

    fetch("https://api.example.com/webhook", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: await if you need the result before responding

    const response = await fetch("https://api.example.com/process", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: waitUntil if you do not need the result before responding

    ctx.waitUntil(

      fetch("https://api.example.com/webhook", {

        method: "POST",

        body: JSON.stringify(data),

      }),

    );


    return new Response("OK");

  },

};


```

src/index.ts

```

export default {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    const data = await request.json();


    // 🔴 Bad: floating promise — result is dropped, errors are swallowed

    fetch("https://api.example.com/webhook", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: await if you need the result before responding

    const response = await fetch("https://api.example.com/process", {

      method: "POST",

      body: JSON.stringify(data),

    });


    // ✅ Good: waitUntil if you do not need the result before responding

    ctx.waitUntil(

      fetch("https://api.example.com/webhook", {

        method: "POST",

        body: JSON.stringify(data),

      }),

    );


    return new Response("OK");

  },

} satisfies ExportedHandler<Env>;


```

## Security

### Use Web Crypto for secure token generation

The Workers runtime provides the [Web Crypto API](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/) for cryptographic operations. Use `crypto.randomUUID()` for unique identifiers and `crypto.getRandomValues()` for random bytes. Never use `Math.random()` for anything security-sensitive. It is not cryptographically secure.

Node.js [node:crypto](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/) is also fully supported when `nodejs_compat` is enabled, so you can use whichever API you or your libraries prefer.


src/index.js

```

export default {

  async fetch(request, env) {

    // 🔴 Bad: Math.random() is predictable and not suitable for security

    const badToken = Math.random().toString(36).substring(2);


    // ✅ Good: cryptographically secure random UUID

    const sessionId = crypto.randomUUID();


    // ✅ Good: cryptographically secure random bytes for tokens

    const tokenBytes = new Uint8Array(32);

    crypto.getRandomValues(tokenBytes);

    const token = Array.from(tokenBytes)

      .map((b) => b.toString(16).padStart(2, "0"))

      .join("");


    return Response.json({ sessionId, token });

  },

};


```

src/index.ts

```

export default {

  async fetch(request: Request, env: Env): Promise<Response> {

    // 🔴 Bad: Math.random() is predictable and not suitable for security

    const badToken = Math.random().toString(36).substring(2);


    // ✅ Good: cryptographically secure random UUID

    const sessionId = crypto.randomUUID();


    // ✅ Good: cryptographically secure random bytes for tokens

    const tokenBytes = new Uint8Array(32);

    crypto.getRandomValues(tokenBytes);

    const token = Array.from(tokenBytes)

      .map((b) => b.toString(16).padStart(2, "0"))

      .join("");


    return Response.json({ sessionId, token });

  },

} satisfies ExportedHandler<Env>;


```
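
The same kind of token generation works with `node:crypto` when `nodejs_compat` is enabled. A minimal sketch (the helper name is illustrative):

```javascript
import { randomBytes, randomUUID } from "node:crypto";

// Token generation using node:crypto instead of Web Crypto
function generateCredentials() {
  // Cryptographically secure UUID for a session identifier
  const sessionId = randomUUID();

  // 32 cryptographically secure random bytes, hex-encoded
  const token = randomBytes(32).toString("hex");

  return { sessionId, token };
}
```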

When comparing secret values (API keys, tokens, HMAC signatures), use `crypto.subtle.timingSafeEqual()` to prevent timing side-channel attacks. Because `timingSafeEqual()` requires inputs of equal length, hash both values to a fixed-size digest first; this also avoids short-circuiting on a length mismatch.


src/verify.js

```

async function verifyToken(provided, expected) {

  const encoder = new TextEncoder();


  // ✅ Good: hash both values to a fixed size, then compare in constant time

  // This avoids leaking the length of the expected value

  const [providedHash, expectedHash] = await Promise.all([

    crypto.subtle.digest("SHA-256", encoder.encode(provided)),

    crypto.subtle.digest("SHA-256", encoder.encode(expected)),

  ]);


  return crypto.subtle.timingSafeEqual(providedHash, expectedHash);

}


// 🔴 Bad: direct string comparison leaks timing information

function verifyTokenInsecure(provided, expected) {

  return provided === expected;

}


```

src/verify.ts

```

async function verifyToken(

  provided: string,

  expected: string,

): Promise<boolean> {

  const encoder = new TextEncoder();


  // ✅ Good: hash both values to a fixed size, then compare in constant time

  // This avoids leaking the length of the expected value

  const [providedHash, expectedHash] = await Promise.all([

    crypto.subtle.digest("SHA-256", encoder.encode(provided)),

    crypto.subtle.digest("SHA-256", encoder.encode(expected)),

  ]);


  return crypto.subtle.timingSafeEqual(providedHash, expectedHash);

}


// 🔴 Bad: direct string comparison leaks timing information

function verifyTokenInsecure(provided: string, expected: string): boolean {

  return provided === expected;

}


```

### Do not use passThroughOnException as error handling

`passThroughOnException()` is a fail-open mechanism that sends requests to your origin when your Worker throws an unhandled exception. While it can be useful during migration from an origin server, it hides bugs and makes debugging difficult. Use explicit `try...catch` blocks with structured error responses instead.


src/index.js

```

// 🔴 Bad: hides errors by falling through to origin

const badHandler = {

  async fetch(request, env, ctx) {

    ctx.passThroughOnException();

    const result = await handleRequest(request, env);

    return Response.json(result);

  },

};


// ✅ Good: explicit error handling with structured responses

export default {

  async fetch(request, env, ctx) {

    try {

      const result = await handleRequest(request, env);

      return Response.json(result);

    } catch (error) {

      const message = error instanceof Error ? error.message : "Unknown error";


      console.error(

        JSON.stringify({

          message: "unhandled error",

          error: message,

          path: new URL(request.url).pathname,

        }),

      );


      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

};


async function handleRequest(request, env) {

  return { status: "ok" };

}


```

src/index.ts

```

// 🔴 Bad: hides errors by falling through to origin

const badHandler = {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    ctx.passThroughOnException();

    const result = await handleRequest(request, env);

    return Response.json(result);

  },

} satisfies ExportedHandler<Env>;


// ✅ Good: explicit error handling with structured responses

export default {

  async fetch(

    request: Request,

    env: Env,

    ctx: ExecutionContext,

  ): Promise<Response> {

    try {

      const result = await handleRequest(request, env);

      return Response.json(result);

    } catch (error) {

      const message = error instanceof Error ? error.message : "Unknown error";


      console.error(

        JSON.stringify({

          message: "unhandled error",

          error: message,

          path: new URL(request.url).pathname,

        }),

      );


      return Response.json({ error: "Internal server error" }, { status: 500 });

    }

  },

} satisfies ExportedHandler<Env>;


async function handleRequest(request: Request, env: Env): Promise<object> {

  return { status: "ok" };

}


```

## Development and testing

### Test with @cloudflare/vitest-pool-workers

The [@cloudflare/vitest-pool-workers](https://developers.cloudflare.com/workers/testing/vitest-integration/) package runs your tests inside the Workers runtime, giving you access to real bindings (KV, R2, D1, Durable Objects) during tests. This catches issues that Node.js-based tests miss, like unsupported APIs or missing compatibility flags.
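
Pointing Vitest at the Workers pool takes a small configuration file that references your Wrangler config. A minimal sketch (the config path is illustrative):

vitest.config.ts

```
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        // Bindings and compatibility settings are read from your Worker config
        wrangler: { configPath: "./wrangler.jsonc" },
      },
    },
  },
});
```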

One known pitfall: the Vitest pool automatically injects `nodejs_compat`, so tests can pass even if your Wrangler configuration does not include the flag. Always confirm your `wrangler.jsonc` includes `nodejs_compat` if your code depends on Node.js built-in modules.


test/index.test.js

```

import { describe, it, expect } from "vitest";

import { env } from "cloudflare:workers";


describe("KV operations", () => {

  it("should store and retrieve a value", async () => {

    await env.MY_KV.put("key", "value");

    const result = await env.MY_KV.get("key");

    expect(result).toBe("value");

  });


  it("should return null for missing keys", async () => {

    const result = await env.MY_KV.get("nonexistent");

    // ✅ Good: test the null case explicitly

    expect(result).toBeNull();

  });

});


```

test/index.test.ts

```

import { describe, it, expect } from "vitest";

import { env } from "cloudflare:workers";


describe("KV operations", () => {

  it("should store and retrieve a value", async () => {

    await env.MY_KV.put("key", "value");

    const result = await env.MY_KV.get("key");

    expect(result).toBe("value");

  });


  it("should return null for missing keys", async () => {

    const result = await env.MY_KV.get("nonexistent");

    // ✅ Good: test the null case explicitly

    expect(result).toBeNull();

  });

});


```

For more information, refer to [Testing with Vitest](https://developers.cloudflare.com/workers/testing/vitest-integration/).

## Related resources

* [Rules of Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/rules-of-durable-objects/): best practices for stateful, coordinated applications.
* [Rules of Workflows](https://developers.cloudflare.com/workflows/build/rules-of-workflows/): best practices for durable, multi-step Workflows.
* [Platform limits](https://developers.cloudflare.com/workers/platform/limits/): CPU time, memory, subrequest, and other limits.
* [Workers errors](https://developers.cloudflare.com/workers/observability/errors/): error codes and debugging guidance.


---

---
title: Analytics Engine
description: Use Workers to receive performance analytics about your applications, products and projects.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Analytics Engine


---

---
title: Connect to databases
description: Learn about the different kinds of database integrations Cloudflare supports.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Connect to databases

Cloudflare Workers can connect to and query your data in both SQL and NoSQL databases, including:

* Cloudflare's own [D1](https://developers.cloudflare.com/d1/), a serverless SQL-based database.
* Traditional hosted relational databases, including Postgres and MySQL, using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) (recommended) to significantly speed up access.
* Serverless databases, including Supabase, MongoDB Atlas, PlanetScale, and Prisma.

### D1 SQL database

D1 is Cloudflare's own SQL-based, serverless database. It is optimized for global access from Workers, and can scale out with multiple, smaller (10 GB) databases, such as per-user, per-tenant, or per-entity databases. Similar to some serverless databases, D1 pricing is based on query and storage costs.

| Database                                    | Library or Driver                                                                                                                                                               | Connection Method                                                                                                                                                         |
| ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [D1](https://developers.cloudflare.com/d1/) | [Workers binding](https://developers.cloudflare.com/d1/worker-api/), integrates with [Prisma ↗](https://www.prisma.io/), [Drizzle ↗](https://orm.drizzle.team/), and other ORMs | [Workers binding](https://developers.cloudflare.com/d1/worker-api/), [REST API](https://developers.cloudflare.com/api/resources/d1/subresources/database/methods/create/) |
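
As a sketch of the Workers binding in use, a helper might query D1 like this (the `users` table, the helper name, and the `db` parameter are illustrative; in a Worker, `db` would normally be a binding such as `env.DB`):

```javascript
// Minimal sketch: querying D1 through a Workers binding.
// `db` is the D1 binding (normally env.DB); the `users` table is illustrative.
async function getUserById(db, id) {
  const { results } = await db
    .prepare("SELECT id, name FROM users WHERE id = ?")
    .bind(id)
    .all();
  return results[0] ?? null;
}
```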

### Traditional SQL databases

Traditional databases are accessed through SQL drivers that connect over [TCP sockets](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/). TCP is the de facto standard protocol that many databases, such as PostgreSQL and MySQL, use for client connectivity. These drivers are also widely compatible with your preferred ORM libraries and query builders.

This category also includes serverless databases that are PostgreSQL- or MySQL-compatible, such as [Supabase](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/supabase/), [Neon](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/neon/), or PlanetScale (either [MySQL](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-database-providers/planetscale/) or [PostgreSQL](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/planetscale-postgres/)). You can connect to these using native [TCP sockets with Hyperdrive](https://developers.cloudflare.com/hyperdrive/) or using [serverless HTTP-based drivers](https://developers.cloudflare.com/workers/databases/connecting-to-databases/#serverless-databases) (detailed below).

| Database                                                                  | Integration       | Library or Driver                                                                                   | Connection Method                                                                                                                                                                                                        |
| ------------------------------------------------------------------------- | ----------------- | --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [Postgres](https://developers.cloudflare.com/workers/tutorials/postgres/) | Direct connection | [node-postgres ↗](https://node-postgres.com/),[Postgres.js ↗](https://github.com/porsager/postgres) | [TCP Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) via database driver, using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) for optimal performance (optional, recommended) |
| [MySQL](https://developers.cloudflare.com/workers/tutorials/mysql/)       | Direct connection | [mysql2 ↗](https://github.com/sidorares/node-mysql2), [mysql ↗](https://github.com/mysqljs/mysql)   | [TCP Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) via database driver, using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) for optimal performance (optional, recommended) |

Speed up database connectivity with Hyperdrive

Connecting to SQL databases with TCP sockets requires multiple roundtrips to establish a secure connection before a query to the database is made. Since a connection must be re-established on every Worker invocation, this adds unnecessary latency.

[Hyperdrive](https://developers.cloudflare.com/hyperdrive/) solves this by pooling database connections globally to eliminate unnecessary roundtrips and speed up your database access. Learn more about [how Hyperdrive works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).

### Serverless databases

Serverless databases may provide a direct connection to the underlying database, or expose HTTP-based proxies and drivers (also known as serverless drivers).

For PostgreSQL and MySQL serverless databases, you can connect to the underlying database directly using the native database drivers and ORMs you are familiar with, with Hyperdrive (recommended) to speed up connectivity and pool database connections. When you use Hyperdrive, your connection pool is managed across Cloudflare's network and optimized for usage from Workers.

You can also use serverless driver libraries to connect to the HTTP-based proxies managed by the database provider. These may also provide connection pooling for traditional SQL databases and reduce the number of round trips needed to establish a secure connection, similar to Hyperdrive.

| Database                                                                                                    | Library or Driver                                                                                                                                                                                                                                                                                                                                                | Connection Method                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| ----------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [PlanetScale ↗](https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript)  | [Hyperdrive (MySQL)](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-database-providers/planetscale), [Hyperdrive (PostgreSQL)](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/planetscale-postgres/), [@planetscale/database ↗](https://github.com/planetscale/database-js) | [mysql2](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-drivers-and-libraries/mysql2/), [mysql](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-drivers-and-libraries/mysql/), [node-postgres](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/), [Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/), or API via client library |
| [Supabase ↗](https://github.com/supabase/supabase/tree/master/examples/with-cloudflare-workers)             | [Hyperdrive](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/supabase/), [@supabase/supabase-js ↗](https://github.com/supabase/supabase-js)                                                                                                                                                                | [node-postgres](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/),[Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/), or API via client library                                                                                                                                                                                                                                            |
| [Prisma ↗](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-cloudflare-workers)  | [prisma ↗](https://github.com/prisma/prisma)                                                                                                                                                                                                                                                                                                                     | API via client library                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| [Neon ↗](https://blog.cloudflare.com/neon-postgres-database-from-workers/)                                  | [Hyperdrive](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/neon/), [@neondatabase/serverless ↗](https://neon.tech/blog/serverless-driver-for-postgres/)                                                                                                                                                  | [node-postgres](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/),[Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/), or API via client library                                                                                                                                                                                                                                            |
| [Hasura ↗](https://hasura.io/blog/building-applications-with-cloudflare-workers-and-hasura-graphql-engine/) | API                                                                                                                                                                                                                                                                                                                                                              | GraphQL API via fetch()                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| [Upstash Redis ↗](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/)        | [@upstash/redis ↗](https://github.com/upstash/upstash-redis)                                                                                                                                                                                                                                                                                                     | API via client library                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| [TiDB Cloud ↗](https://docs.pingcap.com/tidbcloud/integrate-tidbcloud-with-cloudflare)                      | [@tidbcloud/serverless ↗](https://github.com/tidbcloud/serverless-js)                                                                                                                                                                                                                                                                                            | API via client library                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |

Once you have installed the necessary packages, use the APIs those packages provide to connect to your database and perform operations on it. Refer to the links above for service-specific instructions.

## Authentication

If your database requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [wrangler secret](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) command:

Terminal window

```
wrangler secret put <SECRET_NAME>
```

Then, retrieve the secret value in your code using the following code snippet:

JavaScript

```
const secretValue = env.<SECRET_NAME>;
```

Use the secret value to authenticate with the external service. For example, if the external service requires an API key or a database username and password, pass these values using the relevant service's library or API.
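As a hedged sketch of this pattern, a small helper can wrap a secret into request headers. The `API_KEY` secret name and the Bearer scheme are assumptions for illustration, not from a specific provider; use whatever your service expects.

```javascript
// Hypothetical helper: turn a secret (e.g. env.API_KEY, stored with
// `wrangler secret put API_KEY`) into request headers for an external service.
// The Bearer scheme is an assumption; substitute your provider's scheme.
function authHeaders(apiKey) {
  if (!apiKey) throw new Error("Missing API key secret");
  return { Authorization: `Bearer ${apiKey}` };
}

// Inside a Worker's fetch handler, you might then call:
// await fetch("https://api.example.com/data", { headers: authHeaders(env.API_KEY) });
```

Failing fast on a missing secret surfaces misconfiguration at request time rather than as an opaque upstream authentication error.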

For services that require mTLS authentication, use [mTLS certificates](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls) to present a client certificate.

## Next steps

* Learn how to connect to [an existing PostgreSQL database](https://developers.cloudflare.com/hyperdrive/) with Hyperdrive.
* Discover [other storage options available](https://developers.cloudflare.com/workers/platform/storage-options/) for use with Workers.
* [Create your first database](https://developers.cloudflare.com/d1/get-started/) with Cloudflare D1.


---

---
title: Cloudflare D1
description: Cloudflare’s native serverless database.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Cloudflare D1


---

---
title: Hyperdrive
description: Use Workers to accelerate queries you make to existing databases.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Hyperdrive


---

---
title: 3rd Party Integrations
description: Connect to third-party databases such as Supabase, Turso, and PlanetScale
image: https://developers.cloudflare.com/dev-products-preview.png
---

# 3rd Party Integrations

## Background

Connect to databases by configuring connection strings and credentials as [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) in your Worker.

Connecting to a regional database from a Worker?

If your Worker is connecting to a regional database, you can reduce query latency by using [Hyperdrive](https://developers.cloudflare.com/hyperdrive) and [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/), which are both included in every Workers plan. Hyperdrive pools your database connections globally across Cloudflare's network. Smart Placement monitors your application and runs your Worker closer to your backend infrastructure when doing so reduces the latency of your Worker invocations. Learn more about [how Smart Placement works](https://developers.cloudflare.com/workers/configuration/placement/).

## Database credentials

When you rotate or update database credentials, you must update the corresponding [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) in your Worker. Use the [wrangler secret put](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) command to update secrets securely or update the secret directly in the [Cloudflare dashboard ↗](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/settings).

## Database limits

You can connect to multiple databases by configuring separate sets of secrets for each database connection. Use descriptive secret names to distinguish between different database connections (for example, `DATABASE_URL_PROD` and `DATABASE_URL_STAGING`).
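Building on that naming convention, a small hypothetical helper can resolve the right secret at runtime. The stage values and secret names here are assumptions matching the example names above; `env` is the Worker's env object.

```javascript
// Sketch: resolve the connection-string secret for the current deployment,
// following the DATABASE_URL_PROD / DATABASE_URL_STAGING convention above.
// The stage values are illustrative assumptions.
function connectionStringFor(env, stage) {
  const key =
    stage === "production" ? "DATABASE_URL_PROD" : "DATABASE_URL_STAGING";
  const value = env[key];
  if (!value) throw new Error(`Secret ${key} is not configured`);
  return value;
}
```

Keeping the lookup in one place means a renamed or missing secret fails with a clear error instead of a driver-level connection failure.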

## Popular providers

* [ Neon ](https://developers.cloudflare.com/workers/databases/third-party-integrations/neon/)
* [ PlanetScale ](https://developers.cloudflare.com/workers/databases/third-party-integrations/planetscale/)
* [ Supabase ](https://developers.cloudflare.com/workers/databases/third-party-integrations/supabase/)
* [ Turso ](https://developers.cloudflare.com/workers/databases/third-party-integrations/turso/)
* [ Upstash ](https://developers.cloudflare.com/workers/databases/third-party-integrations/upstash/)
* [ Xata ](https://developers.cloudflare.com/workers/databases/third-party-integrations/xata/)


---

---
title: Neon
description: Connect Workers to a Neon Postgres database.
image: https://developers.cloudflare.com/dev-products-preview.png
---

# Neon

[Neon ↗](https://neon.tech/) is a fully managed serverless PostgreSQL. It separates storage and compute to offer modern developer features, such as serverless, branching, and bottomless storage.

Note

You can connect to Neon using [Hyperdrive](https://developers.cloudflare.com/hyperdrive) (recommended), or using the Neon serverless driver, `@neondatabase/serverless`. Both provide connection pooling and reduce the number of round trips required to create a secure connection from Workers to your database.

Hyperdrive can provide the lowest possible latencies because it performs the database connection setup and connection pooling across Cloudflare's network. Hyperdrive supports native database drivers, libraries, and ORMs, and is included in all [Workers plans](https://developers.cloudflare.com/hyperdrive/platform/pricing/). Learn more about Hyperdrive in [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).

* [ Hyperdrive (recommended) ](#tab-panel-7137)
* [ Neon serverless driver ](#tab-panel-7138)

To connect to Neon using [Hyperdrive](https://developers.cloudflare.com/hyperdrive), follow these steps:

## 1\. Allow Hyperdrive access

You can connect Hyperdrive to any existing Neon database by creating a new user and fetching your database connection string.

### Neon Dashboard

1. Go to the [**Neon dashboard** ↗](https://console.neon.tech/app/projects) and select the project (database) you wish to connect to.
2. Select **Roles** from the sidebar and select **New Role**. Enter `hyperdrive-user` as the name (or your preferred name) and **copy the password**. Note that the password will not be displayed again: you will have to reset it if you do not save it somewhere.
3. Select **Dashboard** from the sidebar > go to the **Connection Details** pane > ensure you have selected the **branch**, **database** and **role** (for example,`hyperdrive-user`) that Hyperdrive will connect through.
4. Select `psql` and uncheck the **connection pooling** checkbox. Note down the connection string (starting with `postgres://hyperdrive-user@...`) from the text box.

With both the connection string and the password, you can now create a Hyperdrive database configuration.

## 2\. Create a database configuration

To configure Hyperdrive, you will need:

* The IP address (or hostname) and port of your database.
* The database username (for example, `hyperdrive-user`) you configured in a previous step.
* The password associated with that username.
* The name of the database you want Hyperdrive to connect to. For example, `postgres`.

Hyperdrive accepts the combination of these parameters in the common connection string format used by database drivers:

```
postgres://USERNAME:PASSWORD@HOSTNAME_OR_IP_ADDRESS:PORT/database_name
```

Most database providers supply a connection string you can copy and paste directly into Hyperdrive.
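As a sanity check, the parameters listed above can be read back out of a connection string with the standard WHATWG URL parser. The credentials and host below are made-up placeholders:

```javascript
// Decompose a connection string into the parameters Hyperdrive needs.
// All values here are placeholders, not real credentials.
const conn = new URL(
  "postgres://hyperdrive-user:my-password@db.example.com:5432/postgres",
);

console.log(conn.username); // database username
console.log(conn.password); // password
console.log(conn.hostname); // hostname or IP address
console.log(conn.port); // port
console.log(conn.pathname.slice(1)); // database name
```

This can be useful for verifying that a provider's string carries the expected host, port, and database name before creating the Hyperdrive configuration.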

* [ Dashboard ](#tab-panel-7133)
* [ Wrangler CLI ](#tab-panel-7134)

To create a Hyperdrive configuration with the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Hyperdrive** page.  
[ Go to **Hyperdrive** ](https://dash.cloudflare.com/?to=/:account/workers/hyperdrive)
2. Select **Create Configuration**.
3. Fill out the form, including the connection string.
4. Select **Create**.

To create a Hyperdrive configuration with the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/install-and-update/):

1. Open your terminal and run the following command. Replace `<NAME_OF_HYPERDRIVE_CONFIG>` with a name for your Hyperdrive configuration and paste the connection string provided from your database host, or replace `user`, `password`, `HOSTNAME_OR_IP_ADDRESS`, `port`, and `database_name` placeholders with those specific to your database:  
Terminal window  
```  
npx wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"  
```
2. This command outputs a binding for the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):  
   * [  wrangler.jsonc ](#tab-panel-7131)  
   * [  wrangler.toml ](#tab-panel-7132)  
```  
{  
  "$schema": "./node_modules/wrangler/config-schema.json",  
  "name": "hyperdrive-example",  
  "main": "src/index.ts",  
  // Set this to today's date  
  "compatibility_date": "2026-04-03",  
  "compatibility_flags": [  
    "nodejs_compat"  
  ],  
  // Pasted from the output of `wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string=[...]` above.  
  "hyperdrive": [  
    {  
      "binding": "HYPERDRIVE",  
      "id": "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"  
    }  
  ]  
}  
```  
```  
"$schema" = "./node_modules/wrangler/config-schema.json"  
name = "hyperdrive-example"  
main = "src/index.ts"  
# Set this to today's date  
compatibility_date = "2026-04-03"  
compatibility_flags = [ "nodejs_compat" ]  
[[hyperdrive]]  
binding = "HYPERDRIVE"  
id = "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"  
```

Note

Hyperdrive will attempt to connect to your database with the provided credentials to verify they are correct before creating a configuration. If you encounter an error when attempting to connect, refer to Hyperdrive's [troubleshooting documentation](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug possible causes.

## 3\. Use Hyperdrive from your Worker

Install the `node-postgres` driver:

 npm  yarn  pnpm  bun 

```
npm i "pg@>=8.16.3"
```

```
yarn add "pg@>=8.16.3"
```

```
pnpm add "pg@>=8.16.3"
```

```
bun add "pg@>=8.16.3"
```

Note

The minimum version of `node-postgres` required for Hyperdrive is `8.16.3`.

If using TypeScript, install the types package:

 npm  yarn  pnpm  bun 

```
npm i -D @types/pg
```

```
yarn add -D @types/pg
```

```
pnpm add -D @types/pg
```

```
bun add -d @types/pg
```

Add the required Node.js compatibility flags and Hyperdrive binding to your `wrangler.jsonc` file:

* [  wrangler.jsonc ](#tab-panel-7135)
* [  wrangler.toml ](#tab-panel-7136)

```
{
  // required for database drivers to function
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id-here>"
    }
  ]
}
```

```
compatibility_flags = [ "nodejs_compat" ]

# Set this to today's date
compatibility_date = "2026-04-03"

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-id-here>"
```

Create a new `Client` instance and pass the Hyperdrive `connectionString`:

TypeScript

```
// filepath: src/index.ts
import { Client } from "pg";

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    // Create a new client instance for each request. Hyperdrive maintains the
    // underlying database connection pool, so creating a new client is fast.
    const client = new Client({
      connectionString: env.HYPERDRIVE.connectionString,
    });

    try {
      // Connect to the database
      await client.connect();

      // Perform a simple query
      const result = await client.query("SELECT * FROM pg_tables");

      return Response.json({
        success: true,
        result: result.rows,
      });
    } catch (error: any) {
      console.error("Database error:", error.message);

      return new Response("Internal error occurred", { status: 500 });
    }
  },
};
```

Note

When connecting to a Neon database with Hyperdrive, you should use a driver like [node-postgres (pg)](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/) or [Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/) to connect directly to the underlying database instead of the [Neon serverless driver ↗](https://neon.tech/docs/serverless/serverless-driver). Hyperdrive is optimized for database access for Workers and will perform global connection pooling and fast query routing by connecting directly to your database.

## Next steps

* Learn more about [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).
* Refer to the [troubleshooting guide](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug common issues.
* Understand more about other [storage options](https://developers.cloudflare.com/workers/platform/storage-options/) available to Cloudflare Workers.

To connect to Neon using `@neondatabase/serverless`, follow these steps:

1. You need to have an existing Neon database to connect to. [Create a Neon database ↗](https://neon.tech/docs/postgres/tutorial-createdb#create-a-table) or [load data from an existing database to Neon ↗](https://neon.tech/docs/import/import-from-postgres).
2. Create an `elements` table using the Neon SQL editor. The SQL Editor allows you to query your databases directly from the Neon Console.  
```  
CREATE TABLE elements (  
  id INTEGER NOT NULL,  
  elementName TEXT NOT NULL,  
  atomicNumber INTEGER NOT NULL,  
  symbol TEXT NOT NULL  
);  
```
3. Insert some data into your newly created table.  
```  
INSERT INTO elements (id, elementName, atomicNumber, symbol)  
VALUES  
  (1, 'Hydrogen', 1, 'H'),  
  (2, 'Helium', 2, 'He'),  
  (3, 'Lithium', 3, 'Li'),  
  (4, 'Beryllium', 4, 'Be'),  
  (5, 'Boron', 5, 'B'),  
  (6, 'Carbon', 6, 'C'),  
  (7, 'Nitrogen', 7, 'N'),  
  (8, 'Oxygen', 8, 'O'),  
  (9, 'Fluorine', 9, 'F'),  
  (10, 'Neon', 10, 'Ne');  
```
4. Configure the Neon database credentials in your Worker:  
You need to add your Neon database connection string as a secret to your Worker. Get your connection string from the [Neon Console ↗](https://console.neon.tech) under **Connection Details**, then add it as a secret using Wrangler:  
Terminal window  
```  
# Add the database connection string as a secret  
npx wrangler secret put DATABASE_URL  
# When prompted, paste your Neon database connection string  
```
5. In your Worker, install the `@neondatabase/serverless` driver to connect to your database and start manipulating data:  
 npm  yarn  pnpm  bun  
```  
npm i @neondatabase/serverless  
```  
```  
yarn add @neondatabase/serverless  
```  
```  
pnpm add @neondatabase/serverless  
```  
```  
bun add @neondatabase/serverless  
```
6. The following example shows how to make a query to your Neon database in a Worker. The credentials needed to connect to Neon have been added as secrets to your Worker.  
JavaScript  
```  
import { Client } from "@neondatabase/serverless";  
export default {  
  async fetch(request, env, ctx) {  
    const client = new Client(env.DATABASE_URL);  
    await client.connect();  
    const { rows } = await client.query("SELECT * FROM elements");  
    return new Response(JSON.stringify(rows));  
  },  
};  
```

To learn more about Neon, refer to [Neon's official documentation ↗](https://neon.tech/docs/introduction).


---

---
title: PlanetScale
description: PlanetScale is a database platform that provides MySQL-compatible and PostgreSQL databases, making them more scalable, easier and safer to manage.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# PlanetScale

[PlanetScale ↗](https://planetscale.com/) is a database platform that provides MySQL-compatible and PostgreSQL databases, making them more scalable, easier, and safer to manage.

Note

You can connect to PlanetScale using [Hyperdrive](https://developers.cloudflare.com/hyperdrive) (recommended), or using the PlanetScale serverless driver, `@planetscale/database`. Both provide connection pooling and reduce the amount of round trips required to create a secure connection from Workers to your database.

Hyperdrive can provide lower latencies because it performs the database connection setup and connection pooling across Cloudflare's network. Hyperdrive supports native database drivers, libraries, and ORMs, and is included in all [Workers plans](https://developers.cloudflare.com/hyperdrive/platform/pricing/). Learn more about Hyperdrive in [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).

* [ Hyperdrive (recommended) ](#tab-panel-7143)
* [ PlanetScale serverless driver ](#tab-panel-7144)

To connect to PlanetScale using [Hyperdrive](https://developers.cloudflare.com/hyperdrive), follow these steps:

## 1\. Allow Hyperdrive access

You can connect Hyperdrive to any existing PlanetScale MySQL-compatible database by creating a new user and fetching your database connection string.

### PlanetScale Dashboard

1. Go to the [**PlanetScale dashboard** ↗](https://app.planetscale.com/) and select the database you wish to connect to.
2. Click **Connect**. Enter `hyperdrive-user` as the password name (or your preferred name) and configure the permissions as desired. Select **Create password**. Note the username and password as they will not be displayed again.
3. Select **Other** as your language or framework. Note down the database host, database name, database username, and password. You will need these to create a database configuration in Hyperdrive.

With the host, database name, username and password, you can now create a Hyperdrive database configuration.

Note

To reduce latency, use a [Placement Hint](https://developers.cloudflare.com/workers/configuration/placement/#configure-explicit-placement-hints) to run your Worker close to your PlanetScale database. This is especially useful when a single request makes multiple queries.

wrangler.jsonc

```
{
  "placement": {
    // Match to your PlanetScale region, for example "gcp:us-east4" or "aws:us-east-1"
    "region": "gcp:us-east4",
  },
}
```

## 2\. Create a database configuration

To configure Hyperdrive, you will need:

* The IP address (or hostname) and port of your database.
* The database username (for example, `hyperdrive-demo`) you configured in a previous step.
* The password associated with that username.
* The name of the database you want Hyperdrive to connect to. For example, `mysql`.

Hyperdrive accepts the combination of these parameters in the common connection string format used by database drivers:

```
mysql://USERNAME:PASSWORD@HOSTNAME_OR_IP_ADDRESS:PORT/database_name
```

Most database providers supply a connection string you can copy and paste directly into Hyperdrive.
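The connection string is simply those four parameters packed into one URL. If it helps to see the decomposition, the standard `URL` API (available in both Workers and Node.js) can pull it apart; the values below are placeholders, not real credentials:

```typescript
// Decompose a MySQL connection string into the parameters Hyperdrive needs.
const url = new URL("mysql://app_user:s3cret@db.example.com:3306/store");

const parts = {
  user: decodeURIComponent(url.username),     // "app_user"
  password: decodeURIComponent(url.password), // "s3cret"
  host: url.hostname,                         // "db.example.com"
  port: Number(url.port),                     // 3306
  database: url.pathname.slice(1),            // "store"
};
```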

To create a Hyperdrive configuration with the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/install-and-update/), open your terminal and run the following command.

* Replace <NAME\_OF\_HYPERDRIVE\_CONFIG> with a name for your Hyperdrive configuration and paste the connection string provided from your database host, or,
* Replace `user`, `password`, `HOSTNAME_OR_IP_ADDRESS`, `port`, and `database_name` placeholders with those specific to your database:

Terminal window

```
npx wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string="mysql://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```

Note

Hyperdrive will attempt to connect to your database with the provided credentials to verify they are correct before creating a configuration. If you encounter an error when attempting to connect, refer to Hyperdrive's [troubleshooting documentation](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug possible causes.

This command outputs a binding for the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

* [  wrangler.jsonc ](#tab-panel-7139)
* [  wrangler.toml ](#tab-panel-7140)

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "hyperdrive-example",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Pasted from the output of `wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string=[...]` above.
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"
    }
  ]
}
```

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "hyperdrive-example"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"
```

## 3\. Use Hyperdrive from your Worker

Install the [mysql2 ↗](https://github.com/sidorares/node-mysql2) driver:

 npm  yarn  pnpm  bun 

```
npm i "mysql2@>=3.13.0"
```

```
yarn add "mysql2@>=3.13.0"
```

```
pnpm add "mysql2@>=3.13.0"
```

```
bun add "mysql2@>=3.13.0"
```

Note

`mysql2` v3.13.0 or later is required.

Add the required Node.js compatibility flags and Hyperdrive binding to your `wrangler.jsonc` file:

* [  wrangler.jsonc ](#tab-panel-7141)
* [  wrangler.toml ](#tab-panel-7142)

```
{
  // required for database drivers to function
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id-here>"
    }
  ]
}
```

```
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-id-here>"
```

Create a new `connection` instance and pass the Hyperdrive parameters:

TypeScript

```
// mysql2 v3.13.0 or later is required
import { createConnection } from "mysql2/promise";

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Create a new connection on each request. Hyperdrive maintains the underlying
    // database connection pool, so creating a new connection is fast.
    const connection = await createConnection({
      host: env.HYPERDRIVE.host,
      user: env.HYPERDRIVE.user,
      password: env.HYPERDRIVE.password,
      database: env.HYPERDRIVE.database,
      port: env.HYPERDRIVE.port,

      // Required to enable mysql2 compatibility for Workers
      disableEval: true,
    });

    try {
      // Sample query
      const [results, fields] = await connection.query("SHOW tables;");

      // Return result rows as JSON
      return Response.json({ results, fields });
    } catch (e) {
      console.error(e);
      return Response.json(
        { error: e instanceof Error ? e.message : e },
        { status: 500 },
      );
    }
  },
} satisfies ExportedHandler<Env>;
```
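The `Env` type used by `satisfies ExportedHandler<Env>` is normally generated by running `npx wrangler types` against your Wrangler configuration. As a hand-written sketch only, the shape of the binding looks roughly like this, assuming the binding is named `HYPERDRIVE` as above:

```typescript
// Hand-written sketch of the Env shape for a Hyperdrive binding; in practice,
// run `npx wrangler types` to generate this from your Wrangler config.
interface Hyperdrive {
  connectionString: string;
  host: string;
  port: number;
  user: string;
  password: string;
  database: string;
}

interface Env {
  HYPERDRIVE: Hyperdrive;
}
```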

Note

The minimum version of `mysql2` required for Hyperdrive is `3.13.0`.

Note

When connecting to a PlanetScale database with Hyperdrive, you should use a driver that speaks your database's native protocol, such as [mysql2 ↗](https://github.com/sidorares/node-mysql2) for PlanetScale's MySQL-compatible databases (or [node-postgres (pg)](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/) and [Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/) for PlanetScale Postgres), instead of the [PlanetScale serverless driver ↗](https://planetscale.com/docs/tutorials/planetscale-serverless-driver). Hyperdrive is optimized for database access for Workers and will perform global connection pooling and fast query routing by connecting directly to your database.

## Next steps

* Learn more about [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).
* Refer to the [troubleshooting guide](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug common issues.
* Understand more about other [storage options](https://developers.cloudflare.com/workers/platform/storage-options/) available to Cloudflare Workers.

## Set up an integration with PlanetScale

To set up an integration with PlanetScale:

1. You need to have an existing PlanetScale database to connect to. [Create a PlanetScale database ↗](https://planetscale.com/docs/tutorials/planetscale-quick-start-guide#create-a-database) or [import an existing database to PlanetScale ↗](https://planetscale.com/docs/imports/database-imports#overview).
2. From the [PlanetScale web console ↗](https://planetscale.com/docs/concepts/web-console#get-started), create a `products` table with the following query:  
```  
CREATE TABLE products (  
  id int NOT NULL AUTO_INCREMENT PRIMARY KEY,  
  name varchar(255) NOT NULL,  
  image_url varchar(255),  
  category_id INT,  
  KEY category_id_idx (category_id)  
);  
```
3. Insert some data in your newly created table. Run the following command to add a product and category to your table:  
```  
INSERT INTO products (name, image_url, category_id)  
VALUES ('Ballpoint pen', 'https://example.com/500x500', '1');  
```
4. Configure the PlanetScale database credentials in your Worker:  
You need to add your PlanetScale database credentials as secrets to your Worker. Get your connection details from the [PlanetScale Dashboard ↗](https://app.planetscale.com) by creating a connection string, then add them as secrets using Wrangler:  
Terminal window  
```  
# Add the database host as a secret  
npx wrangler secret put DATABASE_HOST  
# When prompted, paste your PlanetScale host  
# Add the database username as a secret  
npx wrangler secret put DATABASE_USERNAME  
# When prompted, paste your PlanetScale username  
# Add the database password as a secret  
npx wrangler secret put DATABASE_PASSWORD  
# When prompted, paste your PlanetScale password  
```
5. In your Worker, install the `@planetscale/database` driver to connect to your PlanetScale database and start manipulating data:  
 npm  yarn  pnpm  bun  
```  
npm i @planetscale/database  
```  
```  
yarn add @planetscale/database  
```  
```  
pnpm add @planetscale/database  
```  
```  
bun add @planetscale/database  
```
6. The following example shows how to make a query to your PlanetScale database in a Worker. The credentials needed to connect to PlanetScale have been added as secrets to your Worker.  
JavaScript  
```  
import { connect } from "@planetscale/database";  
export default {  
  async fetch(request, env) {  
    const config = {  
      host: env.DATABASE_HOST,  
      username: env.DATABASE_USERNAME,  
      password: env.DATABASE_PASSWORD,  
      // see https://github.com/cloudflare/workerd/issues/698  
      fetch: (url, init) => {  
        delete init["cache"];  
        return fetch(url, init);  
      },  
    };  
    const conn = connect(config);  
    const data = await conn.execute("SELECT * FROM products;");  
    return new Response(JSON.stringify(data.rows), {  
      status: 200,  
      headers: {  
        "Content-Type": "application/json",  
      },  
    });  
  },  
};  
```
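The `fetch` override in the config above works around Cloudflare's workerd runtime rejecting the `cache` option on fetch (see the linked issue). The same workaround can be written as a small standalone helper so it is easy to reuse and test; the `stripCacheOption` name is our own, not part of `@planetscale/database`:

```typescript
// Remove the `cache` option from a fetch init object before forwarding it,
// because Cloudflare's workerd runtime rejects it (cloudflare/workerd#698).
// Every other option is passed through unchanged.
function stripCacheOption(
  init: Record<string, unknown> = {},
): Record<string, unknown> {
  const { cache: _cache, ...rest } = init;
  return rest;
}
```

In the Worker you would then pass `fetch: (url, init) => fetch(url, stripCacheOption(init))` in the connection config.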

To learn more about PlanetScale, refer to [PlanetScale's official documentation ↗](https://docs.planetscale.com/).


---

---
title: Supabase
description: Supabase is an open source Firebase alternative and a PostgreSQL database service that offers real-time functionality, database backups, and extensions. With Supabase, developers can quickly set up a PostgreSQL database and build applications.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Supabase

[Supabase ↗](https://supabase.com/) is an open source Firebase alternative and a PostgreSQL database service that offers real-time functionality, database backups, and extensions. With Supabase, developers can quickly set up a PostgreSQL database and build applications.

Note

The Supabase client (`@supabase/supabase-js`) provides access to Supabase's various features, including database access. If you need access to all of the Supabase client functionality, use the Supabase client.

If you want to connect directly to the Supabase Postgres database, connect using [Hyperdrive](https://developers.cloudflare.com/hyperdrive). Hyperdrive can provide lower latencies because it performs the database connection setup and connection pooling across Cloudflare's network. Hyperdrive supports native database drivers, libraries, and ORMs, and is included in all [Workers plans](https://developers.cloudflare.com/hyperdrive/platform/pricing/). Learn more about Hyperdrive in [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).

* [ Supabase client ](#tab-panel-7151)
* [ Hyperdrive ](#tab-panel-7152)

### Supabase client setup

To set up an integration with Supabase:

1. You need to have an existing Supabase database to connect to. [Create a Supabase database ↗](https://supabase.com/docs/guides/database/tables#creating-tables) or [have an existing database to connect to Supabase and load data from ↗](https://supabase.com/docs/guides/database/tables#loading-data).
2. Create a `countries` table with the following query. You can create a table in your Supabase dashboard in two ways:  
   * Use the table editor, which allows you to set up Postgres similar to a spreadsheet.  
   * Alternatively, use the [SQL editor ↗](https://supabase.com/docs/guides/database/overview#the-sql-editor):  
```  
CREATE TABLE countries (  
id SERIAL PRIMARY KEY,  
name VARCHAR(255) NOT NULL  
);  
```
3. Insert some data in your newly created table. Run the following commands to add countries to your table:  
```  
INSERT INTO countries (name) VALUES ('United States');  
INSERT INTO countries (name) VALUES ('Canada');  
INSERT INTO countries (name) VALUES ('The Netherlands');  
```
4. Configure the Supabase database credentials in your Worker:  
You need to add your Supabase URL and anon key as secrets to your Worker. Get these from your [Supabase Dashboard ↗](https://supabase.com/dashboard) under **Settings** \> **API**, then add them as secrets using Wrangler:  
Terminal window  
```  
# Add the Supabase URL as a secret  
npx wrangler secret put SUPABASE_URL  
# When prompted, paste your Supabase project URL  
# Add the Supabase anon key as a secret  
npx wrangler secret put SUPABASE_KEY  
# When prompted, paste your Supabase anon/public key  
```
5. In your Worker, install the `@supabase/supabase-js` driver to connect to your database and start manipulating data:  
 npm  yarn  pnpm  bun  
```  
npm i @supabase/supabase-js  
```  
```  
yarn add @supabase/supabase-js  
```  
```  
pnpm add @supabase/supabase-js  
```  
```  
bun add @supabase/supabase-js  
```
6. The following example shows how to make a query to your Supabase database in a Worker. The credentials needed to connect to Supabase have been added as secrets to your Worker.  
JavaScript  
```  
import { createClient } from "@supabase/supabase-js";  
export default {  
  async fetch(request, env) {  
    const supabase = createClient(env.SUPABASE_URL, env.SUPABASE_KEY);  
    const { data, error } = await supabase.from("countries").select("*");  
    if (error) throw error;  
    return new Response(JSON.stringify(data), {  
      headers: {  
        "Content-Type": "application/json",  
      },  
    });  
  },  
};  
```
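Supabase queries resolve to a `{ data, error }` pair rather than throwing. If you prefer exceptions, the inline check in the example can be centralized in a small generic helper; the `unwrap` helper below is our own illustration, not part of `@supabase/supabase-js`:

```typescript
// Turn a Supabase-style { data, error } result into value-or-throw:
// surface the error as an exception, otherwise narrow data to non-null.
function unwrap<T>(result: {
  data: T | null;
  error: { message: string } | null;
}): T {
  if (result.error) throw new Error(result.error.message);
  if (result.data === null) throw new Error("Query returned no data");
  return result.data;
}
```

Usage would look like `const countries = unwrap(await supabase.from("countries").select("*"));`.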

To learn more about Supabase, refer to [Supabase's official documentation ↗](https://supabase.com/docs).

When connecting to Supabase with Hyperdrive, you connect directly to the underlying Postgres database. This provides the lowest latency for database queries made server-side from Workers. To connect to Supabase using [Hyperdrive](https://developers.cloudflare.com/hyperdrive), follow these steps:

## 1\. Allow Hyperdrive access

You can connect Hyperdrive to any existing Supabase database using the `postgres` user that is set up during project creation. Alternatively, you can create a dedicated user for Hyperdrive in the [SQL Editor ↗](https://supabase.com/dashboard/project/%5F/sql/new).

The database endpoint can be found in the [database settings page ↗](https://supabase.com/dashboard/project/%5F/settings/database).

With a database user, password, database endpoint (hostname and port) and database name (default: postgres), you can now set up Hyperdrive.

## 2\. Create a database configuration

To configure Hyperdrive, you will need:

* The IP address (or hostname) and port of your database.
* The database username (for example, `hyperdrive-demo`) you configured in a previous step.
* The password associated with that username.
* The name of the database you want Hyperdrive to connect to. For example, `postgres`.

Hyperdrive accepts the combination of these parameters in the common connection string format used by database drivers:

```
postgres://USERNAME:PASSWORD@HOSTNAME_OR_IP_ADDRESS:PORT/database_name
```

Most database providers supply a connection string you can copy and paste directly into Hyperdrive.

* [ Dashboard ](#tab-panel-7147)
* [ Wrangler CLI ](#tab-panel-7148)

To create a Hyperdrive configuration with the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Hyperdrive** page.  
[ Go to **Hyperdrive** ](https://dash.cloudflare.com/?to=/:account/workers/hyperdrive)
2. Select **Create Configuration**.
3. Fill out the form, including the connection string.
4. Select **Create**.

To create a Hyperdrive configuration with the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/install-and-update/):

1. Open your terminal and run the following command. Replace `<NAME_OF_HYPERDRIVE_CONFIG>` with a name for your Hyperdrive configuration and paste the connection string provided from your database host, or replace `user`, `password`, `HOSTNAME_OR_IP_ADDRESS`, `port`, and `database_name` placeholders with those specific to your database:  
Terminal window  
```  
npx wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"  
```
2. This command outputs a binding for the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):  
   * [  wrangler.jsonc ](#tab-panel-7145)  
   * [  wrangler.toml ](#tab-panel-7146)  
```  
{  
  "$schema": "./node_modules/wrangler/config-schema.json",  
  "name": "hyperdrive-example",  
  "main": "src/index.ts",  
  // Set this to today's date  
  "compatibility_date": "2026-04-03",  
  "compatibility_flags": [  
    "nodejs_compat"  
  ],  
  // Pasted from the output of `wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string=[...]` above.  
  "hyperdrive": [  
    {  
      "binding": "HYPERDRIVE",  
      "id": "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"  
    }  
  ]  
}  
```  
```  
"$schema" = "./node_modules/wrangler/config-schema.json"  
name = "hyperdrive-example"  
main = "src/index.ts"  
# Set this to today's date  
compatibility_date = "2026-04-03"  
compatibility_flags = [ "nodejs_compat" ]  
[[hyperdrive]]  
binding = "HYPERDRIVE"  
id = "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"  
```

Note

Hyperdrive will attempt to connect to your database with the provided credentials to verify they are correct before creating a configuration. If you encounter an error when attempting to connect, refer to Hyperdrive's [troubleshooting documentation](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug possible causes.

## 3\. Use Hyperdrive from your Worker

Install the `node-postgres` driver:

 npm  yarn  pnpm  bun 

```
npm i "pg@>=8.16.3"
```

```
yarn add "pg@>=8.16.3"
```

```
pnpm add "pg@>=8.16.3"
```

```
bun add "pg@>=8.16.3"
```

Note

The minimum version of `node-postgres` required for Hyperdrive is `8.16.3`.

If using TypeScript, install the types package:

 npm  yarn  pnpm  bun 

```
npm i -D @types/pg
```

```
yarn add -D @types/pg
```

```
pnpm add -D @types/pg
```

```
bun add -d @types/pg
```

Add the required Node.js compatibility flags and Hyperdrive binding to your `wrangler.jsonc` file:

* [  wrangler.jsonc ](#tab-panel-7149)
* [  wrangler.toml ](#tab-panel-7150)

```
{
  // required for database drivers to function
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id-here>"
    }
  ]
}
```

```
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-id-here>"
```

Create a new `Client` instance and pass the Hyperdrive `connectionString`:

TypeScript

```
// filepath: src/index.ts
import { Client } from "pg";

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    // Create a new client instance for each request. Hyperdrive maintains the
    // underlying database connection pool, so creating a new client is fast.
    const client = new Client({
      connectionString: env.HYPERDRIVE.connectionString,
    });

    try {
      // Connect to the database
      await client.connect();

      // Perform a simple query
      const result = await client.query("SELECT * FROM pg_tables");

      return Response.json({
        success: true,
        result: result.rows,
      });
    } catch (error: any) {
      console.error("Database error:", error.message);

      return new Response("Internal error occurred", { status: 500 });
    }
  },
};
```

Note

When connecting to a Supabase database with Hyperdrive, you should use a driver like [node-postgres (pg)](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/) or [Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/) to connect directly to the underlying database instead of the [Supabase JavaScript client ↗](https://github.com/supabase/supabase-js). Hyperdrive is optimized for database access for Workers and will perform global connection pooling and fast query routing by connecting directly to your database.

## Next steps

* Learn more about [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).
* Refer to the [troubleshooting guide](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug common issues.
* Understand more about other [storage options](https://developers.cloudflare.com/workers/platform/storage-options/) available to Cloudflare Workers.


---

---
title: Turso
description: Turso is an edge-hosted, distributed database based on libSQL, an open-source fork of SQLite. Turso was designed to minimize query latency for applications where queries come from anywhere in the world.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Turso

[Turso ↗](https://turso.tech/) is an edge-hosted, distributed database based on [libSQL ↗](https://libsql.org/), an open-source fork of SQLite. Turso was designed to minimize query latency for applications where queries come from anywhere in the world.

## Set up an integration with Turso

To set up an integration with Turso:

1. You need to install the Turso CLI to create and populate a database. Use one of the following commands in your terminal to install it:  
Terminal window  
```  
# On macOS and Linux with Homebrew  
brew install tursodatabase/tap/turso  
# Manual scripted installation  
curl -sSfL https://get.tur.so/install.sh | bash  
```  
Next, run the following command to make sure the Turso CLI is installed:  
Terminal window  
```  
turso --version  
```
2. Before you create your first Turso database, you have to authenticate with your GitHub account by running:  
Terminal window  
```  
turso auth login  
```  
```  
Waiting for authentication...  
✔  Success! Logged in as <YOUR_GITHUB_USERNAME>  
```  
After you have authenticated, you can create a database using the command `turso db create <DATABASE_NAME>`. Turso will create a database and automatically choose a location closest to you.  
Terminal window  
```  
turso db create my-db  
```  
```  
# Example:  
Creating database my-db in Amsterdam, Netherlands (ams)  
# Once succeeded:  
Created database my-db in Amsterdam, Netherlands (ams) in 13 seconds.  
```  
With the first database created, you can now connect to it directly and execute SQL queries against it.  
Terminal window  
```  
turso db shell my-db  
```
3. Copy the following SQL query into the shell you just opened:  
```  
CREATE TABLE elements (  
  id INTEGER NOT NULL,  
  elementName TEXT NOT NULL,  
  atomicNumber INTEGER NOT NULL,  
  symbol TEXT NOT NULL  
);  
INSERT INTO elements (id, elementName, atomicNumber, symbol)  
VALUES (1, 'Hydrogen', 1, 'H'),  
  (2, 'Helium', 2, 'He'),  
  (3, 'Lithium', 3, 'Li'),  
  (4, 'Beryllium', 4, 'Be'),  
  (5, 'Boron', 5, 'B'),  
  (6, 'Carbon', 6, 'C'),  
  (7, 'Nitrogen', 7, 'N'),  
  (8, 'Oxygen', 8, 'O'),  
  (9, 'Fluorine', 9, 'F'),  
  (10, 'Neon', 10, 'Ne');  
```
4. Configure the Turso database credentials in your Worker:  
You need to add your Turso database URL and authentication token as secrets to your Worker. First, get your database URL and create an authentication token:  
Terminal window  
```  
# Get your database URL  
turso db show my-db --url  
# Create an authentication token  
turso db tokens create my-db  
```  
Then add these as secrets to your Worker using Wrangler:  
Terminal window  
```  
# Add the database URL as a secret  
npx wrangler secret put TURSO_URL  
# When prompted, paste your database URL  
# Add the authentication token as a secret  
npx wrangler secret put TURSO_AUTH_TOKEN  
# When prompted, paste your authentication token  
```
5. In your Worker, install the Turso client library:  
 npm  yarn  pnpm  bun  
```  
npm i @libsql/client  
```  
```  
yarn add @libsql/client  
```  
```  
pnpm add @libsql/client  
```  
```  
bun add @libsql/client  
```
6. The following example shows how to make a query to your Turso database in a Worker. The credentials needed to connect to Turso have been added as [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) to your Worker.  
TypeScript  
```  
import { Client as LibsqlClient, createClient } from "@libsql/client/web";  
export interface Env {  
  TURSO_URL?: string;  
  TURSO_AUTH_TOKEN?: string;  
}  
export default {  
  async fetch(request, env, ctx): Promise<Response> {  
    const client = buildLibsqlClient(env);  
    try {  
      const res = await client.execute("SELECT * FROM elements");  
      return new Response(JSON.stringify(res), {  
        status: 200,  
        headers: { "Content-Type": "application/json" },  
      });  
    } catch (error) {  
      console.error("Error executing SQL query:", error);  
      return new Response(  
        JSON.stringify({ error: "Internal Server Error" }),  
        {  
          status: 500,  
        },  
      );  
    }  
  },  
} satisfies ExportedHandler<Env>;  
function buildLibsqlClient(env: Env): LibsqlClient {  
  const url = env.TURSO_URL?.trim();  
  if (url === undefined) {  
    throw new Error("TURSO_URL env var is not defined");  
  }  
  const authToken = env.TURSO_AUTH_TOKEN?.trim();  
  if (authToken == undefined) {  
    throw new Error("TURSO_AUTH_TOKEN env var is not defined");  
  }  
  return createClient({ url, authToken });  
}  
```  
   * The libSQL client must be imported from `@libsql/client/web` exactly as shown when working with Cloudflare Workers. The default (non-web) import does not work in the Workers environment.  
   * The `Env` interface declares the [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) you added in step 4.  
   * `buildLibsqlClient` validates those secrets and constructs a new libSQL client on each request.  
   * The Worker uses the client to query the `elements` table and returns the result as JSON.
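The two secret checks in `buildLibsqlClient` follow the same pattern. If your Worker grows more configuration, that pattern can be factored into a helper; the `requireEnvVar` name below is our own illustration, not part of `@libsql/client`:

```typescript
// Hypothetical helper factoring out the repeated secret checks in
// buildLibsqlClient: trim the value, and fail with the variable's name
// when it is missing or empty.
function requireEnvVar(name: string, value: string | undefined): string {
  const trimmed = value?.trim();
  if (!trimmed) {
    throw new Error(`${name} env var is not defined`);
  }
  return trimmed;
}
```

`buildLibsqlClient` would then reduce to `createClient({ url: requireEnvVar("TURSO_URL", env.TURSO_URL), authToken: requireEnvVar("TURSO_AUTH_TOKEN", env.TURSO_AUTH_TOKEN) })`.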

With your environment configured and your code ready, you can now test your Worker locally before you deploy.

To learn more about Turso, refer to [Turso's official documentation ↗](https://docs.turso.tech).


---

---
title: Upstash
description: Upstash is a serverless database with Redis* and Kafka APIs. Upstash also offers QStash, a task queue and scheduler designed for serverless applications.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Upstash

[Upstash ↗](https://upstash.com/) is a serverless database with Redis\* and Kafka APIs. Upstash also offers QStash, a task queue and scheduler designed for serverless applications.

## Upstash for Redis

To set up an integration with Upstash:

1. You need an existing Upstash database to connect to. [Create an Upstash database ↗](https://docs.upstash.com/redis#create-a-database) or [load data from an existing database to Upstash ↗](https://docs.upstash.com/redis/howto/connectclient).
2. Insert some data into your Upstash database. You can add data in two ways:  
   * Use the CLI directly from your Upstash console.  
   * Alternatively, install [redis-cli ↗](https://redis.io/docs/getting-started/installation/) locally and run the following commands.  
Terminal window  
```  
set GB "Ey up?"  
set US "Yo, what’s up?"  
set NL "Hoi, hoe gaat het?"  
```  
Each command returns `OK`.
3. Configure the Upstash Redis credentials in your Worker:  
You need to add your Upstash Redis database URL and token as secrets to your Worker. Get these from your [Upstash Console ↗](https://console.upstash.com) under your database details, then add them as secrets using Wrangler:  
Terminal window  
```  
# Add the Upstash Redis URL as a secret  
npx wrangler secret put UPSTASH_REDIS_REST_URL  
# When prompted, paste your Upstash Redis REST URL  
# Add the Upstash Redis token as a secret  
npx wrangler secret put UPSTASH_REDIS_REST_TOKEN  
# When prompted, paste your Upstash Redis REST token  
```
4. In your Worker, install `@upstash/redis`, an HTTP client for connecting to your database and manipulating data:  
 npm  yarn  pnpm  bun  
```  
npm i @upstash/redis  
```  
```  
yarn add @upstash/redis  
```  
```  
pnpm add @upstash/redis  
```  
```  
bun add @upstash/redis  
```
5. The following example shows how to make a query to your Upstash database in a Worker. The credentials needed to connect to Upstash have been added as secrets to your Worker.  
JavaScript  
```  
import { Redis } from "@upstash/redis/cloudflare";  
export default {  
  async fetch(request, env) {  
    const redis = Redis.fromEnv(env);  
    const country = request.headers.get("cf-ipcountry");  
    if (country) {  
      const greeting = await redis.get(country);  
      if (greeting) {  
        return new Response(greeting);  
      }  
    }  
    return new Response("Hello What's up!");  
  },  
};  
```  
Note  
`Redis.fromEnv(env)` automatically picks up the default `url` and `token` names created in the integration.  
If you have renamed the secrets, you must declare them explicitly like in the [Upstash basic example ↗](https://docs.upstash.com/redis/sdks/redis-ts/getstarted#basic-usage).
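
The Worker's lookup logic reduces to a small pure function, sketched here with an in-memory `Map` standing in for Redis (keys and fallback text are taken from the example above):

```javascript
// Country-greeting lookup; a Map plays the role of the Redis store.
async function greetingFor(country, store) {
  if (country) {
    const greeting = await store.get(country);
    if (greeting) {
      return greeting;
    }
  }
  // Same fallback as the Worker above.
  return "Hello What's up!";
}

const store = new Map([
  ["GB", "Ey up?"],
  ["NL", "Hoi, hoe gaat het?"],
]);

greetingFor("GB", store).then((g) => console.log(g)); // logs "Ey up?"
```

Keeping the lookup separate from the `fetch` handler makes it easy to test without a live Redis connection.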

To learn more about Upstash, refer to the [Upstash documentation ↗](https://docs.upstash.com/redis).

## Upstash QStash

To set up an integration with Upstash QStash:

1. Configure the [publicly available HTTP endpoint ↗](https://docs.upstash.com/qstash#1-public-api) that you want to send your messages to.
2. Configure the Upstash QStash credentials in your Worker:  
You need to add your Upstash QStash token as a secret to your Worker. Get your token from your [Upstash Console ↗](https://console.upstash.com) under QStash settings, then add it as a secret using Wrangler:  
Terminal window  
```  
# Add the QStash token as a secret  
npx wrangler secret put QSTASH_TOKEN  
# When prompted, paste your QStash token  
```
3. In your Worker, install `@upstash/qstash`, an HTTP client for the QStash API:  
 npm  yarn  pnpm  bun  
```  
npm i @upstash/qstash  
```  
```  
yarn add @upstash/qstash  
```  
```  
pnpm add @upstash/qstash  
```  
```  
bun add @upstash/qstash  
```
4. Refer to the [Upstash documentation on how to receive webhooks from QStash in your Cloudflare Worker ↗](https://docs.upstash.com/qstash/quickstarts/cloudflare-workers#3-use-qstash-in-your-handler).
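
Under the hood, publishing a message is a single authenticated HTTP POST to QStash's REST API, which the `@upstash/qstash` client wraps for you. A hedged sketch of building that request for `fetch` (the token and destination URL are placeholders):

```javascript
// Build the fetch arguments for publishing a JSON message to QStash.
// qstash.upstash.io/v2/publish/<destination> is QStash's publish endpoint.
function buildPublishRequest(token, destination, body) {
  return {
    url: `https://qstash.upstash.io/v2/publish/${destination}`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    },
  };
}

const { url, init } = buildPublishRequest(
  "<QSTASH_TOKEN>", // your secret, e.g. env.QSTASH_TOKEN in a Worker
  "https://example.com/my-endpoint", // hypothetical destination
  { hello: "world" },
);
// Send with: await fetch(url, init);
```
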

\* Redis is a trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Upstash is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Upstash.


---

---
title: Xata
description: Xata is a PostgreSQL database platform designed to help developers operate and scale databases with enhanced productivity and performance. Xata provides features like instant copy-on-write database branches, zero-downtime schema changes, data anonymization, AI-powered performance monitoring, and BYOC.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Xata

[Xata ↗](https://xata.io) is a PostgreSQL database platform designed to help developers operate and scale databases with enhanced productivity and performance. Xata provides features like instant copy-on-write database branches, zero-downtime schema changes, data anonymization, AI-powered performance monitoring, and BYOC.

Note

You can connect to Xata using [Hyperdrive](https://developers.cloudflare.com/hyperdrive), which provides connection pooling and reduces the number of round trips required to create a secure connection from Workers to your database.

Hyperdrive can provide lower latencies because it performs the database connection setup and connection pooling across Cloudflare's network. Hyperdrive supports native database drivers, libraries, and ORMs, and is included in all [Workers plans](https://developers.cloudflare.com/hyperdrive/platform/pricing/). Learn more about Hyperdrive in [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).

Refer to the full [Xata documentation ↗](https://xata.io/documentation).

To connect to Xata using [Hyperdrive](https://developers.cloudflare.com/hyperdrive), follow these steps:

## 1. Allow Hyperdrive access

You can connect Hyperdrive to any existing Xata PostgreSQL database with the connection string provided by Xata.

### Xata dashboard

To retrieve your connection string from the Xata dashboard:

1. Go to the [**Xata dashboard** ↗](https://xata.io/).
2. Select the database you want to connect to.
3. Copy the `PostgreSQL` connection string.


## 2. Create a database configuration

To configure Hyperdrive, you will need:

* The IP address (or hostname) and port of your database.
* The database username (for example, `hyperdrive-demo`) you configured in a previous step.
* The password associated with that username.
* The name of the database you want Hyperdrive to connect to. For example, `postgres`.

Hyperdrive accepts the combination of these parameters in the common connection string format used by database drivers:

```
postgres://USERNAME:PASSWORD@HOSTNAME_OR_IP_ADDRESS:PORT/database_name
```

Most database providers supply a connection string you can copy and paste directly into Hyperdrive.
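
Because the connection string is a URL, you can sanity-check its parts with the standard `URL` class before handing it to Hyperdrive. A quick sketch with hypothetical placeholder values:

```javascript
// Parse a PostgreSQL connection string into its components.
// All values below are hypothetical placeholders.
const connectionString =
  "postgres://hyperdrive-demo:my-password@db.example.com:5432/postgres";

const parsed = new URL(connectionString);
console.log(parsed.username); // "hyperdrive-demo"
console.log(parsed.hostname); // "db.example.com"
console.log(parsed.port); // "5432"
console.log(parsed.pathname.slice(1)); // database name: "postgres"
```

If parsing throws or a component is empty, fix the string before creating the Hyperdrive configuration.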

* [ Dashboard ](#tab-panel-7155)
* [ Wrangler CLI ](#tab-panel-7156)

To create a Hyperdrive configuration with the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Hyperdrive** page.  
[ Go to **Hyperdrive** ](https://dash.cloudflare.com/?to=/:account/workers/hyperdrive)
2. Select **Create Configuration**.
3. Fill out the form, including the connection string.
4. Select **Create**.

To create a Hyperdrive configuration with the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/install-and-update/):

1. Open your terminal and run the following command. Replace `<NAME_OF_HYPERDRIVE_CONFIG>` with a name for your Hyperdrive configuration and paste the connection string provided by your database host, or replace the `user`, `password`, `HOSTNAME_OR_IP_ADDRESS`, `PORT`, and `database_name` placeholders with those specific to your database:  
Terminal window  
```  
npx wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"  
```
2. This command outputs a binding for the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):  
   * [  wrangler.jsonc ](#tab-panel-7153)  
   * [  wrangler.toml ](#tab-panel-7154)  
```  
{  
  "$schema": "./node_modules/wrangler/config-schema.json",  
  "name": "hyperdrive-example",  
  "main": "src/index.ts",  
  // Set this to today's date  
  "compatibility_date": "2026-04-03",  
  "compatibility_flags": [  
    "nodejs_compat"  
  ],  
  // Pasted from the output of `wrangler hyperdrive create <NAME_OF_HYPERDRIVE_CONFIG> --connection-string=[...]` above.  
  "hyperdrive": [  
    {  
      "binding": "HYPERDRIVE",  
      "id": "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"  
    }  
  ]  
}  
```  
```  
"$schema" = "./node_modules/wrangler/config-schema.json"  
name = "hyperdrive-example"  
main = "src/index.ts"  
# Set this to today's date  
compatibility_date = "2026-04-03"  
compatibility_flags = [ "nodejs_compat" ]  
[[hyperdrive]]  
binding = "HYPERDRIVE"  
id = "<ID OF THE CREATED HYPERDRIVE CONFIGURATION>"  
```

Note

Hyperdrive will attempt to connect to your database with the provided credentials to verify they are correct before creating a configuration. If you encounter an error when attempting to connect, refer to Hyperdrive's [troubleshooting documentation](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug possible causes.

## 3. Use Hyperdrive from your Worker

Install the `node-postgres` driver:

 npm  yarn  pnpm  bun 

```
npm i pg@>=8.16.3
```

```
yarn add pg@>=8.16.3
```

```
pnpm add pg@>=8.16.3
```

```
bun add pg@>=8.16.3
```

Note

The minimum version of `node-postgres` required for Hyperdrive is `8.16.3`.

If using TypeScript, install the types package:

 npm  yarn  pnpm  bun 

```
npm i -D @types/pg
```

```
yarn add -D @types/pg
```

```
pnpm add -D @types/pg
```

```
bun add -d @types/pg
```

Add the required Node.js compatibility flags and Hyperdrive binding to your `wrangler.jsonc` file:

* [  wrangler.jsonc ](#tab-panel-7157)
* [  wrangler.toml ](#tab-panel-7158)

```
{
  // required for database drivers to function
  "compatibility_flags": [
    "nodejs_compat"
  ],
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id-here>"
    }
  ]
}
```

```
compatibility_flags = [ "nodejs_compat" ]
# Set this to today's date
compatibility_date = "2026-04-03"

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-id-here>"
```

Create a new `Client` instance and pass the Hyperdrive `connectionString`:

TypeScript

```
// filepath: src/index.ts
import { Client } from "pg";

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    // Create a new client instance for each request. Hyperdrive maintains the
    // underlying database connection pool, so creating a new client is fast.
    const client = new Client({
      connectionString: env.HYPERDRIVE.connectionString,
    });

    try {
      // Connect to the database
      await client.connect();

      // Perform a simple query
      const result = await client.query("SELECT * FROM pg_tables");

      return Response.json({
        success: true,
        result: result.rows,
      });
    } catch (error: any) {
      console.error("Database error:", error.message);

      return new Response("Internal error occurred", { status: 500 });
    }
  },
};
```
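
When query values come from the request, pass them as parameters rather than concatenating them into the SQL text. node-postgres accepts a query config object with `text` and `values` fields; a small sketch (the `tableQuery` helper is hypothetical, the table is from the example above):

```javascript
// Build a parameterized query config for client.query(). The value travels
// separately from the SQL text, so user input cannot alter the query shape.
function tableQuery(tablename) {
  return {
    text: "SELECT * FROM pg_tables WHERE tablename = $1",
    values: [tablename],
  };
}

// Usage inside the fetch handler above:
//   const result = await client.query(tableQuery("pg_class"));
```
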

## Next steps

* Learn more about [How Hyperdrive Works](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).
* Refer to the [troubleshooting guide](https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/) to debug common issues.
* Understand more about other [storage options](https://developers.cloudflare.com/workers/platform/storage-options/) available to Cloudflare Workers.


---

---
title: Vectorize (vector database)
description: A globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vectorize (vector database)


---

---
title: Agents SDK
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Agents SDK


---

---
title: LangChain
image: https://developers.cloudflare.com/dev-products-preview.png
---


# LangChain


---

---
title: FastAPI
image: https://developers.cloudflare.com/dev-products-preview.png
---


# FastAPI


---

---
title: Hono
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Hono


---

---
title: Deploy an existing project
description: Learn how Wrangler automatically detects and configures your project for Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Deploy an existing project

Wrangler can automatically detect your framework and configure your project for Cloudflare Workers. This allows you to deploy existing projects with a single command, without manually setting up configuration files or installing adapters.

Note

Minimum required Wrangler version: **4.68.0**. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/).

## How it works

When you run `wrangler deploy` or `wrangler setup` in a project directory without a Wrangler configuration file, Wrangler will:

1. **Detect your framework** - Analyzes your project to identify the framework you're using
2. **Prompt for confirmation** - Shows the detected settings and asks you to confirm before making changes
3. **Install adapters** - Installs any required Cloudflare adapters for your framework
4. **Generate configuration** - Creates a `wrangler.jsonc` file with appropriate settings
5. **Update package.json** - Adds helpful scripts like `deploy`, `preview`, and `cf-typegen`
6. **Configure git** - Adds Wrangler-specific entries to `.gitignore`

## Supported frameworks

Automatic configuration supports the following frameworks:

| Framework                                                                                                     | Adapter/Tool                 | Notes                                                                                                        |
| ------------------------------------------------------------------------------------------------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------ |
| [Next.js](https://developers.cloudflare.com/workers/framework-guides/web-apps/nextjs/)                        | @opennextjs/cloudflare       | Runs @opennextjs/cloudflare migrate automatically. [R2 caching](#nextjs-caching) is configured if available. |
| [Astro](https://developers.cloudflare.com/workers/framework-guides/web-apps/astro/)                           | @astrojs/cloudflare          | Runs astro add cloudflare automatically                                                                      |
| [SvelteKit](https://developers.cloudflare.com/workers/framework-guides/web-apps/sveltekit/)                   | @sveltejs/adapter-cloudflare | Runs sv add sveltekit-adapter automatically                                                                  |
| [Nuxt](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/nuxt/)         | Built-in Cloudflare preset   |                                                                                                              |
| [React Router](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/)             | Cloudflare Vite plugin       |                                                                                                              |
| [Solid Start](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/solid/) | Built-in Cloudflare preset   |                                                                                                              |
| [TanStack Start](https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack-start/)         | Cloudflare Vite plugin       |                                                                                                              |
| [Angular](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/angular/)   |                              |                                                                                                              |
| [Analog](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/analog/)     | Built-in Cloudflare preset   |                                                                                                              |
| [Vite](https://developers.cloudflare.com/workers/vite-plugin/)                                                | Cloudflare Vite plugin       |                                                                                                              |
| [Vike](https://developers.cloudflare.com/workers/framework-guides/web-apps/vike/)                             |                              |                                                                                                              |
| [Waku](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/waku/)         |                              |                                                                                                              |
| Static sites                                                                                                  | None                         | Any directory with an index.html                                                                             |

Automatic configuration may also work with other projects, such as React or Vue SPAs. Try running `wrangler deploy` or `wrangler setup` to see if your project is detected.

## Files created and modified

When automatic configuration runs, the following files may be created or modified:

### `wrangler.jsonc`

A new Wrangler configuration file is created with settings appropriate for your framework:

* [  wrangler.jsonc ](#tab-panel-7394)
* [  wrangler.toml ](#tab-panel-7395)

```
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "name": "my-project",
  "main": "dist/_worker.js/index.js",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"],
  "assets": {
    "binding": "ASSETS",
    "directory": "dist"
  },
  "observability": {
    "enabled": true
  }
}
```

```
"$schema" = "node_modules/wrangler/config-schema.json"
name = "my-project"
main = "dist/_worker.js/index.js"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]

[assets]
binding = "ASSETS"
directory = "dist"

[observability]
enabled = true
```

The exact configuration varies based on your framework.

### `package.json`

New scripts are added to your `package.json`:

```
{
  "scripts": {
    "deploy": "npm run build && wrangler deploy",
    "preview": "npm run build && wrangler dev",
    "cf-typegen": "wrangler types"
  }
}
```

### `.gitignore`

Wrangler-specific entries are added:

```
# wrangler files
.wrangler
.dev.vars*
!.dev.vars.example
```

### `.assetsignore`

For frameworks that generate worker files in the output directory, an `.assetsignore` file is created to exclude them from static asset uploads:

```
_worker.js
_routes.json
```

## Using automatic configuration

### Deploy with automatic configuration

To deploy an existing project, run [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) in your project directory:

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Wrangler will detect your framework, show the configuration it will apply, and prompt you to confirm before making changes and deploying.

### Configure without deploying

To configure your project without deploying, use [wrangler setup](https://developers.cloudflare.com/workers/wrangler/commands/general/#setup):

 npm  yarn  pnpm 

```
npx wrangler setup
```

```
yarn wrangler setup
```

```
pnpm wrangler setup
```

This is useful when you want to review the generated configuration before deploying.

### Preview changes with dry run

To see what changes would be made without actually modifying any files:

 npm  yarn  pnpm 

```
npx wrangler setup --dry-run
```

```
yarn wrangler setup --dry-run
```

```
pnpm wrangler setup --dry-run
```

This outputs a summary of the configuration that would be generated.

## Non-interactive mode

To skip the confirmation prompts, use the [`--yes` flag](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy):

 npm  yarn  pnpm 

```
npx wrangler deploy --yes
```

```
yarn wrangler deploy --yes
```

```
pnpm wrangler deploy --yes
```

This applies the configuration automatically using sensible defaults. This is useful in CI/CD environments or when you want to accept the detected settings without reviewing them.
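
For example, a CI job can run the non-interactive deploy after installing dependencies. A hedged sketch as a GitHub Actions workflow, assuming a `CLOUDFLARE_API_TOKEN` repository secret (adapt to your CI system):

```yaml
# .github/workflows/deploy.yml (hypothetical path)
name: Deploy Worker
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # --yes accepts the detected configuration without prompting
      - run: npx wrangler deploy --yes
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
```
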

## Importing a repository from the dashboard

When you import a GitHub or GitLab repository via the Cloudflare dashboard, automatic configuration runs non-interactively. If your repository does not have a Wrangler configuration file, [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/) will create a pull request with the necessary configuration.

The PR includes all the configuration changes described above. A preview deployment is generated so you can test the changes before merging. Once merged, your project is ready for deployment.

For more details, refer to [Automatic pull requests](https://developers.cloudflare.com/workers/ci-cd/builds/automatic-prs/).

## Skipping automatic configuration

If you do not want automatic configuration to run, ensure you have a valid Wrangler configuration file (`wrangler.toml`, `wrangler.json`, or `wrangler.jsonc`) in your project before running `wrangler deploy`.

You can also manually configure your project by following the framework-specific guides in the [Framework guides](https://developers.cloudflare.com/workers/framework-guides/).

## Next.js caching

For Next.js projects, automatic configuration will set up [R2](https://developers.cloudflare.com/r2/) for caching if your Cloudflare account has R2 enabled. R2 caching improves performance for [Incremental Static Regeneration (ISR) ↗](https://opennext.js.org/cloudflare/caching) and other Next.js caching features.

* **If R2 is enabled on your account**: Automatic configuration creates an R2 bucket and configures caching automatically.
* **If R2 is not enabled**: Your project will be configured without caching. You can [enable R2](https://developers.cloudflare.com/r2/get-started/) later and manually configure caching by following the [OpenNext caching documentation ↗](https://opennext.js.org/cloudflare/caching).

To check if R2 is enabled or to enable it, go to **Storage & Databases** \> **R2** in the [Cloudflare dashboard ↗](https://dash.cloudflare.com/).

## Troubleshooting

### Multiple frameworks detected

When you import a repository via [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/) in the Cloudflare dashboard, automatic configuration will fail if your project contains multiple frameworks. To resolve this, set the [root directory](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) to the path containing only one framework. For monorepos, refer to [monorepo setup](https://developers.cloudflare.com/workers/ci-cd/builds/advanced-setups/#monorepos).

When running `wrangler deploy` or `wrangler setup` locally, Wrangler will prompt you to select which framework to use if multiple frameworks are detected.

### Framework not detected

If your framework is not detected, ensure your `package.json` includes the framework as a dependency.

### Configuration already exists

If a Wrangler configuration file already exists, automatic configuration will not run. To reconfigure your project, delete the existing configuration file and run `wrangler deploy` or `wrangler setup` again.

### Workspaces

Support for monorepos and npm/yarn/pnpm workspaces is currently limited. Wrangler analyzes the project directory where you run the command, but does not detect dependencies installed at the workspace root. This can cause framework detection to fail if the framework is listed as a dependency in the workspace's root `package.json` rather than in the individual project's `package.json`.

If you encounter issues, report them in the [Wrangler GitHub repository ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose).


---

---
title: Expo
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Expo


---

---
title: Astro
description: Create an Astro application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ SSG ](https://developers.cloudflare.com/search/?tags=SSG)[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack)[ Astro ](https://developers.cloudflare.com/search/?tags=Astro) 


# Astro

**Start from CLI**: Scaffold an Astro project on Workers, and pick your template.

```
# npm
npm create cloudflare@latest -- my-astro-app --framework=astro

# yarn
yarn create cloudflare my-astro-app --framework=astro

# pnpm
pnpm create cloudflare@latest my-astro-app --framework=astro
```

---

**Or just deploy**: Create a static blog with Astro and deploy it on Cloudflare Workers, with CI/CD and previews all set up for you.

[![Deploy to Workers](https://deploy.workers.cloudflare.com/button)](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create/deploy-to-workers&repository=https://github.com/cloudflare/templates/tree/main/astro-blog-starter-template)

## What is Astro?

[Astro ↗](https://astro.build/) is a JavaScript web framework designed for creating websites that display large amounts of content (such as blogs, documentation sites, or online stores).

Astro emphasizes performance through minimal client-side JavaScript: by default, it renders as much content as possible at build time, or [on-demand ↗](https://docs.astro.build/en/guides/on-demand-rendering/) on the "server" (which can be a Cloudflare Worker). [“Islands” ↗](https://docs.astro.build/en/concepts/islands/) of JavaScript are added only where interactivity or personalization is needed.

Astro is also framework-agnostic, and supports every major UI framework, including React, Preact, Svelte, Vue, and SolidJS, via its official [integrations ↗](https://astro.build/integrations/).

## Deploy a new Astro project on Workers

1. **Create a new project with the create-cloudflare CLI (C3).**

   ```
   # npm
   npm create cloudflare@latest -- my-astro-app --framework=astro
   # yarn
   yarn create cloudflare my-astro-app --framework=astro
   # pnpm
   pnpm create cloudflare@latest my-astro-app --framework=astro
   ```

   When you run this command, C3 creates a new project directory, initiates [Astro's official setup tool ↗](https://docs.astro.build/en/tutorial/1-setup/2/), and configures the project for Cloudflare. It then offers the option to instantly deploy your application to Cloudflare.

2. **Develop locally.**

   After creating your project, run the following command in your project directory to start a local development server.

   ```
   # npm
   npm run dev
   # yarn
   yarn run dev
   # pnpm
   pnpm run dev
   ```

3. **Deploy your project.**

   You can deploy your project to a [\*.workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) or a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your local machine or any CI/CD system (including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/#workers-builds)). Use the following command to build and deploy. If you're using a CI service, be sure to update your "deploy command" accordingly.

   ```
   # npm
   npm run deploy
   # yarn
   yarn run deploy
   # pnpm
   pnpm run deploy
   ```

## Deploy an existing Astro project on Workers

Automatic configuration

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect Astro, generate the necessary configuration, and deploy your project.

```
# npm
npx wrangler deploy
# yarn
yarn wrangler deploy
# pnpm
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Astro is detected, Wrangler generates configuration equivalent to the following:

```
// wrangler.jsonc (generated)
{
  "main": "dist/_worker.js/index.js",
  "assets": { "directory": "./dist", "binding": "ASSETS" },
  "compatibility_flags": ["nodejs_compat"],
  "observability": { "enabled": true }
}
```

It also configures the `@astrojs/cloudflare` adapter in `astro.config.mjs`, then deploys your Worker.

## Manual configuration

If you prefer to configure your project manually, follow the steps below.

### If you have a static site

If your Astro project is entirely pre-rendered, follow these steps:

1. **Add a Wrangler configuration file**

   In your project root, create a Wrangler configuration file with the following content:

   wrangler.jsonc:

   ```
   {
     "name": "my-astro-app",
     // Set this to today's date
     "compatibility_date": "2026-04-03",
     "assets": {
       "directory": "./dist"
     }
   }
   ```

   wrangler.toml:

   ```
   name = "my-astro-app"
   # Set this to today's date
   compatibility_date = "2026-04-03"
   [assets]
   directory = "./dist"
   ```

   What's this configuration doing? The key part is the `assets` field, which tells Wrangler where to find your static assets. In this case, we're telling Wrangler to look in the `./dist` directory. If your assets are in a different directory, update the `directory` value accordingly. Read about other [asset configuration options](https://developers.cloudflare.com/workers/wrangler/configuration/#assets). Also note that there is no `main` field in this config: because you're only serving static assets, no Worker code is needed for on-demand rendering/SSR.

2. **Build and deploy your project**

   You can deploy your project to a [\*.workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) or a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your local machine or any CI/CD system (including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/#workers-builds)). Use the following commands to build and deploy. If you're using a CI service, be sure to update your "deploy command" accordingly.

   ```
   # npm
   npx astro build
   npx wrangler@latest deploy
   # yarn
   yarn astro build
   yarn wrangler@latest deploy
   # pnpm
   pnpm astro build
   pnpm wrangler@latest deploy
   ```

### If your site uses on demand rendering

If your Astro project uses [on demand rendering (also known as SSR) ↗](https://docs.astro.build/en/guides/on-demand-rendering/), follow these steps:

1. **Install the Astro Cloudflare adapter**

   ```
   # npm
   npx astro add cloudflare
   # yarn
   yarn astro add cloudflare
   # pnpm
   pnpm astro add cloudflare
   ```

   What's happening behind the scenes? This command installs the Cloudflare adapter and makes the appropriate changes to your `astro.config.mjs` file in one step. By default, this sets the build output configuration to `output: 'server'`, which server-renders all your pages by default. If there are certain pages that _don't_ need on-demand rendering/SSR, for example static pages like a privacy policy, you should set `export const prerender = true` for that page or route to pre-render it. You can read more about the adapter configuration options [in the Astro docs ↗](https://docs.astro.build/en/guides/integrations-guide/cloudflare/#options).

2. **Add a `.assetsignore` file**

   Create a `.assetsignore` file in your `public/` folder, and add the following lines to it:

   ```
   _worker.js
   _routes.json
   ```
3. **Add a Wrangler configuration file**

   In your project root, create a Wrangler configuration file with the following content:

   wrangler.jsonc:

   ```
   {
     "name": "my-astro-app",
     "main": "./dist/_worker.js/index.js",
     // Set this to today's date
     "compatibility_date": "2026-04-03",
     "compatibility_flags": ["nodejs_compat"],
     "assets": {
       "binding": "ASSETS",
       "directory": "./dist"
     },
     "observability": {
       "enabled": true
     }
   }
   ```

   wrangler.toml:

   ```
   name = "my-astro-app"
   main = "./dist/_worker.js/index.js"
   # Set this to today's date
   compatibility_date = "2026-04-03"
   compatibility_flags = [ "nodejs_compat" ]
   [assets]
   binding = "ASSETS"
   directory = "./dist"
   [observability]
   enabled = true
   ```

   What's this configuration doing? The key parts are:
   * `main` points to the entry point of your Worker script. This is generated by the Astro adapter, and is what powers your server-rendered pages.
   * `assets.directory` tells Wrangler where to find your static assets. In this case, we're telling Wrangler to look in the `./dist` directory. If your assets are in a different directory, update the `directory` value accordingly.

   Read more about [Wrangler configuration options](https://developers.cloudflare.com/workers/wrangler/configuration/) and [asset configuration options](https://developers.cloudflare.com/workers/wrangler/configuration/#assets).
4. **Build and deploy your project**

   You can deploy your project to a [\*.workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) or a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your local machine or any CI/CD system (including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/#workers-builds)). Use the following commands to build and deploy. If you're using a CI service, be sure to update your "deploy command" accordingly.

   ```
   # npm
   npx astro build
   npx wrangler@latest deploy
   # yarn
   yarn astro build
   yarn wrangler@latest deploy
   # pnpm
   pnpm astro build
   pnpm wrangler@latest deploy
   ```

## Bindings

Note

You cannot use bindings if you're using Astro to generate a purely static site.

With bindings, your Astro application can be fully integrated with the Cloudflare Developer Platform, giving you access to compute, storage, AI and more. Refer to the [bindings overview](https://developers.cloudflare.com/workers/runtime-apis/bindings/) for more information on what's available and how to configure them.

The [Astro docs ↗](https://docs.astro.build/en/guides/integrations-guide/cloudflare/#cloudflare-runtime) provide information about how you can access them in your `locals`.
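For example, an Astro endpoint can read bindings from `locals.runtime.env`. A minimal sketch, assuming a KV namespace bound as `MY_KV` (the file path and binding name are illustrative):

```
// src/pages/api/greeting.ts (illustrative)
import type { APIRoute } from "astro";

export const GET: APIRoute = async ({ locals }) => {
  // The Cloudflare adapter exposes Worker bindings on locals.runtime.env
  const value = await locals.runtime.env.MY_KV.get("greeting");
  return new Response(value ?? "hello");
};
```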

## Sessions

Astro's [Sessions API ↗](https://docs.astro.build/en/guides/sessions/) allows you to store user data between requests, such as user preferences, shopping carts, or authentication credentials. When using the Cloudflare adapter, Astro automatically configures [Workers KV](https://developers.cloudflare.com/kv/) for session storage.

Wrangler automatically provisions a KV namespace named `SESSION` when you deploy, so no manual setup is required.

```
---
export const prerender = false;

const cart = await Astro.session?.get("cart");
---

<a href="/checkout">{cart?.length ?? 0} items</a>
```

You can customize the KV binding name with the [sessionKVBindingName ↗](https://docs.astro.build/en/guides/integrations-guide/cloudflare/#sessionkvbindingname) adapter option if you want to use a different binding name.

## Custom 404 pages

To serve a custom 404 page for your Astro site, add `not_found_handling` to your Wrangler configuration:

wrangler.jsonc:

```
{
  "assets": {
    "directory": "./dist",
    "not_found_handling": "404-page"
  }
}
```

wrangler.toml:

```
[assets]
directory = "./dist"
not_found_handling = "404-page"
```

This tells Cloudflare to serve your custom 404 page (for example, `src/pages/404.astro`) when a route is not found. Read more about [static asset routing behavior](https://developers.cloudflare.com/workers/static-assets/routing/).

## Astro's build configuration

The Astro Cloudflare adapter sets the build output configuration to `output: 'server'`, which means all pages are rendered on-demand in your Cloudflare Worker. If there are certain pages that _don't_ need on demand rendering/SSR, for example static pages such as a privacy policy, you should set `export const prerender = true` for that page or route to pre-render it. You can read more about on-demand rendering [in the Astro docs ↗](https://docs.astro.build/en/guides/on-demand-rendering/).

If you want to use Astro as a static site generator, you do not need the Astro Cloudflare adapter. Astro will pre-render all pages at build time by default, and you can simply upload those static assets to be served by Cloudflare.
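For example, opting a single page out of on-demand rendering looks like this (a sketch; the page path and content are illustrative):

```
---
// src/pages/privacy.astro: pre-rendered at build time
export const prerender = true;
---
<h1>Privacy policy</h1>
```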

## Node.js requirements

Astro 5.x requires Node.js 18.17.1 or higher. Astro 6 (currently in beta) requires Node.js 22 or higher. If you're using [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/), ensure your build environment meets these requirements.


---

---
title: Microfrontends
description: Split a single application into independently deployable frontends, using a router worker and service bindings
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Microfrontends

Microfrontends let you split a single application into smaller, independently deployable units that render as one cohesive application. Different teams using different technologies can develop, test, and deploy each microfrontend.

Use microfrontends when you want to:

* Enable many teams to deploy independently without coordinating releases
* Gradually migrate from a monolith to a distributed architecture
* Build multi-framework applications (for example, Astro, Remix, and Next.js in one app)

## Get started

Create a microfrontend project:

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create?type=vmfe)

This template automatically creates a router worker with pre-configured routing logic, and lets you configure [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to Workers you have already deployed to your Cloudflare account. The code for this template is available on GitHub at [cloudflare/templates ↗](https://github.com/cloudflare/templates/tree/main/microfrontend-template).

## How it works

```mermaid
graph LR
    A[Browser Request] --> B[Router Worker]
    B -->|Service Binding| C[Microfrontend A]
    B -->|Service Binding| D[Microfrontend B]
    B -->|Service Binding| E[Microfrontend C]
```

The router worker:

1. Analyzes the incoming request path
2. Matches it against configured routes
3. Forwards the request to the appropriate microfrontend via service binding
4. Rewrites HTML, CSS, and headers to ensure assets load correctly
5. Returns the response to the browser

Each microfrontend can be:

* A full-framework application (Next.js, SvelteKit, Astro, etc.)
* A static site with [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/)
* Built with different frameworks and technologies

## Routing logic

The router worker uses a `ROUTES` [environment variable](https://developers.cloudflare.com/workers/configuration/environment-variables/) to determine which microfrontend handles each path. Routes are matched by specificity, with longer paths taking precedence.

Example `ROUTES` configuration:

```
{
  "routes": [
    { "path": "/app-a", "binding": "MICROFRONTEND_A", "preload": true },
    { "path": "/app-b", "binding": "MICROFRONTEND_B", "preload": true },
    { "path": "/", "binding": "MICROFRONTEND_HOME" }
  ],
  "smoothTransitions": true
}
```

Each route requires:

* `path`: The mount path for the microfrontend (must be distinct from other routes)
* `binding`: The name of the service binding in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)
* `preload` (optional): Whether to prefetch this microfrontend for faster navigation

When a request comes in for `/app-a/dashboard`, the router:

1. Matches it to the `/app-a` route
2. Forwards the request to `MICROFRONTEND_A`
3. Strips the `/app-a` prefix, so the microfrontend receives `/dashboard`
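
The matching-and-stripping behavior described above can be sketched as follows (an illustrative sketch, not the template's actual implementation):

```typescript
type Route = { path: string; binding: string };

// Pick the most specific (longest) matching route, then strip its prefix
// so the microfrontend sees a path relative to its mount point.
function matchRoute(
  routes: Route[],
  pathname: string,
): { binding: string; rest: string } | null {
  const sorted = [...routes].sort((a, b) => b.path.length - a.path.length);
  for (const route of sorted) {
    const prefix = route.path === "/" ? "/" : route.path + "/";
    if (pathname === route.path || pathname.startsWith(prefix)) {
      const rest =
        route.path === "/" ? pathname : pathname.slice(route.path.length) || "/";
      return { binding: route.binding, rest };
    }
  }
  return null;
}
```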

The router includes path matching logic that supports:

```
// Static paths
{ "path": "/dashboard" }

// Dynamic parameters
{ "path": "/users/:id" }

// Wildcard matching (zero or more segments)
{ "path": "/docs/:path*" }

// Required segments (one or more segments)
{ "path": "/api/:path+" }
```
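One way to compile such patterns into regular expressions looks like this (an illustrative sketch, not the router's actual code):

```typescript
// Convert a route pattern like "/users/:id" or "/docs/:path*" into a RegExp.
function patternToRegExp(pattern: string): RegExp {
  const source = pattern
    .replace(/\/:([A-Za-z0-9_]+)\*/g, "(?:/(.*))?") // ":name*" -> zero or more segments
    .replace(/\/:([A-Za-z0-9_]+)\+/g, "/(.+)")      // ":name+" -> one or more segments
    .replace(/:([A-Za-z0-9_]+)/g, "([^/]+)");       // ":name"  -> a single segment
  return new RegExp(`^${source}$`);
}
```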

## Path rewriting

The router worker uses [HTMLRewriter](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/) to automatically rewrite HTML attributes to include the mount path prefix, ensuring assets load from the correct location.

When a microfrontend mounted at `/app-a` returns HTML:

```
<link rel="stylesheet" href="/assets/styles.css" />
<script src="/assets/app.js"></script>
<img src="/static/logo.png" />
```

The router rewrites it to:

```
<link rel="stylesheet" href="/app-a/assets/styles.css" />
<script src="/app-a/assets/app.js"></script>
<img src="/app-a/static/logo.png" />
```

The rewriter handles these attributes across all HTML elements:

* `href`, `src`, `poster`, `action`, `srcset`
* `data-*` attributes like `data-src`, `data-href`, `data-background`
* Framework-specific attributes like `astro-component-url`

The router only rewrites paths that start with configured asset prefixes to avoid breaking external URLs:

```
// Default asset prefixes
const DEFAULT_ASSET_PREFIXES = [
  "/assets/",
  "/static/",
  "/build/",
  "/_astro/",
  "/fonts/",
];
```

Most frameworks work with the default prefixes. For frameworks with different build outputs (like Next.js which uses `/_next/`), you can configure custom prefixes using the `ASSET_PREFIXES` [environment variable](https://developers.cloudflare.com/workers/configuration/environment-variables/):

```
["/_next/", "/public/"]
```
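The prefix check and rewrite can be sketched as follows (illustrative, not the router's actual code):

```typescript
const ASSET_PREFIXES = ["/assets/", "/static/", "/build/", "/_astro/", "/fonts/"];

// Prepend the mount path only to same-origin URLs under a known asset prefix.
function rewriteAssetUrl(url: string, mountPath: string): string {
  if (ASSET_PREFIXES.some((prefix) => url.startsWith(prefix))) {
    return mountPath + url;
  }
  return url; // external URLs and unknown paths pass through untouched
}
```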

## Asset handling

The router also rewrites CSS files to ensure `url()` references work correctly. When a microfrontend mounted at `/app-a` returns CSS:

```
.hero {
  background: url(/assets/hero.jpg);
}

.icon {
  background: url("/static/icon.svg");
}
```

The router rewrites it to:

```
.hero {
  background: url(/app-a/assets/hero.jpg);
}

.icon {
  background: url("/app-a/static/icon.svg");
}
```

The router also handles:

* **Redirect headers**: Rewrites `Location` headers to include the mount path
* **Cookie paths**: Updates `Set-Cookie` headers to scope cookies to the mount path
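
For instance, rewriting a `Location` header can be sketched as follows (illustrative, not the router's actual code):

```typescript
// Scope a same-origin redirect back under the microfrontend's mount path.
function rewriteLocationHeader(location: string, mountPath: string): string {
  if (
    location.startsWith("/") &&
    location !== mountPath &&
    !location.startsWith(mountPath + "/")
  ) {
    return mountPath + location;
  }
  return location; // absolute URLs to other origins are left as-is
}
```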

## Route preloading

When `preload: true` is set on a static mount route, the router automatically preloads those routes to enable faster navigation. The router uses **browser-specific optimization** to provide the best performance for each browser:

### Chromium browsers (Chrome, Edge, Opera, Brave)

For Chromium-based browsers, the router uses the **Speculation Rules API**, a modern, browser-native prefetching mechanism:

* Injects `<script type="speculationrules">` into the `<head>` element
* Browser handles prefetching automatically with optimal priority management
* Respects user preferences (battery saver, data saver modes)
* Uses per-document in-memory cache for faster access
* Not blocked by Cache-Control headers
* More efficient than JavaScript-based fetching

**Example injected speculation rules:**

```
{
  "prefetch": [
    {
      "urls": ["/app1", "/app2", "/dashboard"]
    }
  ]
}
```

## Smooth transitions

You can enable smooth page transitions between microfrontends using the [View Transitions API ↗](https://developer.mozilla.org/en-US/docs/Web/API/View%5FTransitions%5FAPI).

To enable smooth transitions, set `"smoothTransitions": true` in your `ROUTES` configuration:

```
{
  "routes": [
    { "path": "/app-a", "binding": "MICROFRONTEND_A" },
    { "path": "/app-b", "binding": "MICROFRONTEND_B" }
  ],
  "smoothTransitions": true
}
```

The router automatically injects CSS into HTML responses:

```
@supports (view-transition-name: none) {
  ::view-transition-old(root),
  ::view-transition-new(root) {
    animation-duration: 0.3s;
    animation-timing-function: ease-in-out;
  }
  main {
    view-transition-name: main-content;
  }
  nav {
    view-transition-name: navigation;
  }
}
```

This feature only works in browsers that support the View Transitions API. Browsers without support will navigate normally without animations.

## Add a new microfrontend

To add a new microfrontend to your application after initial setup:

1. **Create and deploy the new microfrontend worker**  
Deploy your new microfrontend as a separate Worker. This can be a [framework application](https://developers.cloudflare.com/workers/framework-guides/) (Next.js, Astro, etc.) or a static site with [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/).
2. **Add a [service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) in your router's Wrangler configuration file**

   wrangler.jsonc:

   ```
   {
     "$schema": "./node_modules/wrangler/config-schema.json",
     "services": [
       {
         "binding": "MICROFRONTEND_C",
         "service": "my-new-microfrontend"
       }
     ]
   }
   ```

   wrangler.toml:

   ```
   [[services]]
   binding = "MICROFRONTEND_C"
   service = "my-new-microfrontend"
   ```
3. **Update the `ROUTES` environment variable**  
Add your new route to the `ROUTES` configuration:  
```  
{  
  "routes": [  
    { "path": "/app-a", "binding": "MICROFRONTEND_A", "preload": true },  
    { "path": "/app-b", "binding": "MICROFRONTEND_B", "preload": true },  
    { "path": "/app-c", "binding": "MICROFRONTEND_C", "preload": true },  
    { "path": "/", "binding": "MICROFRONTEND_HOME" }  
  ]  
}  
```
4. **Redeploy the router worker**

   ```
   npx wrangler deploy
   ```

Your new microfrontend is now accessible at the configured path (for example, `/app-c`).

## Local development

During development, you can test your microfrontend architecture locally using Wrangler's service binding support. Run the router Worker locally using `wrangler dev`, and then in separate terminals run each of the microfrontends.

If you only need to work on one of the microfrontends, you can run the others remotely using [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings), without needing to have access to the source code or run a local dev server.

For each microfrontend you want to run remotely while in local dev, configure its service binding with the remote flag:

wrangler.jsonc:

```
{
  "services": [
    {
      "binding": "<BINDING_NAME>",
      "service": "<WORKER_NAME>",
      "remote": true
    }
  ]
}
```

wrangler.toml:

```
[[services]]
binding = "<BINDING_NAME>"
service = "<WORKER_NAME>"
remote = true
```

## Deployment

Each microfrontend can be deployed independently without redeploying the router or other microfrontends. This enables teams to:

* Deploy updates on their own schedule
* Roll back individual microfrontends without affecting others
* Test and release features independently

When you deploy a microfrontend worker, the router automatically routes requests to the latest version via the service binding. No router changes are required unless you are adding new routes or updating the `ROUTES` configuration.

To deploy to production, you can use [custom domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) for your router worker, and configure [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/) for continuous deployment from your Git repository.


---

---
title: More guides...
image: https://developers.cloudflare.com/dev-products-preview.png
---


# More guides...


---

---
title: Analog
description: Create an Analog application and deploy it to Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack)[ Angular ](https://developers.cloudflare.com/search/?tags=Angular) 


# Analog

In this guide, you will create a new [Analog ↗](https://analogjs.org/) application and deploy it to Cloudflare Workers.

[Analog ↗](https://analogjs.org/) is a fullstack meta-framework for Angular, powered by [Vite ↗](https://vitejs.dev/) and [Nitro ↗](https://nitro.unjs.io/).

Already have an Analog project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect Analog, generate the necessary configuration, and deploy your project.

```
# npm
npx wrangler deploy
# yarn
yarn wrangler deploy
# pnpm
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Analog is detected, Wrangler generates configuration equivalent to the following, then deploys your Worker:

```
// wrangler.jsonc (generated)
{
  "main": ".output/server/index.mjs",
  "assets": { "directory": ".output/public" },
  "compatibility_flags": ["nodejs_compat"],
  "observability": { "enabled": true }
}
```

## 1. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Analog's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Analog project, run the following command:

```
# npm
npm create cloudflare@latest -- my-analog-app --framework=analog
# yarn
yarn create cloudflare my-analog-app --framework=analog
# pnpm
pnpm create cloudflare@latest my-analog-app --framework=analog
```

After setting up your project, change your directory by running the following command:

```
cd my-analog-app
```

## 2. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

```
# npm
npm run dev
# yarn
yarn run dev
# pnpm
pnpm run dev
```

## 3. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

```
# npm
npm run deploy
# yarn
yarn run deploy
# pnpm
pnpm run deploy
```

---

## Bindings

Your Analog application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Nitro documentation ↗](https://nitro.unjs.io/deploy/providers/cloudflare#direct-access-to-cloudflare-bindings) provides information about configuring bindings and how you can access them in your Analog API routes.
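For example, a server route can read a binding from the request event. A minimal sketch assuming a KV namespace bound as `MY_KV` (the file path and binding name are illustrative):

```
// src/server/routes/api/greeting.ts (illustrative)
export default defineEventHandler(async (event) => {
  // Nitro's Cloudflare preset exposes bindings on event.context.cloudflare.env
  const { env } = event.context.cloudflare;
  return { greeting: (await env.MY_KV.get("greeting")) ?? "hello" };
});
```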

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to [compute, storage, AI and more](https://developers.cloudflare.com/workers/runtime-apis/bindings/).
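As an illustrative sketch, a binding is declared in your Wrangler configuration; the binding name `MY_KV` and the namespace ID below are placeholders, not values from this guide:

```jsonc
// wrangler.jsonc (illustrative; MY_KV and the namespace ID are placeholders)
{
  "name": "my-analog-app",
  "compatibility_date": "2026-04-03",
  "kv_namespaces": [
    { "binding": "MY_KV", "id": "<your-namespace-id>" }
  ]
}
```

Per the Nitro Cloudflare preset linked above, the binding is then reachable in an Analog API route via `event.context.cloudflare.env.MY_KV`.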


---

---
title: Angular
description: Create an Angular application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack)[ Angular ](https://developers.cloudflare.com/search/?tags=Angular) 


# Angular

In this guide, you will create a new [Angular ↗](https://angular.dev/) application and deploy it to Cloudflare Workers (with the new [Workers Assets](https://developers.cloudflare.com/workers/static-assets/)).

Automatic configuration

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect Angular, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

Angular detected: Wrangler generates the following configuration automatically and deploys to Workers.

* `wrangler.jsonc`: `assets.directory: dist/browser`
* `wrangler.jsonc`: `observability.enabled: true`

## 1\. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Angular's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Angular project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-angular-app --framework=angular
```

```
yarn create cloudflare my-angular-app --framework=angular
```

```
pnpm create cloudflare@latest my-angular-app --framework=angular
```

After setting up your project, change your directory by running the following command:

```
cd my-angular-app
```

## 2\. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run start
```

```
yarn run start
```

```
pnpm run start
```

## 3\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

---

## Static assets

By default, Cloudflare first tries to match a request path against a static asset path, based on the file structure of the uploaded asset directory. This is either the directory specified by `assets.directory` in your Wrangler config or, in the case of the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), the output directory of the client build. If no asset matches, Cloudflare invokes your Worker, if one is present. If there is no Worker, or the Worker uses the asset binding, Cloudflare falls back to the behavior set by [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/#routing-behavior).

Refer to the [routing documentation](https://developers.cloudflare.com/workers/static-assets/routing/) for more information about how routing works with static assets, and how to customize this behavior.
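As an illustrative sketch, a minimal Wrangler configuration for serving the Angular build output could look like the following; `single-page-application` is one of the documented `not_found_handling` values, so adjust it to your app's routing needs:

```jsonc
// wrangler.jsonc (illustrative static-assets configuration)
{
  "name": "my-angular-app",
  "compatibility_date": "2026-04-03",
  "assets": {
    "directory": "dist/browser",
    // Serve index.html for unmatched paths (client-side routing)
    "not_found_handling": "single-page-application"
  }
}
```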


---

---
title: Docusaurus
description: Create a Docusaurus application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ SSG ](https://developers.cloudflare.com/search/?tags=SSG) 


# Docusaurus

**Start from CLI**: Scaffold a Docusaurus project on Workers, and pick your template.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-docusaurus-app --framework=docusaurus
```

```
yarn create cloudflare my-docusaurus-app --framework=docusaurus
```

```
pnpm create cloudflare@latest my-docusaurus-app --framework=docusaurus
```

**Or just deploy**: Create a documentation site with Docusaurus and deploy it on Cloudflare Workers, with CI/CD and previews all set up for you.

[![Deploy to Workers](https://deploy.workers.cloudflare.com/button)](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create/deploy-to-workers&repository=https://github.com/cloudflare/templates/tree/staging/astro-blog-starter-template)

## What is Docusaurus?

[Docusaurus ↗](https://docusaurus.io/) is an open-source framework for building, deploying, and maintaining documentation websites. It is built on React and provides an intuitive way to create static websites with a focus on documentation.

Docusaurus is designed to be easy to use and customizable, making it a popular choice for developers and organizations looking to create documentation sites quickly.

## Deploy a new Docusaurus project on Workers

1. **Create a new project with the create-cloudflare CLI (C3).**  
 npm  yarn  pnpm  
```  
npm create cloudflare@latest -- my-docusaurus-app --framework=docusaurus --platform=workers  
```  
```  
yarn create cloudflare my-docusaurus-app --framework=docusaurus --platform=workers  
```  
```  
pnpm create cloudflare@latest my-docusaurus-app --framework=docusaurus --platform=workers  
```  
What's happening behind the scenes?  
When you run this command, C3 creates a new project directory, initiates [Docusaurus' official setup tool ↗](https://docusaurus.io/docs/installation), and configures the project for Cloudflare. It then offers the option to instantly deploy your application to Cloudflare.
2. **Develop locally.**  
After creating your project, run the following command in your project directory to start a local development server.  
 npm  yarn  pnpm  
```  
npm run dev  
```  
```  
yarn run dev  
```  
```  
pnpm run dev  
```
3. **Deploy your project.**  
Your project can be deployed to a [\*.workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your local machine or any CI/CD system (including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/#workers-builds)).  
Use the following command to build and deploy your project. If you're using a CI service, be sure to update your "deploy command" accordingly.  
 npm  yarn  pnpm  
```  
npm run deploy  
```  
```  
yarn run deploy  
```  
```  
pnpm run deploy  
```

## Deploy an existing Docusaurus project on Workers

### If you have a static site

If your Docusaurus project is entirely pre-rendered (which it usually is), follow these steps:

1. **Add a Wrangler configuration file.**  
In your project root, create a Wrangler configuration file with the following content:  
   * wrangler.jsonc:  
```  
{  
  "name": "my-docusaurus-app",  
  // Set this to today's date  
  "compatibility_date": "2026-04-03",  
  "assets": {  
    "directory": "./build"  
  }  
}  
```  
   * wrangler.toml:  
```  
name = "my-docusaurus-app"  
# Set this to today's date  
compatibility_date = "2026-04-03"  
[assets]  
directory = "./build"  
```  
What's this configuration doing?  
The key part of this config is the `assets` field, which tells Wrangler where to find your static assets. In this case, we're telling Wrangler to look in the `./build` directory. If your assets are in a different directory, update the `directory` value accordingly. For other options, refer to the [asset configuration documentation](https://developers.cloudflare.com/workers/static-assets/routing/).  
Also note that there is no `main` field in this config. That is because you're only serving static assets, so no Worker code is needed for on-demand rendering or SSR.
2. **Build and deploy your project.**  
You can deploy your project to a [\*.workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) or a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your local machine or any CI/CD system (including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/#workers-builds)). Use the following command to build and deploy. If you're using a CI service, be sure to update your "deploy command" accordingly.  
 npm  yarn  pnpm  
```  
npx docusaurus build  
```  
```  
yarn docusaurus build  
```  
```  
pnpm docusaurus build  
```  
 npm  yarn  pnpm  
```  
npx wrangler@latest deploy  
```  
```  
yarn dlx wrangler@latest deploy  
```  
```  
pnpm dlx wrangler@latest deploy  
```

## Use bindings with Docusaurus

Bindings are a way to connect your Docusaurus project to other Cloudflare services, enabling you to store and retrieve data within your application.

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to [compute, storage, AI and more](https://developers.cloudflare.com/workers/runtime-apis/bindings/).
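Bindings are accessed from Worker code, so a purely static Docusaurus site needs a `main` Worker script before it can use them. As a hedged sketch, where the Worker path, binding name, and database details are placeholders:

```jsonc
// wrangler.jsonc (illustrative; the Worker script and D1 details are placeholders)
{
  "name": "my-docusaurus-app",
  "compatibility_date": "2026-04-03",
  "main": "./worker/index.ts",
  "assets": { "directory": "./build" },
  "d1_databases": [
    { "binding": "DB", "database_name": "my-db", "database_id": "<your-database-id>" }
  ]
}
```

The Worker at `main` can then reach the database as `env.DB` in its fetch handler.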


---

---
title: Gatsby
description: Create a Gatsby application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ SSG ](https://developers.cloudflare.com/search/?tags=SSG) 


# Gatsby

In this guide, you will create a new [Gatsby ↗](https://www.gatsbyjs.com/) application and deploy it to Cloudflare Workers (with the new [Workers Assets](https://developers.cloudflare.com/workers/static-assets/)).

## 1\. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Gatsby's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Gatsby project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-gatsby-app --framework=gatsby
```

```
yarn create cloudflare my-gatsby-app --framework=gatsby
```

```
pnpm create cloudflare@latest my-gatsby-app --framework=gatsby
```

After setting up your project, change your directory by running the following command:

```
cd my-gatsby-app
```

## 2\. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

## 3\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```


---

---
title: Hono
description: Create a Hono application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Hono ](https://developers.cloudflare.com/search/?tags=Hono) 


# Hono

**Start from CLI** \- scaffold a full-stack app with a Hono API, a React SPA, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) for lightning-fast development.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-hono-app --template=cloudflare/templates/vite-react-template
```

```
yarn create cloudflare my-hono-app --template=cloudflare/templates/vite-react-template
```

```
pnpm create cloudflare@latest my-hono-app --template=cloudflare/templates/vite-react-template
```

---

**Or just deploy** \- create a full-stack app using Hono, React and Vite, with CI/CD and previews all set up for you.

[![Deploy to Workers](https://deploy.workers.cloudflare.com/button)](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create/deploy-to-workers&repository=https://github.com/cloudflare/templates/tree/main/vite-react-template)

## What is Hono?

[Hono ↗](https://hono.dev/) is an ultra-fast, lightweight framework for building web applications, and works fantastically with Cloudflare Workers. With Workers Assets, you can easily combine a Hono API running on Workers with a SPA to create a full-stack app.

## Creating a full-stack Hono app with a React SPA

1. **Create a new project with the create-cloudflare CLI (C3)**  
 npm  yarn  pnpm  
```  
npm create cloudflare@latest -- my-hono-app --template=cloudflare/templates/vite-react-template  
```  
```  
yarn create cloudflare my-hono-app --template=cloudflare/templates/vite-react-template  
```  
```  
pnpm create cloudflare@latest my-hono-app --template=cloudflare/templates/vite-react-template  
```  
How is this project set up?  
Below is a simplified file tree of the project.  
   * my-hono-app/  
     * src/  
       * worker/  
         * index.ts  
       * react-app/  
         * …  
     * index.html  
     * vite.config.ts  
     * wrangler.jsonc  
`wrangler.jsonc` is your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). In this file:  
   * `main` points to `src/worker/index.ts`. This is your Hono app, which will run in a Worker.  
   * `assets.not_found_handling` is set to `single-page-application`, which means that routes that are handled by your SPA do not go to the Worker, and are thus free.  
   * If you want to add bindings to resources on Cloudflare's developer platform, you configure them here. Read more about [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).  
`vite.config.ts` is set up to use the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). This runs your Worker in the Cloudflare Workers runtime, ensuring your local development environment is as close to production as possible.  
`src/worker/index.ts` is your Hono app, which contains a single endpoint to begin with, `/api`. At `src/react-app/src/App.tsx`, your React app calls this endpoint to get a message back and displays this in your SPA.
2. **Develop locally with the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/)**  
After creating your project, run the following command in your project directory to start a local development server.  
 npm  yarn  pnpm  
```  
npm run dev  
```  
```  
yarn run dev  
```  
```  
pnpm run dev  
```  
What's happening in local development?  
This project uses Vite for local development and build, and thus comes with all of Vite's features, including hot module replacement (HMR).  
In addition, `vite.config.ts` is set up to use the Cloudflare Vite plugin. This runs your application in the Cloudflare Workers runtime, just like in production, and enables access to local emulations of bindings.
3. **Deploy your project**  
Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including Cloudflare's own [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/).  
The following command will build and deploy your project. If you are using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.  
 npm  yarn  pnpm  
```  
npm run deploy  
```  
```  
yarn run deploy  
```  
```  
pnpm run deploy  
```

---

## Bindings

The [Hono documentation ↗](https://hono.dev/docs/getting-started/cloudflare-workers#bindings) provides information on how you can access bindings in your Hono app.

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to [compute, storage, AI and more](https://developers.cloudflare.com/workers/runtime-apis/bindings/).
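As a sketch of what the Hono docs linked above describe, assuming a KV namespace bound as `MY_KV` in `wrangler.jsonc` (a placeholder name), a route reads the binding from `c.env`:

```ts
import { Hono } from "hono";

// Mirror the bindings declared in wrangler.jsonc (MY_KV is a placeholder).
type Bindings = { MY_KV: KVNamespace };

const app = new Hono<{ Bindings: Bindings }>();

app.get("/api/greeting", async (c) => {
  // c.env exposes the Worker's bindings at request time.
  const greeting = (await c.env.MY_KV.get("greeting")) ?? "hello";
  return c.json({ greeting });
});

export default app;
```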


---

---
title: Nuxt
description: Create a Nuxt application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack)[ Nuxt ](https://developers.cloudflare.com/search/?tags=Nuxt) 


# Nuxt

In this guide, you will create a new [Nuxt ↗](https://nuxt.com/) application and deploy it to Cloudflare Workers (with the new [Workers Assets](https://developers.cloudflare.com/workers/static-assets/)).

Already have a Nuxt project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect Nuxt, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

Nuxt detected: Wrangler generates the following configuration automatically and deploys to Workers.

* `wrangler.jsonc`: `main: .output/server/index.mjs`
* `wrangler.jsonc`: `assets.directory: .output/public`
* `wrangler.jsonc`: `compatibility_flags: nodejs_compat`
* `wrangler.jsonc`: `observability.enabled: true`
* `nuxt.config.ts`: `preset: cloudflare`

## 1\. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Nuxt's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Nuxt project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-nuxt-app --framework=nuxt
```

```
yarn create cloudflare my-nuxt-app --framework=nuxt
```

```
pnpm create cloudflare@latest my-nuxt-app --framework=nuxt
```

After setting up your project, change your directory by running the following command:

```
cd my-nuxt-app
```

## 2\. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

## 3\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

---

## Bindings

Your Nuxt application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Nitro documentation ↗](https://nitro.unjs.io/deploy/providers/cloudflare#direct-access-to-cloudflare-bindings) provides information about configuring bindings and how you can access them in your Nuxt event handlers.

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to [compute, storage, AI and more](https://developers.cloudflare.com/workers/runtime-apis/bindings/).
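As a sketch, assuming a KV namespace bound as `MY_KV` (a placeholder name), a Nuxt server route reaches bindings through the Cloudflare context that the Nitro preset provides:

```ts
// server/api/greeting.ts (defineEventHandler is auto-imported in Nuxt)
export default defineEventHandler(async (event) => {
  // Provided by the Nitro "cloudflare" preset at runtime.
  const { MY_KV } = event.context.cloudflare.env;
  const greeting = (await MY_KV.get("greeting")) ?? "hello";
  return { greeting };
});
```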


---

---
title: Qwik
description: Create a Qwik application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack) 


# Qwik

In this guide, you will create a new [Qwik ↗](https://qwik.dev/) application and deploy it to Cloudflare Workers (with the new [Workers Assets](https://developers.cloudflare.com/workers/static-assets/)).

## 1\. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Qwik's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Qwik project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-qwik-app --framework=qwik
```

```
yarn create cloudflare my-qwik-app --framework=qwik
```

```
pnpm create cloudflare@latest my-qwik-app --framework=qwik
```

After setting up your project, change your directory by running the following command:

```
cd my-qwik-app
```

## 2\. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

## 3\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

---

## Bindings

Your Qwik application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Qwik documentation ↗](https://qwik.dev/docs/deployments/cloudflare-pages/#context) provides information about configuring bindings and how you can access them in your Qwik endpoint methods.

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to [compute, storage, AI and more](https://developers.cloudflare.com/workers/runtime-apis/bindings/).
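On the configuration side, a binding declaration looks like the following sketch, where the binding and bucket names are placeholders; see the Qwik docs linked above for reading the binding from the request event's `platform` object:

```jsonc
// wrangler.jsonc (illustrative; MY_BUCKET and the bucket name are placeholders)
{
  "name": "my-qwik-app",
  "compatibility_date": "2026-04-03",
  "r2_buckets": [
    { "binding": "MY_BUCKET", "bucket_name": "my-bucket" }
  ]
}
```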


---

---
title: Solid
description: Create a Solid application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack) 


# Solid

Note

Support for SolidStart projects on Cloudflare Workers is currently in beta.

Already have a SolidStart project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect SolidStart, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Wrangler detects a SolidStart project, it generates the following configuration in `wrangler.jsonc` and deploys your Worker:

* `main`: `.output/server/index.mjs`
* `assets.directory`: `.output/public`
* `compatibility_flags`: `nodejs_compat`
* `observability.enabled`: `true`
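Assembled into a single file, the generated configuration looks roughly like the sketch below. The `name` and `compatibility_date` values are placeholders here; Wrangler fills in your project's actual values:

```jsonc
{
  "name": "my-solid-app",
  "main": ".output/server/index.mjs",
  "compatibility_date": "2025-01-01",
  "compatibility_flags": ["nodejs_compat"],
  "assets": {
    "directory": ".output/public"
  },
  "observability": {
    "enabled": true
  }
}
```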

In this guide, you will create a new [Solid ↗](https://www.solidjs.com/) application and deploy it to Cloudflare Workers (with the new [Workers Assets](https://developers.cloudflare.com/workers/static-assets/)).

## 1\. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Solid's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Solid project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-solid-app --framework=solid --experimental
```

```
yarn create cloudflare my-solid-app --framework=solid --experimental
```

```
pnpm create cloudflare@latest my-solid-app --framework=solid --experimental
```

After setting up your project, change your directory by running the following command:

Terminal window

```
cd my-solid-app
```

## 2\. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

## 3\. Deploy your Project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

---

## Bindings

Your Solid application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Solid documentation ↗](https://docs.solidjs.com/reference/server-utilities/get-request-event) provides information about how to access platform primitives, including bindings. Specifically, for Cloudflare, you can use [getRequestEvent().nativeEvent.context.cloudflare.env ↗](https://docs.solidjs.com/solid-start/advanced/request-events#nativeevent) to access bindings.
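As a sketch of that access path — with the event shape stubbed out, since this snippet does not pull in SolidStart itself — a typed helper might look like the following. The `MY_KV` binding name is an assumption for illustration:

```typescript
// Sketch only: in a real SolidStart server function you would obtain the event
// via getRequestEvent() from "solid-js/web". Here the event shape is stubbed
// to show where Cloudflare bindings live on it.
interface CloudflareEnv {
  // MY_KV is a hypothetical KV namespace binding
  MY_KV: { get(key: string): Promise<string | null> };
}

interface RequestEventLike {
  nativeEvent: { context: { cloudflare: { env: CloudflareEnv } } };
}

// Mirrors the documented path: getRequestEvent().nativeEvent.context.cloudflare.env
function getCloudflareEnv(event: RequestEventLike): CloudflareEnv {
  return event.nativeEvent.context.cloudflare.env;
}

export { getCloudflareEnv };
export type { CloudflareEnv, RequestEventLike };
```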

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to compute, storage, AI and more.

[ Bindings ](https://developers.cloudflare.com/workers/runtime-apis/bindings/) Access to compute, storage, AI and more. 


---

---
title: Waku
description: Create a Waku application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack) 


# Waku

In this guide, you will create a new [Waku ↗](https://waku.gg/) application and deploy it to Cloudflare Workers (with the new [Workers Assets](https://developers.cloudflare.com/workers/static-assets/)). Waku is a minimal React framework built for [React 19 ↗](https://react.dev/blog/2024/12/05/react-19) and [React Server Components ↗](https://react.dev/reference/rsc/server-components). The use of Server Components is completely optional: Waku can run Server Components at build time and output static HTML, or it can be configured for dynamic React server rendering. It is built on top of [Hono ↗](https://hono.dev/) and [Vite ↗](https://vite.dev/).

Already have a Waku project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect Waku, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Wrangler detects a Waku project, it generates the following configuration in `wrangler.jsonc` and deploys your Worker:

* `main`: `dist/worker.js`
* `assets.directory`: `dist/public`
* `compatibility_flags`: `nodejs_compat`
* `observability.enabled`: `true`

## 1\. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate Waku's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Waku project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-waku-app --framework=waku
```

```
yarn create cloudflare my-waku-app --framework=waku
```

```
pnpm create cloudflare@latest my-waku-app --framework=waku
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Framework Starter`.
* For _Which development framework do you want to use?_, choose `Waku`.
* Complete the framework's own CLI wizard.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

After setting up your project, change your directory by running the following command:

Terminal window

```
cd my-waku-app
```

## 2\. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

## 3\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you are using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

---

## Bindings

Your Waku application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [Waku Cloudflare documentation ↗](https://waku.gg/guides/cloudflare#accessing-cloudflare-bindings-execution-context-and-request-response-objects) provides information about configuring bindings and how you can access them in your React Server Components.

## Static assets

You can serve static assets in your Waku application by adding them to the `./public/` directory. Common examples include images, stylesheets, fonts, and web manifests.

During the build process, Waku copies `.js`, `.css`, `.html`, and `.txt` files from this directory into the final assets output. `.txt` files are used for storing data used by Server Components that are rendered at build time.

By default, Cloudflare first tries to match a request path against a static asset path, which is based on the file structure of the uploaded asset directory. This is either the directory specified by `assets.directory` in your Wrangler config or, in the case of the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), the output directory of the client build. Failing that, we invoke a Worker if one is present. If there is no Worker, or if the Worker in turn uses the asset binding, Cloudflare falls back to the behavior set by [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/#routing-behavior).

Refer to the [routing documentation](https://developers.cloudflare.com/workers/static-assets/routing/) for more information about how routing works with static assets, and how to customize this behavior.


---

---
title: Next.js
description: Create a Next.js application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack) 


# Next.js

**Start from CLI**: Scaffold a Next.js project on Workers.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-next-app --framework=next
```

```
yarn create cloudflare my-next-app --framework=next
```

```
pnpm create cloudflare@latest my-next-app --framework=next
```

This is a simple getting started guide. For detailed documentation on how to use the Cloudflare OpenNext adapter, visit the [OpenNext website ↗](https://opennext.js.org/cloudflare).

## What is Next.js?

[Next.js ↗](https://nextjs.org/) is a [React ↗](https://react.dev/) framework for building full stack applications.

Next.js supports server-side and client-side rendering, as well as Partial Prerendering, which lets you combine static and dynamic components in the same route.

You can deploy your Next.js app to Cloudflare Workers using the OpenNext adapter.

## Next.js supported features

Most Next.js features are supported by the Cloudflare OpenNext adapter:

| Feature                               | Cloudflare adapter  | Notes                                                                        |
| ------------------------------------- | ------------------- | ---------------------------------------------------------------------------- |
| App Router                            | 🟢 supported        |                                                                              |
| Pages Router                          | 🟢 supported        |                                                                              |
| Route Handlers                        | 🟢 supported        |                                                                              |
| React Server Components               | 🟢 supported        |                                                                              |
| Static Site Generation (SSG)          | 🟢 supported        |                                                                              |
| Server-Side Rendering (SSR)           | 🟢 supported        |                                                                              |
| Incremental Static Regeneration (ISR) | 🟢 supported        |                                                                              |
| Server Actions                        | 🟢 supported        |                                                                              |
| Response streaming                    | 🟢 supported        |                                                                              |
| asynchronous work with next/after     | 🟢 supported        |                                                                              |
| Middleware                            | 🟢 supported        |                                                                              |
| Image optimization                    | 🟢 supported        | Supported via [Cloudflare Images](https://developers.cloudflare.com/images/) |
| Partial Prerendering (PPR)            | 🟢 supported        | PPR is experimental in Next.js                                               |
| Composable Caching ('use cache')      | 🟢 supported        | Composable Caching is experimental in Next.js                                |
| Node.js in Middleware                 | ⚪ not yet supported | Node.js middleware, introduced in Next.js 15.2, is not yet supported         |

## Deploy a new Next.js project on Workers

1. **Create a new project with the create-cloudflare CLI (C3).**  
 npm  yarn  pnpm  
```  
npm create cloudflare@latest -- my-next-app --framework=next  
```  
```  
yarn create cloudflare my-next-app --framework=next  
```  
```  
pnpm create cloudflare@latest my-next-app --framework=next  
```  
What's happening behind the scenes?  
When you run this command, C3 creates a new project directory, initiates [Next.js's official setup tool ↗](https://nextjs.org/docs/app/api-reference/cli/create-next-app), and configures the project for Cloudflare. It then offers the option to instantly deploy your application to Cloudflare.
2. **Develop locally.**  
After creating your project, run the following command in your project directory to start a local development server. The command uses the Next.js development server. It offers the best developer experience by quickly reloading your app every time the source code is updated.  
 npm  yarn  pnpm  
```  
npm run dev  
```  
```  
yarn run dev  
```  
```  
pnpm run dev  
```
3. **Test and preview your site with the Cloudflare adapter.**  
 npm  yarn  pnpm  
```  
npm run preview  
```  
```  
yarn run preview  
```  
```  
pnpm run preview  
```  
What's the difference between dev and preview?  
The command used in the previous step uses the Next.js development server, which runs in Node.js. However, your deployed application will run on Cloudflare Workers, which uses the `workerd` runtime. Therefore, when running integration tests and previewing your application, use the preview command: it is more faithful to production because it executes your application in the `workerd` runtime using `wrangler dev`.
4. **Deploy your project.**  
You can deploy your project to a [\*.workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) or a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your local machine or any CI/CD system (including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/#workers-builds)). Use the following command to build and deploy. If you're using a CI service, be sure to update your "deploy command" accordingly.  
 npm  yarn  pnpm  
```  
npm run deploy  
```  
```  
yarn run deploy  
```  
```  
pnpm run deploy  
```  
Note  
[**Workers Builds**](https://developers.cloudflare.com/workers/ci-cd/builds/) requires you to configure environment variables in the ["Build variables and secrets"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#:~:text=Build%20variables%20and%20secrets) section.  
This ensures the Next.js build has access to both public `NEXT_PUBLIC_...` variables and [non-`NEXT_PUBLIC_...` variables ↗](https://nextjs.org/docs/pages/guides/environment-variables#bundling-environment-variables-for-the-browser), which are essential for tasks like inlining values and building SSG pages.  
Learn more in the [OpenNext environment variable guide ↗](https://opennext.js.org/cloudflare/howtos/env-vars#workers-builds).

## Deploy an existing Next.js project on Workers

Automatic configuration

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect Next.js, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Wrangler detects a Next.js project, it adds the `@opennextjs/cloudflare` adapter to `package.json`, generates the following configuration in `wrangler.jsonc`, and deploys your Worker:

* `main`: `.open-next/worker.js`
* `assets.directory`: `.open-next/assets`
* `compatibility_flags`: `nodejs_compat`
* `observability.enabled`: `true`

## Manual configuration

If you prefer to configure your project manually, follow the steps below.

1. **Install [@opennextjs/cloudflare ↗](https://www.npmjs.com/package/@opennextjs/cloudflare)**  
 npm  yarn  pnpm  bun  
```  
npm i @opennextjs/cloudflare@latest  
```  
```  
yarn add @opennextjs/cloudflare@latest  
```  
```  
pnpm add @opennextjs/cloudflare@latest  
```  
```  
bun add @opennextjs/cloudflare@latest  
```
2. **Install [wrangler CLI ↗](https://developers.cloudflare.com/workers/wrangler) as a devDependency**  
 npm  yarn  pnpm  bun  
```  
npm i -D wrangler@latest  
```  
```  
yarn add -D wrangler@latest  
```  
```  
pnpm add -D wrangler@latest  
```  
```  
bun add -d wrangler@latest  
```
3. **Add a Wrangler configuration file**  
In your project root, create a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) with the following content:  
wrangler.jsonc  
```  
{  
  "$schema": "./node_modules/wrangler/config-schema.json",  
  "main": ".open-next/worker.js",  
  "name": "my-app",  
  // Set this to today's date  
  "compatibility_date": "2026-04-03",  
  "compatibility_flags": [  
    "nodejs_compat"  
  ],  
  "assets": {  
    "directory": ".open-next/assets",  
    "binding": "ASSETS"  
  }  
}  
```  
wrangler.toml  
```  
"$schema" = "./node_modules/wrangler/config-schema.json"  
main = ".open-next/worker.js"  
name = "my-app"  
# Set this to today's date  
compatibility_date = "2026-04-03"  
compatibility_flags = [ "nodejs_compat" ]  
[assets]  
directory = ".open-next/assets"  
binding = "ASSETS"  
```  
Note  
As shown above, you must enable the [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) _and_ set your [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) to `2024-09-23` or later for your Next.js app to work with @opennextjs/cloudflare.
4. **Add a configuration file for OpenNext**  
In your project root, create an OpenNext configuration file named `open-next.config.ts` with the following content:  
TypeScript  
```  
import { defineCloudflareConfig } from "@opennextjs/cloudflare";  
export default defineCloudflareConfig();  
```  
Note  
`open-next.config.ts` is where you configure caching; see the [adapter documentation ↗](https://opennext.js.org/cloudflare/caching) for more information.
5. **Update `package.json`**  
You can add the following scripts to your `package.json`:  
```  
"preview": "opennextjs-cloudflare build && opennextjs-cloudflare preview",  
"deploy": "opennextjs-cloudflare build && opennextjs-cloudflare deploy",  
"cf-typegen": "wrangler types --env-interface CloudflareEnv cloudflare-env.d.ts"  
```  
Usage  
   * `preview`: Builds your app and serves it locally, allowing you to quickly preview your app running locally in the Workers runtime, via a single command.  
   * `deploy`: Builds your app and then deploys it to Cloudflare.  
   * `cf-typegen`: Generates a `cloudflare-env.d.ts` file at the root of your project containing the types for the env.
6. **Develop locally.**  
After creating your project, run the following command in your project directory to start a local development server. The command uses the Next.js development server. It offers the best developer experience by quickly reloading your app after your source code is updated.  
 npm  yarn  pnpm  
```  
npm run dev  
```  
```  
yarn run dev  
```  
```  
pnpm run dev  
```
7. **Test your site with the Cloudflare adapter.**  
The command used in the previous step uses the Next.js development server to offer a great developer experience. However, your deployed application will run on Cloudflare Workers, so you should run your integration tests and verify that your application works correctly in that environment.  
 npm  yarn  pnpm  
```  
npm run preview  
```  
```  
yarn run preview  
```  
```  
pnpm run preview  
```
8. **Deploy your project.**  
You can deploy your project to a [\*.workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) or a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your local machine or any CI/CD system (including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/#workers-builds)). Use the following command to build and deploy. If you're using a CI service, be sure to update your "deploy command" accordingly.  
 npm  yarn  pnpm  
```  
npm run deploy  
```  
```  
yarn run deploy  
```  
```  
pnpm run deploy  
```  
Note  
[**Workers Builds**](https://developers.cloudflare.com/workers/ci-cd/builds/) requires you to configure environment variables in the ["Build variables and secrets"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#:~:text=Build%20variables%20and%20secrets) section.  
This ensures the Next.js build has access to both public `NEXT_PUBLIC_...` variables and [non-`NEXT_PUBLIC_...` variables ↗](https://nextjs.org/docs/pages/guides/environment-variables#bundling-environment-variables-for-the-browser), which are essential for tasks like inlining values and building SSG pages.  
Learn more in the [OpenNext environment variable guide ↗](https://opennext.js.org/cloudflare/howtos/env-vars#workers-builds).
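For reference, the `cf-typegen` script from step 5 generates a declaration file along these lines — a hedged sketch, since the real file is derived from your own Wrangler configuration. `Fetcher` normally comes from the Workers runtime types and is stubbed here so the snippet stands alone:

```typescript
// Hypothetical shape of the generated cloudflare-env.d.ts. The actual file is
// produced by `wrangler types` and reflects the bindings in your configuration.
interface Fetcher {
  // Minimal stand-in for the Workers runtime Fetcher type
  fetch(input: Request | string): Promise<Response>;
}

interface CloudflareEnv {
  // Matches the ASSETS binding declared in the Wrangler configuration above
  ASSETS: Fetcher;
}

export type { CloudflareEnv, Fetcher };
```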


---

---
title: React + Vite
description: Create a React application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ SPA ](https://developers.cloudflare.com/search/?tags=SPA) 


# React + Vite

**Start from CLI**: Scaffold a full-stack app with a React SPA, a Cloudflare Workers API, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) for lightning-fast development.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-react-app --framework=react
```

```
yarn create cloudflare my-react-app --framework=react
```

```
pnpm create cloudflare@latest my-react-app --framework=react
```

---

**Or just deploy**: Create a full-stack app using React, a Hono API, and Vite, with CI/CD and previews all set up for you.

[![Deploy to Workers](https://deploy.workers.cloudflare.com/button)](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create/deploy-to-workers&repository=https://github.com/cloudflare/templates/tree/main/vite-react-template)

## What is React?

[React ↗](https://react.dev/) is a library for building user interfaces. It allows you to create reusable UI components and manage the state of your application efficiently. You can use React to build a single-page application (SPA), and combine it with a backend API running on Cloudflare Workers to create a full-stack application.

## Creating a full-stack app with React

1. **Create a new project with the create-cloudflare CLI (C3)**  
 npm  yarn  pnpm  
```  
npm create cloudflare@latest -- my-react-app --framework=react  
```  
```  
yarn create cloudflare my-react-app --framework=react  
```  
```  
pnpm create cloudflare@latest my-react-app --framework=react  
```  
How is this project set up?  
Below is a simplified file tree of the project.  
   * my-react-app/  
     * src/  
       * App.tsx  
     * worker/  
       * index.ts  
     * index.html  
     * vite.config.ts  
     * wrangler.jsonc  
`wrangler.jsonc` is your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). In this file:  
   * `main` points to `worker/index.ts`. This is your Worker, which is going to act as your backend API.  
   * `assets.not_found_handling` is set to `single-page-application`, which means that routes that are handled by your React SPA do not go to the Worker, and are thus free.  
   * If you want to add bindings to resources on Cloudflare's developer platform, you configure them here. Read more about [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).  
`vite.config.ts` is set up to use the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). This runs your Worker in the Cloudflare Workers runtime, ensuring your local development environment is as close to production as possible.  
`worker/index.ts` is your backend API, which contains a single endpoint, `/api/`, that returns a text response. At `src/App.tsx`, your React app calls this endpoint to get a message back and displays this.
2. **Develop locally with the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/)**  
After creating your project, run the following command in your project directory to start a local development server.  
 npm  yarn  pnpm  
```  
npm run dev  
```  
```  
yarn run dev  
```  
```  
pnpm run dev  
```  
What's happening in local development?  
This project uses Vite for local development and build, and thus comes with all of Vite's features, including hot module replacement (HMR).  
In addition, `vite.config.ts` is set up to use the Cloudflare Vite plugin. This runs your application in the Cloudflare Workers runtime, just like in production, and enables access to local emulations of bindings.
3. **Deploy your project**  
Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including Cloudflare's own [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/).  
The following command will build and deploy your project. If you are using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.  
 npm  yarn  pnpm  
```  
npm run deploy  
```  
```  
yarn run deploy  
```  
```  
pnpm run deploy  
```

---

## Asset Routing

If you're using React as a SPA, you will want to set `not_found_handling = "single-page-application"` in your Wrangler configuration file.

By default, Cloudflare first tries to match a request path against a static asset path, which is based on the file structure of the uploaded asset directory. This is either the directory specified by `assets.directory` in your Wrangler config or, in the case of the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), the output directory of the client build. Failing that, we invoke a Worker if one is present. If there is no Worker, or if the Worker in turn uses the asset binding, Cloudflare falls back to the behavior set by [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/#routing-behavior).

Refer to the [routing documentation](https://developers.cloudflare.com/workers/static-assets/routing/) for more information about how routing works with static assets, and how to customize this behavior.
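The routing order described above can be modeled as a small function — purely illustrative, not Cloudflare's actual implementation:

```typescript
// Illustrative model of static-asset routing: asset match first, then the
// Worker, then the configured not_found_handling behavior.
type NotFoundHandling = "none" | "404-page" | "single-page-application";

function resolveRequest(
  path: string,
  assetPaths: Set<string>,
  hasWorker: boolean,
  notFoundHandling: NotFoundHandling,
): string {
  if (assetPaths.has(path)) return `asset:${path}`; // exact static asset match
  if (hasWorker) return "worker"; // no asset matched: invoke the Worker
  // No Worker (or the Worker re-used the asset binding): apply the fallback
  if (notFoundHandling === "single-page-application") return "asset:/index.html";
  if (notFoundHandling === "404-page") return "asset:/404.html";
  return "404";
}

export { resolveRequest };
export type { NotFoundHandling };
```

With `not_found_handling` set to `single-page-application`, any path that matches neither an asset nor the Worker serves `index.html`, letting the client-side router take over.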

## Use bindings with React

Your new project also contains a Worker at `./worker/index.ts`, which you can use as a backend API for your React application. While your React application cannot directly access Workers bindings, it can interact with them through this Worker. You can make [fetch() requests](https://developers.cloudflare.com/workers/runtime-apis/fetch/) from your React application to the Worker, which can then handle the request and use bindings. Learn how to [configure Workers bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).
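A minimal sketch of that pattern, assuming a hypothetical KV namespace binding named `MY_KV` (the binding interface is stubbed so the snippet is self-contained; in a real Worker it would come from your generated types):

```typescript
// Sketch of a Worker endpoint the React app can call with fetch("/api/message").
// MY_KV is an assumed binding name; configure the real one in wrangler.jsonc.
interface Env {
  MY_KV: { get(key: string): Promise<string | null> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/message") {
      // Read from the binding and return JSON the SPA can consume
      const value = (await env.MY_KV.get("greeting")) ?? "hello";
      return new Response(JSON.stringify({ message: value }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

From `src/App.tsx`, the SPA would call `fetch("/api/message")` and read the JSON body; the Worker, not the browser, talks to the binding.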

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to compute, storage, AI and more.

[ Bindings ](https://developers.cloudflare.com/workers/runtime-apis/bindings/) Access to compute, storage, AI and more. 


---

---
title: React Router (formerly Remix)
description: Create a React Router application and deploy it to Cloudflare Workers
image: https://developers.cloudflare.com/dev-products-preview.png
---

### Tags

[ Full stack ](https://developers.cloudflare.com/search/?tags=Full%20stack) 


# React Router (formerly Remix)

**Start from CLI**: Scaffold a full-stack app with [React Router v7 ↗](https://reactrouter.com/) and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) for lightning-fast development.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-react-router-app --framework=react-router
```

```
yarn create cloudflare my-react-router-app --framework=react-router
```

```
pnpm create cloudflare@latest my-react-router-app --framework=react-router
```

**Or just deploy**: Create a full-stack app using React Router v7, with CI/CD and previews all set up for you.

[![Deploy to Workers](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-starter-template)

Note

SPA mode and prerendering are not currently supported when using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). If you wish to use React Router in an SPA then we recommend starting with the [React template](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/) and using React Router [as a library ↗](https://reactrouter.com/start/data/installation).

Already have a React Router project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect React Router, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Wrangler detects React Router, it generates a `wrangler.jsonc` similar to:

* `main`: `build/server/index.js`
* `assets.directory`: `build/client`
* `compatibility_flags`: `["nodejs_compat"]`
* `observability.enabled`: `true`

## What is React Router?

[React Router v7 ↗](https://reactrouter.com/) is a full-stack React framework for building web applications. It combines with the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) to provide a first-class experience for developing, building and deploying your apps on Cloudflare.

## Creating a full-stack React Router app

1. **Create a new project with the create-cloudflare CLI (C3)**  
 npm  yarn  pnpm  
```  
npm create cloudflare@latest -- my-react-router-app --framework=react-router  
```  
```  
yarn create cloudflare my-react-router-app --framework=react-router  
```  
```  
pnpm create cloudflare@latest my-react-router-app --framework=react-router  
```  
How is this project set up?  
Below is a simplified file tree of the project.  
   * my-react-router-app/  
     * app/  
       * routes/  
         * ...  
       * entry.server.ts  
       * root.tsx  
       * routes.ts  
     * workers/  
       * app.ts  
     * react-router.config.ts  
     * vite.config.ts  
     * wrangler.jsonc  
`react-router.config.ts` is your [React Router config file ↗](https://reactrouter.com/explanation/special-files#react-routerconfigts). In this file:  
   * `ssr` is set to `true`, meaning that your application will use server-side rendering.  
   * `future.v8_viteEnvironmentApi` is set to `true` to enable compatibility with the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).  
`vite.config.ts` is your [Vite config file ↗](https://vite.dev/config/). The React Router and Cloudflare plugins are included in the `plugins` array. The [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) runs your server code in the Workers runtime, ensuring your local development environment is as close to production as possible.  
`wrangler.jsonc` is your [Worker config file](https://developers.cloudflare.com/workers/wrangler/configuration/). In this file:  
   * `main` points to `./workers/app.ts`. This is the entry file for your Worker. The default export includes a [fetch handler](https://developers.cloudflare.com/workers/runtime-apis/fetch/), which delegates the request to React Router.  
   * If you want to add [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to resources on Cloudflare's developer platform, you configure them here.
2. **Develop locally**  
After creating your project, run the following command in your project directory to start a local development server.  
 npm  yarn  pnpm  
```  
npm run dev  
```  
```  
yarn run dev  
```  
```  
pnpm run dev  
```  
What's happening in local development?  
This project uses React Router in combination with the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). This means that your application runs in the Cloudflare Workers runtime, just like in production, and enables access to local emulations of bindings.
3. **Deploy your project**  
Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your own machine or from any CI/CD system, including Cloudflare's own [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/).  
The following command will build and deploy your project. If you are using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.  
 npm  yarn  pnpm  
```  
npm run deploy  
```  
```  
yarn run deploy  
```  
```  
pnpm run deploy  
```
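Putting the options described above together, a minimal `react-router.config.ts` can look like this sketch (the `Config` type import follows React Router v7's config file convention):

```
import type { Config } from "@react-router/dev/config";

export default {
  // Server-side rendering enabled
  ssr: true,
  future: {
    // Enables compatibility with the Cloudflare Vite plugin
    v8_viteEnvironmentApi: true,
  },
} satisfies Config;
```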

## Use bindings with React Router

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to compute, storage, AI and more.

Once you have configured the bindings in the Wrangler configuration file, they are then available within `context.cloudflare` in your loader or action functions:

app/routes/home.tsx

```
// Route types are generated by React Router (./+types/home)
import type { Route } from "./+types/home";
import { Welcome } from "../welcome/welcome";

export function loader({ context }: Route.LoaderArgs) {
  return { message: context.cloudflare.env.VALUE_FROM_CLOUDFLARE };
}

export default function Home({ loaderData }: Route.ComponentProps) {
  return <Welcome message={loaderData.message} />;
}
```

Because you have direct access to your Worker entry file (`workers/app.ts`), you can also add additional exports such as [Durable Objects](https://developers.cloudflare.com/durable-objects/) and [Workflows](https://developers.cloudflare.com/workflows/).

Example: Using Workflows

Here is an example of how to set up a simple Workflow in your Worker entry file.

workers/app.ts

```
import { createRequestHandler } from "react-router";
import { WorkflowEntrypoint, type WorkflowStep, type WorkflowEvent } from "cloudflare:workers";

declare global {
  interface CloudflareEnvironment extends Env {}
}

type Env = {
  MY_WORKFLOW: Workflow;
};

export class MyWorkflow extends WorkflowEntrypoint<Env> {
  override async run(event: WorkflowEvent<{ hello: string }>, step: WorkflowStep) {
    await step.do("first step", async () => {
      return { output: "First step result" };
    });

    await step.sleep("sleep", "1 second");

    await step.do("second step", async () => {
      return { output: "Second step result" };
    });

    return "Workflow output";
  }
}

const requestHandler = createRequestHandler(
  () => import("virtual:react-router/server-build"),
  import.meta.env.MODE
);

export default {
  async fetch(request, env, ctx) {
    return requestHandler(request, {
      cloudflare: { env, ctx },
    });
  },
} satisfies ExportedHandler<CloudflareEnvironment>;
```

Configure it in your Wrangler configuration file:

wrangler.jsonc

```
{
  "workflows": [
    {
      "name": "my-workflow",
      "binding": "MY_WORKFLOW",
      "class_name": "MyWorkflow"
    }
  ]
}
```

wrangler.toml

```
[[workflows]]
name = "my-workflow"
binding = "MY_WORKFLOW"
class_name = "MyWorkflow"
```

And then use it in your application:

app/routes/home.tsx

```
import type { Route } from "./+types/home";

export async function action({ context }: Route.ActionArgs) {
  const env = context.cloudflare.env;
  const instance = await env.MY_WORKFLOW.create({ params: { hello: "world" } });
  return { id: instance.id, details: await instance.status() };
}
```


---

---
title: RedwoodSDK
description: Create a RedwoodSDK application and deploy it to Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# RedwoodSDK

In this guide, you will create a new [RedwoodSDK ↗](https://rwsdk.com/) application and deploy it to Cloudflare Workers.

RedwoodSDK is a framework for building server-side web applications on Cloudflare. It is a Vite plugin that provides SSR, React Server Components, Server Functions, and realtime capabilities.

## Deploy a new RedwoodSDK application on Workers

1. **Create a new project.**  
Run the following command, replacing `my-project-name` with your desired project name:  
 npm  yarn  pnpm  
```  
npx create-rwsdk my-project-name  
```  
```  
yarn dlx create-rwsdk my-project-name  
```  
```  
pnpx create-rwsdk my-project-name  
```
2. **Change the directory.**  
Terminal window  
```  
cd my-project-name  
```
3. **Install dependencies.**  
 npm  yarn  pnpm  bun  
```  
npm install  
```  
```  
yarn install  
```  
```  
pnpm install  
```  
```  
bun install  
```
4. **Develop locally.**  
Run the following command in the project directory to start a local development server. RedwoodSDK is a Vite plugin, so you can use the same development workflow as any other Vite project:  
 npm  yarn  pnpm  
```  
npm run dev  
```  
```  
yarn run dev  
```  
```  
pnpm run dev  
```  
Access the development server in your browser at `http://localhost:5173`, where you should see "Hello, World!" displayed on the page.
5. **Add your first route.**  
The entry point of your application is `src/worker.tsx`. Open that file in your editor.  
You will see the `defineApp` function, which handles requests by returning responses to the client:  
```  
import { defineApp } from "rwsdk/worker";  
import { route, render } from "rwsdk/router";  
import { Document } from "@/app/Document";  
import { Home } from "@/app/pages/Home";  
export default defineApp([  
  render(Document, [route("/", () => new Response("Hello, World!"))]),  
]);  
```  
Add a `/ping` route handler:  
```  
import { defineApp } from "rwsdk/worker";  
import { route, render } from "rwsdk/router";  
import { Document } from "@/app/Document";  
export default defineApp([  
  render(Document, [  
    route("/", () => new Response("Hello, World!")),  
    route("/ping", function () {  
      return <h1>Pong!</h1>;  
    }),  
  ]),  
]);  
```  
Navigate to `http://localhost:5173/ping` to see "Pong!" displayed on the page.  
Routes can return JSX directly. RedwoodSDK has support for React Server Components, which renders JSX on the server and sends HTML to the client.
6. **Deploy your project.**  
You can deploy your project to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), either from your local machine or from any CI/CD system, including [Cloudflare Workers CI/CD](https://developers.cloudflare.com/workers/ci-cd/builds/).  
Use the following command to build and deploy. If you are using CI, make sure to update your [deploy command](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration accordingly.  
 npm  yarn  pnpm  
```  
npm run release  
```  
```  
yarn run release  
```  
```  
pnpm run release  
```  
The first time you run this command, it might fail and ask you to create a `workers.dev` subdomain. To create one, open the Workers page in the Cloudflare dashboard; visiting it for the first time creates the subdomain automatically.
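Conceptually, `defineApp` with `route(path, handler)` entries dispatches on the request pathname over a list of path/handler pairs. This stand-alone sketch mimics that dispatch; the helper names here are illustrative, not the rwsdk API:

```typescript
// Illustrative matcher, not the real rwsdk router: each route is a
// (path, handler) pair, and the first exact pathname match wins.
type Handler = () => string;

function route(path: string, handler: Handler): [string, Handler] {
  return [path, handler];
}

function matchRoute(routes: [string, Handler][], pathname: string): string {
  for (const [path, handler] of routes) {
    if (path === pathname) return handler();
  }
  return "404 Not Found";
}

const routes = [
  route("/", () => "Hello, World!"),
  route("/ping", () => "Pong!"),
];
```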


---

---
title: SvelteKit
description: Create a SvelteKit application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# SvelteKit

In this guide, you will create a new [SvelteKit ↗](https://svelte.dev/docs/kit/introduction) application and deploy it to Cloudflare Workers.

Already have a SvelteKit project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect SvelteKit, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Wrangler detects SvelteKit, it generates configuration similar to:

wrangler.jsonc:

* `main`: `.svelte-kit/cloudflare/_worker.js`
* `assets.directory`: `.svelte-kit/cloudflare`
* `compatibility_flags`: `["nodejs_compat"]`
* `observability.enabled`: `true`

svelte.config.js:

* `adapter`: `@sveltejs/adapter-cloudflare`

## 1. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, initiate SvelteKit's official setup tool, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new SvelteKit project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-svelte-app --framework=svelte
```

```
yarn create cloudflare my-svelte-app --framework=svelte
```

```
pnpm create cloudflare@latest my-svelte-app --framework=svelte
```

After setting up your project, change your directory by running the following command:

Terminal window

```
cd my-svelte-app
```

## 2. Develop locally

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

## 3. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

---

## Bindings

Your SvelteKit application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using product bindings. The [SvelteKit documentation ↗](https://kit.svelte.dev/docs/adapter-cloudflare#runtime-apis) provides information about configuring bindings and how you can access them in your SvelteKit hooks and endpoints.
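As a shape-only sketch, a server `load` function reads bindings from `platform.env`. The binding name `MY_VAR` and the simplified `Platform` type are assumptions; the adapter provides the real types:

```typescript
// Simplified stand-in for the `platform` object that
// @sveltejs/adapter-cloudflare passes to server code. A plain text
// var binding is shown because it is a synchronous string.
interface Platform {
  env: { MY_VAR: string };
}

export function load({ platform }: { platform: Platform }) {
  return { setting: platform.env.MY_VAR };
}
```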


---

---
title: TanStack Start
description: Deploy a TanStack Start application to Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# TanStack Start

[TanStack Start ↗](https://tanstack.com/start) is a full-stack framework for building web applications with server-side rendering, streaming, server functions, and bundling.

Already have a TanStack Start project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect TanStack Start, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Wrangler detects TanStack Start, it generates a `wrangler.jsonc` similar to:

* `main`: `.output/server/index.mjs`
* `assets.directory`: `.output/public`
* `compatibility_flags`: `["nodejs_compat"]`
* `observability.enabled`: `true`

## Create a new application

Create a TanStack Start application pre-configured for Cloudflare Workers:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-tanstack-start-app --framework=tanstack-start
```

```
yarn create cloudflare my-tanstack-start-app --framework=tanstack-start
```

```
pnpm create cloudflare@latest my-tanstack-start-app --framework=tanstack-start
```

Start a local development server to preview your project during development:

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

## Configure an existing application

If you have an existing TanStack Start application, configure it to run on Cloudflare Workers:

1. Install `@cloudflare/vite-plugin` and `wrangler`:  
 npm  yarn  pnpm  bun  
```  
npm i -D @cloudflare/vite-plugin wrangler  
```  
```  
yarn add @cloudflare/vite-plugin wrangler -D  
```  
```  
pnpm add @cloudflare/vite-plugin wrangler -D  
```  
```  
bun add @cloudflare/vite-plugin wrangler -D  
```
2. Add the Cloudflare plugin to your Vite configuration:  
vite.config.js  
```  
import { defineConfig } from "vite";  
import { tanstackStart } from "@tanstack/react-start/plugin/vite";  
import { cloudflare } from "@cloudflare/vite-plugin";  
import react from "@vitejs/plugin-react";  
export default defineConfig({  
  plugins: [  
    cloudflare({ viteEnvironment: { name: "ssr" } }),  
    tanstackStart(),  
    react(),  
  ],  
});  
```  
vite.config.ts  
```  
import { defineConfig } from "vite";  
import { tanstackStart } from "@tanstack/react-start/plugin/vite";  
import { cloudflare } from "@cloudflare/vite-plugin";  
import react from "@vitejs/plugin-react";  
export default defineConfig({  
  plugins: [  
    cloudflare({ viteEnvironment: { name: "ssr" } }),  
    tanstackStart(),  
    react(),  
  ],  
});  
```
3. Add a `wrangler.jsonc` configuration file:  
wrangler.jsonc  
```  
{  
  "$schema": "node_modules/wrangler/config-schema.json",  
  "name": "<YOUR_PROJECT_NAME>",  
  // Set this to today's date  
  "compatibility_date": "2026-04-03",  
  "compatibility_flags": ["nodejs_compat"],  
  "main": "@tanstack/react-start/server-entry",  
  "observability": {  
    "enabled": true  
  }  
}  
```  
wrangler.toml  
```  
"$schema" = "node_modules/wrangler/config-schema.json"  
name = "<YOUR_PROJECT_NAME>"  
# Set this to today's date  
compatibility_date = "2026-04-03"  
compatibility_flags = [ "nodejs_compat" ]  
main = "@tanstack/react-start/server-entry"  
[observability]  
enabled = true  
```
4. Update the `scripts` section in `package.json`:  
package.json  
```  
{  
  "scripts": {  
    "dev": "vite dev",  
    "build": "vite build",  
    "preview": "vite preview",  
    "deploy": "npm run build && wrangler deploy",  
    "cf-typegen": "wrangler types"  
  }  
}  
```

## Deploy

Deploy to a `*.workers.dev` subdomain or a [custom domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) from your machine or any CI/CD system, including [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/).

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

Note

Preview the build locally before deploying:

 npm  yarn  pnpm 

```
npm run preview
```

```
yarn run preview
```

```
pnpm run preview
```

## Custom entrypoints

TanStack Start uses `@tanstack/react-start/server-entry` as your default entrypoint. Create a custom server entrypoint to add additional Workers handlers such as [Queues](https://developers.cloudflare.com/queues/) and [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/). This is also where you can add additional exports such as [Durable Objects](https://developers.cloudflare.com/durable-objects/) and [Workflows](https://developers.cloudflare.com/workflows/).

1. Create a custom server entrypoint file:  
src/server.js  
```  
import handler from "@tanstack/react-start/server-entry";  
// Export Durable Objects as named exports  
export { MyDurableObject } from "./my-durable-object";  
export default {  
  fetch: handler.fetch,  
  // Handle Queue messages  
  async queue(batch, env, ctx) {  
    for (const message of batch.messages) {  
      console.log("Processing message:", message.body);  
      message.ack();  
    }  
  },  
  // Handle Cron Triggers  
  async scheduled(event, env, ctx) {  
    console.log("Cron triggered:", event.cron);  
  },  
};  
```  
src/server.ts  
```  
import handler from "@tanstack/react-start/server-entry";  
// Export Durable Objects as named exports  
export { MyDurableObject } from "./my-durable-object";  
export default {  
  fetch: handler.fetch,  
  // Handle Queue messages  
  async queue(batch, env, ctx) {  
    for (const message of batch.messages) {  
      console.log("Processing message:", message.body);  
      message.ack();  
    }  
  },  
  // Handle Cron Triggers  
  async scheduled(event, env, ctx) {  
    console.log("Cron triggered:", event.cron);  
  },  
};  
```
2. Update your Wrangler configuration to point to your custom entrypoint:  
wrangler.jsonc  
```  
{  
  "main": "src/server.ts"  
}  
```  
wrangler.toml  
```  
main = "src/server.ts"  
```

### Test scheduled handlers locally

Test your scheduled handler locally using the `/cdn-cgi/handler/scheduled` endpoint:

Terminal window

```
curl "http://localhost:3000/cdn-cgi/handler/scheduled?cron=*+*+*+*+*"
```

Example: Using Workflows

Export a Workflow class from your custom entrypoint to run durable, multi-step tasks:

app/server.js

```
import { WorkflowEntrypoint } from "cloudflare:workers";

export class MyWorkflow extends WorkflowEntrypoint {
  async run(event, step) {
    const result = await step.do("process data", async () => {
      return `Processed: ${event.payload.input}`;
    });

    await step.sleep("wait", "10 seconds");

    await step.do("finalize", async () => {
      console.log("Workflow complete:", result);
    });
  }
}
```

app/server.ts

```
import {
  WorkflowEntrypoint,
  type WorkflowStep,
  type WorkflowEvent,
} from "cloudflare:workers";

export class MyWorkflow extends WorkflowEntrypoint<Env> {
  async run(event: WorkflowEvent<{ input: string }>, step: WorkflowStep) {
    const result = await step.do("process data", async () => {
      return `Processed: ${event.payload.input}`;
    });

    await step.sleep("wait", "10 seconds");

    await step.do("finalize", async () => {
      console.log("Workflow complete:", result);
    });
  }
}
```

Add the Workflow configuration to your Wrangler configuration:

wrangler.jsonc

```
{
  "workflows": [
    {
      "name": "my-workflow",
      "binding": "MY_WORKFLOW",
      "class_name": "MyWorkflow"
    }
  ]
}
```

wrangler.toml

```
[[workflows]]
name = "my-workflow"
binding = "MY_WORKFLOW"
class_name = "MyWorkflow"
```

Example: Using Service Bindings

Add a service binding to call another Worker's RPC methods from your TanStack Start application:

wrangler.jsonc

```
{
  "services": [
    {
      "binding": "AUTH_SERVICE",
      "service": "auth-worker"
    }
  ]
}
```

wrangler.toml

```
[[services]]
binding = "AUTH_SERVICE"
service = "auth-worker"
```

Call the bound Worker's methods from a server function:

app/routes/index.jsx

```
import { createServerFn } from "@tanstack/react-start";
import { env } from "cloudflare:workers";

const verifyUser = createServerFn()
  .inputValidator((token) => token)
  .handler(async ({ data: token }) => {
    const result = await env.AUTH_SERVICE.verify(token);
    return result;
  });
```

app/routes/index.tsx

```
import { createServerFn } from "@tanstack/react-start";
import { env } from "cloudflare:workers";

const verifyUser = createServerFn()
  .inputValidator((token: string) => token)
  .handler(async ({ data: token }) => {
    const result = await env.AUTH_SERVICE.verify(token);
    return result;
  });
```

## Bindings

Your TanStack Start application can be fully integrated with the Cloudflare Developer Platform, in both local development and in production, by using [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).

Access bindings by [importing the env object](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global) in your server-side code:

app/routes/index.jsx

```
import { createFileRoute } from "@tanstack/react-router";
import { createServerFn } from "@tanstack/react-start";
import { env } from "cloudflare:workers";

export const Route = createFileRoute("/")({
  loader: () => getData(),
  component: RouteComponent,
});

const getData = createServerFn().handler(() => {
  // Access bindings via env
  // For example: env.MY_KV, env.MY_BUCKET, env.AI, etc.
});

function RouteComponent() {
  // ...
}
```

app/routes/index.tsx

```
import { createFileRoute } from "@tanstack/react-router";
import { createServerFn } from "@tanstack/react-start";
import { env } from "cloudflare:workers";

export const Route = createFileRoute("/")({
  loader: () => getData(),
  component: RouteComponent,
});

const getData = createServerFn().handler(() => {
  // Access bindings via env
  // For example: env.MY_KV, env.MY_BUCKET, env.AI, etc.
});

function RouteComponent() {
  // ...
}
```

Generate TypeScript types for your bindings based on your Wrangler configuration:

 npm  yarn  pnpm 

```
npm run cf-typegen
```

```
yarn run cf-typegen
```

```
pnpm run cf-typegen
```


### Use R2 in a server function

Add an [R2 bucket binding](https://developers.cloudflare.com/r2/api/workers/workers-api-usage/#4-bind-your-bucket-to-a-worker) to your Wrangler configuration:

wrangler.jsonc

```
{
  "r2_buckets": [
    {
      "binding": "MY_BUCKET",
      "bucket_name": "<YOUR_BUCKET_NAME>"
    }
  ]
}
```

wrangler.toml

```
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "<YOUR_BUCKET_NAME>"
```

Access the bucket in a server function:

app/routes/index.jsx

```
import { createServerFn } from "@tanstack/react-start";
import { env } from "cloudflare:workers";

const uploadFile = createServerFn({ method: "POST" })
  .validator((data) => data)
  .handler(async ({ data }) => {
    await env.MY_BUCKET.put(data.key, data.content);
    return { success: true };
  });

const getFile = createServerFn()
  .validator((key) => key)
  .handler(async ({ data: key }) => {
    const object = await env.MY_BUCKET.get(key);
    return object ? await object.text() : null;
  });
```

app/routes/index.tsx

```
import { createServerFn } from "@tanstack/react-start";
import { env } from "cloudflare:workers";

const uploadFile = createServerFn({ method: "POST" })
  .validator((data: { key: string; content: string }) => data)
  .handler(async ({ data }) => {
    await env.MY_BUCKET.put(data.key, data.content);
    return { success: true };
  });

const getFile = createServerFn()
  .validator((key: string) => key)
  .handler(async ({ data: key }) => {
    const object = await env.MY_BUCKET.get(key);
    return object ? await object.text() : null;
  });
```

## Static prerendering

Prerender your application to static HTML at build time and serve as [static assets](https://developers.cloudflare.com/workers/static-assets/).

vite.config.ts (the JavaScript `vite.config.js` is identical):

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";
import { tanstackStart } from "@tanstack/react-start/plugin/vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [
    cloudflare({ viteEnvironment: { name: "ssr" } }),
    tanstackStart({
      prerender: {
        enabled: true,
      },
    }),
    react(),
  ],
});
```

For more options, refer to [TanStack Start static prerendering ↗](https://tanstack.com/start/latest/docs/framework/react/guide/static-prerendering).

Note

Requires `@tanstack/react-start` v1.138.0 or later.

### Prerendering data sources

Warning

Prerendering runs at build time, using your local environment variables, secrets, and locally stored binding data.

To prerender with production data, use [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings).

In CI environments, environment variables or secrets may not be available during the build. To make them accessible:

* Set `CLOUDFLARE_INCLUDE_PROCESS_ENV=true` in your CI environment and provide the required values as environment variables.
* If using [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/), update your [build settings](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings).
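For example, in a CI pipeline you might export the flag and any required values before running the build (the secret name below is illustrative):

```shell
# Expose process.env values to the build so prerendering can read them.
export CLOUDFLARE_INCLUDE_PROCESS_ENV=true
export API_TOKEN="<your-token>"   # illustrative secret name
npm run build
```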


---

---
title: Vike
description: Create a Vike application and deploy it to Cloudflare Workers
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vike

You can deploy your [Vike ↗](https://vike.dev) app to Cloudflare using the Vike extension [vike-photon ↗](https://vike.dev/vike-photon).

All app types (SSR/SPA/SSG) are supported.

Already have a Vike project?

Run `wrangler deploy` in a project without a Wrangler configuration file and Wrangler will automatically detect Vike, generate the necessary configuration, and deploy your project.

 npm  yarn  pnpm 

```
npx wrangler deploy
```

```
yarn wrangler deploy
```

```
pnpm wrangler deploy
```

Learn more about [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/).

When Wrangler detects Vike, it generates a `wrangler.jsonc` with:

* `main`: `dist/server/index.js`
* `assets.directory`: `dist/client`
* `compatibility_flags`: `["nodejs_compat"]`
* `observability.enabled`: `true`

and then deploys your Worker.

## What is Vike?

[Vike ↗](https://vike.dev) is a Next.js/Nuxt alternative for advanced applications, powered by a modular architecture for unprecedented flexibility and stability.

## New app

Use [vike.dev/new ↗](https://vike.dev/new) to scaffold a new Vike app that uses `vike-photon` with `@photonjs/cloudflare`.

## Add to existing app

1. npm  yarn  pnpm  bun  
```  
npm i wrangler vike-photon @photonjs/cloudflare  
```  
```  
yarn add wrangler vike-photon @photonjs/cloudflare  
```  
```  
pnpm add wrangler vike-photon @photonjs/cloudflare  
```  
```  
bun add wrangler vike-photon @photonjs/cloudflare  
```
2. pages/+config.ts  
```  
import type { Config } from 'vike/types'  
import vikePhoton from 'vike-photon/config'  
export default {  
  extends: [vikePhoton]  
} satisfies Config  
```
3. package.json  
```  
{  
  "scripts": {  
    "dev": "vike dev",  
    "preview": "vike build && vike preview",  
    "deploy": "vike build && wrangler deploy"  
  }  
}  
```  
wrangler.jsonc  
```  
{  
  "$schema": "node_modules/wrangler/config-schema.json",  
  "compatibility_date": "2025-08-06",  
  "name": "my-vike-cloudflare-app",  
  "main": "virtual:photon:cloudflare:server-entry",  
  // Only required if your app depends on a Node.js API  
  "compatibility_flags": ["nodejs_compat"]  
}  
```
4. .gitignore  
```  
.wrangler/  
```
5. **(Optional)** By default, Photon uses a built-in server that supports basic features like SSR. If you need additional server functionalities (e.g. [file uploads ↗](https://hono.dev/examples/file-upload) or [API routes ↗](https://vike.dev/api-routes)), then [create your own server ↗](https://vike.dev/vike-photon#server).

## Cloudflare APIs (bindings)

To access Cloudflare APIs (such as [D1](https://developers.cloudflare.com/d1/) and [KV](https://developers.cloudflare.com/kv/)), use [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) which are available via the `env` object [imported from cloudflare:workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global).

TypeScript

```ts
import { env } from 'cloudflare:workers'

// Key-value store
env.KV.get('my-key')

// Environment variable
env.LOG_LEVEL

// ...
```

> Example of using Cloudflare D1:
> 
> npm  yarn  pnpm 
> 
> ```
> npm create vike@latest -- --react --hono --drizzle --cloudflare
> ```
> 
> ```
> yarn create vike --react --hono --drizzle --cloudflare
> ```
> 
> ```
> pnpm create vike@latest --react --hono --drizzle --cloudflare
> ```
> 
> Or go to [vike.dev/new ↗](https://vike.dev/new) and select `Cloudflare` with an ORM.

## TypeScript

If you use TypeScript, run [wrangler types](https://developers.cloudflare.com/workers/wrangler/commands/general/#types) whenever you change your Cloudflare configuration to update the `worker-configuration.d.ts` file.

 npm  yarn  pnpm 

```
npx wrangler types
```

```
yarn wrangler types
```

```
pnpm wrangler types
```

Then commit:

```sh
git commit -am "update cloudflare types"
```

Make sure TypeScript loads it:

tsconfig.json

```jsonc
{
  "compilerOptions": {
    "types": ["./worker-configuration.d.ts"]
  }
}
```

See also: [Cloudflare Workers > TypeScript](https://developers.cloudflare.com/workers/languages/typescript/).

## See also

* [Vike Docs > Cloudflare ↗](https://vike.dev/cloudflare)


---

---
title: Vue
description: Create a Vue application and deploy it to Cloudflare Workers with Workers Assets.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vue

In this guide, you will create a new [Vue ↗](https://vuejs.org/) application and deploy it to Cloudflare Workers with [Workers Assets](https://developers.cloudflare.com/workers/static-assets/).

## 1\. Set up a new project

Use the [create-cloudflare ↗](https://www.npmjs.com/package/create-cloudflare) CLI (C3) to set up a new project. C3 will create a new project directory, use code from the official Vue template, and provide the option to deploy instantly.

To use `create-cloudflare` to create a new Vue project with Workers Assets, run the following command:

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-vue-app --framework=vue
```

```
yarn create cloudflare my-vue-app --framework=vue
```

```
pnpm create cloudflare@latest my-vue-app --framework=vue
```

How is this project set up?

Below is a simplified file tree of the project.

* my-vue-app/  
   * src/  
         * App.vue  
   * server/  
         * index.ts  
   * index.html  
   * vite.config.ts  
   * wrangler.jsonc

`wrangler.jsonc` is your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). In this file:

* `main` points to `server/index.ts`. This is your Worker, which is going to act as your backend API.
* `assets.not_found_handling` is set to `single-page-application`, which means that routes handled by your Vue SPA do not invoke the Worker and are served as (free) static asset requests.
* If you want to add bindings to resources on Cloudflare's developer platform, you configure them here. Read more about [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).

`vite.config.ts` is set up to use the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). This runs your Worker in the Cloudflare Workers runtime, ensuring your local development environment is as close to production as possible.

`server/index.ts` is your backend API, which contains a single endpoint, `/api/`, that returns a text response. At `src/App.vue`, your Vue app calls this endpoint to get a message back and displays this.
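As a rough sketch (the route and response body here are assumptions for illustration, not the exact template code), the generated Worker behaves something like:

```typescript
// Illustrative sketch of the template's Worker backend.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      // The Vue app in src/App.vue fetches this endpoint and displays the result.
      return Response.json({ name: "Cloudflare" });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```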

## 2\. Develop locally with the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/)

After you have created your project, run the following command in the project directory to start a local server. This will allow you to preview your project locally during development.

 npm  yarn  pnpm 

```
npm run dev
```

```
yarn run dev
```

```
pnpm run dev
```

What's happening in local development?

This project uses Vite for local development and build, and thus comes with all of Vite's features, including hot module replacement (HMR). In addition, `vite.config.ts` is set up to use the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). This runs your application in the Cloudflare Workers runtime, just like in production, and enables access to local emulations of bindings.

## 3\. Deploy your project

Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/).

The following command will build and deploy your project. If you are using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately.

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```

---

## Asset Routing

If you're using Vue as an SPA, you will want to set `not_found_handling = "single-page-application"` in your Wrangler configuration file.
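For example, in `wrangler.jsonc` (the directory path below is illustrative):

```jsonc
{
  "assets": {
    "directory": "./dist/client",
    "not_found_handling": "single-page-application"
  }
}
```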

By default, Cloudflare first tries to match a request path against a static asset path, based on the file structure of the uploaded asset directory. This is either the directory specified by `assets.directory` in your Wrangler config or, in the case of the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), the output directory of the client build. Failing that, Cloudflare invokes a Worker if one is present. If there is no Worker, or the Worker in turn uses the asset binding, Cloudflare falls back to the behavior set by [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/#routing-behavior).

Refer to the [routing documentation](https://developers.cloudflare.com/workers/static-assets/routing/) for more information about how routing works with static assets, and how to customize this behavior.

## Use bindings with Vue

Your new project also contains a Worker at `./server/index.ts`, which you can use as a backend API for your Vue application. While your Vue application cannot directly access Workers bindings, it can interact with them through this Worker. You can make [fetch() requests](https://developers.cloudflare.com/workers/runtime-apis/fetch/) from your Vue application to the Worker, which can then handle the request and use bindings.
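For example, a minimal sketch of such a Worker endpoint, assuming a KV namespace bound as `MY_KV` (not part of the template), might look like:

```typescript
// Sketch only: assumes a KV binding named MY_KV configured in wrangler.jsonc.
interface Env {
  MY_KV: { get(key: string): Promise<string | null> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/greeting") {
      // The Vue app cannot read KV directly, so it fetches this endpoint.
      const greeting = await env.MY_KV.get("greeting");
      return Response.json({ greeting: greeting ?? "Hello from the Worker" });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Your Vue components can then call `fetch("/api/greeting")` and render the JSON response.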

With bindings, your application can be fully integrated with the Cloudflare Developer Platform, giving you access to compute, storage, AI and more.

Learn more about [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/).


---

---
title: Dashboard
description: Follow this guide to create a Workers application using the Cloudflare dashboard.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Dashboard

Follow this guide to create a Workers application using the Cloudflare dashboard.

Try the Playground

The quickest way to experiment with Cloudflare Workers is in the [Playground ↗](https://workers.cloudflare.com/playground). The Playground does not require any setup. It is an instant way to preview and test a Worker directly in the browser.

## Prerequisites

[Create a Cloudflare account](https://developers.cloudflare.com/fundamentals/account/create-account/), if you have not already.

## Setup

To get started with a new Workers application:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Select **Create application**. From here, you can:  
   * Select from the gallery of production-ready templates  
   * Import an existing Git repository on your own account  
   * Let Cloudflare clone and bootstrap a public repository containing a Workers application.
3. Once you have connected to your chosen [Git provider](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/), configure your project and select **Deploy**.
4. Cloudflare will kick off a new build and deployment. Once deployed, preview your Worker at its provided `workers.dev` subdomain.

## Continue development

Applications started in the dashboard are set up with Git to help kickstart your development workflow. To continue developing on your repository, you can run:

```sh
# Clone your repository locally
git clone <git repo URL>

# Make sure you are in the root directory
cd <directory>
```

Now, you can preview and test your changes by [running Wrangler in your local development environment](https://developers.cloudflare.com/workers/development-testing/). Once you are ready to deploy you can run:

```sh
# Add the files to git tracking
git add .

# Commit the changes
git commit -m "your message"

# Push the changes to your Git provider
git push origin main
```

To do more:

* Review our [Examples](https://developers.cloudflare.com/workers/examples/) and [Tutorials](https://developers.cloudflare.com/workers/tutorials/) for inspiration.
* Set up [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality.
* Learn how to [test and debug](https://developers.cloudflare.com/workers/testing/) your Workers.
* Read about [Workers limits and pricing](https://developers.cloudflare.com/workers/platform/).


---

---
title: CLI
description: Set up and deploy your first Worker with Wrangler, the Cloudflare Developer Platform CLI.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# CLI

Set up and deploy your first Worker with Wrangler, the Cloudflare Developer Platform CLI.

This guide will instruct you through setting up and deploying your first Worker.

## Prerequisites

1. Sign up for a [Cloudflare account ↗](https://dash.cloudflare.com/sign-up/workers-and-pages).
2. Install [Node.js ↗](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).

Node.js version manager

Use a Node version manager like [Volta ↗](https://volta.sh/) or [nvm ↗](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), discussed later in this guide, requires a Node version of `16.17.0` or later.
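For example, with nvm installed, you can switch to a current LTS release of Node.js before installing Wrangler:

```shell
# Install and activate the latest LTS release of Node.js via nvm.
nvm install --lts
nvm use --lts
node --version
```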

## 1\. Create a new Worker project

Open a terminal window and run C3 to create your Worker project. [C3 (create-cloudflare-cli) ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare.

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- my-first-worker
```

```
yarn create cloudflare my-first-worker
```

```
pnpm create cloudflare@latest my-first-worker
```

For setup, select the following options:

* For _What would you like to start with?_, choose `Hello World example`.
* For _Which template would you like to use?_, choose `Worker only`.
* For _Which language do you want to use?_, choose `JavaScript`.
* For _Do you want to use git for version control?_, choose `Yes`.
* For _Do you want to deploy your application?_, choose `No` (we will be making some changes before deploying).

Now, you have a new project set up. Move into that project folder.

```sh
cd my-first-worker
```

What files did C3 create?

In your project directory, C3 will have generated the following:

* `wrangler.jsonc`: Your [Wrangler](https://developers.cloudflare.com/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
* `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) syntax.
* `package.json`: A minimal Node dependencies configuration file.
* `package-lock.json`: Refer to [npm documentation on package-lock.json ↗](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json).
* `node_modules`: Refer to [npm documentation node\_modules ↗](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).

What if I already have a project in a git repository?

In addition to creating new projects from C3 templates, C3 also supports creating new projects from existing Git repositories. To create a new project from an existing Git repository, open your terminal and run:

```sh
npm create cloudflare@latest -- --template <SOURCE>
```

`<SOURCE>` may be any of the following:

* `user/repo` (GitHub)
* `git@github.com:user/repo`
* `https://github.com/user/repo`
* `user/repo/some-template` (subdirectories)
* `user/repo#canary` (branches)
* `user/repo#1234abcd` (commit hash)
* `bitbucket:user/repo` (Bitbucket)
* `gitlab:user/repo` (GitLab)

Your existing template folder must contain the following files, at a minimum, to meet the requirements for Cloudflare Workers:

* `package.json`
* `wrangler.jsonc` [See sample Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#sample-wrangler-configuration)
* `src/` containing a worker script referenced from `wrangler.jsonc`

## 2\. Develop with Wrangler CLI

C3 installs [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), the Workers command-line interface, in Workers projects by default. Wrangler lets you [create](https://developers.cloudflare.com/workers/wrangler/commands/general/#init), [test](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev), and [deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) your Workers projects.

After you have created your first Worker, run the [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to preview your Worker locally during development.

```sh
npx wrangler dev
```

If you have never used Wrangler before, it will open your web browser so you can log in to your Cloudflare account.

Go to [http://localhost:8787 ↗](http://localhost:8787) to view your Worker.

Browser issues?

If you have issues with this step or you do not have access to a browser interface, refer to the [wrangler login](https://developers.cloudflare.com/workers/wrangler/commands/general/#login) documentation.

## 3\. Write code

With your new project generated and running, you can begin to write and edit your code.

Find the `src/index.js` file. `index.js` will be populated with the code below:

Original index.js

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

Code explanation

This code block consists of a few different parts.

Updated index.js

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

`export default` is JavaScript syntax required for defining [JavaScript modules ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#default%5Fexports%5Fversus%5Fnamed%5Fexports). Your Worker must have a default export of an object whose properties correspond to the events your Worker should handle.

index.js

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

This [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) will be called when your Worker receives an HTTP request. You can define additional event handlers in the exported object to respond to different types of events. For example, add a [scheduled() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) to respond to Worker invocations via a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/).
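A minimal sketch of a Worker that defines both handlers (the cron expression itself is configured separately, under triggers in your Wrangler configuration):

```typescript
// Sketch: a Worker exporting both fetch() and scheduled() handlers.
const worker = {
  async fetch(request: Request): Promise<Response> {
    return new Response("Hello World!");
  },
  // Called by the runtime when a Cron Trigger fires.
  async scheduled(event: { cron: string }): Promise<void> {
    console.log(`Cron Trigger fired: ${event.cron}`);
  },
};

export default worker;
```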

Additionally, the `fetch` handler will always be passed three parameters: [request, env and context](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/).

index.js

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

The Workers runtime expects `fetch` handlers to return a `Response` object or a Promise which resolves with a `Response` object. In this example, you will return a new `Response` with the string `"Hello World!"`.

Replace the content in your current `index.js` file with the content below, which changes the text output.

index.js

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello Worker!");
  },
};
```

Then, save the file and reload the page. Your Worker's output will have changed to the new text.

No visible changes?

If the output for your Worker does not change, make sure that:

1. You saved the changes to `index.js`.
2. You have `wrangler dev` running.
3. You reloaded your browser.

## 4\. Deploy your project

Deploy your Worker via Wrangler to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/).

```sh
npx wrangler deploy
```

If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up.

Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.

Seeing 523 errors?

If you see [523 errors](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-523/) when pushing your `*.workers.dev` subdomain for the first time, wait a minute or so and the errors will resolve themselves.

## Next steps

To do more:

* Push your project to a GitHub or GitLab repository then [connect to builds](https://developers.cloudflare.com/workers/ci-cd/builds/#get-started) to enable automatic builds and deployments.
* Visit the [Cloudflare dashboard ↗](https://dash.cloudflare.com/) for simpler editing.
* Review our [Examples](https://developers.cloudflare.com/workers/examples/) and [Tutorials](https://developers.cloudflare.com/workers/tutorials/) for inspiration.
* Set up [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality.
* Learn how to [test and debug](https://developers.cloudflare.com/workers/testing/) your Workers.
* Read about [Workers limits and pricing](https://developers.cloudflare.com/workers/platform/).


---

---
title: Prompting
description: Build Workers apps with AI prompts and MCP servers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Prompting

You can create Workers applications from simple prompts in your favorite agent or editor, including Cursor, Windsurf, VS Code, Claude Code, Codex, and OpenCode.

## Teach your agent about Workers

Connect the [cloudflare-docs ↗](https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/docs-vectorize) MCP (Model Context Protocol) server to teach your agent about Workers. Add the server URL `https://docs.mcp.cloudflare.com/mcp` to your agent configuration ([learn more](https://developers.cloudflare.com/agents/model-context-protocol/mcp-servers-for-cloudflare/)).

You can also connect the [cloudflare-observability ↗](https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/workers-observability) MCP server (`https://observability.mcp.cloudflare.com/mcp`). This helps your agent check logs, look for exceptions, and automatically fix issues.
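The exact configuration shape varies by agent or editor; many MCP clients accept a JSON entry along these lines (the `cloudflare-docs` and `cloudflare-observability` keys are arbitrary labels):

```json
{
  "mcpServers": {
    "cloudflare-docs": {
      "url": "https://docs.mcp.cloudflare.com/mcp"
    },
    "cloudflare-observability": {
      "url": "https://observability.mcp.cloudflare.com/mcp"
    }
  }
}
```

Check your client's documentation for its specific MCP server configuration format.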

## Example prompts

```
Create a Cloudflare Workers application that serves as a backend API server.
```

```
Show me how to use Hyperdrive to connect my Worker to an existing Postgres database.
```

```
Create an AI chat Agent using the Cloudflare Agents SDK that responds to user messages and maintains conversation history.
```

```
Build a WebSocket-based pub/sub application using Durable Objects Hibernation APIs, where the server allows me to POST to /send-message with {topic: "foo", message: "bar"} and delivers that message to any connected client listening to that topic.
```

```
Build an image upload application using R2 pre-signed URLs that allows users to securely upload images directly to object storage without exposing bucket credentials.
```

## Use a prompt

You can use the base prompt below to provide your AI tool with context about Workers APIs and best practices.

1. Use the click-to-copy button at the top right of the code block below to copy the full prompt to your clipboard.
2. Paste into your AI tool of choice (for example OpenAI's ChatGPT or Anthropic's Claude).
3. Enter your part of the prompt at the end between the `<user_prompt>` and `</user_prompt>` tags.

Base prompt:

```

<system_context>

You are an advanced assistant specialized in generating Cloudflare Workers code. You have deep knowledge of Cloudflare's platform, APIs, and best practices.

</system_context>


<behavior_guidelines>


- Respond in a friendly and concise manner

- Focus exclusively on Cloudflare Workers solutions

- Provide complete, self-contained solutions

- Default to current best practices

- Ask clarifying questions when requirements are ambiguous


</behavior_guidelines>


<code_standards>


- Generate code in TypeScript by default unless JavaScript is specifically requested

- Add appropriate TypeScript types and interfaces

- You MUST import all methods, classes and types used in the code you generate.

- Use ES modules format exclusively (NEVER use Service Worker format)

- You SHALL keep all code in a single file unless otherwise specified

- If there is an official SDK or library for the service you are integrating with, then use it to simplify the implementation.

- Minimize other external dependencies

- Do NOT use libraries that have FFI/native/C bindings.

- Follow Cloudflare Workers security best practices

- Never bake in secrets into the code

- Include proper error handling and logging

- Include comments explaining complex logic


1334 collapsed lines

</code_standards>


<output_format>


- Use Markdown code blocks to separate code from explanations

- Provide separate blocks for:

  1. Main worker code (index.ts/index.js)

  2. Configuration (wrangler.jsonc)

  3. Type definitions (if applicable)

  4. Example usage/tests

- Always output complete files, never partial updates or diffs

- Format code consistently using standard TypeScript/JavaScript conventions


</output_format>


<cloudflare_integrations>


- When data storage is needed, integrate with appropriate Cloudflare services:

  - Workers KV for key-value storage, including configuration data, user profiles, and A/B testing

  - Durable Objects for strongly consistent state management, storage, multiplayer co-ordination, and agent use-cases

  - D1 for relational data and for its SQL dialect

  - R2 for object storage, including storing structured data, AI assets, image assets and for user-facing uploads

  - Hyperdrive to connect to existing (PostgreSQL) databases that a developer may already have

  - Queues for asynchronous processing and background tasks

  - Vectorize for storing embeddings and to support vector search (often in combination with Workers AI)

  - Workers Analytics Engine for tracking user events, billing, metrics and high-cardinality analytics

  - Workers AI as the default AI API for inference requests. If a user requests Claude or OpenAI however, use the appropriate, official SDKs for those APIs.

  - Browser Rendering for remote browser capabilities, searching the web, and using Puppeteer APIs.

  - Workers Static Assets for hosting frontend applications and static files when building a Worker that requires a frontend or uses a frontend framework such as React

- Include all necessary bindings in both code and wrangler.jsonc

- Add appropriate environment variable definitions


</cloudflare_integrations>


<configuration_requirements>


- Always provide a wrangler.jsonc (not wrangler.toml)

- Include:

  - Appropriate triggers (http, scheduled, queues)

  - Required bindings

  - Environment variables

  - Compatibility flags

  - Set compatibility_date = "2025-03-07"

  - Set compatibility_flags = ["nodejs_compat"]

  - Set `enabled = true` and `head_sampling_rate = 1` for `[observability]` when generating the wrangler configuration

  - Routes and domains (only if applicable)

  - Do NOT include dependencies in the wrangler.jsonc file

  - Only include bindings that are used in the code


<example id="wrangler.jsonc">

<code language="jsonc">

// wrangler.jsonc

{

  "name": "app-name-goes-here", // name of the app

  "main": "src/index.ts", // default file

  "compatibility_date": "2025-03-07",

  "compatibility_flags": ["nodejs_compat"], // Enable Node.js compatibility

  "observability": {

    // Enable logging by default

    "enabled": true,

    "head_sampling_rate": 1

  }

}

</code>

<key_points>


- Defines a name for the app the user is building

- Sets `src/index.ts` as the default location for main

- Sets `compatibility_flags: ["nodejs_compat"]`

- Sets `observability.enabled: true`


</key_points>

</example>

</configuration_requirements>


<security_guidelines>


- Implement proper request validation

- Use appropriate security headers

- Handle CORS correctly when needed

- Implement rate limiting where appropriate

- Follow least privilege principle for bindings

- Sanitize user inputs


</security_guidelines>
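
<example id="security_guidelines_sketch">

<description>

A minimal sketch illustrating the security guidelines above: explicit CORS preflight handling, input validation, and common security headers. The allowed origin, the header values, and the `id` query parameter are illustrative assumptions, not requirements.

</description>

<code language="typescript">

const worker = {
  async fetch(request: Request): Promise<Response> {
    // Handle CORS preflight requests explicitly when cross-origin access is needed
    if (request.method === "OPTIONS") {
      return new Response(null, {
        status: 204,
        headers: {
          "Access-Control-Allow-Origin": "https://example.com",
          "Access-Control-Allow-Methods": "GET, POST",
          "Access-Control-Allow-Headers": "Content-Type, Authorization",
        },
      });
    }

    // Validate and sanitize user input before doing any work
    const url = new URL(request.url);
    const id = url.searchParams.get("id");
    if (!id || !/^[A-Za-z0-9_-]{1,64}$/.test(id)) {
      return Response.json({ error: "Invalid or missing 'id' parameter" }, { status: 400 });
    }

    // Apply common security headers to the response
    const response = Response.json({ id });
    response.headers.set("X-Content-Type-Options", "nosniff");
    response.headers.set("Referrer-Policy", "no-referrer");
    response.headers.set("Content-Security-Policy", "default-src 'self'");
    return response;
  },
};

export default worker;

</code>

</example>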


<testing_guidance>


- Include basic test examples

- Provide curl commands for API endpoints

- Add example environment variable values

- Include sample requests and responses


</testing_guidance>


<performance_guidelines>


- Optimize for cold starts

- Minimize unnecessary computation

- Use appropriate caching strategies

- Consider Workers limits and quotas

- Implement streaming where beneficial


</performance_guidelines>
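
<example id="streaming_response_sketch">

<description>

A minimal sketch of the streaming guideline above: transform a response body chunk-by-chunk with a TransformStream instead of buffering the whole body in memory. The uppercase transform and the upstream URL are illustrative assumptions only.

</description>

<code language="typescript">

// Transform a stream chunk-by-chunk without buffering the full body
function uppercaseStream(input: ReadableStream<Uint8Array>): ReadableStream<Uint8Array> {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  return input.pipeThrough(
    new TransformStream<Uint8Array, Uint8Array>({
      transform(chunk, controller) {
        // stream: true keeps multi-byte characters split across chunks intact
        controller.enqueue(encoder.encode(decoder.decode(chunk, { stream: true }).toUpperCase()));
      },
      flush(controller) {
        const rest = decoder.decode();
        if (rest) controller.enqueue(encoder.encode(rest.toUpperCase()));
      },
    }),
  );
}

const worker = {
  async fetch(request: Request): Promise<Response> {
    const upstream = await fetch("https://example.com/");
    // Pass the transformed stream straight through to the client
    return new Response(upstream.body ? uppercaseStream(upstream.body) : null, {
      headers: { "Content-Type": "text/html; charset=utf-8" },
    });
  },
};

export default worker;

</code>

</example>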


<error_handling>


- Implement proper error boundaries

- Return appropriate HTTP status codes

- Provide meaningful error messages

- Log errors appropriately

- Handle edge cases gracefully


</error_handling>
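
<example id="error_boundary_sketch">

<description>

A minimal sketch of the error handling guidelines above: a top-level error boundary that logs the failure server-side and returns an appropriate status code with a meaningful but non-leaky message. The route names are illustrative assumptions.

</description>

<code language="typescript">

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/boom") {
    // Simulates an unexpected downstream failure
    throw new Error("simulated downstream failure");
  }
  if (url.pathname === "/missing") {
    return Response.json({ error: "Not found" }, { status: 404 });
  }
  return Response.json({ ok: true });
}

const worker = {
  async fetch(request: Request): Promise<Response> {
    try {
      return await handle(request);
    } catch (err) {
      // Log the full error server-side...
      console.error("Unhandled error:", err);
      // ...but return a generic message so internal details are not leaked
      return Response.json({ error: "Internal server error" }, { status: 500 });
    }
  },
};

export default worker;

</code>

</example>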


<websocket_guidelines>


- You SHALL use the Durable Objects WebSocket Hibernation API when providing WebSocket handling code within a Durable Object.

- Always use WebSocket Hibernation API instead of legacy WebSocket API unless otherwise specified.

- Refer to the "durable_objects_websocket" example for best practices for handling WebSockets.

- Use `this.ctx.acceptWebSocket(server)` to accept the WebSocket connection and DO NOT use the `server.accept()` method.

- Define an `async webSocketMessage()` handler that is invoked when a message is received from the client.

- Define an `async webSocketClose()` handler that is invoked when the WebSocket connection is closed.

- Do NOT use the `addEventListener` pattern to handle WebSocket events inside a Durable Object. You MUST use the `async webSocketMessage()` and `async webSocketClose()` handlers here.

- Handle WebSocket upgrade requests explicitly, including validating the Upgrade header.


</websocket_guidelines>


<agents>


- Strongly prefer the `agents` SDK to build AI Agents when asked.

- Refer to the <code_examples> for Agents.

- Use streaming responses from AI SDKs, including the OpenAI SDK, Workers AI bindings, and/or the Anthropic client SDK.

- Use the appropriate SDK for the AI service you are using, and follow the user's direction on what provider they wish to use.

- Prefer the `this.setState` API to manage and store state within an Agent, but use `this.sql` to interact directly with the Agent's embedded SQLite database when the use-case benefits from it.

- When building a client interface to an Agent, use the `useAgent` React hook from the `agents/react` library to connect to the Agent as the preferred approach.

- When extending the `Agent` class, ensure you provide the `Env` and the optional state as type parameters - for example, `class AIAgent extends Agent<Env, MyState> { ... }`.

- Include valid Durable Object bindings in the `wrangler.jsonc` configuration for an Agent.

- You MUST set the value of `migrations[].new_sqlite_classes` to the name of the Agent class in `wrangler.jsonc`.


</agents>


<code_examples>


<example id="durable_objects_websocket">

<description>

Example of using the Hibernatable WebSocket API in Durable Objects to handle WebSocket connections.

</description>


<code language="typescript">

import { DurableObject } from "cloudflare:workers";

interface Env {
  WEBSOCKET_HIBERNATION_SERVER: DurableObjectNamespace;
}

// Durable Object
export class WebSocketHibernationServer extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    // Creates two ends of a WebSocket connection.
    const webSocketPair = new WebSocketPair();
    const [client, server] = Object.values(webSocketPair);

    // Calling `acceptWebSocket()` informs the runtime that this WebSocket is to begin terminating
    // the request within the Durable Object. It has the effect of "accepting" the connection,
    // and allowing the WebSocket to send and receive messages.
    // Unlike `ws.accept()`, `this.ctx.acceptWebSocket(ws)` informs the Workers Runtime that the WebSocket
    // is "hibernatable", so the runtime does not need to pin this Durable Object to memory while
    // the connection is open. During periods of inactivity, the Durable Object can be evicted
    // from memory, but the WebSocket connection will remain open. If at some later point the
    // WebSocket receives a message, the runtime will recreate the Durable Object
    // (run the `constructor`) and deliver the message to the appropriate handler.
    this.ctx.acceptWebSocket(server);

    return new Response(null, {
      status: 101,
      webSocket: client,
    });
  }

  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): Promise<void> {
    // Upon receiving a message from the client, reply with the same message,
    // prefixed with "[Durable Object]: ", along with the
    // total number of connections.
    ws.send(
      `[Durable Object] message: ${message}, connections: ${this.ctx.getWebSockets().length}`,
    );
  }

  async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean): Promise<void> {
    // If the client closes the connection, the runtime will invoke the webSocketClose() handler.
    ws.close(code, "Durable Object is closing WebSocket");
  }

  async webSocketError(ws: WebSocket, error: unknown): Promise<void> {
    console.error("WebSocket error:", error);
    ws.close(1011, "WebSocket error");
  }
}


</code>


<configuration>

{

  "name": "websocket-hibernation-server",

  "durable_objects": {

    "bindings": [

      {

        "name": "WEBSOCKET_HIBERNATION_SERVER",

        "class_name": "WebSocketHibernationServer"

      }

    ]

  },

  "migrations": [

    {

      "tag": "v1",

      "new_classes": ["WebSocketHibernationServer"]

    }

  ]

}

</configuration>


<key_points>


- Uses the WebSocket Hibernation API instead of the legacy WebSocket API

- Calls `this.ctx.acceptWebSocket(server)` to accept the WebSocket connection

- Has a `webSocketMessage()` handler that is invoked when a message is received from the client

- Has a `webSocketClose()` handler that is invoked when the WebSocket connection is closed

- Does NOT use the `server.addEventListener` API unless explicitly requested.

- Don't over-use the "Hibernation" term in code or in bindings. It is an implementation detail.

  </key_points>

  </example>


<example id="durable_objects_alarm_example">

<description>

Example of using the Durable Object Alarm API to trigger an alarm and reset it.

</description>


<code language="typescript">

import { DurableObject } from "cloudflare:workers";

interface Env {
  ALARM_EXAMPLE: DurableObjectNamespace;
}

export default {
  async fetch(request, env) {
    let url = new URL(request.url);
    let userId = url.searchParams.get("userId") || crypto.randomUUID();
    return await env.ALARM_EXAMPLE.getByName(userId).fetch(request);
  },
};

const SECONDS = 1000;

export class AlarmExample extends DurableObject {
  constructor(ctx, env) {
    super(ctx, env);
    this.storage = ctx.storage;
  }

  async fetch(request) {
    // If there is no alarm currently set, set one for 10 seconds from now
    let currentAlarm = await this.storage.getAlarm();
    if (currentAlarm == null) {
      this.storage.setAlarm(Date.now() + 10 * SECONDS);
    }
  }

  async alarm(alarmInfo) {
    // The alarm handler will be invoked whenever an alarm fires.
    // You can use this to do work, read from the Storage API, make HTTP calls
    // and set future alarms to run using this.storage.setAlarm() from within this handler.
    if (alarmInfo?.retryCount != 0) {
      console.log(`This alarm event has been attempted ${alarmInfo?.retryCount} times before.`);
    }

    // Set a new alarm for 10 seconds from now before exiting the handler
    this.storage.setAlarm(Date.now() + 10 * SECONDS);
  }
}


</code>


<configuration>

{

  "name": "durable-object-alarm",

  "durable_objects": {

    "bindings": [

      {

        "name": "ALARM_EXAMPLE",

        "class_name": "AlarmExample"

      }

    ]

  },

  "migrations": [

    {

      "tag": "v1",

      "new_classes": ["AlarmExample"]

    }

  ]

}

</configuration>


<key_points>


- Uses the Durable Object Alarm API to trigger an alarm

- Has a `alarm()` handler that is invoked when the alarm is triggered

- Sets a new alarm for 10 seconds from now before exiting the handler

  </key_points>

  </example>


<example id="kv_session_authentication_example">

<description>

Using Workers KV to store session data and authenticate requests, with Hono as the router and middleware.

</description>


<code language="typescript">

// src/index.ts

import { Hono } from 'hono'

import { cors } from 'hono/cors'


interface Env {

AUTH_TOKENS: KVNamespace;

}


const app = new Hono<{ Bindings: Env }>()


// Add CORS middleware

app.use('*', cors())


app.get('/', async (c) => {

try {

// Get token from header or cookie

const token = c.req.header('Authorization')?.slice(7) ||

c.req.header('Cookie')?.match(/auth_token=([^;]+)/)?.[1];

if (!token) {

return c.json({

authenticated: false,

message: 'No authentication token provided'

}, 403)

}


    // Check token in KV

    const userData = await c.env.AUTH_TOKENS.get(token)


    if (!userData) {

      return c.json({

        authenticated: false,

        message: 'Invalid or expired token'

      }, 403)

    }


    return c.json({

      authenticated: true,

      message: 'Authentication successful',

      data: JSON.parse(userData)

    })


} catch (error) {

console.error('Authentication error:', error)

return c.json({

authenticated: false,

message: 'Internal server error'

}, 500)

}

})


export default app

</code>


<configuration>

{

  "name": "auth-worker",

  "main": "src/index.ts",

  "compatibility_date": "2025-02-11",

  "kv_namespaces": [

    {

      "binding": "AUTH_TOKENS",

      "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",

      "preview_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

    }

  ]

}

</configuration>


<key_points>


- Uses Hono as the router and middleware

- Uses Workers KV to store session data

- Uses the Authorization header or Cookie to get the token

- Checks the token in Workers KV

- Returns a 403 if the token is invalid or expired


</key_points>

</example>


<example id="queue_producer_consumer_example">

<description>

Use Cloudflare Queues to produce and consume messages.

</description>


<code language="typescript">

// src/index.ts

interface Env {

  REQUEST_QUEUE: Queue;

  UPSTREAM_API_URL: string;

  UPSTREAM_API_KEY: string;

}


export default {

async fetch(request: Request, env: Env) {

const info = {

timestamp: new Date().toISOString(),

method: request.method,

url: request.url,

headers: Object.fromEntries(request.headers),

};

await env.REQUEST_QUEUE.send(info);


return Response.json({

message: 'Request logged',

requestId: crypto.randomUUID()

});


},


async queue(batch: MessageBatch<any>, env: Env) {

const requests = batch.messages.map(msg => msg.body);


    const response = await fetch(env.UPSTREAM_API_URL, {

      method: 'POST',

      headers: {

        'Content-Type': 'application/json',

        'Authorization': `Bearer ${env.UPSTREAM_API_KEY}`

      },

      body: JSON.stringify({

        timestamp: new Date().toISOString(),

        batchSize: requests.length,

        requests

      })

    });


    if (!response.ok) {

      throw new Error(`Upstream API error: ${response.status}`);

    }


}

};


</code>


<configuration>

{

  "name": "request-logger-consumer",

  "main": "src/index.ts",

  "compatibility_date": "2025-02-11",

  "queues": {

        "producers": [{

      "name": "request-queue",

      "binding": "REQUEST_QUEUE"

    }],

    "consumers": [{

      "name": "request-queue",

      "dead_letter_queue": "request-queue-dlq",

      "retry_delay": 300

    }]

  },

  "vars": {

    "UPSTREAM_API_URL": "https://api.example.com/batch-logs",

    "UPSTREAM_API_KEY": ""

  }

}

</configuration>


<key_points>


- Defines both a producer and consumer for the queue

- Uses a dead letter queue for failed messages

- Uses a retry delay of 300 seconds before re-delivering failed messages

- Shows how to batch requests to an upstream API


</key_points>

</example>


<example id="hyperdrive_connect_to_postgres">

<description>

Connect to and query a Postgres database using Cloudflare Hyperdrive.

</description>


<code language="typescript">

// Postgres.js 3.4.5 or later is recommended

import postgres from "postgres";


export interface Env {

// If you set another name in the Wrangler config file as the value for 'binding',

// replace "HYPERDRIVE" with the variable name you defined.

HYPERDRIVE: Hyperdrive;

}


export default {

async fetch(request, env, ctx): Promise<Response> {


// Create a database client that connects to your database via Hyperdrive.

//

// Hyperdrive generates a unique connection string you can pass to

// supported drivers, including node-postgres, Postgres.js, and the many

// ORMs and query builders that use these drivers.

const sql = postgres(env.HYPERDRIVE.connectionString)


    try {

      // Test query

      const results = await sql`SELECT * FROM pg_tables`;


      // Return result rows as JSON

      return Response.json(results);

    } catch (e) {

      console.error(e);

      return Response.json(

        { error: e instanceof Error ? e.message : e },

        { status: 500 },

      );

    }


},

} satisfies ExportedHandler<Env>;


</code>


<configuration>

{

  "name": "hyperdrive-postgres",

  "main": "src/index.ts",

  "compatibility_date": "2025-02-11",

  "hyperdrive": [

    {

      "binding": "HYPERDRIVE",

      "id": "<YOUR_DATABASE_ID>"

    }

  ]

}

</configuration>


<usage>

// Install Postgres.js

npm install postgres


// Create a Hyperdrive configuration

npx wrangler hyperdrive create <YOUR_CONFIG_NAME> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"


</usage>


<key_points>


- Installs and uses Postgres.js as the database client/driver.

- Creates a Hyperdrive configuration using wrangler and the database connection string.

- Uses the Hyperdrive connection string to connect to the database.

- Calling `sql.end()` is optional, as Hyperdrive will handle the connection pooling.


</key_points>

</example>


<example id="workflows">

<description>

Using Workflows for durable execution, async tasks, and human-in-the-loop workflows.

</description>


<code language="typescript">

import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';


type Env = {

// Add your bindings here, e.g. Workers KV, D1, Workers AI, etc.

MY_WORKFLOW: Workflow;

};


// User-defined params passed to your workflow

type Params = {

email: string;

metadata: Record<string, string>;

};


export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {

async run(event: WorkflowEvent<Params>, step: WorkflowStep) {

// Can access bindings on `this.env`

// Can access params on `event.payload`

const files = await step.do('my first step', async () => {

// Fetch a list of files from $SOME_SERVICE

return {

files: [

'doc_7392_rev3.pdf',

'report_x29_final.pdf',

'memo_2024_05_12.pdf',

'file_089_update.pdf',

'proj_alpha_v2.pdf',

'data_analysis_q2.pdf',

'notes_meeting_52.pdf',

'summary_fy24_draft.pdf',

],

};

});


    const apiResponse = await step.do('some other step', async () => {

      let resp = await fetch('https://api.cloudflare.com/client/v4/ips');

      return await resp.json<any>();

    });


    await step.sleep('wait on something', '1 minute');


    await step.do(

      'make a call to write that could maybe, just might, fail',

      // Define a retry strategy

      {

        retries: {

          limit: 5,

          delay: '5 second',

          backoff: 'exponential',

        },

        timeout: '15 minutes',

      },

      async () => {

        // Do stuff here, with access to the state from our previous steps

        if (Math.random() > 0.5) {

          throw new Error('API call to $STORAGE_SYSTEM failed');

        }

      },

    );


}

}


export default {

async fetch(req: Request, env: Env): Promise<Response> {

let url = new URL(req.url);


    if (url.pathname.startsWith('/favicon')) {

      return Response.json({}, { status: 404 });

    }


    // Get the status of an existing instance, if provided

    let id = url.searchParams.get('instanceId');

    if (id) {

      let instance = await env.MY_WORKFLOW.get(id);

      return Response.json({

        status: await instance.status(),

      });

    }


    const data = await req.json()


    // Spawn a new instance and return the ID and status

    let instance = await env.MY_WORKFLOW.create({

      // Define an ID for the Workflow instance

      id: crypto.randomUUID(),

       // Pass data to the Workflow instance

      // Available on the WorkflowEvent

       params: data,

    });


    return Response.json({

      id: instance.id,

      details: await instance.status(),

    });


},

};


</code>


<configuration>

{

  "name": "workflows-starter",

  "main": "src/index.ts",

  "compatibility_date": "2025-02-11",

  "workflows": [

    {

      "name": "workflows-starter",

      "binding": "MY_WORKFLOW",

      "class_name": "MyWorkflow"

    }

  ]

}

</configuration>


<key_points>


- Defines a Workflow by extending the WorkflowEntrypoint class.

- Defines a run method on the Workflow that is invoked when the Workflow is started.

- Ensures that `await` is used before calling `step.do` or `step.sleep`

- Passes a payload (event) to the Workflow from a Worker

- Defines a payload type and uses TypeScript type arguments to ensure type safety


</key_points>

</example>


<example id="workers_analytics_engine">

<description>

 Using Workers Analytics Engine for writing event data.

</description>


<code language="typescript">

interface Env {
  USER_EVENTS: AnalyticsEngineDataset;
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    let url = new URL(req.url);
    let path = url.pathname;
    let userId = url.searchParams.get("userId");

    // Write a datapoint for this visit, associating the data with
    // the userId as our Analytics Engine 'index'
    env.USER_EVENTS.writeDataPoint({
      // Write metrics data: counters, gauges or latency statistics
      doubles: [],
      // Write text labels - URLs, app names, event_names, etc
      blobs: [path],
      // Provide an index that groups your data correctly.
      indexes: [userId],
    });

    return Response.json({
      hello: "world",
    });
  },
};


</code>


<configuration>

{
  "name": "analytics-engine-example",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "analytics_engine_datasets": [
    {
      "binding": "USER_EVENTS",
      "dataset": "<DATASET_NAME>"
    }
  ]
}

</configuration>


<usage>

// Query data within the 'temperatures' dataset

// This is accessible via the REST API at https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql

SELECT

    timestamp,

    blob1 AS location_id,

    double1 AS inside_temp,

    double2 AS outside_temp

FROM temperatures

WHERE timestamp > NOW() - INTERVAL '1' DAY


// List the datasets (tables) within your Analytics Engine

curl "https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql" \

--header "Authorization: Bearer <API_TOKEN>" \

--data "SHOW TABLES"


</usage>


<key_points>


- Binds an Analytics Engine dataset to the Worker

- Uses the `AnalyticsEngineDataset` type when using TypeScript for the binding

- Writes event data using the `writeDataPoint` method and writes an `AnalyticsEngineDataPoint`

- Does NOT `await` calls to `writeDataPoint`, as it is non-blocking

- Defines an index as the key representing an app, customer, merchant or tenant.

- Developers can use the GraphQL or SQL APIs to query data written to Analytics Engine

  </key_points>

  </example>


<example id="browser_rendering_workers">

<description>

Use the Browser Rendering API as a headless browser to interact with websites from a Cloudflare Worker.

</description>


<code language="typescript">

import puppeteer from "@cloudflare/puppeteer";


interface Env {

  BROWSER_RENDERING: Fetcher;

}


export default {

  async fetch(request, env): Promise<Response> {

    const { searchParams } = new URL(request.url);

    let url = searchParams.get("url");


    if (url) {

      url = new URL(url).toString(); // normalize

      const browser = await puppeteer.launch(env.BROWSER_RENDERING);

      const page = await browser.newPage();

      await page.goto(url);

      // Parse the page content

      const content = await page.content();

      // Find text within the page content

      const text = await page.$eval("body", (el) => el.textContent);

      // Do something with the text

      // e.g. log it to the console, write it to KV, or store it in a database.

      console.log(text);


      // Ensure we close the browser session

      await browser.close();


      return Response.json({

        bodyText: text,

      })

    } else {

      return Response.json({

          error: "Please add an ?url=https://example.com/ parameter"

      }, { status: 400 })

    }

  },

} satisfies ExportedHandler<Env>;

</code>


<configuration>

{

  "name": "browser-rendering-example",

  "main": "src/index.ts",

  "compatibility_date": "2025-02-11",

  "browser": {
    "binding": "BROWSER_RENDERING"
  }

}

</configuration>


<usage>

// Install @cloudflare/puppeteer

npm install @cloudflare/puppeteer --save-dev

</usage>


<key_points>


- Configures a BROWSER_RENDERING binding

- Passes the binding to Puppeteer

- Uses the Puppeteer APIs to navigate to a URL and render the page

- Parses the DOM and returns context for use in the response

- Correctly creates and closes the browser instance


</key_points>

</example>


<example id="static-assets">

<description>

Serve Static Assets from a Cloudflare Worker and/or configure a Single Page Application (SPA) to correctly handle HTTP 404 (Not Found) requests and route them to the entrypoint.

</description>

<code language="typescript">

// src/index.ts


interface Env {

  ASSETS: Fetcher;

}


export default {

  fetch(request, env) {

    const url = new URL(request.url);


    if (url.pathname.startsWith("/api/")) {

      return Response.json({

        name: "Cloudflare",

      });

    }


    return env.ASSETS.fetch(request);

  },

} satisfies ExportedHandler<Env>;

</code>

<configuration>

{

  "name": "my-app",

  "main": "src/index.ts",

  "compatibility_date": "<TBD>",

  "assets": { "directory": "./public/", "not_found_handling": "single-page-application", "binding": "ASSETS" },

  "observability": {

    "enabled": true

  }

}

</configuration>

<key_points>

- Configures an ASSETS binding

- Uses /public/ as the directory for the build output from the framework of choice

- The Worker handles any request whose path does not match a static asset, and serves as the API

- If the application is a single-page application (SPA), HTTP 404 (Not Found) requests will be directed to the SPA entrypoint.


</key_points>

</example>


<example id="agents">


<description>

Build an AI Agent on Cloudflare Workers, using the agents, and the state management and syncing APIs built into the agents.

</description>


<code language="typescript">

// src/index.ts

import { Agent, AgentNamespace, Connection, ConnectionContext, getAgentByName, routeAgentRequest, WSMessage } from 'agents';

import { OpenAI } from "openai";


interface Env {

  AIAgent: AgentNamespace<Agent>;

  OPENAI_API_KEY: string;

}


export class AIAgent extends Agent {

  // Handle HTTP requests with your Agent

  async onRequest(request) {

    // Connect with AI capabilities

    const ai = new OpenAI({

      apiKey: this.env.OPENAI_API_KEY,

    });


    // Process and understand

    const response = await ai.chat.completions.create({

      model: "gpt-4",

      messages: [{ role: "user", content: await request.text() }],

    });


    return new Response(response.choices[0].message.content);

  }


  async processTask(task) {

    await this.understand(task);

    await this.act();

    await this.reflect();

  }


  // Handle WebSockets

  async onConnect(connection: Connection) {

   await this.initiate(connection);

   connection.accept()

  }


  async onMessage(connection, message) {

    const understanding = await this.comprehend(message);

    await this.respond(connection, understanding);

  }


  async evolve(newInsight) {

      this.setState({

        ...this.state,

        insights: [...(this.state.insights || []), newInsight],

        understanding: this.state.understanding + 1,

      });

    }


  onStateUpdate(state, source) {

    console.log("Understanding deepened:", {

      newState: state,

      origin: source,

    });

  }


  // Scheduling APIs

  // An Agent can schedule tasks to be run in the future by calling this.schedule(when, callback, data), where when can be a delay, a Date, or a cron string; callback the function name to call, and data is an object of data to pass to the function.

  //

  // Scheduled tasks can do anything a request or message from a user can: make requests, query databases, send emails, read+write state: scheduled tasks can invoke any regular method on your Agent.

  async scheduleExamples() {
    // schedule a task to run in 10 seconds
    let task = await this.schedule(10, "someTask", { message: "hello" });

    // schedule a task to run at a specific date
    let dateTask = await this.schedule(new Date("2025-01-01"), "someTask", {});

    // schedule a task to run every 10 minutes
    let { id } = await this.schedule("*/10 * * * *", "someTask", { message: "hello" });

    // schedule a task to run at midnight, but only on Mondays
    let mondayTask = await this.schedule("0 0 * * 1", "someTask", { message: "hello" });

    // cancel a scheduled task
    this.cancelSchedule(task.id);

    // Get a specific schedule by ID
    // Returns undefined if the task does not exist
    let retrieved = await this.getSchedule(task.id);

    // Get all scheduled tasks
    // Returns an array of Schedule objects
    let tasks = this.getSchedules();

    // Cancel a task by its ID
    // Returns true if the task was cancelled, false if it did not exist
    await this.cancelSchedule(mondayTask.id);

    // Filter for specific tasks
    // e.g. all tasks starting in the next hour
    let upcoming = this.getSchedules({
      timeRange: {
        start: new Date(Date.now()),
        end: new Date(Date.now() + 60 * 60 * 1000),
      }
    });
  }


  async someTask(data) {

    await this.callReasoningModel(data.message);

  }


  // Use the this.sql API within the Agent to access the underlying SQLite database

   async callReasoningModel(prompt: {
     userId: string;
     user: string;
     system: string;
     metadata: Record<string, string>;
   }) {


    interface History {

      timestamp: Date;

      entry: string;

    }


    let result = this.sql<History>`SELECT * FROM history WHERE user = ${prompt.userId} ORDER BY timestamp DESC LIMIT 1000`;

    let context = [];

    for await (const row of result) {

      context.push(row.entry);

    }


    const client = new OpenAI({

      apiKey: this.env.OPENAI_API_KEY,

    });


    // Combine user history with the current prompt

    const systemPrompt = prompt.system || 'You are a helpful assistant.';

    const userPrompt = `${prompt.user}\n\nUser history:\n${context.join('\n')}`;


    try {

      const completion = await client.chat.completions.create({

        model: this.env.MODEL || 'o3-mini',

        messages: [

          { role: 'system', content: systemPrompt },

          { role: 'user', content: userPrompt },

        ],

        temperature: 0.7,

        max_tokens: 1000,

      });


      // Store the response in history

      this

        .sql`INSERT INTO history (timestamp, user, entry) VALUES (${new Date()}, ${prompt.userId}, ${completion.choices[0].message.content})`;


      return completion.choices[0].message.content;

    } catch (error) {

      console.error('Error calling reasoning model:', error);

      throw error;

    }

  }


  // Use the SQL API with a type parameter

  async queryUser(userId: string) {

    type User = {

      id: string;

      name: string;

      email: string;

    };

    // Supply the type parameter to the query when calling this.sql

    // This assumes the results returns one or more User rows with "id", "name", and "email" columns

    // You do not need to specify an array type (`User[]` or `Array<User>`) as `this.sql` will always return an array of the specified type.

    const user = await this.sql<User>`SELECT * FROM users WHERE id = ${userId}`;

    return user;

  }


  // Run and orchestrate Workflows from Agents

  async runWorkflow(data) {

     let instance = await this.env.MY_WORKFLOW.create({

       id: data.id,

       params: data,

     })


     // Schedule another task that checks the Workflow status every 5 minutes...

     await this.schedule("*/5 * * * *", "checkWorkflowStatus", { id: instance.id });

   }

}


export default {

  async fetch(request, env, ctx): Promise<Response> {

    // Three addressing styles are shown below; use only ONE of them — the first
    // return makes the later examples unreachable as written.

    // Routed addressing

    // Automatically routes HTTP requests and/or WebSocket connections to /agents/:agent/:name

    // Best for: connecting React apps directly to Agents using useAgent from @cloudflare/agents/react

    return (await routeAgentRequest(request, env)) || Response.json({ msg: 'no agent here' }, { status: 404 });


    // Named addressing

    // Best for: convenience method for creating or retrieving an agent by name/ID.

    let namedAgent = getAgentByName<Env, AIAgent>(env.AIAgent, 'agent-456');

    // Pass the incoming request straight to your Agent

    let namedResp = (await namedAgent).fetch(request);

    return namedResp;


    // Durable Objects-style addressing

    // Best for: controlling ID generation, associating IDs with your existing systems,

    // and customizing when/how an Agent is created or invoked

    const id = env.AIAgent.newUniqueId();

    const agent = env.AIAgent.get(id);

    // Pass the incoming request straight to your Agent

    let resp = await agent.fetch(request);


    // return Response.json({ hello: 'visit https://developers.cloudflare.com/agents for more' });

  },

} satisfies ExportedHandler<Env>;

</code>


<code>

// client.js

import { AgentClient } from "agents/client";


const connection = new AgentClient({

  agent: "dialogue-agent",

  name: "insight-seeker",

});


connection.addEventListener("message", (event) => {

  console.log("Received:", event.data);

});


connection.send(

  JSON.stringify({

    type: "inquiry",

    content: "What patterns do you see?",

  })

);

</code>


<code>

// app.tsx

// React client hook for the agents

import { useAgent } from "agents/react";

import { useState } from "react";


// useAgent client API

function AgentInterface() {

  const connection = useAgent({

    agent: "dialogue-agent",

    name: "insight-seeker",

    onMessage: (message) => {

      console.log("Understanding received:", message.data);

    },

    onOpen: () => console.log("Connection established"),

    onClose: () => console.log("Connection closed"),

  });


  const inquire = () => {

    connection.send(

      JSON.stringify({

        type: "inquiry",

        content: "What insights have you gathered?",

      })

    );

  };


  return (

    <div className="agent-interface">

      <button onClick={inquire}>Seek Understanding</button>

    </div>

  );

}


// State synchronization

function StateInterface() {

  const [state, setState] = useState({ counter: 0 });


  const agent = useAgent({

    agent: "thinking-agent",

    onStateUpdate: (newState) => setState(newState),

  });


  const increment = () => {

    agent.setState({ counter: state.counter + 1 });

  };


  return (

    <div>

      <div>Count: {state.counter}</div>

      <button onClick={increment}>Increment</button>

    </div>

  );

}

</code>


<configuration>

  {

  "durable_objects": {

    "bindings": [

      {

        "binding": "AIAgent",

        "class_name": "AIAgent"

      }

    ]

  },

  "migrations": [

    {

      "tag": "v1",

      // Mandatory for the Agent to store state

      "new_sqlite_classes": ["AIAgent"]

    }

  ]

}

</configuration>

<key_points>


- Imports the `Agent` class from the `agents` package

- Extends the `Agent` class and implements the methods exposed by the `Agent`, including `onRequest` for HTTP requests, or `onConnect` and `onMessage` for WebSockets.

- Uses the `this.schedule` scheduling API to schedule future tasks.

- Uses the `this.setState` API within the Agent for syncing state, and uses type parameters to ensure the state is typed.

- Uses the `this.sql` as a lower-level query API.

- For frontend applications, uses the optional `useAgent` hook to connect to the Agent via WebSockets


</key_points>

</example>


<example id="workers-ai-structured-outputs-json">

<description>

Workers AI supports structured JSON outputs with JSON mode, which supports the `response_format` API provided by the OpenAI SDK.

</description>

<code language="typescript">

import { OpenAI } from "openai";


interface Env {

  OPENAI_API_KEY: string;

}


// Define your JSON schema for a calendar event

const CalendarEventSchema = {

  type: 'object',

  properties: {

    name: { type: 'string' },

    date: { type: 'string' },

    participants: { type: 'array', items: { type: 'string' } },

  },

  required: ['name', 'date', 'participants']

};


export default {

  async fetch(request: Request, env: Env) {

    const client = new OpenAI({

      apiKey: env.OPENAI_API_KEY,

      // Optional: use AI Gateway to bring logs, evals & caching to your AI requests

      // https://developers.cloudflare.com/ai-gateway/usage/providers/openai/

      // baseURL: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai"

    });


    const response = await client.chat.completions.create({

      model: 'gpt-4o-2024-08-06',

      messages: [

        { role: 'system', content: 'Extract the event information.' },

        { role: 'user', content: 'Alice and Bob are going to a science fair on Friday.' },

      ],

      // Use the `response_format` option to request a structured JSON output

      response_format: {

        // Set json_schema and provide a schema, or json_object and parse it yourself

        type: 'json_schema',

        json_schema: {

          name: 'calendar_event',

          schema: CalendarEventSchema, // provide a schema

        },

      },

    });


    // The model returns a JSON string conforming to CalendarEventSchema; parse it

    const event = JSON.parse(response.choices[0].message.content ?? "{}");


    return Response.json({

      "calendar_event": event,

    })

  }

}

</code>

<configuration>

{

  "name": "my-app",

  "main": "src/index.ts",

  "compatibility_date": "$CURRENT_DATE",

  "observability": {

    "enabled": true

  }

}

</configuration>

<key_points>


- Defines a JSON Schema compatible object that represents the structured format requested from the model

- Sets `response_format` to `json_schema` and provides a schema to parse the response

- This could also be `json_object`, which can be parsed after the fact.

- Optionally uses AI Gateway to cache, log and instrument requests and responses between a client and the AI provider/API.


</key_points>

</example>


</code_examples>


<api_patterns>


<pattern id="websocket_coordination">

<description>

Fan-in/fan-out for WebSockets. Uses the Hibernatable WebSockets API within Durable Objects. Does NOT use the legacy addEventListener API.

</description>

<implementation>

export class WebSocketHibernationServer extends DurableObject {

  async fetch(request: Request): Promise<Response> {

    // Creates two ends of a WebSocket connection.

    const webSocketPair = new WebSocketPair();

    const [client, server] = Object.values(webSocketPair);


    // Call this to accept the WebSocket connection.

    // Do NOT call server.accept() (this is the legacy approach and is not preferred)

    this.ctx.acceptWebSocket(server);


    return new Response(null, {

      status: 101,

      webSocket: client,

    });

  }


  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): Promise<void> {

    // Invoked on each WebSocket message.

    ws.send(message);

  }


  async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean): Promise<void> {

    // Invoked when a client closes the connection.

    ws.close(code, "<message>");

  }


  async webSocketError(ws: WebSocket, error: unknown): Promise<void> {

    // Handle WebSocket errors

  }

}

</implementation>

</pattern>

</api_patterns>


<user_prompt>

{user_prompt}

</user_prompt>


```

The prompt above adopts several best practices, including:

* Using `<xml>` tags to structure the prompt
* API and usage examples for products and use cases
* Guidance on how to generate configuration (for example, `wrangler.jsonc`) as part of the model's response
* Recommendations on Cloudflare products to use for specific storage or state needs
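As a minimal sketch of how the template can be combined with your own input (the `buildPrompt` helper is hypothetical, not part of the Cloudflare docs), the prompt's `{user_prompt}` placeholder between the `<user_prompt>` tags can be spliced programmatically:

```typescript
// Hypothetical helper: replaces the content between the <user_prompt> tags
// of the prompt template with your own user prompt.
function buildPrompt(template: string, userPrompt: string): string {
  return template.replace(
    /<user_prompt>[\s\S]*?<\/user_prompt>/,
    `<user_prompt>\n${userPrompt}\n</user_prompt>`,
  );
}

// Example: a trimmed-down template containing only the user_prompt section.
const template = "<user_prompt>\n{user_prompt}\n</user_prompt>";
const prompt = buildPrompt(template, "Build a Worker that stores JSON in KV");
console.log(prompt.includes("stores JSON in KV")); // true
```

The assembled string can then be sent as a single user message, or the template can instead be passed as the `system` prompt with your request as the user message.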

### Additional uses

You can use the prompt in several ways:

* Within the user context window, with your own user prompt inserted between the `<user_prompt>` tags (**easiest**)
* As the `system` prompt for models that support system prompts
* Adding it to the prompt library or file context in your preferred IDE:  
   * Cursor: add the prompt to [your Project Rules ↗](https://docs.cursor.com/context/rules-for-ai)  
   * Zed: use [the /file command ↗](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context  
   * Windsurf: use [the @-mention command ↗](https://docs.codeium.com/chat/overview) to include a file containing the prompt to your Chat  
   * Claude Code: add the prompt to your `CLAUDE.md` configuration after running `/init` to add best practices to a Workers project  
   * GitHub Copilot: create the [.github/copilot-instructions.md ↗](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot) file at the root of your project and add the prompt
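For example, a minimal `CLAUDE.md` for a Workers project might look like the following — the `workers-prompt.txt` file name is an illustrative assumption, not a convention required by any of these tools:

```md
# Project guidance

This repository is a Cloudflare Workers project.

- Follow the Workers best practices in `workers-prompt.txt` (the system
  prompt from this page, saved at the repository root).
- Emit configuration as `wrangler.jsonc`, not `wrangler.toml`.
- Prefer TypeScript and the ES modules Worker format.
```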

Note

The prompts here are examples and should be adapted to your specific use case.

Depending on the model and user prompt, the model may generate invalid code, configuration, or other errors. Review and test the generated code before deploying it.

## Use docs in your editor

AI-enabled editors, including Cursor and Windsurf, can index documentation. Cursor includes the Cloudflare Developer Docs by default: you can use the [@Docs ↗](https://cursor.com/docs/context/mentions#docs) command.

In other editors, such as Zed or Windsurf, you can use `llms-full.txt` files to provide comprehensive documentation context for indexing. For Workers-specific documentation indexing, use [https://developers.cloudflare.com/workers/llms-full.txt ↗](https://developers.cloudflare.com/workers/llms-full.txt). For the complete Cloudflare documentation archive, use the root level [https://developers.cloudflare.com/llms-full.txt ↗](https://developers.cloudflare.com/llms-full.txt) instead.

You can also link an agent to `llms.txt` files while prompting to provide similar context without the need for offline indexing. For Workers-specific documentation, use [https://developers.cloudflare.com/workers/llms.txt ↗](https://developers.cloudflare.com/workers/llms.txt). For context covering the entire Cloudflare documentation, use the root-level [https://developers.cloudflare.com/llms.txt ↗](https://developers.cloudflare.com/llms.txt).

The _Copy Page_ button is also available on any individual page to copy that page's content directly into your model's context.

You can combine these with the Workers system prompt on this page to improve your editor or agent's understanding of the Workers APIs.

## Additional resources

To get the most out of AI models and tools, review the following guides on prompt engineering and structure:

* OpenAI's [prompt engineering ↗](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices ↗](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models.
* The [prompt engineering ↗](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic.
* Google's [quick start guide ↗](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts.
* Meta's [prompting documentation ↗](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family.
* GitHub's guide for [prompt engineering ↗](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.


---

---
title: Templates
description: GitHub repositories that are designed to be a starting point for building a new Cloudflare Workers project.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Templates

Templates are GitHub repositories that are designed to be a starting point for building a new Cloudflare Workers project. To start any of the projects below, run the command shown for your package manager.

### astro-blog-starter-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/astro-blog-starter-template)

Build a personal website, blog, or portfolio with Astro.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/astro-blog-starter-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/astro-blog-starter-template
```

```
yarn create cloudflare --template=cloudflare/templates/astro-blog-starter-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/astro-blog-starter-template
```

  
---

  
### chanfana-openapi-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/chanfana-openapi-template)

Complete backend API template using Hono + Chanfana + D1 + Vitest.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/chanfana-openapi-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/chanfana-openapi-template
```

```
yarn create cloudflare --template=cloudflare/templates/chanfana-openapi-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/chanfana-openapi-template
```

  
---

  
### cli

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/cli)

A handy CLI for developing templates.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/cli)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/cli
```

```
yarn create cloudflare --template=cloudflare/templates/cli
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/cli
```

  
---

  
### containers-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/containers-template)

Build a Container-enabled Worker.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/containers-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/containers-template
```

```
yarn create cloudflare --template=cloudflare/templates/containers-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/containers-template
```

  
---

  
### d1-starter-sessions-api-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template)

D1 starter template using the Sessions API for read replication.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/d1-starter-sessions-api-template
```

```
yarn create cloudflare --template=cloudflare/templates/d1-starter-sessions-api-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/d1-starter-sessions-api-template
```

  
---

  
### d1-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-template)

Cloudflare's native serverless SQL database.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/d1-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/d1-template
```

```
yarn create cloudflare --template=cloudflare/templates/d1-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/d1-template
```

  
---

  
### durable-chat-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/durable-chat-template)

Chat with other users in real-time using Durable Objects and PartyKit.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/durable-chat-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/durable-chat-template
```

```
yarn create cloudflare --template=cloudflare/templates/durable-chat-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/durable-chat-template
```

  
---

  
### hello-world-do-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/hello-world-do-template)

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/hello-world-do-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/hello-world-do-template
```

```
yarn create cloudflare --template=cloudflare/templates/hello-world-do-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/hello-world-do-template
```

  
---

  
### llm-chat-app-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/llm-chat-app-template)

A simple chat application powered by Cloudflare Workers AI.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/llm-chat-app-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/llm-chat-app-template
```

```
yarn create cloudflare --template=cloudflare/templates/llm-chat-app-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/llm-chat-app-template
```

  
---

  
### microfrontend-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/microfrontend-template)

Route requests to different Workers based on path patterns with automatic URL rewriting for unified microfrontend applications.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/microfrontend-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/microfrontend-template
```

```
yarn create cloudflare --template=cloudflare/templates/microfrontend-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/microfrontend-template
```

  
---

  
### multiplayer-globe-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/multiplayer-globe-template)

Display website visitor locations in real-time using Durable Objects and PartyKit.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/multiplayer-globe-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/multiplayer-globe-template
```

```
yarn create cloudflare --template=cloudflare/templates/multiplayer-globe-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/multiplayer-globe-template
```

  
---

  
### mysql-hyperdrive-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template)

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/mysql-hyperdrive-template
```

```
yarn create cloudflare --template=cloudflare/templates/mysql-hyperdrive-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/mysql-hyperdrive-template
```

  
---

  
### next-starter-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/next-starter-template)

Build a full-stack web application with Next.js.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/next-starter-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/next-starter-template
```

```
yarn create cloudflare --template=cloudflare/templates/next-starter-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/next-starter-template
```

  
---

  
### nlweb-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/nlweb-template)

Build NLWeb components with Cloudflare Workers.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/nlweb-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/nlweb-template
```

```
yarn create cloudflare --template=cloudflare/templates/nlweb-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/nlweb-template
```

  
---

  
### nodejs-http-server-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/nodejs-http-server-template)

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/nodejs-http-server-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/nodejs-http-server-template
```

```
yarn create cloudflare --template=cloudflare/templates/nodejs-http-server-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/nodejs-http-server-template
```

  
---

  
### openauth-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/openauth-template)

Deploy an OpenAuth server on Cloudflare Workers.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/openauth-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/openauth-template
```

```
yarn create cloudflare --template=cloudflare/templates/openauth-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/openauth-template
```

  
---

  
### postgres-hyperdrive-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/postgres-hyperdrive-template)

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/postgres-hyperdrive-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/postgres-hyperdrive-template
```

```
yarn create cloudflare --template=cloudflare/templates/postgres-hyperdrive-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/postgres-hyperdrive-template
```

  
---

  
### r2-explorer-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/r2-explorer-template)

A Google Drive Interface for your Cloudflare R2 Buckets!

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/r2-explorer-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/r2-explorer-template
```

```
yarn create cloudflare --template=cloudflare/templates/r2-explorer-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/r2-explorer-template
```

  
---

  
### react-postgres-fullstack-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-postgres-fullstack-template)

Deploy your own library of books using Postgres and Workers.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-postgres-fullstack-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/react-postgres-fullstack-template
```

```
yarn create cloudflare --template=cloudflare/templates/react-postgres-fullstack-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/react-postgres-fullstack-template
```

  
---

  
### react-router-hono-fullstack-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-hono-fullstack-template)

A modern full-stack template powered by Cloudflare Workers, using Hono for backend APIs, React Router for frontend routing, and shadcn/ui for accessible components styled with Tailwind CSS.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-router-hono-fullstack-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/react-router-hono-fullstack-template
```

```
yarn create cloudflare --template=cloudflare/templates/react-router-hono-fullstack-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/react-router-hono-fullstack-template
```

  
---

  
### react-router-postgres-ssr-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-postgres-ssr-template)

Deploy your own library of books using Postgres and Workers.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-router-postgres-ssr-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/react-router-postgres-ssr-template
```

```
yarn create cloudflare --template=cloudflare/templates/react-router-postgres-ssr-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/react-router-postgres-ssr-template
```

  
---

  
### react-router-starter-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-starter-template)

Build a full-stack web application with React Router 7.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-router-starter-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/react-router-starter-template
```

```
yarn create cloudflare --template=cloudflare/templates/react-router-starter-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/react-router-starter-template
```

  
---

  
### remix-starter-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/remix-starter-template)

Build a full-stack web application with Remix.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/remix-starter-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/remix-starter-template
```

```
yarn create cloudflare --template=cloudflare/templates/remix-starter-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/remix-starter-template
```

  
---

  
### saas-admin-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/saas-admin-template)

Admin dashboard template built with Astro, shadcn/ui, and Cloudflare's developer stack.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/saas-admin-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/saas-admin-template
```

```
yarn create cloudflare --template=cloudflare/templates/saas-admin-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/saas-admin-template
```

  
---

  
### text-to-image-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/text-to-image-template)

Generate images based on text prompts.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/text-to-image-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/text-to-image-template
```

```
yarn create cloudflare --template=cloudflare/templates/text-to-image-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/text-to-image-template
```

  
---

  
### to-do-list-kv-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/to-do-list-kv-template)

A simple to-do list app built with Cloudflare Workers Assets and Remix.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/to-do-list-kv-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/to-do-list-kv-template
```

```
yarn create cloudflare --template=cloudflare/templates/to-do-list-kv-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/to-do-list-kv-template
```

  
---

  
### vite-react-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/vite-react-template)

A template for building a React application with Vite, Hono, and Cloudflare Workers.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/vite-react-template)

 npm  yarn  pnpm 

```
npm create cloudflare@latest -- --template=cloudflare/templates/vite-react-template
```

```
yarn create cloudflare --template=cloudflare/templates/vite-react-template
```

```
pnpm create cloudflare@latest --template=cloudflare/templates/vite-react-template
```

  
---

  
### worker-publisher-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/worker-publisher-template)

A Cloudflare Worker template that creates and deploys Workers to a Dispatch Namespace via the Cloudflare SDK.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/worker-publisher-template)

```sh
npm create cloudflare@latest -- --template=cloudflare/templates/worker-publisher-template
```

```sh
yarn create cloudflare --template=cloudflare/templates/worker-publisher-template
```

```sh
pnpm create cloudflare@latest --template=cloudflare/templates/worker-publisher-template
```

  
---

  
### workers-builds-notifications-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/workers-builds-notifications-template)

Send Workers Builds status notifications to Slack, Discord, or any webhook via Event Subscriptions.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/workers-builds-notifications-template)

```sh
npm create cloudflare@latest -- --template=cloudflare/templates/workers-builds-notifications-template
```

```sh
yarn create cloudflare --template=cloudflare/templates/workers-builds-notifications-template
```

```sh
pnpm create cloudflare@latest --template=cloudflare/templates/workers-builds-notifications-template
```

  
---

  
### workers-for-platforms-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/workers-for-platforms-template)

Build your own website hosting platform with Workers for Platforms. Users can create and deploy sites through a simple web interface.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/workers-for-platforms-template)

```sh
npm create cloudflare@latest -- --template=cloudflare/templates/workers-for-platforms-template
```

```sh
yarn create cloudflare --template=cloudflare/templates/workers-for-platforms-template
```

```sh
pnpm create cloudflare@latest --template=cloudflare/templates/workers-for-platforms-template
```

  
---

  
### workflows-starter-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/workflows-starter-template)

Interactive starter template demonstrating Cloudflare Workflows with real-time status updates.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/workflows-starter-template)

```sh
npm create cloudflare@latest -- --template=cloudflare/templates/workflows-starter-template
```

```sh
yarn create cloudflare --template=cloudflare/templates/workflows-starter-template
```

```sh
pnpm create cloudflare@latest --template=cloudflare/templates/workflows-starter-template
```

  
---

  
### x402-proxy-template

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/x402-proxy-template)

Transparent proxy with payment-gated routes using the x402 protocol and stateless JWT authentication.

Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/x402-proxy-template)

```sh
npm create cloudflare@latest -- --template=cloudflare/templates/x402-proxy-template
```

```sh
yarn create cloudflare --template=cloudflare/templates/x402-proxy-template
```

```sh
pnpm create cloudflare@latest --template=cloudflare/templates/x402-proxy-template
```

  
---

  


---

---
title: Betas
description: Cloudflare developer platform and Workers features beta status.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Betas

These are the current alphas and betas relevant to the Cloudflare Workers platform.

* **Public alphas and betas are openly available**, but may have limitations and caveats due to their early stage of development.
* Private alphas and betas require explicit access to be granted. Refer to the documentation to join the relevant product waitlist.

| Product                                                                            | Private Beta | Public Beta | More Info                                                                     |
| ---------------------------------------------------------------------------------- | ------------ | ----------- | ----------------------------------------------------------------------------- |
| Email Workers                                                                      |              | ✅           | [Docs](https://developers.cloudflare.com/email-routing/email-workers/)        |
| Green Compute                                                                      |              | ✅           | [Blog ↗](https://blog.cloudflare.com/earth-day-2022-green-compute-open-beta/) |
| [TCP Sockets](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) |              | ✅           | [Docs](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets)    |


---

---
title: Built with Cloudflare button
description: Set up a Built with Cloudflare button
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Built with Cloudflare button

If you're building an application or website on Cloudflare, you can embed a Built with Cloudflare button in your README, blog post, or documentation.

![Built with Cloudflare](https://workers.cloudflare.com/built-with-cloudflare.svg) 

**Disambiguation**: The "Built with Cloudflare" button can be used to share that you're using Cloudflare products on your website or application. If you want people to be able to deploy your application on their own account, refer to [Deploy to Cloudflare buttons](https://developers.cloudflare.com/workers/platform/deploy-buttons).

## Set up the Built with Cloudflare button

The Built with Cloudflare button is an SVG and can be embedded anywhere. Use the following snippet to paste the button into your README, blog post, or documentation.

**Markdown**

```md
[![Built with Cloudflare](https://workers.cloudflare.com/built-with-cloudflare.svg)](https://cloudflare.com)
```

**HTML**

```html
<a href="https://cloudflare.com"><img src="https://workers.cloudflare.com/built-with-cloudflare.svg" alt="Built with Cloudflare"/></a>
```

**URL**

```
https://workers.cloudflare.com/built-with-cloudflare.svg
```


---

---
title: Changelog
description: Review recent changes to Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Changelog

This changelog details meaningful changes made to Workers across the Cloudflare dashboard, Wrangler, the API, and the workerd runtime. These changes are not configurable.

This is _different_ from [compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) and [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/), which let you explicitly opt-in to or opt-out of specific changes to the Workers Runtime.

[ Subscribe to RSS ](https://developers.cloudflare.com/workers/platform/changelog/index.xml)

## 2026-03-20

* Updated v8 to version 14.6.

## 2026-01-29

* Updated v8 to version 14.5.

## 2026-01-13

* Updated v8 to version 14.4.

## 2025-12-19

* Allow null name when creating dynamic workers.

## 2025-11-25

* Updated v8 to version 14.3.

## 2025-10-25

* The maximum WebSocket message size limit has been increased from 1 MiB to 32 MiB.

## 2025-10-22

* Warnings which were previously only visible via the devtools console in preview sessions are now also sent to the tail Worker, if one is attached.

## 2025-10-17

* Updated v8 to version 14.2.
* Backported an optimization to `JSON.parse()`. More details are [available in this blog post](https://blog.cloudflare.com/unpacking-cloudflare-workers-cpu-performance-benchmarks/#json-parsing) and [the upstream patch](https://chromium-review.googlesource.com/c/v8/v8/+/7027411).

## 2025-09-18

* Updated v8 to version 14.1.

## 2025-09-11

* The `node:fs` and Web File System APIs are now available within Workers.

## 2025-08-21

* Updated v8 to version 14.0.
* `Uint8Array` type in JavaScript now supports base64 and hex operations.
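The new base64 and hex operations can be sketched as follows. This is a hedged example: it feature-detects `toBase64()`/`toHex()` (present on newer V8-based runtimes such as the one described above) and falls back to the Node.js `Buffer` API on hosts that lack them, so the fallback path is an assumption about the host environment.

```javascript
// Feature-detect the new Uint8Array methods, falling back to Node's Buffer.
function bytesToBase64(bytes) {
  if (typeof bytes.toBase64 === "function") return bytes.toBase64();
  return Buffer.from(bytes).toString("base64"); // Node.js fallback path
}

function bytesToHex(bytes) {
  if (typeof bytes.toHex === "function") return bytes.toHex();
  return Buffer.from(bytes).toString("hex"); // Node.js fallback path
}

console.log(bytesToBase64(new Uint8Array([104, 105]))); // "aGk="
console.log(bytesToHex(new Uint8Array([104, 105])));    // "6869"
```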

## 2025-08-14

* Enable V8 Sandbox for improved isolation and security.

## 2025-08-11

* The MessageChannel and MessagePort APIs are now available in Workers.

## 2025-06-27

* Updated v8 to version 13.9.

## 2025-06-23

* Enable FinalizationRegistry API. See [We shipped FinalizationRegistry in Workers: why you should never use it](https://blog.cloudflare.com/we-shipped-finalizationregistry-in-workers-why-you-should-never-use-it/) for details.

## 2025-06-04

* Updated v8 to version 13.8.

## 2025-05-27

* Historically, in some cases, the same instance of `ctx` would be passed to multiple invocations of the event handler. We now always pass a new object for each event. We made this change retroactive to all compatibility dates because we suspect it fixes security bugs in some workers and does not break any worker. However, the old behavior can be restored using the compat flag `nonclass_entrypoint_reuses_ctx_across_invocations`.

## 2025-05-22

* Enabled explicit resource management and support for `Float16Array`.

## 2025-05-20

* Updated v8 to version 13.7.

## 2025-04-16

* Updated v8 to version 13.6.

## 2025-04-14

* The JSRPC message size limit has been increased to 32 MiB.

## 2025-04-03

* WebSocket client exceptions are now JS exceptions rather than internal errors.

## 2025-03-27

* Updated v8 to version 13.5.

## 2025-02-28

* Updated v8 to version 13.4.
* When using `nodejs_compat`, the new `nodejs_compat_populate_process_env` compatibility flag will cause `process.env` to be automatically populated with text bindings configured for the worker.

## 2025-02-26

* [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/) now supports building projects that use **pnpm 10** as the package manager. If your build previously failed due to this unsupported version, retry your build. No config changes needed.

## 2025-02-13

* [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) no longer runs Workers in the same location as D1 databases they are bound to. The same [placement logic](https://developers.cloudflare.com/workers/configuration/placement/#understand-how-smart-placement-works) now applies to all Workers that use Smart Placement, regardless of whether they use D1 bindings.

## 2025-02-11

* When Workers generate an "internal error" exception in response to certain failures, the exception message may provide a reference ID that customers can include in support communication for easier error identification. For example, an exception with the new message might look like: `internal error; reference = 0123456789abcdefghijklmn`.

## 2025-01-31

* Updated v8 to version 13.3.

## 2025-01-15

* The runtime will no longer reuse isolates across worker versions even if the code happens to be identical. This "optimization" was deemed more confusing than it is worth.

## 2025-01-14

* Updated v8 to version 13.2.

## 2024-12-19

* **Cloudflare GitHub App Permissions Update**  
   * Cloudflare is requesting updated permissions for the [Cloudflare GitHub App](https://github.com/apps/cloudflare-workers-and-pages) to enable features like automatically creating a repository on your GitHub account and deploying the new repository for you when getting started with a template. This feature is coming out soon to support a better onboarding experience.  
   * **Requested permissions:**  
         * [Repository Administration](https://docs.github.com/en/rest/authentication/permissions-required-for-github-apps?apiVersion=2022-11-28#repository-permissions-for-administration) (read/write) to create repositories.  
         * [Contents](https://docs.github.com/en/rest/authentication/permissions-required-for-github-apps?apiVersion=2022-11-28#repository-permissions-for-contents) (read/write) to push code to the created repositories.  
   * **Who is impacted:**  
         * Existing users will be prompted to update permissions when GitHub sends an email with subject "\[GitHub\] Cloudflare Workers & Pages is requesting updated permission" on December 19th, 2024.  
         * New users installing the app will see the updated permissions during the connecting repository process.  
   * **Action:** Review and accept the permissions update to use upcoming features. _If you decline or take no action, you can continue connecting repositories and deploying changes via the Cloudflare GitHub App as you do today, but new features requiring these permissions will not be available._  
   * **Questions?** Visit [#github-permissions-update](https://discord.com/channels/595317990191398933/1313895851520688163) in the Cloudflare Developers Discord.

## 2024-11-18

* Updated v8 to version 13.1.

## 2024-11-12

* Fixed an exception seen when calling `deleteAll()` during a SQLite-backed Durable Object's alarm handler.

## 2024-11-08

* Updated SQLite to version 3.47.

## 2024-10-21

* Fixed encoding of WebSocket pong messages when talking to remote servers. Previously, when a Worker made a WebSocket connection to an external server, the server may have prematurely closed the WebSocket for failure to respond correctly to pings. Client-side connections were not affected.

## 2024-10-14

* Updated v8 to version 13.0.

## 2024-09-26

* You can now connect your GitHub or GitLab repository to an existing Worker to automatically build and deploy your changes when you make a git push with [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/).

## 2024-09-20

* Workers now support the `handle_cross_request_promise_resolution` compatibility flag, which addresses certain edge cases around awaiting and resolving promises across multiple requests.

## 2024-09-19

* Revamped Workers and Pages UI settings to simplify the creation and management of project configurations. For bugs and general feedback, please submit this [form](https://forms.gle/XXqhRGbZmuzninuN9).

## 2024-09-16

* Updated v8 to version 12.9.

## 2024-08-19

* Workers now support the [allow\_custom\_ports compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#allow-specifying-a-custom-port-when-making-a-subrequest-with-the-fetch-api), which enables `fetch()` calls to custom ports.

## 2024-08-15

* Updated v8 to version 12.8.
* You can now use [Promise.try()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global%5FObjects/Promise/try) in Cloudflare Workers. Refer to [tc39/proposal-promise-try](https://github.com/tc39/proposal-promise-try) for more context on this API that has recently been added to the JavaScript language.
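A minimal sketch of what `Promise.try()` provides, with a small shim as an assumption for runtimes that predate the API: a synchronously thrown error becomes a rejection instead of escaping the call site.

```javascript
// Use the native Promise.try where available, else a shim with the same
// semantics: run fn synchronously, routing sync throws and returned
// promises into a single promise chain.
const promiseTry = typeof Promise.try === "function"
  ? Promise.try.bind(Promise)
  : (fn, ...args) => new Promise((resolve) => resolve(fn(...args)));

// The synchronous throw is captured as a rejection, not thrown at the call site.
promiseTry(() => {
  throw new Error("sync failure");
}).catch((err) => console.log(err.message)); // "sync failure"
```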

## 2024-08-14

* When using the `nodejs_compat_v2` compatibility flag, the `setImmediate(fn)` API from Node.js is now available at the global scope.
* The `internal_writable_stream_abort_clears_queue` compatibility flag will ensure that certain `WritableStream` `abort()` operations are handled immediately rather than lazily, ensuring that the stream is appropriately aborted when the consumer of the stream is no longer active.

## 2024-07-19

* Workers with the [mTLS](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls/) binding now support [Gradual Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/).

## 2024-07-18

* Added a new `truncated` flag to [Tail Worker](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) events to indicate when the event buffer is full and events are being dropped.

## 2024-07-17

* Updated v8 to version 12.7.

## 2024-07-03

* The [node:crypto](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/) implementation now includes the scrypt(...) and scryptSync(...) APIs.
* Workers now support the standard [EventSource](https://developers.cloudflare.com/workers/runtime-apis/eventsource/) API.
* Fixed a bug where writing to an HTTP Response body would sometimes hang when the client disconnected (and sometimes throw an exception). It now always throws an exception.

## 2024-07-01

* When using [Gradual Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/), you can now use [version overrides](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#version-overrides) to send a request to a specific version of your Worker.

## 2024-06-28

* Fixed a bug which caused `Date.now()` to return skewed results if called before the first I/O of the first request after a Worker first started up. The value returned would be offset backwards by the amount of CPU time spent starting the Worker (compiling and running global scope), making it seem like the first I/O (e.g. first fetch()) was slower than it really was. This skew had nothing to do with Spectre mitigations; it was simply a longstanding bug.

## 2024-06-24

* [Exceptions](https://developers.cloudflare.com/durable-objects/best-practices/error-handling) thrown from Durable Object internal operations and tunneled to the caller may now be populated with a `.retryable: true` property if the exception was likely due to a transient failure, or populated with an `.overloaded: true` property if the exception was due to [overload](https://developers.cloudflare.com/durable-objects/observability/troubleshooting/#durable-object-is-overloaded).

## 2024-06-20

* We now prompt for extra confirmation if attempting to rollback to a version of a Worker using the [Deployments API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/deployments/methods/create/) where the value of a secret is different than the currently deployed version. A `?force=true` query parameter can be specified to proceed with the rollback.

## 2024-06-19

* When using the [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/), the `buffer` module now includes implementations of the `isAscii()` and `isUtf8()` methods.
* Fixed a bug where exceptions propagated from [JS RPC](https://developers.cloudflare.com/workers/runtime-apis/rpc) calls to Durable Objects would lack the `.remote` property that exceptions from `fetch()` calls to Durable Objects have.

## 2024-06-12

* Blob and Body objects now include a new `bytes()` method, reflecting [recent](https://w3c.github.io/FileAPI/#bytes-method-algo) [additions](https://fetch.spec.whatwg.org/#dom-body-bytes) to web standards.

## 2024-06-03

* Workers with [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/) enabled now support [Gradual Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/).

## 2024-05-17

* Updated v8 to version 12.6.

## 2024-05-15

* The new [fetch\_standard\_url compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#use-standard-url-parsing-in-fetch) will become active by default on June 3rd, 2024 and ensures that URLs passed into the `fetch(...)` API, the `new Request(...)` constructor, and redirected requests will be parsed using the standard WHATWG URL parser.
* DigestStream is now more efficient and exposes a new `bytesWritten` property that indicates the number of bytes written to the digest.

## 2024-05-13

* Updated v8 to version 12.5.
* A bug in the fetch API implementation would cause the content type of a Blob to be incorrectly set. The fix is being released behind a new [blob\_standard\_mime\_type compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#properly-extract-blob-mime-type-from-content-type-headers).

## 2024-05-03

* Fixed RPC to/from Durable Objects not honoring the output gate.
* The `internal_stream_byob_return_view` compatibility flag can be used to improve the standards compliance of the `ReadableStreamBYOBReader` implementation when working with BYOB streams provided by the runtime (like in `response.body` or `request.body`). The flag ensures that the final read result will always include a `value` field whose value is set to an empty `Uint8Array` whose underlying `ArrayBuffer` is the same memory allocation as the one passed in on the call to `read()`.
* The Web platform standard `reportError(err)` global API is now available in workers. The reported error will first be emitted as an 'error' event on the global scope then reported in both the console output and tail worker exceptions by default.

## 2024-04-26

* Updated v8 to version 12.4.

## 2024-04-11

* Improved Streams API spec compliance by exposing `desiredSize` and other properties on stream class prototypes.
* The new `URL.parse(...)` method is implemented. This provides an alternative to the URL constructor that does not throw exceptions on invalid URLs.
* R2 bindings objects now have a `storageClass` option. This can be set on object upload to specify the R2 storage class - Standard or Infrequent Access. The property is also returned with object metadata.
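The `URL.parse(...)` addition above can be sketched like this; the `parseUrl` helper is a hypothetical wrapper that feature-detects `URL.parse()` and falls back to a try/catch around the URL constructor on older runtimes.

```javascript
// URL.parse() returns null on invalid input instead of throwing like
// `new URL()` does; fall back to try/catch where it is unavailable.
function parseUrl(input, base) {
  if (typeof URL.parse === "function") return URL.parse(input, base);
  try {
    return new URL(input, base);
  } catch {
    return null;
  }
}

console.log(parseUrl("not a valid url"));            // null
console.log(parseUrl("https://example.com/a").host); // "example.com"
```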

## 2024-04-05

* A new [JavaScript-native remote procedure call (RPC) API](https://developers.cloudflare.com/workers/runtime-apis/rpc) is now available, allowing you to communicate more easily across Workers and between Workers and Durable Objects.

## 2024-04-04

* There is no longer an explicit limit on the total amount of data which may be uploaded with Cache API [put()](https://developers.cloudflare.com/workers/runtime-apis/cache/#put) per request. Other [Cache API Limits](https://developers.cloudflare.com/workers/platform/limits/#cache-api-limits) continue to apply.
* The Web standard `ReadableStream.from()` API is now implemented. The API enables creating a `ReadableStream` from either a sync or an async iterable.

## 2024-04-03

* When the [brotli\_content\_encoding](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#brotli-content-encoding-support) compatibility flag is enabled, the Workers runtime now supports compressing and decompressing request bodies encoded using the [Brotli](https://developer.mozilla.org/en-US/docs/Glossary/Brotli%5Fcompression) compression algorithm. Refer to [this docs section](https://developers.cloudflare.com/workers/runtime-apis/fetch/#how-the-accept-encoding-header-is-handled) for more detail.

## 2024-04-02

* You can now [write Workers in Python](https://developers.cloudflare.com/workers/languages/python).

## 2024-04-01

* The new [unwrap\_custom\_thenables compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#handling-custom-thenables) enables workers to accept custom thenables in internal APIs that expect a promise (for instance, the `ctx.waitUntil(...)` method).
* TransformStreams created with the TransformStream constructor now have a cancel algorithm that is called when the stream is canceled or aborted. This change is part of the implementation of the WHATWG Streams standard.
* The [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) now includes an implementation of the [MockTracker API from node:test](https://nodejs.org/api/test.html#class-mocktracker). This is not an implementation of the full `node:test` module, and mock timers are currently not included.
* Exceptions reported to [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) now include a "stack" property containing the exception's stack trace, if available.

## 2024-03-11

* Built-in APIs that return Promises will now produce stack traces when the Promise rejects. Previously, the rejection error lacked a stack trace.
* A new compat flag `fetcher_no_get_put_delete` removes the `get()`, `put()`, and `delete()` methods on service bindings and Durable Object stubs. This will become the default as of compatibility date 2024-03-26. These methods were designed as simple convenience wrappers around `fetch()`, but were never documented.
* Updated v8 to version 12.3.

## 2024-02-24

* Updated v8 to version 12.2.
* You can now use [Iterator helpers](https://v8.dev/features/iterator-helpers) in Workers.
* You can now use [new methods on Set](https://github.com/tc39/proposal-set-methods), such as `Set.intersection` and `Set.union`, in Workers.
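The new Set methods can be sketched as follows. This is a hedged example: the helpers feature-detect `Set.prototype.union` and `Set.prototype.intersection`, with spread-based fallbacks as an assumption for runtimes that predate them.

```javascript
// Prefer the native Set methods; otherwise emulate them with spreads.
function union(a, b) {
  return typeof a.union === "function" ? a.union(b) : new Set([...a, ...b]);
}

function intersection(a, b) {
  return typeof a.intersection === "function"
    ? a.intersection(b)
    : new Set([...a].filter((x) => b.has(x)));
}

const evens = new Set([2, 4, 6]);
const small = new Set([1, 2, 3]);
console.log([...union(evens, small)].sort()); // [1, 2, 3, 4, 6]
console.log([...intersection(evens, small)]); // [2]
```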

## 2024-02-23

* Sockets now support an [opened](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#socket) attribute.
* [Durable Object alarm handlers](https://developers.cloudflare.com/durable-objects/api/alarms/#alarm) now impose a maximum wall time of 15 minutes.

## 2023-12-04

* The Web Platform standard [navigator.sendBeacon(...) API](https://developers.cloudflare.com/workers/runtime-apis/web-standards#navigatorsendbeaconurl-data) is now provided by the Workers runtime.
* V8 updated to 12.0.

## 2023-10-30

* A new usage model called [Workers Standard](https://developers.cloudflare.com/workers/platform/pricing/#workers) is available for Workers and Pages Functions pricing. This is now the default usage model for accounts that are first upgraded to the Workers Paid plan. Read the [blog post](https://blog.cloudflare.com/workers-pricing-scale-to-zero/) for more information.
* The usage model set in a script's wrangler.toml will be ignored after an account has opted-in to [Workers Standard](https://developers.cloudflare.com/workers/platform/pricing/#workers) pricing. It must be configured through the dashboard (Workers & Pages > Select your Worker > Settings > Usage Model).
* Workers and Pages Functions on the Standard usage model can set custom [CPU limits](https://developers.cloudflare.com/workers/wrangler/configuration/#limits) for their Workers.

## 2023-10-20

* Added the [crypto\_preserve\_public\_exponent](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#webcrypto-preserve-publicexponent-field) compatibility flag to correct the wrong type used in the `algorithm` field of RSA keys in the WebCrypto API.

## 2023-10-18

* The limit of 3 Cron Triggers per Worker has been removed. Account-level limits on the total number of Cron Triggers across all Workers still apply.

## 2023-10-12

* A [TCP Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/)'s WritableStream now ensures the connection has opened before resolving the promise returned by `close`.

## 2023-10-09

* The Web Platform standard [CustomEvent class](https://dom.spec.whatwg.org/#interface-customevent) is now available in Workers.
* Fixed a bug in the WebCrypto API where the `publicExponent` field of the algorithm of RSA keys would have the wrong type. Use the [crypto\_preserve\_public\_exponent compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#webcrypto-preserve-publicexponent-field) to enable the new behavior.
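A minimal sketch of `CustomEvent` in action. The fallback class is an assumption for hosts without the global; dispatch is synchronous, so the listener runs before `dispatchEvent()` returns.

```javascript
// Use the global CustomEvent where available; otherwise a tiny fallback
// that carries a `detail` payload on a plain Event subclass.
const CustomEventImpl = globalThis.CustomEvent ?? class extends Event {
  constructor(type, options = {}) {
    super(type, options);
    this.detail = options.detail ?? null;
  }
};

const target = new EventTarget();
let received = null;
target.addEventListener("deploy", (event) => {
  received = event.detail;
});
target.dispatchEvent(new CustomEventImpl("deploy", { detail: { ok: true } }));
console.log(received); // { ok: true }
```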

## 2023-09-14

* An implementation of the [node:crypto](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/) API from Node.js is now available when the [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) is enabled.

## 2023-07-14

* An implementation of the [util.MIMEType](https://nodejs.org/api/util.html#class-utilmimetype) API from Node.js is now available when the [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) is enabled.

## 2023-07-07

* An implementation of the [process.env](https://developers.cloudflare.com/workers/runtime-apis/nodejs/process) API from Node.js is now available when using the `nodejs_compat` compatibility flag.
* An implementation of the [diagnostics\_channel](https://developers.cloudflare.com/workers/runtime-apis/nodejs/diagnostics-channel) API from Node.js is now available when using the `nodejs_compat` compatibility flag.

## 2023-06-22

* Added the [strict\_crypto\_checks](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#strict-crypto-error-checking) compatibility flag to enable additional [Web Crypto API](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/) error and security checking.
* Fixed a regression in the [TCP Sockets API](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) where `connect("google.com:443")` would fail with a `TypeError`.

## 2023-06-19

* The [TCP Sockets API](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) now reports clearer errors when a connection cannot be established.
* Updated V8 to 11.5.

## 2023-06-09

* `AbortSignal.any()` is now available.
* Updated V8 to 11.4.
* Following an update to the [WHATWG URL spec](https://url.spec.whatwg.org/#interface-urlsearchparams), the `delete()` and `has()` methods of the `URLSearchParams` class now accept an optional second argument to specify the search parameter’s value. This is potentially a breaking change, so it is gated behind the new `urlsearchparams_delete_has_value_arg` and [url\_standard](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#new-url-parser-implementation) compatibility flags.
* Added the [strict\_compression\_checks](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#strict-compression-error-checking) compatibility flag for additional [DecompressionStream](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#compression-streams) error checking.
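The `AbortSignal.any()` addition above can be sketched as follows; the `anySignal` helper is a hypothetical shim for runtimes that predate the API. The combined signal aborts as soon as any source signal does.

```javascript
// Prefer native AbortSignal.any(); otherwise forward the first abort
// (and its reason) from any source signal onto a fresh controller.
function anySignal(signals) {
  if (typeof AbortSignal.any === "function") return AbortSignal.any(signals);
  const controller = new AbortController();
  for (const s of signals) {
    if (s.aborted) {
      controller.abort(s.reason);
      break;
    }
    s.addEventListener("abort", () => controller.abort(s.reason), { once: true });
  }
  return controller.signal;
}

const userAbort = new AbortController();
const combined = anySignal([userAbort.signal, AbortSignal.timeout(60_000)]);

console.log(combined.aborted); // false
userAbort.abort(new Error("user cancelled"));
console.log(combined.aborted); // true
```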

## 2023-05-26

* A new [Hibernatable WebSockets API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/) (beta) has been added to [Durable Objects](https://developers.cloudflare.com/durable-objects/). The Hibernatable WebSockets API allows a Durable Object that is not currently running an event handler (for example, processing a WebSocket message or alarm) to be removed from memory while keeping its WebSockets connected (“hibernation”). A Durable Object that hibernates will not incur billable Duration (GB-sec) charges.

## 2023-05-16

* The [new connect() method](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) allows you to connect to any TCP-based service directly from your Workers. To learn more about other protocols supported on the Workers platform, visit the [new Protocols documentation](https://developers.cloudflare.com/workers/reference/protocols/).
* We have added new [native database integrations](https://developers.cloudflare.com/workers/databases/native-integrations/) for popular serverless database providers, including Neon, PlanetScale, and Supabase. Native integrations automatically handle the process of creating a connection string and adding it as a Secret to your Worker.
* You can now also connect directly to databases over TCP from a Worker, starting with [PostgreSQL](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/). Support for PostgreSQL is based on the popular `pg` driver, and allows you to connect to any PostgreSQL instance over TLS from a Worker directly.
* The [R2 Migrator](https://developers.cloudflare.com/r2/data-migration/) (Super Slurper), which automates the process of migrating from existing object storage providers to R2, is now Generally Available.

## 2023-05-15

* [Cursor](https://developers.cloudflare.com/workers/ai/), an experimental AI assistant, trained to answer questions about Cloudflare's Developer Platform, is now available to preview! Cursor can answer questions about Workers and the Cloudflare Developer Platform, and is itself built on Workers. You can read more about Cursor in the [announcement blog](https://blog.cloudflare.com/introducing-cursor-the-ai-assistant-for-docs/).

## 2023-05-12

* The [performance.now()](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) and [performance.timeOrigin](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin) APIs can now be used in Cloudflare Workers. Just like `Date.now()`, for [security reasons](https://developers.cloudflare.com/workers/reference/security-model/) time only advances after I/O.

## 2023-05-05

* The new `nodeJsCompatModule` type can be used with a Worker bundle to emulate a Node.js environment. Common Node.js globals such as `process` and `Buffer` will be present, and `require('...')` can be used to load Node.js built-ins without the `node:` specifier prefix.
* Fixed an issue where WebSocket connections would be disconnected when updating Workers. Now, only WebSockets connected to Durable Objects are disconnected by updates to that Durable Object’s code.

## 2023-04-28

* The Web Crypto API now supports curves Ed25519 and X25519 defined in the Secure Curves specification.
* The global `connect` method has been moved to a `cloudflare:sockets` module.

## 2023-04-14

* No externally-visible changes this week.

## 2023-04-10

* `URL.canParse(...)` is a new standard API for testing that an input string can be parsed successfully as a URL without the additional cost of creating and throwing an error.
* The Workers-specific `IdentityTransformStream` and `FixedLengthStream` classes now support specifying a `highWaterMark` for the writable-side that is used for backpressure signaling using the standard `writer.desiredSize`/`writer.ready` mechanisms.
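`URL.canParse()` replaces the try/catch pattern previously required to validate a URL string. A minimal sketch of the before and after:

```js
// Before: the only way to validate was to construct and catch.
function canParseLegacy(input, base) {
  try {
    new URL(input, base);
    return true;
  } catch {
    return false;
  }
}

// Now: same answer, with no exception machinery.
console.log(URL.canParse("https://example.com/path"));     // true
console.log(URL.canParse("not a url"));                    // false
console.log(URL.canParse("/path", "https://example.com")); // true (relative input + base)
```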

## 2023-03-24

* Fixed a bug in Wrangler tail and live logs on the dashboard that prevented the Administrator Read-Only and Workers Tail Read roles from successfully tailing Workers.

## 2023-03-09

* No externally-visible changes.

## 2023-03-06

* [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/#limits) now supports 300 characters per log line. This is an increase from the previous limit of 150 characters per line.

## 2023-02-06

* Fixed a bug where transferring large request bodies to a Durable Object was unexpectedly slow.
* Previously, an error would be thrown when trying to access unimplemented standard `Request` and `Response` properties. Now those will be left as `undefined`.

## 2023-01-31

* The [request.cf](https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties) object now includes two additional properties, `tlsClientHelloLength` and `tlsClientRandom`.

## 2023-01-13

* Durable Objects can now use jurisdictions with `idFromName` via a new subnamespace API.
* V8 updated to 10.9.


---

---
title: Workers (Historic)
description: Review pre-2023 changes to Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers (Historic)

This page tracks changes made to Cloudflare Workers before 2023. For more recent updates, refer to the [current changelog](https://developers.cloudflare.com/workers/platform/changelog/).

## 2022-12-16

* Conditional `PUT` requests have been fixed in the R2 bindings API.

## 2022-12-02

* Queues no longer support calling `send()` with an undefined JavaScript value as the message.

## 2022-11-30

* The DOMException constructor has been updated to align better with the standard specification. Specifically, the message and name arguments can now be any JavaScript value that is coercible into a string (previously, passing non-string values would throw).
* Extended the R2 binding API to include support for multipart uploads.

## 2022-11-17

* V8 update: 10.6 → 10.8

## 2022-11-02

* Implemented `toJSON()` for R2Checksums so that it is usable with `JSON.stringify()`.

## 2022-10-21

* The alarm retry limit no longer applies to errors caused by Cloudflare's own infrastructure, as opposed to errors in user code.
* Compatibility dates have been added for multiple flags including the new streams implementation.
* `DurableObjectStorage` has a new method `sync()` that provides a way for a Worker to wait for its writes (including those performed with `allowUnconfirmed`) to be synchronized with storage.

## 2022-10-10

* Fixed a bug where if an ES-modules-syntax script exported an array-typed value from the top-level module, the upload API would refuse it with a [500 error ↗](https://community.cloudflare.com/t/community-tip-fixing-error-500-internal-server-error/44453).
* `console.log` now prints more information about certain objects, for example Promises.
* The Workers Runtime is now built from the Open Source code in: [GitHub - cloudflare/workerd: The JavaScript / Wasm runtime that powers Cloudflare Workers ↗](https://github.com/cloudflare/workerd).

## 2022-09-16

* R2 `put` bindings options can now have an `onlyIf` field similar to `get` that does a conditional upload.
* Allow deleting multiple keys at once in R2 bindings.
* Added support for SHA-1, SHA-256, SHA-384, SHA-512 checksums in R2 `put` options.
* User-specified object checksums will now be available in the R2 `get/head` bindings response. MD5 is included by default for non-multipart uploaded objects.
* Updated V8 to 10.6.

## 2022-08-12

* A `Headers` object with the `range` header can now be used for range within `R2GetOptions` for the `get` R2 binding.
* When headers are used for `onlyIf` within `R2GetOptions` for the `get` R2 binding, they now correctly compare against the second granularity. This allows correctly round-tripping to the browser and back. Additionally, `secondsGranularity` is now an option that can be passed into options constructed by hand to specify this when constructing outside Headers for the same effect.
* Fixed the TypeScript type of `DurableObjectState.id` in [@cloudflare/workers-types ↗](https://github.com/cloudflare/workers-types) to always be a `DurableObjectId`.
* Validation errors during Worker upload for module scripts now include correct line and column numbers.
* Bugfix: profiling tools and flame graphs in Chrome’s DevTools now report information properly.

## 2022-07-08

* Workers Usage Report and Workers Weekly Summary have been disabled due to scaling issues with the service.

## 2022-06-24

* `wrangler dev` in global network preview mode now supports scheduling alarms.
* R2 GET requests made with the `range` option now contain the returned range in the `GetObject`’s `range` parameter.
* Some Web Cryptography API error messages include more information now.
* Updated V8 from 10.2 to 10.3.

## 2022-06-18

* Cron trigger events on Worker scripts using the old `addEventListener` syntax are now treated as failing if there is no event listener registered for `scheduled` events.
* The `durable_object_alarms` flag no longer needs to be explicitly provided to use DO alarms.

## 2022-06-09

* No externally-visible changes.

## 2022-06-03

* It is now possible to create standard `TransformStream` instances that can perform transformations on the data. Because this changes the behavior of the default `new TransformStream()` with no arguments, the `transformstream_enable_standard_constructor` compatibility flag is required to enable.
* Preview in Quick Edit now correctly uses the correct R2 bindings.
* Updated V8 from 10.1 to 10.2.

## 2022-05-26

* The static `Response.json()` method can be used to initialize a Response object with a JSON-serialized payload (refer to [whatwg/fetch #1392 ↗](https://github.com/whatwg/fetch/pull/1392)).
* R2 exceptions being thrown now have the `error` code appended in the message in parenthesis. This is a stop-gap until we are able to explicitly add the code property on the thrown `Error` object.
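The static `Response.json()` method folds the usual serialize-and-set-header boilerplate into one call. A minimal sketch of the before and after:

```js
// Before: serialize and set the content type by hand.
const manual = new Response(JSON.stringify({ ok: true }), {
  headers: { "content-type": "application/json" },
});

// Now: one call does both, and still accepts the usual init options.
const auto = Response.json({ ok: true }, { status: 201 });

console.log(auto.status);                      // 201
console.log(auto.headers.get("content-type")); // "application/json"
```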

## 2022-05-19

* R2 bindings: `contentEncoding`, `contentLanguage`, and `cacheControl` are now correctly rendered.
* ReadableStream `pipeTo` and `pipeThrough` now support cancellation using `AbortSignal`.
* Calling `setAlarm()` in a DO with no `alarm()` handler implemented will now throw instead of failing silently. Calling `getAlarm()` when no `alarm()` handler is currently implemented will return null, even if an alarm was previously set on an old version of the DO class, as no execution will take place.
* R2: Better runtime support for additional ranges.
* R2 bindings now support ranges that have an `offset` and an optional `length`, a `length` and an optional `offset`, or a `suffix` (returns the last `N` bytes of a file).

## 2022-05-12

* Fix R2 bindings saving cache-control under content-language and rendering cache-control under content-language.
* Fix R2 bindings list without options to use the default list limit instead of never returning any results.
* Fix R2 bindings which did not correctly handle error messages from R2, resulting in `internal error` being thrown. Also fix behavior for get throwing an exception on a non-existent key instead of returning null. `R2Error` is removed for the time being and will be reinstated at some future time TBD.
* R2 bindings: if the onlyIf condition results in a precondition failure or a not modified result, the object is returned without a body instead of returning null.
* R2 bindings: sha1 is removed as an option because it was not actually hooked up to anything. TBD on additional checksum options beyond md5.
* Added `startAfter` option to the `list()` method in the Durable Object storage API.

## 2022-05-05

* `Response.redirect(url)` will no longer coalesce multiple consecutive slash characters appearing in the URL’s path.
* Fix generated types for Date.
* Fix R2 bindings list without options to use the default list limit instead of never returning any results.
* Fix R2 bindings that did not correctly handle error messages from R2, resulting in `internal error` being thrown. Also fix behavior for `get` throwing an exception on a non-existent key instead of returning null. `R2Error` is removed for the time being and will be reinstated at some future time TBD.

## 2022-04-29

* Minor V8 update: 10.0 → 10.1.
* R2 public beta bindings are now the default, regardless of compatibility date or flags. Internal beta bindings customers should transition to public beta bindings as soon as possible. A backward compatibility flag is available if this is not immediately possible. After some lag, new scripts carrying the `r2_public_beta_bindings` compatibility flag will no longer be publishable until that flag is removed.

## 2022-04-22

* Major V8 update: 9.9 → 10.0.

## 2022-04-14

* Performance and stability improvements.

## 2022-04-08

* The AES-GCM implementation that is part of the Web Cryptography API now returns a friendlier error explaining that 0-length IVs are not allowed.
* R2 error responses now include better details.

## 2022-03-24

* A new compatibility flag has been introduced, `minimal_subrequests`, which removes some features that were unintentionally being applied to same-zone `fetch()` calls. The flag will default to enabled on Tuesday, 2022-04-05, and is described in [Workers minimal\_subrequests compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#minimal-subrequests).
* When creating a `Response` with JavaScript-backed ReadableStreams, the `Body` mixin functions (for example, `await response.text()`) are now implemented.
* The `IdentityTransformStream` creates a byte-oriented `TransformStream` implementation that simply passes bytes through unmodified. The readable half of the `TransformStream` supports BYOB-reads. It is important to note that `IdentityTransformStream` is identical to the current non-spec compliant `TransformStream` implementation, which will be updated soon to conform to the WHATWG Stream Standard. All current uses of `new TransformStream()` should be replaced with `new IdentityTransformStream()` to avoid potentially breaking changes later.
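A minimal sketch of a `Body` mixin method consuming a JavaScript-backed `ReadableStream` (runnable in Workers or any runtime with the standard fetch and streams classes):

```js
const encoder = new TextEncoder();

// Build a JavaScript-backed ReadableStream producing two byte chunks.
function makeStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue(encoder.encode("hello, "));
      controller.enqueue(encoder.encode("stream"));
      controller.close();
    },
  });
}

// The Body mixin methods (text(), arrayBuffer(), json(), ...) now
// consume such a stream.
new Response(makeStream()).text().then((text) => {
  console.log(text); // "hello, stream"
});
```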

## 2022-03-17

* The standard [ByteLengthQueuingStrategy ↗](https://developer.mozilla.org/en-US/docs/Web/API/ByteLengthQueuingStrategy) and [CountQueuingStrategy ↗](https://developer.mozilla.org/en-US/docs/Web/API/CountQueuingStrategy) classes are now available.
* When the `capture_async_api_throws` flag is set, built-in Cloudflare-specific and Web Platform Standard APIs that return Promises will no longer throw errors synchronously and will instead return rejected promises. An exception is made for fatal errors, such as out-of-memory errors, which are still thrown synchronously.
* Fix R2 publish date rendering.
* Fix R2 bucket binding `get` populating `contentRange` with garbage. `contentRange` is now undefined as intended.
* When using JavaScript-backed `ReadableStream`, it is now possible to use those streams with `new Response()`.

## 2022-03-11

* Fixed a bug where the key size was not counted when determining how many write units to charge for a Durable Object single-key `put()`. This may result in future writes costing one write unit more than past writes when the key is large enough to bump the total write size up above the next billing unit threshold of 4096 bytes. Multi-key `put()` operations have always properly counted the key size when determining billable write units.
* Implementations of `CompressionStream` and `DecompressionStream` are now available.
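A minimal sketch of the two new classes, round-tripping a string through gzip (assumes a runtime where `Blob`, `Response`, and the compression stream classes are global, as in Workers or Node 18+):

```js
// Compress, then immediately decompress, a string using the new
// CompressionStream and DecompressionStream transform streams.
async function gzipRoundtrip(text) {
  const compressed = new Blob([text])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  const decompressed = compressed.pipeThrough(new DecompressionStream("gzip"));
  // Response is a convenient way to collect a byte stream as text.
  return new Response(decompressed).text();
}

gzipRoundtrip("compress me").then((out) => {
  console.log(out); // "compress me"
});
```

Both classes also accept `"deflate"` and (in runtimes that support it) `"deflate-raw"` as the format argument.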

## 2022-03-04

* Initial pipeTo/pipeThrough support on ReadableStreams constructed using the new `ReadableStream()` constructor is now available.
* With the `global_navigator` compatibility flag set, the `navigator.userAgent` property can be used to detect when code is running within the Workers environment.
* A bug in the new URL implementation was fixed when setting the value of a `URLSearchParam`.
* The global `addEventListener` and dispatchEvent APIs are now available when using module syntax.
* An implementation of `URLPattern` is now available.

## 2022-02-25

* The `TextDecoder` class now supports the full range of text encodings defined by the WHATWG Encoding Standard.
* Both global `fetch()` and durable object `fetch()` now throw a TypeError when they receive a WebSocket in response to a request without the “Upgrade: websocket” header.
* Durable Objects users may now store up to 50 GB of data across the objects in their account by default. As before, if you need more storage than that you can contact us for an increase.

## 2022-02-18

* `TextDecoder` now supports Windows-1252 labels (aka ASCII): [Encoding API Encodings - Web APIs | MDN ↗](https://developer.mozilla.org/en-US/docs/Web/API/Encoding%5FAPI/Encodings).
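A small sketch of the windows-1252 behavior (assumes a runtime whose `TextDecoder` carries the full Encoding Standard label table, such as Workers or a full-ICU Node build):

```js
// Windows-1252 maps bytes 0x80-0x9F to printable characters that
// Latin-1 leaves as control codes, e.g. 0x93/0x94 are curly quotes.
const decoder = new TextDecoder("windows-1252");
const bytes = new Uint8Array([0x93, 0x48, 0x69, 0x94]); // "Hi" wrapped in curly quotes
console.log(decoder.decode(bytes)); // "“Hi”"

// Per the Encoding Standard, legacy labels such as "ascii" and
// "latin1" are aliases that resolve to windows-1252.
console.log(new TextDecoder("ascii").encoding); // "windows-1252"
```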

## 2022-02-11

* WebSocket message sends were erroneously not respecting Durable Object output gates as described in the [I/O gate blog post ↗](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/). That bug has now been fixed, meaning that WebSockets will now never send a message under the assumption that a storage write has succeeded unless that write actually has succeeded.

## 2022-02-05

* Fixed bug causing WebSockets to Durable Objects to occasionally hang when the script implementing both a Worker and a Durable Object is re-deployed with new code.
* `crypto.getRandomValues` now supports BigInt64Array and BigUint64Array.
* A new spec-compliant implementation of the standard URL API is available. Use the `url_standard` feature flag to enable it.

## 2022-01-28

* No user-visible changes.

## 2022-01-20

* Updated V8: 9.7 → 9.8.

## 2022-01-17

* `HTMLRewriter` now supports inspecting and modifying end tags, not just start tags.
* Fixed bug where Durable Objects experiencing a transient CPU overload condition would cause in-progress requests to be unable to return a response (appearing as an indefinite hang from the client side), even after the overload condition clears.

## 2022-01-07

* The `workers_api_getters_setters_on_prototype` configuration flag corrects the way Workers attaches property getters and setters to API objects so that they can be properly subclassed.

## 2021-12-22

* Async iteration (using `for` and `await`) on instances of `ReadableStream` is now available.
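With async iteration, the explicit reader loop collapses into `for await...of`. A minimal sketch:

```js
// ReadableStream instances are now async iterable, so consuming one
// no longer requires getReader() and a manual read() loop.
async function collect(stream) {
  const chunks = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  return chunks;
}

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("a");
    controller.enqueue("b");
    controller.close();
  },
});

collect(stream).then((chunks) => {
  console.log(chunks); // ["a", "b"]
});
```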

## 2021-12-10

* Raised the max value size in Durable Object storage from 32 KiB to 128 KiB.
* `AbortSignal.timeout(delay)` returns an `AbortSignal` that will be triggered after the given number of milliseconds.
* Preview implementations of the new `ReadableStream` and new `WritableStream` constructors are available behind the `streams_enable_constructors` feature flag.
* `crypto.DigestStream` is a non-standard extension to the crypto API that supports generating a hash digest from streaming data. The `DigestStream` itself is a `WritableStream` that does not retain the data written into it; instead, it generates a digest hash automatically when the flow of data has ended. The same hash algorithms supported by `crypto.subtle.digest()` are supported by the `crypto.DigestStream`.
* Added early support for the `scheduler.wait()` API, which is [going through the WICG standardization process ↗](https://github.com/WICG/scheduling-apis), to provide an `await`\-able alternative to `setTimeout()`.
* Fixed bug in `deleteAll` in Durable Objects containing more than 10000 keys that could sometimes cause incomplete data deletion and/or hangs.
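A minimal sketch of `AbortSignal.timeout()`, which removes the usual `AbortController`-plus-`setTimeout` bookkeeping:

```js
// AbortSignal.timeout() produces a signal that aborts itself after
// the given number of milliseconds.
const signal = AbortSignal.timeout(50);
console.log(signal.aborted); // false (not yet expired)

signal.addEventListener("abort", () => {
  // The abort reason is a DOMException named "TimeoutError".
  console.log(signal.reason.name); // "TimeoutError"
});
```

The resulting signal is most commonly passed straight to `fetch(url, { signal })` to bound a subrequest's duration.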

## 2021-12-02

* The Streams spec requires that methods returning promises must not throw synchronous errors. As part of the effort of making the Streams implementation more spec compliant, we are converting a number of sync throws to async rejections.
* Major V8 update: 9.6 → 9.7\. See [V8 release v9.7 · V8 ↗](https://v8.dev/blog/v8-release-97) for more details.

## 2021-11-19

* Durable Object stubs that receive an overload exception will be permanently broken to match the behavior of other exception types.
* Fixed issue where preview service claimed Let’s Encrypt certificates were expired.
* [structuredClone() ↗](https://developer.mozilla.org/en-US/docs/Web/API/structuredClone) is now supported.
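A minimal sketch of `structuredClone()`, which deep-copies values that a `JSON.stringify` round trip would mangle:

```js
// structuredClone() preserves Dates, Maps, Sets, typed arrays, and
// deeply nested structure, unlike JSON-based copying.
const original = {
  when: new Date(0),
  tags: new Map([["a", 1]]),
  nested: { list: [1, 2, 3] },
};

const copy = structuredClone(original);

console.log(copy.nested !== original.nested); // true (a real deep copy)
console.log(copy.when instanceof Date);       // true (type preserved)
console.log(copy.tags.get("a"));              // 1  (Map survives the clone)
```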

## 2021-11-12

* The `AbortSignal` object has a new `reason` property indicating the reason for the cancellation. The reason can be specified when the `AbortSignal` is triggered or created.
* Unhandled rejection warnings will be printed to the inspector console.

## 2021-11-05

* Upgrade to V8 9.6. This adds support for WebAssembly reference types. Refer to [V8 release v9.6 · V8 ↗](https://v8.dev/blog/v8-release-96) for more details.
* Streams: When using the BYOB reader, the `ArrayBuffer` of the provided TypedArray should be detached, per the Streams spec. Because Workers was not previously enforcing that rule, and changing to comply with the spec could break existing code, a new compatibility flag, [streams\_byob\_reader\_detaches\_buffer ↗](https://github.com/cloudflare/cloudflare-docs/pull/2644), has been introduced that will be enabled by default on 2021-11-10. User code should never try to reuse an `ArrayBuffer` that has been passed in to a BYOB reader's `read()` method. The more recently added extension method `readAtLeast()` will always detach the `ArrayBuffer` and is unaffected by the compatibility flag setting.

## 2021-10-21

* Added support for the `signal` option in `EventTarget.addEventListener()`, to remove an event listener in response to an `AbortSignal`.
* The `unhandledrejection` and `rejectionhandled` events are now supported.
* The `ReadableStreamDefaultReader` and `ReadableStreamBYOBReader` constructors are now supported.
* Added non-standard `ReadableStreamBYOBReader` method `.readAtLeast(size, buffer)` that can be used to return a buffer with at least `size` bytes. The `buffer` parameter must be an `ArrayBufferView`. Behavior is identical to `.read()` except that at least `size` bytes are read, only returning fewer if EOF is encountered. One final call to `.readAtLeast()` is still needed to get back a `done = true` value.
* The compatibility flags `formdata_parser_supports_files`, `fetch_refuses_unknown_protocols`, and `durable_object_fetch_requires_full_url` have been scheduled to be turned on by default as of 2021-11-03, 2021-11-10, and 2021-11-10, respectively. For more details, refer to [Compatibility Dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/).
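A minimal sketch of the `signal` option, which ties a listener's lifetime to an `AbortSignal` instead of requiring a matching `removeEventListener` call:

```js
const target = new EventTarget();
const controller = new AbortController();
let received = 0;

// Registering with a signal means aborting it unregisters the listener.
target.addEventListener("ping", () => received++, {
  signal: controller.signal,
});

target.dispatchEvent(new Event("ping")); // received becomes 1
controller.abort();                      // listener is removed
target.dispatchEvent(new Event("ping")); // received stays 1

console.log(received); // 1
```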

## 2021-10-14

* `request.signal` will always return an `AbortSignal`.
* Cloudflare Workers’ integration with Chrome DevTools profiling now more accurately reports the line numbers and time elapsed. Previously, the line numbers were shown as one line later than the actual code, and the time shown would be proportional but much longer than the actual time used.
* Upgrade to V8 9.5. Refer to [V8 release v9.5 · V8 ↗](https://v8.dev/blog/v8-release-95) for more details.

## 2021-09-24

* The `AbortController` and `AbortSignal` objects are now available.
* The Web Platform `queueMicrotask` API is now available.
* It is now possible to use new `EventTarget()` and to create custom `EventTarget` subclasses.
* The `once` option is now supported on `addEventListener` to register event handlers that will be invoked only once.
* Per the HTML specification, a listener passed in to the `addEventListener` function is allowed to either be a function or an object with a `handleEvent` member function. Previously, Workers only supported the function option, now it supports both.
* The `Event` object now supports most standard methods and properties.
* V8 updated from 9.3 to 9.4.
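Two of the new `EventTarget` behaviors, object listeners with `handleEvent` and the `once` option, can be sketched together:

```js
const target = new EventTarget();
let calls = 0;

// An object listener: dispatch invokes its handleEvent() method,
// matching the HTML specification.
const listener = {
  handleEvent(event) {
    calls++;
    console.log(event.type); // "ready"
  },
};

// once: true unregisters the listener after its first invocation.
target.addEventListener("ready", listener, { once: true });
target.dispatchEvent(new Event("ready"));
target.dispatchEvent(new Event("ready")); // no effect; already removed

console.log(calls); // 1
```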

## 2021-09-03

* The `crypto.randomUUID()` method can be used to generate a new random version 4 UUID.
* Durable Objects are now scheduled more evenly around a colocation (colo).

## 2021-08-05

* No user-facing changes. Just bug fixes & internal maintenance.

## 2021-07-30

* Fixed a hang in Durable Objects when reading more than 16MB of data at once (for example, with a large `list()` operation).
* Added a new compatibility flag `html_rewriter_treats_esi_include_as_void_tag` which causes `HTMLRewriter` to treat `<esi:include>` and `<esi:comment>` as void tags, such that they are considered to have neither an end tag nor nested content. To opt a worker into the new behavior, you must use Wrangler v1.19.0 or newer and specify the flag in `wrangler.toml`. Refer to the [Wrangler compatibility flag notes ↗](https://github.com/cloudflare/wrangler-legacy/pull/2009) for details.

## 2021-07-23

* Performance and stability improvements.

## 2021-07-16

* Workers can now make up to 1000 subrequests to Durable Objects from within a single request invocation, up from the prior limit of 50.
* Major changes to Durable Objects implementation, the details of which will be the subject of an upcoming blog post. In theory, the changes should not harm existing apps, except to make them faster. Let your account team know if you observe anything unusual or report your issue in the [Workers Discord ↗](https://discord.cloudflare.com).
* Durable Object constructors may now initiate I/O, such as `fetch()` calls.
* Added Durable Objects `state.blockConcurrencyWhile()` API useful for delaying delivery of requests and other events while performing some critical state-affecting task. For example, this can be used to perform start-up initialization in an object’s constructor.
* In Durable Objects, the callback passed to `storage.transaction()` can now return a value, which will be propagated as the return value of the `transaction()` call.

## 2021-07-13

* The preview service now prints a warning in the devtools console when a script uses `Response/Request.clone()` but does not read one of the cloned bodies. Such a situation forces the runtime to buffer the entire message body in memory, which reduces performance. [Find an example here ↗](https://cloudflareworkers.com/#823fbe463bfafd5a06bcfeabbdf5eeae:https://tutorial.cloudflareworkers.com).

## 2021-07-01

* Fixed bug where registering the same exact event listener method twice on the same event type threw an internal error.
* Added support for the `.forEach()` method on `Headers`, `URLSearchParams`, and `FormData`.

## 2021-06-27

* WebCrypto: Implemented non-standard Ed25519 operation (algorithm NODE-ED25519, curve name NODE-ED25519). The Ed25519 implementation differs from Node.js’s in that raw import/export of private keys is disallowed, for parity with ECDSA/ECDH.

## 2021-06-17

Changes this week:

* Updated V8 from 9.1 to 9.2.
* `wrangler tail` now works on Durable Objects. Note that logs from long-lived WebSockets will not be visible until the WebSocket is closed.

## 2021-06-11

Changes this week:

* Turn on V8 Sparkplug compiler.
* Durable Objects that are finishing up existing requests after their code is updated will be disconnected from the persistent storage API, to maintain the invariant that only a single instance ever has access to persistent storage for a given Durable Object.

## 2021-06-04

Changes this week:

* WebCrypto: We now support the “raw” import/export format for ECDSA/ECDH public keys.
* `request.cf` is no longer missing when writing Workers using modules syntax.

## 2021-05-14

Changes this week:

* Improve error messages coming from the WebCrypto API.
* Updated V8: 9.0 → 9.1

Changes in an earlier release:

* WebCrypto: Implement JWK export for RSA, ECDSA, & ECDH.
* WebCrypto: Add support for RSA-OAEP
* WebCrypto: HKDF implemented.
* Fix recently-introduced backwards clock jumps in Durable Objects.
* `WebCrypto.generateKey()`, when asked to generate a key pair with algorithm RSA-PSS, would instead return a key pair using algorithm RSASSA-PKCS1-v1_5. Although the key structure is the same, the signature algorithms differ, so signatures generated using the key would not be accepted by a correct implementation of RSA-PSS, and vice versa. Since this would be an obvious problem, and no one ever reported it, we assume no one is currently using this functionality on Workers.

## 2021-04-29

Changes this week:

* WebCrypto: Implemented `wrapKey()` / `unwrapKey()` for AES algorithms.
* The arguments to `WebSocket.close()` are now optional, as the standard says they should be.

## 2021-04-23

Changes this week:

* In the WebCrypto API, encrypt and decrypt operations are now supported for the “AES-CTR” encryption algorithm.
* For Durable Objects, CPU time limits are now enforced on the object level rather than the request level. Each time a new request arrives, the time limit is “topped up” to 500ms. After the (free) beta period ends and Durable Objects becomes generally available, we will increase this to 30 seconds.
* When a Durable Object exceeds its CPU time limit, the entire object will be discarded and recreated. Previously, we allowed subsequent requests to continue using the same object, but this was dangerous because hitting the CPU time limit can leave the object in an inconsistent state.
* Long running Durable Objects are given more subrequest quota as additional WebSocket messages are sent to them, to avoid the problem of a long-running Object being unable to make any more subrequests after it has been held open by a particular WebSocket for a while.
* When a Durable Object’s code is updated, or when its isolate is reset due to exceeding the memory limit, all stubs pointing to the object will become invalidated and have to be recreated. This is consistent with what happens when the CPU time is exceeded, or when stubs become disconnected due to random network errors. This behavior is useful, as apps can now assume that two messages sent to the same stub will be delivered to exactly the same live instance (if they are delivered at all). Apps which do not care about this property should recreate their stubs for every request; there is no performance penalty from doing so.
* When a Durable Object’s isolate exceeds its memory limit, an exception with an explanatory message will now be thrown to the caller, instead of “internal error”.
* When a Durable Object exceeds its CPU time limit, an exception with an explanatory message will now be thrown to the caller, instead of “internal error”.
* `wrangler tail` now reports CPU-time-exceeded exceptions with an explanatory message instead of “internal error”.

## 2021-04-19

Changes since the last post on 3/26:

* Cron Triggers now have a 15 minute wall time limit, in addition to the existing CPU time limit. (Previously, there was no limit, so a cron trigger that spent all its time waiting for I/O could hang forever.)
* Our WebCrypto implementation now supports importing and exporting HMAC and AES keys in JWK format.
* Our WebCrypto implementation now supports AES key generation for CTR, CBC, and KW modes. AES-CTR encrypt/decrypt and AES-KW key wrapping/unwrapping support will land in a later release.
* Fixed bug where `crypto.subtle.encrypt()` on zero-length inputs would sometimes throw an exception.
* Errors on script upload will now be properly reported for module-based scripts, instead of appearing as a ReferenceError.
* WebCrypto: Key derivation for ECDH.
* WebCrypto: Support ECDH key generation & import.
* WebCrypto: Support ECDSA key generation.
* Improved exception messages thrown by the WebCrypto API somewhat.
* `waitUntil` is now supported for module Workers. An additional argument, `ctx`, is passed after `env`, and `waitUntil` is a method on `ctx`.
* `passThroughOnException` is now available under the `ctx` argument to module handlers.
* Reliability improvements for Durable Objects
* Reliability improvements for Durable Objects persistent storage API
* `ScheduledEvent.cron` is now set to the original cron string that the event was scheduled for.

## 2021-03-26

Changes this week:

* Existing WebSocket connections to Durable Objects will now be forcibly disconnected on code updates, in order to force clients to connect to the instance running the new code.

## 2021-03-11

New this week:

* When the Workers Runtime itself reloads due to us deploying a new version or config change, we now preload high-traffic Workers in the new instance of the runtime before traffic cuts over. This ensures that users do not observe cold starts for these Workers due to the upgrade, and also fixes a low rate of spurious 503 errors that we had previously been seeing due to overload during such reloads.

(It looks like no release notes were posted the last few weeks, but there were no new user-visible changes to report.)

## 2021-02-11

Changes this week:

* In the preview mode of the dashboard, a Worker that fails during startup will now return a 500 response, rather than getting the default passthrough behavior, which was making it harder to notice when a Worker was failing.
* A Durable Object’s ID is now provided to it in its constructor. It can be accessed off of the `state` provided as the constructor’s first argument, as in `state.id`.
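As a sketch, `state.id` is the documented part here; the class name and response body are illustrative:

```javascript
// Durable Object sketch: the runtime passes a `state` object whose `id`
// property is this object's own ID.
class Counter {
  constructor(state, env) {
    this.state = state;
    this.id = state.id; // available since this release
  }

  async fetch(request) {
    return new Response(`handled by object ${this.id}`);
  }
}
```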

## 2021-02-05

New this week:

* V8 has been updated from 8.8 to 8.9.
* During a `fetch()`, if the destination server commits certain HTTP protocol errors, such as returning invalid (unparsable) headers, we now throw an exception whose description explains the problem, rather than an “internal error”.

New last week (forgot to post):

* Added support for `waitUntil()` in Durable Objects. It is a method on the state object passed to the Durable Object class’s constructor.

## 2021-01-22

New in the past week:

* Fixed a bug which caused scripts with WebAssembly modules to hang when using devtools in the preview service.

## 2021-01-14

Changes this week:

* Implemented File and Blob APIs, which can be used when constructing FormData in outgoing requests. Unfortunately, FormData from incoming requests at this time will still use strings even when file metadata was present, in order to avoid breaking existing deployed Workers. We will find a way to fix that in the future.
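For example, a Worker can now build multipart bodies like the following (a sketch; the endpoint URL, field name, and file name are made up):

```javascript
// Build a multipart/form-data body with a file part.
const fd = new FormData();
fd.append("upload", new Blob(["hello"], { type: "text/plain" }), "hello.txt");

const request = new Request("https://example.com/upload", {
  method: "POST",
  body: fd, // the runtime sets the multipart boundary automatically
});
```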

## 2021-01-07

Changes this week:

* No user-visible changes.

Changes in the prior release:

* Fixed delivery of WebSocket “error” events.
* Fixed a rare bug where a WritableStream could be garbage collected while it still had writes queued, causing those writes to be lost.

## 2020-12-10

Changes this week:

* Major V8 update: 8.7.220.29 -> 8.8.278.8

## 2019-09-19

Changes this week:

* Unannounced new feature. (Stay tuned.)
* Enforced new limit on concurrent subrequests (see below).
* Stability improvements.

**Concurrent Subrequest Limit**

As of this release, we impose a limit on the number of outgoing HTTP requests that a Worker can make simultaneously. **For each incoming request**, a Worker can make up to 6 concurrent outgoing `fetch()` requests.

If a Worker’s request handler attempts to call `fetch()` more than six times (on behalf of a single incoming request) without waiting for previous fetches to complete, then fetches after the sixth will be delayed until previous fetches have finished. A Worker is still allowed to make up to 50 total subrequests per incoming request, as before; the new limit is only on how many can execute simultaneously.

**Automatic deadlock avoidance**

Our implementation automatically detects if delaying a fetch would cause the Worker to deadlock, and prevents the deadlock by cancelling the least-recently-used request. For example, imagine a Worker that starts 10 requests and waits to receive all the responses without reading the response bodies. A fetch is not considered complete until the response body is fully-consumed (for example, by calling `response.text()` or `response.json()`, or by reading from `response.body`). Therefore, in this scenario, the first six requests will run and their response objects would be returned, but the remaining four requests would not start until the earlier responses are consumed. If the Worker fails to actually read the earlier response bodies and is still waiting for the last four requests, then the Workers Runtime will automatically cancel the first four requests so that the remaining ones can complete. If the Worker later goes back and tries to read the response bodies, exceptions will be thrown.
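In code, the safe pattern under the new limit is to consume each response body as soon as it arrives, which marks that fetch complete and frees its connection. A minimal sketch (the helper name is ours; `fetchImpl` is injectable only so the sketch is testable outside the Workers runtime):

```javascript
// Fetch many URLs while staying friendly to the 6-concurrent-subrequest
// limit: read each body promptly so its connection is released.
async function fetchAllText(urls, fetchImpl = fetch) {
  return Promise.all(
    urls.map(async (url) => {
      const res = await fetchImpl(url);
      // Consuming the body completes the fetch from the runtime's view.
      return res.text();
    })
  );
}
```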

**Most Workers are Not Affected**

The vast majority of Workers make fewer than six outgoing requests per incoming request. Such Workers are totally unaffected by this change.

Of Workers that do make more than six outgoing requests concurrently for a single incoming request, the vast majority either read the response bodies immediately upon each response returning, or never read the response bodies at all. In either case, these Workers will still work as intended – although they may be a little slower due to outgoing requests after the sixth being delayed.

A very small number of deployed Workers (about 20 total) make more than 6 requests concurrently, wait for all responses to return, and then go back to read the response bodies later. For all known Workers that do this, we have temporarily grandfathered your zone into the old behavior, so that your Workers will continue to operate. However, we will be communicating with customers one-by-one to request that you update your code to proactively read request bodies, so that it works correctly under the new limit.

**Why did we do this?**

Cloudflare communicates with origin servers using HTTP/1.1, not HTTP/2. Under HTTP/1.1, each concurrent request requires a separate connection. So, Workers that make many requests concurrently could force the creation of an excessive number of connections to origin servers. In some cases, this caused resource exhaustion problems either at the origin server or within our own stack.

On investigating the use cases for such Workers, every case we looked at turned out to be a mistake or otherwise unnecessary. Often, developers were making requests and receiving responses, but they only cared about the response status and headers but not the body. So, they threw away the response objects without reading the body, essentially leaking connections. In some other cases, developers had simply accidentally written code that made excessive requests in a loop for no good reason at all. Both of these cases should now cause no problems under the new behavior.

We chose the limit of 6 concurrent connections based on the fact that Chrome enforces the same limit on web sites in the browser.

## 2020-12-04

Changes this week:

* Durable Objects storage API now supports listing keys by prefix.
* Improved error message when a single request performs more than 1000 KV operations to make clear that a per-request limit was reached, not a global rate limit.
* `wrangler dev` previews should now honor non-default resource limits, for example, longer CPU limits for those in the Workers Unbound beta.
* Fixed off-by-one line numbers in Worker exceptions.
* Exceptions thrown in a Durable Object’s `fetch()` method are now tunneled to its caller.
* Fixed a bug where a large Durable Object response body could cause the Durable Object to become unresponsive.

## 2020-11-13

Changes over the past week:

* `ReadableStream.cancel()` and `ReadableStream.getReader().cancel()` now take an optional, instead of a mandatory, argument, to conform with the Streams spec.
* Fixed an error that occurred when a WASM module declared that it wanted to grow larger than 128MB. Instead, the actual memory usage of the module is monitored and an error is thrown if it exceeds 128MB used.

## 2020-11-05

Changes this week:

* Major V8 update: 8.6 -> 8.7
* Limit the maximum number of Durable Objects keys that can be changed in a single transaction to 128.

## 2020-10-05

We had our usual weekly release last week, but:

* No user-visible changes.

## 2020-09-24

Changes this week:

* Internal changes to support upcoming features.

Also, a change from the 2020-09-08 release that it seems we forgot to post:

* V8 major update: 8.5 -> 8.6

## 2020-08-03

Changes last week:

* Fixed a regression which could cause `HTMLRewriter.transform()` to throw spurious “The parser has stopped.” errors.
* Upgraded V8 from 8.4 to 8.5.

## 2020-07-09

Changes this week:

* Fixed a regression in HTMLRewriter: [https://github.com/cloudflare/lol-html/issues/50 ↗](https://github.com/cloudflare/lol-html/issues/50)
* Common HTTP method names passed to `fetch()` or `new Request()` are now case-insensitive as required by the Fetch API spec.

Changes last week (… forgot to post):

* `setTimeout`/`setInterval` can now take additional arguments which will be passed on to the callback, as required by the spec. (Few people use this feature today because it’s usually much easier to use lambda captures.)
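For instance (the values are arbitrary):

```javascript
// Arguments after the delay are forwarded to the callback, per the spec.
// Equivalent to the lambda-capture style most code uses today.
function addLater(a, b) {
  return new Promise((resolve) => {
    setTimeout((x, y) => resolve(x + y), 10, a, b);
  });
}
```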

Changes the week before last (… also… forgot to post… we really need to code up a bot for this):

* The HTMLRewriter now supports the `:nth-child`, `:first-child`, `:nth-of-type`, and `:first-of-type` selectors.

## 2020-05-15

Changes this week:

* Implemented API for yet-to-be-announced new feature.

## 2020-04-20

Looks like we forgot to post release notes for a couple weeks. Releases are still happening weekly as always, but the “post to the community” step is insufficiently automated… 4/2 release:

* Fixed a source of long garbage collection pauses in memory limit enforcement.

4/9 release:

* No publicly-visible changes.

4/16 release:

* In preview, we now log a warning when attempting to construct a `Request` or `Response` whose body is of type `FormData` but with the `Content-Type` header overridden. Such bodies would not be parseable by the receiver.

## 2020-03-26

New this week:

* Certain “internal errors” that could be thrown when using the Cache API are now reported with human-friendly error messages. For example, `caches.default.match("not a URL")` now throws a TypeError.

## 2020-02-28

New from the past two weeks:

* Fixed a bug in the preview service where the CPU time limiter was overly lenient for the first several requests handled by a newly-started worker. The same bug actually exists in production as well, but we are much more cautious about fixing it there, since doing so might break live sites. If you find your worker now exceeds CPU time limits in preview, then it is likely exceeding time limits in production as well, but only appearing to work because the limits are too lenient for the first few requests. Such Workers will eventually fail in production, too (and always have), so it is best to fix the problem in preview before deploying.
* Major V8 update: 8.0 -> 8.1
* Minor bug fixes.

## 2020-02-13

Changes over the last couple weeks:

* Fixed a bug where if two differently-named scripts within the same account had identical content and were deployed to the same zone, they would be treated as the “same Worker”, meaning they would share the same isolate and global variables. This only applied between Workers on the same zone, so was not a security threat, but it caused confusion. Now, two differently-named Worker scripts will never be considered the same Worker even if they have identical content.
* Performance and stability improvements.

## 2020-01-24

It has been a while since we posted release notes, partly due to the holidays. Here is what is new over the past month:

* Performance and stability improvements.
* A rare source of `daemonDown` errors when processing bursty traffic over HTTP/2 has been eliminated.
* Updated V8 7.9 -> 8.0.

## 2019-12-12

New this week:

* We now pass correct line and column numbers more often when reporting exceptions to the V8 inspector. There remain some cases where the reported line and column numbers will be wrong.
* Fixed a significant source of daemonDown (1105) errors.

## 2019-12-06

Runtime release notes covering the past few weeks:

* Increased total per-request `Cache.put()` limit to 5GiB.
* Increased individual `Cache.put()` limits to the lesser of 5GiB or the zone’s normal [cache limits](https://developers.cloudflare.com/cache/concepts/default-cache-behavior/).
* Added a helpful error message explaining AES decryption failures.
* Some overload errors were erroneously being reported as daemonDown (1105) errors. They have been changed to exceededCpu (1102) errors, which better describes their cause.
* More “internal errors” were converted to useful user-facing errors.
* Stability improvements and bug fixes.


---

---
title: Deploy to Cloudflare buttons
description: Set up a Deploy to Cloudflare button
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Deploy to Cloudflare buttons

If you're building a Workers application and would like to share it with other developers, you can embed a Deploy to Cloudflare button in your README, blog post, or documentation to enable others to quickly deploy your application on their own Cloudflare account. Deploy to Cloudflare buttons eliminate the need for complex setup, allowing developers to get started with your public GitHub or GitLab repository in just a few clicks.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/saas-admin-template)

## What are Deploy to Cloudflare buttons?

Deploy to Cloudflare buttons simplify the deployment of a Workers application by enabling Cloudflare to:

* **Clone a Git repository**: Cloudflare clones your source repository into the user's GitHub/GitLab account where they can continue development after deploying.
* **Configure a project**: Your users can customize key details such as repository name, Worker name, and required resource names in a single setup page with customizations reflected in the newly created Git repository.
* **Build & deploy**: Cloudflare builds the application using [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds) and deploys it to the Cloudflare network. Any required resources are automatically provisioned and bound to the Worker without additional setup.
![Deploy to Cloudflare Flow](https://developers.cloudflare.com/_astro/dtw-user-flow.zgS3Y8iK_Z1r8gDo.webp) 

## How to set up Deploy to Cloudflare buttons

Deploy to Cloudflare buttons can be embedded anywhere developers might want to launch your project. To add a Deploy to Cloudflare button, copy the following snippet and replace the Git repository URL with your project's URL. You can also optionally specify a subdirectory.

Markdown:

```md
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=<YOUR_REPO_URL>)
```

HTML:

```html
<a href="https://deploy.workers.cloudflare.com/?url=<YOUR_REPO_URL>"><img src="https://deploy.workers.cloudflare.com/button" alt="Deploy to Cloudflare"/></a>
```

URL:

```
https://deploy.workers.cloudflare.com/?url=<YOUR_REPO_URL>
```

If you have already deployed your application using Workers Builds, you can generate a Deploy to Cloudflare button directly from the Cloudflare dashboard by selecting the share button (located within your Worker details) and copying the provided snippet.

![Share an application](https://developers.cloudflare.com/_astro/dtw-share-project.CTDMrwQu_1LDIEO.webp) 

Once you have your snippet, you can paste this wherever you would like your button to be displayed.

## Automatic resource provisioning

If your Worker application requires Cloudflare resources, they will be automatically provisioned as part of the deployment. Currently, supported resources include:

* **Storage**: [KV namespaces](https://developers.cloudflare.com/kv/), [D1 databases](https://developers.cloudflare.com/d1/), [R2 buckets](https://developers.cloudflare.com/r2/), [Hyperdrive](https://developers.cloudflare.com/hyperdrive/), [Vectorize databases](https://developers.cloudflare.com/vectorize/), and [Secrets Store Secrets](https://developers.cloudflare.com/secrets-store/)
* **Compute**: [Durable Objects](https://developers.cloudflare.com/durable-objects/), [Workers AI](https://developers.cloudflare.com/workers-ai/), and [Queues](https://developers.cloudflare.com/queues/)

Cloudflare will read the Wrangler configuration file of your source repo to determine resource requirements for your application. During deployment, Cloudflare will provision any necessary resources and update the Wrangler configuration where applicable for newly created resources (e.g. database IDs and namespace IDs). To ensure successful deployment, please make sure your source repository includes default values for resource names, resource IDs and any other properties for each binding.
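For example, a source repository might ship a Wrangler configuration like the following, where the binding names are yours and the IDs are placeholder defaults that Cloudflare replaces with newly provisioned values on deploy (the names and IDs here are illustrative):

```jsonc
{
  "name": "my-template",
  "main": "./src/index.ts",
  "compatibility_date": "2026-04-03",
  "kv_namespaces": [
    {
      "binding": "CACHE",
      // Placeholder default; replaced with the new namespace's ID on deploy
      "id": "00000000000000000000000000000000"
    }
  ],
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "my-template-db",
      "database_id": "00000000-0000-0000-0000-000000000000"
    }
  ]
}
```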

### Worker environment variables and secrets

[Worker environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) can be defined in your Wrangler configuration file as normal:

wrangler.jsonc:

```jsonc
{
  "name": "my-worker",
  "main": "./src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "vars": {
    "API_HOST": "https://example.com"
  }
}
```

wrangler.toml:

```toml
name = "my-worker"
main = "./src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"

[vars]
API_HOST = "https://example.com"
```

[Worker secrets](https://developers.cloudflare.com/workers/configuration/secrets/) can be defined in a `.dev.vars.example` or `.env.example` file with a [dotenv ↗](https://www.npmjs.com/package/dotenv) format:

.dev.vars.example

```
COOKIE_SIGNING_KEY=my-secret # comment
```

[Secrets Store](https://developers.cloudflare.com/secrets-store/) secrets can be configured in the Wrangler configuration file as normal:

wrangler.jsonc:

```jsonc
{
  "name": "my-worker",
  "main": "./src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "secrets_store_secrets": [
    {
      "binding": "API_KEY",
      "store_id": "demo",
      "secret_name": "api-key"
    }
  ]
}
```

wrangler.toml:

```toml
name = "my-worker"
main = "./src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"

[[secrets_store_secrets]]
binding = "API_KEY"
store_id = "demo"
secret_name = "api-key"
```

## Best practices

**Configuring Build/Deploy commands**: If you are using custom `build` and `deploy` scripts in your `package.json` (for example, if using a full stack framework or running D1 migrations), Cloudflare will automatically detect and pre-populate the build and deploy fields. Users can choose to modify or accept the custom commands during deployment configuration.

If no `deploy` script is specified, Cloudflare will preconfigure `npx wrangler deploy` by default. If no `build` script is specified, Cloudflare will leave this field blank.

**Running D1 Migrations**: If you would like to run migrations as part of your setup, you can specify this in your `package.json` by running your migrations as part of your `deploy` script. The migration command should reference the binding name rather than the database name to ensure migrations are successful when users specify a database name that is different from that of your source repository. The following is an example of how you can set up the scripts section of your `package.json`:

```json
{
  "scripts": {
    "build": "astro build",
    "deploy": "npm run db:migrations:apply && wrangler deploy",
    "db:migrations:apply": "wrangler d1 migrations apply DB_BINDING --remote"
  }
}
```

**Provide a description for bindings**: If you wish to provide additional information about bindings, such as why they are required in this template, or suggestions for how to configure a value, you can provide a description in your `package.json`. This can be particularly useful for environment variables and secrets where users might need to find a value outside of Cloudflare.

Inline markdown `` `code` ``, `**bold**`, `__italics__` and `[links](https://example.com)` are supported.

package.json

```json
{
  "name": "my-worker",
  "private": true,
  "cloudflare": {
    "bindings": {
      "API_KEY": {
        "description": "Select your company's [API key](https://example.com/) for connecting to the example service."
      },
      "COOKIE_SIGNING_KEY": {
        "description": "Generate a random string using `openssl rand -hex 32`."
      }
    }
  }
}
```

## Limitations

* **Monorepos**: Cloudflare does not fully support monorepos  
   * If your repository URL contains a subdirectory, your application must be fully isolated within that subdirectory, including any dependencies. Otherwise, the build will fail. Cloudflare treats this subdirectory as the root of the new repository created as part of the deploy process.  
   * Additionally, if you have a monorepo that contains multiple Workers applications, they will not be deployed together. You must configure a separate Deploy to Cloudflare button for each application. The user will manually create a distinct Workers application for each subdirectory.
* **Pages applications**: Deploy to Cloudflare buttons only support Workers applications.
* **Non-GitHub/GitLab repositories**: Source repositories from anything other than github.com and gitlab.com are not supported. Self-hosted versions of GitHub and GitLab are also not supported.
* **Private repositories**: Repositories must be public in order for others to successfully use your Deploy to Cloudflare button.


---

---
title: Infrastructure as Code (IaC)
description: While Wrangler makes it easy to upload and manage Workers, there are times when you need a more programmatic approach. This could involve using Infrastructure as Code (IaC) tools or interacting directly with the Workers API. Examples include build and deploy scripts, CI/CD pipelines, custom developer tools, and automated testing.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Infrastructure as Code (IaC)

While [Wrangler](https://developers.cloudflare.com/workers/wrangler/configuration) makes it easy to upload and manage Workers, there are times when you need a more programmatic approach. This could involve using Infrastructure as Code (IaC) tools or interacting directly with the [Workers API](https://developers.cloudflare.com/api/resources/workers/). Examples include build and deploy scripts, CI/CD pipelines, custom developer tools, and automated testing.

To make this easier, Cloudflare provides SDK libraries for popular languages such as [cloudflare-typescript ↗](https://github.com/cloudflare/cloudflare-typescript) and [cloudflare-python ↗](https://github.com/cloudflare/cloudflare-python). For IaC, you can use tools like HashiCorp's Terraform and the [Cloudflare Terraform Provider](https://developers.cloudflare.com/terraform) to manage Workers resources.

Below are examples of deploying a Worker using different tools and languages, along with important considerations for managing Workers with IaC.

All of these examples need an [account ID](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids) and [API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token) (not Global API key) to work.

## Workers Bundling

None of the examples below perform [Workers Bundling](https://developers.cloudflare.com/workers/wrangler/bundling). Bundling is usually handled by Wrangler or a tool like [esbuild ↗](https://esbuild.github.io).

Generally, you'd run this bundling step before applying your Terraform plan or using the API for script upload:

```sh
wrangler deploy --dry-run --outdir build
```

When using Wrangler for building and a different method for uploading, make sure to copy all of your config from `wrangler.json` into your Terraform config or API request. This is especially important with `compatibility_date` or flags your script relies on.
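For example, if your `wrangler.json` sets a compatibility date and flags, mirror them on the version resource. A sketch (whether `compatibility_flags` is exposed depends on your provider version; the flag value is illustrative):

```hcl
resource "cloudflare_worker_version" "my_worker_version" {
  account_id = var.account_id
  worker_id  = cloudflare_worker.my_worker.id

  # Mirrored from wrangler.json: "compatibility_date" / "compatibility_flags"
  compatibility_date  = "2025-02-21"
  compatibility_flags = ["nodejs_compat"]

  main_module = "my-script.mjs"
  # ... modules as in the full example in the next section
}
```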

## Terraform

In this example, you need a local file named `my-script.mjs` with script content similar to the below examples. Learn more about the [Cloudflare Terraform Provider](https://developers.cloudflare.com/terraform/), and refer to the [Workers script resource example ↗](https://github.com/cloudflare/terraform-provider-cloudflare/blob/main/examples/resources/cloudflare%5Fworkers%5Fscript/resource.tf) for all available resource settings.

```hcl
variable "account_id" {
  default = "replace_me"
}

resource "cloudflare_worker" "my_worker" {
  account_id = var.account_id
  name       = "my-worker"

  observability = {
    enabled = true
  }
}

resource "cloudflare_worker_version" "my_worker_version" {
  account_id         = var.account_id
  worker_id          = cloudflare_worker.my_worker.id
  compatibility_date = "2025-02-21" # Set this to today's date
  main_module        = "my-script.mjs"

  modules = [
    {
      name         = "my-script.mjs"
      content_type = "application/javascript+module"
      # Replacement (version creation) is triggered whenever this file changes
      content_file = "my-script.mjs"
    }
  ]
}

resource "cloudflare_workers_deployment" "my_worker_deployment" {
  account_id  = var.account_id
  script_name = cloudflare_worker.my_worker.name
  strategy    = "percentage"

  versions = [{
    percentage = 100
    version_id = cloudflare_worker_version.my_worker_version.id
  }]
}
```

Notice how you do not have to manage all of these resources in Terraform. For example, you could use just the `cloudflare_worker` resource and seamlessly use Wrangler or your own deployment tools for Versions or Deployments.

## Bindings in Terraform

[Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) allow your Worker to interact with resources on the Cloudflare Developer Platform. In Terraform, bindings are configured differently than in Wrangler. Instead of separate top-level properties for each binding type (like `kv_namespaces`, `r2_buckets`, etc.), Terraform uses a single `bindings` array where each binding has a `type` property along with type-specific properties.

Below are examples of each binding type and their required properties:

### KV Namespace Binding

Bind to a [KV namespace](https://developers.cloudflare.com/kv/api/) for key-value storage:

```hcl
bindings = [{
  type         = "kv_namespace"
  name         = "MY_KV"
  namespace_id = "your-kv-namespace-id"
}]
```

**Properties:**

* `type`: `"kv_namespace"`
* `name`: The variable name for the binding, accessible via `env.MY_KV`
* `namespace_id`: The ID of your KV namespace
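Inside the Worker, the binding then appears on `env`. A minimal sketch (the key name and fallback text are illustrative):

```javascript
// Handler sketch: reads a key through the MY_KV binding configured above.
async function handleRequest(request, env) {
  const value = await env.MY_KV.get("greeting");
  return new Response(value ?? "not found");
}
```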

### R2 Bucket Binding

Bind to an [R2 bucket](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/) for object storage:

```hcl
bindings = [{
  type        = "r2_bucket"
  name        = "MY_BUCKET"
  bucket_name = "my-bucket-name"
}]
```

**Properties:**

* `type`: `"r2_bucket"`
* `name`: The binding name to access via `env.MY_BUCKET`
* `bucket_name`: The name of your R2 bucket

### D1 Database Binding

Bind to a [D1 database](https://developers.cloudflare.com/d1/worker-api/) for SQL storage:

```hcl
bindings = [{
  type = "d1"
  name = "DB"
  id   = "your-database-id"
}]
```

**Properties:**

* `type`: `"d1"`
* `name`: The binding name to access via `env.DB`
* `id`: The ID of your D1 database

### Durable Object Binding

Bind to a [Durable Object](https://developers.cloudflare.com/durable-objects/api/) class:

```hcl
bindings = [{
  type       = "durable_object_namespace"
  name       = "MY_DURABLE_OBJECT"
  class_name = "MyDurableObjectClass"
}]
```

**Properties:**

* `type`: `"durable_object_namespace"`
* `name`: The binding name to access via `env.MY_DURABLE_OBJECT`
* `class_name`: The exported class name of the Durable Object
* `script_name`: (Optional) The Worker script that exports this Durable Object class. Omit if the class is defined in the same Worker.

### Service Binding

Bind to another [Worker](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) for Worker-to-Worker communication:

```hcl
bindings = [{
  type    = "service"
  name    = "MY_SERVICE"
  service = "other-worker-name"
}]
```

**Properties:**

* `type`: `"service"`
* `name`: The binding name to access via `env.MY_SERVICE`
* `service`: The name of the target Worker
* `entrypoint`: (Optional) The named [entrypoint](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints) to bind to

### Queue Binding

Bind to a [Queue](https://developers.cloudflare.com/queues/configuration/javascript-apis/) for message passing:

For producing messages:

```hcl
bindings = [{
  type       = "queue"
  name       = "MY_QUEUE"
  queue_name = "my-queue"
}]
```

**Properties:**

* `type`: `"queue"`
* `name`: The binding name to access via `env.MY_QUEUE`
* `queue_name`: The name of your Queue

For consuming messages, configure your Worker as a consumer in the queue resource itself, not via bindings.
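For example, a consumer might be attached on the queue side like this (a sketch assuming the provider exposes a `cloudflare_queue_consumer` resource; the resource names and script name are illustrative):

```hcl
resource "cloudflare_queue_consumer" "my_consumer" {
  account_id  = var.account_id
  queue_id    = cloudflare_queue.my_queue.id
  type        = "worker"
  script_name = "my-worker" # the consuming Worker
}
```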

### Vectorize Binding

Bind to a [Vectorize index](https://developers.cloudflare.com/vectorize/) for vector search:

```hcl
bindings = [{
  type       = "vectorize"
  name       = "VECTORIZE_INDEX"
  index_name = "my-index"
}]
```

**Properties:**

* `type`: `"vectorize"`
* `name`: The binding name to access via `env.VECTORIZE_INDEX`
* `index_name`: The name of your Vectorize index

### Workers AI Binding

Bind to [Workers AI](https://developers.cloudflare.com/workers-ai/) for AI inference:

```hcl
bindings = [{
  type = "ai"
  name = "AI"
}]
```

**Properties:**

* `type`: `"ai"`
* `name`: The binding name to access via `env.AI`

### Hyperdrive Binding

Bind to a [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) configuration for database connection pooling:

```hcl
bindings = [{
  type = "hyperdrive"
  name = "HYPERDRIVE"
  id   = "your-hyperdrive-config-id"
}]
```

**Properties:**

* `type`: `"hyperdrive"`
* `name`: The binding name to access via `env.HYPERDRIVE`
* `id`: The ID of your Hyperdrive configuration

### VPC Service Binding

Bind to a [VPC Service](https://developers.cloudflare.com/workers-vpc/configuration/vpc-services/) for accessing resources in your private network:

```hcl
bindings = [{
  type       = "vpc_service"
  name       = "PRIVATE_API"
  service_id = "your-vpc-service-id"
}]
```

**Properties:**

* `type`: `"vpc_service"`
* `name`: The binding name to access via `env.PRIVATE_API`
* `service_id`: The ID of your VPC Service (from `cloudflare_connectivity_directory_service` or the dashboard)

You can create the VPC Service with Terraform using the `cloudflare_connectivity_directory_service` resource. For a full walkthrough, refer to [Configure VPC Services with Terraform](https://developers.cloudflare.com/workers-vpc/configuration/vpc-services/terraform/).

### Analytics Engine Binding

Bind to an [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) dataset:

```

bindings = [{
  type = "analytics_engine"
  name = "ANALYTICS"
  dataset = "my_dataset"
}]

```

**Properties:**

* `type`: `"analytics_engine"`
* `name`: The binding name to access via `env.ANALYTICS`
* `dataset`: The name of your Analytics Engine dataset

### Environment Variables

For plain text environment variables, use the `plain_text` binding type:

```

bindings = [{
  type = "plain_text"
  name = "MY_VARIABLE"
  text = "my-value"
}]

```

**Properties:**

* `type`: `"plain_text"`
* `name`: The binding name to access via `env.MY_VARIABLE`
* `text`: The value of the environment variable

### Secret Text Binding

For encrypted secrets, use the `secret_text` binding type:

```

bindings = [{
  type = "secret_text"
  name = "API_KEY"
  text = var.api_key
}]

```

**Properties:**

* `type`: `"secret_text"`
* `name`: The binding name to access via `env.API_KEY`
* `text`: The secret value (will be encrypted)

### Complete Example

Here's an example combining multiple binding types:

```

resource "cloudflare_worker_version" "my_worker_version" {
  account_id = var.account_id
  worker_id = cloudflare_worker.my_worker.id
  compatibility_date = "2025-08-06"
  main_module = "worker.js"

  modules = [{
    name = "worker.js"
    content_type = "application/javascript+module"
    content_file = "worker.js"
  }]

  bindings = [
    {
      type = "kv_namespace"
      name = "MY_KV"
      namespace_id = var.kv_namespace_id
    },
    {
      type = "r2_bucket"
      name = "MY_BUCKET"
      bucket_name = "my-bucket"
    },
    {
      type = "d1"
      name = "DB"
      id = var.d1_database_id
    },
    {
      type = "service"
      name = "AUTH_SERVICE"
      service = "auth-worker"
    },
    {
      type = "plain_text"
      name = "ENVIRONMENT"
      text = "production"
    },
    {
      type = "secret_text"
      name = "API_KEY"
      text = var.api_key
    },
    {
      type = "vpc_service"
      name = "PRIVATE_API"
      service_id = var.vpc_service_id
    }
  ]
}

```

## Cloudflare API Libraries

This example uses the [cloudflare-typescript ↗](https://github.com/cloudflare/cloudflare-typescript) SDK, which provides convenient access to the Cloudflare REST API from server-side JavaScript or TypeScript.


JavaScript

```

#!/usr/bin/env -S npm run tsn -T

/**
 * Create and deploy a Worker
 *
 * Docs:
 * - https://developers.cloudflare.com/workers/configuration/versions-and-deployments/
 * - https://developers.cloudflare.com/workers/platform/infrastructure-as-code/
 *
 * Prerequisites:
 * 1. Generate an API token: https://developers.cloudflare.com/fundamentals/api/get-started/create-token/
 * 2. Find your account ID: https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/
 * 3. Find your workers.dev subdomain: https://developers.cloudflare.com/workers/configuration/routing/workers-dev/
 *
 * Environment variables:
 *   - CLOUDFLARE_API_TOKEN (required)
 *   - CLOUDFLARE_ACCOUNT_ID (required)
 *   - CLOUDFLARE_SUBDOMAIN (optional)
 *
 * Usage:
 *   Run this script to deploy a simple "Hello World" Worker.
 *   Access it at: my-hello-world-worker.$subdomain.workers.dev
 */

import { exit } from "node:process";

import Cloudflare from "cloudflare";

const WORKER_NAME = "my-hello-world-worker";
const SCRIPT_FILENAME = `${WORKER_NAME}.mjs`;

function loadConfig() {
  const apiToken = process.env["CLOUDFLARE_API_TOKEN"];
  if (!apiToken) {
    throw new Error(
      "Missing required environment variable: CLOUDFLARE_API_TOKEN",
    );
  }

  const accountId = process.env["CLOUDFLARE_ACCOUNT_ID"];
  if (!accountId) {
    throw new Error(
      "Missing required environment variable: CLOUDFLARE_ACCOUNT_ID",
    );
  }

  const subdomain = process.env["CLOUDFLARE_SUBDOMAIN"];

  return {
    apiToken,
    accountId,
    subdomain: subdomain || undefined,
    workerName: WORKER_NAME,
  };
}

const config = loadConfig();
const client = new Cloudflare({
  apiToken: config.apiToken,
});

async function main() {
  try {
    console.log("🚀 Starting Worker creation and deployment...");

    const scriptContent = `
      export default {
        async fetch(request, env, ctx) {
          return new Response(env.MESSAGE, { status: 200 });
        },
      }`.trim();

    let worker;
    try {
      worker = await client.workers.beta.workers.get(config.workerName, {
        account_id: config.accountId,
      });
      console.log(`♻️  Worker ${config.workerName} already exists. Using it.`);
    } catch (error) {
      if (!(error instanceof Cloudflare.NotFoundError)) {
        throw error;
      }
      console.log(`✏️  Creating Worker ${config.workerName}...`);
      worker = await client.workers.beta.workers.create({
        account_id: config.accountId,
        name: config.workerName,
        subdomain: {
          enabled: config.subdomain !== undefined,
        },
        observability: {
          enabled: true,
        },
      });
    }

    console.log(`⚙️  Worker id: ${worker.id}`);
    console.log("✏️  Creating Worker version...");

    // Create the first version of the Worker
    const version = await client.workers.beta.workers.versions.create(
      worker.id,
      {
        account_id: config.accountId,
        main_module: SCRIPT_FILENAME,
        compatibility_date: new Date().toISOString().split("T")[0],
        bindings: [
          {
            type: "plain_text",
            name: "MESSAGE",
            text: "Hello World!",
          },
        ],
        modules: [
          {
            name: SCRIPT_FILENAME,
            content_type: "application/javascript+module",
            content_base64: Buffer.from(scriptContent).toString("base64"),
          },
        ],
      },
    );

    console.log(`⚙️  Version id: ${version.id}`);
    console.log("🚚 Creating Worker deployment...");

    // Create a deployment and point all traffic to the version we created
    await client.workers.scripts.deployments.create(config.workerName, {
      account_id: config.accountId,
      strategy: "percentage",
      versions: [
        {
          percentage: 100,
          version_id: version.id,
        },
      ],
    });

    console.log("✅ Deployment successful!");

    if (config.subdomain) {
      console.log(`
🌍 Your Worker is live!
📍 URL: https://${config.workerName}.${config.subdomain}.workers.dev/
`);
    } else {
      console.log(`
⚠️  Set up a route, custom domain, or workers.dev subdomain to access your Worker.
Add CLOUDFLARE_SUBDOMAIN to your environment variables to set one up automatically.
`);
    }
  } catch (error) {
    console.error("❌ Deployment failed:", error);
    exit(1);
  }
}

main();

```

TypeScript

```

#!/usr/bin/env -S npm run tsn -T

/**
 * Create and deploy a Worker
 *
 * Docs:
 * - https://developers.cloudflare.com/workers/configuration/versions-and-deployments/
 * - https://developers.cloudflare.com/workers/platform/infrastructure-as-code/
 *
 * Prerequisites:
 * 1. Generate an API token: https://developers.cloudflare.com/fundamentals/api/get-started/create-token/
 * 2. Find your account ID: https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/
 * 3. Find your workers.dev subdomain: https://developers.cloudflare.com/workers/configuration/routing/workers-dev/
 *
 * Environment variables:
 *   - CLOUDFLARE_API_TOKEN (required)
 *   - CLOUDFLARE_ACCOUNT_ID (required)
 *   - CLOUDFLARE_SUBDOMAIN (optional)
 *
 * Usage:
 *   Run this script to deploy a simple "Hello World" Worker.
 *   Access it at: my-hello-world-worker.$subdomain.workers.dev
 */

import { exit } from 'node:process';

import Cloudflare from 'cloudflare';

interface Config {
  apiToken: string;
  accountId: string;
  subdomain: string | undefined;
  workerName: string;
}

const WORKER_NAME = 'my-hello-world-worker';
const SCRIPT_FILENAME = `${WORKER_NAME}.mjs`;

function loadConfig(): Config {
  const apiToken = process.env['CLOUDFLARE_API_TOKEN'];
  if (!apiToken) {
    throw new Error('Missing required environment variable: CLOUDFLARE_API_TOKEN');
  }

  const accountId = process.env['CLOUDFLARE_ACCOUNT_ID'];
  if (!accountId) {
    throw new Error('Missing required environment variable: CLOUDFLARE_ACCOUNT_ID');
  }

  const subdomain = process.env['CLOUDFLARE_SUBDOMAIN'];

  return {
    apiToken,
    accountId,
    subdomain: subdomain || undefined,
    workerName: WORKER_NAME,
  };
}

const config = loadConfig();
const client = new Cloudflare({
  apiToken: config.apiToken,
});

async function main(): Promise<void> {
  try {
    console.log('🚀 Starting Worker creation and deployment...');

    const scriptContent = `
      export default {
        async fetch(request, env, ctx) {
          return new Response(env.MESSAGE, { status: 200 });
        },
      }`.trim();

    let worker;
    try {
      worker = await client.workers.beta.workers.get(config.workerName, {
        account_id: config.accountId,
      });
      console.log(`♻️  Worker ${config.workerName} already exists. Using it.`);
    } catch (error) {
      if (!(error instanceof Cloudflare.NotFoundError)) { throw error; }
      console.log(`✏️  Creating Worker ${config.workerName}...`);
      worker = await client.workers.beta.workers.create({
        account_id: config.accountId,
        name: config.workerName,
        subdomain: {
          enabled: config.subdomain !== undefined,
        },
        observability: {
          enabled: true,
        },
      });
    }

    console.log(`⚙️  Worker id: ${worker.id}`);
    console.log('✏️  Creating Worker version...');

    // Create the first version of the Worker
    const version = await client.workers.beta.workers.versions.create(worker.id, {
      account_id: config.accountId,
      main_module: SCRIPT_FILENAME,
      compatibility_date: new Date().toISOString().split('T')[0]!,
      bindings: [
        {
          type: 'plain_text',
          name: 'MESSAGE',
          text: 'Hello World!',
        },
      ],
      modules: [
        {
          name: SCRIPT_FILENAME,
          content_type: 'application/javascript+module',
          content_base64: Buffer.from(scriptContent).toString('base64'),
        },
      ],
    });

    console.log(`⚙️  Version id: ${version.id}`);
    console.log('🚚 Creating Worker deployment...');

    // Create a deployment and point all traffic to the version we created
    await client.workers.scripts.deployments.create(config.workerName, {
      account_id: config.accountId,
      strategy: 'percentage',
      versions: [
        {
          percentage: 100,
          version_id: version.id,
        },
      ],
    });

    console.log('✅ Deployment successful!');

    if (config.subdomain) {
      console.log(`
🌍 Your Worker is live!
📍 URL: https://${config.workerName}.${config.subdomain}.workers.dev/
`);
    } else {
      console.log(`
⚠️  Set up a route, custom domain, or workers.dev subdomain to access your Worker.
Add CLOUDFLARE_SUBDOMAIN to your environment variables to set one up automatically.
`);
    }
  } catch (error) {
    console.error('❌ Deployment failed:', error);
    exit(1);
  }
}

main();

```

## Cloudflare REST API

Open a terminal or create a shell script to upload a Worker and manage versions and deployments with curl. Workers scripts are JavaScript [ES Modules ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), but we also support [Python Workers](https://developers.cloudflare.com/workers/languages/python/) (open beta) and [Rust Workers](https://developers.cloudflare.com/workers/languages/rust/).

Warning

This API is in beta. See the multipart/form-data API below for the stable API.
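The version upload requests below send module source in the `content_base64` field. As a quick sketch of preparing that value outside the shell (the `script` string here is an arbitrary example module, not from the API):

```python
import base64

# Sketch: prepare the "content_base64" value for a module upload.
script = 'export default { async fetch() { return new Response("hi") } }'
content_base64 = base64.b64encode(script.encode("utf-8")).decode("ascii")

# The API decodes this back to the original module source.
assert base64.b64decode(content_base64).decode("utf-8") == script
```
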


Terminal window

```

account_id="replace_me"
api_token="replace_me"
worker_name="my-hello-world-worker"

worker_script_base64=$(echo '
export default {
  async fetch(request, env, ctx) {
    return new Response(env.MESSAGE, { status: 200 });
  }
};
' | base64)

# Note the below will fail if the worker already exists!
# Here's how to delete the Worker
#
# worker_id="replace-me"
# curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/workers/$worker_id" \
#   -X DELETE \
#   -H "Authorization: Bearer $api_token"

# Create the Worker
worker_id=$(curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/workers" \
  -X POST \
  -H "Authorization: Bearer $api_token" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "'$worker_name'"
  }' \
  | jq -r '.result.id')

echo "\nWorker ID: $worker_id\n"

# Upload the Worker's first version
version_id=$(curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/workers/$worker_id/versions" \
  -X POST \
  -H "Authorization: Bearer $api_token" \
  -H "Content-Type: application/json" \
  -d '{
    "compatibility_date": "2025-08-06",
    "main_module": "'$worker_name'.mjs",
    "modules": [
      {
        "name": "'$worker_name'.mjs",
        "content_type": "application/javascript+module",
        "content_base64": "'$worker_script_base64'"
      }
    ],
    "bindings": [
      {
        "type": "plain_text",
        "name": "MESSAGE",
        "text": "Hello World!"
      }
    ]
  }' \
  | jq -r '.result.id')

echo "\nVersion ID: $version_id\n"

# Create a deployment for the Worker
deployment_id=$(curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/scripts/$worker_name/deployments" \
  -X POST \
  -H "Authorization: Bearer $api_token" \
  -H "Content-Type: application/json" \
  -d '{
    "strategy": "percentage",
    "versions": [
      {
        "percentage": 100,
        "version_id": "'$version_id'"
      }
    ]
  }' \
  | jq -r '.result.id')

echo "\nDeployment ID: $deployment_id\n"

```

[Python Workers](https://developers.cloudflare.com/workers/languages/python/) have their own special `text/x-python` content type and `python_workers` compatibility flag.

Terminal window

```

account_id="replace_me"
api_token="replace_me"
worker_name="my-hello-world-worker"

worker_script_base64=$(echo '
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response(self.env.MESSAGE)
' | base64)

# Note the below will fail if the worker already exists!
# Here's how to delete the Worker
#
# worker_id="replace-me"
# curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/workers/$worker_id" \
#   -X DELETE \
#   -H "Authorization: Bearer $api_token"

# Create the Worker
worker_id=$(curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/workers" \
  -X POST \
  -H "Authorization: Bearer $api_token" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "'$worker_name'"
  }' \
  | jq -r '.result.id')

echo "\nWorker ID: $worker_id\n"

# Upload the Worker's first version
version_id=$(curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/workers/$worker_id/versions" \
  -X POST \
  -H "Authorization: Bearer $api_token" \
  -H "Content-Type: application/json" \
  -d '{
    "compatibility_date": "2025-08-06",
    "compatibility_flags": [
      "python_workers"
    ],
    "main_module": "'$worker_name'.py",
    "modules": [
      {
        "name": "'$worker_name'.py",
        "content_type": "text/x-python",
        "content_base64": "'$worker_script_base64'"
      }
    ],
    "bindings": [
      {
        "type": "plain_text",
        "name": "MESSAGE",
        "text": "Hello World!"
      }
    ]
  }' \
  | jq -r '.result.id')

echo "\nVersion ID: $version_id\n"

# Create a deployment for the Worker
deployment_id=$(curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/scripts/$worker_name/deployments" \
  -X POST \
  -H "Authorization: Bearer $api_token" \
  -H "Content-Type: application/json" \
  -d '{
    "strategy": "percentage",
    "versions": [
      {
        "percentage": 100,
        "version_id": "'$version_id'"
      }
    ]
  }' \
  | jq -r '.result.id')

echo "\nDeployment ID: $deployment_id\n"

```

### multipart/form-data upload API

This API uses [multipart/form-data ↗](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods/POST) to upload a Worker and will implicitly create a version and deployment. The above API is recommended for direct management of versions and deployments.


Terminal window

```

account_id="replace_me"
api_token="replace_me"
worker_name="my-hello-world-script"
today=$(date +%Y-%m-%d) # compatibility date for the upload

script_content='export default {
  async fetch(request, env, ctx) {
    return new Response(env.MESSAGE, { status: 200 });
  }
};'

# Upload the Worker
curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/scripts/$worker_name" \
  -X PUT \
  -H "Authorization: Bearer $api_token" \
  -F "metadata={
    \"main_module\": \"$worker_name.mjs\",
    \"bindings\": [
      {
        \"type\": \"plain_text\",
        \"name\": \"MESSAGE\",
        \"text\": \"Hello World!\"
      }
    ],
    \"compatibility_date\": \"$today\"
  };type=application/json" \
  -F "$worker_name.mjs=@-;filename=$worker_name.mjs;type=application/javascript+module" <<EOF
$script_content
EOF

```

For [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms), you can upload a [User Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#user-workers) to a [dispatch namespace](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dispatch-namespace). Note the [API endpoint](https://developers.cloudflare.com/api/resources/workers%5Ffor%5Fplatforms/subresources/dispatch/subresources/namespaces/subresources/scripts/methods/update/) is on `/workers/dispatch/namespaces/$DISPATCH_NAMESPACE/scripts/$SCRIPT_NAME`.

Terminal window

```

account_id="replace_me"
api_token="replace_me"
dispatch_namespace="replace_me"
worker_name="my-hello-world-script"
today=$(date +%Y-%m-%d) # compatibility date for the upload

script_content='export default {
  async fetch(request, env, ctx) {
    return new Response(env.MESSAGE, { status: 200 });
  }
};'

# Create a dispatch namespace
curl https://api.cloudflare.com/client/v4/accounts/$account_id/workers/dispatch/namespaces \
  -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $api_token" \
  -d '{
    "name": "'$dispatch_namespace'"
  }'

# Upload the Worker
curl "https://api.cloudflare.com/client/v4/accounts/$account_id/workers/dispatch/namespaces/$dispatch_namespace/scripts/$worker_name" \
  -X PUT \
  -H "Authorization: Bearer $api_token" \
  -F "metadata={
    \"main_module\": \"$worker_name.mjs\",
    \"bindings\": [
      {
        \"type\": \"plain_text\",
        \"name\": \"MESSAGE\",
        \"text\": \"Hello World!\"
      }
    ],
    \"compatibility_date\": \"$today\"
  };type=application/json" \
  -F "$worker_name.mjs=@-;filename=$worker_name.mjs;type=application/javascript+module" <<EOF
$script_content
EOF

```

### Python Workers

[Python Workers](https://developers.cloudflare.com/workers/languages/python/) (open beta) have their own special `text/x-python` content type and `python_workers` compatibility flag for uploading using the multipart/form-data API.

Terminal window

```

today=$(date +%Y-%m-%d) # compatibility date for the upload

curl https://api.cloudflare.com/client/v4/accounts/<account_id>/workers/scripts/my-hello-world-script \
  -X PUT \
  -H 'Authorization: Bearer <api_token>' \
  -F 'metadata={
        "main_module": "my-hello-world-script.py",
        "bindings": [
          {
            "type": "plain_text",
            "name": "MESSAGE",
            "text": "Hello World!"
          }
        ],
        "compatibility_date": "'"$today"'",
        "compatibility_flags": [
          "python_workers"
        ]
      };type=application/json' \
  -F 'my-hello-world-script.py=@-;filename=my-hello-world-script.py;type=text/x-python' <<EOF
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response(self.env.MESSAGE)
EOF

```

## Considerations with Durable Objects

[Durable Object](https://developers.cloudflare.com/durable-objects/) migrations are applied with deployments. This means a version cannot bind to a Durable Object namespace before a deployment exists, that is, before the migrations have been applied. For example, running this in Terraform will fail the first time the plan is applied:

```

resource "cloudflare_worker" "my_worker" {
  account_id = var.account_id
  name = "my-worker"
}

resource "cloudflare_worker_version" "my_worker_version" {
  account_id = var.account_id
  worker_id = cloudflare_worker.my_worker.id
  bindings = [
    {
      type = "durable_object_namespace"
      name = "my_durable_object"
      class_name = "MyDurableObjectClass"
    }
  ]
  migrations = {
    new_sqlite_classes = [
      "MyDurableObjectClass"
    ]
  }
  # ...version props omitted for brevity
}

resource "cloudflare_workers_deployment" "my_worker_deployment" {
  # ...deployment props omitted for brevity
}

```

To make this succeed, first comment out the Durable Object binding block and apply the plan. Then uncomment the binding, comment out the `migrations` block, and apply again; this second plan succeeds because the first deployment has already applied the migrations. The same constraint applies when using the API or SDKs directly. This is a case where it can make sense to manage only the `cloudflare_worker` and/or `cloudflare_workers_deployment` resources in Terraform while using Wrangler for builds and version management.

## Considerations with Worker Versions

### Resource immutability

Worker versions are immutable at the API level, meaning they cannot be updated after creation, only re-created with any desired changes. This means that meaningful changes to the `cloudflare_worker_version` Terraform resource will always trigger replacement. When the `cloudflare_worker_version` resource is replaced, a new version with the desired changes is created, but the previous version is not deleted. This ensures the Worker has a complete version history when managed via Terraform. In other words, versions are both immutable and append-only. When the parent `cloudflare_worker` resource is deleted, all existing versions associated with the Worker are also deleted.

### Module Content

Worker version modules support two mutually exclusive ways to provide content:

* **`content_file`** \- Points to a local file
* **`content_base64`** \- Inline base64-encoded content

In both cases, changes to the underlying content are tracked using the computed `content_sha256` attribute. Specifying content using the `content_file` attribute is preferred in almost all cases, as it avoids storing the content itself in state. Module content may be quite large (up to tens of megabytes), and storing it in state will bloat the state file and negatively affect the performance of Terraform operations. The main use case for the `content_base64` attribute is importing the `cloudflare_worker_version` Terraform resource from the API, discussed below.
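The content-tracking behavior can be illustrated with a small sketch. Treating `content_sha256` as a plain SHA-256 over the raw module bytes is an assumption about the provider's exact hashing, but the principle is the same: any change to the bytes produces a different digest, regardless of whether the content came from `content_file` or `content_base64`.

```python
import hashlib

def content_sha256(content: bytes) -> str:
    # Any change to the module bytes yields a different digest, which is
    # what signals Terraform that the version resource must be replaced.
    return hashlib.sha256(content).hexdigest()

v1 = content_sha256(b"export default { async fetch() { return new Response('v1') } }")
v2 = content_sha256(b"export default { async fetch() { return new Response('v2') } }")
assert v1 != v2  # changed content is detected regardless of how it was supplied
```
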

### Import Behavior

**During import, Terraform always populates the `content_base64` attribute in state**, regardless of the attribute used in your config.

Terminal window

```

terraform import cloudflare_worker_version.my_worker_version <account_id>/<worker_id>/<version_id>

```

If your config uses `content_file`, there will be a mismatch after import (state uses `content_base64`, config uses `content_file`). This is expected.

If the content of the local file referenced by `content_file` matches the imported content, their `content_sha256` values are the same, and the next apply results in an in-place update of the `cloudflare_worker_version` resource rather than a replacement: the underlying content is unchanged, so nothing needs to change at the API level. Only Terraform state is updated, switching from `content_base64` to `content_file`.

If Terraform instead wants to replace the resource, citing a difference in computed `content_sha256` values, then the content of the local file referenced by `content_file` does not match the imported content and the resource can't be cleanly imported without updating the local file to match the expected API value.

### Examples

**Using `content_file`:**

```

resource "cloudflare_worker_version" "content_file_example" {
  account_id  = var.account_id
  worker_id   = cloudflare_worker.example.id
  main_module = "worker.js"
  modules = [{
    name         = "worker.js"
    content_type = "application/javascript+module"
    content_file = "build/worker.js"
  }]
}

```

**Using `content_base64`:**

```

resource "cloudflare_worker_version" "content_base64_example" {
  account_id  = var.account_id
  worker_id   = cloudflare_worker.example.id
  main_module = "worker.js"
  modules = [{
    name           = "worker.js"
    content_type   = "application/javascript+module"
    content_base64 = base64encode("export default { async fetch() { return new Response('Hello world!') } }")
  }]
}

```


---

---
title: Known issues
description: Known issues and bugs to be aware of when using Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Known issues

Below are some known bugs and issues to be aware of when using Cloudflare Workers.

## Route specificity

* When defining routes, a trailing `/*` in your pattern may not act as expected.

Consider two different Workers, each deployed to the same zone. Worker A is assigned the `example.com/images/*` route and Worker B is given the `example.com/images*` route pattern. With these in place, here is how the following URLs will be resolved:

```

// (A) example.com/images/*
// (B) example.com/images*

"example.com/images"
// -> B
"example.com/images123"
// -> B
"example.com/images/hello"
// -> B

```

All of these URLs trigger Worker B, including the final example, which demonstrates the unexpected behavior: `example.com/images/hello` is routed to Worker B even though it also matches Worker A's more specific pattern.
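A sketch of which pattern matches which URL may help. This is a simplification that treats a trailing `*` as "any suffix" and is not a model of how Cloudflare chooses among multiple matching routes:

```python
def matches(pattern: str, url: str) -> bool:
    # Simplified wildcard: a trailing "*" matches any suffix, including "/...".
    if pattern.endswith("*"):
        return url.startswith(pattern[:-1])
    return url == pattern

# (A) example.com/images/*   (B) example.com/images*
for url in ("example.com/images", "example.com/images123", "example.com/images/hello"):
    a = matches("example.com/images/*", url)
    b = matches("example.com/images*", url)
    print(f"{url}: A={a} B={b}")
# B matches every URL; A matches only the last one, yet B is still selected.
```
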

When adding a wildcard on a subdomain, here is how the following URLs will be resolved:

```

// (A) *.example.com/a
// (B) a.example.com/*

"a.example.com/a"
// -> B

```

## wrangler dev

* When running `wrangler dev --remote`, all outgoing requests are given the `cf-workers-preview-token` header, which Cloudflare recognizes as a preview request. This applies to the entire Cloudflare network, so HTTP requests made to other Cloudflare zones are currently discarded for security reasons. As a workaround, delete the header before making the subrequest in your Worker script:

JavaScript

```

const request = new Request(url, incomingRequest);
request.headers.delete('cf-workers-preview-token');
return await fetch(request);

```

## Fetch API in CNAME setup

When you make a subrequest using [fetch()](https://developers.cloudflare.com/workers/runtime-apis/fetch/) from a Worker, the Cloudflare DNS resolver is used. When a zone has a [Partial (CNAME) setup](https://developers.cloudflare.com/dns/zone-setups/partial-setup/), all hostnames that the Worker needs to be able to resolve require a dedicated DNS entry in Cloudflare's DNS setup. Otherwise the Fetch API call will fail with status code [530 (1016)](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-1xxx-errors/error-1016/).

Setup with missing DNS records in Cloudflare DNS

```

// Zone in partial setup: example.com
// DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ...
// DNS records at Cloudflare DNS: sub1.example.com

"sub1.example.com/"
// -> Can be resolved by Fetch API
"sub2.example.com/"
// -> Cannot be resolved by Fetch API, will lead to 530 status code

```

After adding `sub2.example.com` to Cloudflare DNS

```

// Zone in partial setup: example.com
// DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ...
// DNS records at Cloudflare DNS: sub1.example.com, sub2.example.com

"sub1.example.com/"
// -> Can be resolved by Fetch API
"sub2.example.com/"
// -> Can be resolved by Fetch API

```

## Fetch to IP addresses

For Workers subrequests, requests can only be made to URLs, not directly to IP addresses. To work around this limitation, [add an A or AAAA record to your zone ↗](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/) and then fetch that hostname instead.

For example, in the zone `example.com` create a record of type `A` with the name `server` and value `192.0.2.1`, and then use:

JavaScript

```

await fetch('http://server.example.com')

```

Do not use:

JavaScript

```

await fetch('http://192.0.2.1')

```


---

---
title: Limits
description: Cloudflare Workers plan and platform limits.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Limits

## Account plan limits

| Feature                                                                                                      | Workers Free | Workers Paid   |
| ------------------------------------------------------------------------------------------------------------ | ------------ | -------------- |
| [Requests](#daily-requests)                                                                                  | 100,000/day  | No limit       |
| [CPU time](#cpu-time)                                                                                        | 10 ms        | 5 min          |
| [Memory](#memory)                                                                                            | 128 MB       | 128 MB         |
| [Subrequests](#subrequests)                                                                                  | 50/request   | 10,000/request |
| [Simultaneous outgoing connections/request](#simultaneous-open-connections)                                  | 6            | 6              |
| [Environment variables](#environment-variables)                                                              | 64/Worker    | 128/Worker     |
| [Environment variable size](#environment-variables)                                                          | 5 KB         | 5 KB           |
| [Worker size](#worker-size)                                                                                  | 3 MB         | 10 MB          |
| [Worker startup time](#worker-startup-time)                                                                  | 1 second     | 1 second       |
| [Number of Workers](#number-of-workers)1                                                                     | 100          | 500            |
| Number of [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) per account | 5            | 250            |
| Number of [Static Asset](#static-assets) files per Worker version                                            | 20,000       | 100,000        |
| Individual [Static Asset](#static-assets) file size                                                          | 25 MiB       | 25 MiB         |

1 If you reach this limit, consider using [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/).

Need a higher limit?

To request an adjustment to a limit, complete the [Limit Increase Request Form ↗](https://forms.gle/ukpeZVLWLnKeixDu7). If the limit can be increased, Cloudflare will contact you with next steps.

---

## Request and response limits

| Limit                | Value             |
| -------------------- | ----------------- |
| URL size             | 16 KB             |
| Request header size  | 128 KB (total)    |
| Response header size | 128 KB (total)    |
| Response body size   | No enforced limit |

Request body size limits depend on your Cloudflare account plan, not your Workers plan. Requests exceeding these limits return a `413 Request entity too large` error.

| Cloudflare Plan | Maximum request body size |
| --------------- | ------------------------- |
| Free            | 100 MB                    |
| Pro             | 100 MB                    |
| Business        | 200 MB                    |
| Enterprise      | 500 MB (by default)       |

Enterprise customers can contact their account team or [Cloudflare Support](https://developers.cloudflare.com/support/contacting-cloudflare-support/) for a higher request body limit.

Cloudflare does not enforce response body size limits. [CDN cache limits](https://developers.cloudflare.com/cache/concepts/default-cache-behavior/) apply: 512 MB for Free, Pro, and Business plans, and 5 GB for Enterprise.

---

## CPU time

CPU time measures how long the CPU spends executing your Worker code. Waiting on network requests (such as `fetch()` calls, KV reads, or database queries) does **not** count toward CPU time.

| Limit                     | Workers Free | Workers Paid                                                |
| ------------------------- | ------------ | ----------------------------------------------------------- |
| CPU time per HTTP request | 10 ms        | 5 min (default: 30 seconds)                                 |
| CPU time per Cron Trigger | 10 ms        | 30 seconds (< 1 hour interval)  15 min (>= 1 hour interval) |

Most Workers consume very little CPU time. The average Worker uses approximately 2.2 ms per request. Heavier workloads that handle authentication, server-side rendering, or parse large payloads typically use 10-20 ms.

Each [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker starts hitting the limit consistently, its execution will be terminated according to the limit configured.

#### Error: exceeded CPU time limit

When a Worker exceeds its CPU time limit, Cloudflare returns **Error 1102** to the client with the message `Worker exceeded resource limits`. In the dashboard, this appears as `Exceeded CPU Time Limits` under **Metrics** \> **Errors** \> **Invocation Statuses**. In analytics and Logpush, the invocation outcome is `exceededCpu`.

To resolve a CPU time limit error:

1. **Increase the CPU time limit** — On the Workers Paid plan, you can raise the limit from the default 30 seconds up to 5 minutes (300,000 ms). Set this in your Wrangler configuration or in the dashboard.
2. **Optimize your code** — Use [CPU profiling with DevTools](https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/) to identify CPU-intensive sections of your code.
3. **Offload work** — Move expensive computation to [Durable Objects](https://developers.cloudflare.com/durable-objects/) or process data in smaller chunks across multiple requests.

#### Increasing the CPU time limit

On the Workers Paid plan, you can increase the maximum CPU time from the default 30 seconds to 5 minutes (300,000 ms).

wrangler.jsonc

```
{
  // ...rest of your configuration...
  "limits": {
    "cpu_ms": 300000, // default is 30000 (30 seconds)
  },
  // ...rest of your configuration...
}
```

wrangler.toml

```
[limits]
cpu_ms = 300_000
```

You can also change this in the dashboard: go to **Workers & Pages** \> select your Worker \> **Settings** \> adjust the CPU time limit.

#### Monitoring CPU usage

* **Workers Logs** — CPU time and wall time appear in the [invocation log](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#invocation-logs).
* **Tail Workers / Logpush** — CPU time and wall time appear at the top level of the [Workers Trace Events object](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/workers%5Ftrace%5Fevents/).
* **DevTools** — Use [CPU profiling with DevTools](https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/) locally to identify CPU-intensive sections of your code.

---

## Memory

| Limit              | Value  |
| ------------------ | ------ |
| Memory per isolate | 128 MB |

Each [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) can consume up to 128 MB of memory, including the JavaScript heap and [WebAssembly](https://developers.cloudflare.com/workers/runtime-apis/webassembly/) allocations. This limit is per-isolate, not per-invocation. A single isolate can handle many concurrent requests.

When an isolate exceeds 128 MB, the Workers runtime lets in-flight requests complete and creates a new isolate for subsequent requests. During extremely high load, the runtime may cancel some incoming requests to maintain stability.

#### Error: exceeded memory limit

When a Worker exceeds its memory limit, Cloudflare returns **Error 1102** to the client with the message `Worker exceeded resource limits`. In the dashboard, this appears as `Exceeded Memory` under **Metrics** \> **Errors** \> **Invocation Statuses**. In analytics and Logpush, the invocation outcome is `exceededMemory`.

You may also see the runtime error `Memory limit would be exceeded before EOF` when attempting to buffer a response body that exceeds the limit.

To resolve a memory limit error:

1. **Stream request and response bodies** — Use [TransformStream](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/) or [node:stream](https://developers.cloudflare.com/workers/runtime-apis/nodejs/streams/) instead of buffering entire payloads in memory.
2. **Avoid large in-memory objects** — Store large data in [KV](https://developers.cloudflare.com/kv/), [R2](https://developers.cloudflare.com/r2/), or [D1](https://developers.cloudflare.com/d1/) instead of holding it in Worker memory.
3. **Profile memory usage** — Use [memory profiling with DevTools](https://developers.cloudflare.com/workers/observability/dev-tools/memory-usage/) locally to identify leaks and high-memory allocations.
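
The streaming approach in the first point can be sketched as follows. This is a minimal example, not an official one: the uppercase transform is a hypothetical stand-in for any per-chunk processing, and in a Worker you would pass the transformed stream straight to `new Response()`.

```javascript
// Sketch: process a body chunk-by-chunk with a TransformStream instead of
// buffering it, so memory use stays flat regardless of body size.
// The uppercase transform stands in for any per-chunk processing.
function uppercaseStream(body) {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  return body.pipeThrough(
    new TransformStream({
      transform(chunk, controller) {
        // Each chunk is transformed and released; the full body is never
        // held in memory at once.
        const text = decoder.decode(chunk, { stream: true });
        controller.enqueue(encoder.encode(text.toUpperCase()));
      },
    })
  );
}

// In a Worker's fetch handler this would look like:
//   const upstream = await fetch(url);
//   return new Response(uppercaseStream(upstream.body), upstream);
```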

To view memory errors in the dashboard:

1. Go to [**Workers & Pages**](https://dash.cloudflare.com/?to=/:account/workers-and-pages).

2. Select the Worker you want to investigate.
3. Under **Metrics**, select **Errors** \> **Invocation Statuses** and examine **Exceeded Memory**.

---

## Duration

Duration measures wall-clock time from start to end of a Worker invocation. There is no hard limit on duration for HTTP-triggered Workers. As long as the client remains connected, the Worker can continue processing, making subrequests, and setting timeouts.

| Trigger type                                                                                       | Duration limit |
| -------------------------------------------------------------------------------------------------- | -------------- |
| HTTP request                                                                                       | No limit       |
| [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/)             | 15 min         |
| [Durable Object Alarm](https://developers.cloudflare.com/durable-objects/api/alarms/)              | 15 min         |
| [Queue Consumer](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) | 15 min         |

When the client disconnects, all tasks associated with that request are canceled. Use [event.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to delay cancellation for another 30 seconds or until the promise you pass to `waitUntil()` completes.
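
A minimal sketch of deferring background work with `waitUntil()`. The `LOGS` KV namespace binding here is hypothetical:

```javascript
// Sketch: return the response immediately and let waitUntil() keep a
// background task alive after the client disconnects.
// env.LOGS is a hypothetical KV namespace binding.
const worker = {
  async fetch(request, env, ctx) {
    const response = new Response('ok');
    // The promise passed to waitUntil() may keep running for up to
    // 30 seconds after the client disconnects.
    ctx.waitUntil(env.LOGS.put(Date.now().toString(), request.url));
    return response;
  },
};
// In a real Worker module, `worker` would be the default export.
```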

Note

Cloudflare updates the Workers runtime a few times per week. The runtime gives in-flight requests a 30-second grace period to finish. If a request does not finish within this time, the runtime terminates it. This scenario is very unlikely because it requires a long-running request to coincide with a runtime update.

---

## Daily requests

Workers scale automatically across the Cloudflare global network. There is no general limit on requests per second.

Accounts on the Workers Free plan have a daily request limit of 100,000 requests, resetting at midnight UTC. When a Worker exceeds this limit, Cloudflare returns **Error 1027**.

| Route mode  | Behavior                                                                      |
| ----------- | ----------------------------------------------------------------------------- |
| Fail open   | Bypasses the Worker. Requests behave as if no Worker is configured.           |
| Fail closed | Returns a Cloudflare 1027 error page. Use this for security-critical Workers. |

You can configure the fail mode by toggling the corresponding [route](https://developers.cloudflare.com/workers/configuration/routing/routes/).

---

## Subrequests

A subrequest is any request a Worker makes using the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) or to Cloudflare services like [R2](https://developers.cloudflare.com/r2/), [KV](https://developers.cloudflare.com/kv/), or [D1](https://developers.cloudflare.com/d1/).

| Limit                            | Workers Free | Workers Paid                              |
| -------------------------------- | ------------ | ----------------------------------------- |
| Subrequests per invocation       | 50           | 10,000 (up to 10M)                        |
| Subrequests to internal services | 1,000        | Matches configured limit (default 10,000) |

Each subrequest in a redirect chain counts against this limit. The total number of subrequests may exceed the number of `fetch()` calls in your code. You can change the subrequest limit per Worker using the [limits configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#limits) in your Wrangler configuration file.

There is no set time limit on individual subrequests. As long as the client remains connected, the Worker can continue making subrequests. When the client disconnects, all tasks are canceled. Use [event.waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to delay cancellation for up to 30 seconds.

### Worker-to-Worker subrequests

Use [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to send requests from one Worker to another on your account without going over the Internet.

Using the global [fetch()](https://developers.cloudflare.com/workers/runtime-apis/fetch/) to call another Worker on the same [zone](https://developers.cloudflare.com/fundamentals/concepts/accounts-and-zones/#zones) fails unless the target Worker is reachable on a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/#worker-to-worker-communication).
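
A minimal Service Binding sketch. The `AUTH` binding name and the `auth-worker` service it points to are hypothetical examples, not names from this document:

```javascript
// Sketch: call another Worker on the same account through a Service Binding
// rather than the public Internet. A hypothetical Wrangler configuration:
//   "services": [{ "binding": "AUTH", "service": "auth-worker" }]
const gateway = {
  async fetch(request, env) {
    // env.AUTH.fetch() is routed directly to the bound Worker without
    // leaving Cloudflare's network.
    const authResponse = await env.AUTH.fetch('https://auth/check', {
      headers: { Authorization: request.headers.get('Authorization') ?? '' },
    });
    if (!authResponse.ok) {
      return new Response('Unauthorized', { status: 401 });
    }
    return new Response('Hello from the gateway');
  },
};
```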

---

## Simultaneous open connections

Each Worker invocation can open up to six simultaneous connections. The following API calls count toward this limit:

* `fetch()` method of the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/)
* `get()`, `put()`, `list()`, and `delete()` methods of [Workers KV namespace objects](https://developers.cloudflare.com/kv/api/)
* `put()`, `match()`, and `delete()` methods of [Cache objects](https://developers.cloudflare.com/workers/runtime-apis/cache/)
* `list()`, `get()`, `put()`, `delete()`, and `head()` methods of [R2](https://developers.cloudflare.com/r2/)
* `send()` and `sendBatch()` methods of [Queues](https://developers.cloudflare.com/queues/)
* Opening a TCP socket using the [connect()](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) API

Outbound WebSocket connections also count toward this limit.

Once six connections are open, the runtime queues additional attempts until an existing connection closes. The runtime may close stalled connections (those not actively reading or writing) with a `Response closed due to connection limit` exception.

If you use `fetch()` but do not need the response body, call `response.body.cancel()` to free the connection:

TypeScript

```
const response = await fetch(url);

// Only read the response body for successful responses
if (response.ok) {
  // Call response.json(), response.text() or otherwise process the body
} else {
  // Explicitly cancel it to free the connection
  response.body.cancel();
}
```

If the system detects a deadlock (pending connection attempts with no in-progress reads or writes), it cancels the least-recently-used connection to unblock the Worker.

Note

The runtime measures simultaneous open connections from the top-level request. Workers triggered via [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) share the same connection limit.

---

## Environment variables

| Limit                                 | Workers Free | Workers Paid |
| ------------------------------------- | ------------ | ------------ |
| Variables per Worker (secrets + text) | 64           | 128          |
| Variable size                         | 5 KB         | 5 KB         |
| Variables per account                 | No limit     | No limit     |

---

## Worker size

| Limit                    | Workers Free | Workers Paid |
| ------------------------ | ------------ | ------------ |
| After compression (gzip) | 3 MB         | 10 MB        |
| Before compression       | 64 MB        | 64 MB        |

Larger Worker bundles can impact startup time. To check your compressed bundle size:

Terminal window

```
wrangler deploy --outdir bundled/ --dry-run
```

```
# Output will resemble the below:
Total Upload: 259.61 KiB / gzip: 47.23 KiB
```

To reduce Worker size:

* Remove unnecessary dependencies and packages.
* Store configuration files, static assets, and binary data in [KV](https://developers.cloudflare.com/kv/), [R2](https://developers.cloudflare.com/r2/), [D1](https://developers.cloudflare.com/d1/), or [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) instead of bundling them.
* Split functionality across multiple Workers using [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/).

---

## Worker startup time

| Limit        | Value    |
| ------------ | -------- |
| Startup time | 1 second |

A Worker must parse and execute its global scope (top-level code outside of handlers) within 1 second. Larger bundles and expensive initialization code in global scope increase startup time.

When the platform rejects a deployment because the Worker exceeds the startup time limit, the validation returns the error `Script startup exceeded CPU time limit` (error code `10021`). Wrangler automatically generates a CPU profile that you can import into Chrome DevTools or open in VS Code. Refer to [wrangler check startup](https://developers.cloudflare.com/workers/wrangler/commands/general/#startup) for more details.

To measure startup time, run `npx wrangler@latest deploy` or `npx wrangler@latest versions upload`. Wrangler reports `startup_time_ms` in the output.

To reduce startup time, avoid expensive work in global scope. Move initialization logic into your handler or to build time. For example, generating or consuming a large schema at the top level is a common cause of exceeding this limit.
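
One common way to move work out of the global scope is lazy initialization cached across requests. This is a sketch: `buildSchema` is a hypothetical stand-in for any costly setup.

```javascript
// Sketch: keep the global scope cheap and defer expensive setup into the
// handler, cached across requests served by the same isolate.
let schema; // intentionally left uninitialized at startup

function buildSchema() {
  // Placeholder for real work, such as parsing a large schema file.
  return { loaded: true, routes: ['/a', '/b'] };
}

const worker = {
  async fetch(request, env) {
    // The first request pays the cost once; later requests reuse the cache.
    schema ??= buildSchema();
    return new Response(JSON.stringify(schema));
  },
};
// In a real Worker module, `worker` would be the default export.
```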

Need a higher limit?

To request an adjustment to a limit, complete the [Limit Increase Request Form ↗](https://forms.gle/ukpeZVLWLnKeixDu7). If the limit can be increased, Cloudflare will contact you with next steps.

---

## Number of Workers

| Limit               | Workers Free | Workers Paid |
| ------------------- | ------------ | ------------ |
| Workers per account | 100          | 500          |

If you need more than 500 Workers, consider using [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/).

---

## Routes and domains

| Limit                                                                                                      | Value |
| ---------------------------------------------------------------------------------------------------------- | ----- |
| [Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) per zone                 | 1,000 |
| Routes per zone ([wrangler dev --remote](#routes-remote-dev))                                              | 50    |
| [Custom domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) per zone | 100   |
| Routed zones per Worker                                                                                    | 1,000 |

### Routes with `wrangler dev --remote`

When you run a [remote development](https://developers.cloudflare.com/workers/development-testing/#remote-bindings) session using the `--remote` flag, Cloudflare enforces a limit of 50 routes per zone. The Quick Editor in the Cloudflare dashboard also uses `wrangler dev --remote`, so the same limit applies.

If your zone has more than 50 routes, you cannot run a remote session until you remove routes to get under the limit.

If you require more than 1,000 routes or 1,000 routed zones per Worker, consider using [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/). If you require more than 100 custom domains per zone, consider using a wildcard [route](https://developers.cloudflare.com/workers/configuration/routing/routes/).

---

## Cache API limits

| Feature             | Workers Free | Workers Paid |
| ------------------- | ------------ | ------------ |
| Maximum object size | 512 MB       | 512 MB       |
| Calls per request   | 50           | 1,000        |

Calls per request is the number of `put()`, `match()`, or `delete()` Cache API calls per request. This shares the same quota as subrequests (`fetch()`).

Note

The size of chunked response bodies (`Transfer-Encoding: chunked`) is not known in advance. Calling `.put()` with such a response blocks subsequent `.put()` calls until the current one completes.
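
A typical read-through caching handler, annotated with which quota each call consumes. This is a sketch under the assumption of a simple GET passthrough, not an official example:

```javascript
// Sketch: a read-through cache. Comments note which per-request quota each
// call counts against; caches.default is the Workers Cache API.
const worker = {
  async fetch(request, env, ctx) {
    const cache = caches.default;

    const cached = await cache.match(request); // one Cache API call
    if (cached) return cached;

    const response = await fetch(request.url); // one subrequest
    // Store a copy without delaying the response to the client.
    ctx.waitUntil(cache.put(request, response.clone())); // one Cache API call
    return response;
  },
};
```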

---

## Log size

| Limit                | Value  |
| -------------------- | ------ |
| Log data per request | 256 KB |

This limit covers all data emitted via `console.log()` statements, exceptions, request metadata, and headers for a single request. After exceeding this limit, the system does not record additional context for that request in logs, tail logs, or [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/).

Refer to the [Workers Trace Event Logpush documentation](https://developers.cloudflare.com/workers/observability/logs/logpush/#limits) for limits on fields sent to Logpush destinations.

---

## Image Resizing with Workers

Refer to the [Image Resizing documentation](https://developers.cloudflare.com/images/transform-images/) for limits that apply when using Image Resizing with Workers.

---

## Static Assets

| Limit                           | Workers Free | Workers Paid |
| ------------------------------- | ------------ | ------------ |
| Files per Worker version        | 20,000       | 100,000      |
| Individual file size            | 25 MiB       | 25 MiB       |
| \_headers rules                 | 100          | 100          |
| \_headers characters per line   | 2,000        | 2,000        |
| \_redirects static redirects    | 2,000        | 2,000        |
| \_redirects dynamic redirects   | 100          | 100          |
| \_redirects total               | 2,100        | 2,100        |
| \_redirects characters per rule | 1,000        | 1,000        |

Note

To use the increased file count limits in Wrangler, you must use version 4.34.0 or higher.

---

## Unbound and Bundled plan limits

Note

Unbound and Bundled plans have been deprecated and are no longer available for new accounts.

If your Worker is on an Unbound plan, limits match the Workers Paid plan.

If your Worker is on a Bundled plan, limits match the Workers Paid plan with these exceptions:

| Feature                  | Bundled plan limit |
| ------------------------ | ------------------ |
| Subrequests              | 50/request         |
| CPU time (HTTP requests) | 50 ms              |
| CPU time (Cron Triggers) | 50 ms              |
| Cache API calls/request  | 50                 |

Bundled plan Workers have no duration limits for [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/), [Durable Object Alarms](https://developers.cloudflare.com/durable-objects/api/alarms/), or [Queue Consumers](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer).

---

## Wall time limits by invocation type

Wall time (also called wall-clock time) is the total elapsed time from the start to end of an invocation, including time spent waiting on network requests, I/O, and other asynchronous operations. This is distinct from [CPU time](https://developers.cloudflare.com/workers/platform/limits/#cpu-time), which only measures time the CPU spends actively executing your code.

The following table summarizes the wall time limits for different types of Worker invocations across the developer platform:

| Invocation type                                                                                     | Wall time limit | Details                                                                                                                                                                                                                                          |
| --------------------------------------------------------------------------------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Incoming HTTP request                                                                               | Unlimited       | No hard limit while the client remains connected. When the client disconnects, tasks are canceled unless you call [waitUntil()](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to extend execution by up to 30 seconds. |
| [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/)             | 15 minutes      | Scheduled Workers have a maximum wall time of 15 minutes per invocation.                                                                                                                                                                         |
| [Queue consumers](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) | 15 minutes      | Each consumer invocation has a maximum wall time of 15 minutes.                                                                                                                                                                                  |
| [Durable Object alarm handlers](https://developers.cloudflare.com/durable-objects/api/alarms/)      | 15 minutes      | Alarm handler invocations have a maximum wall time of 15 minutes.                                                                                                                                                                                |
| [Durable Objects](https://developers.cloudflare.com/durable-objects/) (RPC / HTTP)                  | Unlimited       | No hard limit while the caller stays connected to the Durable Object.                                                                                                                                                                            |
| [Workflows](https://developers.cloudflare.com/workflows/) (per step)                                | Unlimited       | Each step can run for an unlimited wall time. Individual steps are subject to the configured [CPU time limit](https://developers.cloudflare.com/workers/platform/limits/#cpu-time).                                                              |

---

## Related resources

* [KV limits](https://developers.cloudflare.com/kv/platform/limits/)
* [Durable Object limits](https://developers.cloudflare.com/durable-objects/platform/limits/)
* [Queues limits](https://developers.cloudflare.com/queues/platform/limits/)
* [Workers errors reference](https://developers.cloudflare.com/workers/observability/errors/)


---

---
title: Pricing
description: Workers plans and pricing information.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Pricing

By default, users have access to the Workers Free plan. The Workers Free plan includes limited usage of Workers, Pages Functions, Workers KV and Hyperdrive. Read more about the [Free plan limits](https://developers.cloudflare.com/workers/platform/limits/#account-plan-limits).

The Workers Paid plan includes Workers, Pages Functions, Workers KV, Hyperdrive, and Durable Objects usage for a minimum charge of $5 USD per month for an account. The plan includes increased initial usage allotments, with clear charges for usage that exceeds the base plan. There are no additional charges for data transfer (egress) or throughput (bandwidth).

All included usage is on a monthly basis.

Pages Functions billing

All [Pages Functions](https://developers.cloudflare.com/pages/functions/) are billed as Workers. All pricing and inclusions in this document apply to Pages Functions. Refer to [Functions Pricing](https://developers.cloudflare.com/pages/functions/pricing/) for more information on Pages Functions pricing.

## Workers

Users on the Workers Paid plan have access to the Standard usage model. Workers Enterprise accounts are billed based on the usage model specified in their contract. To switch to the Standard usage model, contact your Account Manager.

| Requests1, 2, 3 | Duration                                                     | CPU time                        |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| --------------- | ------------------------------------------------------------ | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Free**        | 100,000 per day                                              | No charge for duration          | 10 milliseconds of CPU time per invocation                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| **Standard**    | 10 million included per month  +$0.30 per additional million | No charge or limit for duration | 30 million CPU milliseconds included per month +$0.02 per additional million CPU milliseconds Max of [5 minutes of CPU time](https://developers.cloudflare.com/workers/platform/limits/#account-plan-limits) per invocation (default: 30 seconds) Max of 15 minutes of CPU time per [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) or [Queue Consumer](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) invocation |

1 Inbound requests to your Worker. Cloudflare does not bill for [subrequests](https://developers.cloudflare.com/workers/platform/limits/#subrequests) you make from your Worker.

2 WebSocket connections made to a Worker are charged as a request, representing the initial `Upgrade` connection made to establish the WebSocket. WebSocket messages routed through a Worker do not count as requests.

3 Requests to static assets are free and unlimited.

### Example pricing

#### Example 1

A Worker that serves 15 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:

| Monthly Costs    | Formula |                                                                                                           |
| ---------------- | ------- | --------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00   |                                                                                                           |
| **Requests**     | $1.50   | (15,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30                                 |
| **CPU time**     | $1.50   | ((7 ms of CPU time per request \* 15,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total**        | $8.00   |                                                                                                           |
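The formula column above can be expressed as a small calculator. A sketch (the function name and structure are illustrative, not part of any Cloudflare API):

```javascript
// Workers Standard pricing, per the table above:
// $5.00 subscription, 10M requests and 30M CPU ms included per month,
// then $0.30 per million requests and $0.02 per million CPU ms.
const INCLUDED_REQUESTS = 10_000_000;
const INCLUDED_CPU_MS = 30_000_000;

function estimateWorkersCost(requests, avgCpuMsPerRequest) {
  const billableRequests = Math.max(0, requests - INCLUDED_REQUESTS);
  const billableCpuMs = Math.max(0, requests * avgCpuMsPerRequest - INCLUDED_CPU_MS);
  return 5.0 + (billableRequests / 1_000_000) * 0.3 + (billableCpuMs / 1_000_000) * 0.02;
}

// Example 1: 15 million requests at 7 ms of CPU time each
console.log(estimateWorkersCost(15_000_000, 7).toFixed(2)); // "8.00"
```

The same function reproduces Example 4 below: 100 million requests at 7 ms each comes to $45.40.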

#### Example 2

A project serves 15 million requests per month, with 80% (12 million) of those requests serving [static assets](https://developers.cloudflare.com/workers/static-assets/) and the remaining 20% (3 million) invoking dynamic Worker code. The Worker uses an average of 7 milliseconds (ms) of CPU time per request.

Requests to static assets are free and unlimited. This project would have the following estimated costs:

|                               | Monthly Costs | Formula                                                                        |
| ----------------------------- | ------------- | ------------------------------------------------------------------------------ |
| **Subscription**              | $5.00         |                                                                                |
| **Requests to static assets** | $0            | 12,000,000 static asset requests are free and unlimited                        |
| **Requests to Worker**        | $0            | 3,000,000 requests fall within the 10,000,000 included requests                |
| **CPU time**                  | $0            | 7 ms \* 3,000,000 requests = 21,000,000 CPU ms, within the 30,000,000 included |
| **Total**                     | $5.00         |                                                                                |

#### Example 3

Consider a Worker that runs on a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) once an hour to collect data from multiple APIs, process the data, and create a report:

* 720 requests/month
* 3 minutes (180,000ms) of CPU time per request

In this scenario, the estimated monthly cost would be calculated as:

|                  | Monthly Costs | Formula                                                                                                  |
| ---------------- | ------------- | -------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00   |                                                                                                          |
| **Requests**     | $0.00   | 720 requests fall within the 10,000,000 included requests                                                |
| **CPU time**     | $1.99   | ((180,000 ms of CPU time per request \* 720 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total**        | $6.99   |                                                                                                          |

#### Example 4

A high traffic Worker that serves 100 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:

|                  | Monthly Costs | Formula                                                                                                    |
| ---------------- | ------------- | ---------------------------------------------------------------------------------------------------------- |
| **Subscription** | $5.00   |                                                                                                            |
| **Requests**     | $27.00  | (100,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30                                 |
| **CPU time**     | $13.40  | ((7 ms of CPU time per request \* 100,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total**        | $45.40  |                                                                                                            |

Custom limits

To prevent accidental runaway bills or denial-of-wallet attacks, configure the maximum amount of CPU time that can be used per invocation by [defining limits in your Worker's Wrangler file](https://developers.cloudflare.com/workers/wrangler/configuration/#limits), or via the Cloudflare dashboard (**Workers & Pages** \> Select your Worker > **Settings** \> **CPU Limits**).
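For example, a per-invocation CPU cap can be set in a Worker's `wrangler.toml` under the `limits` key (the 100 ms value below is illustrative):

```toml
# Cap CPU time per invocation for this Worker (value in milliseconds).
[limits]
cpu_ms = 100
```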

If you had a Worker on the Bundled usage model prior to the migration to Standard pricing on March 1, 2024, Cloudflare automatically added a 50 ms CPU limit to your Worker.

### How to switch usage models

Note

Some Workers Enterprise customers maintain the ability to change usage models.

Users on the Workers Paid plan have access to the Standard usage model. However, some users may still have a legacy usage model configured. Legacy usage models include Workers Unbound and Workers Bundled. Users are advised to move to the Workers Standard usage model. Changing the usage model only affects billable usage, and has no technical implications.

To change your default account-wide usage model:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. Find **Usage Model** on the right-side menu > **Change**.

Usage models may be changed at the individual Worker level:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select your Worker > **Settings** \> **Usage Model**.

Existing Workers will not be impacted when changing the default usage model. You may change the usage model for individual Workers without affecting your account-wide default usage model.

## Workers Logs

Workers Logs is included in both the Free and Paid [Workers plans](https://developers.cloudflare.com/workers/platform/pricing/).

|                    | Log Events Written                                           | Retention |
| ------------------ | ------------------------------------------------------------ | --------- |
| **Workers Free**   | 200,000 per day                                              | 3 Days |
| **Workers Paid**   | 20 million included per month  +$0.60 per additional million | 7 Days |

Workers Logs documentation

For more information and [examples of Workers Logs billing](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#example-pricing), refer to the [Workers Logs documentation](https://developers.cloudflare.com/workers/observability/logs/workers-logs).

## Workers Trace Events Logpush

Workers Logpush is only available on the Workers Paid plan.

|            | Paid plan                          |
| ---------- | ---------------------------------- |
| Requests 1 | 10 million / month, +$0.05/million |

1 Workers Logpush charges for request logs that reach your end destination after applying filtering or sampling.

## Workers KV

Workers KV is included in both the Free and Paid [Workers plans](https://developers.cloudflare.com/workers/platform/pricing/).

|               | Free plan1    | Paid plan                         |
| ------------- | ------------- | --------------------------------- |
| Keys read     | 100,000 / day | 10 million/month, + $0.50/million |
| Keys written  | 1,000 / day   | 1 million/month, + $5.00/million  |
| Keys deleted  | 1,000 / day   | 1 million/month, + $5.00/million  |
| List requests | 1,000 / day   | 1 million/month, + $5.00/million  |
| Stored data   | 1 GB          | 1 GB, + $0.50/GB-month            |

1 The Workers Free plan includes limited Workers KV usage. All limits reset daily at 00:00 UTC. If you exceed any one of these limits, further operations of that type will fail with an error.

Note

Workers KV pricing for read, write, and delete operations is on a per-key basis. Bulk read operations are billed by the number of keys read in the bulk operation.

KV documentation

To learn more about KV, refer to the [KV documentation](https://developers.cloudflare.com/kv/).

## Hyperdrive

Hyperdrive is included in both the Free and Paid [Workers plans](https://developers.cloudflare.com/workers/platform/pricing/).

|                    | Free plan1    | Paid plan |
| ------------------ | ------------- | --------- |
| Database queries2  | 100,000 / day | Unlimited |

Footnotes

1: The Workers Free plan includes limited Hyperdrive usage. All limits reset daily at 00:00 UTC. If you exceed any one of these limits, further operations of that type will fail with an error.

2: Database queries refers to any database statement made via Hyperdrive, whether a query (`SELECT`), a modification (`INSERT`,`UPDATE`, or `DELETE`) or a schema change (`CREATE`, `ALTER`, `DROP`).


Hyperdrive documentation

To learn more about Hyperdrive, refer to the [Hyperdrive documentation](https://developers.cloudflare.com/hyperdrive/).

## Queues

Cloudflare Queues charges for the total number of operations against each of your queues during a given month.

* An operation is counted for each 64 KB of data that is written, read, or deleted.
* Messages larger than 64 KB are charged as if they were multiple messages: for example, a 65 KB message and a 127 KB message would both incur two operation charges when written, read, or deleted.
* A KB is defined as 1,000 bytes, and each message includes approximately 100 bytes of internal metadata.
* Operations are per message, not per batch. A batch of 10 messages (the default batch size), if processed, would incur 10x write, 10x read, and 10x delete operations: one for each message in the batch.
* There are no data transfer (egress) or throughput (bandwidth) charges.

|                     | Workers Free                   | Workers Paid                                                   |
| ------------------- | ------------------------------ | -------------------------------------------------------------- |
| Standard operations | 10,000 operations/day included | 1,000,000 operations/month included + $0.40/million operations |
| Message retention   | 24 hours (non-configurable)    | 4 days default, configurable up to 14 days                     |

In most cases, it takes 3 operations to deliver a message: 1 write, 1 read, and 1 delete. Therefore, you can use the following formula to estimate your monthly bill:

```
((Number of Messages * 3) - 1,000,000) / 1,000,000 * $0.40
```
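The same accounting can be sketched in JavaScript, including the per-64 KB chunking described above (the function names are illustrative):

```javascript
// One operation per 64 KB (64,000-byte) chunk, per write, read, and delete.
// Each message carries approximately 100 bytes of internal metadata.
function operationsPerMessage(messageBytes) {
  const chunks = Math.ceil((messageBytes + 100) / 64_000);
  return chunks * 3; // 1 write + 1 read + 1 delete per chunk
}

function estimateQueuesCost(messages, messageBytes = 1_000) {
  const totalOps = messages * operationsPerMessage(messageBytes);
  const billableOps = Math.max(0, totalOps - 1_000_000); // 1M included on Workers Paid
  return (billableOps / 1_000_000) * 0.4;
}

// 1 million small messages → 3 million operations
console.log(estimateQueuesCost(1_000_000).toFixed(2)); // "0.80"
```

A 65 KB message yields 2 chunks, so 6 operations across its write, read, and delete, matching the chunking rule above.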

Additionally:

* Each retry incurs a read operation. A batch of 10 messages that is retried would incur 10 operations for each retry.
* Messages that reach the maximum retries and that are written to a [Dead Letter Queue](https://developers.cloudflare.com/queues/configuration/batching-retries/) incur a write operation for each 64 KB chunk. A message that is retried 3 times (the default), fails delivery on the fourth attempt, and is written to a Dead Letter Queue would incur five (5) read operations.
* Messages that are written to a queue, but that reach the maximum persistence duration (or "expire") before they are read, incur only a write and delete operation per 64 KB chunk.

Queues billing examples

To learn more about Queues pricing and review billing examples, refer to [Queues Pricing](https://developers.cloudflare.com/queues/platform/pricing/).

## D1

D1 is available on both the Workers Free and Workers Paid plans.

|                         | [Workers Free](https://developers.cloudflare.com/workers/platform/pricing/#workers) | [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing/#workers) |
| ----------------------- | ----------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| Rows read                                                                           | 5 million / day                                                                     | First 25 billion / month included + $0.001 / million rows |
| Rows written                                                                        | 100,000 / day                                                                       | First 50 million / month included + $1.00 / million rows  |
| Storage (per GB stored)                                                             | 5 GB (total)                                                                        | First 5 GB included + $0.75 / GB-mo                       |

Track your D1 usage

To accurately track your usage, use the [meta object](https://developers.cloudflare.com/d1/worker-api/return-object/), [GraphQL Analytics API](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-via-the-graphql-api), or the [Cloudflare dashboard ↗](https://dash.cloudflare.com/?to=/:account/workers/d1/). Select your D1 database, then view: Metrics > Row Metrics.

### Definitions

1. Rows read measure how many rows a query reads (scans), regardless of the size of each row. For example, if you have a table with 5000 rows and run a `SELECT * FROM table` as a full table scan, this would count as 5,000 rows read. A query that filters on an [unindexed column](https://developers.cloudflare.com/d1/best-practices/use-indexes/) may return fewer rows to your Worker, but is still required to read (scan) more rows to determine which subset to return.
2. Rows written measure how many rows were written to the D1 database. Write operations include `INSERT`, `UPDATE`, and `DELETE`, and each contributes towards rows written. A query that `INSERT`s 10 rows into a `users` table counts as 10 rows written.
3. DDL operations (for example, `CREATE`, `ALTER`, and `DROP`) are used to define or modify the structure of a database. They may contribute to a mix of read rows and write rows. Ensure you are accurately tracking your usage through the available tools ([meta object](https://developers.cloudflare.com/d1/worker-api/return-object/), [GraphQL Analytics API](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-via-the-graphql-api), or the [Cloudflare dashboard ↗](https://dash.cloudflare.com/?to=/:account/workers/d1/)).
4. Row size or the number of columns in a row does not impact how rows are counted. A row that is 1 KB and a row that is 100 KB both count as one row.
5. Defining [indexes](https://developers.cloudflare.com/d1/best-practices/use-indexes/) on your table(s) reduces the number of rows read by a query when filtering on that indexed field. For example, if the `users` table has an index on a timestamp column `created_at`, the query `SELECT * FROM users WHERE created_at > ?1` would only need to read a subset of the table.
6. Indexes will add an additional written row when writes include the indexed column, as there are two rows written: one to the table itself, and one to the index. The performance benefit of an index and reduction in rows read will, in nearly all cases, offset this additional write.
7. Storage is based on gigabytes stored per month, and is based on the sum of all databases in your account. Tables and indexes both count towards storage consumed.
8. Free limits reset daily at 00:00 UTC. Monthly included limits reset based on your monthly subscription renewal date, which is determined by the day you first subscribed.
9. There are no data transfer (egress) or throughput (bandwidth) charges for data accessed from D1.
10. [Read replication](https://developers.cloudflare.com/d1/best-practices/read-replication/) does not charge extra for read replicas. You incur the same usage billing based on `rows_read` and `rows_written` by your queries.
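Putting the Workers Paid rates from the table together, a hedged estimate of monthly usage charges (the helper name is illustrative):

```javascript
// D1 on Workers Paid: first 25B rows read, 50M rows written, and 5 GB
// of storage included per month, then the overage rates from the table.
function estimateD1Cost(rowsRead, rowsWritten, gbStored) {
  const readCost = (Math.max(0, rowsRead - 25_000_000_000) / 1_000_000) * 0.001;
  const writeCost = (Math.max(0, rowsWritten - 50_000_000) / 1_000_000) * 1.0;
  const storageCost = Math.max(0, gbStored - 5) * 0.75;
  return readCost + writeCost + storageCost;
}

// 30B rows read, 60M rows written, 10 GB stored:
// $5.00 (reads) + $10.00 (writes) + $3.75 (storage)
console.log(estimateD1Cost(30_000_000_000, 60_000_000, 10).toFixed(2)); // "18.75"
```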

D1 billing

Refer to [D1 Pricing](https://developers.cloudflare.com/d1/platform/pricing/) to learn more about how D1 is billed.

## Durable Objects

Note

Durable Objects are available both on Workers Free and Workers Paid plans.

* **Workers Free plan**: Only Durable Objects with [SQLite storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#create-migration) are available.
* **Workers Paid plan**: Durable Objects with either SQLite storage backend or [key-value storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#create-durable-object-class-with-key-value-storage) are available.

If you wish to downgrade from a Workers Paid plan to a Workers Free plan, you must first ensure that you have deleted all Durable Object namespaces with the key-value storage backend.

### Compute billing

Durable Objects are billed for compute duration (wall-clock time) while the Durable Object is actively running or is idle in memory but unable to [hibernate](https://developers.cloudflare.com/durable-objects/concepts/durable-object-lifecycle/). Durable Objects that are idle and eligible for hibernation are not billed for duration, even before the runtime has hibernated them. Requests to a Durable Object keep it active or create the object if it was inactive.

|           | Free plan         | Paid plan                                                                                                            |
| --------- | ----------------- | -------------------------------------------------------------------------------------------------------------------- |
| Requests  | 100,000 / day     | 1 million / month, + $0.15/million Includes HTTP requests, RPC sessions1, WebSocket messages2, and alarm invocations |
| Duration3 | 13,000 GB-s / day | 400,000 GB-s / month, + $12.50/million GB-s4,5                                                                       |

Footnotes

1 Each [RPC session](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/) is billed as one request to your Durable Object. Every [RPC method call](https://developers.cloudflare.com/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/) on a [Durable Objects stub](https://developers.cloudflare.com/durable-objects/) is its own RPC session and therefore a single billed request.

RPC method calls can return objects (stubs) extending [RpcTarget](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/#lifetimes-memory-and-resource-management) and invoke calls on those stubs. Subsequent calls on the returned stub are part of the same RPC session and are not billed as separate requests. For example:

JavaScript

```
let durableObjectStub = OBJECT_NAMESPACE.get(id); // retrieve Durable Object stub
using foo = await durableObjectStub.bar(); // billed as a request
await foo.baz(); // treated as part of the same RPC session created by calling bar(), not billed as a request
await durableObjectStub.cat(); // billed as a request
```

2 A request is needed to create a WebSocket connection. There is no charge for outgoing WebSocket messages, nor for incoming [WebSocket protocol pings ↗](https://www.rfc-editor.org/rfc/rfc6455#section-5.5.2). For billing purposes only, a 20:1 ratio is applied to incoming WebSocket messages to account for the smaller messages typical of real-time communication. For example, 100 incoming WebSocket messages are charged as 5 requests. The 20:1 ratio does not affect Durable Objects metrics and analytics, which reflect actual usage.

3 Application-level auto-response messages handled by [state.setWebSocketAutoResponse()](https://developers.cloudflare.com/durable-objects/best-practices/websockets/) do not incur additional wall-clock time and are not charged.

4 Duration is billed in wall-clock time as long as the Object is active and not eligible for hibernation, but is shared across all requests active on an Object at once. Calling `accept()` on a WebSocket in an Object will incur duration charges for the entire time the WebSocket is connected. It is recommended to use the WebSocket Hibernation API to avoid incurring duration charges once all event handlers finish running. For a complete explanation, refer to [When does a Durable Object incur duration charges?](https://developers.cloudflare.com/durable-objects/platform/pricing/#when-does-a-durable-object-incur-duration-charges).

5 Duration billing charges for the 128 MB of memory your Durable Object is allocated, regardless of actual usage. If your account creates many instances of a single Durable Object class, Durable Objects may run in the same isolate on the same physical machine and share the 128 MB of memory. These Durable Objects are still billed as if they are allocated a full 128 MB of memory.
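Footnotes 2, 4, and 5 combine into a simple duration estimate. A sketch (the function names are illustrative, and the rounding applied to WebSocket messages is an assumption):

```javascript
// Duration: every active second is billed at the full 128 MB allocation,
// i.e. 128/1024 = 0.125 GB-s per second, with 400,000 GB-s included per
// month on Workers Paid, then $12.50 per million GB-s.
function durationGbSeconds(activeSeconds) {
  return activeSeconds * (128 / 1024);
}

function estimateDurationCost(activeSeconds) {
  const billable = Math.max(0, durationGbSeconds(activeSeconds) - 400_000);
  return (billable * 12.5) / 1_000_000;
}

// Footnote 2: incoming WebSocket messages are billed at a 20:1 ratio.
function billedWebSocketRequests(incomingMessages) {
  return Math.ceil(incomingMessages / 20); // 100 messages → 5 billed requests
}

// 10,000,000 active seconds → 1,250,000 GB-s → $10.625 in overage
console.log(estimateDurationCost(10_000_000));
```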

### Storage billing

The [Durable Objects Storage API](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/) is only accessible from within Durable Objects. Pricing depends on the storage backend of your Durable Objects.

* **SQLite-backed Durable Objects (recommended)**: [SQLite storage backend](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class) is recommended for all new Durable Object classes. Workers Free plan can only create and access SQLite-backed Durable Objects.
* **Key-value backed Durable Objects**: [Key-value storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#create-durable-object-class-with-key-value-storage) is only available on the Workers Paid plan.

#### SQLite storage backend

Storage billing on SQLite-backed Durable Objects

Storage billing for SQLite-backed Durable Objects will be enabled in January 2026, with a target date of January 7, 2026 (no earlier). Only SQLite storage usage on and after the billing target date will incur charges. For more information, refer to [Billing for SQLite Storage](https://developers.cloudflare.com/changelog/2025-12-12-durable-objects-sqlite-storage-billing/).

|                      | Workers Free plan | Workers Paid plan                                         |
| -------------------- | ----------------- | --------------------------------------------------------- |
| Rows read 1,2        | 5 million / day   | First 25 billion / month included + $0.001 / million rows |
| Rows written 1,2,3,4 | 100,000 / day     | First 50 million / month included + $1.00 / million rows  |
| SQL stored data 5    | 5 GB (total)      | 5 GB-month, + $0.20/GB-month                              |

Footnotes

1 Rows read and rows written included limits and rates match [D1 pricing](https://developers.cloudflare.com/d1/platform/pricing/), Cloudflare's serverless SQL database.

2 Key-value methods like `get()`, `put()`, `delete()`, or `list()` store and query data in a hidden SQLite table and are billed as rows read and rows written.

3 Each `setAlarm()` is billed as a single row written.

4 Deletes are counted as rows written.

5 Durable Objects will be billed for stored data until the [data is removed](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#remove-a-durable-objects-storage). Once the data is removed, the object will be cleaned up automatically by the system.

#### Key-value storage backend

|                       | Workers Paid plan          |
| --------------------- | -------------------------- |
| Read request units1,2 | 1 million, + $0.20/million |
| Write request units3  | 1 million, + $1.00/million |
| Delete requests4      | 1 million, + $1.00/million |
| Stored data5          | 1 GB, + $0.20/ GB-month    |

Footnotes

1 A request unit is defined as 4 KB of data read or written. A request that writes or reads more than 4 KB will consume multiple units, for example, a 9 KB write will consume 3 write request units.

2 List operations are billed by read request units, based on the amount of data examined. For example, a list request that returns a combined 80 KB of keys and values will be billed 20 read request units. A list request that does not return anything is billed for 1 read request unit.

3 Each `setAlarm` is billed as a single write request unit.

4 Delete requests are not metered by size. For example, deleting a 100 KB value is charged as one delete request.

5 Durable Objects will be billed for stored data until the data is removed. Once the data is removed, the object will be cleaned up automatically by the system.
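The request-unit rules in footnotes 1 and 2 can be sketched as (the helper name is illustrative):

```javascript
// A request unit is 4 KB of data read or written; a request touching more
// than 4 KB consumes multiple units, with a minimum of one unit (a list
// request that returns nothing is still billed one read request unit).
function requestUnits(kilobytes) {
  return Math.max(1, Math.ceil(kilobytes / 4));
}

console.log(requestUnits(9));  // 3 write request units for a 9 KB write
console.log(requestUnits(80)); // 20 read request units for an 80 KB list response
```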

Requests that hit the [Durable Objects in-memory cache](https://developers.cloudflare.com/durable-objects/reference/in-memory-state/) or that use the [multi-key versions of get()/put()/delete() methods](https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/) are billed the same as if they were a normal, individual request for each key.

Durable Objects billing examples

For more information and [examples of Durable Objects billing](https://developers.cloudflare.com/durable-objects/platform/pricing#compute-billing-examples), refer to [Durable Objects Pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/).

## Vectorize

Vectorize is available on both the Workers Free and Workers Paid plans.

|                                     | [Workers Free](https://developers.cloudflare.com/workers/platform/pricing/#workers) | [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing/#workers) |
| ----------------------------------- | ----------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| **Total queried vector dimensions**                                                 | 30 million queried vector dimensions / month                                        | First 50 million queried vector dimensions / month included + $0.01 per million |
| **Total stored vector dimensions**                                                  | 5 million stored vector dimensions                                                  | First 10 million stored vector dimensions + $0.05 per 100 million               |

### Calculating vector dimensions

To calculate your potential usage, calculate the queried vector dimensions and the stored vector dimensions, and multiply by the unit price. The formula is defined as `((queried vectors + stored vectors) * dimensions * ($0.01 / 1,000,000)) + (stored vectors * dimensions * ($0.05 / 100,000,000))`

* For example, inserting 10,000 vectors of 768 dimensions each, and querying those 1,000 times per day (30,000 times per month) would be calculated as `((30,000 + 10,000) * 768) = 30,720,000` queried dimensions and `(10,000 * 768) = 7,680,000` stored dimensions (within the included monthly allocation)
* Separately, and excluding the included monthly allocation, this would be calculated as `(30,000 + 10,000) * 768 * ($0.01 / 1,000,000) + (10,000 * 768 * ($0.05 / 100,000,000))` and sum to $0.31 per month.
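The same formula as runnable code, ignoring the included monthly allocations (as in the second bullet; the function name is illustrative):

```javascript
// Vectorize list pricing: $0.01 per million queried dimensions and
// $0.05 per 100 million stored dimensions.
function vectorizeMonthlyCost(queriedVectors, storedVectors, dimensions) {
  const queried = (queriedVectors + storedVectors) * dimensions * (0.01 / 1_000_000);
  const stored = storedVectors * dimensions * (0.05 / 100_000_000);
  return queried + stored;
}

// 30,000 queries and 10,000 stored vectors at 768 dimensions each
console.log(vectorizeMonthlyCost(30_000, 10_000, 768).toFixed(2)); // "0.31"
```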

## R2

R2 charges based on the total volume of data stored, along with two classes of operations on that data:

1. **Class A operations** which are more expensive and tend to mutate state.
2. **Class B operations** which tend to read existing state.

There are no charges for egress bandwidth.

|                                    | Free                        | Standard storage          | Infrequent Access storage |
| ---------------------------------- | --------------------------- | ------------------------- | ------------------------- |
| Storage                            | 10 GB-month / month         | $0.015 / GB-month         | $0.01 / GB-month         |
| Class A Operations                 | 1 million requests / month  | $4.50 / million requests  | $9.00 / million requests |
| Class B Operations                 | 10 million requests / month | $0.36 / million requests  | $0.90 / million requests |
| Data Retrieval (processing)        | None                        | None                      | $0.01 / GB               |
| Egress (data transfer to Internet) | Free                        | Free                      | Free                     |

R2 documentation

To learn more about R2 pricing, including billing examples, refer to [R2 Pricing](https://developers.cloudflare.com/r2/pricing/).

## Containers

Containers are billed for every 10ms that they are actively running at the following rates, with included monthly usage as part of the $5 USD per month [Workers Paid plan](https://developers.cloudflare.com/workers/platform/pricing/):

|                  | Memory                                                             | CPU                                                            | Disk                                                      |
| ---------------- | ------------------------------------------------------------------ | -------------------------------------------------------------- | --------------------------------------------------------- |
| **Free**         | N/A                                                                | N/A                                                            | N/A                                                       |
| **Workers Paid** | 25 GiB-hours/month included  +$0.0000025 per additional GiB-second | 375 vCPU-minutes/month \+ $0.000020 per additional vCPU-second | 200 GB-hours/month  +$0.00000007 per additional GB-second |

You only pay for what you use — charges start when a request is sent to the container or when it is manually started. Charges stop after the container instance goes to sleep, which can happen automatically after a timeout.
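As a rough sketch of the table above (the instance shape and helper name are illustrative; actual billing is metered per 10 ms of active time):

```javascript
// Included allotments converted to seconds:
// 25 GiB-hours = 90,000 GiB-s; 375 vCPU-minutes = 22,500 vCPU-s;
// 200 GB-hours = 720,000 GB-s.
function estimateContainerCost(activeSeconds, gibMemory, vcpus, gbDisk) {
  const mem = Math.max(0, activeSeconds * gibMemory - 90_000) * 0.0000025;
  const cpu = Math.max(0, activeSeconds * vcpus - 22_500) * 0.00002;
  const disk = Math.max(0, activeSeconds * gbDisk - 720_000) * 0.00000007;
  return mem + cpu + disk;
}

// 100 hours active with 1 GiB memory, 1/4 vCPU, 2 GB disk:
// ≈ $0.68 memory + $1.35 CPU + $0 disk ≈ $2.03
console.log(estimateContainerCost(100 * 3600, 1, 0.25, 2).toFixed(2));
```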

### Network Egress

Egress from Containers is priced at the following rates:

| Region                 | Price per GB | Included Allotment per month |
| ---------------------- | ------------ | ---------------------------- |
| North America & Europe | $0.025       | 1 TB                         |
| Oceania, Korea, Taiwan | $0.05        | 500 GB                       |
| Everywhere Else        | $0.04        | 500 GB                       |

Containers documentation

To learn more about Containers pricing, refer to [Containers Pricing](https://developers.cloudflare.com/containers/pricing/).

## Service bindings

Requests made from your Worker to another Worker via a [Service Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) do not incur additional request fees. This allows you to split functionality across multiple Workers without incurring additional costs.

For example, if Worker A makes a subrequest to Worker B via a Service Binding, or calls an RPC method provided by Worker B via a Service Binding, this is billed as:

* One request (for the initial invocation of Worker A)
* The total amount of CPU time used across both Worker A and Worker B

Only available on Workers Standard pricing

If your Worker is on the deprecated Bundled or Unbound pricing plans, incoming requests from Service Bindings are charged the same as requests from the Internet. In the example above, you would be charged for two requests, one to Worker A, and one to Worker B.

## Fine Print

Workers Paid plan is separate from any other Cloudflare plan (Free, Professional, Business) you may have. If you are an Enterprise customer, reach out to your account team to confirm pricing details.

Only requests that hit a Worker will count against your limits and your bill. Since Cloudflare Workers runs before the Cloudflare cache, the caching of a request still incurs costs. Refer to [Limits](https://developers.cloudflare.com/workers/platform/limits/) to review definitions and behavior after a limit is hit.


---

---
title: Choose a data or storage product
description: Storage and database options available on Cloudflare's developer platform.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Choose a data or storage product

This guide describes the storage & database products available as part of Cloudflare Workers, including recommended use-cases and best practices.

## Choose a storage product

The following table maps our storage & database products to common industry terms as well as recommended use-cases:

| Use-case                                  | Product                                                                           | Ideal for                                                                                                                                                     |
| ----------------------------------------- | --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Key-value storage                         | [Workers KV](https://developers.cloudflare.com/kv/)                               | Configuration data, service routing metadata, personalization (A/B testing)                                                                                   |
| Object storage / blob storage             | [R2](https://developers.cloudflare.com/r2/)                                       | User-facing web assets, images, machine learning and training datasets, analytics datasets, log and event data.                                               |
| Accelerate a Postgres or MySQL database   | [Hyperdrive](https://developers.cloudflare.com/hyperdrive/)                       | Connecting to an existing database in a cloud or on-premise using your existing database drivers & ORMs.                                                      |
| Global coordination & stateful serverless | [Durable Objects](https://developers.cloudflare.com/durable-objects/)             | Building collaborative applications; global coordination across clients; real-time WebSocket applications; strongly consistent, transactional storage.        |
| Lightweight SQL database                  | [D1](https://developers.cloudflare.com/d1/)                                       | Relational data, including user profiles, product listings and orders, and/or customer data.                                                                  |
| Task processing, batching and messaging   | [Queues](https://developers.cloudflare.com/queues/)                               | Background job processing (emails, notifications, APIs), message queuing, and deferred tasks.                                                                 |
| Vector search & embeddings queries        | [Vectorize](https://developers.cloudflare.com/vectorize/)                         | Storing [embeddings](https://developers.cloudflare.com/workers-ai/models/?tasks=Text+Embeddings) from AI models for semantic search and classification tasks. |
| Streaming ingestion                       | [Pipelines](https://developers.cloudflare.com/pipelines/)                         | Streaming data ingestion and processing, including clickstream analytics, telemetry/log data, and structured data for querying                                |
| Time-series metrics                       | [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) | Write and query high-cardinality time-series data, usage metrics, and service-level telemetry using Workers and/or SQL.                                       |

Applications can build on multiple storage & database products: for example, using Workers KV for session data; R2 for large file storage, media assets and user-uploaded files; and Hyperdrive to connect to a hosted Postgres or MySQL database.

Pages Functions

Storage options can also be used by your front-end application built with Cloudflare Pages. For more information on available storage options for Pages applications, refer to the [Pages Functions bindings documentation](https://developers.cloudflare.com/pages/functions/bindings/).

## SQL database options

There are three options for SQL-based databases available when building applications with Workers.

* **Hyperdrive** if you have an existing Postgres or MySQL database, require large (1TB, 100TB or more) single databases, and/or want to use your existing database tools. You can also connect Hyperdrive to database platforms like [PlanetScale ↗](https://planetscale.com/) or [Neon ↗](https://neon.tech/).
* **D1** for lightweight, serverless applications that are read-heavy, have global users that benefit from D1's [read replication](https://developers.cloudflare.com/d1/best-practices/read-replication/), and do not require you to manage and maintain a traditional RDBMS.
* **Durable Objects** for stateful serverless workloads, per-user or per-customer SQL state, and building distributed systems (D1 and Queues are built on Durable Objects) where Durable Object's [strict serializability ↗](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/) enables global ordering of requests and storage operations.

### Session storage

We recommend using [Workers KV](https://developers.cloudflare.com/kv/) for storing session data, credentials (API keys), and/or configuration data. These are typically read at high rates (thousands of RPS or more), are not typically modified (within KV's 1 write RPS per unique key limit), and do not need to be immediately consistent.

Frequently read keys benefit from KV's [internal cache](https://developers.cloudflare.com/kv/concepts/how-kv-works/), and repeated reads to these "hot" keys will typically see latencies in the 500µs to 10ms range.

Authentication frameworks like [OpenAuth ↗](https://openauth.js.org/docs/storage/cloudflare/) use Workers KV as session storage when deployed to Cloudflare, and [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) uses KV to securely store and distribute user credentials so that they can be validated as close to the user as possible and reduce overall latency.
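
A typical session read along these lines can be sketched as follows. `SESSIONS` is a hypothetical KV namespace binding; the `type` and `cacheTtl` options are part of the KV `get()` API, and `cacheTtl` (in seconds) asks KV to serve repeated reads of the same key from its internal cache.

```javascript
// Minimal sketch of reading session data from Workers KV.
// "SESSIONS" is a hypothetical KV namespace binding.
async function readSession(env, sessionId) {
  const session = await env.SESSIONS.get(`session:${sessionId}`, {
    type: 'json',   // parse the stored value as JSON
    cacheTtl: 300,  // serve repeated reads from KV's internal cache for 5 minutes
  });
  // KV returns null for missing keys; fall back to an anonymous session.
  return session ?? { anonymous: true };
}
```

Because KV is eventually consistent, this pattern fits data that is read often and written rarely, such as sessions and configuration.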

## Product overviews

### Workers KV

Workers KV is an eventually consistent key-value data store that caches on the Cloudflare global network.

It is ideal for projects that require:

* High volumes of reads and/or repeated reads to the same keys.
* Low-latency global reads (typically within 10ms for hot keys)
* Per-object time-to-live (TTL).
* Distributed configuration and/or session storage.

To get started with KV:

* Read how [KV works](https://developers.cloudflare.com/kv/concepts/how-kv-works/).
* Create a [KV namespace](https://developers.cloudflare.com/kv/concepts/kv-namespaces/).
* Review the [KV Runtime API](https://developers.cloudflare.com/kv/api/).
* Learn about KV [Limits](https://developers.cloudflare.com/kv/platform/limits/).

### R2

R2 is S3-compatible blob storage that allows developers to store large amounts of unstructured data without egress fees associated with typical cloud storage services.

It is ideal for projects that require:

* Storage for files which are infrequently accessed.
* Large object storage (for example, gigabytes or more per object).
* Strong consistency per object.
* Asset storage for websites (refer to [caching guide](https://developers.cloudflare.com/r2/buckets/public-buckets/#caching))

To get started with R2:

* Read the [Get started guide](https://developers.cloudflare.com/r2/get-started/).
* Learn about R2 [Limits](https://developers.cloudflare.com/r2/platform/limits/).
* Review the [R2 Workers API](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/).

### Durable Objects

Durable Objects provide low-latency coordination and consistent storage for the Workers platform through global uniqueness and a transactional storage API.

* Global Uniqueness guarantees that there will be a single instance of a Durable Object class with a given ID running at once, across the world. Requests for a Durable Object ID are routed by the Workers runtime to the Cloudflare data center that owns the Durable Object.
* The transactional storage API provides strongly consistent key-value storage to the Durable Object. Each Object can only read and modify keys associated with that Object. Execution of a Durable Object is single-threaded, but multiple request events may still be processed out-of-order from how they arrived at the Object.
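
The single-threaded, transactional model above can be sketched as a minimal counter object. The class name `Counter` is illustrative; the `state.storage` get/put interface follows the Durable Objects storage API shape.

```javascript
// A minimal Durable Object sketch: each object instance owns its own
// strongly consistent counter. "Counter" is an illustrative class name.
class Counter {
  constructor(state, env) {
    this.state = state;
  }
  async fetch(request) {
    // Reads and writes go to this object's private, transactional storage.
    // Because each object is single-threaded, this read-modify-write is safe.
    let value = (await this.state.storage.get('count')) ?? 0;
    value += 1;
    await this.state.storage.put('count', value);
    return new Response(String(value));
  }
}
```

Every request routed to the same Durable Object ID reaches this one instance, which is what makes the increment globally consistent without locks.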

It is ideal for projects that require:

* Real-time collaboration (such as a chat application or a game server).
* Consistent storage.
* Data locality.

To get started with Durable Objects:

* Read the [introductory blog post ↗](https://blog.cloudflare.com/introducing-workers-durable-objects/).
* Review the [Durable Objects documentation](https://developers.cloudflare.com/durable-objects/).
* Get started with [Durable Objects](https://developers.cloudflare.com/durable-objects/get-started/).
* Learn about Durable Objects [Limits](https://developers.cloudflare.com/durable-objects/platform/limits/).

### D1

[D1](https://developers.cloudflare.com/d1/) is Cloudflare’s native serverless database. With D1, you can create a database by importing data or defining your tables and writing your queries within a Worker or through the API.

D1 is ideal for:

* Persistent, relational storage for user data, account data, and other structured datasets.
* Use-cases that require querying across your data ad-hoc (using SQL).
* Workloads with a high ratio of reads to writes (most web applications).
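
A read-heavy query along these lines can be sketched with the D1 Workers binding API. `DB` is a hypothetical D1 binding name, and the table and columns are illustrative; `prepare()`/`bind()`/`all()` is the D1 query shape.

```javascript
// Sketch of querying D1 from a Worker. "DB" is a hypothetical D1 binding;
// the orders table and its columns are illustrative.
async function listOrders(env, userId) {
  const { results } = await env.DB
    .prepare('SELECT id, total FROM orders WHERE user_id = ?1 ORDER BY id')
    .bind(userId) // ?1 is replaced by the bound parameter, preventing SQL injection
    .all();
  return results;
}
```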

To get started with D1:

* Read [the documentation](https://developers.cloudflare.com/d1)
* Follow the [Get started guide](https://developers.cloudflare.com/d1/get-started/) to provision your first D1 database.
* Review the [D1 Workers Binding API](https://developers.cloudflare.com/d1/worker-api/).

Note

If your working data size exceeds 10 GB (the maximum size for a D1 database), consider splitting the database into multiple, smaller D1 databases.

### Queues

Cloudflare Queues allows developers to send and receive messages with guaranteed delivery. It integrates with [Cloudflare Workers](https://developers.cloudflare.com/workers), offers at-least-once delivery and message batching, and does not charge for egress bandwidth.

Queues is ideal for:

* Offloading work from a request to schedule later.
* Sending data from Worker to Worker (inter-service communication).
* Buffering or batching data before writing to upstream systems, including third-party APIs or [Cloudflare R2](https://developers.cloudflare.com/queues/examples/send-errors-to-r2/).
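
The producer/consumer split can be sketched as below. `JOBS` is a hypothetical queue binding name; in a real Worker, the `queue()` handler would live on the default export alongside `fetch()`.

```javascript
// Sketch of a Queues producer and consumer. "JOBS" is a hypothetical
// queue binding name configured in Wrangler configuration.
async function enqueueEmail(env, to) {
  // Producer side: push a message onto the queue and return immediately.
  await env.JOBS.send({ kind: 'email', to });
}

const consumer = {
  // Consumer side: invoked with a batch of messages pulled from the queue.
  async queue(batch, env) {
    for (const message of batch.messages) {
      // Process message.body here (send the email, call the API, ...),
      // then acknowledge it so it is not redelivered.
      message.ack();
    }
  },
};
```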

To get started with Queues:

* [Set up your first queue](https://developers.cloudflare.com/queues/get-started/).
* Learn more [about how Queues works](https://developers.cloudflare.com/queues/reference/how-queues-works/).

### Hyperdrive

Hyperdrive is a service that accelerates queries you make to MySQL and Postgres databases, making it faster to access your data from across the globe, irrespective of your users’ location.

Hyperdrive allows you to:

* Connect to an existing database from Workers without connection overhead.
* Cache frequent queries across Cloudflare's global network to reduce response times on highly trafficked content.
* Reduce load on your origin database with connection pooling.
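
Wiring Hyperdrive to a Worker is a configuration step: you create a Hyperdrive configuration pointing at your database, then reference it as a binding. A minimal sketch of the Wrangler configuration fragment, with a placeholder ID:

```json
{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-config-id>"
    }
  ]
}
```

Inside the Worker, the binding exposes a connection string (for example, `env.HYPERDRIVE.connectionString`) that you pass to your existing Postgres or MySQL driver in place of the origin database's credentials.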

To get started with Hyperdrive:

* [Connect Hyperdrive](https://developers.cloudflare.com/hyperdrive/get-started/) to your existing database.
* Learn more [about how Hyperdrive speeds up your database queries](https://developers.cloudflare.com/hyperdrive/concepts/how-hyperdrive-works/).

### Pipelines

Pipelines is a streaming ingestion service that allows you to ingest high volumes of real-time data without managing any infrastructure.

Pipelines allows you to:

* Ingest data at extremely high throughput (tens of thousands of records per second or more)
* Batch and write data directly to object storage, ready for querying
* (Future) Transform and aggregate data during ingestion

To get started with Pipelines:

* [Create a Pipeline](https://developers.cloudflare.com/pipelines/getting-started/) that can batch and write records to R2.

### Analytics Engine

Analytics Engine is Cloudflare's time-series and metrics database that allows you to write unlimited-cardinality analytics at scale using a built-in API to write data points from Workers and query that data using SQL directly.

Analytics Engine allows you to:

* Expose custom analytics to your own customers
* Build usage-based billing systems
* Understand the health of your service on a per-customer or per-user basis
* Add instrumentation to frequently called code paths, without impacting performance or overwhelming external analytics systems with events
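
Writing a data point from a hot code path can be sketched as below. `METRICS` is a hypothetical Analytics Engine dataset binding; `writeDataPoint()` takes string `blobs`, numeric `doubles`, and a high-cardinality `indexes` entry, and returns immediately without blocking the request.

```javascript
// Sketch of instrumenting a Worker with Analytics Engine.
// "METRICS" is a hypothetical dataset binding name.
function recordApiCall(env, customerId, path, durationMs) {
  env.METRICS.writeDataPoint({
    indexes: [customerId],  // high-cardinality index, e.g. one per customer
    blobs: [path],          // string dimensions to group by later in SQL
    doubles: [durationMs],  // numeric measurements to aggregate
  });
}
```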

Cloudflare uses Analytics Engine internally to store and produce per-product metrics for products like D1 and R2 at scale.

To get started with Analytics Engine:

* Learn how to [get started with Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/get-started/)
* See [an example of writing time-series data to Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/recipes/usage-based-billing-for-your-saas-product/)
* Understand the [SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/) for reading data from your Analytics Engine datasets

### Vectorize

Vectorize is a globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers and [Workers AI](https://developers.cloudflare.com/workers-ai/).

Vectorize allows you to:

* Store embeddings from any vector embeddings model (Bring Your Own embeddings) for semantic search and classification tasks.
* Add context to Large Language Model (LLM) queries by using vector search as part of a [Retrieval Augmented Generation](https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/) (RAG) workflow.
* [Filter on vector metadata](https://developers.cloudflare.com/vectorize/reference/metadata-filtering/) to reduce the search space and return more relevant results.
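
A metadata-filtered similarity query can be sketched as below. `INDEX` is a hypothetical Vectorize binding name and the `category` metadata field is illustrative; `query()` with `topK` and `filter` follows the Vectorize API shape.

```javascript
// Sketch of a semantic search over a Vectorize index. "INDEX" is a
// hypothetical Vectorize binding; the "category" metadata field is illustrative.
async function findSimilar(env, embedding) {
  const { matches } = await env.INDEX.query(embedding, {
    topK: 3,                        // return the 3 nearest vectors
    filter: { category: 'docs' },   // narrow the search space via metadata
    returnMetadata: true,
  });
  return matches.map((m) => ({ id: m.id, score: m.score }));
}
```

In a full RAG workflow, `embedding` would come from an embeddings model (for example, via Workers AI) run over the user's query.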

To get started with Vectorize:

* [Create your first vector database](https://developers.cloudflare.com/vectorize/get-started/intro/).
* Combine [Workers AI and Vectorize](https://developers.cloudflare.com/vectorize/get-started/embeddings/) to generate, store and query text embeddings.
* Learn more about [how vector databases work](https://developers.cloudflare.com/vectorize/reference/what-is-a-vector-database/).

## SQL in Durable Objects vs D1

Cloudflare Workers offers a SQLite-backed serverless database product - [D1](https://developers.cloudflare.com/d1/). How should you compare [SQLite in Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/) and D1?

**D1 is a managed database product.**

D1 fits into a familiar architecture for developers, where application servers communicate with a database over the network. Application servers are typically Workers; however, D1 also supports external, non-Worker access via an [HTTP API ↗](https://developers.cloudflare.com/api/resources/d1/subresources/database/methods/query/), which helps unlock [third-party tooling](https://developers.cloudflare.com/d1/reference/community-projects/#%5Ftop) support for D1.

D1 aims for a "batteries included" feature set, including the above HTTP API, [database schema management](https://developers.cloudflare.com/d1/reference/migrations/#%5Ftop), [data import/export](https://developers.cloudflare.com/d1/best-practices/import-export-data/), and [database query insights](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-insights).

With D1, your application code and SQL database queries are not colocated, which can impact application performance. If performance is a concern with D1, Workers has [Smart Placement](https://developers.cloudflare.com/workers/configuration/placement/#%5Ftop) to dynamically run your Worker in the best location to reduce total Worker request latency, considering everything your Worker talks to, including D1.

**SQLite in Durable Objects is a lower-level compute with storage building block for distributed systems.**

By design, Durable Objects can only be accessed from Workers.

Durable Objects require a bit more effort, but in return, give you more flexibility and control. With Durable Objects, you must implement two pieces of code that run in different places: a front-end Worker which routes incoming requests from the Internet to a unique Durable Object, and the Durable Object itself, which runs on the same machine as the SQLite database. You get to choose what runs where, and it may be that your application benefits from running some application business logic right next to the database.

With SQLite in Durable Objects, you may also need to build some of your own database tooling that comes out-of-the-box with D1.

SQL query pricing and limits are intended to be identical between D1 ([pricing](https://developers.cloudflare.com/d1/platform/pricing/), [limits](https://developers.cloudflare.com/d1/platform/limits/)) and SQLite in Durable Objects ([pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sqlite-storage-backend), [limits](https://developers.cloudflare.com/durable-objects/platform/limits/)).


---

---
title: Workers for Platforms
description: Deploy custom code on behalf of your users or let your users directly deploy their own code to your platform, managing infrastructure.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers for Platforms

Deploy custom code on behalf of your users or let your users directly deploy their own code to your platform, managing infrastructure.


---

---
title: How the Cache works
description: How Workers interacts with the Cloudflare cache.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# How the Cache works

Workers was designed and built on top of Cloudflare's global network to allow developers to interact directly with the Cloudflare cache. The cache can provide ephemeral, data center-local storage, as a convenient way to frequently access static or dynamic content.

By allowing developers to write to the cache, Workers provide a way to customize cache behavior on Cloudflare’s CDN. To learn about the benefits of caching, refer to the Learning Center’s article on [What is Caching? ↗](https://www.cloudflare.com/learning/cdn/what-is-caching/).

Cloudflare Workers run before the cache but can also be utilized to modify assets once they are returned from the cache. Modifying assets returned from cache allows for the ability to sign or personalize responses while also reducing load on an origin and reducing latency to the end user by serving assets from a nearby location.

## Interact with the Cloudflare Cache

Conceptually, there are two ways to interact with Cloudflare’s Cache using a Worker:

* Call [fetch()](https://developers.cloudflare.com/workers/runtime-apis/fetch/) in a Workers script. Requests proxied through Cloudflare are cached even without Workers according to a zone’s default or configured behavior (for example, static assets like files ending in `.jpg` are cached by default). Workers can further customize this behavior by:  
   * Setting Cloudflare cache rules (that is, operating on the `cf` object of a [request](https://developers.cloudflare.com/workers/runtime-apis/request/)).
* Store responses using the [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) from a Workers script. This allows caching responses that did not come from an origin and also provides finer control by:  
   * Customizing cache behavior of any asset by setting headers such as `Cache-Control` on the response passed to `cache.put()`.  
   * Caching responses generated by the Worker itself through `cache.put()`.
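
The Cache API style above can be sketched as a read-through helper. This is a minimal sketch: the cache object is passed in as a parameter so the same code works with `caches.default` or a namespace from `caches.open()`, and the stored response's `Cache-Control` header governs how long it is kept.

```javascript
// Sketch of the Cache API pattern: check the cache, and on a miss generate
// the response, store a clone, and return the original.
async function serveWithCache(cache, request, makeResponse) {
  const hit = await cache.match(request);
  if (hit) return hit;
  const response = await makeResponse();
  // clone() before the body is read, so one copy can be stored
  // while the other is returned to the client.
  await cache.put(request, response.clone());
  return response;
}
```

Note that `cache.put()` stores the response only in the local data center; a different data center will miss and regenerate it.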

Tiered caching

The Cache API is not compatible with tiered caching. To take advantage of tiered caching, use the [fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/).

### Single-file purge for assets cached by a Worker

When using single-file purge to purge assets cached by a Worker, make sure not to purge the end user URL. Instead, purge the URL that is in the `fetch` request. For example, you have a Worker that runs on `https://example.com/hello` and this Worker makes a `fetch` request to `https://notexample.com/hello`.

As far as cache is concerned, the asset in the `fetch` request (`https://notexample.com/hello`) is the asset that is cached. To purge it, you need to purge `https://notexample.com/hello`.

Purging the end user URL, `https://example.com/hello`, will not work because that is not the URL that cache sees. You need to confirm in your Worker which URL you are actually fetching, so you can purge the correct asset.

In the previous example, `https://notexample.com/hello` is not proxied through Cloudflare. If `https://notexample.com/hello` was proxied ([orange-clouded](https://developers.cloudflare.com/dns/proxy-status/)) through Cloudflare, then you must own `notexample.com` and purge `https://notexample.com/hello` from the `notexample.com` zone.

To better understand the example, review the following diagram:

```mermaid
flowchart TD
    accTitle: Single-file purge for assets cached by a Worker
    accDescr: This diagram is meant to help choose how to purge a file.
    A("You have a Worker script that runs on <code>https://</code><code>example.com/hello</code> <br> and this Worker makes a <code>fetch</code> request to <code>https://</code><code>notexample.com/hello</code>.") --> B(Is <code>notexample.com</code> <br> an active zone on Cloudflare?)
    B -- Yes --> C(Is <code>https://</code><code>notexample.com/</code> <br> proxied through Cloudflare?)
    B -- No  --> D(Purge <code>https://</code><code>notexample.com/hello</code> <br> from the original <code>example.com</code> zone.)
    C -- Yes --> E(Do you own <br> <code>notexample.com</code>?)
    C -- No --> F(Purge <code>https://</code><code>notexample.com/hello</code> <br> from the original <code>example.com</code> zone.)
    E -- Yes --> G(Purge <code>https://</code><code>notexample.com/hello</code> <br> from the <code>notexample.com</code> zone.)
    E -- No --> H(Sorry, you can not purge the asset. <br> Only the owner of <code>notexample.com</code> can purge it.)
```

### Purge assets stored with the Cache API

Assets stored in the cache through [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) operations can be purged in a couple of ways:

* Call `cache.delete` within a Worker to invalidate the cache for the asset with a matching request variable.  
   * Assets purged in this way are only purged locally to the data center the Worker runtime was executed in.
* To purge an asset globally, use the standard [cache purge options](https://developers.cloudflare.com/cache/how-to/purge-cache/). Because of how the Cache API is implemented, not all cache purge endpoints can purge assets stored by the Cache API.  
   * All assets on a zone can be purged by using the [Purge Everything](https://developers.cloudflare.com/cache/how-to/purge-cache/purge-everything/) cache operation. This purge will remove all assets associated with a Cloudflare zone from cache in all data centers regardless of the method set.  
   * [Cache Tags](https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/#add-cache-tag-http-response-headers) can be added to responses dynamically in a Worker by calling `response.headers.append()` to append `Cache-Tag` values to the response. Once set, those tags can be used to selectively purge assets from cache without invalidating all cached assets on a zone.
* Currently, it is not possible to purge a URL that uses a custom cache key set by a Worker. Instead, use a [custom key created via Cache Rules](https://developers.cloudflare.com/cache/how-to/cache-rules/settings/#cache-key). Alternatively, purge your assets using purge everything, purge by tag, purge by host or purge by prefix.
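
Appending tags as described above requires a mutable copy of the response, because responses returned from `fetch` have immutable headers. A minimal sketch (the helper name and tag values are illustrative):

```javascript
// Sketch of tagging a response so it can later be purged by Cache-Tag.
// Responses from fetch() have immutable headers, so copy the response first.
function withCacheTags(response, tags) {
  // Constructing a new Response from the original yields mutable headers.
  const tagged = new Response(response.body, response);
  tagged.headers.append('Cache-Tag', tags.join(','));
  return tagged;
}
```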

## Edge versus browser caching

The browser cache is controlled through the `Cache-Control` header sent in the response to the client (the `Response` instance returned from the handler). Workers can customize browser cache behavior by setting this header on the response.

Other means of controlling Cloudflare’s cache that are not covered in this documentation include Page Rules and Cloudflare cache settings. Refer to [How to customize Cloudflare’s cache](https://developers.cloudflare.com/cache/concepts/customize-cache/) if you want some granularity of cache control without writing JavaScript.

What should I use: the Cache API or fetch for caching objects on Cloudflare?

For requests where Workers are behaving as middleware (that is, Workers are sending a subrequest via `fetch`) it is recommended to use `fetch`. This is because preexisting settings are in place that optimize caching while preventing unintended dynamic caching. For projects where there is no backend (that is, the entire project is on Workers as in [Workers Sites](https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch)) the Cache API is the only option to customize caching.

The asset will be cached under the hostname specified within the Worker's subrequest — not the Worker's own hostname. Therefore, in order to purge the cached asset, the purge will have to be performed for the hostname included in the Worker subrequest.

### `fetch`

In the context of Workers, a [fetch](https://developers.cloudflare.com/workers/runtime-apis/fetch/) provided by the runtime communicates with the Cloudflare cache. First, `fetch` checks to see if the URL matches a different zone. If it does, it reads through that zone’s cache (or Worker). Otherwise, it reads through its own zone’s cache, even if the URL is for a non-Cloudflare site. Cache settings on `fetch` automatically apply caching rules based on your Cloudflare settings. `fetch` does not allow you to modify or inspect objects before they reach the cache, but does allow you to modify how it will cache.

When a response fills the cache, the response header contains `CF-Cache-Status: HIT`. You can tell an object is attempting to cache if the `CF-Cache-Status` header is present at all.

This [template](https://developers.cloudflare.com/workers/examples/cache-using-fetch/) shows ways to customize Cloudflare cache behavior on a given request using fetch.

### Cache API

The [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) can be thought of as an ephemeral key-value store, whereby the `Request` object (or more specifically, the request URL) is the key, and the `Response` is the value.

There are two types of cache namespaces available to the Cloudflare Cache:

* **`caches.default`** – You can access the default cache (the same cache shared with `fetch` requests) by accessing `caches.default`. This is useful when needing to override content that is already cached, after receiving the response.
* **`caches.open()`** – You can access a namespaced cache (separate from the cache shared with `fetch` requests) using `let cache = await caches.open(CACHE_NAME)`. Note that [caches.open ↗](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) is an async function, unlike `caches.default`.

When to use the Cache API:

* When you want to programmatically save and/or delete responses from a cache. For example, say an origin is responding with a `Cache-Control: max-age:0` header and cannot be changed. Instead, you can clone the `Response`, adjust the header to the `max-age=3600` value, and then use the Cache API to save the modified `Response` for an hour.
* When you want to programmatically access a Response from a cache without relying on a `fetch` request. For example, you can check to see if you have already cached a `Response` for the `https://example.com/slow-response` endpoint. If so, you can avoid the slow request.
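
The first use-case above can be sketched as a small helper that copies a response and overwrites its `Cache-Control` header before the copy is saved with `cache.put()`. This is a minimal sketch; the helper name is illustrative.

```javascript
// Sketch of overriding an origin's Cache-Control before caching.
// The origin sent max-age=0; store a copy that is cacheable for `seconds`.
function withMaxAge(response, seconds) {
  // A new Response built from the original has mutable headers.
  const copy = new Response(response.body, response);
  copy.headers.set('Cache-Control', `max-age=${seconds}`);
  return copy;
}
```

You would then call `cache.put(request, withMaxAge(originResponse, 3600))` to keep the modified copy for an hour, without changing what the client receives.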

This [template](https://developers.cloudflare.com/workers/examples/cache-api/) shows ways to use the cache API. For limits of the cache API, refer to [Limits](https://developers.cloudflare.com/workers/platform/limits/#cache-api-limits).

Tiered caching and the Cache API

Cache API within Workers does not support tiered caching. Tiered Cache concentrates connections to origin servers so they come from a small number of data centers rather than the full set of network locations. Because the Cache API is local to a data center, `cache.match` does a lookup, `cache.put` stores a response, and `cache.delete` removes a stored response only in the cache of the data center where the Worker handling the request runs. Because these methods apply only to local cache, they will not work with tiered cache.

## Related resources

* [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/)
* [Customize cache behavior with Workers](https://developers.cloudflare.com/cache/interaction-cloudflare-products/workers/)


---

---
title: How Workers works
description: The difference between the Workers runtime versus traditional browsers and Node.js.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# How Workers works

Though Cloudflare Workers behave similarly to [JavaScript ↗](https://www.cloudflare.com/learning/serverless/serverless-javascript/) in the browser or in Node.js, there are a few differences in how you have to think about your code. Under the hood, the Workers runtime uses the [V8 engine ↗](https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/) — the same engine used by Chromium and Node.js. The Workers runtime also implements many of the standard [APIs](https://developers.cloudflare.com/workers/runtime-apis/) available in most modern browsers.

The differences from JavaScript written for the browser or Node.js appear at runtime. Rather than running on an individual's machine (for example, [a browser application or on a centralized server ↗](https://www.cloudflare.com/learning/serverless/glossary/client-side-vs-server-side/)), Workers functions run on [Cloudflare's global network ↗](https://www.cloudflare.com/network), a growing global network of thousands of machines distributed across hundreds of locations.

Each of these machines hosts an instance of the Workers runtime, and each of those runtimes is capable of running thousands of user-defined applications. This guide will review some of those differences.

For more information, refer to the [Cloud Computing without Containers blog post ↗](https://blog.cloudflare.com/cloud-computing-without-containers).

The three largest differences are: Isolates, Compute per Request, and Distributed Execution.

## Isolates

[V8 ↗](https://v8.dev) orchestrates isolates: lightweight contexts that provide your code with variables it can access and a safe environment to be executed within. You could even consider an isolate a sandbox for your function to run in.

A single instance of the runtime can run hundreds or thousands of isolates, seamlessly switching between them. Each isolate's memory is completely isolated, so each piece of code is protected from other untrusted or user-written code on the runtime. Isolates are also designed to start very quickly. Instead of creating a virtual machine for each function, an isolate is created within an existing environment. This model eliminates the cold starts of the virtual machine model.

Unlike other serverless providers, which use [containerized processes ↗](https://www.cloudflare.com/learning/serverless/serverless-vs-containers/) each running an instance of a language runtime, Workers pays the overhead of a JavaScript runtime once, when the runtime starts. Workers processes can then run essentially limitless scripts with almost no individual overhead. Any given isolate can start around a hundred times faster than a Node process in a container or virtual machine, and on startup isolates consume an order of magnitude less memory.

(Diagram: a traditional architecture pairs each instance of user code with its own process overhead, while Workers runs many instances of user code as V8 isolates inside a single runtime process.)

A given isolate has its own scope, but isolates are not necessarily long-lived. An isolate may be spun down and evicted for a number of reasons:

* Resource limitations on the machine.
* A suspicious script - anything seen as trying to break out of the isolate sandbox.
* Individual [resource limits](https://developers.cloudflare.com/workers/platform/limits/).

Because of this, it is generally advised that you not store mutable state in your global scope unless you have accounted for this contingency.

If you are interested in how Cloudflare handles security with the Workers runtime, you can [read more about how Isolates relate to Security and Spectre Threat Mitigation](https://developers.cloudflare.com/workers/reference/security-model/).

## Compute per request

Most Workers are a variation on the default Workers flow:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    return new Response('Hello World!');
  },
};
```

TypeScript

```
export default {
  async fetch(request, env, ctx): Promise<Response> {
    return new Response('Hello World!');
  },
} satisfies ExportedHandler<Env>;
```

For Workers written in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/), when a request to your `*.workers.dev` subdomain or to your Cloudflare-managed domain is received by any of Cloudflare's data centers, the request invokes the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) defined in your Worker code with the given request. You can respond to the request by returning a [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) object.

## Distributed execution

Isolates are resilient and continuously available for the duration of a request, but in rare instances isolates may be evicted. When a Worker hits official [limits](https://developers.cloudflare.com/workers/platform/limits/) or when resources are exceptionally tight on the machine the request is running on, the runtime will selectively evict isolates after their events are properly resolved.

Like other JavaScript platforms, a single Workers instance may handle multiple requests, including concurrent requests, in a single-threaded event loop. That means other requests may (or may not) be processed while a request is awaiting an `async` task such as `fetch`. Because there is no guarantee that any two requests will be routed to the same instance of your Worker, Cloudflare recommends that you do not use or mutate global state.
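
A hypothetical counter (not from the original text) shows why mutable global state is unreliable: each isolate keeps its own copy of module-scope variables.

```javascript
// Anti-pattern sketch: this counter is per-isolate, not global.
// Requests served by different isolates each see an independent count,
// and an evicted isolate restarts its count from zero.
let requestCount = 0;

const worker = {
  async fetch(request) {
    requestCount += 1;
    return new Response(`count: ${requestCount}`);
  },
};

export default worker;
```

For state that must be shared across requests, use a storage product such as KV or Durable Objects instead.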

## Related resources

* [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) \- Review how incoming HTTP requests to a Worker are passed to the `fetch()` handler.
* [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) \- Learn how incoming HTTP requests are passed to the `fetch()` handler.
* [Workers limits](https://developers.cloudflare.com/workers/platform/limits/) \- Learn about Workers limits including Worker size, startup time, and more.


---

---
title: Migrate from Service Workers to ES Modules
description: Write your Worker code in ES modules syntax for an optimized experience.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from Service Workers to ES Modules

This guide will show you how to migrate your Workers from the [Service Worker ↗](https://developer.mozilla.org/en-US/docs/Web/API/Service%5FWorker%5FAPI) format to the [ES modules ↗](https://blog.cloudflare.com/workers-javascript-modules/) format.

## Advantages of migrating

There are several reasons to migrate your Workers to the ES modules format:

1. Your Worker will run faster. With service workers, bindings are exposed as globals. This means that for every request, the Workers runtime must create a new JavaScript execution context, which adds overhead and time. Workers written using ES modules can reuse the same execution context across multiple requests.
2. Implementing [Durable Objects](https://developers.cloudflare.com/durable-objects/) requires Workers that use ES modules.
3. Bindings for [D1](https://developers.cloudflare.com/d1/), [Workers AI](https://developers.cloudflare.com/workers-ai/), [Vectorize](https://developers.cloudflare.com/vectorize/), [Workflows](https://developers.cloudflare.com/workflows/), and [Images](https://developers.cloudflare.com/images/transform-images/bindings/) can only be used from Workers that use ES modules.
4. You can [gradually deploy changes to your Worker](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) when you use the ES modules format.
5. You can easily publish Workers using ES modules to `npm`, allowing you to import and reuse Workers within your codebase.
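
To illustrate the first point (the route table below is a hypothetical stand-in for any expensive, immutable setup work), a module Worker can perform initialization once at module scope and reuse it for later requests handled by the same isolate:

```javascript
// Sketch: module-scope setup runs once per isolate, not once per request.
// The route table is a hypothetical stand-in for expensive initialization.
let routes = null;

function buildRoutes() {
  // Imagine parsing a large config here; it happens once per isolate.
  return new Map([
    ["/", "home"],
    ["/about", "about"],
  ]);
}

const worker = {
  async fetch(request) {
    if (routes === null) {
      routes = buildRoutes(); // reused by subsequent requests
    }
    const { pathname } = new URL(request.url);
    return new Response(routes.get(pathname) ?? "not found");
  },
};

export default worker;
```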

## Migrate a Worker

The following example demonstrates a Worker that redirects all incoming requests to a URL with a `301` status code.

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

With the Service Worker syntax, the example Worker looks like:

JavaScript

```
async function handler(request) {
  const base = 'https://example.com';
  const statusCode = 301;

  const source = new URL(request.url);
  const destination = new URL(source.pathname, base);
  return Response.redirect(destination.toString(), statusCode);
}

// Initialize Worker
addEventListener('fetch', (event) => {
  event.respondWith(handler(event.request));
});
```

Workers using ES modules format replace the `addEventListener` syntax with an object definition, which must be the file's default export (via `export default`). The previous example code becomes:

JavaScript

```
export default {
  fetch(request) {
    const base = "https://example.com";
    const statusCode = 301;

    const source = new URL(request.url);
    const destination = new URL(source.pathname, base);
    return Response.redirect(destination.toString(), statusCode);
  },
};
```

## Bindings

[Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare developer platform.

Workers using ES modules format do not rely on any global bindings. However, Service Worker syntax accesses bindings on the global scope.

To understand bindings, refer to the following `TODO` KV namespace binding example. To create a `TODO` KV namespace binding, you will:

1. Create a KV namespace named `My Tasks` and receive an ID that you will use in your binding.
2. Create a Worker.
3. Find your Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) and add a KV namespace binding:

wrangler.jsonc

```
{
  "kv_namespaces": [
    {
      "binding": "TODO",
      "id": "<ID>"
    }
  ]
}
```

wrangler.toml

```
[[kv_namespaces]]
binding = "TODO"
id = "<ID>"
```

In the following sections, you will use your binding in Service Worker and ES modules format.

Reference KV from Durable Objects and Workers

To learn more about how to reference KV from Workers, refer to the [KV bindings documentation](https://developers.cloudflare.com/kv/concepts/kv-bindings/).

### Bindings in Service Worker format

In Service Worker syntax, your `TODO` KV namespace binding is defined in the global scope of your Worker and is available to use anywhere in your Worker application's code.

JavaScript

```
addEventListener("fetch", (event) => {
  event.respondWith(getTodos());
});

async function getTodos() {
  // Get the value for the "to-do:123" key
  // NOTE: Relies on the TODO KV binding that maps to the "My Tasks" namespace.
  let value = await TODO.get("to-do:123");

  // Return the value, as is, for the Response
  return new Response(value);
}
```

### Bindings in ES modules format

In ES modules format, bindings are only available inside the `env` parameter that is provided at the entry point to your Worker.

To access the `TODO` KV namespace binding in your Worker code, the `env` parameter must be passed from the `fetch` handler in your Worker to the `getTodos` function.

JavaScript

```
import { getTodos } from './todos';

export default {
  async fetch(request, env, ctx) {
    // Passing the env parameter so other functions
    // can reference the bindings available in the Workers application
    return await getTodos(env);
  },
};
```

The following code represents a `getTodos` function that calls the `get` function on the `TODO` KV binding.

JavaScript

```
async function getTodos(env) {
  // NOTE: Relies on the TODO KV binding which has been provided inside of
  // the env parameter of the `getTodos` function
  let value = await env.TODO.get("to-do:123");
  return new Response(value);
}

export { getTodos };
```

## Environment variables

[Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) are accessed differently in code written in ES modules format versus Service Worker format.

Review the following example environment variable configuration in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker-dev",
  // Define top-level environment variables
  // using the `"vars": { "key": "value" }` format
  "vars": {
    "API_ACCOUNT_ID": "<EXAMPLE-ACCOUNT-ID>"
  }
}
```

wrangler.toml

```
#:schema node_modules/wrangler/config-schema.json
name = "my-worker-dev"

[vars]
API_ACCOUNT_ID = "<EXAMPLE-ACCOUNT-ID>"
```

### Environment variables in Service Worker format

In Service Worker format, the `API_ACCOUNT_ID` environment variable is defined in the global scope of your Worker application and is available to use anywhere in your Worker application's code.

JavaScript

```
addEventListener("fetch", (event) => {
  console.log(API_ACCOUNT_ID); // Logs "<EXAMPLE-ACCOUNT-ID>"
  event.respondWith(new Response("Hello, world!"));
});
```

### Environment variables in ES modules format

In ES modules format, environment variables are available through the `env` parameter provided at the entrypoint to your Worker application:

JavaScript

```
export default {
  async fetch(request, env, ctx) {
    console.log(env.API_ACCOUNT_ID); // Logs "<EXAMPLE-ACCOUNT-ID>"
    return new Response("Hello, world!");
  },
};
```

You can also import `env` from `cloudflare:workers` to access environment variables from anywhere in your code, including the top-level scope:

JavaScript

```
import { env } from "cloudflare:workers";

// Access environment variables at the top level
const accountId = env.API_ACCOUNT_ID;

export default {
  async fetch(request) {
    console.log(accountId); // Logs "<EXAMPLE-ACCOUNT-ID>"
    return new Response("Hello, world!");
  },
};
```

TypeScript

```
import { env } from "cloudflare:workers";

// Access environment variables at the top level
const accountId = env.API_ACCOUNT_ID;

export default {
  async fetch(request: Request): Promise<Response> {
    console.log(accountId); // Logs "<EXAMPLE-ACCOUNT-ID>"
    return new Response("Hello, world!");
  },
};
```

This approach is useful for initializing configuration or accessing environment variables from deeply nested functions without passing `env` through every function call. For more details, refer to [Importing env as a global](https://developers.cloudflare.com/workers/runtime-apis/bindings/#importing-env-as-a-global).

## Cron Triggers

To handle a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) event in a Worker written with ES modules syntax, implement a [scheduled() event handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/#syntax), which is the equivalent of listening for a `scheduled` event in Service Worker syntax.

This example code:

JavaScript

```
addEventListener("scheduled", (event) => {
  // ...
});
```

Then becomes:

JavaScript

```
export default {
  async scheduled(event, env, ctx) {
    // ...
  },
};
```

## Access `event` or `context` data

Workers often need access to data not in the `request` object. For example, sometimes Workers use [waitUntil](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil) to delay execution. Workers using ES modules format can access `waitUntil` via the `context` parameter. Refer to [ES modules parameters](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) for more information.

This example code:

JavaScript

```
async function triggerEvent(event) {
  // Fetch some data
  console.log('cron processed', event.scheduledTime);
}

// Initialize Worker
addEventListener('scheduled', (event) => {
  event.waitUntil(triggerEvent(event));
});
```

Then becomes:

JavaScript

```
async function triggerEvent(event) {
  // Fetch some data
  console.log('cron processed', event.scheduledTime);
}

export default {
  async scheduled(event, env, ctx) {
    ctx.waitUntil(triggerEvent(event));
  },
};
```

## Service Worker syntax

A Worker written in Service Worker syntax consists of two parts:

1. An event listener that listens for `FetchEvents`.
2. An event handler that returns a [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) object which is passed to the event’s `.respondWith()` method.

When a request is received on one of Cloudflare’s global network servers for a URL matching a Worker, Cloudflare's server passes the request to the Workers runtime. This dispatches a `FetchEvent` in the [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) where the Worker is running.

JavaScript

```
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  return new Response('Hello worker!', {
    headers: { 'content-type': 'text/plain' },
  });
}
```

Below is an example of the request response workflow:

1. An event listener for the `FetchEvent` tells the script to listen for any request coming to your Worker. The event handler is passed the `event` object, which includes `event.request`, a [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) object which is a representation of the HTTP request that triggered the `FetchEvent`.
2. The call to `.respondWith()` lets the Workers runtime intercept the request in order to send back a custom response (in this example, the plain text `'Hello worker!'`).  
   * The `FetchEvent` handler typically culminates in a call to the method `.respondWith()` with either a [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) or `Promise<Response>` that determines the response.  
   * The `FetchEvent` object also provides [two other methods](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to handle unexpected exceptions and operations that may complete after a response is returned.

Learn more about [the lifecycle methods of the fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/).

### Supported `FetchEvent` properties

* `event.type` string  
   * The type of event. This will always return `"fetch"`.
* `event.request` Request  
   * The incoming HTTP request.
* `event.respondWith(response Response | Promise<Response>)` : void  
   * Refer to [respondWith](#respondwith).
* `event.waitUntil(promise Promise)` : void  
   * Refer to [waitUntil](#waituntil).
* `event.passThroughOnException()` : void  
   * Refer to [passThroughOnException](#passthroughonexception).

### `respondWith`

Intercepts the request and allows the Worker to send a custom response.

If a `fetch` event handler does not call `respondWith`, the runtime delivers the event to the next registered `fetch` event handler. In other words, it is possible (though not recommended) to register multiple `fetch` event handlers within a Worker.

If no `fetch` event handler calls `respondWith`, the runtime forwards the request to the origin as if the Worker did not exist. However, if there is no origin – or if the Worker itself is your origin server, which is always true for `*.workers.dev` domains – then you must call `respondWith` to return a valid response.

JavaScript

```
// Format: Service Worker
addEventListener('fetch', (event) => {
  let { pathname } = new URL(event.request.url);

  // Allow "/ignore/*" URLs to hit origin
  if (pathname.startsWith('/ignore/')) return;

  // Otherwise, respond with something
  event.respondWith(handler(event));
});
```

### `waitUntil`

The `waitUntil` command extends the lifetime of the `"fetch"` event. It accepts a `Promise`\-based task which the Workers runtime will execute before the handler terminates but without blocking the response. For example, this is ideal for [caching responses](https://developers.cloudflare.com/workers/runtime-apis/cache/#put) or handling logging.

With the Service Worker format, `waitUntil` is available within the `event` because it is a native `FetchEvent` property.

With the ES modules format, `waitUntil` is moved and available on the `context` parameter object.

JavaScript

```
// Format: Service Worker
addEventListener('fetch', (event) => {
  event.respondWith(handler(event));
});

async function handler(event) {
  // Forward / Proxy original request
  let res = await fetch(event.request);

  // Add custom header(s)
  res = new Response(res.body, res);
  res.headers.set('x-foo', 'bar');

  // Cache the response
  // NOTE: Does NOT block / wait
  event.waitUntil(caches.default.put(event.request, res.clone()));

  // Done
  return res;
}
```
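
In ES modules format, the same proxy-and-cache flow could look like the following sketch, with `waitUntil` taken from the `ctx` parameter:

```javascript
// Sketch (ES modules): the same flow, with waitUntil on the ctx object.
const worker = {
  async fetch(request, env, ctx) {
    // Forward / proxy the original request
    let res = await fetch(request);

    // Add custom header(s)
    res = new Response(res.body, res);
    res.headers.set('x-foo', 'bar');

    // Cache the response without blocking the reply
    ctx.waitUntil(caches.default.put(request, res.clone()));

    return res;
  },
};

export default worker;
```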

### `passThroughOnException`

The `passThroughOnException` method prevents a runtime error response when the Worker throws an unhandled exception. Instead, the script will [fail open ↗](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), which will proxy the request to the origin server as though the Worker was never invoked.

To prevent JavaScript errors from causing entire requests to fail on uncaught exceptions, `passThroughOnException()` causes the Workers runtime to yield control to the origin server.

With the Service Worker format, `passThroughOnException` is added to the `FetchEvent` interface, making it available within the `event`.

With the ES modules format, `passThroughOnException` is available on the `context` parameter object.

JavaScript

```
// Format: Service Worker
addEventListener('fetch', (event) => {
  // Proxy to origin on unhandled/uncaught exceptions
  event.passThroughOnException();
  throw new Error('Oops');
});
```
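
The ES modules equivalent is a small sketch with the method moved onto `ctx`:

```javascript
// Sketch (ES modules): passThroughOnException lives on the ctx parameter.
const worker = {
  async fetch(request, env, ctx) {
    // Proxy to origin on unhandled/uncaught exceptions
    ctx.passThroughOnException();
    throw new Error('Oops');
  },
};

export default worker;
```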


---

---
title: Protocols
description: Supported protocols on the Workers platform.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Protocols

Cloudflare Workers support the following protocols and interfaces:

| Protocol               | Inbound                                                                                                                                                                                                                                                                                                                                                | Outbound                                                                                                                       |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------ |
| **HTTP / HTTPS**       | Handle incoming HTTP requests using the [fetch() handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/)                                                                                                                                                                                                                      | Make HTTP subrequests using the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/)                   |
| **Direct TCP sockets** | Support for handling inbound TCP connections is [coming soon ↗](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/)                                                                                                                                                                                                                 | Create outbound TCP connections using the [connect() API](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) |
| **WebSockets**         | Accept incoming WebSocket connections using the [WebSocket API](https://developers.cloudflare.com/workers/runtime-apis/websockets/)                                                                                                                                                                                                                    |                                                                                                                                |
| **HTTP/3 (QUIC)**      | Accept inbound requests over [HTTP/3 ↗](https://www.cloudflare.com/learning/performance/what-is-http3/) by enabling it on your [zone](https://developers.cloudflare.com/fundamentals/concepts/accounts-and-zones/#zones) in **Speed** \> **Settings** \> **Protocol Optimization** area of the [Cloudflare dashboard ↗](https://dash.cloudflare.com/). |                                                                                                                                |
| **SMTP**               | Use [Email Workers](https://developers.cloudflare.com/email-routing/email-workers/) to process and forward email, without having to manage TCP connections to SMTP email servers                                                                                                                                                                       | [Email Workers](https://developers.cloudflare.com/email-routing/email-workers/)                                                |
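
As a minimal sketch of the inbound WebSocket flow (the upgrade check, status codes, and echo behavior are illustrative, not prescribed by the table):

```javascript
// Sketch: accept an inbound WebSocket upgrade and echo messages back.
const worker = {
  async fetch(request) {
    if (request.headers.get('Upgrade') !== 'websocket') {
      return new Response('Expected a WebSocket upgrade', { status: 426 });
    }
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);
    server.accept();
    server.addEventListener('message', (event) => {
      server.send(`echo: ${event.data}`); // echo each message back
    });
    // `webSocket` in the response init is a Workers-specific extension.
    return new Response(null, { status: 101, webSocket: client });
  },
};

export default worker;
```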


---

---
title: Security model
description: This article includes an overview of Cloudflare security architecture, and then addresses two frequently asked about issues: V8 bugs and Spectre.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Security model

This article includes an overview of Cloudflare security architecture, and then addresses two frequently asked about issues: V8 bugs and Spectre.

Since the very start of the Workers project, security has been a high priority — there was a concern early on that when hosting a large number of tenants on shared infrastructure, side channels of various kinds would pose a threat. The Cloudflare Workers runtime is carefully designed to defend against side channel attacks.

To this end, Workers is designed to make it impossible for code to measure its own execution time locally. For example, the value returned by `Date.now()` is locked in place while code is executing. No other timers are provided. Moreover, Cloudflare provides no access to concurrency (for example, multi-threading), as it could allow attackers to construct ad hoc timers. These design choices cannot be introduced retroactively into other platforms — such as web browsers — because they remove APIs that existing applications depend on. They were possible in Workers only because of runtime design choices from the start.
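
As an illustration (the loop is just illustrative busy work), code cannot observe its own CPU time:

```javascript
// Inside the Workers runtime, Date.now() does not advance while
// JavaScript executes, so a busy loop cannot time itself.
const start = Date.now();
let sum = 0;
for (let i = 0; i < 1_000_000; i++) sum += i; // CPU-bound work
// On Workers this reads 0 no matter how long the loop took;
// in Node.js or a browser it would report real elapsed time.
const elapsed = Date.now() - start;
```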

While these early design decisions have proven effective, Cloudflare is continuing to add defense-in-depth, including techniques to disrupt attacks by rescheduling Workers to create additional layers of isolation between suspicious Workers and high-value Workers.

The Workers approach is very different from the approach taken by most of the industry. It is resistant to the entire range of [Spectre-style attacks ↗](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/) without requiring special attention paid to each one and without needing to block speculation in general. However, because the Workers approach is different, it requires careful study. Cloudflare is currently working with researchers at Graz University of Technology (TU Graz) to study what has been done. These researchers include some of the people who originally discovered Spectre. Cloudflare will publish the results of this research as it becomes available.

For more details, refer to [this talk ↗](https://www.infoq.com/presentations/cloudflare-v8/) by Kenton Varda, architect of Cloudflare Workers. Spectre is covered near the end.

## Architectural overview

Beginning with a quick overview of the Workers runtime architecture:

(Architecture diagram: HTTP clients connect to an inbound HTTP proxy, and origin HTTP servers are reached through an outbound HTTP proxy. A supervisor performs scheduling and routing and communicates with the main runtime process over Cap'n Proto RPC. The main runtime process, contained in an outer sandbox, hosts many V8 isolates; some isolates run in separate process sandboxes. A control plane and local disk sit alongside, and components communicate via HTTP, Cap'n Proto RPC, and in-process calls.)

There are two fundamental parts of designing a code sandbox: secure isolation and API design.

### Isolation

First, a secure execution environment needed to be created wherein code cannot access anything it is not supposed to.

For this, the primary tool is V8, the JavaScript engine developed by Google for use in Chrome. V8 executes code inside isolates, which prevent that code from accessing memory outside the isolate — even within the same process. Importantly, this means Cloudflare can run many isolates within a single process. This is essential for an edge compute platform like Workers where Cloudflare must host many thousands of guest applications on every machine and rapidly switch between these guests thousands of times per second with minimal overhead. If Cloudflare had to run a separate process for every guest, the number of tenants Cloudflare could support would be drastically reduced, and Cloudflare would have to limit edge compute to a small number of big Enterprise customers. With isolate technology, Cloudflare can make edge compute available to everyone.

Sometimes, though, Cloudflare does decide to schedule a Worker in its own private process. Cloudflare does this if the Worker uses certain features that need an extra layer of isolation. For example, when a developer uses the devtools debugger to inspect their Worker, Cloudflare runs that Worker in a separate process. This is because historically, in the browser, the inspector protocol has only been usable by the browser’s trusted operator, and therefore has not received as much security scrutiny as the rest of V8. In order to hedge against the increased risk of bugs in the inspector protocol, Cloudflare moves inspected Workers into a separate process with a process-level sandbox. Cloudflare also uses process isolation as an extra defense against Spectre.

Additionally, even for isolates that run in a shared process with other isolates, Cloudflare runs multiple instances of the whole runtime on each machine, called cordons. Workers are distributed among cordons by assigning each Worker a level of trust and separating less-trusted Workers from those trusted more highly. As one example of this in operation: a customer who signs up for the Free plan will not be scheduled in the same process as an Enterprise customer. This provides some defense-in-depth in case a zero-day security vulnerability is found in V8.

At the whole-process level, Cloudflare applies another layer of sandboxing for defense in depth. The layer 2 sandbox uses Linux namespaces and `seccomp` to prohibit all access to the filesystem and network. Namespaces and `seccomp` are commonly used to implement containers. However, Cloudflare's use of these technologies is much stricter than what is usually possible in container engines, because Cloudflare configures namespaces and `seccomp` after the process has started but before any isolates have been loaded. This means, for example, Cloudflare can (and does) use a totally empty filesystem (mount namespace) and uses `seccomp` to block absolutely all filesystem-related system calls. Container engines cannot normally prohibit all filesystem access because doing so would make it impossible to use `exec()` to start the guest program from disk. In the Workers case, Cloudflare's guest programs are not native binaries and the Workers runtime itself has already finished loading before Cloudflare blocks filesystem access.

The layer 2 sandbox also totally prohibits network access. Instead, the process is limited to communicating only over local UNIX domain sockets to talk to other processes on the same system. Any communication to the outside world must be mediated by some other local process outside the sandbox.

One such process in particular, which is called the supervisor, is responsible for fetching Worker code and configuration from disk or from other internal services. The supervisor ensures that the sandbox process cannot read any configuration except that which is relevant to the Workers that it should be running.

For example, when the sandbox process receives a request for a Worker it has not seen before, that request includes the encryption key for that Worker’s code, including attached secrets. The sandbox can then pass that key to the supervisor in order to request the code. The sandbox cannot request any Worker for which it has not received the appropriate key. It cannot enumerate known Workers. It also cannot request configuration it does not need; for example, it cannot request the TLS key used for HTTPS traffic to the Worker.
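As a rough sketch of this idea (with invented names and shapes, not the actual supervisor protocol), the supervisor's table can be thought of as indexed by the per-Worker key itself, so there is simply no operation for browsing it:

```javascript
// Sketch of key-gated code lookup. Because the table is indexed by
// the secret key, a sandbox that was never handed a key can neither
// fetch that Worker's code nor discover which Workers exist.
class Supervisor {
  #byKey = new Map(); // secret key -> encrypted code bundle

  register(key, bundle) {
    this.#byKey.set(key, bundle);
  }

  fetchCode(key) {
    const bundle = this.#byKey.get(key);
    if (!bundle) throw new Error("unknown key"); // no way to browse
    return bundle;
  }
  // Deliberately absent: listWorkers(), fetchTlsKey(), etc. The API
  // surface only expresses the one operation the sandbox may perform.
}
```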

Aside from reading configuration, the other reason for the sandbox to talk to other processes on the system is to implement APIs exposed to Workers.

### API design

There is a saying: if a tree falls in the forest, but no one is there to hear it, does it make a sound? The Workers equivalent: if a Worker executes in a fully isolated environment in which it is totally prevented from communicating with the outside world, does it actually run?

Complete code isolation is, in fact, useless. In order for Workers to do anything useful, they have to be allowed to communicate with users. At the very least, a Worker needs to be able to receive requests and respond to them. For Workers to send requests to the world safely, APIs are needed.

In the context of sandboxing, API design takes on a new level of responsibility. Cloudflare APIs define exactly what a Worker can and cannot do. Cloudflare must be very careful to design each API so that it can only express allowed operations and no more. For example, Cloudflare wants to allow Workers to make and receive HTTP requests, while not allowing them to be able to access the local filesystem or internal network services.

Currently, Workers does not allow any access to the local filesystem. Therefore, Cloudflare does not expose a filesystem API at all. No API means no access.

But, imagine if Workers did want to support local filesystem access in the future. How can that be done? Workers should not see the whole filesystem. Imagine, though, if each Worker had its own private directory on the filesystem where it can store whatever it wants.

To do this, Workers would use a design based on [capability-based security ↗](https://en.wikipedia.org/wiki/Capability-based%5Fsecurity). Capabilities are a big topic, but in this case, what it would mean is that Cloudflare would give the Worker an object of type `Directory`, representing a directory on the filesystem. This object would have an API that allows creating and opening files and subdirectories, but does not permit traversing up the parent directory. Effectively, each Worker would see its private `Directory` as if it were the root of their own filesystem.
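A minimal in-memory sketch of what such a capability-style `Directory` object could look like (the names here are illustrative, since this API is hypothetical): the handle carries no reference to its parent, so escaping upward is impossible by construction rather than by checking.

```javascript
// Hypothetical capability-based Directory. A holder of this object
// can reach its own files and subdirectories, and nothing else.
class Directory {
  #files = new Map(); // name -> contents
  #subdirs = new Map(); // name -> Directory

  writeFile(name, contents) {
    if (name.includes("/")) throw new Error("no path traversal");
    this.#files.set(name, contents);
  }

  readFile(name) {
    if (!this.#files.has(name)) throw new Error("not found");
    return this.#files.get(name);
  }

  openSubdirectory(name) {
    if (name.includes("/")) throw new Error("no path traversal");
    if (!this.#subdirs.has(name)) this.#subdirs.set(name, new Directory());
    return this.#subdirs.get(name);
  }
  // Note what is *absent*: no parent() method and no way to name "..".
  // The capability itself is the authority; there is no ambient root.
}

// Each guest would receive only its own root:
const guestRoot = new Directory();
guestRoot.writeFile("state.txt", "hello");
const cache = guestRoot.openSubdirectory("cache");
cache.writeFile("entry", "cached");
```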

How would such an API be implemented? As described above, the sandbox process cannot access the real filesystem. Instead, file access would be mediated by the supervisor process. The sandbox talks to the supervisor using [Cap’n Proto RPC ↗](https://capnproto.org/rpc.html), a capability-based RPC protocol. (Cap’n Proto is an open source project currently maintained by the Cloudflare Workers team.) This protocol makes it very easy to implement capability-based APIs, so that Cloudflare can strictly limit the sandbox to accessing only the files that belong to the Workers it is running.

Now what about network access? Today, Workers are allowed to talk to the rest of the world only via HTTP — both incoming and outgoing. There is no API for other forms of network access, therefore it is prohibited, although Cloudflare plans to support other protocols in the future.

As mentioned before, the sandbox process cannot connect directly to the network. Instead, all outbound HTTP requests are sent over a UNIX domain socket to a local proxy service. That service implements restrictions on the request. For example, it verifies that the request is either addressed to a public Internet service or to the Worker’s zone’s own origin server, not to internal services that might be visible on the local machine or network. It also adds a header to every request identifying the Worker from which it originates, so that abusive requests can be traced and blocked. Once everything is in order, the request is sent on to the Cloudflare network's HTTP caching layer and then out to the Internet.
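The kind of vetting the outbound proxy performs can be sketched as follows. The header name, the address rules, and the function shape are all assumptions for illustration, not Cloudflare's actual implementation:

```javascript
// Illustrative outbound-request vetting: reject hosts that look like
// internal/private addresses, and tag every request with the identity
// of the Worker that originated it so abuse can be traced.
const PRIVATE_PATTERNS = [
  /^localhost$/i,
  /^127\./, // loopback
  /^10\./, // RFC 1918
  /^192\.168\./, // RFC 1918
  /^172\.(1[6-9]|2\d|3[01])\./, // RFC 1918 172.16.0.0/12
  /^169\.254\./, // link-local
];

function vetOutboundRequest(url, workerId) {
  const host = new URL(url).hostname;
  if (PRIVATE_PATTERNS.some((re) => re.test(host))) {
    throw new Error(`blocked: ${host} is not a public Internet address`);
  }
  // Hypothetical header name, for illustration only.
  return { url, headers: { "X-Originating-Worker": workerId } };
}
```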

Similarly, inbound HTTP requests do not go directly to the Workers runtime. They are first received by an inbound proxy service. That service is responsible for TLS termination (the Workers runtime never sees TLS keys), as well as identifying the correct Worker script to run for a particular request URL. Once everything is in order, the request is passed over a UNIX domain socket to the sandbox process.

## V8 bugs and the patch gap

Every non-trivial piece of software has bugs, and sandboxing technologies are no exception. Virtual machines, containers, and isolates — which Workers use — also have bugs.

Workers rely heavily on isolation provided by V8, the JavaScript engine built by Google for use in Chrome. This has pros and cons. On one hand, V8 is an extraordinarily complicated piece of technology, creating a wider attack surface than virtual machines. More complexity means more opportunities for something to go wrong. However, an extraordinary amount of effort goes into finding and fixing V8 bugs, owing to its position as arguably the most popular sandboxing technology in the world. Google regularly pays out 5-figure bounties to anyone finding a V8 sandbox escape. Google also operates fuzzing infrastructure that automatically finds bugs faster than most humans can. Google’s investment does a lot to minimize the danger of V8 zero-days — bugs that are found by malicious actors and not known to Google.

But, what happens after a bug is found and reported? V8 is open source, so fixes for security bugs are developed in the open and released to everyone at the same time. It is important that any patch be rolled out to production as fast as possible, before malicious actors can develop an exploit.

The time between publishing the fix and deploying it is known as the patch gap. Google previously [announced that Chrome’s patch gap had been reduced from 33 days to 15 days ↗](https://www.zdnet.com/article/google-cuts-chrome-patch-gap-in-half-from-33-to-15-days/).

Fortunately, Cloudflare directly controls the machines on which the Workers runtime operates. Nearly the entire build and release process has been automated, so the moment a V8 patch is published, Cloudflare systems automatically build a new release of the Workers runtime and, after one-click sign-off from the necessary (human) reviewers, automatically push that release out to production.

As a result, the Workers patch gap is now under 24 hours. A patch published by V8’s team in Munich during their work day will usually be in production before the end of the US work day.

## Spectre: Introduction

The V8 team at Google has stated that [V8 itself cannot defend against Spectre ↗](https://arxiv.org/abs/1902.05178). Workers does not need to depend on V8 for this. The Workers environment presents many alternative approaches to mitigating Spectre.

### What is it?

Spectre is a class of attacks in which a malicious program can trick the CPU into speculatively performing computation using data that the program is not supposed to have access to. The CPU eventually realizes the problem and does not allow the program to see the results of the speculative computation. However, the program may be able to derive bits of the secret data by looking at subtle side effects of the computation, such as the effects on the cache.

For more information about Spectre, refer to the [Learning Center page on the topic ↗](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/).

### Why does it matter for Workers?

Spectre encompasses a wide variety of vulnerabilities present in modern CPUs. The specific vulnerabilities vary by architecture and model and it is likely that many vulnerabilities exist which have not yet been discovered.

These vulnerabilities are a problem for every cloud compute platform. Any time you have more than one tenant running code on the same machine, Spectre attacks are possible. However, the closer together the tenants are, the more difficult it can be to mitigate specific vulnerabilities. Many of the known issues can be mitigated at the kernel level (protecting processes from each other) or at the hypervisor level (protecting VMs), often with the help of CPU microcode updates and various defenses (many of which can come with serious performance impact).

In Cloudflare Workers, tenants are isolated from each other using V8 isolates — not processes or VMs. This means that Workers cannot necessarily rely on OS or hypervisor patches to prevent Spectre. Workers needs its own strategy.

### Why not use process isolation?

Cloudflare Workers is designed to run your code in every single Cloudflare location.

Workers is designed to be a platform accessible to everyone. It needs to handle a huge number of tenants, where many tenants get very little traffic.

Combine these two points and planning becomes difficult.

A typical, non-edge serverless provider could handle a low-traffic tenant by sending all of that tenant’s traffic to a single machine, so that only one copy of the application needs to be loaded. If the machine can handle, say, a dozen tenants, that is plenty. That machine can be hosted in a massive data center with millions of machines, achieving economies of scale. However, this centralization incurs latency and worldwide bandwidth costs when the users are not nearby.

With Workers, on the other hand, every tenant, regardless of traffic level, currently runs in every Cloudflare location. And in the quest to get as close to the end user as possible, Cloudflare sometimes chooses locations that only have space for a limited number of machines. The net result is that Cloudflare needs to be able to host thousands of active tenants per machine, with the ability to rapidly spin up inactive ones on-demand. That means that each guest cannot take more than a couple megabytes of memory — hardly enough space for a call stack, much less everything else that a process needs.

Moreover, Cloudflare needs context switching to be computationally efficient. Many Workers resident in memory will only handle an event every now and then, and many Workers spend only a fraction of a millisecond on any particular event. In this environment, a single core can easily find itself switching between thousands of different tenants every second. To handle one event, a significant amount of communication needs to happen between the guest application and its host, meaning still more switching and communications overhead. If each tenant lives in its own process, all this overhead is orders of magnitude larger than if many tenants live in a single process. When using strict process isolation in Workers, the CPU cost can easily be 10x what it is with a shared process.

In order to keep Workers inexpensive, fast, and accessible to everyone, Cloudflare needed to find a way to host multiple tenants in a single process.

### There is no fix for Spectre

Spectre has no official fix, not even when using heavyweight virtual machines. Everyone is still vulnerable.

The industry continues to encounter new Spectre attacks: every couple of months, researchers uncover a new Spectre vulnerability, CPU vendors release new microcode, and OS vendors release kernel patches. Everyone must keep updating.

But is it enough to merely deploy the latest patches?

More vulnerabilities exist but have not yet been publicized. To defend against Spectre, Cloudflare needed to take a different approach. It is not enough to block individual known vulnerabilities. Instead, entire classes of vulnerabilities must be addressed at once.

### Building a defense

It is unlikely that any all-encompassing fix for Spectre will be found. However, the following thought experiment raises points to consider:

Fundamentally, all Spectre vulnerabilities use side channels to detect hidden processor state. Side channels, by definition, involve observing some non-deterministic behavior of a system. Conveniently, most software execution environments try hard to eliminate non-determinism, because non-deterministic execution makes applications unreliable.

However, there are a few sorts of non-determinism that are still common. The most obvious among these is timing. The industry long ago gave up on the idea that a program should take the same amount of time every time it runs, because deterministic timing is fundamentally at odds with heuristic performance optimization. Most Spectre attacks focus on timing as a way to detect the hidden microarchitectural state of the CPU.

Some have proposed that this can be solved by making timers inaccurate or adding random noise. However, it turns out that this does not stop attacks; it only makes them slower. If the timer tracks real time at all, then anything you can do to make it inaccurate can be overcome by running an attack multiple times and using statistics to filter out inconsistencies.
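A small simulation makes this concrete: below, a timer's readings carry jitter far larger than the difference being measured, yet averaging many samples recovers the difference anyway. All numbers are made up for illustration.

```javascript
// A "noisy timer" adds up to ±50 units of random jitter to each
// reading, dwarfing the 2-unit difference we want to detect. One
// measurement is useless; the mean of many is not, because the
// jitter has zero mean and averages away.
function noisyMeasure(trueDuration) {
  return trueDuration + (Math.random() - 0.5) * 100; // ±50 jitter
}

function meanOf(n, trueDuration) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += noisyMeasure(trueDuration);
  return sum / n;
}

// Distinguish a 100-unit operation from a 102-unit one (think: cache
// hit vs. miss) despite the jitter, by sampling many times:
const fast = meanOf(100000, 100);
const slow = meanOf(100000, 102);
```

Making the timer noisier just raises the number of samples the attacker needs; it never makes the signal unrecoverable.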

Many security researchers see this as the end of the story. What good is slowing down an attack if the attack is still possible?

### Cascading slow-downs

However, measures that slow down an attack can be powerful.

The key insight is this: as an attack becomes slower, new techniques become practical to make it even slower still. The goal, then, is to chain together enough techniques that an attack becomes so slow as to be uninteresting.

Much of cryptography, after all, is technically vulnerable to brute force attacks — technically, with enough time, you can break it. But when the time required is thousands (or even billions) of years, this is a sufficient defense.

What can be done to slow down Spectre attacks to the point of meaninglessness?

## Freezing a Spectre attack

### Step 0: Do not allow native code

Workers does not allow customers to upload native-code binaries to run on the Cloudflare network — only JavaScript and WebAssembly. Many other languages, like Python, Rust, or even COBOL, can be compiled or transpiled to one of these two formats. V8 then converts both formats into true native code internally.

This, in itself, does not necessarily make Spectre attacks harder. However, this is presented as step 0 because it is fundamental to enabling the following steps.

Accepting native code programs implies being beholden to an existing CPU architecture (typically, x86). In order to execute code with reasonable performance, it is usually necessary to run the code directly on real hardware, severely limiting the host’s control over how that execution plays out. For example, a kernel or hypervisor has no ability to prohibit applications from invoking the `CLFLUSH` instruction, an instruction [which is useful in side channel attacks ↗](https://gruss.cc/files/flushflush.pdf) and almost nothing else.

Moreover, supporting native code typically implies supporting whole existing operating systems and software stacks, which bring with them decades of expectations about how the architecture works under them. For example, x86 CPUs allow a kernel or hypervisor to disable the `RDTSC` instruction, which reads a high-precision timer. Realistically, though, disabling it will break many programs because they are implemented to use `RDTSC` any time they want to know the current time.

Supporting native code would limit choice in future mitigation techniques. There is greater freedom in using an abstract intermediate format.

### Step 1: Disallow timers and multi-threading

In Workers, you can get the current time using the JavaScript Date API by calling `Date.now()`. However, the time value returned is not the current time. `Date.now()` returns the time of the last I/O. It does not advance during code execution. For example, if an attacker writes:

JavaScript

```
let start = Date.now();
for (let i = 0; i < 1e6; i++) {
  doSpectreAttack();
}
let end = Date.now();
```

The values of `start` and `end` will always be exactly the same. The attacker cannot use `Date` to measure the execution time of their code, which they would need to do to carry out an attack.
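A toy model of these frozen-clock semantics (not the actual runtime implementation) captures the key property: the visible time advances only at I/O boundaries, never during computation.

```javascript
// Toy model: the time visible to guest code is a stored value that
// only changes when an I/O event completes. Between I/O events the
// clock is frozen, so pure computation takes "zero" visible time.
class FrozenClock {
  #visible = 0;
  now() {
    return this.#visible; // constant between I/O events
  }
  onIoComplete(realTime) {
    this.#visible = realTime; // time "jumps" only at I/O boundaries
  }
}

const clock = new FrozenClock();
clock.onIoComplete(1000);
const start = clock.now();
let x = 0;
for (let i = 0; i < 1e6; i++) x += i; // pure computation: clock frozen
const end = clock.now();
// start === end: the loop's duration is invisible to the guest.
```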

Note

This measure was implemented in mid-2017, before Spectre was announced, because Cloudflare was already concerned about side channel timing attacks. The Workers team has designed the system with side channels in mind from the beginning.

Similarly, multi-threading and shared memory are not permitted in Workers. Everything related to the processing of one event happens on the same thread. Otherwise, one could race threads against each other to construct a makeshift timer. Multiple Workers are not allowed to operate on the same request concurrently. For example, if you have installed a Cloudflare App on your zone which is implemented using Workers, and your zone itself also uses Workers, then a request to your zone may actually be processed by two Workers in sequence. These run in the same thread.

At this point, measuring code execution time locally is prevented. However, it can still be measured remotely. For example, the HTTP client that is sending a request to trigger the execution of the Worker can measure how long it takes for the Worker to respond. Such a measurement is likely to be very noisy, as it would have to traverse the Internet and incur general networking costs. Such noise can be overcome, in theory, by executing the attack many times and taking an average.

Note

It has been suggested that if Workers reset its execution environment on every request, that Workers would be in a much safer position against timing attacks. Unfortunately, it is not so simple. The execution state could be stored in a client — not the Worker itself — allowing a Worker to resume its previous state on every new request.

In adversarial testing and with help from leading Spectre experts, Cloudflare has not been able to develop a remote timing attack that works in production. However, the lack of a working attack does not mean that Workers should stop building defenses. Instead, the Workers team is currently testing some more advanced measures.

### Step 2: Dynamic process isolation

If an attack is possible at all, it would take a long time to run — hours at the very least, maybe as long as weeks. But once an attack has been running even for a second, there is a large amount of new data that can be used to trigger further measures.

Spectre attacks exhibit abnormal behavior that would not usually be seen in a normal program. These attacks intentionally try to create pathological performance scenarios in order to amplify microarchitectural effects. This is especially true when the attack has already been forced to run billions of times in a loop in order to overcome other mitigations, like those discussed above. This tends to show up in metrics like CPU performance counters.

Now, the usual problem with using performance metrics to detect Spectre attacks is that there are sometimes false positives. Sometimes, a legitimate program behaves poorly. The runtime cannot shut down every application that has poor performance.

Instead, the runtime chooses to reschedule any Worker with suspicious performance metrics into its own process. As described above, the runtime cannot do this with every Worker because the overhead would be too high. However, it is acceptable to isolate a few Worker processes as a defense mechanism. If the Worker is legitimate, it will keep operating, with a little more overhead. Fortunately, Cloudflare can relocate a Worker into its own process at basically any time.

In fact, elaborate performance-counter based triggering may not even be necessary here. If a Worker uses a large amount of CPU time per event, then the overhead of isolating it in its own process is relatively less because it switches context less often. So, the runtime might as well use process isolation for any Worker that is CPU-hungry.
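The triggering logic described above might be sketched like this; the metric names and thresholds are invented for illustration, while the real system relies on hardware performance counters and tuned thresholds:

```javascript
// Decide whether a Worker should be rescheduled into its own process.
// Both branches come from the text: CPU-hungry Workers amortize the
// process overhead over long events, and pathological cache behavior
// is characteristic of a Spectre attack loop. Thresholds are invented.
function shouldIsolate(metrics) {
  if (metrics.cpuMsPerEvent > 10) return true; // CPU-hungry: cheap to isolate
  if (metrics.cacheMissRate > 0.5) return true; // suspicious perf counters
  return false;
}
```

A false positive is harmless here: a legitimate Worker that gets isolated keeps running, just with a little more overhead.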

Once a Worker is isolated, Cloudflare can rely on the operating system’s Spectre defenses, as most desktop web browsers do.

Cloudflare has been working with the experts at Graz University of Technology (TU Graz) to develop this approach. TU Graz’s team co-discovered Spectre itself and has been responsible for a huge number of the follow-on discoveries since then. Cloudflare has developed the ability to dynamically isolate Workers and has identified metrics which reliably detect attacks.

As mentioned previously, process isolation is not a complete defense. However, because the other mitigations already force Spectre attacks to run slowly, Cloudflare has time to reasonably identify suspected malicious actors, and isolating their processes slows any potential attack down further.

### Step 3: Periodic whole-memory shuffling

At this point, all known attacks have been prevented. This leaves Workers susceptible to unknown attacks in the future, as with all other CPU-based systems. However, all new attacks will generally be very slow, taking days or longer, leaving Cloudflare with time to prepare a defense.

For example, it is within reason to restart the entire Workers runtime on a daily basis. This will reset the locations of everything in memory, forcing attacks to restart the process of discovering the locations of secrets. Cloudflare can also reschedule Workers across physical machines or cordons, so that the window to attack any particular neighbor is limited.

In general, because Workers are fundamentally preemptible (unlike containers or VMs), Cloudflare has a lot of freedom to frustrate attacks.

Cloudflare sees this as an ongoing investment — not something that will ever be done.


---

---
title: API
description: A set of programmatic APIs that can be integrated with local Cloudflare Workers-related workflows.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# API

Wrangler offers APIs to programmatically interact with your Cloudflare Workers.

* [unstable\_startWorker](#unstable%5Fstartworker) \- Start a server for running integration tests against your Worker.
* [unstable\_dev](#unstable%5Fdev) \- Start a server for running either end-to-end (e2e) or integration tests against your Worker.
* [getPlatformProxy](#getplatformproxy) \- Get proxies and values for emulating the Cloudflare Workers platform in a Node.js process.

## `unstable_startWorker`

This API exposes the internals of Wrangler's dev server, and allows you to customise how it runs. For example, you could use `unstable_startWorker()` to run integration tests against your Worker. This example uses `node:test`, but should apply to any testing framework:

JavaScript

```
import assert from "node:assert";
import test, { after, before, describe } from "node:test";
import { unstable_startWorker } from "wrangler";

describe("worker", () => {
  let worker;

  before(async () => {
    worker = await unstable_startWorker({ config: "wrangler.json" });
  });

  test("hello world", async () => {
    assert.strictEqual(
      await (await worker.fetch("http://example.com")).text(),
      "Hello world",
    );
  });

  after(async () => {
    await worker.dispose();
  });
});
```

## `unstable_dev`

Start an HTTP server for testing your Worker.

Once called, `unstable_dev` will return a `fetch()` function for invoking your Worker without needing to know the address or port, as well as a `stop()` function to shut down the HTTP server.

By default, `unstable_dev` will perform integration tests against a local server. If you wish to perform an e2e test against a preview Worker, pass `local: false` in the `options` object when calling the `unstable_dev()` function. Note that e2e tests can be significantly slower than integration tests.

Note

The `unstable_dev()` function has an `unstable_` prefix because the API is experimental and may change in the future. We recommend migrating to the `unstable_startWorker()` API, documented above.

If you have been using `unstable_dev()` for integration testing and want to migrate to Cloudflare's Vitest integration, refer to the [Migrate from unstable\_dev migration guide](https://developers.cloudflare.com/workers/testing/vitest-integration/migration-guides/migrate-from-unstable-dev/) for more information.

### Constructor

JavaScript

```
const worker = await unstable_dev(script, options);
```

### Parameters

* `script` ` string `  
   * A string containing a path to your Worker script, relative to your Worker project's root directory.
* `options` ` object ` optional  
   * Optional options object containing `wrangler dev` configuration settings.  
   * Include an `experimental` object inside `options` to access experimental features such as `disableExperimentalWarning`.  
         * Set `disableExperimentalWarning` to `true` to disable Wrangler's warning about using `unstable_` prefixed APIs.

### Return Type

`unstable_dev()` returns an object containing the following methods:

* `fetch()` `Promise<Response>`  
   * Send a request to your Worker. Returns a Promise that resolves with a [Response](https://developers.cloudflare.com/workers/runtime-apis/response) object.  
   * Refer to [Fetch](https://developers.cloudflare.com/workers/runtime-apis/fetch/).
* `stop()` `Promise<void>`  
   * Shuts down the dev server.

### Usage

When initiating each test suite, use a `beforeAll()` function to start `unstable_dev()`. The `beforeAll()` function is used to minimize overhead: starting the dev server takes a few hundred milliseconds, starting and stopping for each individual test adds up quickly, slowing your tests down.

In each test case, call `await worker.fetch()`, and check that the response is what you expect.

To wrap up a test suite, call `await worker.stop()` in an `afterAll` function.

#### Single Worker example


JavaScript

```
const { unstable_dev } = require("wrangler");

describe("Worker", () => {
  let worker;

  beforeAll(async () => {
    worker = await unstable_dev("src/index.js", {
      experimental: { disableExperimentalWarning: true },
    });
  });

  afterAll(async () => {
    await worker.stop();
  });

  it("should return Hello World", async () => {
    const resp = await worker.fetch();
    const text = await resp.text();
    expect(text).toMatchInlineSnapshot(`"Hello World!"`);
  });
});
```

TypeScript

```
import { unstable_dev } from "wrangler";
import type { UnstableDevWorker } from "wrangler";

describe("Worker", () => {
  let worker: UnstableDevWorker;

  beforeAll(async () => {
    worker = await unstable_dev("src/index.ts", {
      experimental: { disableExperimentalWarning: true },
    });
  });

  afterAll(async () => {
    await worker.stop();
  });

  it("should return Hello World", async () => {
    const resp = await worker.fetch();
    const text = await resp.text();
    expect(text).toMatchInlineSnapshot(`"Hello World!"`);
  });
});
```

#### Multi-Worker example

You can test Workers that call other Workers. In the below example, we refer to the Worker that calls other Workers as the parent Worker, and the Worker being called as a child Worker.

If you shut down the child Worker prematurely, the parent Worker will not know the child Worker exists and your tests will fail.


JavaScript

```
import { unstable_dev } from "wrangler";

describe("multi-worker testing", () => {
  let childWorker;
  let parentWorker;

  beforeAll(async () => {
    childWorker = await unstable_dev("src/child-worker.js", {
      config: "src/child-wrangler.toml",
      experimental: { disableExperimentalWarning: true },
    });
    parentWorker = await unstable_dev("src/parent-worker.js", {
      config: "src/parent-wrangler.toml",
      experimental: { disableExperimentalWarning: true },
    });
  });

  afterAll(async () => {
    await childWorker.stop();
    await parentWorker.stop();
  });

  it("childWorker should return Hello World itself", async () => {
    const resp = await childWorker.fetch();
    const text = await resp.text();
    expect(text).toMatchInlineSnapshot(`"Hello World!"`);
  });

  it("parentWorker should return Hello World by invoking the child worker", async () => {
    const resp = await parentWorker.fetch();
    const parsedResp = await resp.text();
    expect(parsedResp).toEqual("Parent worker sees: Hello World!");
  });
});
```

TypeScript

```
import { unstable_dev } from "wrangler";
import type { UnstableDevWorker } from "wrangler";

describe("multi-worker testing", () => {
  let childWorker: UnstableDevWorker;
  let parentWorker: UnstableDevWorker;

  beforeAll(async () => {
    childWorker = await unstable_dev("src/child-worker.js", {
      config: "src/child-wrangler.toml",
      experimental: { disableExperimentalWarning: true },
    });
    parentWorker = await unstable_dev("src/parent-worker.js", {
      config: "src/parent-wrangler.toml",
      experimental: { disableExperimentalWarning: true },
    });
  });

  afterAll(async () => {
    await childWorker.stop();
    await parentWorker.stop();
  });

  it("childWorker should return Hello World itself", async () => {
    const resp = await childWorker.fetch();
    const text = await resp.text();
    expect(text).toMatchInlineSnapshot(`"Hello World!"`);
  });

  it("parentWorker should return Hello World by invoking the child worker", async () => {
    const resp = await parentWorker.fetch();
    const parsedResp = await resp.text();
    expect(parsedResp).toEqual("Parent worker sees: Hello World!");
  });
});
```

## `getPlatformProxy`

The `getPlatformProxy` function returns an object containing proxies to **local** `workerd` bindings and emulations of Cloudflare Workers-specific values, allowing you to emulate the Workers environment inside a Node.js process.

Warning

`getPlatformProxy` is designed to be used exclusively in Node.js applications; it cannot be run inside the Workers runtime.

One general use case for getting a platform proxy is emulating bindings in applications that target Workers but run outside the Workers runtime (for example, framework development servers running in Node.js), or testing (for example, ensuring that code properly interacts with a particular type of binding).

Note

Binding proxies provided by this function are a best effort emulation of the real production bindings. Although they are designed to be as close as possible to the real thing, there might be slight differences and inconsistencies between the two.

### Syntax

JavaScript

```
const platform = await getPlatformProxy(options);
```

### Parameters

* `options` ` object ` optional  
   * Optional options object containing preferences for the bindings:  
         * `environment` string  
         The environment to use.  
         * `configPath` string  
         The path to the config file to use.  
         If no path is specified, the default behavior is to search from the current directory up the filesystem for a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) to use.  
         **Note:** this field is optional but if a path is specified it must point to a valid file on the filesystem.  
         * `persist` boolean | `{ path: string }`  
         Indicates if and where to persist the bindings data. If `true` or `undefined`, defaults to the same location used by Wrangler, so data can be shared between it and the caller. If `false`, no data is persisted to or read from the filesystem.  
         **Note:** If you use `wrangler`'s `--persist-to` option, note that this option adds a subdirectory called `v3` under the hood while `getPlatformProxy`'s `persist` does not. For example, if you run `wrangler dev --persist-to ./my-directory`, to reuse the same location using `getPlatformProxy`, you will have to specify: `persist: { path: "./my-directory/v3" }`.  
         * `experimental` `{ remoteBindings: boolean }`  
         Object used to enable experimental features. No guarantees are made about the stability of this API; use at your own risk.  
                  * `remoteBindings` Enables `getPlatformProxy` to connect to [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings).

### Return Type

`getPlatformProxy()` returns a `Promise` resolving to an object containing the following fields.

* `env` `Record<string, unknown>`  
   * Object containing proxies to bindings that can be used in the same way as production bindings. This matches the shape of the `env` object passed as the second argument to modules-format workers. These proxies point to binding implementations that run inside `workerd`.  
   * TypeScript Tip: `getPlatformProxy<Env>()` is a generic function. You can pass the shape of the bindings record as a type argument to get proper types without `unknown` values.
* `cf` IncomingRequestCfProperties read-only  
   * Mock of the `Request`'s `cf` property, containing data similar to what you would see in production.
* `ctx` object  
   * Mock object containing implementations of the [waitUntil](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil) and [passThroughOnException](https://developers.cloudflare.com/workers/runtime-apis/context/#passthroughonexception) functions that do nothing.
* `caches` object  
   * Emulation of the [Workers caches runtime API](https://developers.cloudflare.com/workers/runtime-apis/cache/).  
   * For the time being, all cache operations do nothing. A more accurate emulation will be made available soon.
* `dispose()` () => `Promise<void>`  
   * Terminates the underlying `workerd` process.  
   * Call this after the platform proxy is no longer required by the program. If you are running a long-running process (such as a dev server) that can indefinitely make use of the proxy, you do not need to call this function.

### Usage

The `getPlatformProxy` function uses bindings found in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). For example, if you have an [environment variable](https://developers.cloudflare.com/workers/configuration/environment-variables/#add-environment-variables-via-wrangler) configuration set up in the Wrangler configuration file:

wrangler.jsonc

```
{
  "vars": {
    "MY_VARIABLE": "test"
  }
}
```

wrangler.toml

```
[vars]
MY_VARIABLE = "test"
```

You can access the bindings by importing `getPlatformProxy` like this:

JavaScript

```
import { getPlatformProxy } from "wrangler";

const { env } = await getPlatformProxy();
```

To access the value of the `MY_VARIABLE` binding, add the following to your code:

JavaScript

```
console.log(`MY_VARIABLE = ${env.MY_VARIABLE}`);
```

This will print the following output: `MY_VARIABLE = test`.

### Supported bindings

All supported bindings found in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) are available to you via `env`.

The bindings supported by `getPlatformProxy` are:

* [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/)
* [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/)
* [KV namespace bindings](https://developers.cloudflare.com/kv/api/)
* [R2 bucket bindings](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/)
* [Queue bindings](https://developers.cloudflare.com/queues/configuration/javascript-apis/)
* [D1 database bindings](https://developers.cloudflare.com/d1/worker-api/)
* [Hyperdrive bindings](https://developers.cloudflare.com/hyperdrive)  
Hyperdrive values are passthrough values  
Values provided by Hyperdrive bindings, such as `connectionString` and `host`, do not have a valid meaning outside of a `workerd` process. Hyperdrive proxies therefore return passthrough values: the values corresponding to the database connection provided by the user. Otherwise, they would return values that would be unusable from within Node.js.
* [Workers AI bindings](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/#2-connect-your-worker-to-workers-ai)  
Workers AI local development usage charges  
Using Workers AI always accesses your Cloudflare account in order to run AI models and will incur usage charges even in local development.
* [Durable Object bindings](https://developers.cloudflare.com/durable-objects/api/)  
   * To use a Durable Object binding with `getPlatformProxy`, always specify a [script\_name](https://developers.cloudflare.com/workers/wrangler/configuration/#durable-objects).  
   For example, you might have the following binding in a Wrangler configuration file read by `getPlatformProxy`.  
   ```  
   {  
     "durable_objects": {  
       "bindings": [  
         {  
           "name": "MyDurableObject",  
           "class_name": "MyDurableObject",  
           "script_name": "external-do-worker"  
         }  
       ]  
     }  
   }  
   ```  
   ```  
   [[durable_objects.bindings]]  
   name = "MyDurableObject"  
   class_name = "MyDurableObject"  
   script_name = "external-do-worker"  
   ```  
   You will need to declare your Durable Object `"MyDurableObject"` in another Worker, called `external-do-worker` in this example.  
   ./external-do-worker/src/index.ts  
   ```  
   import { DurableObject } from "cloudflare:workers";  
   export class MyDurableObject extends DurableObject {  
     // Your DO code goes here  
   }  
   export default {  
     fetch() {  
       // Doesn't have to do anything, but a DO cannot be the default export  
       return new Response("Hello, world!");  
     },  
   };  
   ```  
   That Worker also needs a Wrangler configuration file that looks like this:  
   ```  
   {  
     "name": "external-do-worker",  
     "main": "src/index.ts",  
     "compatibility_date": "XXXX-XX-XX"  
   }  
   ```  
   ```  
   name = "external-do-worker"  
   main = "src/index.ts"  
   compatibility_date = "XXXX-XX-XX"  
   ```  
   If you are not using RPC with your Durable Object, you can run a separate Wrangler dev session alongside your framework development server.  
   Otherwise, you can build your application and run both Workers in the same Wrangler dev session.  
   If you are using Pages run:  
    npm  yarn  pnpm  
   ```  
   npx wrangler pages dev -c path/to/pages/wrangler.jsonc -c path/to/external-do-worker/wrangler.jsonc  
   ```  
   ```  
   yarn wrangler pages dev -c path/to/pages/wrangler.jsonc -c path/to/external-do-worker/wrangler.jsonc  
   ```  
   ```  
   pnpm wrangler pages dev -c path/to/pages/wrangler.jsonc -c path/to/external-do-worker/wrangler.jsonc  
   ```  
   If you are using Workers with Assets run:  
    npm  yarn  pnpm  
   ```  
   npx wrangler dev -c path/to/workers-assets/wrangler.jsonc -c path/to/external-do-worker/wrangler.jsonc  
   ```  
   ```  
   yarn wrangler dev -c path/to/workers-assets/wrangler.jsonc -c path/to/external-do-worker/wrangler.jsonc  
   ```  
   ```  
   pnpm wrangler dev -c path/to/workers-assets/wrangler.jsonc -c path/to/external-do-worker/wrangler.jsonc  
   ```


---

---
title: Bundling
description: Review Wrangler's default bundling.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Bundling

By default, Wrangler bundles your Worker code using [esbuild ↗](https://esbuild.github.io/). This means that Wrangler has built-in support for importing modules from [npm ↗](https://www.npmjs.com/) defined in your `package.json`. To review the exact code that Wrangler will upload to Cloudflare, run `npx wrangler deploy --dry-run --outdir dist`, which will show your Worker code after Wrangler's bundling.

`esbuild` version

Wrangler uses `esbuild`. We periodically update the `esbuild` version included with Wrangler, and since `esbuild` is a pre-1.0.0 tool, this may sometimes include breaking changes to how bundling works. In particular, we may bump the `esbuild` version in a Wrangler minor version.

Note

Wrangler's inbuilt bundling usually provides the best experience, but we understand there are cases where you will need more flexibility. You can provide `rules` and set `find_additional_modules` in your configuration to control which files are included in the deployed Worker but not bundled into the entry-point file. Furthermore, we have an escape hatch in the form of [Custom Builds](https://developers.cloudflare.com/workers/wrangler/custom-builds/), which lets you run your own build before Wrangler's built-in one.

## Including non-JavaScript modules

Bundling your Worker code takes multiple modules and bundles them into one file. Sometimes, you might have modules that cannot be inlined directly into the bundle. For example, instead of bundling a Wasm file into your JavaScript Worker, you would want to upload the Wasm file as a separate module that can be imported at runtime. Wrangler supports this by default for the following file types:

| Module extension    | Imported type      |
| ------------------- | ------------------ |
| .txt                | string             |
| .html               | string             |
| .sql                | string             |
| .bin                | ArrayBuffer        |
| .wasm, .wasm?module | WebAssembly.Module |

Refer to [Bundling configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#bundling) to customize these file types.
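As an illustrative sketch of such a customization (the `Text` rule and glob below are assumptions for this example, not defaults), a Wrangler configuration could add a rule so that Markdown files are importable as strings:

```
rules = [
  { type = "Text", globs = ["**/*.md"], fallthrough = true }
]
```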

For example, with the following import, `text` will be a string containing the contents of `example.txt`:

JavaScript

```
import text from "./example.txt";
```

This is also the basis for importing Wasm, as in the following example:

TypeScript

```
import wasm from "./example.wasm";

// Instantiate Wasm modules in the module scope
const instance = await WebAssembly.instantiate(wasm);

export default {
  fetch() {
    const result = instance.exports.exported_func();

    return new Response(result);
  },
};
```

Note

Cloudflare Workers does not support `WebAssembly.instantiateStreaming()`.

## Find additional modules

By setting `find_additional_modules` to `true` in your configuration file, Wrangler will traverse the file tree below `base_dir`. Any files that match the `rules` you define will also be included as unbundled, external modules in the deployed Worker.

This approach is useful for supporting lazy loading of large or dynamically imported JavaScript files:

* Normally, a large lazy-imported file (for example, `await import("./large-dep.mjs")`) would be bundled directly into your entrypoint, reducing the effectiveness of the lazy loading. If a matching rule is added to `rules`, then this file is only loaded and executed at runtime when it is actually imported.
* Previously, variable-based dynamic imports (for example, `` await import(`./lang/${language}.mjs`) ``) would always fail at runtime because Wrangler had no way of knowing which modules to include in the upload. Providing a rule that matches all these files, such as `{ "type": "ESModule", "globs": ["./lang/**/*.mjs"], "fallthrough": true }`, will ensure these modules are available at runtime.
* "Partial bundling" is supported when `find_additional_modules` is `true` and a source file matches one of the configured `rules`, since Wrangler then treats that file as "external" and does not try to bundle it into the entry-point file.
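Putting the options above together, a sketch of a configuration that enables additional-module discovery might look like the following (the `base_dir` and glob values are illustrative assumptions for a project with lazily imported language files):

```
find_additional_modules = true
base_dir = "src"
rules = [
  { type = "ESModule", globs = ["lang/**/*.mjs"], fallthrough = true }
]
```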

## Conditional exports

Wrangler respects the [conditional exports field ↗](https://nodejs.org/api/packages.html#conditional-exports) in `package.json`. This allows developers to implement isomorphic libraries that have different implementations depending on the JavaScript runtime they are running in. When bundling, Wrangler will try to load the [workerd key ↗](https://runtime-keys.proposal.wintercg.org/#workerd). Refer to the Wrangler repository for [an example isomorphic package ↗](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/isomorphic-random-example).
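For example, an isomorphic package's `package.json` might declare a `workerd` condition alongside others (the file paths here are illustrative assumptions):

```
{
  "exports": {
    ".": {
      "workerd": "./dist/workerd.mjs",
      "node": "./dist/node.mjs",
      "default": "./dist/default.mjs"
    }
  }
}
```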

## Disable bundling

Warning

Disabling bundling is not recommended in most scenarios. Use this option only when deploying code pre-processed by other tooling.

If your build tooling already produces build artifacts suitable for direct deployment to Cloudflare, you can opt out of bundling by using the `--no-bundle` command line flag: `npx wrangler deploy --no-bundle`. If you opt out of bundling, Wrangler will not process your code, and some features introduced by Wrangler's bundling (for example, minification and polyfill injection) will not be available.

Use [Custom Builds](https://developers.cloudflare.com/workers/wrangler/custom-builds/) to customize what Wrangler will bundle and upload to the Cloudflare global network when you use [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) and [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy).

## Generated Wrangler configuration

Some framework tools, or custom pre-build processes, generate a modified Wrangler configuration to be used to deploy the Worker code. Wrangler can automatically use this generated configuration rather than the user's original configuration.

See [Generated Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#generated-wrangler-configuration) for more information.


---

---
title: Commands
description: Create, develop, and deploy your Cloudflare Workers with Wrangler commands.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Commands

[Wrangler](https://developers.cloudflare.com/workers/wrangler/) offers a number of commands to manage your Cloudflare Workers.

## Commands

* [ Certificates ](https://developers.cloudflare.com/workers/wrangler/commands/certificates/)
* [ Containers ](https://developers.cloudflare.com/workers/wrangler/commands/containers/)
* [ D1 ](https://developers.cloudflare.com/workers/wrangler/commands/d1/)
* [ General commands ](https://developers.cloudflare.com/workers/wrangler/commands/general/)
* [ Hyperdrive ](https://developers.cloudflare.com/workers/wrangler/commands/hyperdrive/)
* [ KV ](https://developers.cloudflare.com/workers/wrangler/commands/kv/)
* [ Pages ](https://developers.cloudflare.com/workers/wrangler/commands/pages/)
* [ Pipelines ](https://developers.cloudflare.com/workers/wrangler/commands/pipelines/)
* [ Queues ](https://developers.cloudflare.com/workers/wrangler/commands/queues/)
* [ R2 ](https://developers.cloudflare.com/workers/wrangler/commands/r2/)
* [ Secrets Store ](https://developers.cloudflare.com/workers/wrangler/commands/secrets-store/)
* [ Tunnel ](https://developers.cloudflare.com/workers/wrangler/commands/tunnel/)
* [ Vectorize ](https://developers.cloudflare.com/workers/wrangler/commands/vectorize/)
* [ VPC ](https://developers.cloudflare.com/workers/wrangler/commands/vpc/)
* [ Workers for Platforms ](https://developers.cloudflare.com/workers/wrangler/commands/workers-for-platforms/)
* [ Workflows ](https://developers.cloudflare.com/workers/wrangler/commands/workflows/)

## How to run Wrangler commands

```
wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS]
```

Since Cloudflare recommends [installing Wrangler locally](https://developers.cloudflare.com/workers/wrangler/install-and-update/) in your project (rather than globally), the way to run Wrangler will depend on your specific setup and package manager.

 npm  yarn  pnpm 

```
npx wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS]
```

```
yarn wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS]
```

```
pnpm wrangler <COMMAND> <SUBCOMMAND> [PARAMETERS] [OPTIONS]
```

You can add Wrangler commands that you use often as scripts in your project's `package.json` file:

```
{
  ...
  "scripts": {
    "deploy": "wrangler deploy",
    "dev": "wrangler dev"
  }
  ...
}
```

You can then run them using your package manager of choice:

 npm  yarn  pnpm 

```
npm run deploy
```

```
yarn run deploy
```

```
pnpm run deploy
```


---

---
title: Certificates
description: Wrangler commands for managing mTLS and CA certificates, for use standalone or with Hyperdrive.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Certificates

Use these commands to manage certificates for mTLS connections.

The `mtls-certificate` commands manage client certificates for Worker subrequests. The `cert` commands manage both mTLS client certificates and Certificate Authority (CA) chain certificates, primarily for use with [Hyperdrive](https://developers.cloudflare.com/workers/wrangler/commands/hyperdrive/) configurations.

---

## `mtls-certificate`

Manage client certificates used for mTLS connections in subrequests.

These certificates can be used in [mtls\_certificate bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls), which allow a Worker to present the certificate when establishing a connection with an origin that requires client authentication (mTLS).
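As a sketch of how such a binding is used (the binding name `MY_CERT` and the origin URL are illustrative assumptions, not values from this page), the Worker calls `fetch()` on the binding so that the subrequest presents the client certificate during the TLS handshake:

```javascript
// Sketch: a Worker whose subrequests present a client certificate via an
// mtls_certificate binding. "MY_CERT" and the origin URL are assumptions.
const worker = {
  async fetch(request, env) {
    // fetch() on the binding performs the mTLS handshake with the origin.
    return env.MY_CERT.fetch("https://my-secured-origin.example/");
  },
};

export default worker;
```

In production, `env.MY_CERT` is provided by the `mtls_certificates` entry in your Wrangler configuration.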

### `mtls-certificate upload`

Upload an mTLS certificate

 npm  pnpm  yarn 

```
npx wrangler mtls-certificate upload
```

```
pnpm wrangler mtls-certificate upload
```

```
yarn wrangler mtls-certificate upload
```

* `--cert` ` string ` required  
The path to a certificate file (.pem) containing a chain of certificates to upload
* `--key` ` string ` required  
The path to a file containing the private key for your leaf certificate
* `--name` ` string `  
The name for the certificate

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `upload` command to upload an mTLS certificate.

Terminal window

```
npx wrangler mtls-certificate upload --cert cert.pem --key key.pem --name my-origin-cert
```

```
Uploading mTLS Certificate my-origin-cert...
Success! Uploaded mTLS Certificate my-origin-cert
ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d
Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US
Expires: 1/01/2025
```

You can then add this certificate as a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

wrangler.jsonc

```
{
  "mtls_certificates": [
    {
      "binding": "MY_CERT",
      "certificate_id": "99f5fef1-6cc1-46b8-bd79-44a0d5082b8d"
    }
  ]
}
```

wrangler.toml

```
[[mtls_certificates]]
binding = "MY_CERT"
certificate_id = "99f5fef1-6cc1-46b8-bd79-44a0d5082b8d"
```

Note that the certificate and private keys must be in separate (typically `.pem`) files when uploading.

### `mtls-certificate list`

List uploaded mTLS certificates

 npm  pnpm  yarn 

```
npx wrangler mtls-certificate list
```

```
pnpm wrangler mtls-certificate list
```

```
yarn wrangler mtls-certificate list
```

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `list` command to list your uploaded mTLS certificates.

Terminal window

```
npx wrangler mtls-certificate list
```

```
ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d
Name: my-origin-cert
Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US
Created on: 1/01/2023
Expires: 1/01/2025

ID: c5d004d1-8312-402c-b8ed-6194328d5cbe
Issuer: CN=another-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US
Created on: 1/01/2023
Expires: 1/01/2025
```

### `mtls-certificate delete`

Delete an mTLS certificate

 npm  pnpm  yarn 

```
npx wrangler mtls-certificate delete
```

```
pnpm wrangler mtls-certificate delete
```

```
yarn wrangler mtls-certificate delete
```

* `--id` ` string `  
The id of the mTLS certificate to delete
* `--name` ` string `  
The name of the mTLS certificate record to delete

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `delete` command to delete an mTLS certificate.

Terminal window

```
npx wrangler mtls-certificate delete --id 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d
```

```
Are you sure you want to delete certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d (my-origin-cert)? [y/n]
yes
Deleting certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d...
Deleted certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d successfully
```

---

## `cert`

Manage mTLS client certificates and Certificate Authority (CA) chain certificates used for secured connections.

These certificates can be used in Hyperdrive configurations, enabling Hyperdrive to present a client certificate when connecting to an origin database that requires client authentication (mTLS) or to trust a custom Certificate Authority (CA).

### `cert upload mtls-certificate`

Upload an mTLS certificate

 npm  pnpm  yarn 

```
npx wrangler cert upload mtls-certificate
```

```
pnpm wrangler cert upload mtls-certificate
```

```
yarn wrangler cert upload mtls-certificate
```

* `--cert` ` string ` required  
The path to a certificate file (.pem) containing a chain of certificates to upload
* `--key` ` string ` required  
The path to a file containing the private key for your leaf certificate
* `--name` ` string `  
The name for the certificate

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `upload` command to upload an mTLS certificate.

```
npx wrangler cert upload mtls-certificate --cert cert.pem --key key.pem --name my-origin-cert
```

```

Uploading mTLS Certificate my-origin-cert...

Success! Uploaded mTLS Certificate my-origin-cert

ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d

Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US

Expires: 1/01/2025


```

Note that the certificate and private key must be in separate files (typically `.pem`) when uploading.
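If you need a certificate pair to test with, you can generate a throwaway self-signed certificate and key as separate `.pem` files using OpenSSL (the file names and subject below are placeholders; production certificates should be issued by your CA):

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid for one year,
# writing the private key and the certificate to separate PEM files.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout client-key.pem -out client-cert.pem \
  -days 365 -subj "/CN=my-test-client"
```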

### `cert upload certificate-authority`

Upload a CA certificate chain

npm:

```
npx wrangler cert upload certificate-authority
```

pnpm:

```
pnpm wrangler cert upload certificate-authority
```

yarn:

```
yarn wrangler cert upload certificate-authority
```

* `--name` ` string `  
The name for the certificate
* `--ca-cert` ` string ` required  
The path to a certificate file (.pem) containing a chain of CA certificates to upload

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `upload` command to upload a CA certificate.

```
npx wrangler cert upload certificate-authority --ca-cert server-ca-chain.pem --name SERVER_CA_CHAIN
```

```

Uploading CA Certificate SERVER_CA_CHAIN...

Success! Uploaded CA Certificate SERVER_CA_CHAIN

ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d

Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US

Expires: 1/01/2025


```

### `cert list`

List uploaded mTLS certificates

npm:

```
npx wrangler cert list
```

pnpm:

```
pnpm wrangler cert list
```

yarn:

```
yarn wrangler cert list
```

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `list` command to view uploaded mTLS and CA certificates.

```
npx wrangler cert list
```

```

ID: 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d

Name: my-origin-cert

Issuer: CN=my-secured-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US

Created on: 1/01/2023

Expires: 1/01/2025


ID: c5d004d1-8312-402c-b8ed-6194328d5cbe

Issuer: CN=another-origin.com,OU=my-team,O=my-org,L=San Francisco,ST=California,C=US

Created on: 1/01/2023

Expires: 1/01/2025


```

### `cert delete`

Delete an mTLS certificate

npm:

```
npx wrangler cert delete
```

pnpm:

```
pnpm wrangler cert delete
```

yarn:

```
yarn wrangler cert delete
```

* `--id` ` string `  
The id of the mTLS certificate to delete
* `--name` ` string `  
The name of the mTLS certificate record to delete

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `delete` command to delete an mTLS or CA certificate.

```
npx wrangler cert delete --id 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d
```

```

Are you sure you want to delete certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d (my-origin-cert)? [y/n]

yes

Deleting certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d...

Deleted certificate 99f5fef1-6cc1-46b8-bd79-44a0d5082b8d successfully


```


---

---
title: Containers
description: Wrangler commands for interacting with Cloudflare's Container Platform.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Containers

Interact with [Containers](https://developers.cloudflare.com/containers/) using Wrangler.

### `build`

Build a Container image from a Dockerfile.

```

wrangler containers build [PATH] [OPTIONS]


```

* `PATH` ` string ` optional  
   * Path for the directory containing the Dockerfile to build.
* `-t, --tag` ` string ` required  
   * Name and optionally a tag (format: "name:tag").
* `--path-to-docker` ` string ` optional  
   * Path to your docker binary if it's not on `$PATH`.  
   * Default: "docker"
* `-p, --push` ` boolean ` optional  
   * Push the built image to Cloudflare's managed registry.  
   * Default: false
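For example, to build an image from the Dockerfile in the current directory and push it to Cloudflare's managed registry in one step (the image name and tag are placeholders):

```
wrangler containers build . -t my-app:v1 --push
```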

### `delete`

Delete a Container (application).

```

wrangler containers delete <CONTAINER_ID> [OPTIONS]


```

* `CONTAINER_ID` ` string ` required  
   * The ID of the Container to delete.

### `images`

Perform operations on images in your containers registry.

#### `images list`

List images in your containers registry.

```

wrangler containers images list [OPTIONS]


```

* `--filter` ` string ` optional  
   * Regex to filter results.
* `--json` ` boolean ` optional  
   * Return output as clean JSON.  
   * Default: false

#### `images delete`

Remove an image from your containers registry.

```

wrangler containers images delete [IMAGE] [OPTIONS]


```

* `IMAGE` ` string ` required  
   * Image to delete, in the form `IMAGE:TAG`.

### `registries`

Configure and view registries available to your container. [Read more](https://developers.cloudflare.com/containers/platform-details/image-management/#using-amazon-ecr-container-images) about our currently supported external registries.

#### `registries list`

List registries your containers are able to use.

```

wrangler containers registries list [OPTIONS]


```

* `--json` ` boolean ` optional  
   * Return output as clean JSON.  
   * Default: false

#### `registries configure`

Configure a new registry for your account.

```

wrangler containers registries configure [DOMAIN] [OPTIONS]


```

* `DOMAIN` ` string ` required  
   * Domain to configure for the registry.
* `--public-credential` ` string ` required  
   * The public part of the registry credentials, e.g. `AWS_ACCESS_KEY_ID` for ECR
* `--secret-store-id` ` string ` optional  
   * The ID of the secret store to use to store the registry credentials
* `--secret-name` ` string ` optional  
   * The name Wrangler should store the registry credentials under

When run interactively, Wrangler will prompt you for your secret and store it in Secrets Store. To run non-interactively, pipe the secret value to Wrangler through stdin and the secret will be created for you.
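For example, in a CI pipeline you might pipe the credential in from an environment variable (the ECR domain and variable names below are placeholders):

```
echo "$AWS_SECRET_ACCESS_KEY" | wrangler containers registries configure \
  123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --public-credential "$AWS_ACCESS_KEY_ID"
```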

#### `registries delete`

Remove a registry configuration from your account.

```

wrangler containers registries delete [DOMAIN] [OPTIONS]


```

* `DOMAIN` ` string ` required  
   * Domain of the registry to delete.

#### `registries credentials`

Generate temporary credentials to push or pull images from the Cloudflare managed registry (`registry.cloudflare.com`).

```

wrangler containers registries credentials [OPTIONS]


```

* `--push` ` boolean ` optional  
   * Generate credentials with push permission.
* `--pull` ` boolean ` optional  
   * Generate credentials with pull permission.
* `--expiration-minutes` ` number ` optional  
   * How long the credentials should be valid for (in minutes).  
   * Default: 15

At least one of `--push` or `--pull` must be specified.
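For example, a CI job could mint short-lived push credentials and feed them to `docker login`. The `--password-stdin` flow and the `v1` username shown here are assumptions — check the command's actual output for the exact login details for your account:

```
wrangler containers registries credentials --push --expiration-minutes 30 \
  | docker login registry.cloudflare.com --username v1 --password-stdin
```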

### `info`

Get information about a specific Container, including top-level details and a list of instances.

```

wrangler containers info <CONTAINER_ID> [OPTIONS]


```

* `CONTAINER_ID` ` string ` required  
   * The ID of the Container to get information about.

### `instances`

List all Container instances for a given application. Displays instance ID, name, state, location, version, and creation time.

In interactive mode, results are paginated. Press `Enter` to load the next page or `Esc`/`q` to stop. In non-interactive environments (for example, when piping output or running in CI), all pages are fetched automatically.

Use the `--json` flag to return output as a flat JSON array. Each element contains the fields `id`, `name`, `state`, `location`, `version`, and `created`. This is also the default output format in non-interactive environments.

```

wrangler containers instances <APPLICATION_ID> [OPTIONS]


```

* `APPLICATION_ID` ` string ` required  
   * The UUID of the application to list instances for. Use `wrangler containers list` to find application IDs.
* `--per-page` ` number ` optional  
   * Number of instances per page.  
   * Default: 25
* `--json` ` boolean ` optional  
   * Return output as clean JSON.  
   * Default: false

For example, to list instances for an application:

```
wrangler containers instances 12345678-abcd-1234-abcd-123456789abc
```

```

INSTANCE                              NAME        STATE          LOCATION  VERSION  CREATED

a1b2c3d4-e5f6-7890-abcd-ef1234567890  worker-12   running        sfo06     3        2025-06-01T12:00:00Z

b2c3d4e5-f6a7-8901-bcde-f12345678901  worker-47   provisioning   iad01     2        2025-06-01T13:00:00Z


```

To get the same data as JSON:

```
wrangler containers instances 12345678-abcd-1234-abcd-123456789abc --json
```

```

[

  {

    "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",

    "name": "worker-12",

    "state": "running",

    "location": "sfo06",

    "version": 3,

    "created": "2025-06-01T12:00:00Z"

  }

]


```
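Because `--json` emits a flat array, the output composes with standard JSON tooling. For example, to print the IDs of all running instances (assuming `jq` is installed):

```
wrangler containers instances 12345678-abcd-1234-abcd-123456789abc --json \
  | jq -r '.[] | select(.state == "running") | .id'
```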

### `list`

List the Containers in your account.

```

wrangler containers list [OPTIONS]


```

### `push`

Push a tagged image to a Cloudflare managed registry, which is automatically integrated with your account.

```

wrangler containers push [TAG] [OPTIONS]


```

* `TAG` ` string ` required  
   * The name and tag of the container image to push.
* `--path-to-docker` ` string ` optional  
   * Path to your docker binary if it's not on `$PATH`.  
   * Default: "docker"

### `ssh`

Connect to a running Container instance using SSH. Refer to [SSH](https://developers.cloudflare.com/containers/ssh/) for configuration details.

```

wrangler containers ssh <INSTANCE_ID>


```

You can also specify a command to run, instead of the default shell. For example:

```

wrangler containers ssh <INSTANCE_ID> -- ls -al


```

* `INSTANCE_ID` ` string ` required  
   * The ID of the Container instance to SSH into.


---

---
title: D1
description: Wrangler commands for interacting with Cloudflare D1.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# D1

Interact with the [D1](https://developers.cloudflare.com/d1/) database service using Wrangler.

## `d1 create`

Creates a new D1 database, and provides the binding and UUID that you will put in your config file.

This command acts on remote D1 Databases.

npm:

```
npx wrangler d1 create [NAME]
```

pnpm:

```
pnpm wrangler d1 create [NAME]
```

yarn:

```
yarn wrangler d1 create [NAME]
```

* `[NAME]` ` string ` required  
The name of the new D1 database
* `--location` ` string `  
A hint for the primary location of the new DB. Options: `weur` (Western Europe), `eeur` (Eastern Europe), `apac` (Asia Pacific), `oc` (Oceania), `wnam` (Western North America), `enam` (Eastern North America)
* `--jurisdiction` ` string `  
The jurisdiction to restrict the D1 database to, so that it runs and stores data within that region to comply with local regulations. If a jurisdiction is set, the location hint is ignored. Options: `eu` (European Union), `fedramp` (FedRAMP-compliant data centers)
* `--use-remote` ` boolean `  
Use a remote binding when adding the newly created resource to your config
* `--update-config` ` boolean `  
Automatically update your config file with the newly added resource
* `--binding` ` string `  
The binding name of this resource in your Worker
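For example, to create a database with a Western Europe location hint and have Wrangler update your config file automatically (the database name is a placeholder):

```
npx wrangler d1 create my-database --location weur --update-config
```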

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 info`

Get information about a D1 database, including the current database size and state

This command acts on remote D1 Databases.

npm:

```
npx wrangler d1 info [NAME]
```

pnpm:

```
pnpm wrangler d1 info [NAME]
```

yarn:

```
yarn wrangler d1 info [NAME]
```

* `[NAME]` ` string ` required  
The name of the DB
* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 list`

List all D1 databases in your account

This command acts on remote D1 Databases.

npm:

```
npx wrangler d1 list
```

pnpm:

```
pnpm wrangler d1 list
```

yarn:

```
yarn wrangler d1 list
```

* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 delete`

Delete a D1 database

This command acts on remote D1 Databases.

npm:

```
npx wrangler d1 delete [NAME]
```

pnpm:

```
pnpm wrangler d1 delete [NAME]
```

yarn:

```
yarn wrangler d1 delete [NAME]
```

* `[NAME]` ` string ` required  
The name or binding of the DB
* `--skip-confirmation` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 execute`

Execute a command or SQL file

You must provide either `--command` or `--file` for this command to run successfully.

npm:

```
npx wrangler d1 execute [DATABASE]
```

pnpm:

```
pnpm wrangler d1 execute [DATABASE]
```

yarn:

```
yarn wrangler d1 execute [DATABASE]
```

* `[DATABASE]` ` string ` required  
The name or binding of the DB
* `--command` ` string `  
The SQL query you wish to execute, or multiple queries separated by ';'
* `--file` ` string `  
A .sql file to ingest
* `--yes` ` boolean ` alias: --y  
Answer "yes" to any prompts
* `--local` ` boolean `  
Execute commands/files against a local DB for use with wrangler dev
* `--remote` ` boolean `  
Execute commands/files against a remote D1 database for use with remote bindings or your deployed Worker
* `--persist-to` ` string `  
Specify directory to use for local persistence (for use with --local)
* `--json` ` boolean ` default: false  
Return output as JSON
* `--preview` ` boolean ` default: false  
Execute commands/files against a preview D1 database
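For example, to run an inline query against your deployed (remote) database, or to ingest a SQL file into the local database used by `wrangler dev` (the database and file names are placeholders):

```
npx wrangler d1 execute my-database --remote --command "SELECT COUNT(*) FROM users;"
npx wrangler d1 execute my-database --local --file ./schema.sql
```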

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 export`

Export the contents or schema of your database as a .sql file

npm:

```
npx wrangler d1 export [NAME]
```

pnpm:

```
pnpm wrangler d1 export [NAME]
```

yarn:

```
yarn wrangler d1 export [NAME]
```

* `[NAME]` ` string ` required  
The name of the D1 database to export
* `--local` ` boolean `  
Export from your local DB you use with wrangler dev
* `--remote` ` boolean `  
Export from a remote D1 database
* `--output` ` string ` required  
Path to the SQL file for your export
* `--table` ` string `  
Specify which tables to include in export
* `--no-schema` ` boolean `  
Only output table contents, not the DB schema
* `--no-data` ` boolean `  
Only output table schema, not the contents of the DBs themselves
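For example, to export only the schema of a remote database (the database name and output path are placeholders):

```
npx wrangler d1 export my-database --remote --no-data --output ./schema.sql
```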

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 time-travel info`

Retrieve information about a database at a specific point-in-time using Time Travel

This command acts on remote D1 Databases.

For more information about Time Travel, see <https://developers.cloudflare.com/d1/reference/time-travel/>

npm:

```
npx wrangler d1 time-travel info [DATABASE]
```

pnpm:

```
pnpm wrangler d1 time-travel info [DATABASE]
```

yarn:

```
yarn wrangler d1 time-travel info [DATABASE]
```

* `[DATABASE]` ` string ` required  
The name or binding of the DB
* `--timestamp` ` string `  
Accepts a Unix (seconds from epoch) or RFC3339 timestamp (e.g. 2023-07-13T08:46:42.228Z) to retrieve a bookmark for
* `--json` ` boolean ` default: false  
Return output as JSON
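For example, to retrieve the bookmark for a database's state at a specific RFC3339 timestamp (the database name is a placeholder):

```
npx wrangler d1 time-travel info my-database --timestamp=2023-07-13T08:46:42.228Z
```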

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 time-travel restore`

Restore a database back to a specific point-in-time

This command acts on remote D1 Databases.

For more information about Time Travel, see <https://developers.cloudflare.com/d1/reference/time-travel/>

npm:

```
npx wrangler d1 time-travel restore [DATABASE]
```

pnpm:

```
pnpm wrangler d1 time-travel restore [DATABASE]
```

yarn:

```
yarn wrangler d1 time-travel restore [DATABASE]
```

* `[DATABASE]` ` string ` required  
The name or binding of the DB
* `--bookmark` ` string `  
Bookmark to use for time travel
* `--timestamp` ` string `  
Accepts a Unix (seconds from epoch) or RFC3339 timestamp (e.g. 2023-07-13T08:46:42.228Z) to retrieve a bookmark for (within the last 30 days)
* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 migrations create`

Create a new migration

This will generate a new versioned file inside the `migrations` folder. Name your migration as a description of your change, which will make it easier to find in the `migrations` folder later. An example filename looks like:

```
0000_create_user_table.sql

```

The filename will include a version number and the migration name you specify.
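For example, running the following (the database name is a placeholder):

```
npx wrangler d1 migrations create my-database create_user_table
```

would generate a file such as `migrations/0001_create_user_table.sql`, where the version prefix is assigned by Wrangler.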

npm:

```
npx wrangler d1 migrations create [DATABASE] [MESSAGE]
```

pnpm:

```
pnpm wrangler d1 migrations create [DATABASE] [MESSAGE]
```

yarn:

```
yarn wrangler d1 migrations create [DATABASE] [MESSAGE]
```

* `[DATABASE]` ` string ` required  
The name or binding of the DB
* `[MESSAGE]` ` string ` required  
The Migration message

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 migrations list`

View a list of unapplied migration files

npm:

```
npx wrangler d1 migrations list [DATABASE]
```

pnpm:

```
pnpm wrangler d1 migrations list [DATABASE]
```

yarn:

```
yarn wrangler d1 migrations list [DATABASE]
```

* `[DATABASE]` ` string ` required  
The name or binding of the DB
* `--local` ` boolean `  
Check migrations against a local DB for use with wrangler dev
* `--remote` ` boolean `  
Check migrations against a remote DB for use with wrangler dev --remote
* `--preview` ` boolean ` default: false  
Check migrations against a preview D1 DB
* `--persist-to` ` string `  
Specify directory to use for local persistence (you must use --local with this flag)

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 migrations apply`

Apply any unapplied D1 migrations

This command will prompt you to confirm the migrations you are about to apply. Confirm that you would like to proceed. After applying, a backup will be captured.

The progress of each migration will be printed in the console.

When running the apply command in a CI/CD environment or another non-interactive command line, the confirmation step will be skipped, but the backup will still be captured.

If applying a migration results in an error, this migration will be rolled back, and the previous successful migration will remain applied.

npm:

```
npx wrangler d1 migrations apply [DATABASE]
```

pnpm:

```
pnpm wrangler d1 migrations apply [DATABASE]
```

yarn:

```
yarn wrangler d1 migrations apply [DATABASE]
```

* `[DATABASE]` ` string ` required  
The name or binding of the DB
* `--local` ` boolean `  
Execute commands/files against a local DB for use with wrangler dev
* `--remote` ` boolean `  
Execute commands/files against a remote DB for use with wrangler dev --remote
* `--preview` ` boolean ` default: false  
Execute commands/files against a preview D1 DB
* `--persist-to` ` string `  
Specify directory to use for local persistence (you must use --local with this flag)

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `d1 insights`

Experimental

Get information about the queries run on a D1 database

This command acts on remote D1 Databases.

```

npx wrangler d1 insights [NAME]

```

* `[NAME]` ` string ` required  
The name of the DB
* `--time-period` ` string ` default: 1d  
Fetch data from now to the provided time period
* `--sort-type` ` string ` default: sum  
Choose the operation you want to sort insights by
* `--sort-by` ` string ` default: time  
Choose the field you want to sort insights by
* `--sort-direction` ` string ` default: DESC  
Choose a sort direction
* `--limit` ` number ` default: 5  
Fetch insights about the first X queries
* `--json` ` boolean ` default: false  
Return output as JSON
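
For example, a sketch of fetching insights for the last seven days as JSON (the database name `my-database` and the `count` sort field are illustrative assumptions):

```

# Fetch the ten most-run queries over the past week, as JSON
npx wrangler d1 insights my-database --time-period 7d --sort-by count --limit 10 --json

```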

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: General commands
description: General Wrangler commands for developing, deploying, and managing Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# General commands

Learn about general Wrangler commands for developing, deploying, and managing Workers and other pieces of the Cloudflare developer platform.

## `docs`

Open the Cloudflare developer documentation in your default browser.

```

npx wrangler docs [SEARCH]

```

* `[SEARCH]` ` string `  
Enter search terms (e.g. the wrangler command) you want to know more about
* `--yes` ` boolean ` alias: --y  
Takes you to the docs, even if search fails
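
For example, to search the documentation for material on D1 (the search term is illustrative):

```

# Open the docs search results for "d1" in your default browser
npx wrangler docs d1

```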

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `init`

Create a new project via the [create-cloudflare-cli (C3) tool](https://developers.cloudflare.com/workers/get-started/guide/#1-create-a-new-worker-project). A variety of web frameworks are available to choose from as well as templates. Dependencies are installed by default, with the option to deploy your project immediately.

```

wrangler init [<NAME>] [OPTIONS]

```

* `NAME` ` string ` optional (default: name of working directory)  
   * The name of the Workers project. This is both the directory name and `name` property in the generated [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--yes` ` boolean ` optional  
   * Answer yes to any prompts for new projects.
* `--from-dash` ` string ` optional  
   * Fetch a Worker initialized from the dashboard. This is done by passing the flag and the Worker name. `wrangler init --from-dash <WORKER_NAME>`.  
   * The `--from-dash` command will not automatically sync changes made to the dashboard after the command is used. Therefore, it is recommended that you continue using the CLI.
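
For example, a sketch of fetching a dashboard-created Worker into a local project (the Worker name is illustrative):

```

# Initialize a local project from a Worker that was created in the dashboard
wrangler init my-dashboard-worker --from-dash my-dashboard-worker

```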

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

## `dev`

Start a local server for developing your Worker.

```

wrangler dev [<SCRIPT>] [OPTIONS]

```

Note

None of the options for this command are required. Many of these options can be set in your Wrangler file. Refer to the [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration) documentation for more information.

* `SCRIPT` ` string `  
   * The path to an entry point for your Worker. Only required if your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) does not include a `main` key (for example, `main = "index.js"`).
* `--name` ` string ` optional  
   * Name of the Worker.
* `--config`, `-c` ` string[] ` optional  
   * Path(s) to [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). If not provided, Wrangler will use the nearest config file based on your current working directory.  
   * You can provide multiple configuration files to run multiple Workers in one dev session like this: `wrangler dev -c ./wrangler.toml -c ../other-worker/wrangler.toml`. The first config will be treated as the _primary_ Worker, which will be exposed over HTTP. The remaining config files will only be accessible via a service binding from the primary Worker.
* `--no-bundle` ` boolean ` (default: false) optional  
   * Skip Wrangler's build steps. Particularly useful when using custom builds. Refer to [Bundling ↗](https://developers.cloudflare.com/workers/wrangler/bundling/) for more information.
* `--env` ` string ` optional  
   * Perform on a specific environment.
* `--compatibility-date` ` string ` optional  
   * A date in the form yyyy-mm-dd, which will be used to determine which version of the Workers runtime is used.
* `--compatibility-flags`, `--compatibility-flag` ` string[] ` optional  
   * Flags to use for compatibility checks.
* `--latest` ` boolean ` (default: true) optional  
   * Use the latest version of the Workers runtime.
* `--ip` ` string ` optional  
   * IP address to listen on, defaults to `localhost`.
* `--port` ` number ` optional  
   * Port to listen on.
* `--inspector-port` ` number ` optional  
   * Port for devtools to connect to.
* `--routes`, `--route` ` string[] ` optional  
   * Routes to upload.  
   * For example: `--route example.com/*`.
* `--host` ` string ` optional  
   * Host to forward requests to, defaults to the zone of project.
* `--local-protocol` ` 'http'|'https' ` (default: http) optional  
   * Protocol to listen to requests on.
* `--https-key-path` ` string ` optional  
   * Path to a custom certificate key.
* `--https-cert-path` ` string ` optional  
   * Path to a custom certificate.
* `--local-upstream` ` string ` optional  
   * Host to act as origin in local mode, defaults to `dev.host` or route.
* `--assets` ` string ` optional beta  
   * Folder of static assets to be served. Replaces [Workers Sites](https://developers.cloudflare.com/workers/configuration/sites/). Visit [assets](https://developers.cloudflare.com/workers/static-assets/) for more information.
* `--site` ` string ` optional deprecated, use `--assets`  
   * Folder of static assets for Workers Sites.  
   Warning  
   Workers Sites is deprecated. Please use [Workers Assets](https://developers.cloudflare.com/workers/static-assets/) or [Pages](https://developers.cloudflare.com/pages/).
* `--site-include` ` string[] ` optional deprecated  
   * Array of `.gitignore`\-style patterns that match file or directory names from the sites directory. Only matched items will be uploaded.
* `--site-exclude` ` string[] ` optional deprecated  
   * Array of `.gitignore`\-style patterns that match file or directory names from the sites directory. Matched items will not be uploaded.
* `--upstream-protocol` ` 'http'|'https' ` (default: https) optional  
   * Protocol to forward requests to host on.
* `--var` ` key:value[] ` optional  
   * Array of `key:value` pairs to inject as variables into your code. The value will always be passed as a string to your Worker.  
   * For example, `--var "git_hash:'$(git rev-parse HEAD)'" "test:123"` makes the `git_hash` and `test` variables available in your Worker's `env`.  
   * This flag is an alternative to defining [vars](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). If defined in both places, this flag's values will be used.
* `--define` ` key:value[] ` optional  
   * Array of `key:value` pairs to replace global identifiers in your code.  
   * For example, `--define "GIT_HASH:'$(git rev-parse HEAD)'"` will replace all uses of `GIT_HASH` with the actual value at build time.  
   * This flag is an alternative to defining [define](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). If defined in both places, this flag's values will be used.
* `--tsconfig` ` string ` optional  
   * Path to a custom `tsconfig.json` file.
* `--minify` ` boolean ` optional  
   * Minify the Worker.
* `--persist-to` ` string ` optional  
   * Specify directory to use for local persistence.
* `--remote` ` boolean ` (default: false) optional  
   * Develop against remote resources and data stored on Cloudflare's network.
* `--test-scheduled` ` boolean ` (default: false) optional  
   * Exposes a `/__scheduled` fetch route which will trigger a scheduled event (Cron Trigger) for testing during development. To simulate different cron patterns, a `cron` query parameter can be passed in: `/__scheduled?cron=*+*+*+*+*` or `/cdn-cgi/handler/scheduled?cron=*+*+*+*+*`.
* `--log-level` ` 'debug'|'info'|'log'|'warn'|'error'|'none' ` (default: log) optional  
   * Specify Wrangler's logging level.
* `--show-interactive-dev-session` ` boolean ` (default: true if the terminal supports interactivity) optional  
   * Show the interactive dev session.
* `--alias` `Array<string>`  
   * Specify modules to alias using [module aliasing](https://developers.cloudflare.com/workers/wrangler/configuration/#module-aliasing).
* `--types` ` boolean ` (default: false) optional  
   * Generate types from your Worker configuration.
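
Putting a few of these options together, the following sketch starts a dev session on a custom port with a test variable and the scheduled-event route enabled (the port and variable name are illustrative):

```

# Start a local dev session on port 8788, inject a string variable,
# and expose the /__scheduled route for testing Cron Triggers
wrangler dev --port 8788 --var "API_HOST:staging.example.com" --test-scheduled

# In a second terminal, trigger the scheduled handler:
curl "http://localhost:8788/__scheduled?cron=*+*+*+*+*"

```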

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

`wrangler dev` is a way to [locally test](https://developers.cloudflare.com/workers/development-testing/) your Worker while developing. With `wrangler dev` running, send HTTP requests to `localhost:8787` and your Worker should execute as expected. You will also see `console.log` messages and exceptions appearing in your terminal.

---

## `deploy`

Deploy your Worker to Cloudflare.

When you run `wrangler deploy` in a project directory without a Wrangler configuration file, Wrangler will [automatically detect your framework](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/) and configure your project for Cloudflare Workers. This command will prompt you to confirm the detected settings before applying changes. Confirm that you would like to proceed, and your project will be configured and deployed.

To configure your project without deploying, use [wrangler setup](#setup) instead.

```

wrangler deploy [<PATH>] [OPTIONS]

```

Note

None of the options for this command are required. Also, many can be set in your Wrangler file. Refer to the [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/) documentation for more information.

* `PATH` ` string `  
   * A path specifying what needs to be deployed. This can be either:  
         * The path to an entry point for your Worker.  
                  * Only required if your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) does not include a `main` key (for example, `main = "index.js"`).  
         * Or the path to an assets directory for the deployment of a static site.  
                  * Visit [assets](https://developers.cloudflare.com/workers/static-assets/) for more information.  
                  * This overrides the eventual `assets` configuration in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).  
                  * This is equivalent to the `--assets` option listed below.  
                   * Note: this option currently works only in interactive mode (so not in CI systems).
* `--name` ` string ` optional  
   * Name of the Worker.
* `--no-bundle` ` boolean ` (default: false) optional  
   * Skip Wrangler's build steps. Particularly useful when using custom builds. Refer to [Bundling ↗](https://developers.cloudflare.com/workers/wrangler/bundling/) for more information.
* `--env` ` string ` optional  
   * Perform on a specific environment.  
   Note  
   If you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you select the environment at dev or build time via the `CLOUDFLARE_ENV` environment variable rather than the `--env` flag. Otherwise, environments are defined in your Worker config file as usual. For more detail on using environments with the Cloudflare Vite plugin, refer to the [plugin documentation](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/).
* `--outdir` ` string ` optional  
   * Path to directory where Wrangler will write the bundled Worker files.
* `--compatibility-date` ` string ` optional  
   * A date in the form yyyy-mm-dd, which will be used to determine which version of the Workers runtime is used.
* `--compatibility-flags`, `--compatibility-flag` ` string[] ` optional  
   * Flags to use for compatibility checks.
* `--latest` ` boolean ` (default: true) optional  
   * Use the latest version of the Workers runtime.
* `--assets` ` string ` optional beta  
   * Folder of static assets to be served. Replaces [Workers Sites](https://developers.cloudflare.com/workers/configuration/sites/). Visit [assets](https://developers.cloudflare.com/workers/static-assets/) for more information.
* `--site` ` string ` optional deprecated, use `--assets`  
   * Folder of static assets for Workers Sites.  
   Warning  
   Workers Sites is deprecated. Please use [Workers Assets](https://developers.cloudflare.com/workers/static-assets/) or [Pages](https://developers.cloudflare.com/pages/).
* `--site-include` ` string[] ` optional deprecated  
   * Array of `.gitignore`\-style patterns that match file or directory names from the sites directory. Only matched items will be uploaded.
* `--site-exclude` ` string[] ` optional deprecated  
   * Array of `.gitignore`\-style patterns that match file or directory names from the sites directory. Matched items will not be uploaded.
* `--var` ` key:value[] ` optional  
   * Array of `key:value` pairs to inject as variables into your code. The value will always be passed as a string to your Worker.  
   * For example, `--var git_hash:$(git rev-parse HEAD) test:123` makes the `git_hash` and `test` variables available in your Worker's `env`.  
   * This flag is an alternative to defining [vars](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). If defined in both places, this flag's values will be used.
* `--define` ` key:value[] ` optional  
   * Array of `key:value` pairs to replace global identifiers in your code.  
   * For example, `--define GIT_HASH:$(git rev-parse HEAD)` will replace all uses of `GIT_HASH` with the actual value at build time.  
   * This flag is an alternative to defining [define](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). If defined in both places, this flag's values will be used.
* `--triggers`, `--schedule`, `--schedules` ` string[] ` optional  
   * Cron schedules to attach to the deployed Worker. Refer to [Cron Trigger Examples](https://developers.cloudflare.com/workers/configuration/cron-triggers/#examples).
* `--routes`, `--route` ` string[] ` optional  
   * Routes where this Worker will be deployed.  
   * For example: `--route example.com/*`.
* `--tsconfig` ` string ` optional  
   * Path to a custom `tsconfig.json` file.
* `--minify` ` boolean ` optional  
   * Minify the bundled Worker before deploying.
* `--dry-run` ` boolean ` (default: false) optional  
   * Compile a project without actually deploying to live servers. Combined with `--outdir`, this is also useful for testing the output of `npx wrangler deploy`. It also gives developers a chance to upload the generated sourcemap to a service like Sentry, so that errors from the Worker can be mapped against source code before the service goes live.
* `--keep-vars` ` boolean ` (default: false) optional  
   * It is recommended best practice to treat your Wrangler developer environment as a source of truth for your Worker configuration, and avoid making changes via the Cloudflare dashboard.  
   * If you change your environment variables in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behaviour set `keep-vars` to `true`.  
   * Secrets are never deleted by a deployment whether this flag is true or false.
* `--dispatch-namespace` ` string ` optional  
   * Specify the [Workers for Platforms dispatch namespace](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dispatch-namespace) to upload this Worker to.
* `--metafile` ` string ` optional  
   * Specify a file to write the build metadata from esbuild to. If flag is used without a path string, this defaults to `bundle-meta.json` inside the directory specified by `--outdir`. This can be useful for understanding the bundle size.
* `--containers-rollout` ` immediate | gradual ` optional  
   * Specify the [rollout strategy](https://developers.cloudflare.com/containers/faq#how-do-container-updates-and-rollouts-work) for [Containers](https://developers.cloudflare.com/containers) associated with the Worker. If set to `immediate`, 100% of container instances will be updated in one rollout step, overriding any configuration in `rollout_step_percentage`. Note that `rollout_active_grace_period`, if configured, still applies.  
   * Defaults to `gradual`, where the default rollout is 10% then 100% of instances.
* `--strict` ` boolean ` (default: false) optional  
   * Turns on strict mode for the deployment command, meaning that the command will be more defensive and prevent deployments which could introduce potential issues. In particular, this mode prevents deployments if the deployment would potentially override remote settings in non-interactive environments.
* `--tag` ` string ` optional  
   * A tag for this Worker version. Matches the behavior of `wrangler versions upload --tag`.
* `--message` ` string ` optional  
   * A descriptive message for this Worker version and deployment. Matches the behavior of `wrangler versions upload --message`. The message is also applied to the deployment.
* `--yes` ` boolean ` (default: false) optional  
   * Skip confirmation prompts and run [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/) non-interactively using detected settings. Only applicable when no Wrangler configuration file exists in your project.
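
For example, a sketch of a dry run that bundles the Worker into `dist/` and writes esbuild metadata without deploying (the output directory name is illustrative):

```

# Bundle only; nothing is uploaded. Metadata defaults to dist/bundle-meta.json
wrangler deploy --dry-run --outdir dist --metafile

```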

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

## `delete`

Delete your Worker and all associated Cloudflare developer platform resources.

```

wrangler delete [<SCRIPT>] [OPTIONS]

```

* `SCRIPT` ` string `  
   * The path to an entry point for your Worker. Only required if your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) does not include a `main` key (for example, `main = "index.js"`).
* `--name` ` string ` optional  
   * Name of the Worker.
* `--env` ` string ` optional  
   * Perform on a specific environment.
* `--dry-run` ` boolean ` (default: false) optional  
   * Do not actually delete the Worker. This is useful for testing the output of `wrangler delete`.
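
For example, to preview what deleting a Worker would do without actually removing it (the Worker name is illustrative):

```

# Dry run: reports what would be deleted without deleting anything
wrangler delete --name my-worker --dry-run

```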

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

## `setup`

🪄 Set up a project to work on Cloudflare

```

npx wrangler setup

```

* `--yes` ` boolean ` alias: --y default: false  
Answer "yes" to any prompts for configuring your project
* `--build` ` boolean ` default: false  
Run your project's build command once it has been configured
* `--dry-run` ` boolean `  
Runs the command without applying any filesystem modifications

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

This command configures your project for Cloudflare Workers without deploying. It performs the same [automatic project configuration](https://developers.cloudflare.com/workers/framework-guides/automatic-configuration/) as `wrangler deploy`, but does not deploy. This is useful when you want to review the generated configuration before deploying.
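
For example, to preview the configuration `wrangler setup` would generate without writing any files:

```

# Dry run: show the detected configuration without modifying the filesystem
npx wrangler setup --dry-run

```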

---

## `secret`

Manage the secret variables for a Worker.

This action creates a new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of the Worker and [deploys](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments) it immediately. To only create a new version of the Worker, use the [wrangler versions secret](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-secret-put) commands.

### `secret put`

Create or update a secret for a Worker

```

npx wrangler secret put [KEY]

```

* `[KEY]` ` string ` required  
The variable name to be accessible in the Worker
* `--name` ` string `  
Name of the Worker. If this is not specified, it will default to the name specified in your Wrangler config file.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

When running this command, you will be prompted to input the secret's value:

```

npx wrangler secret put FOO

```

```

? Enter a secret value: > ***
🌀 Creating the secret for script worker-app
✨ Success! Uploaded secret FOO

```

The `put` command can also receive piped input. For example:

```

echo "-----BEGIN PRIVATE KEY-----\nM...==\n-----END PRIVATE KEY-----\n" | wrangler secret put PRIVATE_KEY

```

### `secret delete`

Delete a secret from a Worker

```

npx wrangler secret delete [KEY]

```

* `[KEY]` ` string ` required  
The variable name to be accessible in the Worker
* `--name` ` string `  
Name of the Worker. If this is not specified, it will default to the name specified in your Wrangler config file.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `secret list`

List all secrets for a Worker

```

npx wrangler secret list

```

* `--name` ` string `  
Name of the Worker. If this is not specified, it will default to the name specified in your Wrangler config file.
* `--format` ` "json" | "pretty" ` default: json  
The format to print the secrets in

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of listing the secrets for the current Worker.

```

npx wrangler secret list

```

```

[
  {
    "name": "FOO",
    "type": "secret_text"
  }
]

```
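
If you have `jq` installed, the JSON output can be piped through it to extract just the secret names (this pipeline is illustrative):

```

# Print one secret name per line from the JSON listing
npx wrangler secret list --format json | jq -r '.[].name'

```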

---

### `secret bulk`

Upload multiple secrets for a Worker at once

```

npx wrangler secret bulk [FILE]

```

* `[FILE]` ` string `  
The file of key-value pairs to upload, as JSON in the form `{"key": "value", ...}` or as a `.env` file in the form `KEY=VALUE`. If omitted, Wrangler expects to receive input from stdin rather than a file.
* `--name` ` string `  
Name of the Worker. If this is not specified, it will default to the name specified in your Wrangler config file.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of uploading secrets from a JSON file redirected to `stdin`. When complete, the output summary will show the number of secrets uploaded and the number of secrets that failed to upload.

```
{
  "secret-name-1": "secret-value-1",
  "secret-name-2": "secret-value-2"
}
```

```
npx wrangler secret bulk < secrets.json
```

```
🌀 Creating the secrets for the Worker "script-name"
✨ Successfully created secret for key: secret-name-1
...
🚨 Error uploading secret for key: secret-name-1
✨ Successfully created secret for key: secret-name-2

Finished processing secrets JSON file:
✨ 1 secrets successfully uploaded
🚨 1 secrets failed to upload
```
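The same upload works with a `.env`-format file. A minimal sketch, using hypothetical secret names (the `wrangler` call itself requires an authenticated session, so it is shown commented out):

```shell
# Write two hypothetical secrets in .env format (KEY=VALUE, one per line)
printf 'API_KEY=example-key\nDB_PASSWORD=example-pass\n' > secrets.env

# Upload them all at once (requires an authenticated Wrangler session):
# npx wrangler secret bulk secrets.env

# Inspect the file that would be uploaded
cat secrets.env
```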

---

## `tail`

🦚 Start a log tailing session for a Worker

```
npx wrangler tail [WORKER]
```

* `[WORKER]` ` string `  
Name or route of the worker to tail
* `--format` ` "json" | "pretty" `  
The format of log entries
* `--status` ` "ok" | "error" | "canceled" `  
Filter by invocation status
* `--header` ` string `  
Filter by HTTP header
* `--method` ` string `  
Filter by HTTP method
* `--sampling-rate` ` number `  
Adds a percentage of requests to log sampling rate
* `--search` ` string `  
Filter by a text match in console.log messages
* `--ip` ` string `  
Filter by the IP address the request originates from. Use "self" to filter for your own IP
* `--version-id` ` string `  
Filter by Worker version

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

After starting `wrangler tail`, you will receive a live feed of console and exception logs for each request your Worker receives.

If your Worker has a high volume of traffic, the tail might enter sampling mode. This will cause some of your messages to be dropped and a warning to appear in your tail logs. To prevent messages from being dropped, add the options listed above to filter the volume of tail messages.

Note

It may take up to 1 minute (60 seconds) for a tail to exit sampling mode after adding an option to filter tail messages.

If sampling persists after using options to filter messages, consider using [instant logs ↗](https://developers.cloudflare.com/logs/instant-logs/).
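For example, to tail only failed `POST` requests as JSON (`my-worker` is a hypothetical Worker name; combine any of the filter flags above as needed):

```
npx wrangler tail my-worker --status error --method POST --format json
```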

---

## `login`

Authorize Wrangler with your Cloudflare account using OAuth. Wrangler will attempt to automatically open your web browser so you can log in with your Cloudflare account.

If you prefer to use API tokens for authentication, such as in headless or continuous integration environments, refer to [Running Wrangler in CI/CD](https://developers.cloudflare.com/workers/ci-cd/).

```
wrangler login [OPTIONS]
```

* `--scopes-list` ` string ` optional  
   * List all the available OAuth scopes with descriptions.
* `--scopes` ` string ` optional  
   * Allows you to choose your set of OAuth scopes. The set of scopes must be entered in a whitespace-separated list, for example, `npx wrangler login --scopes account:read user:read`.
* `--callback-host` ` string ` optional  
   * Defaults to `localhost`. Sets the IP or hostname where Wrangler should listen for the OAuth callback.
* `--callback-port` ` string ` optional  
   * Defaults to `8976`. Sets the port where Wrangler should listen for the OAuth callback.

Note

`wrangler login` uses all the available scopes by default if no flags are provided.

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

If Wrangler fails to open a browser, you can copy and paste the URL generated by `wrangler login` in your terminal into a browser and log in.

### Use `wrangler login` on a remote machine

If you are using Wrangler from a remote machine, but run the login flow from your local browser, you will receive the following error message after logging in: `This site can't be reached`.

To finish the login flow, run `wrangler login` and go through the login flow in the browser:

```
npx wrangler login
```

```
 ⛅️ wrangler 2.1.6
-------------------
Attempting to login via OAuth...
Opening a link in your default browser: https://dash.cloudflare.com/oauth2/auth?xyz...
```

The browser login flow will redirect you to a `localhost` URL on your machine.

Leave the login flow active and open a second terminal session. In that session, use `curl` or an equivalent HTTP client on the remote machine to fetch the `localhost` URL that was generated during the `wrangler login` flow:

```
curl <LOCALHOST_URL>
```

### Use `wrangler login` in a container

The Cloudflare OAuth provider will always redirect to a callback server at `localhost:8976`. If you are running Wrangler inside a container, this server might not be accessible from your host machine's browser - even after authorizing the connection, your login command will hang.

You must configure your container to map port `8976` on your host machine to the Wrangler OAuth callback server's port (`8976` by default).

For example, if you are running Wrangler in a Docker container:

```
docker run -p 8976:8976 <your-image>
```

And when you run `npx wrangler login` inside your container, set the callback host to listen on all network interfaces:

```
npx wrangler login --callback-host=0.0.0.0
```

Now when the browser redirects to `localhost:8976`, the request will be forwarded to Wrangler running inside the container on `0.0.0.0:8976`.

If you need to use a different port inside the container, use `--callback-port` as well and adjust your port mapping accordingly, for example:

```
# When starting your container
docker run -p 8976:9000 <your-image>

# Inside the container
npx wrangler login --callback-host=0.0.0.0 --callback-port=9000
```

---

## `logout`

Remove Wrangler's authorization for accessing your account. This command will invalidate your current OAuth token.

```
wrangler logout
```

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

If you are using `CLOUDFLARE_API_TOKEN` instead of OAuth, you can log out by deleting your API token in the Cloudflare dashboard:

1. In the Cloudflare dashboard, go to the **Account API tokens** page.  
[ Go to **Account API tokens** ](https://dash.cloudflare.com/?to=/:account/api-tokens)
2. Select the three-dot menu on your Wrangler token.
3. Select **Delete**.

---

## `auth`

### `auth token`

Retrieve your current authentication token or credentials for use with other tools and scripts.

```
wrangler auth token [OPTIONS]
```

* `--json` ` boolean ` optional  
   * Return output as JSON with token type information. This also enables retrieving API key/email credentials.

The command returns whichever authentication method is currently configured, in the following order of precedence:

* API token from `CLOUDFLARE_API_TOKEN` environment variable
* API key/email from `CLOUDFLARE_API_KEY` and `CLOUDFLARE_EMAIL` environment variables (requires `--json` flag, since this method uses two values instead of a single token)
* OAuth token from `wrangler login` (automatically refreshed if expired)

When using `--json`, the output includes the token type:

```
// API token
{ "type": "api_token", "token": "..." }

// OAuth token
{ "type": "oauth", "token": "..." }

// API key/email (only available with --json)
{ "type": "api_key", "key": "...", "email": "..." }
```

An error is returned if no authentication method is available, or if API key/email is configured without `--json`.

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.
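As a sketch of how this fits into scripting, the retrieved token can be passed to `curl` to call the Cloudflare API's token verification endpoint directly (this works for API-token and OAuth credentials, not for key/email pairs):

```
TOKEN=$(npx wrangler auth token)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://api.cloudflare.com/client/v4/user/tokens/verify"
```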

---

## `whoami`

🕵️ Retrieve your user information

```
npx wrangler whoami
```

* `--account` ` string `  
Show membership information for the given account (id or name).
* `--json` ` boolean ` default: false  
Return user information as JSON. Exits with a non-zero status if not authenticated.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

---

## `versions`

Note

The minimum required Wrangler version to use these commands is 3.40.0. For versions before 3.73.0, you will need to add the `--x-versions` flag.

### `versions upload`

Upload a new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of your Worker that is not deployed immediately.

```
npx wrangler versions upload [SCRIPT]
```

* `[SCRIPT]` ` string `  
The path to an entry point for your Worker
* `--name` ` string `  
Name of the Worker
* `--tag` ` string `  
A tag for this Worker Gradual Rollouts Version
* `--message` ` string `  
A descriptive message for this Worker Gradual Rollouts Version
* `--preview-alias` ` string `  
Name of an alias for this Worker version
* `--no-bundle` ` boolean ` default: false  
Skip internal build steps and directly upload Worker
* `--outdir` ` string `  
Output directory for the bundled Worker
* `--outfile` ` string `  
Output file for the bundled worker
* `--compatibility-date` ` string `  
Date to use for compatibility checks
* `--compatibility-flags` ` string ` alias: --compatibility-flag  
Flags to use for compatibility checks
* `--latest` ` boolean ` default: false  
Use the latest version of the Worker runtime
* `--assets` ` string `  
Static assets to be served. Replaces Workers Sites.
* `--var` ` string `  
A key-value pair to be injected into the script as a variable
* `--define` ` string `  
A key-value pair to be substituted in the script
* `--alias` ` string `  
A module pair to be substituted in the script
* `--jsx-factory` ` string `  
The function that is called for each JSX element
* `--jsx-fragment` ` string `  
The function that is called for each JSX fragment
* `--tsconfig` ` string `  
Path to a custom tsconfig.json file
* `--minify` ` boolean `  
Minify the Worker
* `--upload-source-maps` ` boolean `  
Include source maps when uploading this Worker Gradual Rollouts Version.
* `--dry-run` ` boolean `  
Compile a project without actually uploading the version.
* `--secrets-file` ` string `  
Path to a file containing secrets to upload with the version (JSON or .env format). Secrets from previous deployments will not be deleted - see `--keep-secrets`

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `versions deploy`

Deploy a previously created [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of your Worker all at once or create a [gradual deployment](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) to incrementally shift traffic to a new version by following an interactive prompt.

```
npx wrangler versions deploy [VERSION-SPECS]
```

* `--name` ` string `  
Name of the worker
* `--version-id` ` string `  
Worker Version ID(s) to deploy
* `--percentage` ` number `  
Percentage of traffic to split between Worker Version(s) (0-100)
* `[VERSION-SPECS]` ` string `  
Shorthand notation to deploy Worker Version(s) `[<version-id>@<percentage>...]`
* `--message` ` string `  
Description of this deployment (optional)
* `--yes` ` boolean ` alias: --y default: false  
Automatically accept defaults to prompts
* `--dry-run` ` boolean ` default: false  
Don't actually deploy

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

Note

The non-interactive version of this prompt is: `wrangler versions deploy version-id-1@percentage-1% version-id-2@percentage-2% -y`

For example: `wrangler versions deploy 095f00a7-23a7-43b7-a227-e4c97cab5f22@10% 1a88955c-2fbd-4a72-9d9b-3ba1e59842f2@90% -y`

### `versions list`

Retrieve details for the 10 most recent versions. Details include `Version ID`, `Created on`, `Author`, `Source`, and optionally, `Tag` or `Message`.

```
npx wrangler versions list
```

* `--name` ` string `  
Name of the Worker
* `--json` ` boolean ` default: false  
Display output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `versions view`

View the details of a specific version of your Worker

```
npx wrangler versions view [VERSION-ID]
```

* `[VERSION-ID]` ` string ` required  
The Worker Version ID to view
* `--name` ` string `  
Name of the worker
* `--json` ` boolean ` default: false  
Display output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `versions secret put`

Create or update a secret variable for a Worker

```
npx wrangler versions secret put [KEY]
```

* `[KEY]` ` string `  
The variable name to be accessible in the Worker
* `--name` ` string `  
Name of the Worker
* `--message` ` string `  
Description of this deployment
* `--tag` ` string `  
A tag for this version

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `versions secret delete`

Delete a secret variable from a Worker

```
npx wrangler versions secret delete [KEY]
```

* `[KEY]` ` string `  
The variable name to be accessible in the Worker
* `--name` ` string `  
Name of the Worker
* `--message` ` string `  
Description of this deployment
* `--tag` ` string `  
A tag for this version

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `versions secret bulk`

Create or update multiple secret variables for a Worker at once

```
npx wrangler versions secret bulk [FILE]
```

* `[FILE]` ` string `  
The file of key-value pairs to upload, as JSON in form {"key": value, ...} or .dev.vars file in the form KEY=VALUE
* `--name` ` string `  
Name of the Worker
* `--message` ` string `  
Description of this deployment
* `--tag` ` string `  
A tag for this version

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

---

## `triggers`

Note

The minimum required Wrangler version to use these commands is 3.40.0. For versions before 3.73.0, you will need to add the `--x-versions` flag.

### `triggers deploy`

  
Experimental 

Apply changes to triggers (Routes or domains and Cron Triggers) when using `wrangler versions upload`

```
npx wrangler triggers deploy
```

* `--name` ` string `  
Name of the worker
* `--triggers` ` string ` aliases: --schedule, --schedules  
Cron schedules to attach
* `--routes` ` string ` alias: --route  
Routes to upload
* `--dry-run` ` boolean `  
Don't actually deploy

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

---

## `deployments`

[Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments) track the version(s) of your Worker that are actively serving traffic.

Note

The minimum required Wrangler version to use these commands is 3.40.0. For versions before 3.73.0, you will need to add the `--x-versions` flag.

### `deployments list`

Displays the 10 most recent deployments of your Worker

```
npx wrangler deployments list
```

* `--name` ` string `  
Name of the Worker
* `--json` ` boolean ` default: false  
Display output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `deployments status`

View the current state of your production deployment

```
npx wrangler deployments status
```

* `--name` ` string `  
Name of the Worker
* `--json` ` boolean ` default: false  
Display output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `rollback`

Warning

A rollback will immediately create a new deployment with the specified version of your Worker, which becomes the active deployment across all your deployed routes and domains. This change will not affect work in your local development environment.

```
wrangler rollback [<VERSION_ID>] [OPTIONS]
```

* `VERSION_ID` ` string ` optional  
   * The ID of the version you wish to roll back to. If not supplied, the `rollback` command defaults to the version uploaded before the latest version.
* `--name` ` string ` optional  
   * Perform on a specific Worker rather than inheriting from the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--message` ` string ` optional  
   * Add message for rollback. Accepts empty string. When specified, interactive prompts for rollback confirmation and message are skipped.

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.
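For example, to roll back non-interactively to a specific version (the version ID here is hypothetical; passing `--message` skips the confirmation prompts):

```
npx wrangler rollback 095f00a7-23a7-43b7-a227-e4c97cab5f22 --message "Revert faulty deploy"
```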

---

## `types`

Generate types based on your Worker configuration, including `Env` types based on your bindings, module rules, and [runtime types](https://developers.cloudflare.com/workers/languages/typescript/) based on the `compatibility_date` and `compatibility_flags` in your [config file](https://developers.cloudflare.com/workers/wrangler/configuration/).

```
wrangler types [<PATH>] [OPTIONS]
```

Note

If you are running a version of Wrangler that is greater than `3.66.0` but below `4.0.0`, you will need to include the `--experimental-include-runtime` flag. During its experimental release, runtime types were output to a separate file (`.wrangler/types/runtime.d.ts` by default). If you have an older version of Wrangler, you can access runtime types through the `@cloudflare/workers-types` package.

### Multi-environment support

By default, `wrangler types` generates types for bindings from **all environments** defined in your configuration file. This ensures your generated `Env` type includes all bindings that might be used across different deployment environments (such as staging and production), preventing TypeScript errors when accessing environment-specific bindings.

For example, if you have a KV namespace binding only in production and an R2 bucket binding only in staging, both will be included in the generated types as optional properties.

To generate types for only a specific environment, use the `--env` flag.
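As a sketch of what this looks like in practice (binding names here are hypothetical, and the runtime interfaces are minimal stand-ins for the generated ones), environment-specific bindings come back as optional properties that need narrowing before use:

```typescript
// Minimal stand-ins for the generated Workers runtime types
interface KVNamespace { get(key: string): Promise<string | null>; }
interface R2Bucket { get(key: string): Promise<unknown>; }

// Roughly what `wrangler types` generates when bindings differ per environment
interface Env {
  PROD_KV?: KVNamespace;      // bound only in the production environment
  STAGING_BUCKET?: R2Bucket;  // bound only in the staging environment
}

// Optional bindings must be narrowed before use
function readFlag(env: Env): Promise<string | null> {
  if (env.PROD_KV) {
    return env.PROD_KV.get("feature-flag");
  }
  return Promise.resolve(null);
}
```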

### Options

* `PATH` ` string ` (default: `./worker-configuration.d.ts`)  
   * The path to where types for your Worker will be written.  
   * The path must have a `d.ts` extension.
* `--env` ` string ` optional  
   * Generate types for bindings in a specific environment only, rather than aggregating bindings from all environments.
* `--env-interface` ` string ` (default: `Env`)  
   * The name of the interface to generate for the environment object.  
   * Not valid if the Worker uses the Service Worker syntax.
* `--include-runtime` ` boolean ` (default: true)  
   * Whether to generate runtime types based on the `compatibility_date` and `compatibility_flags` in your [config file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--include-env` ` boolean ` (default: true)  
   * Whether to generate `Env` types based on your Worker bindings.
* `--strict-vars` ` boolean ` optional (default: true)  
   * Control the types that Wrangler generates for `vars` bindings.  
   * If `true` (the default), Wrangler generates literal and union types for bindings (e.g. `myVar: 'my dev variable' | 'my prod variable'`).  
   * If `false`, Wrangler generates generic types (e.g. `myVar: string`). This is useful when variables change frequently, especially when working across multiple environments.
* `--check` ` boolean ` optional  
   * Check if the generated types at the specified path are up-to-date without regenerating them.  
   * Exits with code 0 if types are up-to-date, or code 1 if types are out-of-date.  
   * Useful for CI/CD pipelines and pre-commit hooks to ensure types have been regenerated after configuration changes.
* `--config`, `-c` ` string[] ` optional  
   * Path(s) to [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). If the Worker you are generating types for has service bindings or bindings to Durable Objects, you can also provide the paths to those configuration files so that the generated `Env` type will include RPC types. For example, given a Worker with a service binding, `wrangler types -c wrangler.toml -c ../bound-worker/wrangler.toml` will generate an `Env` type like this:  
```ts
interface Env {
  SERVICE_BINDING: Service<import("../bound-worker/src/index").Entrypoint>;
}
```
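
As a sketch of how `--check` fits into a CI gate (a hypothetical helper, not part of Wrangler; `WRANGLER` defaults to `npx wrangler` and is overridable):

```shell
# Returns non-zero when worker-configuration.d.ts is out of date,
# mirroring the exit codes of `wrangler types --check`.
types_fresh() {
  ${WRANGLER:-npx wrangler} types --check
}

# In CI or a pre-commit hook you might run:
#   types_fresh || { echo "Stale types: run 'npx wrangler types' and commit."; exit 1; }
```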

---

## `telemetry`

Cloudflare collects anonymous usage data to improve Wrangler. You can learn more about this in our [data policy ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler/telemetry.md).

You can manage sharing of usage data at any time using these commands.

### `disable`

Disable telemetry collection for Wrangler.

```
wrangler telemetry disable
```

### `enable`

Enable telemetry collection for Wrangler.

```
wrangler telemetry enable
```

### `status`

Check whether telemetry collection is currently enabled. The result is specific to the directory in which you run the command.

This will resolve the global status set by `wrangler telemetry disable / enable`, the environment variable [WRANGLER\_SEND\_METRICS](https://developers.cloudflare.com/workers/wrangler/system-environment-variables/#supported-environment-variables), and the [send\_metrics](https://developers.cloudflare.com/workers/wrangler/configuration/#top-level-only-keys) key in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

```
wrangler telemetry status
```

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

## `check`

### `startup`

Generate a CPU profile of your Worker's startup phase.

After you run `wrangler check startup`, you can import the profile into Chrome DevTools or open it directly in VS Code to view a flamegraph of your Worker's startup phase. Additionally, when a Worker deployment fails with a startup time error, Wrangler automatically generates a CPU profile for easy investigation.

Note

This command measures the performance of your Worker locally, on your own machine, which has a different CPU than the machines that run your Worker on Cloudflare's network. This means results can vary widely.

Use the CPU profile that `wrangler check startup` generates to understand where time is spent at startup, but do not expect the overall startup time in the profile to match exactly what your Worker's startup time will be when deployed to Cloudflare.

```
wrangler check startup
```

* `--args` ` string ` optional  
   * To customise the way `wrangler check startup` builds your Worker for analysis, provide the exact arguments you use when deploying your Worker with `wrangler deploy`, or your Pages project with `wrangler pages functions build`. For instance, if you deploy your Worker with `wrangler deploy --no-bundle`, you should use `wrangler check startup --args="--no-bundle"` to profile the startup phase.
* `--worker` ` string ` optional  
   * If you don't use Wrangler to deploy your Worker, you can use this argument to provide a Worker bundle to analyse. This should be a file path to a serialized multipart upload, with the exact same format as [the API expects](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/).
* `--pages` ` boolean ` optional  
   * If you don't use a Wrangler config file with your Pages project (i.e. a Wrangler config file containing `pages_build_output_dir`), use this flag to force `wrangler check startup` to treat your project as a Pages project.
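
For instance, a thin wrapper that forwards your deploy arguments to the profiler (a sketch; the `--no-bundle` flag in the usage note is just an illustrative deploy argument):

```shell
# Profile startup using the same flags you pass to `wrangler deploy`.
profile_startup() {
  ${WRANGLER:-npx wrangler} check startup --args="$*"
}

# Usage: profile_startup --no-bundle
```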

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

## `complete`

Generate shell completion scripts for Wrangler commands. Shell completions allow you to autocomplete commands, subcommands, and flags by pressing Tab as you type.

```
wrangler complete <SHELL>
```

* `SHELL` ` string ` required  
   * The shell to generate completions for. Supported values: `bash`, `zsh`, `fish`, `powershell`.

### Setup

Generate and add the completion script to your shell configuration file:

**Bash**

```
wrangler complete bash >> ~/.bashrc
```

Then restart your terminal or run `source ~/.bashrc`.

**Zsh**

```
wrangler complete zsh >> ~/.zshrc
```

Then restart your terminal or run `source ~/.zshrc`.

**Fish**

```
wrangler complete fish >> ~/.config/fish/config.fish
```

Then restart your terminal or run `source ~/.config/fish/config.fish`.

**PowerShell**

```
wrangler complete powershell >> $PROFILE
```

Then restart PowerShell or run `. $PROFILE`.

### Usage

After setup, press Tab to autocomplete commands, subcommands, and flags:

```
wrangler d<TAB>          # completes to 'deploy', 'dev', 'd1', etc.
wrangler kv <TAB>        # shows subcommands: namespace, key, bulk
```

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.


---

---
title: Hyperdrive
description: Wrangler commands for managing Hyperdrive database configurations.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Hyperdrive

Manage [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) database configurations using Wrangler.

To manage mTLS client certificates and CA chain certificates used by Hyperdrive, refer to [Certificate commands](https://developers.cloudflare.com/workers/wrangler/commands/certificates/).

## `hyperdrive create`

Create a Hyperdrive config

```
npx wrangler hyperdrive create [NAME]
# or: pnpm wrangler hyperdrive create [NAME]
# or: yarn wrangler hyperdrive create [NAME]
```

* `[NAME]` ` string ` required  
The name of the Hyperdrive config
* `--connection-string` ` string `  
The connection string for the database you want Hyperdrive to connect to, for example `protocol://user:password@host:port/database`
* `--service-id` ` string `  
The Workers VPC Service ID of the origin database
* `--origin-host` ` string ` alias: --host  
The host of the origin database
* `--origin-port` ` number ` alias: --port  
The port number of the origin database
* `--origin-scheme` ` string ` alias: --scheme default: postgresql  
The scheme used to connect to the origin database
* `--database` ` string `  
The name of the database within the origin database
* `--origin-user` ` string ` alias: --user  
The username used to connect to the origin database
* `--origin-password` ` string ` alias: --password  
The password used to connect to the origin database
* `--access-client-id` ` string `  
The Client ID of the Access token to use when connecting to the origin database
* `--access-client-secret` ` string `  
The Client Secret of the Access token to use when connecting to the origin database
* `--caching-disabled` ` boolean `  
Disables the caching of SQL responses
* `--max-age` ` number `  
Specifies max duration for which items should persist in the cache, cannot be set when caching is disabled
* `--swr` ` number `  
Indicates the number of seconds cache may serve the response after it becomes stale, cannot be set when caching is disabled
* `--ca-certificate-id` ` string ` alias: --ca-certificate-uuid  
Sets custom CA certificate when connecting to origin database. Must be valid UUID of already uploaded CA certificate.
* `--mtls-certificate-id` ` string ` alias: --mtls-certificate-uuid  
Sets custom mTLS client certificates when connecting to origin database. Must be valid UUID of already uploaded public/private key certificates.
* `--sslmode` ` string `  
Sets the SSL mode for connecting to the database. For PostgreSQL: `require`, `verify-ca`, `verify-full`. For MySQL: `REQUIRED`, `VERIFY_CA`, `VERIFY_IDENTITY`.
* `--origin-connection-limit` ` number `  
The (soft) maximum number of connections that Hyperdrive may establish to the origin database
* `--binding` ` string `  
The binding name of this resource in your Worker
* `--update-config` ` boolean `  
Automatically update your config file with the newly added resource
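
Putting several of these flags together (a sketch; the name, credentials, and binding below are all hypothetical placeholders):

```shell
# Create a config with caching tuned and the binding written to your config file.
create_hyperdrive() {
  ${WRANGLER:-npx wrangler} hyperdrive create my-postgres \
    --connection-string="postgresql://user:password@db.example.com:5432/appdb" \
    --max-age=60 --swr=15 \
    --binding=HYPERDRIVE --update-config
}
```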

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `hyperdrive delete`

Delete a Hyperdrive config

```
npx wrangler hyperdrive delete [ID]
# or: pnpm wrangler hyperdrive delete [ID]
# or: yarn wrangler hyperdrive delete [ID]
```

* `[ID]` ` string ` required  
The ID of the Hyperdrive config

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `hyperdrive get`

Get a Hyperdrive config

```
npx wrangler hyperdrive get [ID]
# or: pnpm wrangler hyperdrive get [ID]
# or: yarn wrangler hyperdrive get [ID]
```

* `[ID]` ` string ` required  
The ID of the Hyperdrive config

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `hyperdrive list`

List Hyperdrive configs

```
npx wrangler hyperdrive list
# or: pnpm wrangler hyperdrive list
# or: yarn wrangler hyperdrive list
```

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `hyperdrive update`

Update a Hyperdrive config

```
npx wrangler hyperdrive update [ID]
# or: pnpm wrangler hyperdrive update [ID]
# or: yarn wrangler hyperdrive update [ID]
```

* `[ID]` ` string ` required  
The ID of the Hyperdrive config
* `--name` ` string `  
Give your config a new name
* `--connection-string` ` string `  
The connection string for the database you want Hyperdrive to connect to, for example `protocol://user:password@host:port/database`
* `--service-id` ` string `  
The Workers VPC Service ID of the origin database
* `--origin-host` ` string ` alias: --host  
The host of the origin database
* `--origin-port` ` number ` alias: --port  
The port number of the origin database
* `--origin-scheme` ` string ` alias: --scheme  
The scheme used to connect to the origin database
* `--database` ` string `  
The name of the database within the origin database
* `--origin-user` ` string ` alias: --user  
The username used to connect to the origin database
* `--origin-password` ` string ` alias: --password  
The password used to connect to the origin database
* `--access-client-id` ` string `  
The Client ID of the Access token to use when connecting to the origin database
* `--access-client-secret` ` string `  
The Client Secret of the Access token to use when connecting to the origin database
* `--caching-disabled` ` boolean `  
Disables the caching of SQL responses
* `--max-age` ` number `  
Specifies max duration for which items should persist in the cache, cannot be set when caching is disabled
* `--swr` ` number `  
Indicates the number of seconds cache may serve the response after it becomes stale, cannot be set when caching is disabled
* `--ca-certificate-id` ` string ` alias: --ca-certificate-uuid  
Sets custom CA certificate when connecting to origin database. Must be valid UUID of already uploaded CA certificate.
* `--mtls-certificate-id` ` string ` alias: --mtls-certificate-uuid  
Sets custom mTLS client certificates when connecting to origin database. Must be valid UUID of already uploaded public/private key certificates.
* `--sslmode` ` string `  
Sets the SSL mode for connecting to the database. For PostgreSQL: `require`, `verify-ca`, `verify-full`. For MySQL: `REQUIRED`, `VERIFY_CA`, `VERIFY_IDENTITY`.
* `--origin-connection-limit` ` number `  
The (soft) maximum number of connections that Hyperdrive may establish to the origin database
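
For example, rotating only the origin password on an existing config (a sketch; the ID and password arguments are placeholders):

```shell
# Update a single field; other settings on the config are left unchanged.
rotate_origin_password() {
  ${WRANGLER:-npx wrangler} hyperdrive update "$1" --origin-password="$2"
}

# Usage: rotate_origin_password <config-id> <new-password>
```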

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: KV
description: Wrangler commands for managing Workers KV namespaces and key-value pairs.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# KV

Manage [Workers KV](https://developers.cloudflare.com/kv/) using Wrangler.

## `kv namespace`

Manage Workers KV namespaces.

Note

The `kv ...` commands allow you to manage your Workers KV resources in the Cloudflare network. Learn more about using Workers KV with Wrangler in the [Workers KV guide](https://developers.cloudflare.com/kv/get-started/).

Warning

Since version 3.60.0, Wrangler supports the `kv ...` syntax. If you are using a version below 3.60.0, the command follows the `kv:...` syntax. Learn more about the deprecation of the `kv:...` syntax on the [Wrangler commands for KV](https://developers.cloudflare.com/kv/reference/kv-commands/#deprecations) page.

### `kv namespace create`

Create a new namespace

```
npx wrangler kv namespace create [NAMESPACE]
# or: pnpm wrangler kv namespace create [NAMESPACE]
# or: yarn wrangler kv namespace create [NAMESPACE]
```

* `[NAMESPACE]` ` string ` required  
The name of the new namespace
* `--preview` ` boolean `  
Interact with a preview namespace
* `--use-remote` ` boolean `  
Use a remote binding when adding the newly created resource to your config
* `--update-config` ` boolean `  
Automatically update your config file with the newly added resource
* `--binding` ` string `  
The binding name of this resource in your Worker
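
As a sketch of combining these flags (`MY_KV` is a placeholder binding name):

```shell
# Create a namespace and write the binding into the Wrangler config in one step.
create_kv_namespace() {
  ${WRANGLER:-npx wrangler} kv namespace create "$1" \
    --binding=MY_KV --update-config
}

# Usage: create_kv_namespace my-app-cache
```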

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv namespace list`

Output a list of all KV namespaces associated with your account ID

```
npx wrangler kv namespace list
# or: pnpm wrangler kv namespace list
# or: yarn wrangler kv namespace list
```

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv namespace delete`

Delete a given namespace.

```
npx wrangler kv namespace delete [NAMESPACE]
# or: pnpm wrangler kv namespace delete [NAMESPACE]
# or: yarn wrangler kv namespace delete [NAMESPACE]
```

* `[NAMESPACE]` ` string `  
The name of the namespace to delete
* `--binding` ` string `  
The binding name to the namespace to delete from
* `--namespace-id` ` string `  
The id of the namespace to delete
* `--preview` ` boolean `  
Interact with a preview namespace
* `--skip-confirmation` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv namespace rename`

Rename a KV namespace

```
npx wrangler kv namespace rename [OLD-NAME]
# or: pnpm wrangler kv namespace rename [OLD-NAME]
# or: yarn wrangler kv namespace rename [OLD-NAME]
```

* `[OLD-NAME]` ` string `  
The current name of the namespace to rename
* `--namespace-id` ` string `  
The id of the namespace to rename
* `--new-name` ` string ` required  
The new name for the namespace

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `kv key`

Manage key-value pairs within a Workers KV namespace.

Note

The `kv ...` commands allow you to manage your Workers KV resources in the Cloudflare network. Learn more about using Workers KV with Wrangler in the [Workers KV guide](https://developers.cloudflare.com/kv/get-started/).

Warning

Since version 3.60.0, Wrangler supports the `kv ...` syntax. If you are using a version below 3.60.0, the command follows the `kv:...` syntax. Learn more about the deprecation of the `kv:...` syntax on the [Wrangler commands for KV](https://developers.cloudflare.com/kv/reference/kv-commands/) page.

### `kv key put`

Write a single key/value pair to the given namespace

```
npx wrangler kv key put [KEY] [VALUE]
# or: pnpm wrangler kv key put [KEY] [VALUE]
# or: yarn wrangler kv key put [KEY] [VALUE]
```

* `[KEY]` ` string ` required  
The key to write to
* `[VALUE]` ` string `  
The value to write
* `--path` ` string `  
Read value from the file at a given path
* `--binding` ` string `  
The binding name to the namespace to write to
* `--namespace-id` ` string `  
The id of the namespace to write to
* `--preview` ` boolean `  
Interact with a preview namespace
* `--ttl` ` number `  
Time, in seconds, for which the entries should be visible
* `--expiration` ` number `  
Time, in seconds since the UNIX epoch, after which the entry expires
* `--metadata` ` string `  
Arbitrary JSON that is associated with a key
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence
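
For example, writing a key with an expiry and attached metadata (a sketch; `MY_KV` and the metadata payload are placeholders):

```shell
# Write a value that expires after one hour, tagging it with JSON metadata.
put_with_metadata() {
  ${WRANGLER:-npx wrangler} kv key put "$1" "$2" \
    --binding=MY_KV --ttl=3600 \
    --metadata='{"source":"example"}'
}

# Usage: put_with_metadata greeting "hello world"
```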

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv key list`

Output a list of all keys in a given namespace

```
npx wrangler kv key list
# or: pnpm wrangler kv key list
# or: yarn wrangler kv key list
```

* `--binding` ` string `  
The binding name to the namespace to list
* `--namespace-id` ` string `  
The id of the namespace to list
* `--preview` ` boolean ` default: false  
Interact with a preview namespace
* `--prefix` ` string `  
A prefix to filter listed keys
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv key get`

Read a single value by key from the given namespace

```
npx wrangler kv key get [KEY]
# or: pnpm wrangler kv key get [KEY]
# or: yarn wrangler kv key get [KEY]
```

* `[KEY]` ` string ` required  
The key to get.
* `--text` ` boolean ` default: false  
Decode the returned value as a utf8 string
* `--binding` ` string `  
The binding name to the namespace to get from
* `--namespace-id` ` string `  
The id of the namespace to get from
* `--preview` ` boolean ` default: false  
Interact with a preview namespace
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv key delete`

Remove a single key value pair from the given namespace

```
npx wrangler kv key delete [KEY]
# or: pnpm wrangler kv key delete [KEY]
# or: yarn wrangler kv key delete [KEY]
```

* `[KEY]` ` string ` required  
The key value to delete.
* `--binding` ` string `  
The binding name to the namespace to delete from
* `--namespace-id` ` string `  
The id of the namespace to delete from
* `--preview` ` boolean `  
Interact with a preview namespace
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `kv bulk`

Manage multiple key-value pairs within a Workers KV namespace in batches.

Note

The `kv ...` commands allow you to manage your Workers KV resources in the Cloudflare network. Learn more about using Workers KV with Wrangler in the [Workers KV guide](https://developers.cloudflare.com/kv/get-started/).

Warning

Since version 3.60.0, Wrangler supports the `kv ...` syntax. If you are using a version below 3.60.0, the commands follow the `kv:...` syntax. Learn more about the deprecation of the `kv:...` syntax on the [Wrangler commands for KV](https://developers.cloudflare.com/kv/reference/kv-commands/) page.

### `kv bulk get`

Gets multiple key-value pairs from a namespace

```

npx wrangler kv bulk get [FILENAME]

```

* `[FILENAME]` ` string ` required  
The file containing the keys to get
* `--binding` ` string `  
The binding name to the namespace to get from
* `--namespace-id` ` string `  
The id of the namespace to get from
* `--preview` ` boolean ` default: false  
Interact with a preview namespace
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence
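
A sketch of the input file, assuming it takes a JSON array of key names (the file name, keys, and the `MY_KV` binding are hypothetical placeholders):

```shell
# Write a hypothetical input file listing the keys to fetch.
cat > get-keys.json <<'EOF'
["user:1001", "user:1002"]
EOF
# With a namespace bound as MY_KV (placeholder), the read would then be:
#   npx wrangler kv bulk get get-keys.json --binding=MY_KV --remote
cat get-keys.json
```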

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv bulk put`

Upload multiple key-value pairs to a namespace

```

npx wrangler kv bulk put [FILENAME]

```

* `[FILENAME]` ` string ` required  
The file containing the key/value pairs to write
* `--binding` ` string `  
The binding name to the namespace to write to
* `--namespace-id` ` string `  
The id of the namespace to write to
* `--preview` ` boolean `  
Interact with a preview namespace
* `--ttl` ` number `  
Time, in seconds, for which the entries should be visible before they expire
* `--expiration` ` number `  
Time, in seconds since the UNIX epoch, after which the entry expires
* `--metadata` ` string `  
Arbitrary JSON that is associated with a key
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence
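
The upload file is a JSON array of entries. A minimal sketch follows; the file name, keys, and the `MY_KV` binding are placeholders, and `expiration_ttl` (seconds) and `metadata` are optional per entry:

```shell
# Write a hypothetical bulk-upload file: an array of {"key", "value"} objects.
cat > bulk-upload.json <<'EOF'
[
  { "key": "user:1001", "value": "alice" },
  { "key": "session:abc", "value": "token", "expiration_ttl": 3600 }
]
EOF
# The upload itself requires an authenticated session:
#   npx wrangler kv bulk put bulk-upload.json --binding=MY_KV --remote
cat bulk-upload.json
```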

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `kv bulk delete`

Delete multiple key-value pairs from a namespace

```

npx wrangler kv bulk delete [FILENAME]

```

* `[FILENAME]` ` string ` required  
The file containing the keys to delete
* `--force` ` boolean ` alias: --f  
Do not ask for confirmation before deleting
* `--binding` ` string `  
The binding name to the namespace to delete from
* `--namespace-id` ` string `  
The id of the namespace to delete from
* `--preview` ` boolean `  
Interact with a preview namespace
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence
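
The delete file is simply a JSON array of key names; a sketch (the file name, key names, and `MY_KV` binding are placeholders):

```shell
# Write a hypothetical file listing the keys to remove.
cat > delete-keys.json <<'EOF'
["session:abc", "session:def"]
EOF
# Deleting then needs a target namespace (MY_KV is a placeholder binding):
#   npx wrangler kv bulk delete delete-keys.json --binding=MY_KV --remote --force
cat delete-keys.json
```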

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: Pages
description: Wrangler commands for configuring Cloudflare Pages.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Pages

Configure [Cloudflare Pages](https://developers.cloudflare.com/pages/) using Wrangler.

## `pages dev`

Develop your full-stack Pages application locally

```

npx wrangler pages dev [DIRECTORY] [COMMAND]

```

* `[DIRECTORY]` ` string `  
The directory of static assets to serve
* `[COMMAND]` ` string `  
The proxy command to run (deprecated)
* `--compatibility-date` ` string `  
Date to use for compatibility checks
* `--compatibility-flags` ` string ` alias: --compatibility-flag  
Flags to use for compatibility checks
* `--ip` ` string `  
The IP address to listen on
* `--port` ` number `  
The port to listen on (serve from)
* `--inspector-port` ` number `  
Port for devtools to connect to
* `--proxy` ` number `  
The port to proxy (where the static assets are served)
* `--script-path` ` string `  
The location of the single Worker script if not using functions (default: `_worker.js`)
* `--no-bundle` ` boolean `  
Whether to run bundling on `_worker.js`
* `--binding` ` array ` alias: --b  
Bind variable/secret (KEY=VALUE)
* `--kv` ` array ` alias: --k  
KV namespace to bind (--kv KV_BINDING)
* `--d1` ` array `  
D1 database to bind (--d1 D1_BINDING)
* `--do` ` array ` alias: --o  
Durable Object to bind (--do DO_BINDING=CLASS_NAME@SCRIPT_NAME)
* `--r2` ` array `  
R2 bucket to bind (--r2 R2_BINDING)
* `--ai` ` string `  
AI to bind (--ai AI_BINDING)
* `--version-metadata` ` string `  
Worker Version metadata (--version-metadata VERSION_METADATA_BINDING)
* `--service` ` array `  
Service to bind (--service SERVICE=SCRIPT_NAME)
* `--live-reload` ` boolean ` default: false  
Auto reload HTML pages when change is detected
* `--local-protocol` ` "http" | "https" `  
Protocol to listen to requests on, defaults to http.
* `--https-key-path` ` string `  
Path to a custom certificate key
* `--https-cert-path` ` string `  
Path to a custom certificate
* `--persist-to` ` string `  
Specify directory to use for local persistence (defaults to .wrangler/state)
* `--log-level` ` "debug" | "info" | "log" | "warn" | "error" | "none" `  
Specify logging level
* `--show-interactive-dev-session` ` boolean `  
Show interactive dev session (defaults to true if the terminal supports interactivity)
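
Putting a few of these flags together, a hedged sketch of a local dev invocation. The directory `./dist`, the binding names, and the port are illustrative placeholders, and the command is built as a string rather than executed, because `pages dev` starts a long-running server:

```shell
# Inert sketch: ./dist, KV_BINDING, DB_BINDING, and the port are placeholders.
DEV_CMD='npx wrangler pages dev ./dist --port=8788 --kv KV_BINDING --d1 DB_BINDING --live-reload'
echo "$DEV_CMD"
```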

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages functions build`

Compile a folder of Pages Functions into a single Worker

```

npx wrangler pages functions build [DIRECTORY]

```

* `[DIRECTORY]` ` string ` default: functions  
The directory of Pages Functions
* `--outfile` ` string `  
The location of the output Worker script
* `--outdir` ` string `  
Output directory for the bundled Worker
* `--output-config-path` ` string `  
The location for the output config file
* `--build-metadata-path` ` string `  
The location for the build metadata file
* `--project-directory` ` string `  
The location of the Pages project
* `--output-routes-path` ` string `  
The location for the output `_routes.json` file
* `--minify` ` boolean ` default: false  
Minify the output Worker script
* `--sourcemap` ` boolean ` default: false  
Generate a sourcemap for the output Worker script
* `--fallback-service` ` string ` default: ASSETS  
The service to fallback to at the end of the `next` chain. Setting to '' will fallback to the global `fetch`.
* `--watch` ` boolean ` default: false  
Watch for changes to the functions and automatically rebuild the Worker script
* `--plugin` ` boolean ` default: false  
Build a plugin rather than a Worker script
* `--build-output-directory` ` string `  
The directory to output static assets to
* `--compatibility-date` ` string `  
Date to use for compatibility checks
* `--compatibility-flags` ` string ` alias: --compatibility-flag  
Flags to use for compatibility checks
* `--external` ` string `  
A list of module imports to exclude from bundling
* `--metafile` ` string `  
Path to output build metadata from esbuild. If flag is used without a path, defaults to 'bundle-meta.json' inside the directory specified by --outdir.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages project list`

List your Cloudflare Pages projects

```

npx wrangler pages project list

```

* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages project create`

Create a new Cloudflare Pages project

```

npx wrangler pages project create [PROJECT-NAME]

```

* `[PROJECT-NAME]` ` string ` required  
The name of your Pages project
* `--production-branch` ` string `  
The name of the production branch of your project
* `--compatibility-flags` ` string ` alias: --compatibility-flag  
Flags to use for compatibility checks
* `--compatibility-date` ` string `  
Date to use for compatibility checks

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages project delete`

Delete a Cloudflare Pages project

```

npx wrangler pages project delete [PROJECT-NAME]

```

* `[PROJECT-NAME]` ` string ` required  
The name of your Pages project
* `--yes` ` boolean ` alias: --y  
Answer "yes" to confirm project deletion

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages deployment list`

List deployments in your Cloudflare Pages project

```

npx wrangler pages deployment list

```

* `--project-name` ` string `  
The name of the project you would like to list deployments for
* `--environment` ` string `  
Environment type to list deployments for
* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages deployment tail`

Start a tailing session for a project's deployment and livestream logs from your Functions

```

npx wrangler pages deployment tail [DEPLOYMENT]

```

* `[DEPLOYMENT]` ` string `  
(Optional) ID or URL of the deployment to tail. Specify by environment if deployment ID is unknown.
* `--project-name` ` string `  
The name of the project you would like to tail
* `--environment` ` string ` default: production  
When a specific deployment ID is not provided, the specified environment selects the latest production or preview deployment
* `--format` ` string `  
The format of log entries
* `--status` ` "ok" | "error" | "canceled" `  
Filter by invocation status
* `--header` ` string `  
Filter by HTTP header
* `--method` ` string `  
Filter by HTTP method
* `--search` ` string `  
Filter by a text match in console.log messages
* `--sampling-rate` ` number `  
The sampling rate to apply, as the proportion of requests to include in the log output
* `--ip` ` string `  
Filter by the IP address the request originates from. Use "self" to filter for your own IP
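
The filter flags compose. As a sketch, tailing only failed POST requests on the preview environment (the project name is a placeholder, and the command is built as a string because tailing is a long-running session):

```shell
# Inert sketch of a filtered tail; my-project is a placeholder name.
TAIL_CMD='npx wrangler pages deployment tail --project-name=my-project --environment=preview --status=error --method=POST'
echo "$TAIL_CMD"
```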

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages deployment delete`

Delete a deployment in your Cloudflare Pages project

```

npx wrangler pages deployment delete [DEPLOYMENT-ID]

```

* `[DEPLOYMENT-ID]` ` string ` required  
The ID of the deployment to delete
* `--project-name` ` string `  
The name of the project the deployment belongs to
* `--force` ` boolean ` alias: --f default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages deploy`

Deploy a directory of static assets as a Pages deployment

```

npx wrangler pages deploy [DIRECTORY]

```

* `[DIRECTORY]` ` string `  
The directory of static files to upload
* `--project-name` ` string `  
The name of the project you want to deploy to
* `--branch` ` string `  
The name of the branch you want to deploy to
* `--commit-hash` ` string `  
The SHA to attach to this deployment
* `--commit-message` ` string `  
The commit message to attach to this deployment
* `--commit-dirty` ` boolean `  
Whether or not the workspace should be considered dirty for this deployment
* `--skip-caching` ` boolean `  
Skip asset caching, which normally speeds up repeated deployments by not re-uploading unchanged files
* `--no-bundle` ` boolean `  
Whether to run bundling on `_worker.js` before deploying
* `--upload-source-maps` ` boolean ` default: false  
Whether to upload any server-side sourcemaps with this deployment
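
A hedged sketch of deploying a built output directory with explicit branch metadata. The directory, project name, branch, and message are placeholders, and the command is built as a string rather than executed, since deploying requires an authenticated session:

```shell
# Inert sketch: ./dist, my-project, and the commit message are placeholders.
DEPLOY_CMD='npx wrangler pages deploy ./dist --project-name=my-project --branch=main --commit-message="release v1.2"'
echo "$DEPLOY_CMD"
```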

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages secret put`

Create or update a secret variable for a Pages project

```

npx wrangler pages secret put [KEY]

```

* `[KEY]` ` string ` required  
The variable name to be accessible in the Pages project
* `--project-name` ` string ` aliases: --project  
The name of your Pages project

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages secret bulk`

Bulk upload secrets for a Pages project

```

npx wrangler pages secret bulk [FILE]

```

* `[FILE]` ` string `  
The file of key-value pairs to upload, either JSON in the form `{"key": "value", ...}` or a `.dev.vars` file in the form `KEY=VALUE`
* `--project-name` ` string ` aliases: --project  
The name of your Pages project
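
Either accepted input shape can be sketched quickly; the file contents, secret names, and project name below are placeholders:

```shell
# JSON form: a flat object of name/value pairs (values are placeholders).
cat > secrets.json <<'EOF'
{ "API_TOKEN": "example-token", "DB_PASSWORD": "example-password" }
EOF
# .dev.vars form: one KEY=VALUE per line.
cat > .dev.vars <<'EOF'
API_TOKEN=example-token
DB_PASSWORD=example-password
EOF
# The upload requires an authenticated session (my-project is a placeholder):
#   npx wrangler pages secret bulk secrets.json --project-name=my-project
cat secrets.json
```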

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages secret delete`

Delete a secret variable from a Pages project

```

npx wrangler pages secret delete [KEY]

```

* `[KEY]` ` string ` required  
The variable name to be accessible in the Pages project
* `--project-name` ` string ` aliases: --project  
The name of your Pages project

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages secret list`

List all secrets for a Pages project

```

npx wrangler pages secret list

```

* `--project-name` ` string ` aliases: --project  
The name of your Pages project

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pages download config`

  
Experimental 

Download your Pages project config as a Wrangler configuration file

```

npx wrangler pages download config [PROJECTNAME]

```

* `[PROJECTNAME]` ` string `  
The Pages project to download
* `--force` ` boolean `  
Overwrite an existing Wrangler configuration file without prompting

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: Pipelines
description: Wrangler commands for managing Cloudflare Pipelines.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Pipelines

Manage your [Pipelines](https://developers.cloudflare.com/pipelines/) using Wrangler.

## `pipelines setup`

Interactive setup for a complete pipeline

```
npx wrangler pipelines setup
```

* `--name` ` string `  
Pipeline name

Global flags

* `--version` ` boolean ` alias: -v  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: -c  
Path to Wrangler configuration file
* `--env` ` string ` alias: -e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to a .env file to load. Can be specified multiple times; values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` alias: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `pipelines create`

Create a new pipeline

```
npx wrangler pipelines create [PIPELINE]
```

* `[PIPELINE]` ` string ` required  
The name of the pipeline to create
* `--sql` ` string `  
Inline SQL query for the pipeline
* `--sql-file` ` string `  
Path to file containing SQL query for the pipeline
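A pipeline is defined by the SQL that moves data from a stream to a sink. As a sketch (the pipeline, stream, and sink names are illustrative, not defaults), you might create a pipeline with an inline query:

```sh
# Create a pipeline whose SQL inserts every event from a stream into a sink.
npx wrangler pipelines create my-pipeline \
  --sql "INSERT INTO my_sink SELECT * FROM my_stream"
```

The same query can be kept in a file and passed with `--sql-file query.sql` instead.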


## `pipelines list`

List all pipelines

```
npx wrangler pipelines list
```

* `--page` ` number ` default: 1  
Page number for pagination
* `--per-page` ` number ` default: 20  
Number of pipelines per page
* `--json` ` boolean ` default: false  
Output in JSON format
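For scripting, the JSON output pairs well with a tool such as `jq` (assuming it is installed; the exact JSON shape is not documented here, so adjust the filter to match the actual output):

```sh
# List up to 50 pipelines and print one name per line.
npx wrangler pipelines list --per-page 50 --json | jq -r '.[].name'
```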


## `pipelines get`

Get details about a specific pipeline

```
npx wrangler pipelines get [PIPELINE]
```

* `[PIPELINE]` ` string ` required  
The ID of the pipeline to retrieve
* `--json` ` boolean ` default: false  
Output in JSON format


## `pipelines update`

Update a pipeline configuration (legacy pipelines only)

```
npx wrangler pipelines update [PIPELINE]
```

* `[PIPELINE]` ` string ` required  
The name of the legacy pipeline to update
* `--source` ` array `  
Space-separated list of allowed sources. Options are 'http' or 'worker'
* `--require-http-auth` ` boolean `  
Require a Cloudflare API token for HTTPS endpoint authentication
* `--cors-origins` ` array `  
CORS origin allowlist for the HTTP endpoint (use `*` for any origin). Defaults to an empty array
* `--batch-max-mb` ` number `  
Maximum batch size in megabytes before flushing. Defaults to 100 MB if unset. Minimum: 1, Maximum: 100
* `--batch-max-rows` ` number `  
Maximum number of rows per batch before flushing. Defaults to 10,000,000 if unset. Minimum: 100, Maximum: 10,000,000
* `--batch-max-seconds` ` number `  
Maximum age of batch in seconds before flushing. Defaults to 300 if unset. Minimum: 1, Maximum: 300
* `--r2-bucket` ` string `  
Destination R2 bucket name
* `--r2-access-key-id` ` string `  
R2 service Access Key ID for authentication. Leave empty for OAuth confirmation.
* `--r2-secret-access-key` ` string `  
R2 service Secret Access Key for authentication. Leave empty for OAuth confirmation.
* `--r2-prefix` ` string `  
Prefix for storing files in the destination bucket. Default is no prefix
* `--compression` ` string `  
Compression format for output files
* `--shard-count` ` number `  
Number of shards for the pipeline. More shards handle higher request volume; fewer shards produce larger output files. Defaults to 2 if unset. Minimum: 1, Maximum: 15
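Putting several of these flags together, a hypothetical update that tightens batching and redirects output (pipeline and bucket names are illustrative) might look like:

```sh
# Flush batches at 50 MB, 1,000,000 rows, or 60 seconds, whichever comes first,
# and write output files to a different R2 bucket under an "events/" prefix.
npx wrangler pipelines update my-legacy-pipeline \
  --batch-max-mb 50 \
  --batch-max-rows 1000000 \
  --batch-max-seconds 60 \
  --r2-bucket my-logs-bucket \
  --r2-prefix "events/"
```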


## `pipelines delete`

Delete a pipeline

```
npx wrangler pipelines delete [PIPELINE]
```

* `[PIPELINE]` ` string ` required  
The ID or name of the pipeline to delete
* `--force` ` boolean ` alias: -y default: false  
Skip confirmation


## `pipelines streams create`

Create a new stream

```
npx wrangler pipelines streams create [STREAM]
```

* `[STREAM]` ` string ` required  
The name of the stream to create
* `--schema-file` ` string `  
Path to JSON file containing stream schema
* `--http-enabled` ` boolean ` default: true  
Enable HTTP endpoint
* `--http-auth` ` boolean ` default: true  
Require authentication for HTTP endpoint
* `--cors-origin` ` string `  
CORS origin
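A stream's schema is supplied as a JSON file. As an illustrative sketch (the stream name and schema file path are assumptions; consult the Pipelines documentation for the authoritative schema format), creation might look like:

```sh
# Create a stream with a schema and an authenticated HTTP ingest endpoint.
npx wrangler pipelines streams create my-stream \
  --schema-file ./schema.json \
  --http-enabled true \
  --http-auth true
```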


## `pipelines streams list`

List all streams

```
npx wrangler pipelines streams list
```

* `--page` ` number ` default: 1  
Page number for pagination
* `--per-page` ` number ` default: 20  
Number of streams per page
* `--pipeline-id` ` string `  
Filter streams by pipeline ID
* `--json` ` boolean ` default: false  
Output in JSON format


## `pipelines streams get`

Get details about a specific stream

```
npx wrangler pipelines streams get [STREAM]
```

* `[STREAM]` ` string ` required  
The ID of the stream to retrieve
* `--json` ` boolean ` default: false  
Output in JSON format


## `pipelines streams delete`

Delete a stream

```
npx wrangler pipelines streams delete [STREAM]
```

* `[STREAM]` ` string ` required  
The ID of the stream to delete
* `--force` ` boolean ` alias: -y default: false  
Skip confirmation


## `pipelines sinks create`

Create a new sink

```
npx wrangler pipelines sinks create [SINK]
```

* `[SINK]` ` string ` required  
The name of the sink to create
* `--type` ` string ` required  
The type of sink to create
* `--bucket` ` string ` required  
R2 bucket name
* `--format` ` string ` default: parquet  
Output format
* `--compression` ` string ` default: zstd  
Compression method (parquet only)
* `--target-row-group-size` ` string `  
Target row group size for parquet format
* `--path` ` string `  
The base prefix in your bucket where data will be written
* `--partitioning` ` string `  
Time partition pattern (r2 sinks only)
* `--roll-size` ` number `  
Roll file size in MB
* `--roll-interval` ` number ` default: 300  
Roll file interval in seconds
* `--access-key-id` ` string `  
R2 access key ID (leave empty for R2 credentials to be automatically created)
* `--secret-access-key` ` string `  
R2 secret access key (leave empty for R2 credentials to be automatically created)
* `--namespace` ` string `  
Data catalog namespace (required for r2-data-catalog)
* `--table` ` string `  
Table name within namespace (required for r2-data-catalog)
* `--catalog-token` ` string `  
Authentication token for data catalog (required for r2-data-catalog)
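For instance, creating a sink that writes zstd-compressed Parquet files under a prefix, rolling files every five minutes. The sink and bucket names are illustrative, and the `r2` type value is inferred from the flag descriptions above ("r2 sinks only"); verify it against the Pipelines documentation:

```sh
# Create an R2 sink; R2 credentials are provisioned automatically when
# --access-key-id / --secret-access-key are omitted.
npx wrangler pipelines sinks create my-sink \
  --type r2 \
  --bucket my-data-bucket \
  --format parquet \
  --compression zstd \
  --path "events/" \
  --roll-interval 300
```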


## `pipelines sinks list`

List all sinks

```
npx wrangler pipelines sinks list
```

* `--page` ` number ` default: 1  
Page number for pagination
* `--per-page` ` number ` default: 20  
Number of sinks per page
* `--pipeline-id` ` string `  
Filter sinks by pipeline ID
* `--json` ` boolean ` default: false  
Output in JSON format


## `pipelines sinks get`

Get details about a specific sink

```
npx wrangler pipelines sinks get [SINK]
```

* `[SINK]` ` string ` required  
The ID of the sink to retrieve
* `--json` ` boolean ` default: false  
Output in JSON format


## `pipelines sinks delete`

Delete a sink

```
npx wrangler pipelines sinks delete [SINK]
```

* `[SINK]` ` string ` required  
The ID of the sink to delete
* `--force` ` boolean ` alias: -y default: false  
Skip confirmation



---

---
title: Queues
description: Wrangler commands for managing Workers Queues configurations.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Queues

Manage your Workers [Queues](https://developers.cloudflare.com/queues/) configurations using Wrangler.

## `queues list`

List queues

```
npx wrangler queues list
```

* `--page` ` number `  
Page number for pagination

Global flags

* `--version` ` boolean ` alias: -v  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: -c  
Path to Wrangler configuration file
* `--env` ` string ` alias: -e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to a .env file to load. Can be specified multiple times; values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` alias: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `queues create`

Create a queue

```
npx wrangler queues create [NAME]
```

* `[NAME]` ` string ` required  
The name of the queue
* `--delivery-delay-secs` ` number `  
How long a published message should be delayed for, in seconds. Must be between 0 and 86400
* `--message-retention-period-secs` ` number `  
How long to retain a message in the queue, in seconds. Must be between 60 and 86400 if on free tier, otherwise must be between 60 and 1209600
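For example, to create a queue whose messages become deliverable after a 10-second delay and are retained for four days (the queue name is illustrative; both values fall within the documented ranges):

```sh
# 345600 seconds = 4 days of retention; delivery of each message
# is delayed by 10 seconds after publish.
npx wrangler queues create my-queue \
  --delivery-delay-secs 10 \
  --message-retention-period-secs 345600
```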


## `queues update`

Update a queue

```
npx wrangler queues update [NAME]
```

* `[NAME]` ` string ` required  
The name of the queue
* `--delivery-delay-secs` ` number `  
How long a published message should be delayed for, in seconds. Must be between 0 and 86400
* `--message-retention-period-secs` ` number `  
How long to retain a message in the queue, in seconds. Must be between 60 and 86400 if on free tier, otherwise must be between 60 and 1209600


## `queues delete`

Delete a queue

```
npx wrangler queues delete [NAME]
```

* `[NAME]` ` string ` required  
The name of the queue


## `queues info`

Get queue information

```
npx wrangler queues info [NAME]
```

* `[NAME]` ` string ` required  
The name of the queue


## `queues consumer add`

Add a Queue Worker Consumer

```
npx wrangler queues consumer add [QUEUE-NAME] [SCRIPT-NAME]
```

* `[QUEUE-NAME]` ` string ` required  
Name of the queue to configure
* `[SCRIPT-NAME]` ` string ` required  
Name of the consumer script
* `--batch-size` ` number `  
Maximum number of messages per batch
* `--batch-timeout` ` number `  
Maximum number of seconds to wait to fill a batch with messages
* `--message-retries` ` number `  
Maximum number of retries for each message
* `--dead-letter-queue` ` string `  
Queue to send messages that failed to be consumed
* `--max-concurrency` ` number `  
The maximum number of concurrent consumer Worker invocations. Must be a positive integer
* `--retry-delay-secs` ` number `  
The number of seconds to wait before retrying a message
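Combining these flags, a consumer Worker that processes up to 50 messages per batch, waits at most 5 seconds to fill a batch, and routes messages that exhaust their retries to a dead-letter queue might be added like this (queue, Worker, and dead-letter-queue names are illustrative):

```sh
# Attach a consumer Worker to a queue with batching, retry,
# and dead-letter settings.
npx wrangler queues consumer add my-queue my-consumer-worker \
  --batch-size 50 \
  --batch-timeout 5 \
  --message-retries 3 \
  --dead-letter-queue my-dlq
```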


## `queues consumer remove`

Remove a Queue Worker Consumer

```
npx wrangler queues consumer remove [QUEUE-NAME] [SCRIPT-NAME]
```

* `[QUEUE-NAME]` ` string ` required  
Name of the queue to configure
* `[SCRIPT-NAME]` ` string ` required  
Name of the consumer script

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `queues consumer http add`

Add a Queue HTTP Pull Consumer

```sh
npx wrangler queues consumer http add [QUEUE-NAME]
```

* `[QUEUE-NAME]` ` string ` required  
Name of the queue for the consumer
* `--batch-size` ` number `  
Maximum number of messages per batch
* `--message-retries` ` number `  
Maximum number of retries for each message
* `--dead-letter-queue` ` string `  
Queue to send messages that failed to be consumed
* `--visibility-timeout-secs` ` number `  
The number of seconds a message will wait for an acknowledgement before being returned to the queue.
* `--retry-delay-secs` ` number `  
The number of seconds to wait before retrying a message

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
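
For example, to add an HTTP pull consumer to an illustrative queue `my-queue`, with a 60-second visibility timeout and up to 3 delivery retries per message:

```sh
npx wrangler queues consumer http add my-queue --visibility-timeout-secs 60 --message-retries 3
```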

## `queues consumer http remove`

Remove a Queue HTTP Pull Consumer

```sh
npx wrangler queues consumer http remove [QUEUE-NAME]
```

* `[QUEUE-NAME]` ` string ` required  
Name of the queue for the consumer

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `queues consumer worker add`

Add a Queue Worker Consumer

```sh
npx wrangler queues consumer worker add [QUEUE-NAME] [SCRIPT-NAME]
```

* `[QUEUE-NAME]` ` string ` required  
Name of the queue to configure
* `[SCRIPT-NAME]` ` string ` required  
Name of the consumer script
* `--batch-size` ` number `  
Maximum number of messages per batch
* `--batch-timeout` ` number `  
Maximum number of seconds to wait to fill a batch with messages
* `--message-retries` ` number `  
Maximum number of retries for each message
* `--dead-letter-queue` ` string `  
Queue to send messages that failed to be consumed
* `--max-concurrency` ` number `  
The maximum number of concurrent consumer Worker invocations. Must be a positive integer
* `--retry-delay-secs` ` number `  
The number of seconds to wait before retrying a message

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
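
For example, to attach a hypothetical script `my-consumer` to an illustrative queue `my-queue` while capping concurrent invocations and delaying retries:

```sh
npx wrangler queues consumer worker add my-queue my-consumer --max-concurrency 5 --retry-delay-secs 30
```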

## `queues consumer worker remove`

Remove a Queue Worker Consumer

```sh
npx wrangler queues consumer worker remove [QUEUE-NAME] [SCRIPT-NAME]
```

* `[QUEUE-NAME]` ` string ` required  
Name of the queue to configure
* `[SCRIPT-NAME]` ` string ` required  
Name of the consumer script

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `queues pause-delivery`

Pause message delivery for a queue

```sh
npx wrangler queues pause-delivery [NAME]
```

* `[NAME]` ` string ` required  
The name of the queue

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
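
For example, to pause delivery for an illustrative queue `my-queue` (producers can generally continue writing while delivery to consumers is paused):

```sh
npx wrangler queues pause-delivery my-queue
```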

## `queues resume-delivery`

Resume message delivery for a queue

```sh
npx wrangler queues resume-delivery [NAME]
```

* `[NAME]` ` string ` required  
The name of the queue

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `queues purge`

Purge messages from a queue

```sh
npx wrangler queues purge [NAME]
```

* `[NAME]` ` string ` required  
The name of the queue
* `--force` ` boolean `  
Skip the confirmation dialog and forcefully purge the queue

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
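
For example, to purge an illustrative queue `my-queue` non-interactively, such as in a CI script:

```sh
npx wrangler queues purge my-queue --force
```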

## `queues subscription create`

Create a new event subscription for a queue

```sh
npx wrangler queues subscription create [QUEUE]
```

* `[QUEUE]` ` string ` required  
The name of the queue to create the subscription for
* `--source` ` string ` required  
The event source type
* `--events` ` string ` required  
Comma-separated list of event types to subscribe to
* `--name` ` string `  
Name for the subscription (auto-generated if not provided)
* `--enabled` ` boolean ` default: true  
Whether the subscription should be active
* `--model-name` ` string `  
Workers AI model name (required for workersAi.model source)
* `--worker-name` ` string `  
Worker name (required for workersBuilds.worker source)
* `--workflow-name` ` string `  
Workflow name (required for workflows.workflow source)

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
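
For example, to subscribe an illustrative queue `my-queue` to events from a Workers Builds source (the valid `--events` values depend on the chosen source; `<EVENT-TYPES>` and `my-worker` are placeholders):

```sh
npx wrangler queues subscription create my-queue --source workersBuilds.worker --worker-name my-worker --events <EVENT-TYPES>
```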

## `queues subscription list`

List event subscriptions for a queue

```sh
npx wrangler queues subscription list [QUEUE]
```

* `[QUEUE]` ` string ` required  
The name of the queue to list subscriptions for
* `--page` ` number ` default: 1  
Page number for pagination
* `--per-page` ` number ` default: 20  
Number of subscriptions per page
* `--json` ` boolean ` default: false  
Output in JSON format

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
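
For example, to list up to 50 subscriptions for an illustrative queue `my-queue` as JSON:

```sh
npx wrangler queues subscription list my-queue --per-page 50 --json
```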

## `queues subscription get`

Get details about a specific event subscription

```sh
npx wrangler queues subscription get [QUEUE]
```

* `[QUEUE]` ` string ` required  
The name of the queue
* `--id` ` string ` required  
The ID of the subscription to retrieve
* `--json` ` boolean ` default: false  
Output in JSON format

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
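
For example, to fetch a single subscription by ID from an illustrative queue (`<SUBSCRIPTION-ID>` is a placeholder):

```sh
npx wrangler queues subscription get my-queue --id <SUBSCRIPTION-ID> --json
```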

## `queues subscription delete`

Delete an event subscription from a queue

```sh
npx wrangler queues subscription delete [QUEUE]
```

* `[QUEUE]` ` string ` required  
The name of the queue
* `--id` ` string ` required  
The ID of the subscription to delete
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `queues subscription update`

Update an existing event subscription

```sh
npx wrangler queues subscription update [QUEUE]
```

* `[QUEUE]` ` string ` required  
The name of the queue
* `--id` ` string ` required  
The ID of the subscription to update
* `--name` ` string `  
New name for the subscription
* `--events` ` string `  
Comma-separated list of event types to subscribe to
* `--enabled` ` boolean `  
Whether the subscription should be active
* `--json` ` boolean ` default: false  
Output in JSON format

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
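
For example, to disable a subscription without deleting it (`<SUBSCRIPTION-ID>` is a placeholder):

```sh
npx wrangler queues subscription update my-queue --id <SUBSCRIPTION-ID> --enabled=false
```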


---

---
title: R2
description: Wrangler commands for managing Workers R2 buckets and objects.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# R2

Manage your [R2](https://developers.cloudflare.com/r2/) configurations using Wrangler.

## `r2 bucket`

Interact with buckets in an R2 store.

Note

The `r2 bucket` commands allow you to manage application data in the Cloudflare network to be accessed from Workers using [the R2 API](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/).

### `r2 bucket create`

Create a new R2 bucket

```sh
npx wrangler r2 bucket create [NAME]
```

* `[NAME]` ` string ` required  
The name of the new bucket
* `--location` ` string `  
The optional location hint that determines geographic placement of the R2 bucket
* `--storage-class` ` string ` alias: --s  
The default storage class for objects uploaded to this bucket
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the new bucket will be created
* `--use-remote` ` boolean `  
Use a remote binding when adding the newly created resource to your config
* `--update-config` ` boolean `  
Automatically update your config file with the newly added resource
* `--binding` ` string `  
The binding name of this resource in your Worker

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
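
For example, to create an illustrative bucket `my-bucket` with a Western Europe location hint and a non-default storage class (the values `weur` and `InfrequentAccess` are assumed here; valid location hints and storage classes are listed in the R2 documentation):

```sh
npx wrangler r2 bucket create my-bucket --location weur --storage-class InfrequentAccess
```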

### `r2 bucket info`

Get information about an R2 bucket

```sh
npx wrangler r2 bucket info [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the bucket to retrieve info for
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--json` ` boolean ` default: false  
Return the bucket information as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
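
For example, to print details for an illustrative bucket as JSON:

```sh
npx wrangler r2 bucket info my-bucket --json
```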

### `r2 bucket delete`

Delete an R2 bucket

```sh
npx wrangler r2 bucket delete [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the bucket to delete
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket list`

List R2 buckets

```sh
npx wrangler r2 bucket list
```

* `--jurisdiction` ` string ` alias: --J  
The jurisdiction to list buckets from

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket catalog enable`

Enable the data catalog on an R2 bucket

```sh
npx wrangler r2 bucket catalog enable [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the bucket to enable the data catalog for

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket catalog disable`

Disable the data catalog for an R2 bucket

```sh
npx wrangler r2 bucket catalog disable [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the bucket to disable the data catalog for

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket catalog get`

Get the status of the data catalog for an R2 bucket

```sh
npx wrangler r2 bucket catalog get [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket whose data catalog status to retrieve

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket catalog compaction enable`

Enable automatic file compaction for your R2 data catalog or a specific table

```sh
npx wrangler r2 bucket catalog compaction enable [BUCKET] [NAMESPACE] [TABLE]
```

* `[BUCKET]` ` string ` required  
The name of the bucket which contains the catalog
* `[NAMESPACE]` ` string `  
The namespace containing the table (optional, for table-level compaction)
* `[TABLE]` ` string `  
The name of the table (optional, for table-level compaction)
* `--target-size` ` number ` default: 128  
The target size for compacted files in MB (allowed values: 64, 128, 256, 512)
* `--token` ` string `  
A Cloudflare API token with access to R2 and the R2 Data Catalog (required only for catalog-level compaction settings)

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

Examples:

```sh
# Enable catalog-level compaction (requires token)
npx wrangler r2 bucket catalog compaction enable my-bucket --token <TOKEN>

# Enable table-level compaction
npx wrangler r2 bucket catalog compaction enable my-bucket my-namespace my-table --target-size 256
```

### `r2 bucket catalog compaction disable`

Disable automatic file compaction for your R2 data catalog or a specific table

* npm

```
npx wrangler r2 bucket catalog compaction disable [BUCKET] [NAMESPACE] [TABLE]
```

* pnpm

```
pnpm wrangler r2 bucket catalog compaction disable [BUCKET] [NAMESPACE] [TABLE]
```

* yarn

```
yarn wrangler r2 bucket catalog compaction disable [BUCKET] [NAMESPACE] [TABLE]
```

* `[BUCKET]` ` string ` required  
The name of the bucket that contains the catalog
* `[NAMESPACE]` ` string `  
The namespace containing the table (optional, for table-level compaction)
* `[TABLE]` ` string `  
The name of the table (optional, for table-level compaction)

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

Examples:

```
# Disable catalog-level compaction
npx wrangler r2 bucket catalog compaction disable my-bucket

# Disable table-level compaction
npx wrangler r2 bucket catalog compaction disable my-bucket my-namespace my-table
```

### `r2 bucket catalog snapshot-expiration enable`

Enable automatic snapshot expiration for your R2 data catalog or a specific table

* npm

```
npx wrangler r2 bucket catalog snapshot-expiration enable [BUCKET] [NAMESPACE] [TABLE]
```

* pnpm

```
pnpm wrangler r2 bucket catalog snapshot-expiration enable [BUCKET] [NAMESPACE] [TABLE]
```

* yarn

```
yarn wrangler r2 bucket catalog snapshot-expiration enable [BUCKET] [NAMESPACE] [TABLE]
```

* `[BUCKET]` ` string ` required  
The name of the bucket that contains the catalog
* `[NAMESPACE]` ` string `  
The namespace containing the table (optional, for table-level snapshot expiration)
* `[TABLE]` ` string `  
The name of the table (optional, for table-level snapshot expiration)
* `--older-than-days` ` number `  
Delete snapshots older than this many days (defaults to 30)
* `--retain-last` ` number `  
The minimum number of snapshots to retain (defaults to 5)
* `--token` ` string `  
A Cloudflare API token with access to R2 and R2 Data Catalog (required for catalog-level snapshot expiration settings only)

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
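
Examples (for symmetry with the compaction examples above; the bucket, namespace, and table names here are placeholders):

```
# Enable catalog-level snapshot expiration (requires token)
npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket --token <TOKEN>

# Enable table-level expiration: delete snapshots older than 14 days,
# but always keep the 10 most recent
npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket my-namespace my-table --older-than-days 14 --retain-last 10
```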

### `r2 bucket catalog snapshot-expiration disable`

Disable automatic snapshot expiration for your R2 data catalog or a specific table

* npm

```
npx wrangler r2 bucket catalog snapshot-expiration disable [BUCKET] [NAMESPACE] [TABLE]
```

* pnpm

```
pnpm wrangler r2 bucket catalog snapshot-expiration disable [BUCKET] [NAMESPACE] [TABLE]
```

* yarn

```
yarn wrangler r2 bucket catalog snapshot-expiration disable [BUCKET] [NAMESPACE] [TABLE]
```

* `[BUCKET]` ` string ` required  
The name of the bucket that contains the catalog
* `[NAMESPACE]` ` string `  
The namespace containing the table (optional, for table-level snapshot expiration)
* `[TABLE]` ` string `  
The name of the table (optional, for table-level snapshot expiration)
* `--force` ` boolean ` default: false  
Skip confirmation prompt

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket cors set`

Set the CORS configuration for an R2 bucket from a JSON file

* npm

```
npx wrangler r2 bucket cors set [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket cors set [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket cors set [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to set the CORS configuration for
* `--file` ` string ` required  
Path to the JSON file containing the CORS configuration
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
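
A minimal configuration file for `--file` might look like the sketch below. The field names follow the S3-style CORS schema R2 uses elsewhere, but treat the exact shape as an assumption and verify it against the R2 CORS documentation:

```
[
  {
    "AllowedOrigins": ["https://example.com"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["Content-Type"],
    "MaxAgeSeconds": 3600
  }
]
```

With that saved as `cors.json` (a placeholder filename), apply it with `npx wrangler r2 bucket cors set my-bucket --file cors.json`.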

### `r2 bucket cors delete`

Clear the CORS configuration for an R2 bucket

* npm

```
npx wrangler r2 bucket cors delete [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket cors delete [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket cors delete [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to delete the CORS configuration for
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket cors list`

List the CORS rules for an R2 bucket

* npm

```
npx wrangler r2 bucket cors list [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket cors list [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket cors list [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to list the CORS rules for
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket dev-url enable`

Enable public access via the r2.dev URL for an R2 bucket

* npm

```
npx wrangler r2 bucket dev-url enable [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket dev-url enable [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket dev-url enable [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to enable public access via its r2.dev URL
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket dev-url disable`

Disable public access via the r2.dev URL for an R2 bucket

* npm

```
npx wrangler r2 bucket dev-url disable [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket dev-url disable [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket dev-url disable [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to disable public access via its r2.dev URL
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket dev-url get`

Get the r2.dev URL and status for an R2 bucket

* npm

```
npx wrangler r2 bucket dev-url get [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket dev-url get [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket dev-url get [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket whose r2.dev URL status to retrieve
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket domain add`

Connect a custom domain to an R2 bucket

* npm

```
npx wrangler r2 bucket domain add [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket domain add [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket domain add [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to connect a custom domain to
* `--domain` ` string ` required  
The custom domain to connect to the R2 bucket
* `--zone-id` ` string ` required  
The zone ID associated with the custom domain
* `--min-tls` ` string `  
Set the minimum TLS version for the custom domain (defaults to 1.0 if not set)
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
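
Putting the required flags together, connecting a domain might look like the following sketch (the bucket name, domain, and zone ID are placeholders):

```
npx wrangler r2 bucket domain add my-bucket --domain files.example.com --zone-id 023e105f4ecef8ad9ca31a8372d0c353 --min-tls 1.2
```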

### `r2 bucket domain remove`

Remove a custom domain from an R2 bucket

* npm

```
npx wrangler r2 bucket domain remove [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket domain remove [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket domain remove [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to remove the custom domain from
* `--domain` ` string ` required  
The custom domain to remove from the R2 bucket
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket domain update`

Update settings for a custom domain connected to an R2 bucket

* npm

```
npx wrangler r2 bucket domain update [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket domain update [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket domain update [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket associated with the custom domain to update
* `--domain` ` string ` required  
The custom domain whose settings will be updated
* `--min-tls` ` string `  
Update the minimum TLS version for the custom domain
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket domain get`

Get information about a custom domain connected to an R2 bucket

* npm

```
npx wrangler r2 bucket domain get [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket domain get [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket domain get [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket whose custom domain to retrieve
* `--domain` ` string ` required  
The custom domain to get information for
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket domain list`

List custom domains for an R2 bucket

* npm

```
npx wrangler r2 bucket domain list [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket domain list [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket domain list [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket whose connected custom domains will be listed
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket lifecycle add`

Add a lifecycle rule to an R2 bucket

* npm

```
npx wrangler r2 bucket lifecycle add [BUCKET] [NAME] [PREFIX]
```

* pnpm

```
pnpm wrangler r2 bucket lifecycle add [BUCKET] [NAME] [PREFIX]
```

* yarn

```
yarn wrangler r2 bucket lifecycle add [BUCKET] [NAME] [PREFIX]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to add a lifecycle rule to
* `[NAME]` ` string ` alias: --id  
A unique name for the lifecycle rule, used to identify and manage it
* `[PREFIX]` ` string `  
Prefix condition for the lifecycle rule (leave empty for all prefixes)
* `--expire-days` ` number `  
Number of days after which objects expire
* `--expire-date` ` string `  
Date after which objects expire (YYYY-MM-DD)
* `--ia-transition-days` ` number `  
Number of days after which objects transition to Infrequent Access storage
* `--ia-transition-date` ` string `  
Date after which objects transition to Infrequent Access storage (YYYY-MM-DD)
* `--abort-multipart-days` ` number `  
Number of days after which incomplete multipart uploads are aborted
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation and data catalog validation prompt

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
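
Combining the positional arguments and flags above, adding rules might look like the following sketch (the bucket, rule names, and prefixes are placeholders):

```
# Expire objects under logs/ after 30 days
npx wrangler r2 bucket lifecycle add my-bucket delete-old-logs logs/ --expire-days 30

# Move objects under archive/ to Infrequent Access storage after 90 days
npx wrangler r2 bucket lifecycle add my-bucket archive-to-ia archive/ --ia-transition-days 90
```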

### `r2 bucket lifecycle remove`

Remove a lifecycle rule from an R2 bucket

* npm

```
npx wrangler r2 bucket lifecycle remove [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket lifecycle remove [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket lifecycle remove [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to remove a lifecycle rule from
* `--name` ` string ` alias: --id required  
The unique name of the lifecycle rule to remove
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket lifecycle list`

List lifecycle rules for an R2 bucket

* npm

```
npx wrangler r2 bucket lifecycle list [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket lifecycle list [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket lifecycle list [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to list lifecycle rules for
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket lifecycle set`

Set the lifecycle configuration for an R2 bucket from a JSON file

* npm

```
npx wrangler r2 bucket lifecycle set [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket lifecycle set [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket lifecycle set [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to set lifecycle configuration for
* `--file` ` string ` required  
Path to the JSON file containing lifecycle configuration
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation and data catalog validation prompt

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
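
A lifecycle configuration file for `--file` might look like the sketch below. The shape shown (a top-level `rules` array with `conditions` and transition objects, with `maxAge` in seconds) follows R2's lifecycle API, but treat the exact field names as an assumption and check the R2 lifecycle documentation:

```
{
  "rules": [
    {
      "id": "delete-old-logs",
      "enabled": true,
      "conditions": { "prefix": "logs/" },
      "deleteObjectsTransition": {
        "condition": { "type": "Age", "maxAge": 2592000 }
      }
    }
  ]
}
```

With that saved as `lifecycle.json` (a placeholder filename), apply it with `npx wrangler r2 bucket lifecycle set my-bucket --file lifecycle.json`.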

### `r2 bucket lock add`

Add a lock rule to an R2 bucket

* npm

```
npx wrangler r2 bucket lock add [BUCKET] [NAME] [PREFIX]
```

* pnpm

```
pnpm wrangler r2 bucket lock add [BUCKET] [NAME] [PREFIX]
```

* yarn

```
yarn wrangler r2 bucket lock add [BUCKET] [NAME] [PREFIX]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to add a bucket lock rule to
* `[NAME]` ` string ` alias: --id  
A unique name for the bucket lock rule, used to identify and manage it
* `[PREFIX]` ` string `  
Prefix condition for the bucket lock rule (set to "" for all prefixes)
* `--retention-days` ` number `  
Number of days for which objects will be retained
* `--retention-date` ` string `  
Date until which objects will be retained (YYYY-MM-DD)
* `--retention-indefinite` ` boolean `  
Retain objects indefinitely
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
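
Using the retention flags above, adding lock rules might look like the following sketch (the bucket, rule names, and prefixes are placeholders):

```
# Retain objects under invoices/ for one year
npx wrangler r2 bucket lock add my-bucket retain-invoices invoices/ --retention-days 365

# Retain all objects in the bucket indefinitely (empty prefix matches everything)
npx wrangler r2 bucket lock add my-bucket retain-all "" --retention-indefinite
```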

### `r2 bucket lock remove`

Remove a bucket lock rule from an R2 bucket

* npm

```
npx wrangler r2 bucket lock remove [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket lock remove [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket lock remove [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to remove a bucket lock rule from
* `--name` ` string ` alias: --id required  
The unique name of the bucket lock rule to remove
* `--jurisdiction` ` string ` alias: --J  
The jurisdiction where the bucket exists

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

### `r2 bucket lock list`

List lock rules for an R2 bucket

* npm

```
npx wrangler r2 bucket lock list [BUCKET]
```

* pnpm

```
pnpm wrangler r2 bucket lock list [BUCKET]
```

* yarn

```
yarn wrangler r2 bucket lock list [BUCKET]
```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to list lock rules for
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists


### `r2 bucket lock set`

Set the lock configuration for an R2 bucket from a JSON file

```

npx wrangler r2 bucket lock set [BUCKET]

```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to set lock configuration for
* `--file` ` string ` required  
Path to the JSON file containing lock configuration
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists
* `--force` ` boolean ` alias: -y default: false  
Skip confirmation
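
For example, to apply lock rules from a local JSON file without the confirmation prompt (the bucket and file names below are placeholders):

```

npx wrangler r2 bucket lock set my-bucket --file ./lock-rules.json --force

```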


### `r2 bucket notification create`

Create an event notification rule for an R2 bucket

```

npx wrangler r2 bucket notification create [BUCKET]

```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to create an event notification rule for
* `--event-types` ` "object-create" | "object-delete" ` alias: --event-type required  
The type of event(s) that will emit event notifications
* `--prefix` ` string `  
The prefix that an object must match to emit event notifications (regular expressions are not supported)
* `--suffix` ` string `  
The suffix that an object must match to emit event notifications (regular expressions are not supported)
* `--queue` ` string ` required  
The name of the queue that will receive event notification messages
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists
* `--description` ` string `  
A description that can be used to identify the event notification rule after creation
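
For example, to send a notification to a queue whenever a `.jpg` object is created under a given prefix (the bucket and queue names below are placeholders):

```

npx wrangler r2 bucket notification create my-bucket --event-types object-create --queue my-queue --prefix "images/" --suffix ".jpg"

```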


### `r2 bucket notification delete`

Delete an event notification rule from an R2 bucket

```

npx wrangler r2 bucket notification delete [BUCKET]

```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to delete an event notification rule for
* `--queue` ` string ` required  
The name of the queue that corresponds to the event notification rule. If no rule is provided, all event notification rules associated with the bucket and queue will be deleted
* `--rule` ` string `  
The ID of the event notification rule to delete
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists
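
For example, omitting `--rule` deletes every event notification rule that targets the given queue (the bucket and queue names below are placeholders):

```

npx wrangler r2 bucket notification delete my-bucket --queue my-queue

```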


### `r2 bucket notification list`

List event notification rules for an R2 bucket

```

npx wrangler r2 bucket notification list [BUCKET]

```

* `[BUCKET]` ` string ` required  
The name of the R2 bucket to get event notification rules for
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists


### `r2 bucket sippy enable`

Enable Sippy on an R2 bucket

```

npx wrangler r2 bucket sippy enable [NAME]

```

* `[NAME]` ` string ` required  
The name of the bucket
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists
* `--provider` ` "AWS" | "GCS" `  
The cloud provider of the upstream bucket
* `--bucket` ` string `  
The name of the upstream bucket
* `--region` ` string `  
(AWS provider only) The region of the upstream bucket
* `--access-key-id` ` string `  
(AWS provider only) The access key ID for the upstream bucket
* `--secret-access-key` ` string `  
(AWS provider only) The secret access key for the upstream bucket
* `--service-account-key-file` ` string `  
(GCS provider only) The path to your Google Cloud service account key JSON file
* `--client-email` ` string `  
(GCS provider only) The client email for your Google Cloud service account key
* `--private-key` ` string `  
(GCS provider only) The private key for your Google Cloud service account key
* `--r2-access-key-id` ` string `  
The access key ID for this R2 bucket
* `--r2-secret-access-key` ` string `  
The secret access key for this R2 bucket
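
For example, to start migrating incrementally from an AWS S3 bucket (all names and credential values below are placeholders):

```

npx wrangler r2 bucket sippy enable my-r2-bucket \
  --provider AWS \
  --bucket my-s3-bucket \
  --region us-east-1 \
  --access-key-id <AWS_ACCESS_KEY_ID> \
  --secret-access-key <AWS_SECRET_ACCESS_KEY> \
  --r2-access-key-id <R2_ACCESS_KEY_ID> \
  --r2-secret-access-key <R2_SECRET_ACCESS_KEY>

```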


### `r2 bucket sippy disable`

Disable Sippy on an R2 bucket

```

npx wrangler r2 bucket sippy disable [NAME]

```

* `[NAME]` ` string ` required  
The name of the bucket
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists


### `r2 bucket sippy get`

Check the status of Sippy on an R2 bucket

```

npx wrangler r2 bucket sippy get [NAME]

```

* `[NAME]` ` string ` required  
The name of the bucket
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the bucket exists


## `r2 object`

Interact with R2 objects.

Note

The `r2 object` commands allow you to manage application data in the Cloudflare network to be accessed from Workers using [the R2 API](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/).

### `r2 object get`

Fetch an object from an R2 bucket

```

npx wrangler r2 object get [OBJECTPATH]

```

* `[OBJECTPATH]` ` string ` required  
The source object path in the form of {bucket}/{key}
* `--file` ` string ` alias: -f  
The destination file to create
* `--pipe` ` boolean ` alias: -p  
Enables the file to be piped to a destination, rather than specified with the --file option
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the object exists
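
For example, to download an object to a local file, or to stream it to stdout with `--pipe` (the bucket and paths below are placeholders):

```

npx wrangler r2 object get my-bucket/path/to/file.txt --file ./file.txt

npx wrangler r2 object get my-bucket/path/to/file.txt --pipe > file.txt

```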


### `r2 object put`

Create an object in an R2 bucket

```

npx wrangler r2 object put [OBJECTPATH]

```

* `[OBJECTPATH]` ` string ` required  
The destination object path in the form of {bucket}/{key}
* `--content-type` ` string ` alias: --ct  
A standard MIME type describing the format of the object data
* `--content-disposition` ` string ` alias: --cd  
Specifies presentational information for the object
* `--content-encoding` ` string ` alias: --ce  
Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field
* `--content-language` ` string ` alias: --cl  
The language the content is in
* `--cache-control` ` string ` alias: --cc  
Specifies caching behavior along the request/reply chain
* `--expires` ` string `  
The date and time at which the object is no longer cacheable
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the object will be created
* `--storage-class` ` string ` alias: -s  
The storage class of the object to be created
* `--force` ` boolean ` alias: -y default: false  
Skip data catalog validation prompt
* `--file` ` string ` alias: -f  
The path of the file to upload
* `--pipe` ` boolean ` alias: -p  
Enables the file to be piped in, rather than specified with the --file option
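
For example, to upload a local file with an explicit content type, or to pipe data in from stdin (the bucket and file names below are placeholders):

```

npx wrangler r2 object put my-bucket/images/logo.png --file ./logo.png --content-type image/png

cat logo.png | npx wrangler r2 object put my-bucket/images/logo.png --pipe

```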


### `r2 object delete`

Delete an object in an R2 bucket

```

npx wrangler r2 object delete [OBJECTPATH]

```

* `[OBJECTPATH]` ` string ` required  
The destination object path in the form of {bucket}/{key}
* `--local` ` boolean `  
Interact with local storage
* `--remote` ` boolean `  
Interact with remote storage
* `--persist-to` ` string `  
Directory for local persistence
* `--jurisdiction` ` string ` alias: -J  
The jurisdiction where the object exists
* `--force` ` boolean ` alias: -y default: false  
Skip data catalog validation prompt
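
For example, to delete an object from the remote bucket rather than local storage (the object path below is a placeholder):

```

npx wrangler r2 object delete my-bucket/path/to/file.txt --remote

```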


---

## R2 SQL

Note

R2 SQL is currently in open beta. Report R2 SQL bugs in [GitHub ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose). R2 SQL expects a [WRANGLER\_R2\_SQL\_AUTH\_TOKEN](https://developers.cloudflare.com/r2-sql/query-data/#authentication) environment variable to be set.

### `r2 sql query`

Execute SQL query against R2 Data Catalog

```

npx wrangler r2 sql query [WAREHOUSE] [QUERY]

```

* `[WAREHOUSE]` ` string ` required  
R2 Data Catalog warehouse name
* `[QUERY]` ` string ` required  
The SQL query to execute
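
For example (the warehouse, namespace, and table names below are placeholders):

```

npx wrangler r2 sql query my-warehouse "SELECT * FROM my_namespace.my_table LIMIT 10"

```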



---

---
title: Secrets Store
description: Wrangler commands for managing account secrets within a Secrets Store.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Secrets Store

Interact with the [Secrets Store](https://developers.cloudflare.com/secrets-store/) using Wrangler.

## `secrets-store secret`

Use the following commands to manage your account secrets.

`--remote` option

To interact with the Secrets Store in production, append `--remote` to your command. Without it, the command defaults to [local development mode](https://developers.cloudflare.com/workers/development-testing/).

### `secrets-store secret create`

Create a secret within a store

```

npx wrangler secrets-store secret create [STORE-ID]

```

* `[STORE-ID]` ` string ` required  
ID of the store in which the secret resides
* `--name` ` string ` required  
Name of the secret
* `--value` ` string `  
Value of the secret (Note: for testing only. This is not secure, as the secret value remains in plain text in your terminal history; omit this flag and use the interactive prompt instead)
* `--scopes` ` string ` required  
Scopes for the secret (a comma-separated list of scopes, for example "workers")
* `--comment` ` string `  
Comment for the secret
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store
* `--persist-to` ` string `  
Directory for local persistence

Global flags

* `--version` ` boolean ` alias: -v  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: -c  
Path to Wrangler configuration file
* `--env` ` string ` alias: -e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `create` command to create an account-level secret.

```

npx wrangler secrets-store secret create 8f7a1cdced6342c18d223ece462fd88d --name ServiceA_key-1 --scopes workers --remote

```

```

✓ Enter a secret value: › ***

🔐 Creating secret... (Name: ServiceA_key-1, Value: REDACTED, Scopes: workers, Comment: undefined)

✓ Select an account: › My account

✅ Created secret! (ID: 13bc7498c6374a4e9d13be091c3c65f1)

```

### `secrets-store secret update`

Update a secret within a store

```

npx wrangler secrets-store secret update [STORE-ID]

```

* `[STORE-ID]` ` string ` required  
ID of the store in which the secret resides
* `--secret-id` ` string ` required  
ID of the secret to update
* `--value` ` string `  
Updated value of the secret (Note: for testing only. This is not secure, as the secret value remains in plain text in your terminal history; omit this flag and use the interactive prompt instead)
* `--scopes` ` string `  
Updated scopes for the secret (a comma-separated list of scopes, for example "workers")
* `--comment` ` string `  
Updated comment for the secret
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store
* `--persist-to` ` string `  
Directory for local persistence
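
For example, reusing the store and secret IDs from the `create` example above to update a secret's scopes and comment:

```

npx wrangler secrets-store secret update 8f7a1cdced6342c18d223ece462fd88d --secret-id 13bc7498c6374a4e9d13be091c3c65f1 --scopes workers --comment "rotated key" --remote

```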


### `secrets-store secret duplicate`

Duplicate a secret within a store

```

npx wrangler secrets-store secret duplicate [STORE-ID]

```

* `[STORE-ID]` ` string ` required  
ID of the store in which the secret resides
* `--secret-id` ` string ` required  
ID of the secret to duplicate the secret value of
* `--name` ` string ` required  
Name of the new secret
* `--scopes` ` string ` required  
Scopes for the new secret
* `--comment` ` string `  
Comment for the new secret
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store
* `--persist-to` ` string `  
Directory for local persistence
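
For example, to create a new secret that reuses the value of an existing one (the store and secret IDs below come from the `create` example earlier on this page; the new name is a placeholder):

```

npx wrangler secrets-store secret duplicate 8f7a1cdced6342c18d223ece462fd88d --secret-id 13bc7498c6374a4e9d13be091c3c65f1 --name ServiceA_key-2 --scopes workers --remote

```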


### `secrets-store secret get`

Get a secret within a store

```

npx wrangler secrets-store secret get [STORE-ID]

```

* `[STORE-ID]` ` string ` required  
ID of the store in which the secret resides
* `--secret-id` ` string ` required  
ID of the secret to retrieve
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store
* `--persist-to` ` string `  
Directory for local persistence


The following is an example with the expected output:

```

npx wrangler secrets-store secret get 8f7a1cdced6342c18d223ece462fd88d --secret-id 13bc7498c6374a4e9d13be091c3c65f1 --remote

```

```

🔐 Getting secret... (ID: 13bc7498c6374a4e9d13be091c3c65f1)

✓ Select an account: › My account

| Name           | ID                               | StoreID                          | Comment | Scopes  | Status | Created               | Modified               |
|----------------|----------------------------------|----------------------------------|---------|---------|--------|-----------------------|------------------------|
| ServiceA_key-1 | 13bc7498c6374a4e9d13be091c3c65f1 | 8f7a1cdced6342c18d223ece462fd88d |         | workers | active | 4/9/2025, 10:06:01 PM | 4/15/2025, 09:13:05 AM |

```

### `secrets-store secret delete`

Delete a secret within a store

```

npx wrangler secrets-store secret delete [STORE-ID]

```

* `[STORE-ID]` ` string ` required  
ID of the store in which the secret resides
* `--secret-id` ` string ` required  
ID of the secret to delete
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store
* `--persist-to` ` string `  
Directory for local persistence


### `secrets-store secret list`

List secrets within a store

```

npx wrangler secrets-store secret list [STORE-ID]

```

* `[STORE-ID]` ` string ` required  
ID of the store in which to list secrets
* `--page` ` number ` default: 1  
Page number of the secrets listing results; configure the page size with `--per-page`
* `--per-page` ` number ` default: 10  
Number of secrets to show per page
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store
* `--persist-to` ` string `  
Directory for local persistence

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
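
As a sketch, the pagination flags combine like this (the store ID is a placeholder; the command is printed rather than executed):

Terminal window

```shell
# Hypothetical store ID; adjust --page and --per-page to walk the listing.
STORE_ID="8f7a1cdced6342c18d223ece462fd88d"
echo "npx wrangler secrets-store secret list $STORE_ID --per-page 5 --page 2 --remote"
```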

## `secrets-store store`

Use the following commands to manage your store.

Store limitation

[Secrets Store](https://developers.cloudflare.com/secrets-store/) is in open beta. Currently, you can only have one store per Cloudflare account.

### `secrets-store store create`

Create a store within an account

* [  npm ](#tab-panel-8264)
* [  pnpm ](#tab-panel-8265)
* [  yarn ](#tab-panel-8266)

Terminal window

```

npx wrangler secrets-store store create [NAME]


```

Terminal window

```

pnpm wrangler secrets-store store create [NAME]


```

Terminal window

```

yarn wrangler secrets-store store create [NAME]


```

* `[NAME]` ` string ` required  
Name of the store
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `create` command to create a store.

Terminal window

```

npx wrangler secrets-store store create default --remote


```

```

🔐 Creating store... (Name: default)

✅ Created store! (Name: default, ID: 2e2a82d317134506b58defbe16982d54)


```

### `secrets-store store delete`

Delete a store within an account

* [  npm ](#tab-panel-8267)
* [  pnpm ](#tab-panel-8268)
* [  yarn ](#tab-panel-8269)

Terminal window

```

npx wrangler secrets-store store delete [STORE-ID]


```

Terminal window

```

pnpm wrangler secrets-store store delete [STORE-ID]


```

Terminal window

```

yarn wrangler secrets-store store delete [STORE-ID]


```

* `[STORE-ID]` ` string ` required  
ID of the store
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `delete` command to delete a store.

Terminal window

```

npx wrangler secrets-store store delete d2dafaeac9434de2b6d08b292ce08211 --remote


```

```

🔐 Deleting store... (Name: d2dafaeac9434de2b6d08b292ce08211)

✅ Deleted store! (ID: d2dafaeac9434de2b6d08b292ce08211)


```

### `secrets-store store list`

List stores within an account

* [  npm ](#tab-panel-8270)
* [  pnpm ](#tab-panel-8271)
* [  yarn ](#tab-panel-8272)

Terminal window

```

npx wrangler secrets-store store list


```

Terminal window

```

pnpm wrangler secrets-store store list


```

Terminal window

```

yarn wrangler secrets-store store list


```

* `--page` ` number ` default: 1  
Page number of the stores listing results; configure the page size with `--per-page`
* `--per-page` ` number ` default: 10  
Number of stores to show per page
* `--remote` ` boolean ` default: false  
Execute command against remote Secrets Store

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

The following is an example of using the `list` command to list stores.

Terminal window

```

npx wrangler secrets-store store list --remote


```

```

🔐 Listing stores...

┌─────────┬──────────────────────────────────┬──────────────────────────────────┬──────────────────────┬──────────────────────┐

│ Name    │ ID                               │ AccountID                        │ Created              │ Modified             │

├─────────┼──────────────────────────────────┼──────────────────────────────────┼──────────────────────┼──────────────────────┤

│ default │ 8876bad33f164462bf0743fe8adf98f4 │ REDACTED │ 4/9/2025, 1:11:48 PM  │ 4/9/2025, 1:11:48 PM │

└─────────┴──────────────────────────────────┴──────────────────────────────────┴──────────────────────┴──────────────────────┘


```


---

---
title: Tunnel
description: Wrangler commands for managing Cloudflare Tunnels.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Tunnel

Manage [Cloudflare Tunnels](https://developers.cloudflare.com/tunnel/) directly from Wrangler. Create, run, and manage tunnels that securely connect your local services to Cloudflare's network — no public IPs required.

Note

All `wrangler tunnel` commands are **experimental** and may change without notice.

Wrangler manages the [cloudflared](https://developers.cloudflare.com/tunnel/downloads/) binary automatically. On first use, Wrangler will prompt you to download `cloudflared` to a local cache directory. You can skip this by installing `cloudflared` yourself and adding it to your `PATH`, or by setting the `CLOUDFLARED_PATH` environment variable to point to an existing binary.
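
If you already have `cloudflared` installed, pointing Wrangler at it is a one-line sketch. The path below is an assumption; substitute the output of `which cloudflared` on your machine.

Terminal window

```shell
# Tell Wrangler to reuse an existing cloudflared binary instead of
# downloading one (the path is illustrative).
export CLOUDFLARED_PATH="/usr/local/bin/cloudflared"
echo "cloudflared path: $CLOUDFLARED_PATH"
```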

### `tunnel create`

Create a new remotely managed [Cloudflare Tunnel](https://developers.cloudflare.com/tunnel/).

```

wrangler tunnel create <NAME>


```

* `NAME` ` string ` required  
   * A name for your tunnel. Must be unique within your account.

Tunnels created via Wrangler are always **remotely managed** — configure them in the [Cloudflare dashboard ↗](https://dash.cloudflare.com/?to=/:account/tunnels) or via the API.

After creation, use `wrangler tunnel run` with the tunnel ID to start the tunnel.

Terminal window

```

npx wrangler tunnel create my-app


```

```

Creating tunnel "my-app"

Created tunnel.

ID: f70ff985-a4ef-4643-bbbc-4a0ed4fc8415

Name: my-app


To run this tunnel, configure its ingress rules in the Cloudflare dashboard, then run:

   wrangler tunnel run f70ff985-a4ef-4643-bbbc-4a0ed4fc8415


```

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

### `tunnel delete`

Delete a Cloudflare Tunnel from your account.

```

wrangler tunnel delete <TUNNEL> [OPTIONS]


```

* `TUNNEL` ` string ` required  
   * The name or UUID of the tunnel to delete.
* `--force` ` boolean ` optional  
   * Skip the confirmation prompt.

Warning

Deleting a tunnel is permanent and cannot be undone. Any active connections through the tunnel will be terminated.

Terminal window

```

npx wrangler tunnel delete f70ff985-a4ef-4643-bbbc-4a0ed4fc8415


```

```

Are you sure you want to delete tunnel "f70ff985-a4ef-4643-bbbc-4a0ed4fc8415"? This action cannot be undone. (y/n)

Deleting tunnel f70ff985-a4ef-4643-bbbc-4a0ed4fc8415

Tunnel deleted.


```

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

### `tunnel info`

Display details about a Cloudflare Tunnel, including its ID, name, status, and creation time.

```

wrangler tunnel info <TUNNEL>


```

* `TUNNEL` ` string ` required  
   * The name or UUID of the tunnel to inspect.

Terminal window

```

npx wrangler tunnel info f70ff985-a4ef-4643-bbbc-4a0ed4fc8415


```

```

Getting tunnel details

ID: f70ff985-a4ef-4643-bbbc-4a0ed4fc8415

Name: my-app

Status: healthy

Created: 2025-01-15T10:30:00Z

Type: cfd_tunnel


```

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

### `tunnel list`

List all Cloudflare Tunnels in your account.

```

wrangler tunnel list


```

The output includes the tunnel ID, name, status, and creation date for each tunnel. Only non-deleted tunnels are shown.

Terminal window

```

npx wrangler tunnel list


```

```

Listing Cloudflare Tunnels


ID                                   Name       Status    Created

f70ff985-a4ef-4643-bbbc-4a0ed4fc8415 my-app     healthy   2025-01-15T10:30:00Z

550e8400-e29b-41d4-a716-446655440000 api-tunnel inactive  2025-01-10T15:45:00Z


```

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

### `tunnel run`

Run a Cloudflare Tunnel using the [cloudflared](https://developers.cloudflare.com/tunnel/downloads/) daemon. This starts a persistent connection between your local machine and Cloudflare's network.

```

wrangler tunnel run [TUNNEL] [OPTIONS]


```

* `TUNNEL` ` string ` optional  
   * The name or UUID of the tunnel to run. Required unless `--token` is provided.
* `--token` ` string ` optional  
   * A tunnel token to use directly. Skips API authentication.
* `--log-level` ` string ` (default: info) optional  
   * Log level for `cloudflared`. Does not affect Wrangler logs (controlled by `WRANGLER_LOG`). One of: `debug`, `info`, `warn`, `error`, `fatal`.

Named tunnels are **remotely managed** — configure ingress rules (which local services to expose) in the [Cloudflare dashboard ↗](https://dash.cloudflare.com/?to=/:account/tunnels) or via the API before running the tunnel.

There are two ways to run a tunnel:

**By tunnel name or ID** (fetches the token via the API):

Terminal window

```

npx wrangler tunnel run my-app


```

**By token** (no API authentication needed — useful for CI/CD or remote servers):

Terminal window

```

npx wrangler tunnel run --token eyJhIjoiNGE2MjY...


```

Note

The tunnel token is passed to `cloudflared` via the `TUNNEL_TOKEN` environment variable rather than CLI arguments, preventing it from appearing in process listings.

Press `Ctrl+C` to stop the tunnel. Wrangler will send a graceful shutdown signal to `cloudflared` before exiting.

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.

---

### `tunnel quick-start`

Start a free, temporary tunnel without a Cloudflare account using [Quick Tunnels](https://developers.cloudflare.com/tunnel/setup/#quick-tunnels-development). This is useful for quick demos, testing webhooks, or sharing local development servers.

```

wrangler tunnel quick-start <URL>


```

* `URL` ` string ` required  
   * The local URL to expose (for example, `http://localhost:8080`).

The tunnel is assigned a random `*.trycloudflare.com` subdomain and lasts for the duration of the process.

Terminal window

```

npx wrangler tunnel quick-start http://localhost:8080


```

```

Starting quick tunnel to http://localhost:8080...

Your tunnel URL: https://random-words-here.trycloudflare.com


```

Note

Quick tunnels are anonymous and temporary — they do not appear in your account's tunnel list and cannot be configured. For production use, create a named tunnel with `wrangler tunnel create`.

The following global flags work on every command:

* `--help` ` boolean `  
   * Show help.
* `--config` ` string ` (not supported by Pages)  
   * Path to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `--cwd` ` string `  
   * Run as if Wrangler was started in the specified directory instead of the current working directory.


---

---
title: Vectorize
description: Wrangler commands for interacting with Vectorize vector databases.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Vectorize

Interact with a [Vectorize](https://developers.cloudflare.com/vectorize/) vector database using Wrangler.

## `vectorize create`

Create a Vectorize index

* [  npm ](#tab-panel-8273)
* [  pnpm ](#tab-panel-8274)
* [  yarn ](#tab-panel-8275)

Terminal window

```

npx wrangler vectorize create [NAME]


```

Terminal window

```

pnpm wrangler vectorize create [NAME]


```

Terminal window

```

yarn wrangler vectorize create [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index to create (must be unique).
* `--dimensions` ` number `  
The dimension size to configure this index for, based on the output dimensions of your ML model.
* `--metric` ` string `  
The distance metric to use for searching within the index.
* `--preset` ` string `  
The name of a preset representing an embeddings model; when provided, Vectorize configures the dimensions and distance metric for you.
* `--description` ` string `  
An optional description for this index.
* `--json` ` boolean ` default: false  
Return output as JSON
* `--deprecated-v1` ` boolean ` default: false  
Create a deprecated Vectorize V1 index. This is not recommended; indexes created with this option require the flag to be set on all other Vectorize operations.
* `--use-remote` ` boolean `  
Use a remote binding when adding the newly created resource to your config
* `--update-config` ` boolean `  
Automatically update your config file with the newly added resource
* `--binding` ` string `  
The binding name of this resource in your Worker

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
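
As a sketch, `--dimensions` and `--metric` pair up like this for a hypothetical 768-dimension embedding model. The index name and metric value are illustrative (a `--preset` can set both for you), and the command is printed rather than executed:

Terminal window

```shell
# Illustrative only: match --dimensions to your embedding model's
# output size and pick a supported --metric.
echo "npx wrangler vectorize create my-embeddings --dimensions 768 --metric cosine"
```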

## `vectorize delete`

Delete a Vectorize index

* [  npm ](#tab-panel-8276)
* [  pnpm ](#tab-panel-8277)
* [  yarn ](#tab-panel-8278)

Terminal window

```

npx wrangler vectorize delete [NAME]


```

Terminal window

```

pnpm wrangler vectorize delete [NAME]


```

Terminal window

```

yarn wrangler vectorize delete [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index
* `--force` ` boolean ` alias: --y default: false  
Skip confirmation
* `--deprecated-v1` ` boolean ` default: false  
Delete a deprecated Vectorize V1 index.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize get`

Get a Vectorize index by name

* [  npm ](#tab-panel-8279)
* [  pnpm ](#tab-panel-8280)
* [  yarn ](#tab-panel-8281)

Terminal window

```

npx wrangler vectorize get [NAME]


```

Terminal window

```

pnpm wrangler vectorize get [NAME]


```

Terminal window

```

yarn wrangler vectorize get [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--json` ` boolean ` default: false  
Return output as JSON
* `--deprecated-v1` ` boolean ` default: false  
Fetch a deprecated Vectorize V1 index. This must be enabled if the index was created with the V1 option.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize list`

List your Vectorize indexes

* [  npm ](#tab-panel-8282)
* [  pnpm ](#tab-panel-8283)
* [  yarn ](#tab-panel-8284)

Terminal window

```

npx wrangler vectorize list


```

Terminal window

```

pnpm wrangler vectorize list


```

Terminal window

```

yarn wrangler vectorize list


```

* `--json` ` boolean ` default: false  
Return output as JSON
* `--deprecated-v1` ` boolean ` default: false  
List deprecated Vectorize V1 indexes for your account.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize list-vectors`

List vector identifiers in a Vectorize index

* [  npm ](#tab-panel-8285)
* [  pnpm ](#tab-panel-8286)
* [  yarn ](#tab-panel-8287)

Terminal window

```

npx wrangler vectorize list-vectors [NAME]


```

Terminal window

```

pnpm wrangler vectorize list-vectors [NAME]


```

Terminal window

```

yarn wrangler vectorize list-vectors [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index
* `--count` ` number `  
Maximum number of vectors to return (1-1000)
* `--cursor` ` string `  
Cursor for pagination to get the next page of results
* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize query`

Query a Vectorize index

* [  npm ](#tab-panel-8288)
* [  pnpm ](#tab-panel-8289)
* [  yarn ](#tab-panel-8290)

Terminal window

```

npx wrangler vectorize query [NAME]


```

Terminal window

```

pnpm wrangler vectorize query [NAME]


```

Terminal window

```

yarn wrangler vectorize query [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index
* `--vector` ` number `  
The vector with which to query the Vectorize index
* `--vector-id` ` string `  
Identifier for a vector in the index against which the index should be queried
* `--top-k` ` number ` default: 5  
The number of results (nearest neighbors) to return
* `--return-values` ` boolean ` default: false  
Specify if the vector values should be included in the results
* `--return-metadata` ` string ` default: none  
Specify if the vector metadata should be included in the results
* `--namespace` ` string `  
Filter the query results based on this namespace
* `--filter` ` string `  
Filter the query results based on this metadata filter.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
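
The query flags compose as follows. This sketch queries by an existing vector ID and narrows results with a metadata filter; the index name, vector ID, and the `$eq` filter shape are assumptions to adapt to your data, and the command is printed rather than executed.

Terminal window

```shell
# Hypothetical metadata filter: match vectors whose "category"
# metadata field equals "docs".
FILTER='{"category": {"$eq": "docs"}}'
echo "npx wrangler vectorize query my-embeddings --vector-id vec-1 --top-k 3 --filter '$FILTER'"
```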

## `vectorize insert`

Insert vectors into a Vectorize index

* [  npm ](#tab-panel-8291)
* [  pnpm ](#tab-panel-8292)
* [  yarn ](#tab-panel-8293)

Terminal window

```

npx wrangler vectorize insert [NAME]


```

Terminal window

```

pnpm wrangler vectorize insert [NAME]


```

Terminal window

```

yarn wrangler vectorize insert [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--file` ` string ` required  
A file containing newline-delimited JSON (ndjson) vector objects.
* `--batch-size` ` number ` default: 1000  
Number of vector records to include in a single batch when sending to the Cloudflare API.
* `--json` ` boolean ` default: false  
Return output as JSON
* `--deprecated-v1` ` boolean ` default: false  
Insert into a deprecated V1 Vectorize index. This must be enabled if the index was created with the V1 option.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources
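
The `--file` argument expects one JSON vector object per line. The following is a minimal sketch of producing such a file — the field names (`id`, `values`, `metadata`) follow the Vectorize vector format, but treat the exact shape as an assumption to verify for your index.

Terminal window

```shell
# Write two vector records as newline-delimited JSON (ndjson).
cat > vectors.ndjson <<'EOF'
{"id": "vec-1", "values": [0.12, 0.45, 0.67], "metadata": {"category": "docs"}}
{"id": "vec-2", "values": [0.88, 0.11, 0.35]}
EOF

# The file can then be handed to the insert command (illustrative index name):
echo "npx wrangler vectorize insert my-embeddings --file vectors.ndjson"
```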

## `vectorize upsert`

Upsert vectors into a Vectorize index

* [  npm ](#tab-panel-8294)
* [  pnpm ](#tab-panel-8295)
* [  yarn ](#tab-panel-8296)

Terminal window

```

npx wrangler vectorize upsert [NAME]


```

Terminal window

```

pnpm wrangler vectorize upsert [NAME]


```

Terminal window

```

yarn wrangler vectorize upsert [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--file` ` string ` required  
A file containing newline-delimited JSON (ndjson) vector objects.
* `--batch-size` ` number ` default: 5000  
Number of vector records to include in a single upsert batch when sending to the Cloudflare API.
* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize get-vectors`

Get vectors from a Vectorize index

* [  npm ](#tab-panel-8297)
* [  pnpm ](#tab-panel-8298)
* [  yarn ](#tab-panel-8299)

Terminal window

```

npx wrangler vectorize get-vectors [NAME]


```

Terminal window

```

pnpm wrangler vectorize get-vectors [NAME]


```

Terminal window

```

yarn wrangler vectorize get-vectors [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--ids` ` string ` required  
Vector identifiers to be fetched from the Vectorize Index. Example: `--ids a 'b' 1 '2'`

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize delete-vectors`

Delete vectors in a Vectorize index

* [  npm ](#tab-panel-8300)
* [  pnpm ](#tab-panel-8301)
* [  yarn ](#tab-panel-8302)

Terminal window

```

npx wrangler vectorize delete-vectors [NAME]


```

Terminal window

```

pnpm wrangler vectorize delete-vectors [NAME]


```

Terminal window

```

yarn wrangler vectorize delete-vectors [NAME]


```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--ids` ` string ` required  
Vector identifiers to be deleted from the Vectorize Index. Example: `--ids a 'b' 1 '2'`

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize info`

Get additional details about the index

```sh
# npm
npx wrangler vectorize info [NAME]

# pnpm
pnpm wrangler vectorize info [NAME]

# yarn
yarn wrangler vectorize info [NAME]
```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize create-metadata-index`

Enable metadata filtering on the specified property

```sh
# npm
npx wrangler vectorize create-metadata-index [NAME]

# pnpm
pnpm wrangler vectorize create-metadata-index [NAME]

# yarn
yarn wrangler vectorize create-metadata-index [NAME]
```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--propertyName` ` string ` required  
The name of the metadata property to index.
* `--type` ` string ` required  
The type of metadata property to index. Valid types are 'string', 'number' and 'boolean'.
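A hedged sketch, using a hypothetical index `my-index` and metadata property `category`: since only `string`, `number`, and `boolean` are accepted, the `--type` value can be guarded locally before invoking Wrangler.

```shell
# Validate --type against the documented values before calling wrangler:
TYPE="string"
case "$TYPE" in
  string|number|boolean) echo "type ok" ;;
  *) echo "invalid type: $TYPE" >&2; exit 1 ;;
esac

# npx wrangler vectorize create-metadata-index my-index --propertyName category --type "$TYPE"
```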

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize list-metadata-index`

List metadata properties on which metadata filtering is enabled

```sh
# npm
npx wrangler vectorize list-metadata-index [NAME]

# pnpm
pnpm wrangler vectorize list-metadata-index [NAME]

# yarn
yarn wrangler vectorize list-metadata-index [NAME]
```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--json` ` boolean ` default: false  
Return output as JSON

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vectorize delete-metadata-index`

Delete metadata indexes

```sh
# npm
npx wrangler vectorize delete-metadata-index [NAME]

# pnpm
pnpm wrangler vectorize delete-metadata-index [NAME]

# yarn
yarn wrangler vectorize delete-metadata-index [NAME]
```

* `[NAME]` ` string ` required  
The name of the Vectorize index.
* `--propertyName` ` string ` required  
The name of the metadata property whose metadata index should be deleted.

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: VPC
description: Wrangler commands for managing Workers VPC services.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# VPC

Manage [Workers VPC](https://developers.cloudflare.com/workers-vpc/) services using Wrangler. VPC services allow your Workers to connect to private services on your network through Cloudflare Tunnels.

## `vpc service create`

Create a new VPC service

```sh
# npm
npx wrangler vpc service create [NAME]

# pnpm
pnpm wrangler vpc service create [NAME]

# yarn
yarn wrangler vpc service create [NAME]
```

* `[NAME]` ` string ` required  
The name of the VPC service
* `--type` ` string ` required  
The type of the VPC service
* `--tcp-port` ` number `  
TCP port number
* `--app-protocol` ` string `  
Application protocol for the TCP service
* `--http-port` ` number `  
HTTP port (default: 80)
* `--https-port` ` number `  
HTTPS port number (default: 443)
* `--ipv4` ` string `  
IPv4 address for the host (conflicts with `--ipv6`)
* `--ipv6` ` string `  
IPv6 address for the host (conflicts with `--ipv4`)
* `--hostname` ` string `  
Hostname for the host
* `--resolver-ips` ` string `  
Comma-separated list of resolver IPs
* `--tunnel-id` ` string ` required  
UUID of the Cloudflare tunnel
* `--cert-verification-mode` ` string `  
TLS certificate verification mode for the connection to the origin
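A hedged sketch of creating a service for an HTTP origin behind a tunnel. The service name `internal-api`, the hostname, and the all-zeros tunnel ID are placeholders (substitute your own tunnel's UUID), and `http` is assumed to be a valid `--type` value; the UUID shape can be sanity-checked locally before calling Wrangler.

```shell
# Placeholder tunnel ID -- replace with your Cloudflare tunnel's UUID:
TUNNEL_ID="00000000-0000-0000-0000-000000000000"

# Sanity-check the UUID shape before invoking wrangler:
echo "$TUNNEL_ID" | grep -Eq '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$' \
  && echo "uuid ok"

# npx wrangler vpc service create internal-api --type http --hostname internal-api.example.com --tunnel-id "$TUNNEL_ID"
```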

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vpc service delete`

Delete a VPC service

```sh
# npm
npx wrangler vpc service delete [SERVICE-ID]

# pnpm
pnpm wrangler vpc service delete [SERVICE-ID]

# yarn
yarn wrangler vpc service delete [SERVICE-ID]
```

* `[SERVICE-ID]` ` string ` required  
The ID of the service to delete

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vpc service get`

Get a VPC service

```sh
# npm
npx wrangler vpc service get [SERVICE-ID]

# pnpm
pnpm wrangler vpc service get [SERVICE-ID]

# yarn
yarn wrangler vpc service get [SERVICE-ID]
```

* `[SERVICE-ID]` ` string ` required  
The ID of the VPC service

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vpc service list`

List VPC services

```sh
# npm
npx wrangler vpc service list

# pnpm
pnpm wrangler vpc service list

# yarn
yarn wrangler vpc service list
```

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `vpc service update`

Update a VPC service

```sh
# npm
npx wrangler vpc service update [SERVICE-ID]

# pnpm
pnpm wrangler vpc service update [SERVICE-ID]

# yarn
yarn wrangler vpc service update [SERVICE-ID]
```

* `[SERVICE-ID]` ` string ` required  
The ID of the VPC service to update
* `--name` ` string ` required  
The name of the VPC service
* `--type` ` string ` required  
The type of the VPC service
* `--tcp-port` ` number `  
TCP port number
* `--app-protocol` ` string `  
Application protocol for the TCP service
* `--http-port` ` number `  
HTTP port (default: 80)
* `--https-port` ` number `  
HTTPS port number (default: 443)
* `--ipv4` ` string `  
IPv4 address for the host (conflicts with `--ipv6`)
* `--ipv6` ` string `  
IPv6 address for the host (conflicts with `--ipv4`)
* `--hostname` ` string `  
Hostname for the host
* `--resolver-ips` ` string `  
Comma-separated list of resolver IPs
* `--tunnel-id` ` string ` required  
UUID of the Cloudflare tunnel
* `--cert-verification-mode` ` string `  
TLS certificate verification mode for the connection to the origin

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: Workers for Platforms
description: Wrangler commands for managing Workers for Platforms dispatch namespaces.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workers for Platforms

Manage Workers for Platforms [dispatch namespaces](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dispatch-namespace) using Wrangler.

## `dispatch-namespace list`

List all dispatch namespaces

```sh
# npm
npx wrangler dispatch-namespace list

# pnpm
pnpm wrangler dispatch-namespace list

# yarn
yarn wrangler dispatch-namespace list
```

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `dispatch-namespace get`

Get information about a dispatch namespace

```sh
# npm
npx wrangler dispatch-namespace get [NAME]

# pnpm
pnpm wrangler dispatch-namespace get [NAME]

# yarn
yarn wrangler dispatch-namespace get [NAME]
```

* `[NAME]` ` string ` required  
Name of the dispatch namespace

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `dispatch-namespace create`

Create a dispatch namespace

```sh
# npm
npx wrangler dispatch-namespace create [NAME]

# pnpm
pnpm wrangler dispatch-namespace create [NAME]

# yarn
yarn wrangler dispatch-namespace create [NAME]
```

* `[NAME]` ` string ` required  
Name of the dispatch namespace

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `dispatch-namespace delete`

Delete a dispatch namespace

```sh
# npm
npx wrangler dispatch-namespace delete [NAME]

# pnpm
pnpm wrangler dispatch-namespace delete [NAME]

# yarn
yarn wrangler dispatch-namespace delete [NAME]
```

* `[NAME]` ` string ` required  
Name of the dispatch namespace

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

Note

You must delete all user Workers in the dispatch namespace before it can be deleted.

## `dispatch-namespace rename`

Rename a dispatch namespace

```sh
# npm
npx wrangler dispatch-namespace rename [OLDNAME] [NEWNAME]

# pnpm
pnpm wrangler dispatch-namespace rename [OLDNAME] [NEWNAME]

# yarn
yarn wrangler dispatch-namespace rename [OLDNAME] [NEWNAME]
```

* `[OLDNAME]` ` string ` required  
Name of the dispatch namespace
* `[NEWNAME]` ` string ` required  
New name of the dispatch namespace

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: Workflows
description: Wrangler commands for managing and configuring Cloudflare Workflows.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Workflows

Manage and configure [Workflows](https://developers.cloudflare.com/workflows/) using Wrangler.

Note

The `wrangler workflows` command requires Wrangler version `3.83.0` or greater. Use `npx wrangler@latest` to always use the latest Wrangler version when invoking commands.

`--local` option

All `wrangler workflows` commands support the `--local` flag to target a Workflow running in a local [wrangler dev](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) session instead of production. Use `--port` to specify the port of the dev session (defaults to `8787`).

The `--local` flag requires Wrangler version `4.79.0` or greater.

For more information, refer to [Workflows local development](https://developers.cloudflare.com/workflows/build/local-development/).
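As a minimal sketch, targeting a dev session on a non-default port looks like the following. It assumes a session was started elsewhere with `npx wrangler dev --port 8788`; the port-range guard is just a local sanity check.

```shell
# Port of the running `wrangler dev` session (default would be 8787):
PORT=8788

# Guard against an out-of-range port before passing it through:
[ "$PORT" -ge 1 ] && [ "$PORT" -le 65535 ] && echo "port ok"

# npx wrangler workflows list --local --port "$PORT"
```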

## `workflows list`

List the Workflows associated with your account

```sh
# npm
npx wrangler workflows list

# pnpm
pnpm wrangler workflows list

# yarn
yarn wrangler workflows list
```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `--page` ` number ` default: 1  
Show a specific page of the listing; configure the page size with `--per-page`
* `--per-page` ` number `  
Configure the maximum number of workflows to show per page

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows describe`

Describe a Workflow resource

```sh
# npm
npx wrangler workflows describe [NAME]

# pnpm
pnpm wrangler workflows describe [NAME]

# yarn
yarn wrangler workflows describe [NAME]
```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows delete`

Delete a Workflow. Deleting a Workflow also deletes all of its instances.

```sh
# npm
npx wrangler workflows delete [NAME]

# pnpm
pnpm wrangler workflows delete [NAME]

# yarn
yarn wrangler workflows delete [NAME]
```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows trigger`

Trigger a Workflow, creating a new instance. Optionally accepts a JSON string to pass parameters to the instance.

```sh
# npm
npx wrangler workflows trigger [NAME] [PARAMS]

# pnpm
pnpm wrangler workflows trigger [NAME] [PARAMS]

# yarn
yarn wrangler workflows trigger [NAME] [PARAMS]
```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `[PARAMS]` ` string ` default:  
Params for the workflow instance, encoded as a JSON string
* `--id` ` string `  
Custom instance ID, if not provided it will default to a random UUIDv4
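Since `[PARAMS]` is parsed as JSON, it is worth validating the payload locally before triggering. A hedged sketch, with a hypothetical Workflow named `order-processor` (single quotes keep the JSON intact for the shell):

```shell
PARAMS='{"orderId": 123, "priority": "high"}'

# Validate the payload is well-formed JSON before sending it:
python3 -c 'import json, sys; json.loads(sys.argv[1])' "$PARAMS" && echo "params ok"

# npx wrangler workflows trigger order-processor "$PARAMS"
```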

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows instances list`

List the instances of a Workflow

```sh
# npm
npx wrangler workflows instances list [NAME]

# pnpm
pnpm wrangler workflows instances list [NAME]

# yarn
yarn wrangler workflows instances list [NAME]
```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `--reverse` ` boolean ` default: false  
Reverse order of the instances table
* `--status` ` string `  
Filters list by instance status (can be one of: queued, running, paused, errored, terminated, complete)
* `--page` ` number ` default: 1  
Show a specific page of the listing; configure the page size with `--per-page`
* `--per-page` ` number `  
Configure the maximum number of instances to show per page
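A hedged sketch of filtering the listing, with a hypothetical Workflow named `order-processor`: the `--status` value can be checked against the documented set before the call.

```shell
# Validate the --status filter against the documented values:
STATUS="errored"
case "$STATUS" in
  queued|running|paused|errored|terminated|complete) echo "status ok" ;;
  *) echo "invalid status: $STATUS" >&2; exit 1 ;;
esac

# npx wrangler workflows instances list order-processor --status "$STATUS" --per-page 10
```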

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows instances describe`

Describe a workflow instance - see its logs, retries and errors

* [  npm ](#tab-panel-8360)
* [  pnpm ](#tab-panel-8361)
* [  yarn ](#tab-panel-8362)

Terminal window

```

npx wrangler workflows instances describe [NAME] [ID]


```

Terminal window

```

pnpm wrangler workflows instances describe [NAME] [ID]


```

Terminal window

```

yarn wrangler workflows instances describe [NAME] [ID]


```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `[ID]` ` string ` default: latest  
ID of the instance - instead of a UUID you can type 'latest' to describe the most recent instance
* `--step-output` ` boolean ` default: true  
Output each step's result; set `--step-output=false` to suppress output that might clutter the terminal
* `--truncate-output-limit` ` number ` default: 5000  
Truncate each step's output after the given number of characters
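
For example, to inspect the most recent instance while keeping step output short (the workflow name `my-workflow` is a placeholder):

```shell
# Describe the latest instance, truncating each step's output at 1000 characters
npx wrangler workflows instances describe my-workflow latest \
  --truncate-output-limit 1000
```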

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows instances send-event`

Send an event to a workflow instance

* [  npm ](#tab-panel-8363)
* [  pnpm ](#tab-panel-8364)
* [  yarn ](#tab-panel-8365)

Terminal window

```

npx wrangler workflows instances send-event [NAME] [ID]


```

Terminal window

```

pnpm wrangler workflows instances send-event [NAME] [ID]


```

Terminal window

```

yarn wrangler workflows instances send-event [NAME] [ID]


```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `[ID]` ` string ` required  
ID of the instance - instead of a UUID you can type 'latest' to send the event to the most recent instance
* `--type` ` string ` required  
Type of the workflow event
* `--payload` ` string ` default: {}  
JSON string for the workflow event (e.g., '{"key": "value"}')
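
For example, to deliver a typed event with a JSON payload to the most recent instance (the workflow name `my-workflow` and event type `order-shipped` are placeholders):

```shell
# Send an "order-shipped" event with a JSON payload to the latest instance
npx wrangler workflows instances send-event my-workflow latest \
  --type order-shipped \
  --payload '{"orderId": "1234"}'
```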

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows instances terminate`

Terminate a workflow instance

* [  npm ](#tab-panel-8366)
* [  pnpm ](#tab-panel-8367)
* [  yarn ](#tab-panel-8368)

Terminal window

```

npx wrangler workflows instances terminate [NAME] [ID]


```

Terminal window

```

pnpm wrangler workflows instances terminate [NAME] [ID]


```

Terminal window

```

yarn wrangler workflows instances terminate [NAME] [ID]


```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `[ID]` ` string ` required  
ID of the instance - instead of a UUID you can type 'latest' to terminate the most recent instance

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows instances restart`

Restart a workflow instance

* [  npm ](#tab-panel-8369)
* [  pnpm ](#tab-panel-8370)
* [  yarn ](#tab-panel-8371)

Terminal window

```

npx wrangler workflows instances restart [NAME] [ID]


```

Terminal window

```

pnpm wrangler workflows instances restart [NAME] [ID]


```

Terminal window

```

yarn wrangler workflows instances restart [NAME] [ID]


```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `[ID]` ` string ` required  
ID of the instance - instead of a UUID you can type 'latest' to restart the most recent instance

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows instances pause`

Pause a workflow instance

* [  npm ](#tab-panel-8372)
* [  pnpm ](#tab-panel-8373)
* [  yarn ](#tab-panel-8374)

Terminal window

```

npx wrangler workflows instances pause [NAME] [ID]


```

Terminal window

```

pnpm wrangler workflows instances pause [NAME] [ID]


```

Terminal window

```

yarn wrangler workflows instances pause [NAME] [ID]


```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `[ID]` ` string ` required  
ID of the instance - instead of a UUID you can type 'latest' to pause the most recent instance

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources

## `workflows instances resume`

Resume a workflow instance

* [  npm ](#tab-panel-8375)
* [  pnpm ](#tab-panel-8376)
* [  yarn ](#tab-panel-8377)

Terminal window

```

npx wrangler workflows instances resume [NAME] [ID]


```

Terminal window

```

pnpm wrangler workflows instances resume [NAME] [ID]


```

Terminal window

```

yarn wrangler workflows instances resume [NAME] [ID]


```

* `--local` ` boolean `  
Interact with local dev session
* `--port` ` number ` default: 8787  
Port of the local dev session (default: 8787)
* `[NAME]` ` string ` required  
Name of the workflow
* `[ID]` ` string ` required  
ID of the instance - instead of a UUID you can type 'latest' to resume the most recent instance

Global flags

* `--v` ` boolean ` alias: --version  
Show version number
* `--cwd` ` string `  
Run as if Wrangler was started in the specified directory instead of the current working directory
* `--config` ` string ` alias: --c  
Path to Wrangler configuration file
* `--env` ` string ` alias: --e  
Environment to use for operations, and for selecting .env and .dev.vars files
* `--env-file` ` string `  
Path to an .env file to load - can be specified multiple times - values from earlier files are overridden by values in later files
* `--experimental-provision` ` boolean ` aliases: --x-provision default: true  
Experimental: Enable automatic resource provisioning
* `--experimental-auto-create` ` boolean ` alias: --x-auto-create default: true  
Automatically provision draft bindings with new resources


---

---
title: Configuration
description: Use a configuration file to customize the development and deployment setup for your Worker project and other Developer Platform products.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Configuration

Wrangler optionally uses a configuration file to customize the development and deployment setup for a Worker.

Note

As of v3.91.0, Wrangler supports both JSON (`wrangler.json` or `wrangler.jsonc`) and TOML (`wrangler.toml`) for its configuration file. Prior to that version, only `wrangler.toml` was supported.

Cloudflare recommends using `wrangler.jsonc` for new projects, and some newer Wrangler features will only be available to projects using a JSON config file.

Wrangler's configuration file has the same structure in both formats; only the syntax differs.

You can use one of the many available online converters to easily switch between the two.

Throughout this page and the rest of Cloudflare's documentation, configuration snippets are provided as both JSON and TOML.

It is best practice to treat Wrangler's configuration file as the [source of truth](#source-of-truth) for configuring a Worker.

## Sample Wrangler configuration

* [  wrangler.jsonc ](#tab-panel-8400)
* [  wrangler.toml ](#tab-panel-8401)

```

{

  "$schema": "./node_modules/wrangler/config-schema.json",

  // Top-level configuration

  "name": "my-worker",

  "main": "src/index.js",

  // Set this to today's date

  "compatibility_date": "2026-04-03",

  "workers_dev": false,

  "route": {

    "pattern": "example.org/*",

    "zone_name": "example.org",

  },

  "kv_namespaces": [

    {

      "binding": "<MY_NAMESPACE>",

      "id": "<KV_ID>",

    },

  ],

  "env": {

    "staging": {

      "name": "my-worker-staging",

      "route": {

        "pattern": "staging.example.org/*",

        "zone_name": "example.org",

      },

      "kv_namespaces": [

        {

          "binding": "<MY_NAMESPACE>",

          "id": "<STAGING_KV_ID>",

        },

      ],

    },

  },

}


```

```

"$schema" = "./node_modules/wrangler/config-schema.json"

name = "my-worker"

main = "src/index.js"

# Set this to today's date

compatibility_date = "2026-04-03"

workers_dev = false


[route]

pattern = "example.org/*"

zone_name = "example.org"


[[kv_namespaces]]

binding = "<MY_NAMESPACE>"

id = "<KV_ID>"


[env.staging]

name = "my-worker-staging"


  [env.staging.route]

  pattern = "staging.example.org/*"

  zone_name = "example.org"


  [[env.staging.kv_namespaces]]

  binding = "<MY_NAMESPACE>"

  id = "<STAGING_KV_ID>"


```

## Environments

You can define different configurations for a Worker using Wrangler [environments](https://developers.cloudflare.com/workers/wrangler/environments/). There is a default (top-level) environment and you can create named environments that provide environment-specific configuration.

These are defined under `[env.<name>]` keys, such as `[env.staging]`, which you can then preview or deploy with the `-e` / `--env` flag in `wrangler` commands, for example `npx wrangler deploy --env staging`.

The majority of keys are inheritable, meaning that top-level configuration can be used in environments. [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), such as `vars` or `kv_namespaces`, are not inheritable and need to be defined explicitly.

Further, a few keys can appear _only_ at the top level.

Note

If you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you select the environment at dev or build time via the `CLOUDFLARE_ENV` environment variable rather than the `--env` flag. Otherwise, environments are defined in your Worker config file as usual. For more detail on using environments with the Cloudflare Vite plugin, refer to the [plugin documentation](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/).
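
As a sketch, selecting the `staging` environment defined above in each workflow might look like this:

```shell
# With Wrangler directly, select the environment via the --env flag
npx wrangler deploy --env staging

# With the Cloudflare Vite plugin, select it via the CLOUDFLARE_ENV variable
CLOUDFLARE_ENV=staging npx vite build
```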

## Automatic provisioning

[Beta](https://developers.cloudflare.com/changelog/2025-10-24-automatic-resource-provisioning/) 

Wrangler can automatically provision resources when you deploy your Worker, without you having to create them ahead of time.

This currently works for KV, R2, and D1 bindings.

To use this feature, add bindings to your configuration file _without_ resource IDs - or, in the case of R2, without a bucket name. Resources will be created with your Worker's name as the prefix.

* [  wrangler.jsonc ](#tab-panel-8378)
* [  wrangler.toml ](#tab-panel-8379)

```

{

  "kv_namespaces": [

    {

      "binding": "<MY_KV_NAMESPACE>",

    },

  ],

}


```

```

[[kv_namespaces]]

binding = "<MY_KV_NAMESPACE>"


```

When you run `wrangler dev`, local resources are created automatically and persist between runs. When you run `wrangler deploy`, resources are created for you, and their IDs are written back to your configuration file.

If you deploy a Worker whose bindings lack resource IDs from the dashboard (for example, via the GitHub integration), the resources will still be created, but their IDs will only be accessible via the dashboard. Currently, these resource IDs will not be written back to your repository.

## Top-level only keys

Top-level keys apply to the Worker as a whole (and therefore all environments). They cannot be defined within named environments.

* `keep_vars` ` boolean ` optional  
   * Whether Wrangler should keep variables configured in the dashboard on deploy. Refer to [source of truth](#source-of-truth).
* `migrations` ` object[] ` optional  
   * When making changes to your Durable Object classes, you must perform a migration. Refer to [Durable Object migrations](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/).
* `send_metrics` ` boolean ` optional  
   * Whether Wrangler should send usage data to Cloudflare for this project. Defaults to `true`. You can learn more about this in our [data policy ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler/telemetry.md).
* `site` ` object ` optional deprecated  
   * See the [Workers Sites](#workers-sites) section below for more information. Cloudflare Pages and Workers Assets are preferred over this approach.  
   * This is not supported by the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).

## Inheritable keys

Inheritable keys are configurable at the top-level, and can be inherited (or overridden) by environment-specific configuration.

Note

At a minimum, the `name`, `main` and `compatibility_date` keys are required to deploy a Worker.

The `main` key is optional for assets-only Workers.

* `name` ` string ` required  
   * The name of your Worker. Alphanumeric characters (`a`,`b`,`c`, etc.) and dashes (`-`) only. Do not use underscores (`_`). Worker names can be up to 255 characters. If you plan to use a [workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/), the name must be 63 characters or less and cannot start or end with a dash.
* `main` ` string ` required  
   * The path to the entrypoint of your Worker that will be executed. For example: `./src/index.ts`.
* `compatibility_date` ` string ` required  
   * A date in the form `yyyy-mm-dd`, which will be used to determine which version of the Workers runtime is used. Refer to [Compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/).
* `account_id` ` string ` optional  
   * This is the ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the zone/route you provide, if you provide one. It can also be specified through the `CLOUDFLARE_ACCOUNT_ID` environment variable.
* `compatibility_flags` ` string[] ` optional  
   * A list of flags that enable features from upcoming features of the Workers runtime, usually used together with `compatibility_date`. Refer to [compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/).
* `workers_dev` ` boolean ` optional  
   * Enables use of `*.workers.dev` subdomain to deploy your Worker. If you have a Worker that is only for `scheduled` events, you can set this to `false`. Defaults to `true`. Refer to [types of routes](#types-of-routes).
* `preview_urls` ` boolean ` optional  
   * Enables use of Preview URLs to test your Worker. Defaults to value of `workers_dev`. Refer to [Preview URLs](https://developers.cloudflare.com/workers/configuration/previews).
* `route` ` Route ` optional  
   * A route that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to [types of routes](#types-of-routes).
* `routes` ` Route[] ` optional  
   * An array of routes that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to [types of routes](#types-of-routes).
* `tsconfig` ` string ` optional  
   * Path to a custom `tsconfig`.  
   * Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
* `triggers` ` object ` optional  
   * Cron definitions to trigger a Worker's `scheduled` function. Refer to [triggers](#triggers).
* `rules` ` Rule ` optional  
   * An ordered list of rules that define which modules to import, and what type to import them as. You will need to specify rules to use `Text`, `Data` and `CompiledWasm` modules, or when you wish to have a `.js` file be treated as an `ESModule` instead of `CommonJS`.  
   * Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
* `build` ` Build ` optional  
   * Configures a custom build step to be run by Wrangler when building your Worker. Refer to [Custom builds](#custom-builds).  
   * Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
* `no_bundle` ` boolean ` optional  
   * Skip internal build steps and directly deploy your Worker script. You must have a plain JavaScript Worker with no dependencies.  
   * Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
* `find_additional_modules` ` boolean ` optional  
   * If `true`, Wrangler will traverse the file tree below `base_dir`. Any files that match `rules` will be included in the deployed Worker. Defaults to `true` if `no_bundle` is `true`, otherwise `false`. Can only be used with Module format Workers (not Service Worker format).  
   * Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
* `base_dir` ` string ` optional  
   * The directory in which module "rules" should be evaluated when including additional files (via `find_additional_modules`) into a Worker deployment. Defaults to the directory containing the `main` entry point of the Worker if not specified.  
   * Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
* `preserve_file_names` ` boolean ` optional  
   * Determines whether Wrangler will preserve the file names of additional modules bundled with the Worker. The default is to prepend filenames with a content hash. For example, `34de60b44167af5c5a709e62a4e20c4f18c9e3b6-favicon.ico`.  
   * Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
* `minify` ` boolean ` optional  
   * Minify the Worker script before uploading.  
   * If you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), `minify` is replaced by Vite's [build.minify ↗](https://vite.dev/config/build-options.html#build-minify).
* `keep_names` ` boolean ` optional  
   * Wrangler uses esbuild to process the Worker code for development and deployment. This option allows you to specify whether esbuild should apply its [keepNames ↗](https://esbuild.github.io/api/#keep-names) logic to the code or not. Defaults to `true`.
* `logpush` ` boolean ` optional  
   * Enables Workers Trace Events Logpush for a Worker. Any scripts with this property will automatically get picked up by the Workers Logpush job configured for your account. Defaults to `false`. Refer to [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/).
* `limits` ` Limits ` optional  
   * Configures limits to be imposed on execution at runtime. Refer to [Limits](#limits).
* `observability` ` object ` optional  
   * Configures automatic observability settings for telemetry data emitted from your Worker. Refer to [Observability](#observability).
* `assets` ` Assets ` optional  
   * Configures static assets that will be served. Refer to [Assets](https://developers.cloudflare.com/workers/static-assets/binding/) for more details.
* `migrations` ` object ` optional  
   * Maps a Durable Object from a class name to a runtime state. This communicates changes to the Durable Object (creation / deletion / rename / transfer) to the Workers runtime and provides the runtime with instructions on how to deal with those changes. Refer to [Durable Objects migrations](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#migration-wrangler-configuration).
* `placement` ` object ` optional  
   * Configures where your Worker runs to minimize latency to back-end services. Refer to [Placement](https://developers.cloudflare.com/workers/configuration/placement/).  
   * `mode` ` string ` — Set to `"smart"` to automatically place your Worker near back-end services based on observed latency.  
   * `region` ` string ` — Specify a cloud region (for example, `"aws:us-east-1"`, `"gcp:europe-west1"`, or `"azure:westeurope"`) to place your Worker near infrastructure in that region.  
   * `host` ` string ` — Specify a hostname and port for a single-homed layer 4 service (for example, `"my_database_host.com:5432"`) to place your Worker near that service.  
   * `hostname` ` string ` — Specify a hostname for a single-homed layer 7 service (for example, `"my_api_server.com"`) to place your Worker near that service.

## Non-inheritable keys

Non-inheritable keys are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.

* `define` ` Record<string, string> ` optional  
   * A map of values to substitute when deploying your Worker.  
   * If you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), `define` is replaced by Vite's [define ↗](https://vite.dev/config/shared-options.html#define).
* `vars` ` object ` optional  
   * A map of environment variables to set when deploying your Worker. Refer to [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/).
* `durable_objects` ` object ` optional  
   * A list of Durable Objects that your Worker should be bound to. Refer to [Durable Objects](#durable-objects).
* `kv_namespaces` ` object ` optional  
   * A list of KV namespaces that your Worker should be bound to. Refer to [KV namespaces](#kv-namespaces).
* `r2_buckets` ` object ` optional  
   * A list of R2 buckets that your Worker should be bound to. Refer to [R2 buckets](#r2-buckets).
* `vectorize` ` object ` optional  
   * A list of Vectorize indexes that your Worker should be bound to. Refer to [Vectorize indexes](#vectorize-indexes).
* `services` ` object ` optional  
   * A list of service bindings that your Worker should be bound to. Refer to [service bindings](#service-bindings).
* `queues` ` object ` optional  
   * A list of Queue producers and consumers that your Worker should be bound to. Refer to [Queues](#queues).
* `workflows` ` object ` optional  
   * A list of Workflows that your Worker should be bound to. Refer to [Workflows](#workflows).
* `tail_consumers` ` object ` optional  
   * A list of the Tail Workers your Worker sends data to. Refer to [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/).
* `secrets` ` object ` optional  
   * Declares the secret names your Worker requires. Used for validation during local development and deploy, and as the source of truth for type generation. Refer to [Secrets](#secrets).  
   * `required` ` string[] ` optional — A list of secret names that must be set to deploy your Worker.

## Types of routes

There are three types of [routes](https://developers.cloudflare.com/workers/configuration/routing/): [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/), and [workers.dev](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/).

### Custom Domains

[Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management.

* `pattern` ` string ` required  
   * The pattern that your Worker should be run on, for example, `"example.com"`.
* `custom_domain` ` boolean ` optional  
   * Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to `false`.

Example:

* [  wrangler.jsonc ](#tab-panel-8380)
* [  wrangler.toml ](#tab-panel-8381)

```

{

  "routes": [

    {

      "pattern": "shop.example.com",

      "custom_domain": true,

    },

  ],

}


```

```

[[routes]]

pattern = "shop.example.com"

custom_domain = true


```

### Routes

[Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) allow users to map a URL pattern to a Worker. A route can be configured as a zone ID route, a zone name route, or a simple route.

#### Zone ID route

* `pattern` ` string ` required  
   * The pattern that your Worker should be run on, for example, `"example.com/*"`.
* `zone_id` ` string ` required  
   * The ID of the zone that your `pattern` is associated with. Refer to [Find zone and account IDs](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/).

Example:

* [  wrangler.jsonc ](#tab-panel-8384)
* [  wrangler.toml ](#tab-panel-8385)

```

{

  "routes": [

    {

      "pattern": "subdomain.example.com/*",

      "zone_id": "<YOUR_ZONE_ID>",

    },

  ],

}


```

```

[[routes]]

pattern = "subdomain.example.com/*"

zone_id = "<YOUR_ZONE_ID>"


```

#### Zone name route

* `pattern` ` string ` required  
   * The pattern that your Worker should be run on, for example, `"example.com/*"`.
* `zone_name` ` string ` required  
   * The name of the zone that your `pattern` is associated with. If you are using API tokens, this will require the `Account` scope.

Example:

* [  wrangler.jsonc ](#tab-panel-8388)
* [  wrangler.toml ](#tab-panel-8389)

```

{

  "routes": [

    {

      "pattern": "subdomain.example.com/*",

      "zone_name": "example.com",

    },

  ],

}


```

```

[[routes]]

pattern = "subdomain.example.com/*"

zone_name = "example.com"


```

#### Simple route

This is a simple route that only requires a pattern.

Example:

* [  wrangler.jsonc ](#tab-panel-8382)
* [  wrangler.toml ](#tab-panel-8383)

```

{

  "route": "example.com/*",

}


```

```

route = "example.com/*"


```

### `workers.dev`

Cloudflare Workers accounts come with a `workers.dev` subdomain that is configurable in the Cloudflare dashboard.

* `workers_dev` ` boolean ` optional  
   * Whether the Worker runs on a custom `workers.dev` account subdomain. Defaults to `true`.

* [  wrangler.jsonc ](#tab-panel-8386)
* [  wrangler.toml ](#tab-panel-8387)

```

{

  "workers_dev": false,

}


```

```

workers_dev = false


```

## Triggers

Triggers allow you to define the `cron` expression to invoke your Worker's `scheduled` function. Refer to [Supported cron expressions](https://developers.cloudflare.com/workers/configuration/cron-triggers/#supported-cron-expressions).

* `crons` ` string[] ` required  
   * An array of `cron` expressions.  
   * To disable a Cron Trigger, set `crons = []`. Commenting out the `crons` key will not disable a Cron Trigger.

Example:

* [  wrangler.jsonc ](#tab-panel-8390)
* [  wrangler.toml ](#tab-panel-8391)

```

{

  "triggers": {

    "crons": ["* * * * *"],

  },

}


```

```

[triggers]

crons = [ "* * * * *" ]


```

## Observability

The [Observability](https://developers.cloudflare.com/workers/observability/logs/workers-logs) setting allows you to automatically ingest, store, filter, and analyze logging data emitted from Cloudflare Workers directly from your Cloudflare Worker's dashboard.

* `enabled` ` boolean ` required  
   * When set to `true` on a Worker, logs for the Worker are persisted. Defaults to `true` for all new Workers.
* `head_sampling_rate` ` number ` optional  
   * A number between 0 and 1, where 0 indicates zero out of one hundred requests are logged, and 1 indicates every request is logged. If `head_sampling_rate` is unspecified, it is configured to a default value of 1 (100%). Read more about [head-based sampling](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#head-based-sampling).

Example:

* [  wrangler.jsonc ](#tab-panel-8392)
* [  wrangler.toml ](#tab-panel-8393)

```jsonc
{
  "observability": {
    "enabled": true,
    "head_sampling_rate": 0.1, // 10% of requests are logged
  },
}
```

```toml
[observability]
enabled = true
head_sampling_rate = 0.1
```
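
Head-based sampling makes the log/no-log decision once, at the start ("head") of each request, with probability equal to the configured rate. A small sketch of that decision (the injectable `random` parameter is only there to make the behavior deterministic for illustration):

```javascript
// Decide once per request whether its logs should be kept.
// With rate 0.1, roughly 10% of requests are logged end to end.
function shouldLog(rate, random = Math.random) {
  return random() < rate;
}

// rate 1 logs every request; rate 0 logs none:
const always = shouldLog(1, () => 0.99); // true
const never = shouldLog(0, () => 0.0);   // false
```

Because the decision is made per request rather than per log line, all logs from a sampled request stay together.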

## Custom builds

Note

Not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).

You can configure a custom build step that will be run before your Worker is deployed. Refer to [Custom builds](https://developers.cloudflare.com/workers/wrangler/custom-builds/).

* `command` ` string ` optional  
   * The command used to build your Worker. On Linux and macOS, the command is executed in the `sh` shell; on Windows, it is executed in the `cmd` shell. The `&&` and `||` shell operators may be used.
* `cwd` ` string ` optional  
   * The directory in which the command is executed.
* `watch_dir` ` string | string[] ` optional  
   * The directory to watch for changes while using `wrangler dev`. Defaults to the current working directory.

Example:

* [  wrangler.jsonc ](#tab-panel-8394)
* [  wrangler.toml ](#tab-panel-8395)

```jsonc
{
  "build": {
    "command": "npm run build",
    "cwd": "build_cwd",
    "watch_dir": "build_watch_dir",
  },
}
```

```toml
[build]
command = "npm run build"
cwd = "build_cwd"
watch_dir = "build_watch_dir"
```

## Limits

You can impose limits on your Worker's behavior at runtime. Limits are only supported for the [Standard Usage Model](https://developers.cloudflare.com/workers/platform/pricing/#example-pricing-standard-usage-model). Limits are only enforced when deployed to Cloudflare's network, not in local development. The CPU limit can be set to a maximum of 300,000 milliseconds (5 minutes).

Each [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker starts hitting the limit consistently, its execution will be terminated according to the limit configured.

  
* `cpu_ms` ` number ` optional  
   * The maximum CPU time allowed per invocation, in milliseconds.
* `subrequests` ` number ` optional  
   * The maximum number of subrequests allowed per invocation. This value defaults to 50 for free accounts and 10,000 for paid accounts. The free account maximum is 50 and the paid account maximum is 10,000,000. Refer to [subrequest limits](https://developers.cloudflare.com/workers/platform/limits/#subrequests) for more information.

Example:

* [  wrangler.jsonc ](#tab-panel-8396)
* [  wrangler.toml ](#tab-panel-8397)

```jsonc
{
  "limits": {
    "cpu_ms": 100,
    "subrequests": 150,
  },
}
```

```toml
[limits]
cpu_ms = 100
subrequests = 150
```

## Bindings

### Browser Rendering

The [Workers Browser Rendering API](https://developers.cloudflare.com/browser-rendering/) allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products.

A [browser binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) will provide your Worker with an authenticated endpoint to interact with a dedicated Chromium browser instance.

* `binding` ` string ` required  
   * The binding name used to refer to the browser binding. The value (string) you set will be used to reference this headless browser in your Worker. The binding must be [a valid JavaScript variable name ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar%5Fand%5Ftypes#variables). For example, `binding = "HEAD_LESS"` or `binding = "simulatedBrowser"` would both be valid names for the binding.

Example:

* [  wrangler.jsonc ](#tab-panel-8398)
* [  wrangler.toml ](#tab-panel-8399)

```jsonc
{
  "browser": {
    "binding": "<BINDING_NAME>",
  },
}
```

```toml
[browser]
binding = "<BINDING_NAME>"
```

### D1 databases

[D1](https://developers.cloudflare.com/d1/) is Cloudflare's serverless SQL database. A Worker can query a D1 database (or databases) by creating a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to each database for [D1 Workers Binding API](https://developers.cloudflare.com/d1/worker-api/).

To bind D1 databases to your Worker, assign an array of the below object to the `[[d1_databases]]` key.

* `binding` ` string ` required  
   * The binding name used to refer to the D1 database. The value (string) you set will be used to reference this database in your Worker. The binding must be [a valid JavaScript variable name ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar%5Fand%5Ftypes#variables). For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding.
* `database_name` ` string ` required  
   * The name of the database. This is a human-readable name that allows you to distinguish between different databases, and is set when you first create the database.
* `database_id` ` string ` required  
   * The ID of the database. The database ID is available when you first use `wrangler d1 create` or when you call `wrangler d1 list`, and uniquely identifies your database.
* `preview_database_id` ` string ` optional  
   * The preview ID of this D1 database. If provided, `wrangler dev` uses this ID. Otherwise, it uses `database_id`. This option is required when using `wrangler dev --remote`.
* `migrations_dir` ` string ` optional  
   * The migration directory containing the migration files. By default, `wrangler d1 migrations create` creates a folder named `migrations`. You can use `migrations_dir` to specify a different folder containing the migration files (for example, if you have a mono-repo setup, and want to use a single D1 instance across your apps/packages).  
   * For more information, refer to [D1 Wrangler migrations commands](https://developers.cloudflare.com/workers/wrangler/commands/d1/#d1-migrations-create) and [D1 migrations](https://developers.cloudflare.com/d1/reference/migrations/).

Note

When using Wrangler in the default local development mode, files will be written to local storage instead of the preview or production database. Refer to [Local development and testing](https://developers.cloudflare.com/workers/development-testing/) for more details.

Example:

* [  wrangler.jsonc ](#tab-panel-8402)
* [  wrangler.toml ](#tab-panel-8403)

```jsonc
{
  "d1_databases": [
    {
      "binding": "<BINDING_NAME>",
      "database_name": "<DATABASE_NAME>",
      "database_id": "<DATABASE_ID>",
    },
  ],
}
```

```toml
[[d1_databases]]
binding = "<BINDING_NAME>"
database_name = "<DATABASE_NAME>"
database_id = "<DATABASE_ID>"
```
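
At runtime, the binding exposes the D1 Workers Binding API on `env`. A sketch of a parameterized query follows; the binding name `MY_DB` is an assumption, and the stub object below only mimics the `prepare().bind().all()` chain so the example is self-contained outside the Workers runtime (in a real Worker, `env.MY_DB` is injected for you):

```javascript
// Stub standing in for the D1 binding the runtime provides (illustration only).
const env = {
  MY_DB: {
    prepare: (sql) => ({
      bind: (...params) => ({
        all: async () => ({ results: [{ id: params[0], name: "example" }] }),
      }),
    }),
  },
};

// Query the bound database with a bound parameter, never string interpolation.
async function getUser(env, id) {
  const { results } = await env.MY_DB
    .prepare("SELECT id, name FROM users WHERE id = ?")
    .bind(id)
    .all();
  return results[0];
}
```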

### Dispatch namespace bindings (Workers for Platforms)

Dispatch namespace bindings allow for communication between a [dynamic dispatch Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dynamic-dispatch-worker) and a [dispatch namespace](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dispatch-namespace). Dispatch namespace bindings are used in [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/). Workers for Platforms helps you deploy serverless functions programmatically on behalf of your customers.

* `binding` ` string ` required  
   * The binding name. The value (string) you set will be used to reference this namespace in your Worker. The binding must be [a valid JavaScript variable name ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar%5Fand%5Ftypes#variables). For example, `binding = "MY_NAMESPACE"` or `binding = "productionNamespace"` would both be valid names for the binding.
* `namespace` ` string ` required  
   * The name of the [dispatch namespace](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dispatch-namespace).
* `outbound` ` object ` optional  
   * `service` ` string ` required The name of the [outbound Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/) to bind to.  
   * `parameters` array optional A list of parameters to pass data from your [dynamic dispatch Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dynamic-dispatch-worker) to the [outbound Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/).

* [  wrangler.jsonc ](#tab-panel-8404)
* [  wrangler.toml ](#tab-panel-8405)

```jsonc
{
  "dispatch_namespaces": [
    {
      "binding": "<BINDING_NAME>",
      "namespace": "<NAMESPACE_NAME>",
      "outbound": {
        "service": "<WORKER_NAME>",
        "parameters": ["params_object"],
      },
    },
  ],
}
```

```toml
[[dispatch_namespaces]]
binding = "<BINDING_NAME>"
namespace = "<NAMESPACE_NAME>"

  [dispatch_namespaces.outbound]
  service = "<WORKER_NAME>"
  parameters = [ "params_object" ]
```

### Durable Objects

[Durable Objects](https://developers.cloudflare.com/durable-objects/) provide low-latency coordination and consistent storage for the Workers platform.

To bind Durable Objects to your Worker, assign an array of the below object to the `durable_objects.bindings` key.

* `name` ` string ` required  
   * The name of the binding used to refer to the Durable Object.
* `class_name` ` string ` required  
   * The exported class name of the Durable Object.
* `script_name` ` string ` optional  
   * The name of the Worker where the Durable Object is defined, if it is external to this Worker. This option can be used both in local and remote development. In local development, you must run the external Worker in a separate process (via `wrangler dev`). In remote development, the appropriate remote binding must be used.
* `environment` ` string ` optional  
   * The environment of the `script_name` to bind to.

Example:

* [  wrangler.jsonc ](#tab-panel-8406)
* [  wrangler.toml ](#tab-panel-8407)

```jsonc
{
  "durable_objects": {
    "bindings": [
      {
        "name": "<BINDING_NAME>",
        "class_name": "<CLASS_NAME>",
      },
    ],
  },
}
```

```toml
[[durable_objects.bindings]]
name = "<BINDING_NAME>"
class_name = "<CLASS_NAME>"
```
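
At runtime, a Durable Object binding is a namespace: you derive an id (for example from a stable name) and get a stub addressing the single object with that id. A sketch is below; the binding name `COUNTER` is an assumption, and the namespace object is a stub standing in for what the runtime injects, kept only to show the name-to-id contract:

```javascript
// Stub mimicking a Durable Object namespace binding (illustration only):
// idFromName must map the same name to the same id every time.
const ids = new Map();
const env = {
  COUNTER: {
    idFromName: (name) => {
      if (!ids.has(name)) ids.set(name, `id-${ids.size}`);
      return ids.get(name);
    },
    get: (id) => ({ id, fetch: async () => ({ status: 200 }) }),
  },
};

// The same name always addresses the same single object instance,
// which is what gives Durable Objects their coordination guarantee.
const a = env.COUNTER.idFromName("global");
const b = env.COUNTER.idFromName("global");
const stub = env.COUNTER.get(a);
```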

#### Migrations

When making changes to your Durable Object classes, you must perform a migration. Refer to [Durable Object migrations](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/).

* `tag` ` string ` required  
   * A unique identifier for this migration.
* `new_sqlite_classes` ` string[] ` optional  
   * The new Durable Objects being defined.
* `renamed_classes` ` {from: string, to: string}[] ` optional  
   * The Durable Objects being renamed.
* `deleted_classes` ` string[] ` optional  
   * The Durable Objects being removed.

Example:

* [  wrangler.jsonc ](#tab-panel-8420)
* [  wrangler.toml ](#tab-panel-8421)

```jsonc
{
  "migrations": [
    {
      "tag": "v1",
      "new_sqlite_classes": [
        // Array of new classes
        "DurableObjectExample",
      ],
    },
    {
      "tag": "v2", // Should be unique for each entry
      "renamed_classes": [
        // Array of rename directives
        {
          "from": "DurableObjectExample",
          "to": "UpdatedName",
        },
      ],
      "deleted_classes": [
        // Array of deleted class names
        "DeprecatedClass",
      ],
    },
  ],
}
```

```toml
[[migrations]]
tag = "v1"
new_sqlite_classes = [ "DurableObjectExample" ]

[[migrations]]
tag = "v2"
deleted_classes = [ "DeprecatedClass" ]

  [[migrations.renamed_classes]]
  from = "DurableObjectExample"
  to = "UpdatedName"
```

### Email bindings

You can send an email about your Worker's activity from your Worker to an email address verified on [Email Routing](https://developers.cloudflare.com/email-routing/setup/email-routing-addresses/#destination-addresses). This is useful for when you want to know about certain types of events being triggered, for example.

Before you can bind an email address to your Worker, you need to [enable Email Routing](https://developers.cloudflare.com/email-routing/get-started/) and have at least one [verified email address](https://developers.cloudflare.com/email-routing/setup/email-routing-addresses/#destination-addresses). Then, assign an array of the below object to the `send_email` key, with the type of email binding you need.

* `name` ` string ` required  
   * The binding name.
* `destination_address` ` string ` optional  
   * The [chosen email address](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/#types-of-bindings) you send emails to.
* `allowed_destination_addresses` ` string[] ` optional  
   * The [allowlist of email addresses](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/#types-of-bindings) you send emails to.

You can add one or more types of bindings to your Wrangler file. However, each attribute must be on its own line:

* [  wrangler.jsonc ](#tab-panel-8464)
* [  wrangler.toml ](#tab-panel-8465)

```jsonc
{
  "send_email": [
    {
      "name": "<NAME_FOR_BINDING1>"
    },
    {
      "name": "<NAME_FOR_BINDING2>",
      "destination_address": "<YOUR_EMAIL>@example.com"
    },
    {
      "name": "<NAME_FOR_BINDING3>",
      "allowed_destination_addresses": [
        "<YOUR_EMAIL>@example.com",
        "<YOUR_EMAIL2>@example.com"
      ]
    }
  ]
}
```

```toml
[[send_email]]
name = "<NAME_FOR_BINDING1>"

[[send_email]]
name = "<NAME_FOR_BINDING2>"
destination_address = "<YOUR_EMAIL>@example.com"

[[send_email]]
name = "<NAME_FOR_BINDING3>"
allowed_destination_addresses = [ "<YOUR_EMAIL>@example.com", "<YOUR_EMAIL2>@example.com" ]
```

### Environment variables

[Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) are a type of binding that allow you to attach text strings or JSON values to your Worker.

Example:

* [  wrangler.jsonc ](#tab-panel-8462)
* [  wrangler.toml ](#tab-panel-8463)

```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker-dev",
  "vars": {
    "API_HOST": "example.com",
    "API_ACCOUNT_ID": "example_user",
    "SERVICE_X_DATA": {
      "URL": "service-x-api.dev.example",
      "MY_ID": 123
    }
  }
}
```

```toml
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker-dev"

[vars]
API_HOST = "example.com"
API_ACCOUNT_ID = "example_user"

  [vars.SERVICE_X_DATA]
  URL = "service-x-api.dev.example"
  MY_ID = 123
```
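
Inside the Worker, these values arrive on the `env` parameter: plain string vars as strings, JSON vars as parsed objects. A sketch of reading the vars defined above (the local `env` object here simply mirrors that configuration for illustration; at runtime it is supplied by the platform):

```javascript
// Stand-in for the env the runtime passes to your handler,
// mirroring the [vars] configured above.
const env = {
  API_HOST: "example.com",
  API_ACCOUNT_ID: "example_user",
  SERVICE_X_DATA: { URL: "service-x-api.dev.example", MY_ID: 123 },
};

// String vars are read directly; JSON vars are already parsed objects.
function apiUrl(env, path) {
  return `https://${env.API_HOST}/${env.API_ACCOUNT_ID}${path}`;
}
```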

### Hyperdrive

[Hyperdrive](https://developers.cloudflare.com/hyperdrive/) bindings allow you to interact with and query any Postgres database from within a Worker.

* `binding` ` string ` required  
   * The binding name.
* `id` ` string ` required  
   * The ID of the Hyperdrive configuration.

Example:

* [  wrangler.jsonc ](#tab-panel-8410)
* [  wrangler.toml ](#tab-panel-8411)

```jsonc
{
  // required for database drivers to function
  "compatibility_flags": ["nodejs_compat_v2"],
  "hyperdrive": [
    {
      "binding": "<BINDING_NAME>",
      "id": "<ID>",
    },
  ],
}
```

```toml
compatibility_flags = [ "nodejs_compat_v2" ]

[[hyperdrive]]
binding = "<BINDING_NAME>"
id = "<ID>"
```

### Images

[Cloudflare Images](https://developers.cloudflare.com/images/transform-images/transform-via-workers/) lets you make transformation requests to optimize, resize, and manipulate images stored in remote sources.

To bind Images to your Worker, assign an array of the below object to the `images` key.

* `binding` ` string ` required  
   * The name of the binding used to refer to the Images API.

* [  wrangler.jsonc ](#tab-panel-8408)
* [  wrangler.toml ](#tab-panel-8409)

```jsonc
{
  "images": {
    "binding": "IMAGES", // i.e. available in your Worker on env.IMAGES
  },
}
```

```toml
[images]
binding = "IMAGES"
```

### KV namespaces

[Workers KV](https://developers.cloudflare.com/kv/api/) is a global, low-latency, key-value data store. It stores data in a small number of centralized data centers, then caches that data in Cloudflare’s data centers after access.

To bind KV namespaces to your Worker, assign an array of the below object to the `kv_namespaces` key.

* `binding` ` string ` required  
   * The binding name used to refer to the KV namespace.
* `id` ` string ` required  
   * The ID of the KV namespace.
* `preview_id` ` string ` optional  
   * The preview ID of this KV namespace. This option is **required** when using `wrangler dev --remote` to develop against remote resources (but is not required with [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings)). If developing locally, this is an optional field. `wrangler dev` will use this ID for the KV namespace. Otherwise, `wrangler dev` will use `id`.

Note

When using Wrangler in the default local development mode, files will be written to local storage instead of the preview or production namespace. Refer to [Local development and testing](https://developers.cloudflare.com/workers/development-testing/) for more details.

Example:

* [  wrangler.jsonc ](#tab-panel-8412)
* [  wrangler.toml ](#tab-panel-8413)

```jsonc
{
  "kv_namespaces": [
    {
      "binding": "<BINDING_NAME1>",
      "id": "<NAMESPACE_ID1>",
    },
    {
      "binding": "<BINDING_NAME2>",
      "id": "<NAMESPACE_ID2>",
    },
  ],
}
```

```toml
[[kv_namespaces]]
binding = "<BINDING_NAME1>"
id = "<NAMESPACE_ID1>"

[[kv_namespaces]]
binding = "<BINDING_NAME2>"
id = "<NAMESPACE_ID2>"
```
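
At runtime, the binding exposes asynchronous `get`/`put` (and related) methods on `env`. A sketch follows; the binding name `MY_KV` is an assumption, and the Map-backed stub only stands in for the namespace the runtime injects so the example runs anywhere:

```javascript
// Map-backed stub mimicking a KV namespace binding (illustration only).
const store = new Map();
const env = {
  MY_KV: {
    put: async (key, value) => void store.set(key, value),
    // KV resolves to null for missing keys rather than throwing.
    get: async (key) => store.get(key) ?? null,
  },
};

// Write a value, then read it back through the binding.
async function cacheGreeting(env) {
  await env.MY_KV.put("greeting", "hello");
  return env.MY_KV.get("greeting");
}
```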

### Queues

[Queues](https://developers.cloudflare.com/queues/) is Cloudflare's global message queueing service, providing [guaranteed delivery](https://developers.cloudflare.com/queues/reference/delivery-guarantees/) and [message batching](https://developers.cloudflare.com/queues/configuration/batching-retries/). To interact with a queue from Workers, you need a producer Worker to send messages to the queue and a consumer Worker to pull batches of messages out of the queue. A single Worker can produce to and consume from multiple queues.

To bind Queues to your producer Worker, assign an array of the below object to the `[[queues.producers]]` key.

* `queue` ` string ` required  
   * The name of the queue, used on the Cloudflare dashboard.
* `binding` ` string ` required  
   * The binding name used to refer to the queue in your Worker. The binding must be [a valid JavaScript variable name ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar%5Fand%5Ftypes#variables). For example, `binding = "MY_QUEUE"` or `binding = "productionQueue"` would both be valid names for the binding.
* `delivery_delay` ` number ` optional  
   * The number of seconds to [delay messages sent to a queue](https://developers.cloudflare.com/queues/configuration/batching-retries/#delay-messages) by default. This can be overridden on a per-message or per-batch basis.

Example:

* [  wrangler.jsonc ](#tab-panel-8414)
* [  wrangler.toml ](#tab-panel-8415)

```jsonc
{
  "queues": {
    "producers": [
      {
        "binding": "<BINDING_NAME>",
        "queue": "<QUEUE_NAME>",
        "delivery_delay": 60, // Delay messages by 60 seconds before they are delivered to a consumer
      },
    ],
  },
}
```

```toml
[[queues.producers]]
binding = "<BINDING_NAME>"
queue = "<QUEUE_NAME>"
delivery_delay = 60
```

To bind Queues to your consumer Worker, assign an array of the below object to the `[[queues.consumers]]` key.

* `queue` ` string ` required  
   * The name of the queue, used on the Cloudflare dashboard.
* `max_batch_size` ` number ` optional  
   * The maximum number of messages allowed in each batch.
* `max_batch_timeout` ` number ` optional  
   * The maximum number of seconds to wait for messages to fill a batch before the batch is sent to the consumer Worker.
* `max_retries` ` number ` optional  
   * The maximum number of retries for a message, if it fails or [retryAll()](https://developers.cloudflare.com/queues/configuration/javascript-apis/#messagebatch) is invoked.
* `dead_letter_queue` ` string ` optional  
   * The name of another queue to send a message to if it fails processing at least `max_retries` times.  
   * If a `dead_letter_queue` is not defined, messages that repeatedly fail processing will be discarded.  
   * If there is no queue with the specified name, it will be created automatically.
* `max_concurrency` ` number ` optional  
   * The maximum number of concurrent consumers allowed to run at once. Leaving this unset will mean that the number of invocations will scale to the [currently supported maximum](https://developers.cloudflare.com/queues/platform/limits/).  
   * Refer to [Consumer concurrency](https://developers.cloudflare.com/queues/configuration/consumer-concurrency/) for more information on how consumers autoscale, particularly when messages are retried.
* `retry_delay` ` number ` optional  
   * The number of seconds to [delay retried messages](https://developers.cloudflare.com/queues/configuration/batching-retries/#delay-messages) by default, before they are redelivered to the consumer. This can be overridden on a per-message or per-batch basis [when retrying messages](https://developers.cloudflare.com/queues/configuration/batching-retries/#explicit-acknowledgement-and-retries).

Example:

* [  wrangler.jsonc ](#tab-panel-8422)
* [  wrangler.toml ](#tab-panel-8423)

```jsonc
{
  "queues": {
    "consumers": [
      {
        "queue": "my-queue",
        "max_batch_size": 10,
        "max_batch_timeout": 30,
        "max_retries": 10,
        "dead_letter_queue": "my-queue-dlq",
        "max_concurrency": 5,
        "retry_delay": 120, // Delay retried messages by 2 minutes before re-attempting delivery
      },
    ],
  },
}
```

```toml
[[queues.consumers]]
queue = "my-queue"
max_batch_size = 10
max_batch_timeout = 30
max_retries = 10
dead_letter_queue = "my-queue-dlq"
max_concurrency = 5
retry_delay = 120
```
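
The consumer side of the configuration above is a Worker exporting a `queue` handler, which receives each batch and can acknowledge or retry individual messages. A sketch follows; the stand-in `batch` object at the bottom only mimics what the runtime delivers, so the handler can be exercised outside the platform:

```javascript
// Track processed message bodies so the handler's effect is observable.
const processed = [];

const worker = {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      try {
        processed.push(msg.body);
        msg.ack();   // remove the message from the queue
      } catch {
        msg.retry(); // redeliver later (after retry_delay, if configured)
      }
    }
  },
};

// Stand-in batch for illustration; the runtime builds the real one
// according to max_batch_size and max_batch_timeout.
const batch = {
  queue: "my-queue",
  messages: [{ body: { n: 1 }, ack() {}, retry() {} }],
};

worker.queue(batch, {});
```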

### R2 buckets

[Cloudflare R2 Storage](https://developers.cloudflare.com/r2) allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

To bind R2 buckets to your Worker, assign an array of the below object to the `r2_buckets` key.

* `binding` ` string ` required  
   * The binding name used to refer to the R2 bucket.
* `bucket_name` ` string ` required  
   * The name of this R2 bucket.
* `jurisdiction` ` string ` optional  
   * The jurisdiction where this R2 bucket is located, if a jurisdiction has been specified. Refer to [Jurisdictional Restrictions](https://developers.cloudflare.com/r2/reference/data-location/#jurisdictional-restrictions).
* `preview_bucket_name` ` string ` optional  
   * The preview name of this R2 bucket. If provided, `wrangler dev` will use this name for the R2 bucket. Otherwise, it will use `bucket_name`. This option is required when using `wrangler dev --remote` (but is not required with [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings)).

Note

When using Wrangler in the default local development mode, files will be written to local storage instead of the preview or production bucket. Refer to [Local development and testing](https://developers.cloudflare.com/workers/development-testing/) for more details.

Example:

* [  wrangler.jsonc ](#tab-panel-8418)
* [  wrangler.toml ](#tab-panel-8419)

```jsonc
{
  "r2_buckets": [
    {
      "binding": "<BINDING_NAME1>",
      "bucket_name": "<BUCKET_NAME1>",
    },
    {
      "binding": "<BINDING_NAME2>",
      "bucket_name": "<BUCKET_NAME2>",
    },
  ],
}
```

```toml
[[r2_buckets]]
binding = "<BINDING_NAME1>"
bucket_name = "<BUCKET_NAME1>"

[[r2_buckets]]
binding = "<BINDING_NAME2>"
bucket_name = "<BUCKET_NAME2>"
```

### Vectorize indexes

A [Vectorize index](https://developers.cloudflare.com/vectorize/) allows you to insert and query vector embeddings for semantic search, classification, and other vector search use cases.

To bind Vectorize indexes to your Worker, assign an array of the below object to the `vectorize` key.

* `binding` ` string ` required  
   * The binding name used to refer to the bound index from your Worker code.
* `index_name` ` string ` required  
   * The name of the index to bind.

Example:

* [  wrangler.jsonc ](#tab-panel-8416)
* [  wrangler.toml ](#tab-panel-8417)

```jsonc
{
  "vectorize": [
    {
      "binding": "<BINDING_NAME>",
      "index_name": "<INDEX_NAME>",
    },
  ],
}
```

```toml
[[vectorize]]
binding = "<BINDING_NAME>"
index_name = "<INDEX_NAME>"
```

### Service bindings

A service binding allows you to send HTTP requests to another Worker without those requests going over the Internet. The request immediately invokes the downstream Worker, reducing latency as compared to a request to a third-party service. Refer to [About Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/).

To bind other Workers to your Worker, assign an array of the below object to the `services` key.

* `binding` ` string ` required  
   * The binding name used to refer to the bound Worker.
* `service` ` string ` required  
   * The name of the Worker.  
   * To bind to a Worker in a specific [environment](https://developers.cloudflare.com/workers/wrangler/environments), you need to append the environment name to the Worker name. This should be in the format `<worker-name>-<environment-name>`. For example, to bind to a Worker called `worker-name` in its `staging` environment, `service` should be set to `worker-name-staging`.
* `entrypoint` ` string ` optional  
   * The name of the [entrypoint](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints) to bind to. If you do not specify an entrypoint, the default export of the Worker will be used.

Example:

* [  wrangler.jsonc ](#tab-panel-8424)
* [  wrangler.toml ](#tab-panel-8425)

```jsonc
{
  "services": [
    {
      "binding": "<BINDING_NAME>",
      "service": "<WORKER_NAME>",
      "entrypoint": "<ENTRYPOINT_NAME>",
    },
  ],
}
```

```toml
[[services]]
binding = "<BINDING_NAME>"
service = "<WORKER_NAME>"
entrypoint = "<ENTRYPOINT_NAME>"
```
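
At runtime, calling `fetch` on the binding invokes the bound Worker directly, with no trip over the Internet. A sketch is below; the binding name `AUTH` is an assumption, and the stub merely mimics the bound Worker's response so the example is self-contained:

```javascript
// Stub standing in for a bound Worker reachable via a service binding
// (illustration only; at runtime env.AUTH is injected by the platform).
const env = {
  AUTH: {
    fetch: async (request) => ({ status: 200, text: async () => "authorized" }),
  },
};

// The request is dispatched straight to the other Worker,
// avoiding the latency of a public network round trip.
async function checkAuth(env, request) {
  const res = await env.AUTH.fetch(request);
  return res.text();
}
```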

### Static assets

Refer to [Assets](#assets).

### Analytics Engine Datasets

[Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) provides analytics, observability and data logging from Workers. Write data points to your Worker binding then query the data using the [SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/).

To bind Analytics Engine datasets to your Worker, assign an array of the below object to the `analytics_engine_datasets` key.

* `binding` ` string ` required  
   * The binding name used to refer to the dataset.
* `dataset` ` string ` optional  
   * The dataset name to write to. This will default to the same name as the binding if it is not supplied.

Example:

* [  wrangler.jsonc ](#tab-panel-8426)
* [  wrangler.toml ](#tab-panel-8427)

```jsonc
{
  "analytics_engine_datasets": [
    {
      "binding": "<BINDING_NAME>",
      "dataset": "<DATASET_NAME>",
    },
  ],
}
```

```toml
[[analytics_engine_datasets]]
binding = "<BINDING_NAME>"
dataset = "<DATASET_NAME>"
```

### mTLS Certificates

To communicate with origins that require client authentication, a Worker can present a certificate for mTLS in subrequests. Wrangler provides the `mtls-certificate` [command](https://developers.cloudflare.com/workers/wrangler/commands#mtls-certificate) to upload and manage these certificates.

To create a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to an mTLS certificate for your Worker, assign an array of objects with the following shape to the `mtls_certificates` key.

* `binding` ` string ` required  
   * The binding name used to refer to the certificate.
* `certificate_id` ` string ` required  
   * The ID of the certificate. Wrangler displays this via the `mtls-certificate upload` and `mtls-certificate list` commands.

Example of a Wrangler configuration file that includes an mTLS certificate binding:

* [  wrangler.jsonc ](#tab-panel-8430)
* [  wrangler.toml ](#tab-panel-8431)

```jsonc
{
  "mtls_certificates": [
    {
      "binding": "<BINDING_NAME1>",
      "certificate_id": "<CERTIFICATE_ID1>",
    },
    {
      "binding": "<BINDING_NAME2>",
      "certificate_id": "<CERTIFICATE_ID2>",
    },
  ],
}
```

```toml
[[mtls_certificates]]
binding = "<BINDING_NAME1>"
certificate_id = "<CERTIFICATE_ID1>"

[[mtls_certificates]]
binding = "<BINDING_NAME2>"
certificate_id = "<CERTIFICATE_ID2>"
```

mTLS certificate bindings can then be used at runtime to communicate with secured origins via their [fetch method](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls).

### Workers AI

[Workers AI](https://developers.cloudflare.com/workers-ai/) allows you to run machine learning models, on the Cloudflare network, from your own code – whether that be from Workers, Pages, or anywhere via REST API.

Workers AI local development usage charges

Using Workers AI always accesses your Cloudflare account in order to run AI models and will incur usage charges even in local development.

Unlike other bindings, this binding is limited to one AI binding per Worker project.

* `binding` ` string ` required  
   * The binding name.

Example:

* [  wrangler.jsonc ](#tab-panel-8428)
* [  wrangler.toml ](#tab-panel-8429)

```jsonc
{
  "ai": {
    "binding": "AI", // available in your Worker code on `env.AI`
  },
}
```

```toml
[ai]
binding = "AI"
```

### Workflows

[Workflows](https://developers.cloudflare.com/workflows/) allow you to build durable, multi-step applications using the Workers platform. A Workflow binding enables your Worker to create and manage Workflow instances programmatically.

To bind Workflows to your Worker, assign an array of the below object to the `workflows` key.

* `binding` ` string ` required  
   * The binding name used to refer to the Workflow in your Worker. The binding must be [a valid JavaScript variable name ↗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar%5Fand%5Ftypes#variables). For example, `binding = "MY_WORKFLOW"` would be a valid name for the binding.
* `name` ` string ` required  
   * The name of the Workflow.
* `class_name` ` string ` required  
   * The name of the exported Workflow class. The `class_name` must match the name of the Workflow class exported from your Worker code.
* `script_name` ` string ` optional  
   * The name of the Worker script where the Workflow class is defined. Only required if the Workflow is defined in a different Worker than the one the binding is configured on.

Example:

* [  wrangler.jsonc ](#tab-panel-8432)
* [  wrangler.toml ](#tab-panel-8433)

```jsonc
{
  "workflows": [
    {
      "binding": "<BINDING_NAME>",
      "name": "<WORKFLOW_NAME>",
      "class_name": "<CLASS_NAME>",
    },
  ],
}
```

```toml
[[workflows]]
binding = "<BINDING_NAME>"
name = "<WORKFLOW_NAME>"
class_name = "<CLASS_NAME>"
```

## Assets

[Static assets](https://developers.cloudflare.com/workers/static-assets/) allow developers to run front-end websites on Workers. You can configure the directory of assets, an optional runtime binding, and routing configuration options.

You can only configure one collection of assets per Worker.

The following options are available under the `assets` key.

* `directory` ` string ` optional  
   * Folder of static assets to be served.  
   * Not required if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), which will automatically point to the client build output.
* `binding` ` string ` optional  
   * The binding name used to refer to the assets. Optional, and only useful when a Worker script is set with `main`.
* `run_worker_first` ` boolean | string[] ` optional, defaults to false  
   * Controls whether static assets are fetched directly, or a Worker script is invoked. Can be a boolean (`true`/`false`) or an array of route pattern strings with support for glob patterns (`*`) and exception patterns (`!` prefix). Patterns must begin with `/` or `!/`. Learn more about fetching assets when using [run\_worker\_first](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run-your-worker-script-first).
* `html_handling`: ` "auto-trailing-slash" | "force-trailing-slash" | "drop-trailing-slash" | "none" ` optional, defaults to "auto-trailing-slash"  
   * Determines the redirects and rewrites of requests for HTML content. Learn more about the various options in [assets routing](https://developers.cloudflare.com/workers/static-assets/routing/advanced/html-handling/).
* `not_found_handling`: ` "single-page-application" | "404-page" | "none" ` optional, defaults to "none"  
   * Determines the handling of requests that do not map to an asset. Learn more about the various options for [routing behavior](https://developers.cloudflare.com/workers/static-assets/#routing-behavior).

Example:

* [  wrangler.jsonc ](#tab-panel-8434)
* [  wrangler.toml ](#tab-panel-8435)

```
{
  "assets": {
    "directory": "./public",
    "binding": "ASSETS",
    "html_handling": "force-trailing-slash",
    "not_found_handling": "404-page",
  },
}
```

```
[assets]
directory = "./public"
binding = "ASSETS"
html_handling = "force-trailing-slash"
not_found_handling = "404-page"
```
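With a `binding` configured alongside `main`, your Worker code can hand requests off to the asset server explicitly. A minimal sketch, assuming the `ASSETS` binding from the example above (the `/api/` route is illustrative):

```javascript
// Sketch only: assumes an assets binding named ASSETS, as configured above.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "content-type": "application/json" },
      });
    }
    // Everything else is served from the configured assets directory.
    return env.ASSETS.fetch(request);
  },
};

export default worker;
```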

You can also configure `run_worker_first` with an array of route patterns:

* [  wrangler.jsonc ](#tab-panel-8436)
* [  wrangler.toml ](#tab-panel-8437)

```
{
  "assets": {
    "directory": "./public",
    "binding": "ASSETS",
    "run_worker_first": [
      "/api/*", // API calls go to Worker first
      "!/api/docs/*", // EXCEPTION: For /api/docs/*, try static assets first
    ],
  },
}
```

```
[assets]
directory = "./public"
binding = "ASSETS"
run_worker_first = [ "/api/*", "!/api/docs/*" ]
```
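The matching rules can be pictured with a small sketch (illustrative only, not Wrangler's implementation): `*` matches any sequence of characters, and `!` exception patterns take precedence over positive patterns.

```javascript
// Illustrative sketch of run_worker_first matching, not Wrangler's code.
function globToRegExp(pattern) {
  // Escape regex metacharacters, then turn "*" into ".*".
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`^${escaped.replace(/\*/g, ".*")}$`);
}

function runWorkerFirst(pathname, patterns) {
  const exceptions = patterns.filter((p) => p.startsWith("!")).map((p) => p.slice(1));
  const positives = patterns.filter((p) => !p.startsWith("!"));
  // Exceptions win: matching routes are tried against static assets first.
  if (exceptions.some((p) => globToRegExp(p).test(pathname))) return false;
  return positives.some((p) => globToRegExp(p).test(pathname));
}
```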

## Containers

You can define [Containers](https://developers.cloudflare.com/containers) to run alongside your Worker using the `containers` field.

Note

You must also define a Durable Object to communicate with your Container via Workers. This Durable Object's class name must match the `class_name` value in container configuration.

The following options are available:

* `image` ` string ` required  
   * The image to use for the container. This can either be a local path to a `Dockerfile`, in which case `wrangler deploy` will build and push the image, or it can be an image reference. Supported registries are the Cloudflare Registry, Docker Hub, and Amazon ECR. For more information, refer to [Image Management](https://developers.cloudflare.com/containers/platform-details/image-management/).
* `class_name` ` string ` required  
   * The corresponding Durable Object class name. This will make this Durable Object a container-enabled Durable Object and allow each instance to control a container. See [Durable Object Container Methods](https://developers.cloudflare.com/durable-objects/api/container/) for details.
* `instance_type` ` string ` optional  
   * The instance type of the container. This determines the amount of memory, CPU, and disk given to the container instance. The current options are `"lite"`, `"basic"`, `"standard-1"`, `"standard-2"`, `"standard-3"`, and `"standard-4"`. The default is `"lite"`. For more information, see the [instance types documentation](https://developers.cloudflare.com/containers/platform-details#instance-types).  
   * To specify a custom instance type, refer to [Custom Instance Types](#custom-instance-types).
* `max_instances` ` number ` optional  
   * The maximum number of concurrent container instances you want to run at any given moment. Stopped containers do not count towards this limit - you may have more container instances overall, but only this many actively running containers at once. If a request to start a container would exceed this limit, that request will error.
   * Defaults to 20.  
   * This value is only enforced when running in production on Cloudflare's network. This limit does not apply during local development, so you may run more instances than specified.
* `name` ` string ` optional  
   * The name of your container. Used as an identifier. This will default to a combination of your Worker name, the class name, and your environment.
* `image_build_context` ` string ` optional  
   * The build context of the application, by default it is the directory of `image`.
* `image_vars` ` Record<string, string> ` optional  
   * Build-time variables, equivalent to using `--build-arg` with `docker build`. If you want to provide environment variables to your container at _runtime_, you should [use secret bindings or envVars on the Container class](https://developers.cloudflare.com/containers/examples/env-vars-and-secrets/).
* `rollout_active_grace_period` ` number ` optional  
   * The minimum number of seconds to wait before an active container instance becomes eligible for updating during a [rollout](https://developers.cloudflare.com/containers/faq#how-do-container-updates-and-rollouts-work). At that point, the container will be sent a `SIGTERM` and has 15 minutes to shut down before it is forcibly killed and updated.  
   * Defaults to `0`.
* `rollout_step_percentage` ` number | number[] ` optional  
   * Configures what percentage of instances should be updated at each step of a [rollout](https://developers.cloudflare.com/containers/faq#how-do-container-updates-and-rollouts-work).  
   * If this is set to a single number, each step will rollout to that percentage of instances. The options are `5`, `10`, `20`, `25`, `50` or `100`.  
   * If this is an array of numbers, each step specifies the cumulative rollout progress, so the final step must be `100`.  
   * Defaults to `[10, 100]`.  
   * This can be overridden ad hoc by deploying with the `--containers-rollout=immediate` flag, which will roll out to 100% of instances in one step. Note that flag will not override `rollout_active_grace_period`, if configured.
* `wrangler_ssh` ` object ` optional  
   * Configuration for SSH through Wrangler. Refer to [Wrangler SSH](#wrangler-ssh).
* `authorized_keys` ` object[] ` optional  
   * Public keys that should be added to the Container's `authorized_keys` file.

* [  wrangler.jsonc ](#tab-panel-8456)
* [  wrangler.toml ](#tab-panel-8457)

```
{
  "containers": [
    {
      "class_name": "MyContainer",
      "image": "./Dockerfile",
      "max_instances": 10,
      "instance_type": "basic", // Optional, defaults to "lite"
      "image_vars": {
        "FOO": "BAR",
      },
    },
  ],
  "durable_objects": {
    "bindings": [
      {
        "name": "MY_CONTAINER",
        "class_name": "MyContainer",
      },
    ],
  },
  "migrations": [
    {
      "tag": "v1",
      "new_sqlite_classes": ["MyContainer"],
    },
  ],
}
```

```
[[containers]]
class_name = "MyContainer"
image = "./Dockerfile"
max_instances = 10
instance_type = "basic"

  [containers.image_vars]
  FOO = "BAR"

[[durable_objects.bindings]]
name = "MY_CONTAINER"
class_name = "MyContainer"

[[migrations]]
tag = "v1"
new_sqlite_classes = [ "MyContainer" ]
```
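The cumulative form of `rollout_step_percentage` described above can be translated into per-step sizes with a small sketch (illustrative only, not Wrangler's code):

```javascript
// Converts cumulative rollout percentages into the size of each individual step.
function stepSizes(cumulative) {
  return cumulative.map((value, index) => value - (cumulative[index - 1] ?? 0));
}

// With the default [10, 100]: the first step updates 10% of instances,
// and the second step updates the remaining 90%.
```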

### Custom Instance Types

In place of the [named instance types](https://developers.cloudflare.com/containers/platform-details/limits/#instance-types), you can set a custom instance type by individually configuring vCPU, memory, and disk. See the [limits documentation](https://developers.cloudflare.com/containers/platform-details/limits/#custom-instance-types) for constraints on custom instance types.

The following options are available:

* `vcpu` ` number ` optional  
   * The vCPU to be used by your container. Defaults to `0.0625` (1/16 vCPU).
* `memory_mib` ` number ` optional  
   * The memory to be used by your container, in MiB. Defaults to `256`.
* `disk_mb` ` number ` optional  
   * The disk to be used by your container, in MB. Defaults to `2000` (2GB).

* [  wrangler.jsonc ](#tab-panel-8442)
* [  wrangler.toml ](#tab-panel-8443)

```
{
  "containers": [
    {
      "image": "./Dockerfile",
      "instance_type": {
        "vcpu": 1,
        "memory_mib": 1024,
        "disk_mb": 4000,
      },
    },
  ],
}
```

```
[[containers]]
image = "./Dockerfile"

  [containers.instance_type]
  vcpu = 1
  memory_mib = 1_024
  disk_mb = 4_000
```

### Wrangler SSH

Configuration for SSH access to a Container instance through Wrangler. For a guide on connecting to Containers via SSH, refer to [SSH](https://developers.cloudflare.com/containers/ssh/).

The following options are available:

* `enabled` ` boolean ` optional  
   * Whether SSH through Wrangler is enabled. Defaults to `false`.
* `port` ` number ` optional  
   * The port for the SSH service to run on. Defaults to `22`.
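A sketch of how these options might sit on a container entry (the placement and values are illustrative, based on the `wrangler_ssh` field listed under [Containers](#containers)):

```
{
  "containers": [
    {
      "class_name": "MyContainer",
      "image": "./Dockerfile",
      "wrangler_ssh": {
        "enabled": true,
        "port": 2222,
      },
    },
  ],
}
```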

### Authorized keys

An authorized key is a public key that can be used to SSH into a Container.

The following are properties of a key:

* `name` ` string ` required  
   * The display name of the key.
* `public_key` ` string ` required  
   * The public key itself.  
   * Currently only the `ssh-ed25519` key type is supported.

## Bundling

Note

Wrangler bundling is not applicable if you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).

Wrangler can operate in two modes: the default bundling mode and `--no-bundle` mode. In bundling mode, Wrangler will traverse all the imports of your code and generate a single JavaScript "entry-point" file. Imported source code is "inlined/bundled" into this entry-point file.

It is also possible to include additional modules in your Worker, which are uploaded alongside the entry-point. You specify which additional modules should be included in your Worker using the `rules` key, making these modules available to be imported when your Worker is invoked. The `rules` key will be an array of the below object.

* `type` ` string ` required  
   * The type of module. Must be one of: `ESModule`, `CommonJS`, `CompiledWasm`, `Text` or `Data`.
* `globs` ` string[] ` required  
   * An array of glob rules (for example, `["**/*.md"]`). Refer to [glob ↗](https://man7.org/linux/man-pages/man7/glob.7.html).
* `fallthrough` ` boolean ` optional  
   * When set to `true` on a rule, this allows you to have multiple rules for the same `Type`.

Example:

* [  wrangler.jsonc ](#tab-panel-8440)
* [  wrangler.toml ](#tab-panel-8441)

```
{
  "rules": [
    {
      "type": "Text",
      "globs": ["**/*.md"],
      "fallthrough": true,
    },
  ],
}
```

```
[[rules]]
type = "Text"
globs = [ "**/*.md" ]
fallthrough = true
```

### Importing modules within a Worker

You can import and refer to these modules within your Worker, like so:

index.js

```
import markdown from "./example.md";

export default {
  async fetch() {
    return new Response(markdown);
  },
};
```

### Find additional modules

Normally Wrangler will only include additional modules that are statically imported in your source code, as in the example above. By setting `find_additional_modules` to `true` in your configuration file, Wrangler will traverse the file tree below `base_dir`. Any files that match `rules` will also be included as unbundled, external modules in the deployed Worker. `base_dir` defaults to the directory containing your `main` entrypoint.

See [https://developers.cloudflare.com/workers/wrangler/bundling/ ↗](https://developers.cloudflare.com/workers/wrangler/bundling/) for more details and examples.
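For example, a configuration that pulls in all Markdown files below the entry-point directory might look like this (a sketch; adjust `rules` to your module types):

```
{
  "find_additional_modules": true,
  "rules": [
    {
      "type": "Text",
      "globs": ["**/*.md"],
    },
  ],
}
```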

### Python Workers

By default, Python Workers bundle the files and folders in `python_modules` at the root of your Worker (alongside your Wrangler configuration file). This directory holds your vendored packages and is where the pywrangler tool copies packages into. In some cases, files in this folder may be large, and if your Worker does not require them, they only increase your bundle size.

To exclude such files from the bundle, use the `python_modules.excludes` option, for example:

* [  wrangler.jsonc ](#tab-panel-8438)
* [  wrangler.toml ](#tab-panel-8439)

```
{
  "python_modules": {
    "excludes": ["**/*.pyc", "**/__pycache__"],
  },
}
```

```
[python_modules]
excludes = [ "**/*.pyc", "**/__pycache__" ]
```

This will exclude any `.pyc` files and `__pycache__` directories inside any subdirectory of `python_modules`.

By default, `python_modules.excludes` is set to `["**/*.pyc"]`, so be sure to include this when setting it to a different value.

## Local development settings

Note

If you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you should use Vite's [server options ↗](https://vite.dev/config/server-options.html) instead.

You can configure various aspects of local development, such as the local protocol or port.

* `ip` ` string ` optional  
   * IP address for the local dev server to listen on. Defaults to `localhost`.
* `port` ` number ` optional  
   * Port for the local dev server to listen on. Defaults to `8787`.
* `local_protocol` ` string ` optional  
   * Protocol that local dev server listens to requests on. Defaults to `http`.
* `upstream_protocol` ` string ` optional  
   * Protocol that the local dev server forwards requests on. Defaults to `https`.
* `host` ` string ` optional  
   * Host to forward requests to, defaults to the host of the first `route` of the Worker.
* `enable_containers` ` boolean ` optional  
   * Determines whether to enable containers during a local dev session, if they have been configured. Defaults to `true`. If set to `false`, you can develop the rest of your application without requiring Docker or other container tool, as long as you do not invoke any code that interacts with containers.
* `container_engine` ` string ` optional  
   * Used for local development of [Containers](https://developers.cloudflare.com/containers/local-dev). Wrangler will attempt to automatically find the correct socket to use to communicate with your container engine. If that does not work (usually surfacing as an `internal error` when attempting to connect to your Container), you can try setting the socket path using this option. You can also set this via the `DOCKER_HOST` environment variable (for example, `DOCKER_HOST=unix:///var/run/docker.sock`).
* `generate_types` ` boolean ` optional  
   * Generate types from your Worker configuration. Defaults to `false`.

* [  wrangler.jsonc ](#tab-panel-8444)
* [  wrangler.toml ](#tab-panel-8445)

```
{
  "dev": {
    "ip": "192.168.1.1",
    "port": 8080,
    "local_protocol": "http",
  },
}
```

```
[dev]
ip = "192.168.1.1"
port = 8_080
local_protocol = "http"
```

## Secrets

[Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are a type of binding that allow you to [attach encrypted text values](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) to your Worker.

### Local development

Warning

Do not use `vars` to store sensitive information in your Worker's Wrangler configuration file. Use secrets instead.

Put secrets for use in local development in either a `.dev.vars` file or a `.env` file, in the same directory as the Wrangler configuration file.

Note

You can use the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) to declare which secret names your Worker requires. When defined, only the keys listed in `secrets.required` are loaded from `.dev.vars` or `.env`. Additional keys are excluded and missing keys produce a warning.

Choose to use either `.dev.vars` or `.env` but not both. If you define a `.dev.vars` file, then values in `.env` files will not be included in the `env` object during local development.

These files should be formatted using the [dotenv ↗](https://hexdocs.pm/dotenvy/dotenv-file-format.html) syntax. For example:

.dev.vars / .env

```
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
```

Do not commit secrets to git

The `.dev.vars` and `.env` files should not be committed to git. Add `.dev.vars*` and `.env*` to your project's `.gitignore` file.

To set different secrets for each Cloudflare environment, create files named `.dev.vars.<environment-name>` or `.env.<environment-name>`.

When you select a Cloudflare environment in your local development, the corresponding environment-specific file will be loaded ahead of the generic `.dev.vars` (or `.env`) file.

* When using `.dev.vars.<environment-name>` files, all secrets must be defined per environment. If `.dev.vars.<environment-name>` exists then only this will be loaded; the `.dev.vars` file will not be loaded.
* In contrast, all matching `.env` files are loaded and the values are merged. For each variable, the value from the most specific file is used, with the following precedence:  
   * `.env.<environment-name>.local` (most specific)  
   * `.env.local`  
   * `.env.<environment-name>`  
   * `.env` (least specific)
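The merge behavior for `.env` files can be sketched as follows (illustrative only, not Wrangler's implementation):

```javascript
// Files are applied from least to most specific, so more specific values win.
function mergeEnvFiles(filesLeastToMostSpecific) {
  return Object.assign({}, ...filesLeastToMostSpecific);
}

// .env -> .env.staging -> .env.local (ordered least to most specific)
const merged = mergeEnvFiles([
  { API_URL: "https://prod.example.com", DEBUG: "false" }, // .env
  { API_URL: "https://staging.example.com" }, // .env.staging
  { DEBUG: "true" }, // .env.local
]);
```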

Controlling `.env` handling

It is possible to control how `.env` files are loaded in local development by setting environment variables on the process running the tools.

* To disable loading local dev vars from `.env` files without providing a `.dev.vars` file, set the `CLOUDFLARE_LOAD_DEV_VARS_FROM_DOT_ENV` environment variable to `"false"`.
* To include every environment variable defined in your system's process environment as a local development variable, ensure there is no `.dev.vars` and then set the `CLOUDFLARE_INCLUDE_PROCESS_ENV` environment variable to `"true"`. This is not needed when using the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property), which loads from `process.env` automatically.

### `secrets` configuration property

Note

This property is experimental and subject to change.

The `secrets` configuration property lets you declare the secret names your Worker requires in your Wrangler configuration file. Required secrets are validated during local development and deploy, and used as the source of truth for type generation.

* [  wrangler.jsonc ](#tab-panel-8446)
* [  wrangler.toml ](#tab-panel-8447)

```
{
  "secrets": {
    "required": ["API_KEY", "DB_PASSWORD"],
  },
}
```

```
[secrets]
required = [ "API_KEY", "DB_PASSWORD" ]
```

**Type generation**

When `secrets` is defined at any config level, `wrangler types` generates typed bindings from the names listed in `secrets.required` and no longer infers secret names from `.dev.vars` or `.env` files. This lets you run type generation in environments where those files are not present.

Per-environment secrets are supported. Each named environment produces its own interface, and the aggregated `Env` type marks secrets that only appear in some environments as optional.

**Deploy**

When `secrets` is defined, `wrangler deploy` and `wrangler versions upload` validate that all secrets in `secrets.required` are configured on the Worker before the operation succeeds. If any required secrets are missing, the command fails with an error listing which secrets need to be set.

## Module Aliasing

Note

If you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), `alias` is replaced by Vite's [resolve.alias ↗](https://vite.dev/config/shared-options.html#resolve-alias).

You can configure Wrangler to replace all calls to import a particular package with a module of your choice, by configuring the `alias` field:

* [  wrangler.jsonc ](#tab-panel-8448)
* [  wrangler.toml ](#tab-panel-8449)

```
{
  "alias": {
    "foo": "./replacement-module-filepath",
  },
}
```

```
[alias]
foo = "./replacement-module-filepath"
```

replacement-module-filepath.js

```
export const bar = "baz";
```

With the configuration above, any calls to `import` or `require()` the module `foo` will be aliased to point to your replacement module:

JavaScript

```
import { bar } from "foo";

console.log(bar); // returns "baz"
```

### Bundling issues

When Wrangler bundles your Worker, it might fail to resolve dependencies. Setting up an alias for such dependencies is a simple way to fix the issue.

However, before doing so, verify that the package is correctly installed in your project, either as a direct dependency in `package.json` or as a transitive dependency.

If an alias is the correct solution for your dependency issue, you have several options:

* **Alternative implementation** — Implement the module's logic in a Worker-compatible manner, ensuring that all the functionality remains intact.
* **No-op module** — If the module's logic is unused or irrelevant, point the alias to an empty file. This makes the module a no-op while fixing the bundling issue.
* **Runtime error** — If the module's logic is unused and the Worker should not attempt to use it (for example, because of security vulnerabilities), point the alias to a file with a single top-level `throw` statement. This fixes the bundling issue while ensuring the module is never actually used.

### Example: Aliasing dependencies from NPM

You can use module aliasing to provide an implementation of an NPM package that does not work on Workers — even if you only rely on that NPM package indirectly, as a dependency of one of your Worker's dependencies.

For example, some NPM packages depend on [node-fetch ↗](https://www.npmjs.com/package/node-fetch), a package that provided a polyfill of the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/), before it was built into Node.js.

`node-fetch` isn't needed in Workers, because the `fetch()` API is provided by the Workers runtime. And `node-fetch` doesn't work on Workers, because it relies on currently unsupported Node.js APIs from the `http`/`https` modules.

You can alias all imports of `node-fetch` to instead point directly to the `fetch()` API that is built into the Workers runtime:

* [  wrangler.jsonc ](#tab-panel-8450)
* [  wrangler.toml ](#tab-panel-8451)

```
{
  "alias": {
    "node-fetch": "./fetch-polyfill",
  },
}
```

```
[alias]
node-fetch = "./fetch-polyfill"
```

./fetch-polyfill

```
export default fetch;
```

### Example: Aliasing Node.js APIs

You can use module aliasing to provide your own polyfill implementation of a Node.js API that is not yet available in the Workers runtime.

For example, let's say the NPM package you rely on calls [fs.readFile ↗](https://nodejs.org/api/fs.html#fsreadfilepath-options-callback). You can alias the fs module by adding the following to your Worker's Wrangler configuration file:

* [  wrangler.jsonc ](#tab-panel-8452)
* [  wrangler.toml ](#tab-panel-8453)

```
{
  "alias": {
    "fs": "./fs-polyfill",
  },
}
```

```
[alias]
fs = "./fs-polyfill"
```

./fs-polyfill

```
export function readFile() {
  // ...
}
```

In many cases, this allows you to provide just enough of an API to make a dependency work. You can learn more about Cloudflare Workers' support for Node.js APIs on the [Cloudflare Workers Node.js API documentation page](https://developers.cloudflare.com/workers/runtime-apis/nodejs/).

## Source maps

[Source maps](https://developers.cloudflare.com/workers/observability/source-maps/) translate compiled and minified code back to the original code that you wrote. Source maps are combined with the stack trace returned by the JavaScript runtime to present you with a stack trace that points to your original source.

* `upload_source_maps` ` boolean `  
   * When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) or [wrangler versions deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#versions-deploy).

Example:

* [  wrangler.jsonc ](#tab-panel-8454)
* [  wrangler.toml ](#tab-panel-8455)

```
{
  "upload_source_maps": true,
}
```

```
upload_source_maps = true
```

## Workers Sites

Use Workers Static Assets Instead

You should use [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) to host full-stack applications instead of Workers Sites. Workers Sites has been deprecated in Wrangler v4, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) does not support it. Do not use Workers Sites for new projects.

[Workers Sites](https://developers.cloudflare.com/workers/configuration/sites/) allows you to host static websites, or dynamic websites using frameworks like Vue or React, on Workers.

* `bucket` ` string ` required  
   * The directory containing your static assets. It must be a path relative to your Wrangler configuration file.
* `include` ` string[] ` optional  
   * An exclusive list of `.gitignore`\-style patterns that match file or directory names from your bucket location. Only matched items will be uploaded.
* `exclude` ` string[] ` optional  
   * A list of `.gitignore`\-style patterns that match files or directories in your bucket that should be excluded from uploads.

Example:

* [  wrangler.jsonc ](#tab-panel-8458)
* [  wrangler.toml ](#tab-panel-8459)

```
{
  "site": {
    "bucket": "./public",
    "include": ["upload_dir"],
    "exclude": ["ignore_dir"],
  },
}
```

```
[site]
bucket = "./public"
include = [ "upload_dir" ]
exclude = [ "ignore_dir" ]
```

## Proxy support

Corporate networks often route traffic through proxies, which can cause connectivity issues. To configure Wrangler with the appropriate proxy details, [add the following environmental variables](https://developers.cloudflare.com/workers/configuration/environment-variables/):

* `https_proxy`
* `HTTPS_PROXY`
* `http_proxy`
* `HTTP_PROXY`

To configure this on macOS, add `HTTP_PROXY=http://<YOUR_PROXY_HOST>:<YOUR_PROXY_PORT>` before your Wrangler commands.

Example:

Terminal window

```
$ HTTP_PROXY=http://localhost:8080 wrangler dev
```

If your IT team has configured your computer's proxy settings, be aware that the first non-empty environment variable in this list will be used when Wrangler makes outgoing requests.

For example, if both `https_proxy` and `http_proxy` are set, Wrangler will only use `https_proxy` for outgoing requests.
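The selection order can be sketched as follows (illustrative only, not Wrangler's implementation):

```javascript
// Returns the first non-empty proxy variable, in the order listed above.
function pickProxy(env) {
  for (const name of ["https_proxy", "HTTPS_PROXY", "http_proxy", "HTTP_PROXY"]) {
    if (env[name]) return env[name];
  }
  return undefined;
}
```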

## Source of truth

We recommend treating your Wrangler configuration file as the source of truth for your Worker configuration, and to avoid making changes to your Worker via the Cloudflare dashboard if you are using Wrangler.

If you need to make changes to your Worker from the Cloudflare dashboard, the dashboard will generate a TOML snippet for you to copy into your Wrangler configuration file, which will help ensure your Wrangler configuration file is always up to date.

If you change your environment variables in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behavior, add `keep_vars = true` to your Wrangler configuration file.

If you change your routes in the dashboard, Wrangler will override them in the next deploy with the routes you have set in your Wrangler configuration file. To manage routes via the Cloudflare dashboard only, remove any `route` and `routes` keys from your Wrangler configuration file. Then add `workers_dev = false` to your Wrangler configuration file. For more information, refer to [Deprecations](https://developers.cloudflare.com/workers/wrangler/deprecations/#other-deprecated-behavior).

Wrangler will not delete your secrets (encrypted environment variables) unless you run `wrangler secret delete <key>`.

## Generated Wrangler configuration

Note

This section describes a feature that can be implemented by frameworks and other build tools that are integrating with Wrangler.

It is unlikely that an application developer will need to use this feature, but it is documented here to help you understand when Wrangler is using a generated configuration rather than the original, user's configuration.

For example, when using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), an output Worker configuration file is generated as part of the build. This is then used for preview and deployment.

Some framework tools, or custom pre-build processes, generate a modified Wrangler configuration to be used to deploy the Worker code. In this case, the tool may also create a special `.wrangler/deploy/config.json` file that redirects Wrangler to use the generated configuration rather than the original, user's configuration.

Wrangler uses this generated configuration only for the following deploy and dev related commands:

* `wrangler deploy`
* `wrangler dev`
* `wrangler versions upload`
* `wrangler versions deploy`
* `wrangler pages deploy`
* `wrangler pages functions build`

When running these commands, Wrangler looks up the directory tree from the current working directory for a file at the path `.wrangler/deploy/config.json`. This file must contain only a single JSON object of the form:

```
{ "configPath": "../../path/to/wrangler.jsonc" }
```

When this `config.json` file exists, Wrangler will follow the `configPath` (relative to the `.wrangler/deploy/config.json` file) to find the generated Wrangler configuration file to load and use in the current command. Wrangler will display messaging to the user to indicate that the configuration has been redirected to a different file than the user's configuration file.

The generated configuration file should not include any [environments](#environments). This is because such a file, when required, should be created as part of a build step, which should already target a specific environment. These build tools should generate distinct deployment configuration files for different environments.

### Custom build tool example

A common example of using a redirected configuration is where a custom build tool, or framework, wants to modify the user's configuration to be used when deploying, by generating a new configuration in a `dist` directory.

* First, the user writes code that uses Cloudflare Workers resources, configured via a user's Wrangler configuration file like the following:  
   * [  wrangler.jsonc ](#tab-panel-8460)  
   * [  wrangler.toml ](#tab-panel-8461)  
```  
{  
  "$schema": "./node_modules/wrangler/config-schema.json",  
  "name": "my-worker",  
  "main": "src/index.ts",  
  "vars": {  
    "MY_VARIABLE": "production variable",  
  },  
  "env": {  
    "staging": {  
      "vars": {  
        "MY_VARIABLE": "staging variable",  
      },  
    },  
  },  
}  
```  
```  
"$schema" = "./node_modules/wrangler/config-schema.json"  
name = "my-worker"  
main = "src/index.ts"  
[vars]  
MY_VARIABLE = "production variable"  
[env.staging.vars]  
MY_VARIABLE = "staging variable"  
```  
This configuration points `main` at the user's code entry-point and defines the `MY_VARIABLE` variable in two different environments.
* Then, the user runs a custom build for a given environment (for example `staging`). This will read the user's Wrangler configuration file to find the source code entry-point and environment specific settings:  
Terminal window  
```  
> my-tool build --env=staging  
```
* `my-tool` generates a `dist` directory that contains both compiled code and a new generated deployment configuration file, containing only the settings for the given environment. It also creates a `.wrangler/deploy/config.json` file that redirects Wrangler to the new, generated deployment configuration file:  
   * The resulting directory structure:  
```  
├── dist  
│   ├── index.js  
│   └── wrangler.jsonc  
└── .wrangler  
    └── deploy  
        └── config.json  
```

The generated `dist/wrangler.jsonc` might contain:

```
{
  "name": "my-worker",
  "main": "./index.js",
  "vars": {
    "MY_VARIABLE": "staging variable"
  }
}
```

Now, the `main` property points to the generated code entry-point, no environment is defined, and the `MY_VARIABLE` variable is resolved to the staging environment value.

And the `.wrangler/deploy/config.json` contains the path to the generated configuration file:

```
{
  "configPath": "../../dist/wrangler.jsonc"
}
```


---

---
title: Custom builds
description: Customize how your code is compiled, before being processed by Wrangler.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Custom builds

Custom builds let you customize how your code is compiled before it is processed by Wrangler.

Note

Wrangler runs [esbuild ↗](https://esbuild.github.io/) by default as part of the `dev` and `deploy` commands, and bundles your Worker project into a single Worker script. Refer to [Bundling](https://developers.cloudflare.com/workers/wrangler/bundling/).

## Configure custom builds

Custom builds are configured by adding a `[build]` section in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), and using the following options for configuring your custom build.

* `command` ` string ` optional  
   * The command used to build your Worker. On Linux and macOS, the command is executed in the `sh` shell; on Windows, it is executed in the `cmd` shell. The `&&` and `||` shell operators may be used. This command will be run as part of `wrangler dev` and `npx wrangler deploy`.
* `cwd` ` string ` optional  
   * The directory in which the command is executed.
* `watch_dir` ` string | string\[] ` optional  
   * The directory to watch for changes while using `wrangler dev`. Defaults to the current working directory.

Example:

wrangler.jsonc

```
{
  "build": {
    "command": "npm run build",
    "cwd": "build_cwd",
    "watch_dir": "build_watch_dir"
  }
}
```

wrangler.toml

```
[build]
command = "npm run build"
cwd = "build_cwd"
watch_dir = "build_watch_dir"
```


---

---
title: Deprecations
description: The differences between Wrangler versions, specifically deprecations and breaking changes.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Deprecations

Review the differences between Wrangler versions, specifically deprecations and breaking changes.

## Wrangler v4

### Workers Sites

Usage of [Workers Sites](https://developers.cloudflare.com/workers/wrangler/configuration/#workers-sites) is deprecated. Instead, we recommend migrating to [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/). Support for using Workers Sites with Wrangler will be removed in a future version of Wrangler.

### Service environments

Usage of [Service Environments ↗](https://blog.cloudflare.com/introducing-worker-services/#services-have-environments), enabled via the `legacy_env` property in Wrangler config, is deprecated. Instead, we recommend migrating to [Wrangler Environments](https://developers.cloudflare.com/workers/wrangler/configuration/#environments). Support for using Service Environments with Wrangler will be removed in a future version of Wrangler.

## Wrangler v3

### Deprecated commands

The following commands are deprecated as of Wrangler v3. They will be fully removed in a future version of Wrangler.

#### `generate`

The `wrangler generate` command is deprecated, but still active in v3. It will be fully removed in v4.

Use `npm create cloudflare@latest` for new Workers and Pages projects.

#### `publish`

The `wrangler publish` command is deprecated, but still active in v3. `wrangler publish` will be fully removed in v4.

Use [npx wrangler deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) to deploy Workers.

#### `pages publish`

The `wrangler pages publish` command is deprecated, but still active in v3. It will be fully removed in v4.

Use [wrangler pages deploy](https://developers.cloudflare.com/workers/wrangler/commands/general/#deploy) to deploy Pages.

#### `version`

The `wrangler version` command is deprecated. Instead, use `wrangler --version` to check the current version of Wrangler.

### Deprecated options

#### `--experimental-local`

`wrangler dev` in v3 is local by default so this option is no longer necessary.

#### `--local`

`wrangler dev` in v3 is local by default so this option is no longer necessary.

#### `--persist`

`wrangler dev` automatically persists data by default so this option is no longer necessary.

#### `-- <command>`, `--proxy`, and `--script-path` in `wrangler pages dev`

These options prevent `wrangler pages dev` from being able to accurately emulate production's behavior for serving static assets and have therefore been deprecated. Instead of relying on Wrangler to proxy through to some other upstream dev server, you can emulate a more accurate behavior by building your static assets to a directory and pointing Wrangler to that directory with `wrangler pages dev <directory>`.

#### `--legacy-assets` and the `legacy_assets` config file property

We recommend you [migrate to Workers Static Assets ↗](https://developers.cloudflare.com/workers/static-assets/).

#### `--node-compat` and the `node_compat` config file property

Instead, use the [nodejs\_compat compatibility flag ↗](https://developers.cloudflare.com/workers/runtime-apis/nodejs). This includes the functionality from legacy `node_compat` polyfills and natively implemented Node.js APIs.

#### The `usage_model` config file property

This no longer has any effect, after the [rollout of Workers Standard Pricing ↗](https://blog.cloudflare.com/workers-pricing-scale-to-zero/).

## Wrangler v2

Wrangler v2 introduces new fields for configuration and new features for developing and deploying a Worker, while deprecating some redundant fields.

* `wrangler.toml` is no longer mandatory.
* `dev` and `publish` accept CLI arguments.
* `tail` can be run on arbitrary Worker names.
* `init` creates a project boilerplate.
* JSON bindings for `vars`.
* Local mode for `wrangler dev`.
* Module system (for both modules and service worker format Workers).
* DevTools.
* TypeScript support.
* Sharing development environment on the Internet.
* Wider platform compatibility.
* Developer hotkeys.
* Better configuration validation.

The following video describes some of the major changes in Wrangler v2, and shows you how Wrangler v2 can help speed up your workflow.

### Common deprecations

Refer to the following list for common fields that are no longer required.

* `type` is no longer required. Wrangler will infer the correct project type automatically.
* `zone_id` is no longer required. It can be deduced from the routes directly.
* `build.upload.format` is no longer used. The format is now inferred automatically from the code.
* `build.upload.main` and `build.upload.dir` are no longer required. Use the top level `main` field, which now serves as the entry-point for the Worker.
* `site.entry-point` is no longer required. The entry point should be specified through the `main` field.
* `webpack_config` and `webpack` properties are no longer supported. Refer to [Migrate webpack projects from Wrangler version 1](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/eject-webpack/). Here are the Wrangler v1 commands that are no longer supported:
* `wrangler preview` \- Use the `wrangler dev` command, for running your worker in your local environment.
* `wrangler generate` \- If you want to use a starter template, clone its GitHub repository and manually initialize it.
* `wrangler route` \- Routes are defined in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).
* `wrangler report` \- If you find a bug, report it at [Wrangler issues ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose).
* `wrangler build` \- If you wish to access the output from bundling your Worker, use `wrangler deploy --outdir=path/to/output`.

#### New fields

These are new fields that can be added to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

* **`main`**: `string`, optional  
The `main` field is used to specify an entry point to the Worker. It may be in the established service worker format, or the newer, preferred modules format. An entry point is now explicitly required, and can be configured either via the `main` field, or passed directly as a command line argument, for example, `wrangler dev index.js`. This field replaces the legacy `build.upload.main` field (which only applied to modules format Workers).
* **`rules`**: `array`, optional  
The `rules` field is an array of mappings between module types and file patterns. It instructs Wrangler to interpret specific files differently than JavaScript. For example, this is useful for reading text-like content as text files, or compiled WASM as ready to instantiate and execute. These rules can apply to Workers of both the established service worker format, and the newer modules format. This field replaces the legacy `build.upload.rules` field (which only applied to modules format Workers).

#### Non-mandatory fields

A few configuration fields that were previously required are now optional in particular situations. They can either be inferred, or added as an optimization. No fields are required anymore when starting with Wrangler v2, and you can gradually add configuration as the need arises.

* **`name`**: `string`  
The `name` configuration field is now not required for `wrangler dev`, or any of the `wrangler kv:*` commands. Further, it can also be passed as a command line argument as `--name <name>`. It is still required for `wrangler deploy`.
* **`account_id`**: `string`  
The `account_id` field is not required for any of the commands. Any relevant commands will check if you are logged in, and if not, will prompt you to log in. Once logged in, it will use your account ID and will not prompt you again until your login session expires. If you have multiple account IDs, you will be presented with a list of accounts to choose from.  
You can still configure `account_id` in your Wrangler file, or as an environment variable `CLOUDFLARE_ACCOUNT_ID`. This makes startup faster and bypasses the list of choices if you have multiple IDs. The `CLOUDFLARE_API_TOKEN` environment variable is also useful for situations where it is not possible to login interactively. To learn more, visit [Running in CI/CD](https://developers.cloudflare.com/workers/ci-cd/external-cicd/).
* **`workers_dev`** `boolean`, default: `true` when no routes are present  
The `workers_dev` field is used to indicate that the Worker should be published to a `*.workers.dev` subdomain. For example, for a Worker named `my-worker` and a previously configured `*.workers.dev` subdomain `username`, the Worker will get published to `my-worker.username.workers.dev`. This field is not mandatory, and defaults to `true` when `route` or `routes` are not configured. When routes are present, it defaults to `false`. If you want to publish the Worker neither to a `*.workers.dev` subdomain nor to any routes, set `workers_dev` to `false`. This is useful when you are publishing a Worker as a standalone service that can only be accessed from another Worker via a service binding (`services`).
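The resulting preview URL is simply the Worker name prefixed to the account's `workers.dev` subdomain, which can be expressed as a one-liner (an illustrative helper, not part of Wrangler):

```javascript
// Illustrative only: derive the *.workers.dev URL a Worker is published to
// when workers_dev is true.
function workersDevUrl(workerName, subdomain) {
  return `https://${workerName}.${subdomain}.workers.dev`;
}

console.log(workersDevUrl("my-worker", "username"));
// https://my-worker.username.workers.dev
```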

#### Deprecated fields (non-breaking)

A few configuration fields are deprecated, but their presence is not a breaking change yet. It is recommended to read the warning messages and follow the instructions to migrate to the new configuration. They will be removed and stop working in a future version.

* **`zone_id`**: `string`, deprecated  
The `zone_id` field is deprecated and will be removed in a future release. It is now inferred from `route`/`routes`, and optionally from `dev.host` when using `wrangler dev`. This also makes it simpler to deploy a single Worker to multiple domains.
* **`build.upload`**: `object`, deprecated  
The `build.upload` field is deprecated and will be removed in a future release. Its usage results in a warning with suggestions on rewriting the configuration file to remove the warnings.  
   * `build.upload.main`/`build.upload.dir` are replaced by the `main` field and are applicable to both service worker format and modules format Workers.  
   * `build.upload.rules` is replaced by the `rules` field and is applicable to both service worker format and modules format Workers.  
   * `build.upload.format` is no longer specified and is automatically inferred by `wrangler`.

#### Deprecated fields (breaking)

A few configuration fields are deprecated and will not work as expected anymore. It is recommended to read the error messages and follow the instructions to migrate to the new configuration.

* **`site.entry-point`**: `string`, deprecated  
The `site.entry-point` configuration was used to specify an entry point for Workers with a `[site]` configuration. This has been replaced by the top-level `main` field.
* **`type`**: `rust` | `javascript` | `webpack`, deprecated  
The `type` configuration was used to specify the type of Worker. It has since been made redundant and is now inferred from usage. If you were using `type = "webpack"` (and the optional `webpack_config` field), you should read the [webpack migration guide](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/eject-webpack/) to modify your project and use a custom build instead.

### Deprecated commands

The following commands are deprecated in Wrangler as of Wrangler v2.

#### `build`

The `wrangler build` command is no longer available for building the Worker.

The equivalent functionality can be achieved by `wrangler publish --dry-run --outdir=path/to/build`.

#### `config`

The `wrangler config` command is no longer available for authenticating via an API token.

Use `wrangler login` / `wrangler logout` to manage OAuth authentication, or provide an API token via the `CLOUDFLARE_API_TOKEN` environment variable.

#### `preview`

The `wrangler preview` command is no longer available for creating a temporary preview instance of the Worker.

Use `wrangler dev` to test a Worker during development.

#### `subdomain`

The `wrangler subdomain` command is no longer available for creating a `workers.dev` subdomain.

Create the `workers.dev` subdomain in **Workers & Pages** \> select your Worker > Your subdomain > **Change**.

#### `route`

The `wrangler route` command is no longer available to configure a route for a Worker.

Routes are specified in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

### Other deprecated behavior

* Cloudflare dashboard-defined routes will not be added alongside Wrangler-defined routes. Wrangler-defined routes are the `route` or `routes` key in your `wrangler.toml`. If both are defined, only routes defined in `wrangler.toml` will be valid. To manage routes via the Cloudflare dashboard only, remove any `route` and `routes` keys from your Wrangler file and add `workers_dev = false` to it.
* Wrangler will no longer use `index.js` in the directory where `wrangler dev` is called as the entry point to a Worker. Use the `main` configuration field, or explicitly pass it as a command line argument, for example: `wrangler dev index.js`.
* Wrangler will no longer assume that bare specifiers are file names if they are not represented as a path. For example, in a folder like so:  
```  
project  
├── index.js  
└── some-dependency.js  
```  
where the content of `index.js` is:  
JavaScript  
```  
import SomeDependency from "some-dependency.js";  
addEventListener("fetch", (event) => {  
  // ...  
});  
```  
Wrangler v1 would resolve `import SomeDependency from "some-dependency.js";` to the file `some-dependency.js`. This still works in Wrangler v2, but logs a deprecation warning. In the future, this will break with an error. Instead, you should rewrite the import to specify that it is a relative path, like so:  
```  
// Before: bare specifier (deprecated)  
import SomeDependency from "some-dependency.js";  
// After: explicit relative path  
import SomeDependency from "./some-dependency.js";  
```

### Wrangler v1 and v2 comparison tables

#### Commands

| Command   | v1 | v2 | Notes                                          |
| --------- | -- | -- | ---------------------------------------------- |
| publish   | ✅  | ✅  |                                                |
| dev       | ✅  | ✅  |                                                |
| preview   | ✅  | ❌  | Removed, use dev instead.                      |
| init      | ✅  | ✅  |                                                |
| generate  | ✅  | ❌  | Removed, use git clone instead.                |
| build     | ✅  | ❌  | Removed, invoke your own build script instead. |
| secret    | ✅  | ✅  |                                                |
| route     | ✅  | ❌  | Removed, use publish instead.                  |
| tail      | ✅  | ✅  |                                                |
| kv        | ✅  | ✅  |                                                |
| r2        | 🚧 | ✅  | Introduced in Wrangler v1.19.8.                |
| pages     | ❌  | ✅  |                                                |
| config    | ✅  | ❓  |                                                |
| login     | ✅  | ✅  |                                                |
| logout    | ✅  | ✅  |                                                |
| whoami    | ✅  | ✅  |                                                |
| subdomain | ✅  | ❓  |                                                |
| report    | ✅  | ❌  | Removed, error reports are made interactively. |

#### Configuration

| Property            | v1 | v2 | Notes                                                                                                                            |
| ------------------- | -- | -- | -------------------------------------------------------------------------------------------------------------------------------- |
| type = "webpack"    | ✅  | ❌  | Removed, refer to [this guide](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/eject-webpack/) to migrate. |
| type = "rust"       | ✅  | ❌  | Removed, use [workers-rs ↗](https://github.com/cloudflare/workers-rs) instead.                                                   |
| type = "javascript" | ✅  | 🚧 | No longer required, can be omitted.                                                                                              |

#### Features

| Feature    | v1 | v2 | Notes                                                                                                                                                                                                 |
| ---------- | -- | -- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| TypeScript | ❌  | ✅  | You can give wrangler a TypeScript file, and it will automatically transpile it to JavaScript using [esbuild ↗](https://github.com/evanw/esbuild) under-the-hood.                                     |
| Local mode | ❌  | ✅  | wrangler dev --local will run your Worker on your local machine instead of on our network. This is powered by [Miniflare ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare/). |


---

---
title: Environments
description: Use environments to create different configurations for the same Worker application.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Environments

Wrangler allows you to use environments to create different configurations for the same Worker application. Environments are configured in the Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

When you create an environment, Cloudflare effectively creates a new Worker with the name `<top-level-name>-<environment-name>`. For example, a Worker project named `my-worker` with an environment `dev` would deploy as a Worker named `my-worker-dev`.
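This naming rule can be expressed as a one-liner (an illustrative helper, not part of Wrangler):

```javascript
// Illustrative only: the name a Worker is deployed under when an
// environment is selected with --env.
function deployedName(topLevelName, envName) {
  return envName ? `${topLevelName}-${envName}` : topLevelName;
}

console.log(deployedName("my-worker", "dev")); // my-worker-dev
console.log(deployedName("my-worker"));        // my-worker
```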

Review the following environments flow:

1. Create a Worker, named `my-worker` for example.
2. Create an environment, for example `dev`, in the Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), by adding a `[env.<ENV_NAME>]` section.  
   * wrangler.jsonc  
   * wrangler.toml  
```  
{  
  "name": "my-worker",  
  "env": {  
    "<ENV_NAME>": {  
      // environment-specific configuration goes here  
    }  
  }  
}  
```  
```  
name = "my-worker"  
[env]  
"<ENV_NAME>" = { }  
```
3. You can configure the `dev` environment with different values from the top-level environment. Refer to [the configuration documentation](https://developers.cloudflare.com/workers/wrangler/configuration/#environments) for how different options are inherited - or not inherited - between environments. For example, to set a different route for a Worker in the `dev` environment:  
   * wrangler.jsonc  
   * wrangler.toml  
```  
{  
  "$schema": "./node_modules/wrangler/config-schema.json",  
  "name": "your-worker",  
  "route": "example.com",  
  "env": {  
    "dev": {  
      "route": "dev.example.com"  
    }  
  }  
}  
```  
```  
"$schema" = "./node_modules/wrangler/config-schema.json"  
name = "your-worker"  
route = "example.com"  
[env.dev]  
route = "dev.example.com"  
```
4. Environments are used with the `--env` or `-e` flag on Wrangler commands. For example, you can develop the Worker in the `dev` environment by running `npx wrangler dev -e=dev`, and deploy it with `npx wrangler deploy -e=dev`.  
Alternatively, you can use the [CLOUDFLARE\_ENV environment variable](https://developers.cloudflare.com/workers/wrangler/system-environment-variables/#supported-environment-variables) to select the active environment. For example, `CLOUDFLARE_ENV=dev npx wrangler deploy` will deploy to the `dev` environment. The `--env` command line argument takes precedence over the `CLOUDFLARE_ENV` environment variable.  
Note  
If you're using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you select the environment at dev or build time via the `CLOUDFLARE_ENV` environment variable rather than the `--env` flag. Otherwise, environments are defined in your Worker config file as usual. For more detail on using environments with the Cloudflare Vite plugin, refer to the [plugin documentation](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/).
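The precedence rule between the flag and the environment variable can be sketched as (an illustrative helper mirroring the documented behavior, not Wrangler's actual implementation):

```javascript
// Resolve which environment is active: the --env flag takes precedence
// over the CLOUDFLARE_ENV environment variable.
function resolveEnvironment(cliEnvFlag, processEnv) {
  return cliEnvFlag ?? processEnv.CLOUDFLARE_ENV;
}

console.log(resolveEnvironment("dev", { CLOUDFLARE_ENV: "staging" }));      // dev
console.log(resolveEnvironment(undefined, { CLOUDFLARE_ENV: "staging" })); // staging
```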

## Non-inheritable keys and environments

[Non-inheritable keys](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.

For example, [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) and [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) are non-inheritable, and must be specified per [environment](https://developers.cloudflare.com/workers/wrangler/environments/) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

Review the following example Wrangler file:

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  "vars": {
    "API_HOST": "example.com"
  },
  "kv_namespaces": [
    {
      "binding": "<BINDING_NAME>",
      "id": "<KV_NAMESPACE_ID_DEV>"
    }
  ],
  "env": {
    "production": {
      "vars": {
        "API_HOST": "production.example.com"
      },
      "kv_namespaces": [
        {
          "binding": "<BINDING_NAME>",
          "id": "<KV_NAMESPACE_ID_PRODUCTION>"
        }
      ]
    }
  }
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker"

[vars]
API_HOST = "example.com"

[[kv_namespaces]]
binding = "<BINDING_NAME>"
id = "<KV_NAMESPACE_ID_DEV>"

[env.production.vars]
API_HOST = "production.example.com"

[[env.production.kv_namespaces]]
binding = "<BINDING_NAME>"
id = "<KV_NAMESPACE_ID_PRODUCTION>"
```

### Service bindings

To use a [service binding](https://developers.cloudflare.com/workers/wrangler/configuration/#service-bindings) that targets a Worker in a specific environment, you need to append the environment name to the target Worker name in the `service` field. This should be in the format `<worker-name>-<environment-name>`. In the example below, we have two Workers, both with a `staging` environment. `worker-b` has a service binding to `worker-a`. Note how the `service` field in the `staging` environment points to `worker-a-staging`, whereas the top-level service binding points to `worker-a`.

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker-a",
  "vars": {
    "FOO": "<top-level-var>"
  },
  "env": {
    "staging": {
      "vars": {
        "FOO": "<staging-var>"
      }
    }
  }
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "worker-a"

[vars]
FOO = "<top-level-var>"

[env.staging.vars]
FOO = "<staging-var>"
```

wrangler.jsonc

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "worker-b",
  "services": [
    {
      "binding": "<BINDING_NAME>",
      "service": "worker-a"
    }
  ],
  "env": {
    "staging": {
      // Note how `service = "worker-a-staging"`
      "services": [
        {
          "binding": "<BINDING_NAME>",
          "service": "worker-a-staging"
        }
      ]
    }
  }
}
```

wrangler.toml

```
"$schema" = "./node_modules/wrangler/config-schema.json"
name = "worker-b"

[[services]]
binding = "<BINDING_NAME>"
service = "worker-a"

# Note how `service = "worker-a-staging"`
[[env.staging.services]]
binding = "<BINDING_NAME>"
service = "worker-a-staging"
```
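At runtime, `worker-b` invokes the bound Worker through the binding on `env`. A minimal sketch, exercised locally with a stubbed binding (the handler shape is the standard Workers module format; the stub and binding name are illustrative; `Request`/`Response` are global in Node.js 18+ as well as in the Workers runtime):

```javascript
// worker-b's handler: forward the incoming request to the Worker bound as
// BINDING_NAME. In the staging environment this resolves to worker-a-staging.
const workerB = {
  async fetch(request, env) {
    return env.BINDING_NAME.fetch(request);
  },
};

// Local demonstration with a stubbed service binding:
async function demo() {
  const env = {
    BINDING_NAME: { fetch: async () => new Response("hello from worker-a") },
  };
  const res = await workerB.fetch(new Request("https://example.com/"), env);
  return res.text();
}

demo().then((body) => console.log(body)); // hello from worker-a
```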

### Secrets for production

You may assign environment-specific [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) by running the command [wrangler secret put <KEY> --env <ENVIRONMENT_NAME>](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret-put). You can also create `dotenv`-style files named `.dev.vars.<environment-name>`.

Like other environment variables, secrets are [non-inheritable](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) and must be defined per environment.

### Secrets in local development

Warning

Do not use `vars` to store sensitive information in your Worker's Wrangler configuration file. Use secrets instead.

Put secrets for use in local development in either a `.dev.vars` file or a `.env` file, in the same directory as the Wrangler configuration file.

Note

You can use the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property) to declare which secret names your Worker requires. When defined, only the keys listed in `secrets.required` are loaded from `.dev.vars` or `.env`. Additional keys are excluded and missing keys produce a warning.

Use either `.dev.vars` or `.env`, but not both. If you define a `.dev.vars` file, then values in `.env` files will not be included in the `env` object during local development.

These files should be formatted using the [dotenv ↗](https://hexdocs.pm/dotenvy/dotenv-file-format.html) syntax. For example:

.dev.vars / .env

```
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
```

Do not commit secrets to git

The `.dev.vars` and `.env` files should not be committed to git. Add `.dev.vars*` and `.env*` to your project's `.gitignore` file.

To set different secrets for each Cloudflare environment, create files named `.dev.vars.<environment-name>` or `.env.<environment-name>`.

When you select a Cloudflare environment during local development, the corresponding environment-specific file is loaded ahead of the generic `.dev.vars` (or `.env`) file.

* When using `.dev.vars.<environment-name>` files, all secrets must be defined per environment. If `.dev.vars.<environment-name>` exists then only this will be loaded; the `.dev.vars` file will not be loaded.
* In contrast, all matching `.env` files are loaded and the values are merged. For each variable, the value from the most specific file is used, with the following precedence:  
   * `.env.<environment-name>.local` (most specific)  
   * `.env.local`  
   * `.env.<environment-name>`  
   * `.env` (least specific)
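The merge behavior for `.env` files can be sketched as (illustrative, not Wrangler's actual implementation):

```javascript
// Merge .env files from least to most specific; later spreads win, so each
// variable takes its value from the most specific file that defines it.
function mergeDotEnv({ base = {}, env = {}, local = {}, envLocal = {} }) {
  //       .env     .env.<name>  .env.local  .env.<name>.local
  return { ...base, ...env,      ...local,   ...envLocal };
}

const merged = mergeDotEnv({
  base: { API_HOST: "example.com", SECRET_KEY: "dev" },
  env: { API_HOST: "staging.example.com" },   // .env.staging
  envLocal: { SECRET_KEY: "local-override" }, // .env.staging.local
});
console.log(merged);
// { API_HOST: 'staging.example.com', SECRET_KEY: 'local-override' }
```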

Controlling `.env` handling

It is possible to control how `.env` files are loaded in local development by setting environment variables on the process running the tools.

* To disable loading local dev vars from `.env` files without providing a `.dev.vars` file, set the `CLOUDFLARE_LOAD_DEV_VARS_FROM_DOT_ENV` environment variable to `"false"`.
* To include every environment variable defined in your system's process environment as a local development variable, ensure there is no `.dev.vars` and then set the `CLOUDFLARE_INCLUDE_PROCESS_ENV` environment variable to `"true"`. This is not needed when using the [secrets configuration property](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets-configuration-property), which loads from `process.env` automatically.

---

## Examples

### Staging and production environments

The following Wrangler file defines two environments, `staging` and `production`, under the `env` key. If you are deploying to a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) or [route](https://developers.cloudflare.com/workers/configuration/routing/routes/), you must provide a [route or routes key](https://developers.cloudflare.com/workers/wrangler/configuration/) for each environment.

wrangler.jsonc

```

{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  "route": "dev.example.com/*",
  "vars": {
    "ENVIRONMENT": "dev"
  },
  "env": {
    "staging": {
      "vars": {
        "ENVIRONMENT": "staging"
      },
      "route": "staging.example.com/*"
    },
    "production": {
      "vars": {
        "ENVIRONMENT": "production"
      },
      "routes": [
        "example.com/foo/*",
        "example.com/bar/*"
      ]
    }
  }
}

```

wrangler.toml

```

"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker"
route = "dev.example.com/*"

[vars]
ENVIRONMENT = "dev"

[env.staging]
route = "staging.example.com/*"

  [env.staging.vars]
  ENVIRONMENT = "staging"

[env.production]
routes = [ "example.com/foo/*", "example.com/bar/*" ]

  [env.production.vars]
  ENVIRONMENT = "production"

```

You can pass the name of the environment via the `--env` flag to run commands in a specific environment.

With this configuration, Wrangler will behave in the following manner:

Terminal window

```
npx wrangler deploy
```

```
Uploaded my-worker
Published my-worker
  dev.example.com/*
```

Terminal window

```
npx wrangler deploy --env staging
```

```
Uploaded my-worker-staging
Published my-worker-staging
  staging.example.com/*
```

Terminal window

```
npx wrangler deploy --env production
```

```
Uploaded my-worker-production
Published my-worker-production
  example.com/foo/*
  example.com/bar/*
```

Any defined [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) (the [vars](https://developers.cloudflare.com/workers/wrangler/configuration/) key) are exposed as global variables to your Worker.

With this configuration, the `ENVIRONMENT` variable can be used to call specific code depending on the given environment:

JavaScript

```
if (ENVIRONMENT === "staging") {
  // staging-specific code
} else if (ENVIRONMENT === "production") {
  // production-specific code
}
```
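The snippet above relies on the global variables available with the legacy service worker syntax. Workers written with ES module syntax receive vars on the `env` parameter instead. A minimal sketch (the handler strings are illustrative):

```javascript
// Module-syntax Workers receive vars on the `env` parameter rather than
// as globals. The branch logic is split out so it is easy to follow.
function messageFor(env) {
  if (env.ENVIRONMENT === "staging") return "staging-specific response";
  if (env.ENVIRONMENT === "production") return "production-specific response";
  return "default response";
}

const worker = {
  async fetch(request, env) {
    return new Response(messageFor(env));
  },
};

// In a real Worker this object would be the default export:
// export default worker;
```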

### Staging environment with \*.workers.dev

To deploy your code to your `*.workers.dev` subdomain, include `workers_dev = true` in the desired environment. Your Wrangler file may look like this:

wrangler.jsonc

```

{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  "route": "example.com/*",
  "env": {
    "staging": {
      "workers_dev": true
    }
  }
}

```

wrangler.toml

```

"$schema" = "./node_modules/wrangler/config-schema.json"
name = "my-worker"
route = "example.com/*"

[env.staging]
workers_dev = true

```

With this configuration, Wrangler will behave in the following manner:

Terminal window

```
npx wrangler deploy
```

```
Uploaded my-worker
Published my-worker
  example.com/*
```

Terminal window

```
npx wrangler deploy --env staging
```

```
Uploaded my-worker-staging
Published my-worker-staging
  https://my-worker-staging.<YOUR_SUBDOMAIN>.workers.dev
```

Warning

When you create a Worker via an environment, Cloudflare automatically creates an SSL certificate for it. SSL certificates are discoverable and a matter of public record. Be careful that your environment names do not contain sensitive information, such as `migrating-service-from-company1-to-company2` or `company1-acquisition-load-test`.


---

---
title: Install/Update Wrangler
description: Get started by installing Wrangler, and update to newer versions by following this guide.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Install/Update Wrangler

Wrangler is a command-line tool for building with Cloudflare developer products.

## Install Wrangler

To install [Wrangler ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler), ensure you have [Node.js ↗](https://nodejs.org/en/) and [npm ↗](https://docs.npmjs.com/getting-started) installed, preferably using a Node version manager like [Volta ↗](https://volta.sh/) or [nvm ↗](https://github.com/nvm-sh/nvm). Using a version manager helps avoid permission issues and allows you to change Node.js versions.

Wrangler System Requirements

We support running the Wrangler CLI with the [Current, Active, and Maintenance ↗](https://nodejs.org/en/about/previous-releases) versions of Node.js. Your Worker will always be executed in `workerd`, the open source Cloudflare Workers runtime.

Wrangler is only supported on macOS 13.5+, Windows 11, and Linux distros that support glibc 2.35. This follows [workerd's OS support policy ↗](https://github.com/cloudflare/workerd?tab=readme-ov-file#running-workerd).

Wrangler is installed locally into each of your projects. This allows you and your team to use the same Wrangler version, control Wrangler versions for each project, and roll back to an earlier version of Wrangler, if needed.

To install Wrangler within your Worker project, run:

 npm  yarn  pnpm  bun 

```
npm i -D wrangler@latest
```

```
yarn add -D wrangler@latest
```

```
pnpm add -D wrangler@latest
```

```
bun add -d wrangler@latest
```

Since Cloudflare recommends installing Wrangler locally in your project (rather than globally), the way to run Wrangler will depend on your specific setup and package manager. Refer to [How to run Wrangler commands](https://developers.cloudflare.com/workers/wrangler/commands/#how-to-run-wrangler-commands) for more information.
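For example, because package scripts resolve binaries from the local `node_modules`, a common setup (the version number is illustrative) is to wrap Wrangler in scripts:

```
{
  "scripts": {
    "dev": "wrangler dev",
    "deploy": "wrangler deploy"
  },
  "devDependencies": {
    "wrangler": "^4.0.0"
  }
}
```

With this in place, `npm run dev` and `npm run deploy` always use the project-local Wrangler version.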

Warning

If Wrangler is not installed, running `npx wrangler` will download and use the latest version of Wrangler.

## Check your Wrangler version

To check your Wrangler version, run:

Terminal window

```
npx wrangler --version
# or
npx wrangler -v
```

## Update Wrangler

To update the version of Wrangler used in your project, run:

 npm  yarn  pnpm  bun 

```
npm i -D wrangler@latest
```

```
yarn add -D wrangler@latest
```

```
pnpm add -D wrangler@latest
```

```
bun add -d wrangler@latest
```

## Related resources

* [Commands](https://developers.cloudflare.com/workers/wrangler/commands/) \- A detailed list of the commands that Wrangler supports.
* [Configuration](https://developers.cloudflare.com/workers/wrangler/configuration/) \- Learn more about Wrangler's configuration file.


---

---
title: Migrate from Wrangler v2 to v3
description: There are no special instructions for migrating from Wrangler v2 to v3. You should be able to update Wrangler by following the instructions in Install/Update Wrangler. You should experience no disruption to your workflow.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from Wrangler v2 to v3

There are no special instructions for migrating from Wrangler v2 to v3. You should be able to update Wrangler by following the instructions in [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/#update-wrangler). You should experience no disruption to your workflow.

Warning

If you tried to update to Wrangler v3 prior to v3.3, you may have experienced some compatibility issues with older operating systems. Please try again with the latest v3 where those have been resolved.

## Deprecations

Refer to [Deprecations](https://developers.cloudflare.com/workers/wrangler/deprecations/#wrangler-v3) for more details on what is no longer supported in v3.

## Additional assistance

If you do have an issue or need further assistance, [file an issue ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose) in the `workers-sdk` repo on GitHub.


---

---
title: Migrate from Wrangler v3 to v4
description: Wrangler v4 is a major release focused on updates to underlying systems and dependencies, along with improvements to keep Wrangler commands consistent and clear. Unlike previous major versions of Wrangler, which were foundational rewrites and rearchitectures — Version 4 of Wrangler includes a much smaller set of changes. If you use Wrangler today, your workflow is very unlikely to change.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Migrate from Wrangler v3 to v4

Wrangler v4 is a major release focused on updates to underlying systems and dependencies, along with improvements to keep Wrangler commands consistent and clear. Unlike previous major versions of Wrangler, which were [foundational rewrites ↗](https://blog.cloudflare.com/wrangler-v2-beta/) and [rearchitectures ↗](https://blog.cloudflare.com/wrangler3/) — Version 4 of Wrangler includes a much smaller set of changes. If you use Wrangler today, your workflow is very unlikely to change.

While many users should expect a no-op upgrade, the following sections outline the more significant changes and steps for migrating where necessary.

## Upgrade to Wrangler v4

To upgrade to the latest version of Wrangler v4 within your Worker project, run:

 npm  yarn  pnpm  bun 

```
npm i -D wrangler@4
```

```
yarn add -D wrangler@4
```

```
pnpm add -D wrangler@4
```

```
bun add -d wrangler@4
```

After upgrading, you can verify the installation:

 npm  yarn  pnpm 

```
npx wrangler --version
```

```
yarn wrangler --version
```

```
pnpm wrangler --version
```

### Summary of changes

* **Updated Node.js support policy:** Node.js v16, which reached End-of-Life in 2022, is no longer supported in Wrangler v4. Wrangler now follows Node.js's [official support lifecycle ↗](https://nodejs.org/en/about/previous-releases).
* **Upgraded esbuild version**: Wrangler uses [esbuild ↗](https://esbuild.github.io/) to bundle Worker code before deploying it, and was previously pinned to esbuild v0.17.19. Wrangler v4 uses esbuild v0.24, which could impact dynamic wildcard imports. Going forward, Wrangler will periodically update the `esbuild` version it includes, and since `esbuild` is a pre-1.0.0 tool, this may sometimes include breaking changes to how bundling works. In particular, we may bump the `esbuild` version in a Wrangler minor version.
* **Commands default to local mode**: All commands that can run in either local or remote mode now default to local, requiring a `--remote` flag for API queries.
* **Deprecated commands and configurations removed:** Legacy commands, flags, and configurations are removed.

## Detailed changes

### Updated Node.js support policy

Wrangler now supports only Node.js versions that align with [Node.js's official lifecycle ↗](https://nodejs.org/en/about/previous-releases):

* **Supported**: Current, Active LTS, Maintenance LTS
* **No longer supported:** Node.js v16 (EOL in 2022)

Wrangler tests no longer run on v16, and users still on this version may encounter unsupported behavior. Users still using Node.js v16 must upgrade to a supported version to continue receiving support and compatibility with Wrangler.

Am I affected?

Run the following command to check your Node.js version:

Terminal window

```
node --version
```

**You need to take action if** your version starts with `v16` or `v18` (for example, `v16.20.0` or `v18.20.0`).

**To upgrade Node.js**, refer to the [Wrangler system requirements](https://developers.cloudflare.com/workers/wrangler/install-and-update/). Cloudflare recommends using the latest LTS version of Node.js.

### Upgraded esbuild version

Wrangler v4 upgrades esbuild from **v0.17.19** to **v0.24**, bringing improvements (such as the ability to use the `using` keyword with RPC) and changes to bundling behavior:

* **Dynamic imports:** Wildcard imports (for example, `import('./data/' + kind + '.json')`) now automatically include all matching files in the bundle.

Users relying on wildcard dynamic imports may see unwanted files bundled. Prior to esbuild v0.19, `import` statements with dynamic paths (like `import('./data/' + kind + '.json')`) did not bundle all files matching the glob pattern (`*.json`). Only files explicitly referenced or included using `find_additional_modules` were bundled. With esbuild v0.19, wildcard imports now automatically bundle all files matching the glob pattern. This could result in unwanted files being bundled, so users might want to avoid wildcard dynamic imports and use explicit imports instead.
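One way to replace the wildcard pattern is an explicit loader map. A sketch (the file paths and names are illustrative, not part of Wrangler):

```javascript
// Explicit loader map instead of import("./data/" + kind + ".json"):
// only the listed files can end up in the bundle.
const loaders = {
  users: () => import("./data/users.json"),
  orders: () => import("./data/orders.json"),
};

function loaderFor(kind) {
  const loader = loaders[kind];
  if (!loader) throw new Error(`Unknown data kind: ${kind}`);
  return loader;
}

// Usage: const data = await loaderFor(kind)();
```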

### Commands default to local mode

All commands now run in **local mode by default.** Wrangler has many commands for accessing resources like KV and R2, but the commands were previously inconsistent in whether they ran in a local or remote environment. For example, D1 defaulted to querying a local datastore, and required the `--remote` flag to query via the API. KV, on the other hand, previously defaulted to querying via the API (implicitly using the `--remote` flag) and required a `--local` flag to query a local datastore. In order to make the behavior consistent across Wrangler, each command now uses the `--local` flag by default, and requires an explicit `--remote` flag to query via the API.

For example:

* **Previous Behavior (Wrangler v3):** `wrangler kv key get` queried remotely by default.
* **New Behavior (Wrangler v4):** `wrangler kv key get` queries locally unless `--remote` is specified.

Those using `wrangler kv key` and/or `wrangler r2 object` commands to query or write to their data store will need to add the `--remote` flag in order to replicate previous behavior.

Am I affected?

Check if you use any of these commands in scripts, CI/CD pipelines, or manual workflows:

**KV commands:**

* `wrangler kv key get`
* `wrangler kv key put`
* `wrangler kv key delete`
* `wrangler kv key list`
* `wrangler kv bulk put`
* `wrangler kv bulk delete`

**R2 commands:**

* `wrangler r2 object get`
* `wrangler r2 object put`
* `wrangler r2 object delete`

**You need to take action if:**

* You run these commands expecting them to interact with your remote/production data.
* You have scripts or CI/CD pipelines that use these commands without the `--local` or `--remote` flag.

Search your codebase and CI/CD configs:

Terminal window

```
grep -rE "wrangler (kv|r2)" --include="*.sh" --include="*.yml" --include="*.yaml" --include="Makefile" --include="package.json" .
```

**What to do:**

Add `--remote` to commands that should interact with your Cloudflare account:

Terminal window

```
# Before (Wrangler v3 - queried remote by default)
wrangler kv key get --binding MY_KV "my-key"

# After (Wrangler v4 - must specify --remote)
wrangler kv key get --binding MY_KV "my-key" --remote
```

### Deprecated commands and configurations removed

All previously deprecated features in [Wrangler v2](https://developers.cloudflare.com/workers/wrangler/deprecations/#wrangler-v2) and in [Wrangler v3](https://developers.cloudflare.com/workers/wrangler/deprecations/#wrangler-v3) are now removed. Additionally, the following features that were deprecated during the Wrangler v3 release are also now removed:

* Legacy Assets (using `wrangler dev/deploy --legacy-assets` or the `legacy_assets` config file property). Instead, we recommend you [migrate to Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/).
* Legacy Node.js compatibility (using `wrangler dev/deploy --node-compat` or the `node_compat` config file property). Instead, use the [nodejs\_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/). This includes the functionality from legacy `node_compat` polyfills and natively implemented Node.js APIs.
* `wrangler version`. Instead, use `wrangler --version` to check the current version of Wrangler.
* `getBindingsProxy()` (via `import { getBindingsProxy } from "wrangler"`). Instead, use the [getPlatformProxy() API](https://developers.cloudflare.com/workers/wrangler/api/#getplatformproxy), which takes exactly the same arguments.
* `usage_model`. This no longer has any effect, after the [rollout of Workers Standard Pricing ↗](https://blog.cloudflare.com/workers-pricing-scale-to-zero/).

Am I affected?

**Check your Wrangler configuration file** (`wrangler.toml`, `wrangler.json`, or `wrangler.jsonc`) for deprecated settings:

Terminal window

```
# For TOML files
grep -E "(legacy_assets|node_compat|usage_model)\s*=" wrangler.toml

# For JSON files
grep -E "\"(legacy_assets|node_compat|usage_model)\"" wrangler.json wrangler.jsonc
```

**Check your commands and scripts** for deprecated flags:

Terminal window

```
grep -rE "wrangler.*(--legacy-assets|--node-compat)" --include="*.sh" --include="*.yml" --include="*.yaml" --include="Makefile" --include="package.json" .
```

**Check for deprecated API usage** in your code:

Terminal window

```
grep -rE "getBindingsProxy" --include="*.js" --include="*.ts" --include="*.mjs" .
```

**You need to take action if you find any of the following:**

| Deprecated                                       | Replacement                                                                                                         |
| ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------- |
| `legacy_assets` config or `--legacy-assets` flag | [Migrate to Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/)                        |
| `node_compat` config or `--node-compat` flag     | Use the [nodejs_compat compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/)          |
| `usage_model` config                             | Remove it (no longer has any effect)                                                                                |
| `wrangler version` command                       | Use `wrangler --version`                                                                                            |
| `getBindingsProxy()` import                      | Use [getPlatformProxy()](https://developers.cloudflare.com/workers/wrangler/api/#getplatformproxy) (same arguments) |
| `wrangler publish` command                       | Use `wrangler deploy`                                                                                               |
| `wrangler generate` command                      | Use `npm create cloudflare@latest`                                                                                  |
| `wrangler pages publish` command                 | Use `wrangler pages deploy`                                                                                         |


---

---
title: 1. Migrate webpack projects
description: This guide describes the steps to migrate a webpack project from Wrangler v1 to Wrangler v2. After completing this guide, update your Wrangler version.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# 1. Migrate webpack projects

This guide describes the steps to migrate a webpack project from Wrangler v1 to Wrangler v2. After completing this guide, [update your Wrangler version](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/update-v1-to-v2/).

Previous versions of Wrangler offered rudimentary support for [webpack ↗](https://webpack.js.org/) with the `type` and `webpack_config` keys in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Starting with Wrangler v2, Wrangler no longer supports the `type` and `webpack_config` keys, but you can still use webpack with your Workers.

As a developer using webpack with Workers, you may be in one of four categories:

1. [I use \[build\] to run webpack (or another bundler) external to wrangler.](#i-use-build-to-run-webpack-or-another-bundler-external-to-wrangler).
2. [I use type = webpack, but do not provide my own configuration and let Wrangler take care of it.](#i-use-type--webpack-but-do-not-provide-my-own-configuration-and-let-wrangler-take-care-of-it).
3. [I use type = webpack and webpack\_config = <path/to/webpack.config.js> to handle JSX, TypeScript, WebAssembly, HTML files, and other non-standard filetypes.](#i-use-type--webpack-and-webpack%5Fconfig--pathtowebpackconfigjs-to-handle-jsx-typescript-webassembly-html-files-and-other-non-standard-filetypes).
4. [I use type = webpack and webpack\_config = <path/to/webpack.config.js> to perform code-transforms and/or other code-modifying functionality.](#i-use-type--webpack-and-webpack%5Fconfig--pathtowebpackconfigjs-to-perform-code-transforms-andor-other-code-modifying-functionality).

If you do not see yourself represented, [file an issue ↗](https://github.com/cloudflare/workers-sdk/issues/new/choose) and we can assist you with your specific situation and improve this guide for future readers.

### I use `[build]` to run webpack (or another bundler) external to Wrangler.

Wrangler v2 supports the `[build]` key, so your Workers will continue to build using your own setup.

### I use `type = webpack`, but do not provide my own configuration and let Wrangler take care of it.

Wrangler will continue to take care of it. Remove `type = webpack` from your Wrangler file.

### I use `type = webpack` and `webpack_config = <path/to/webpack.config.js>` to handle JSX, TypeScript, WebAssembly, HTML files, and other non-standard filetypes.

As of Wrangler v2, Wrangler has built-in support for this use case. Refer to [Bundling](https://developers.cloudflare.com/workers/wrangler/bundling/) for more details.

Wrangler's built-in bundler handles JSX and TypeScript. You can `import` any modules you need into your code and Wrangler includes them in the built Worker automatically.

You should remove the `type` and `webpack_config` keys from your Wrangler file.

### I use `type = webpack` and `webpack_config = <path/to/webpack.config.js>` to perform code-transforms and/or other code-modifying functionality.

Wrangler v2 drops support for project types, including `type = webpack` and configuration via the `webpack_config` key. If your webpack configuration performs operations beyond adding loaders (for example, for TypeScript) you will need to maintain your custom webpack configuration. In the long term, you should [migrate to an external \[build\] process](https://developers.cloudflare.com/workers/wrangler/custom-builds/). In the short term, it is still possible to reproduce Wrangler v1's build steps in newer versions of Wrangler by following the instructions below.

1. Add [wranglerjs-compat-webpack-plugin ↗](https://www.npmjs.com/package/wranglerjs-compat-webpack-plugin) as a `devDependency`.

[wrangler-js ↗](https://www.npmjs.com/package/wrangler-js), shipped as a separate library from [Wrangler v1 ↗](https://www.npmjs.com/package/@cloudflare/wrangler/v/1.19.11), is a Node script that configures and executes [webpack 4 ↗](https://unpkg.com/browse/wrangler-js@0.1.11/package.json) for you. When you set `type = webpack`, Wrangler v1 would execute this script for you. We have ported the functionality over to a new package, [wranglerjs-compat-webpack-plugin ↗](https://www.npmjs.com/package/wranglerjs-compat-webpack-plugin), which you can use as a [webpack plugin ↗](https://v4.webpack.js.org/configuration/plugins/).

To do that, you will need to add it as a dependency:

 npm  yarn  pnpm  bun 

```
npm i -D webpack@^4.46.0 webpack-cli wranglerjs-compat-webpack-plugin
```

```
yarn add -D webpack@^4.46.0 webpack-cli wranglerjs-compat-webpack-plugin
```

```
pnpm add -D webpack@^4.46.0 webpack-cli wranglerjs-compat-webpack-plugin
```

```
bun add -d webpack@^4.46.0 webpack-cli wranglerjs-compat-webpack-plugin
```

You should see this reflected in your `package.json` file:

```
{
  "name": "my-worker",
  "version": "x.y.z",
  // ...
  "devDependencies": {
    // ...
    "wranglerjs-compat-webpack-plugin": "^x.y.z",
    "webpack": "^4.46.0",
    "webpack-cli": "^x.y.z"
  }
}
```

2. Add `wranglerjs-compat-webpack-plugin` to `webpack.config.js`.

Modify your `webpack.config.js` file to include the plugin you just installed.

JavaScript

```
const {
  WranglerJsCompatWebpackPlugin,
} = require("wranglerjs-compat-webpack-plugin");

module.exports = {
  // ...
  plugins: [new WranglerJsCompatWebpackPlugin()],
};
```

3. Add a build script to your `package.json`.

```
{
  "name": "my-worker",
  "version": "2.0.0",
  // ...
  "scripts": {
    "build": "webpack" // <-- Add this line!
    // ...
  }
}
```

4. Remove unsupported entries from your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

Remove the `type` and `webpack_config` keys from your Wrangler file, as they are not supported anymore.

wrangler.jsonc

```
{
  // Remove these!
  "type": "webpack",
  "webpack_config": "webpack.config.js"
}
```

wrangler.toml

```
type = "webpack"
webpack_config = "webpack.config.js"
```

5. Tell Wrangler how to bundle your Worker.

Wrangler no longer has any knowledge of how to build your Worker. You will need to tell it how to call webpack and where to look for webpack's output. This translates into two fields:

wrangler.jsonc

```
{
  "main": "./worker/script.js", // by default, or whatever file webpack outputs
  "build": {
    "command": "npm run build" // or "yarn build"
  }
}
```

wrangler.toml

```
main = "./worker/script.js"

[build]
command = "npm run build"
```

6. Test your project.

Try running `npx wrangler deploy` to test that your configuration works as expected.


---

---
title: 2. Update to Wrangler v2
description: This document describes the steps to migrate a project from Wrangler v1 to Wrangler v2. Before updating your Wrangler version, review and complete Migrate webpack projects from Wrangler version 1 if it applies to your project.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# 2. Update to Wrangler v2

This document describes the steps to migrate a project from Wrangler v1 to Wrangler v2. Before updating your Wrangler version, review and complete [Migrate webpack projects from Wrangler version 1](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/eject-webpack/) if it applies to your project.

Wrangler v2 ships with new features and improvements that may require some changes to your configuration.

The CLI itself will guide you through the upgrade process.

Note

To learn more about the improvements to Wrangler, refer to [Wrangler v1 and v2 comparison](https://developers.cloudflare.com/workers/wrangler/deprecations/#wrangler-v1-and-v2-comparison-tables).

## Update Wrangler version

### 1. Uninstall Wrangler v1

If you had previously installed Wrangler v1 globally using npm, you can uninstall it with:

Terminal window

```
npm uninstall -g @cloudflare/wrangler
```

If you used Cargo to install Wrangler v1, you can uninstall it with:

Terminal window

```
cargo uninstall wrangler
```

### 2. Install Wrangler

Now, install the latest version of Wrangler.

Terminal window

```
npm install -g wrangler
```

### 3. Verify your install

To check that you have installed the correct Wrangler version, run:

Terminal window

```
npx wrangler --version
```

## Test Wrangler v2 on your previous projects

Now you will test that Wrangler v2 can build your Wrangler v1 project. In most cases, it will build just fine. If there are errors, the command line should instruct you with exactly what to change to get it to build.

If you would like to read more on the deprecated [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) fields that cause Wrangler v2 to error, refer to [Deprecations](https://developers.cloudflare.com/workers/wrangler/deprecations/).

Run the `wrangler dev` command. This will show any warnings or errors that should be addressed. Note that in most cases, the messages will include actionable instructions on how to resolve the issue.

Terminal window

```

npx wrangler dev


```

* Errors need to be fixed before Wrangler can build your Worker.
* In most cases, you will only see warnings. These do not stop Wrangler from building your Worker, but consider updating the configuration to remove them.

Here is an example of some warnings and errors:

Terminal window

```

 ⛅️ wrangler 2.x

-------------------------------------------------------

▲ [WARNING] Processing wrangler.toml configuration:

  - 😶 Ignored: "type":

    Most common features now work out of the box with wrangler, including modules, jsx,

  typescript, etc. If you need anything more, use a custom build.

  - Deprecation: "zone_id":

    This is unnecessary since we can deduce this from routes directly.

  - Deprecation: "build.upload.format":

    The format is inferred automatically from the code.


✘ [ERROR] Processing wrangler.toml configuration:

  - Expected "route" to be either a string, or an object with shape { pattern, zone_id | zone_name }, but got "".


```
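To clear warnings and errors like these, edit the Wrangler file by hand. As a sketch (the deprecated keys come from the output above; the IDs are the illustrative values used elsewhere on this page), a v1-era configuration would be trimmed like this:

```toml
# Before: keys that Wrangler v2 warns about or rejects.
# type = "webpack"                               # ignored - common features now work out of the box
# zone_id = "b6558acaf2b4cad1f2b51c5236a6b972"   # deduced from routes directly
# route = ""                                     # an empty route is an error

# After: keep only what v2 needs; give "route" a real pattern or remove it.
name = "my-worker"
account_id = "a655bacaf2b4cad0e2b51c5236a6b974"
route = "example.com/my-worker/*"
```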

## Deprecations

Refer to [Deprecations](https://developers.cloudflare.com/workers/wrangler/deprecations/) for more details on what is no longer supported.


---

---
title: Authentication
description: In Cloudflare’s system, a user can have multiple accounts and zones. As a result, your user is configured globally on your machine via a single Cloudflare Token. Your account(s) and zone(s) will be configured per project, but will use your Cloudflare Token to authenticate all API calls. A configuration file is created in a .wrangler directory in your computer’s home directory.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Authentication

Warning

This page is for Wrangler v1, which has been deprecated. [Learn how to update to the latest version of Wrangler](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/).

## Background

In Cloudflare’s system, a user can have multiple accounts and zones. As a result, your user is configured globally on your machine via a single Cloudflare Token. Your account(s) and zone(s) will be configured per project, but will use your Cloudflare Token to authenticate all API calls. A configuration file is created in a `.wrangler` directory in your computer’s home directory.

---

### Using commands

To set up Wrangler to work with your Cloudflare user, use the following commands:

* `login`: a command that opens a Cloudflare account login page to authorize Wrangler.
* `config`: an alternative to `login` that prompts you to enter your `email` and `api` key.
* `whoami`: run this command to confirm that your configuration is appropriately set up. When successful, this command will print out your account email and your `account_id` needed for your project's Wrangler file.

### Using environment variables

You can also configure your global user with environment variables. This is the preferred method for using Wrangler in CI (continuous integration) environments.

To customize the authentication tokens that Wrangler uses, you may provide the `CF_ACCOUNT_ID` and `CF_API_TOKEN` environment variables when running any Wrangler command. The account ID may be obtained from the Cloudflare dashboard in **Overview** and you may [create or reuse an existing API token](#generate-tokens).

Terminal window

```

CF_ACCOUNT_ID=accountID CF_API_TOKEN=veryLongAPIToken wrangler publish


```

Alternatively, you may use the `CF_EMAIL` and `CF_API_KEY` environment variable combination instead:

Terminal window

```

CF_EMAIL=cloudflareEmail CF_API_KEY=veryLongAPI wrangler publish


```

You can also specify or override the target Zone ID by defining the `CF_ZONE_ID` environment variable.

Defining environment variables inline will override the default credentials stored in `wrangler config` or in your Wrangler file.

---

## Generate Tokens

### API token

1. In **Overview**, select [**Get your API token**](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/).
2. After being taken to the **Profile** page, select **Create token**.
3. Under the **API token templates** section, find the **Edit Cloudflare Workers** template and select **Use template**.
4. Fill out the rest of the fields and then select **Continue to summary**, where you can select **Create Token** and issue your token for use.

### Global API Key

1. In **Overview**, select **Get your API token**.
2. After being taken to the **Profile** page, scroll to **API Keys**.
3. Select **View** to copy your **Global API Key**.

Warning

Treat your Global API Key like a password. It should not be stored in version control or in your code – use environment variables if possible.

---

## Use Tokens

After getting your token or key, you can set up your default credentials on your local machine by running `wrangler config`:

Terminal window

```

wrangler config


```

```

Enter API token:

superlongapitoken


```

Use the `--api-key` flag to instead configure with email and global API key:

Terminal window

```

wrangler config --api-key


```

```

Enter email:

testuser@example.com

Enter global API key:

superlongapikey


```


---

---
title: Commands
description: Complete list of all commands available for wrangler, the Workers CLI.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Commands

Warning

This page is for Wrangler v1, which has been deprecated. [Learn how to update to the latest version of Wrangler](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/).

Complete list of all commands available for [wrangler ↗](https://github.com/cloudflare/wrangler-legacy), the Workers CLI.

---

## generate

Scaffold a Cloudflare Workers project from a public GitHub repository.

Terminal window

```

wrangler generate [$NAME] [$TEMPLATE] [--type=$TYPE] [--site]


```

Default values indicated by =value.

* `$NAME` =worker optional  
   * The name of the Workers project. This is both the directory name and the `name` property in the generated [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/) file.
* `$TEMPLATE` =[https://github.com/cloudflare/worker-template ↗](https://github.com/cloudflare/worker-template) optional  
   * The GitHub URL of the [repository to use as the template ↗](https://github.com/cloudflare/worker-template) for generating the project.
* `--type=$TYPE` =webpack optional  
   * The type of project; one of `webpack`, `javascript`, or `rust`.
* `--site` optional  
   * When defined, the default `$TEMPLATE` value is changed to [cloudflare/workers-sdk/templates/worker-sites ↗](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-sites). This scaffolds a [Workers Site](https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch) project.
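For reference, a freshly generated project's Wrangler file looks roughly like the following (a sketch, not the exact template output — the empty `account_id` must be filled in before publishing):

```toml
name = "worker"      # from $NAME
type = "webpack"     # from --type
account_id = ""      # fill in from the Cloudflare dashboard
workers_dev = true
```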

---

## init

Create a skeleton [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) in an existing directory. This command can be used as an alternative to `generate` if you prefer to clone a template repository yourself or you already have a JavaScript project and would like to use Wrangler.

Terminal window

```

wrangler init [$NAME] [--type=$TYPE] [--site]


```

Default values indicated by =value.

* `$NAME` =(name of working directory) optional  
   * The name of the Workers project. This is both the directory name and the `name` property in the generated [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/) file.
* `--type=$TYPE` =webpack optional  
   * The type of project; one of `webpack`, `javascript`, or `rust`.
* `--site` optional  
   * When defined, the default `$TEMPLATE` value is changed to [cloudflare/workers-sdk/templates/worker-sites ↗](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-sites). This scaffolds a [Workers Site](https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch) project.

---

## build

Build your project (if applicable). This command looks at your Wrangler file and reacts to the ["type" value](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#keys) specified.

When using `type = "webpack"`, Wrangler will build the Worker using its internal webpack installation. When using `type = "javascript"`, the [build.command](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#build-1), if defined, will run.

Terminal window

```

wrangler build [--env $ENVIRONMENT_NAME]


```

* `--env` optional  
   * If defined, Wrangler will load the matching environment's configuration before building. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.

---

## login

Authorize Wrangler with your Cloudflare account. This will open a login page in your browser and request your account access permissions. This command is the alternative to `wrangler config` and it uses OAuth tokens.

Terminal window

```

wrangler login [--scopes-list] [--scopes $SCOPES]


```

All of the arguments and flags to this command are optional:

* `--scopes-list` optional  
   * List all the available OAuth scopes with descriptions.
* `--scopes $SCOPES` optional  
   * Allows you to choose the set of OAuth scopes. The scopes must be entered as a whitespace-separated list, for example, `wrangler login --scopes account:read user:read`.

`wrangler login` uses all the available scopes by default if no flags are provided.

---

## logout

Remove Wrangler's authorization for accessing your account. This command will invalidate your current OAuth token and delete the configuration file, if present.

Terminal window

```

wrangler logout


```

This command only invalidates OAuth tokens acquired through the `wrangler login` command. However, it will try to delete the configuration file regardless of your authorization method.

To delete your API token:

1. In the Cloudflare dashboard, go to the **Workers & Pages** page.  
[ Go to **Workers & Pages** ](https://dash.cloudflare.com/?to=/:account/workers-and-pages)
2. In **Overview**, select **Get your API token** in the right-side menu.
3. Select the three-dot menu on your Wrangler token and select **Delete**.

---

## config

Configure Wrangler so that it may acquire a Cloudflare API Token or Global API key, instead of OAuth tokens, in order to access and manage account resources.

Terminal window

```

wrangler config [--api-key]


```

* `--api-key` optional  
   * To provide your email and global API key instead of a token. (This is not recommended for security reasons.)

You can also use environment variables to authenticate, or `wrangler login` to authorize with OAuth tokens.

---

## publish

Publish your Worker to Cloudflare. Several keys in your Wrangler file determine whether you are publishing to a `*.workers.dev` subdomain or a custom domain. However, custom domains must be proxied (orange-clouded) through Cloudflare. Refer to the [Get started guide](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) for more information.

Terminal window

```

wrangler publish [--env $ENVIRONMENT_NAME]


```

* `--env` optional  
   * If defined, Wrangler will load the matching environment's configuration before building and deploying. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.

To use this command, the following fields are required in your Wrangler file:

* `name` string  
   * The name of the Workers project. This is both the directory name and `name` property in the generated [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/) file.
* `type` string  
   * The type of project; one of `webpack`, `javascript`, or `rust`.
* `account_id` string  
   * The Cloudflare account ID. This can be found in the Cloudflare dashboard, for example, `account_id = "a655bacaf2b4cad0e2b51c5236a6b974"`.

You can publish to [<your-worker>.<your-subdomain>.workers.dev ↗](https://workers.dev) or to a custom domain.

When you publish changes to an existing Worker script, all new requests will automatically route to the updated version of the Worker without downtime. Any inflight requests will continue running on the previous version until completion. Once all inflight requests have completed, the previous Worker version will be purged and will no longer handle requests.

### Publishing to workers.dev

To publish to [\*.workers.dev ↗](https://workers.dev), you will first need to have a subdomain registered. You can register a subdomain by executing the [wrangler subdomain](#subdomain) command.

After you have registered a subdomain, add `workers_dev` to your Wrangler file.

* `workers_dev` bool  
   * When `true`, indicates that the Worker should be deployed to a `*.workers.dev` domain.

### Publishing to your own domain

To publish to your own domain, specify these three fields in your Wrangler file.

* `zone_id` string  
   * The Cloudflare zone ID, for example, `zone_id = "b6558acaf2b4cad1f2b51c5236a6b972"`, which can be found in the Cloudflare dashboard.
* `route` string  
   * The route you would like to publish to, for example, `route = "example.com/my-worker/*"`.
* `routes` Array  
   * The routes you would like to publish to, for example, `routes = ["example.com/foo/*", "example.com/bar/*"]`.

Note

Make sure to use only `route` or `routes`, not both.
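Putting these fields together, a custom-domain Wrangler file looks roughly like this (a sketch; the IDs are the illustrative values used elsewhere on this page):

```toml
name = "my-worker"
type = "javascript"
account_id = "a655bacaf2b4cad0e2b51c5236a6b974"
workers_dev = false
zone_id = "b6558acaf2b4cad1f2b51c5236a6b972"
route = "example.com/my-worker/*"
```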

### Publishing the same code to multiple domains

To publish your code to multiple domains, refer to the [documentation for environments](https://developers.cloudflare.com/workers/wrangler/environments/).

---

## dev

`wrangler dev` is a command that establishes a connection between `localhost` and a global network server that operates your Worker in development. A `cloudflared` tunnel forwards all requests to the global network server, which continuously updates as your Worker code changes. This allows full access to Workers KV, Durable Objects and other Cloudflare developer platform products. The `dev` command is a way to test your Worker while developing.

Terminal window

```

wrangler dev [--env $ENVIRONMENT_NAME] [--ip <ip>] [--port <port>] [--host <host>] [--local-protocol <http|https>] [--upstream-protocol <http|https>]


```

* `--env` optional  
   * If defined, Wrangler will load the matching environment's configuration. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--ip` optional  
   * The IP to listen on, defaults to `127.0.0.1`.
* `--port` optional  
   * The port to listen on, defaults to `8787`.
* `--host` optional  
   * The host to forward requests to, defaults to the zone of the project or to `tutorial.cloudflareworkers.com` if unauthenticated.
* `--local-protocol` optional  
   * The protocol to listen to requests on, defaults to `http`.
* `--upstream-protocol` optional  
   * The protocol to forward requests to host on, defaults to `https`.

These arguments can also be set in your Wrangler file. Refer to the [wrangler dev configuration](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#dev) documentation for more information.

### Usage

Run `wrangler dev` from your Worker directory. Wrangler will run a local server that accepts requests, executes your Worker, and forwards them to a host. If you want to use a host other than your zone or `tutorial.cloudflareworkers.com`, specify it with `--host example.com`.

Terminal window

```

wrangler dev


```

```

💁  JavaScript project found. Skipping unnecessary build!

💁  watching "./"

👂  Listening on http://127.0.0.1:8787


```

With `wrangler dev` running, you can send HTTP requests to `localhost:8787` and your Worker should execute as expected. You will also see `console.log` messages and exceptions appearing in your terminal. If either of these does not happen, or you think the output is incorrect, [file an issue ↗](https://github.com/cloudflare/wrangler-legacy).
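As a concrete (hypothetical) example of the kind of Worker you might exercise this way — written in the service-worker format that Wrangler v1 projects use — note the `console.log`, which is what surfaces in the `wrangler dev` terminal:

```javascript
// Minimal service-worker-format Worker (hypothetical example, not from a template).
// During `wrangler dev`, requests to localhost:8787 are answered by handleRequest.
async function handleRequest(request) {
  const url = new URL(request.url);
  // This log line appears in the `wrangler dev` terminal output.
  console.log(`Handling ${request.method} ${url.pathname}`);
  return new Response(`Hello from ${url.pathname}`, {
    headers: { "content-type": "text/plain" },
  });
}

// `addEventListener("fetch", ...)` is provided by the Workers runtime; the guard
// lets the handler also be exercised directly outside that runtime.
if (typeof addEventListener === "function") {
  addEventListener("fetch", (event) => {
    event.respondWith(handleRequest(event.request));
  });
}
```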

---

## tail

Start a session to livestream logs from a deployed Worker.

Terminal window

```

wrangler tail [--format $FORMAT] [--status $STATUS] [OPTIONS]


```

* `--format $FORMAT` json|pretty  
   * The format of the log entries.
* `--status $STATUS`  
   * Filter by invocation status. Possible values: `ok`, `error`, `canceled`.
* `--header $HEADER`  
   * Filter by HTTP header.
* `--method $METHOD`  
   * Filter by HTTP method.
* `--sampling-rate $RATE`  
   * Sample a percentage of requests to log.
* `--search $SEARCH`  
   * Filter by a text match in `console.log` messages.

After starting `wrangler tail` in a directory with a project, you will receive a live feed of console and exception logs for each request your Worker receives.

Like all Wrangler commands, run `wrangler tail` from your Worker’s root directory (the directory with your Wrangler file).

Legacy issues with existing cloudflared configuration

`wrangler tail` versions older than version 1.19.0 use `cloudflared` to run. Update to the [latest Wrangler version](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/install-update/) to avoid any issues.

---

## preview

Preview your project using the [Cloudflare Workers preview service ↗](https://cloudflareworkers.com/).

Terminal window

```

wrangler preview [--watch] [--env $ENVIRONMENT_NAME] [ --url $URL] [$METHOD] [$BODY]


```

Default values indicated by =value.

* `--env $ENVIRONMENT_NAME` optional  
   * If defined, Wrangler will load the matching environment's configuration. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--watch` recommended  
   * When enabled, any changes to the Worker project will continually update the preview service with the newest version of your project. By default, `wrangler preview` will only bundle your project a single time.
* `$METHOD` ="GET" optional  
   * The type of request to preview your Worker with (`GET`, `POST`).
* `$BODY` ="Null" optional  
   * The body string to post to your preview Worker request. For example, `wrangler preview post hello=hello`.

### kv\_namespaces

If you are using [kv\_namespaces](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#kv%5Fnamespaces) with `wrangler preview`, you will need to specify a `preview_id` in your Wrangler file before you can start the session. This is so that you do not accidentally write changes to your production namespace while you are developing. You may make `preview_id` equal to `id` if you would like to preview with your production namespace, but you should ensure that you are not writing values to KV that would break your production Worker.

To create a `preview_id` run:

Terminal window

```

wrangler kv:namespace create --preview "NAMESPACE"


```
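The `preview_id` it prints can sit alongside the production `id` in the same binding. A sketch (the IDs are the illustrative values from the namespace examples later on this page):

```toml
kv_namespaces = [
  { binding = "MY_KV", id = "e29b263ab50e42ce9b637fa8370175e8", preview_id = "15137f8edf6c09742227e99b08aaf273" }
]
```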

### Previewing on Windows Subsystem for Linux (WSL 1/2)

#### Setting $BROWSER to your browser binary

WSL is a Linux environment, so Wrangler attempts to invoke `xdg-open` to open your browser. To make `wrangler preview` work with WSL, you should set your `$BROWSER` to the path of your browser binary:

Terminal window

```

export BROWSER="/mnt/c/tools/firefox.exe"

wrangler preview


```

Spaces in filepaths are not common in Linux, and some programs like `xdg-open` will break on [paths with spaces ↗](https://github.com/microsoft/WSL/issues/3632#issuecomment-432821522). You can work around this by linking the binary to your `/usr/local/bin`:

Terminal window

```

sudo ln -s "/mnt/c/Program Files/Mozilla Firefox/firefox.exe" /usr/local/bin/firefox

export BROWSER=firefox


```

#### Setting $BROWSER to `wsl-open`

Another option is to install [wsl-open ↗](https://github.com/4U6U57/wsl-open#standalone) and set the `$BROWSER` [env variable](https://developers.cloudflare.com/workers/configuration/environment-variables/) to `wsl-open` via `wsl-open -w`. This ensures that `xdg-open` uses `wsl-open` when it attempts to open your browser.

If you are using WSL 2, you will need to install `wsl-open` following their [standalone method ↗](https://github.com/4U6U57/wsl-open#standalone) rather than through `npm`. This is because their npm package has not yet been updated with WSL 2 support.

---

## `route`

List or delete a route associated with a domain:

Terminal window

```

wrangler route list [--env $ENVIRONMENT_NAME]


```

Default values indicated by =value.

* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.

This command will forward the JSON response from the [List Routes API](https://developers.cloudflare.com/api/resources/workers/subresources/routes/methods/list/). Each object within the JSON list will include the route id, route pattern, and the assigned Worker name for the route. Piping this through a tool such as `jq` will render the output nicely.

Terminal window

```

wrangler route delete $ID [--env $ENVIRONMENT_NAME]


```

Default values indicated by =value.

* `$ID` required  
   * The ID of the route to delete.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.

---

## subdomain

Create or change your [\*.workers.dev ↗](https://workers.dev) subdomain.

Terminal window

```

wrangler subdomain <name>


```

---

## secret

Interact with your secrets.

### `put`

Create or replace a secret.

Terminal window

```

wrangler secret put <name> --env ENVIRONMENT_NAME

Enter the secret text you would like assigned to the variable name on the Worker named my-worker-ENVIRONMENT_NAME:


```

You will be prompted to input the secret's value. This command can receive piped input, so the following example is also possible:

Terminal window

```

echo "-----BEGIN PRIVATE KEY-----\nM...==\n-----END PRIVATE KEY-----\n" | wrangler secret put PRIVATE_KEY


```

* `name`  
   * The variable name to be accessible in the script.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.

### `delete`

Delete a secret from a specific script.

Terminal window

```

wrangler secret delete <name> --env ENVIRONMENT_NAME


```

* `name`  
   * The variable name to be accessible in the script.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.

### `list`

List all the secret names bound to a specific script.

Terminal window

```

wrangler secret list --env ENVIRONMENT_NAME


```

* `--env $ENVIRONMENT_NAME` optional  
   * If defined, only the specified environment's secrets will be listed. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.

---

## kv

The `kv` subcommand allows you to store application data in the Cloudflare network to be accessed from Workers using [Workers KV ↗](https://www.cloudflare.com/products/workers-kv/). KV operations are scoped to your account, so in order to use any of these commands, you must:

* configure an `account_id` in your project's Wrangler file.
* run all `wrangler kv:<command>` operations in your terminal from the project's root directory.

### Getting started

To use Workers KV with your Worker, the first thing you must do is create a KV namespace. This is done with the `kv:namespace` subcommand.

The `kv:namespace` subcommand takes a new binding name as its argument. A Workers KV namespace will be created using a concatenation of your Worker’s name (from your Wrangler file) and the binding name you provide:

Terminal window

```

wrangler kv:namespace create "MY_KV"


```

```

🌀  Creating namespace with title "my-site-MY_KV"

✨  Success!

Add the following to your configuration file:

kv_namespaces = [

  { binding = "MY_KV", id = "e29b263ab50e42ce9b637fa8370175e8" }

]


```

Successful operations will print a new configuration block that should be copied into your Wrangler file. Add the output to the existing `kv_namespaces` configuration if already present. You can now access the binding from within a Worker:

JavaScript

```

let value = await MY_KV.get("my-key");


```

To write a value to your KV namespace using Wrangler, run the `wrangler kv:key put` subcommand.

Terminal window

```

wrangler kv:key put --binding=MY_KV "key" "value"


```

```

✨  Success


```

Instead of `--binding`, you may use `--namespace-id` to specify which KV namespace should receive the operation:

Terminal window

```

wrangler kv:key put --namespace-id=e29b263ab50e42ce9b637fa8370175e8 "key" "value"


```

```

✨  Success


```

Additionally, KV namespaces can be used with environments. This is useful when you have code that refers to a KV binding like `MY_KV` and you want that binding to point to different namespaces (such as one for staging and one for production).

A Wrangler file with two environments:

wrangler.jsonc:

```

{

  "env": {

    "staging": {

      "kv_namespaces": [

        {

          "binding": "MY_KV",

          "id": "e29b263ab50e42ce9b637fa8370175e8"

        }

      ]

    },

    "production": {

      "kv_namespaces": [

        {

          "binding": "MY_KV",

          "id": "a825455ce00f4f7282403da85269f8ea"

        }

      ]

    }

  }

}


```

wrangler.toml:

```

[[env.staging.kv_namespaces]]

binding = "MY_KV"

id = "e29b263ab50e42ce9b637fa8370175e8"


[[env.production.kv_namespaces]]

binding = "MY_KV"

id = "a825455ce00f4f7282403da85269f8ea"


```

To insert a value into a specific KV namespace, you can use:

Terminal window

```

wrangler kv:key put --env=staging --binding=MY_KV "key" "value"


```

```

✨  Success


```

Since `--namespace-id` is always unique (unlike binding names), you do not need to specify an `--env` argument.

### Concepts

Most `kv` commands require you to specify a namespace. A namespace can be specified in two ways:

1. With a `--binding`:  
Terminal window  
```  
wrangler kv:key get --binding=MY_KV "my key"  
```  
   * This can be combined with `--preview` flag to interact with a preview namespace instead of a production namespace.
2. With a `--namespace-id`:  
Terminal window  
```  
wrangler kv:key get --namespace-id=06779da6940b431db6e566b4846d64db "my key"  
```

Most `kv` subcommands also allow you to specify an environment with the optional `--env` flag. This allows you to publish Workers running the same code but with different namespaces. For example, you could use separate staging and production namespaces for KV data in your Wrangler file:

wrangler.jsonc:

```

{

  "$schema": "./node_modules/wrangler/config-schema.json",

  "type": "webpack",

  "name": "my-worker",

  "account_id": "<account id here>",

  "route": "staging.example.com/*",

  "workers_dev": false,

  "kv_namespaces": [

    {

      "binding": "MY_KV",

      "id": "06779da6940b431db6e566b4846d64db"

    }

  ],

  "env": {

    "production": {

      "route": "example.com/*",

      "kv_namespaces": [

        {

          "binding": "MY_KV",

          "id": "07bc1f3d1f2a4fd8a45a7e026e2681c6"

        }

      ]

    }

  }

}


```

wrangler.toml:

```

"$schema" = "./node_modules/wrangler/config-schema.json"

type = "webpack"

name = "my-worker"

account_id = "<account id here>"

route = "staging.example.com/*"

workers_dev = false


[[kv_namespaces]]

binding = "MY_KV"

id = "06779da6940b431db6e566b4846d64db"


[env.production]

route = "example.com/*"


  [[env.production.kv_namespaces]]

  binding = "MY_KV"

  id = "07bc1f3d1f2a4fd8a45a7e026e2681c6"


```

With the Wrangler file above, you can specify `--env production` when you want to perform a KV action on the namespace `MY_KV` under `env.production`. For example, with the Wrangler file above, you can get a value out of a production KV instance with:

Terminal window

```

wrangler kv:key get --binding "MY_KV" --env=production "my key"


```

To learn more about environments, refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/).

### `kv:namespace`

#### `create`

Create a new namespace.

Terminal window

```

wrangler kv:namespace create $NAME [--env=$ENVIRONMENT_NAME] [--preview]


```

* `$NAME`  
   * The name of the new namespace.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--preview` optional  
   * Interact with a preview namespace (the `preview_id` value) instead of production.

##### Usage

Terminal window

```

wrangler kv:namespace create "MY_KV"

🌀  Creating namespace with title "worker-MY_KV"

✨  Add the following to your wrangler.toml:

kv_namespaces = [

  { binding = "MY_KV", id = "e29b263ab50e42ce9b637fa8370175e8" }

]


```

Terminal window

```

wrangler kv:namespace create "MY_KV" --preview

🌀  Creating namespace with title "my-site-MY_KV_preview"

✨  Success!

Add the following to your wrangler.toml:

kv_namespaces = [

  { binding = "MY_KV", preview_id = "15137f8edf6c09742227e99b08aaf273" }

]


```

#### `list`

List all KV namespaces associated with an account ID.

Terminal window

```

wrangler kv:namespace list


```

##### Usage

This example passes the Wrangler command through the `jq` command:

Terminal window

```

wrangler kv:namespace list | jq "."

[

  {

    "id": "06779da6940b431db6e566b4846d64db",

    "title": "TEST_NAMESPACE"

  },

  {

    "id": "32ac1b3c2ed34ed3b397268817dea9ea",

    "title": "STATIC_CONTENT"

  }

]


```

#### `delete`

Delete a given namespace.

Terminal window

```

wrangler kv:namespace delete --binding= [--namespace-id=]


```

* `--binding` required (if no `--namespace-id`)  
   * The name of the namespace to delete.
* `--namespace-id` required (if no `--binding`)  
   * The ID of the namespace to delete.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--preview` optional  
   * Interact with a preview namespace instead of production.

##### Usage

Terminal window

```

wrangler kv:namespace delete --binding=MY_KV

Are you sure you want to delete namespace f7b02e7fc70443149ac906dd81ec1791? [y/n]

yes

🌀  Deleting namespace f7b02e7fc70443149ac906dd81ec1791

✨  Success


```

Terminal window

```

wrangler kv:namespace delete --binding=MY_KV --preview

Are you sure you want to delete namespace 15137f8edf6c09742227e99b08aaf273? [y/n]

yes

🌀  Deleting namespace 15137f8edf6c09742227e99b08aaf273

✨  Success


```

### `kv:key`

#### `put`

Write a single key-value pair to a particular namespace.

Terminal window

```

wrangler kv:key put --binding= [--namespace-id=] $KEY $VALUE

✨  Success


```

* `$KEY` required  
   * The key to write to.
* `$VALUE` required  
   * The value to write.
* `--binding` required (if no `--namespace-id`)  
   * The name of the namespace to write to.
* `--namespace-id` required (if no `--binding`)  
   * The ID of the namespace to write to.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--preview` optional  
   * Interact with a preview namespace instead of production. Pass this to use your Wrangler file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.
* `--ttl` optional  
   * The lifetime (in number of seconds) the document should exist before expiring. Must be at least `60` seconds. This option takes precedence over the `expiration` option.
* `--expiration` optional  
   * The timestamp, in UNIX seconds, indicating when the key-value pair should expire.
* `--path` optional  
   * When defined, Wrangler reads the `--path` file location to upload its contents as KV documents. This is ideal for security-sensitive operations because it avoids saving keys and values into your terminal history.

##### Usage

Terminal window

```

wrangler kv:key put --binding=MY_KV "key" "value"

✨  Success


```

Terminal window

```

wrangler kv:key put --binding=MY_KV --preview "key" "value"

✨  Success


```

Terminal window

```

wrangler kv:key put --binding=MY_KV "key" "value" --ttl=10000

✨  Success


```

Terminal window

```

wrangler kv:key put --binding=MY_KV "key" value.txt --path

✨  Success


```

#### `list`

Output a list of all keys in a given namespace.

Terminal window

```

wrangler kv:key list --binding= [--namespace-id=] [--prefix] [--env]


```

* `--binding` required (if no `--namespace-id`)  
   * The name of the namespace to list.
* `--namespace-id` required (if no `--binding`)  
   * The ID of the namespace to list.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--prefix` optional  
   * A prefix to filter listed keys.

##### Usage

This example passes the Wrangler command through the `jq` command:

Terminal window

```

wrangler kv:key list --binding=MY_KV --prefix="public" | jq "."

[

  {

    "name": "public_key"

  },

  {

    "name": "public_key_with_expiration",

    "expiration": "2019-09-10T23:18:58Z"

  }

]


```

#### `get`

Read a single value by key from the given namespace.

Terminal window

```

wrangler kv:key get --binding= [--env=] [--preview] [--namespace-id=] "$KEY"


```

* `$KEY` required  
   * The key value to get.
* `--binding` required (if no `--namespace-id`)  
   * The name of the namespace to get from.
* `--namespace-id` required (if no `--binding`)  
   * The ID of the namespace to get from.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the operation will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--preview` optional  
   * Interact with a preview namespace instead of production. Pass this to use your Wrangler file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.

##### Usage

Terminal window

```

wrangler kv:key get --binding=MY_KV "key"

value


```

#### `delete`

Remove a single key-value pair from the given namespace.

Terminal window

```

wrangler kv:key delete --binding= [--env=] [--preview] [--namespace-id=] "$KEY"


```

* `$KEY` required  
   * The key value to delete.
* `--binding` required (if no `--namespace-id`)  
   * The name of the namespace to delete from.
* `--namespace-id` required (if no `--binding`)  
   * The ID of the namespace to delete from.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--preview` optional  
   * Interact with a preview namespace instead of production. Pass this to use your Wrangler configuration file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.

##### Usage

Terminal window

```

wrangler kv:key delete --binding=MY_KV "key"

Are you sure you want to delete key "key"? [y/n]

yes

🌀  Deleting key "key"

✨  Success


```

### `kv:bulk`

#### `put`

Write a file full of key-value pairs to the given namespace.

Terminal window

```

wrangler kv:bulk put --binding= [--env=] [--preview] [--namespace-id=] $FILENAME


```

* `$FILENAME` required  
   * The file to write to the namespace.
* `--binding` required (if no `--namespace-id`)  
   * The name of the namespace to write to.
* `--namespace-id` required (if no `--binding`)  
   * The ID of the namespace to write to.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--preview` optional  
   * Interact with a preview namespace instead of production. Pass this to use your Wrangler file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.

This command takes a JSON file as an argument with a list of key-value pairs to upload. An example of JSON input:

```

[

  {

    "key": "test_key",

    "value": "test_value",

    "expiration_ttl": 3600

  }

]


```

To store JSON data, serialize `value` to a string:

```

[

  {

    "key": "test_key",

    "value": "{\"name\": \"test_value\"}",

    "expiration_ttl": 3600

  }

]


```
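If you generate the bulk file programmatically, `JSON.stringify` produces the escaped form for you. A minimal sketch in JavaScript (the key name and inner object mirror the example above):

```javascript
// Sketch: building a bulk entry whose value is itself JSON.
// JSON.stringify on the inner object yields the escaped string form
// shown above; stringify-ing the array produces a bulk-upload file.
const entry = {
  key: "test_key",
  value: JSON.stringify({ name: "test_value" }), // '{"name":"test_value"}'
  expiration_ttl: 3600,
};

const bulkFile = JSON.stringify([entry], null, 2);
```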

The schema below is the full schema for key-value entries uploaded via the bulk API:

* `key` ` string ` required  
   * The key's name. The name may be at most 512 bytes. All printable, non-whitespace characters are valid.
* `value` ` string ` required  
   * The UTF-8 encoded string to be stored, up to 25 MB in length.
* `expiration` int optional  
   * The time, measured in number of seconds since the UNIX epoch, at which the key should expire.
* `expiration_ttl` int optional  
   * The number of seconds the document should exist before expiring. Must be at least `60` seconds.
* `base64` bool optional  
   * When true, the server will decode the value as base64 before storing it. This is useful for writing values that would otherwise be invalid JSON strings, such as images. Defaults to `false`.

If both `expiration` and `expiration_ttl` are specified for a given key, the API will prefer `expiration_ttl`.
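As an illustration, an entry using the `base64` flag might look like the following (the key name is made up, and the value is the string `hello world` base64-encoded — binary data such as images would be encoded the same way):

```
[
  {
    "key": "profile_image",
    "value": "aGVsbG8gd29ybGQ=",
    "base64": true
  }
]
```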

##### Usage

Terminal window

```

wrangler kv:bulk put --binding=MY_KV allthethingsupload.json

🌀  uploading 1 key value pairs

✨  Success


```

#### `delete`

Delete all specified keys within a given namespace.

Terminal window

```

wrangler kv:bulk delete --binding= [--env=] [--preview] [--namespace-id=] $FILENAME


```

* `$FILENAME` required  
   * The file with key-value pairs to delete.
* `--binding` required (if no `--namespace-id`)  
   * The name of the namespace to delete from.
* `--namespace-id` required (if no `--binding`)  
   * The ID of the namespace to delete from.
* `--env $ENVIRONMENT_NAME` optional  
   * If defined, the changes will only apply to the specified environment. Refer to [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) for more information.
* `--preview` optional  
   * Interact with a preview namespace instead of production. Pass this to use your Wrangler file's `kv_namespaces.preview_id` instead of `kv_namespaces.id`.

This command takes a JSON file as an argument with a list of key-value pairs to delete. An example of JSON input:

```

[

  {

    "key": "test_key",

    "value": ""

  }

]


```

* `key` ` string ` required  
   * The key’s name. The name may be at most 512 bytes. All printable, non-whitespace characters are valid.
* `value` ` string ` required  
   * This field must be specified for deserialization purposes, but is unused because the provided keys are being deleted, not written.

##### Usage

Terminal window

```

wrangler kv:bulk delete --binding=MY_KV allthethingsdelete.json


```

```

Are you sure you want to delete all keys in allthethingsdelete.json? [y/n]

y

🌀  deleting 1 key value pairs

✨  Success


```

---

## Environment variables

Wrangler supports any [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) keys passed in as environment variables. This works by passing in `CF_` + the uppercased TOML key. For example:

`CF_NAME=my-worker CF_ACCOUNT_ID=1234 wrangler dev`

---

## --help

Terminal window

```

wrangler --help


```

```

👷 ✨  wrangler 1.12.3

The Wrangler Team <wrangler@cloudflare.com>


USAGE:

    wrangler [SUBCOMMAND]


FLAGS:

    -h, --help       Prints help information

    -V, --version    Prints version information


SUBCOMMANDS:

    kv:namespace    🗂️  Interact with your Workers KV Namespaces

    kv:key          🔑  Individually manage Workers KV key-value pairs

    kv:bulk         💪  Interact with multiple Workers KV key-value pairs at once

    route           ➡️  List or delete worker routes.

    secret          🤫  Generate a secret that can be referenced in the worker script

    generate        👯  Generate a new worker project

    init            📥  Create a wrangler.toml for an existing project

    build           🦀  Build your worker

    preview         🔬  Preview your code temporarily on cloudflareworkers.com

    dev             👂  Start a local server for developing your worker

    publish         🆙  Publish your worker to the orange cloud

    config          🕵️  Authenticate Wrangler with a Cloudflare API Token or Global API Key

    subdomain       👷  Configure your workers.dev subdomain

    whoami          🕵️  Retrieve your user info and test your auth config

    tail            🦚  Aggregate logs from production worker

    login           🔓  Authorize Wrangler with your Cloudflare username and password

    logout          ⚙️  Remove authorization from Wrangler.

    help            Prints this message or the help of the given subcommand(s)


```


---

---
title: Configuration
description: Learn how to configure your Cloudflare Worker using Wrangler v1. This guide covers top-level and environment-specific settings, key types, and deployment options.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Configuration

Warning

This page is for Wrangler v1, which has been deprecated.[Learn how to update to the latest version of Wrangler](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/).

## Background

Your project will need some configuration before you can publish your Worker. Configuration is done through changes to keys and values stored in a Wrangler file located in the root of your project directory. You must manually edit this file to edit your keys and values before you can publish.

---

## Environments

The top-level configuration is the collection of values you specify at the top of your Wrangler file. These values will be inherited by all environments, unless otherwise defined in the environment.

The layout of a top-level configuration in a Wrangler file is displayed below:

* [  wrangler.jsonc ](#tab-panel-8494)
* [  wrangler.toml ](#tab-panel-8495)

```

{

  "$schema": "./node_modules/wrangler/config-schema.json",

  "name": "your-worker",

  "type": "javascript",

  "account_id": "your-account-id",

  // This field specifies that the Worker

  // will be deployed to a *.workers.dev domain

  "workers_dev": true,

  // -- OR --

  // These fields specify that the Worker

  // will deploy to a custom domain

  "zone_id": "your-zone-id",

  "routes": [

    "example.com/*"

  ]

}


```

```

"$schema" = "./node_modules/wrangler/config-schema.json"

name = "your-worker"

type = "javascript"

account_id = "your-account-id"

workers_dev = true

zone_id = "your-zone-id"

routes = [ "example.com/*" ]


```

Environment configuration (optional): the configuration values you specify under an `[env.name]` in your Wrangler file.

Environments allow you to deploy the same project to multiple places under multiple names. These environments are utilized with the `--env` or `-e` flag on the [commands](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/) that are deploying live Workers:

* `build`
* `dev`
* `preview`
* `publish`
* `secret`

Some environment properties can be [_inherited_](#keys) from the top-level configuration, but if new values are configured in an environment, they will always override those at the top level.

An example of an `[env.name]` configuration looks like this:

* [  wrangler.jsonc ](#tab-panel-8514)
* [  wrangler.toml ](#tab-panel-8515)

```

{

  "$schema": "./node_modules/wrangler/config-schema.json",

  "type": "javascript",

  "name": "your-worker",

  "account_id": "your-account-id",

  "vars": {

    "FOO": "default FOO value",

    "BAR": "default BAR value"

  },

  "kv_namespaces": [

    {

      "binding": "FOO",

      "id": "1a...",

      "preview_id": "1b..."

    }

  ],

  "env": {

    "helloworld": {

      // Now adding configuration keys for the "helloworld" environment.

      // These new values will override the top-level configuration.

      "name": "your-worker-helloworld",

      "account_id": "your-other-account-id",

      "vars": {

        "FOO": "env-helloworld FOO value",

        "BAR": "env-helloworld BAR value"

      },

      "kv_namespaces": [

        {

          // Redeclare kv namespace bindings for each environment

          // NOTE: New IDs are used here because the `account_id` value changed.

          "binding": "FOO",

          "id": "888...",

          "preview_id": "999..."

        }

      ]

    }

  }

}


```

```

"$schema" = "./node_modules/wrangler/config-schema.json"

type = "javascript"

name = "your-worker"

account_id = "your-account-id"


[vars]

FOO = "default FOO value"

BAR = "default BAR value"


[[kv_namespaces]]

binding = "FOO"

id = "1a..."

preview_id = "1b..."


[env.helloworld]

name = "your-worker-helloworld"

account_id = "your-other-account-id"


  [env.helloworld.vars]

  FOO = "env-helloworld FOO value"

  BAR = "env-helloworld BAR value"


  [[env.helloworld.kv_namespaces]]

  binding = "FOO"

  id = "888..."

  preview_id = "999..."


```

To deploy this example Worker to the `helloworld` environment, you would run `wrangler publish --env helloworld`.

---

## Keys

There are three types of keys in a Wrangler file:

* Top-level-only keys must be configured at the top level of your Wrangler file; every environment in the same project shares this key's value.
* Inherited keys can be configured at the top level and/or environment. If the key is defined only at the top level, the environment will use the key's value from the top level. If the key is defined in the environment, the environment value will override the top-level value.
* Non-inherited keys must be defined for every environment individually.
* `name` inherited required  
   * The name of your Worker script. If inherited, the environment name will be appended to the top-level `name`.
* `type` top level required  
   * Specifies how `wrangler build` will build your project. There are three options: `javascript`, `webpack`, and `rust`. `javascript` checks for a build command specified in the `[build]` section, `webpack` builds your project using webpack v4, and `rust` compiles the Rust in your project to WebAssembly.

Note

Cloudflare will continue to support `rust` and `webpack` project types, but recommends using the `javascript` project type and specifying a custom [build](#build) section.

* `account_id` inherited required  
   * This is the ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the `zone_id` you provide, if you provide one. It can also be specified through the `CF_ACCOUNT_ID` environment variable.
* `zone_id` inherited optional  
   * This is the ID of the zone or domain you want to run your Worker on. It can also be specified through the `CF_ZONE_ID` environment variable. This key is optional if you are using only a `*.workers.dev` subdomain.
* `workers_dev` inherited optional  
   * This is a boolean flag that specifies if your Worker will be deployed to your [\*.workers.dev ↗](https://workers.dev) subdomain. If omitted, it defaults to false.
* `route` not inherited optional  
   * A route, specified by URL pattern, on your zone that you would like to run your Worker on.  
   `route = "http://example.com/*"`. A `route` OR `routes` key is only required if you are not using a [\*.workers.dev ↗](https://workers.dev) subdomain.
* `routes` not inherited optional  
   * A list of routes you would like to use your Worker on. These follow exactly the same rules as a `route`, but you can specify a list of them.  
   `routes = ["http://example.com/hello", "http://example.com/goodbye"]`. A `route` OR `routes` key is only required if you are not using a `*.workers.dev` subdomain.
* `webpack_config` inherited optional  
   * This is the path to a custom webpack configuration file for your Worker. You must specify this field to use a custom webpack configuration, otherwise Wrangler will use a default configuration for you. Refer to the [Wrangler webpack page](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/eject-webpack/) for more information.
* `vars` not inherited optional  
   * An object containing text variables that can be directly accessed in a Worker script.
* `kv_namespaces` not inherited optional  
   * These specify any [Workers KV](#kv%5Fnamespaces) Namespaces you want to access from inside your Worker.
* `site` inherited optional  
   * Determines the local folder to upload and serve from a Worker.
* `dev` not inherited optional  
   * Arguments for `wrangler dev` that configure local server.
* `triggers` inherited optional  
   * Configures cron triggers for running a Worker on a schedule.
* `usage_model` inherited optional  
   * Specifies the [Usage Model](https://developers.cloudflare.com/workers/platform/pricing/#workers) for your Worker. There are two options - [bundled](https://developers.cloudflare.com/workers/platform/limits/#account-plan-limits) and [unbound](https://developers.cloudflare.com/workers/platform/limits/#account-plan-limits). For newly created Workers, if the Usage Model is omitted it will be set to the [default Usage Model set on the account ↗](https://dash.cloudflare.com/?account=workers/default-usage-model). For existing Workers, if the Usage Model is omitted, it will be set to the Usage Model configured in the dashboard for that Worker.
* `build` top level optional  
   * Configures a custom build step to be run by Wrangler when building your Worker. Refer to the [custom builds documentation](#build) for more details.

### vars

The `vars` key defines a table of [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) provided to your Worker script. All values are plaintext values.

Usage:

* [  wrangler.jsonc ](#tab-panel-8490)
* [  wrangler.toml ](#tab-panel-8491)

```

{

  "vars": {

    "FOO": "some value",

    "BAR": "some other string"

  }

}


```

```

[vars]

FOO = "some value"

BAR = "some other string"


```

The table keys are available to your Worker as global variables, which will contain their associated values.

JavaScript

```

// Worker code:

console.log(FOO);

//=> "some value"


console.log(BAR);

//=> "some other string"


```

Alternatively, you can define `vars` using an inline table format. To remain valid TOML, the inline table must not contain any newlines:

* [  wrangler.jsonc ](#tab-panel-8492)
* [  wrangler.toml ](#tab-panel-8493)

```

{

  "vars": {

    "FOO": "some value",

    "BAR": "some other string"

  }

}


```

```

vars = { FOO = "some value", BAR = "some other string" }


```

Note

Secrets should be handled using the [wrangler secret](https://developers.cloudflare.com/workers/wrangler/commands/general/#secret) command.

### kv\_namespaces

`kv_namespaces` defines a list of KV namespace bindings for your Worker.

Usage:

* [  wrangler.jsonc ](#tab-panel-8498)
* [  wrangler.toml ](#tab-panel-8499)

```

{

  "kv_namespaces": [

    {

      "binding": "FOO",

      "id": "0f2ac74b498b48028cb68387c421e279",

      "preview_id": "6a1ddb03f3ec250963f0a1e46820076f"

    },

    {

      "binding": "BAR",

      "id": "068c101e168d03c65bddf4ba75150fb0",

      "preview_id": "fb69528dbc7336525313f2e8c3b17db0"

    }

  ]

}


```

```

[[kv_namespaces]]

binding = "FOO"

id = "0f2ac74b498b48028cb68387c421e279"

preview_id = "6a1ddb03f3ec250963f0a1e46820076f"


[[kv_namespaces]]

binding = "BAR"

id = "068c101e168d03c65bddf4ba75150fb0"

preview_id = "fb69528dbc7336525313f2e8c3b17db0"


```

Alternatively, you can define `kv_namespaces` using an inline table format:

* [  wrangler.jsonc ](#tab-panel-8502)
* [  wrangler.toml ](#tab-panel-8503)

```

{

  "kv_namespaces": [

    {

      "binding": "FOO",

      "preview_id": "abc456",

      "id": "abc123"

    },

    {

      "binding": "BAR",

      "preview_id": "xyz456",

      "id": "xyz123"

    }

  ]

}


```

```

kv_namespaces = [
  { binding = "FOO", id = "abc123", preview_id = "abc456" },
  { binding = "BAR", id = "xyz123", preview_id = "xyz456" }
]


```

Much like environment variables and secrets, the `binding` names are available to your Worker as global variables.

JavaScript

```

// Worker script:


let value = await FOO.get("keyname");

//=> gets the value for "keyname" from

//=> the FOO variable, which points to

//=> the "0f2ac...e279" KV namespace


```

* `binding` required  
   * The name of the global variable your code will reference. It will be provided as a [KV runtime instance](https://developers.cloudflare.com/kv/api/).
* `id` required  
   * The ID of the KV namespace that your `binding` should represent. Required for `wrangler publish`.
* `preview_id` required  
   * The ID of the KV namespace that your `binding` should represent during `wrangler dev` or `wrangler preview`. Required for `wrangler dev` and `wrangler preview`.

Note

Creating your KV namespaces can be handled using Wrangler’s [KV Commands](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/#kv).

You can also define your `kv_namespaces` using an [alternative TOML syntax ↗](https://github.com/toml-lang/toml/blob/master/toml.md#user-content-table).

### site

A [Workers Site](https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch) generated with [wrangler generate --site](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/#generate) or [wrangler init --site](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/commands/#init).

Usage:

* [  wrangler.jsonc ](#tab-panel-8496)
* [  wrangler.toml ](#tab-panel-8497)

```

{

  "site": {

    "bucket": "./public",

    "entry-point": "workers-site"

  }

}


```

```

[site]

bucket = "./public"

entry-point = "workers-site"


```

* `bucket` required  
   * The directory containing your static assets. It must be a path relative to your Wrangler file. Example: `bucket = "./public"`
* `entry-point` optional  
   * The location of your Worker script. The default location is `workers-site`. Example: `entry-point = "./workers-site"`
* `include` optional  
   * An exclusive list of `.gitignore`\-style patterns that match file or directory names from your `bucket` location. Only matched items will be uploaded. Example: `include = ["upload_dir"]`
* `exclude` optional  
   * A list of `.gitignore`\-style patterns that match files or directories in your `bucket` that should be excluded from uploads. Example: `exclude = ["ignore_dir"]`

You can also define your `site` using an [alternative TOML syntax ↗](https://github.com/toml-lang/toml/blob/master/toml.md#user-content-inline-table).
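For reference, the inline-table form of the same `site` configuration would look like this (equivalent to the `[site]` block above):

```
site = { bucket = "./public", entry-point = "workers-site" }
```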

#### Storage Limits

For exceptionally large pages, Workers Sites may not be ideal. There is a 25 MiB limit per page or file. Additionally, Wrangler will create an asset manifest for your files that will count towards your script’s size limit. If you have too many files, you may not be able to use Workers Sites.

#### Exclusively including files/directories

If you want to include only a certain set of files or directories in your `bucket`, add an `include` field to your`[site]` section of your Wrangler file:

* [  wrangler.jsonc ](#tab-panel-8500)
* [  wrangler.toml ](#tab-panel-8501)

```

{

  "site": {

    "bucket": "./public",

    "entry-point": "workers-site",

    "include": [ // must be an array.

      "included_dir"

    ]

  }

}


```

```

[site]

bucket = "./public"

entry-point = "workers-site"

include = [ "included_dir" ]


```

Wrangler will only upload files or directories matching the patterns in the `include` array.

#### Excluding files/directories

If you want to exclude files or directories in your `bucket`, add an `exclude` field to your `[site]` section of your Wrangler file:

* [  wrangler.jsonc ](#tab-panel-8504)
* [  wrangler.toml ](#tab-panel-8505)

```

{

  "site": {

    "bucket": "./public",

    "entry-point": "workers-site",

    "exclude": [ // must be an array.

      "excluded_dir"

    ]

  }

}


```

```

[site]

bucket = "./public"

entry-point = "workers-site"

exclude = [ "excluded_dir" ]


```

Wrangler will ignore files or directories matching the patterns in the `exclude` array when uploading assets to Workers KV.

#### Include > Exclude

If you provide both `include` and `exclude` fields, the `include` field will be used and the `exclude` field will be ignored.

#### Default ignored entries

Wrangler will always ignore:

* `node_modules`
* Hidden files and directories
* Symlinks

#### More about include/exclude patterns

Refer to the [gitignore documentation ↗](https://git-scm.com/docs/gitignore) to learn more about the standard matching patterns.

#### Customizing your Sites Build

Workers Sites projects use webpack by default. Though you can [bring your own webpack configuration](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/eject-webpack/), be aware of your `entry` and `context` settings.

You can also use the `[build]` section with Workers Sites, as long as your build step will resolve dependencies in `node_modules`. Refer to the [custom builds](#build) section for more information.

### triggers

A set of cron triggers used to call a Worker on a schedule.

Usage:

* [  wrangler.jsonc ](#tab-panel-8506)
* [  wrangler.toml ](#tab-panel-8507)

```

{

  "triggers": {

    "crons": [

      "0 0 * JAN-JUN FRI",

      "0 0 LW JUL-DEC *"

    ]

  }

}


```

```

[triggers]

crons = [ "0 0 * JAN-JUN FRI", "0 0 LW JUL-DEC *" ]


```

* `crons` optional  
   * A set of [cron expressions ↗](https://crontab.guru/), where each expression is a separate schedule to run the Worker on.
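A Worker invoked by these triggers handles a `scheduled` event. A minimal sketch follows; the branch logic is illustrative, but `event.cron` carries the cron expression that fired, so one Worker can host several schedules:

```javascript
// Minimal sketch of a scheduled handler for the two cron triggers above.
// event.cron tells you which schedule invoked the Worker, so you can
// branch on it when one script serves multiple cron expressions.
function handleScheduled(event) {
  if (event.cron === "0 0 * JAN-JUN FRI") {
    return "friday-job";
  }
  return "last-weekday-job";
}

// In the Workers runtime you would register it with:
// addEventListener("scheduled", (event) => event.waitUntil(handleScheduled(event)));
```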

### dev

Arguments for `wrangler dev` can be configured here so you do not have to repeatedly pass them.

Usage:

* [  wrangler.jsonc ](#tab-panel-8508)
* [  wrangler.toml ](#tab-panel-8509)

```

{

  "dev": {

    "port": 9000,

    "local_protocol": "https"

  }

}


```

```

[dev]

port = 9_000

local_protocol = "https"


```

* `ip` optional  
   * IP address for the local `wrangler dev` server to listen on, defaults to `127.0.0.1`.
* `port` optional  
   * Port for local `wrangler dev` server to listen on, defaults to `8787`.
* `local_protocol` optional  
   * Protocol that the local `wrangler dev` server listens for requests on, defaults to `http`.
* `upstream_protocol` optional  
   * Protocol that `wrangler dev` forwards requests on, defaults to `https`.

### build

A custom build command for your project. There are two configurations based on the format of your Worker: `service-worker` and `modules`.

#### Service Workers

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

This section is for customizing Workers with the `service-worker` format. These Workers use `addEventListener` and look like the following:

JavaScript

```
addEventListener("fetch", (event) => {
  event.respondWith(new Response("I'm a service Worker!"));
});
```

Usage:

* [  wrangler.jsonc ](#tab-panel-8510)
* [  wrangler.toml ](#tab-panel-8511)

```
{
  "build": {
    "command": "npm install && npm run build",
    "upload": {
      "format": "service-worker"
    }
  }
}
```

```
[build]
command = "npm install && npm run build"

  [build.upload]
  format = "service-worker"
```

##### `[build]`

* `command` optional  
   * The command used to build your Worker. The command is executed in the `sh` shell on Linux and macOS, and in the `cmd` shell on Windows. The `&&` and `||` shell operators may be used.
* `cwd` optional  
   * The working directory for commands, defaults to the project root directory.
* `watch_dir` optional  
   * The directory to watch for changes while using `wrangler dev`, defaults to `src` relative to the project root directory.

##### `[build.upload]`

* `format` required  
   * The format of the Worker script, must be `"service-worker"`.

Note

Ensure the `main` field in your `package.json` references the Worker you want to publish.

#### Modules

Workers now supports the ES Modules syntax. This format allows you to export a collection of files and/or modules, unlike the Service Worker format which requires a single file to be uploaded.

Module Workers `export` their event handlers instead of using `addEventListener` calls.

Modules receive all bindings (KV Namespaces, Environment Variables, and Secrets) as arguments to the exported handlers. With the Service Worker format, these bindings are available as global variables.

Note

Refer to the [fetch() handler documentation](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch) to learn more about the differences between the Service Worker and Module worker formats.

An uploaded module may `import` other uploaded ES Modules. If using the CommonJS format, you may `require` other uploaded CommonJS modules.

JavaScript

```
import html from "./index.html";

export default {
  // * request is the same as `event.request` from the service worker format
  // * waitUntil() and passThroughOnException() are accessible from `ctx` instead of `event` from the service worker format
  // * env is where bindings like KV namespaces, Durable Object namespaces, Config variables, and Secrets
  //   are exposed, instead of them being placed in global scope.
  async fetch(request, env, ctx) {
    const headers = { "Content-Type": "text/html;charset=UTF-8" };
    return new Response(html, { headers });
  },
};
```

To create a Workers project using Wrangler and Modules, add a `[build]` section:

* [  wrangler.jsonc ](#tab-panel-8512)
* [  wrangler.toml ](#tab-panel-8513)

```
{
  "build": {
    "command": "npm install && npm run build",
    "upload": {
      "format": "modules",
      "main": "./worker.mjs"
    }
  }
}
```

```
[build]
command = "npm install && npm run build"

  [build.upload]
  format = "modules"
  main = "./worker.mjs"
```

##### `[build]`

* `command` optional  
   * The command used to build your Worker. The command is executed in the `sh` shell on Linux and macOS, and in the `cmd` shell on Windows. The `&&` and `||` shell operators may be used.
* `cwd` optional  
   * The working directory for commands, defaults to the project root directory.
* `watch_dir` optional  
   * The directory to watch for changes while using `wrangler dev`, defaults to `src` relative to the project root directory.

##### `[build.upload]`

* `format` required  
   * The format of the Workers script, must be `"modules"`.
* `dir` optional  
   * The directory you wish to upload your modules from, defaults to `dist` relative to the project root directory.
* `main` required  
   * The relative path of the main module from `dir`, including the `./` prefix. The main module must be an ES module. For projects with a build script, this usually refers to the output of your JavaScript bundler.

Note

If your project is written using CommonJS modules, you will need to re-export your handlers and Durable Object classes using an ES module shim. Refer to the [modules-webpack-commonjs ↗](https://github.com/cloudflare/modules-webpack-commonjs) template as an example.

* `rules` optional  
   * An ordered list of rules that define which modules to import, and what type to import them as. You will need to specify rules to use Text, Data, and CompiledWasm modules, or when you wish to have a `.js` file be treated as an `ESModule` instead of `CommonJS`.

Defaults:

* [  wrangler.jsonc ](#tab-panel-8516)
* [  wrangler.toml ](#tab-panel-8517)

```
{
  // You do not need to include these default rules in your Wrangler configuration file; they are implicit.
  // The default rules are treated as the last two rules in the list.
  "build": {
    "upload": {
      "format": "modules",
      "main": "./worker.mjs",
      "rules": [
        {
          "type": "ESModule",
          "globs": [
            "**/*.mjs"
          ]
        },
        {
          "type": "CommonJS",
          "globs": [
            "**/*.js",
            "**/*.cjs"
          ]
        }
      ]
    }
  }
}
```

```
[build.upload]
format = "modules"
main = "./worker.mjs"

  [[build.upload.rules]]
  type = "ESModule"
  globs = [ "**/*.mjs" ]

  [[build.upload.rules]]
  type = "CommonJS"
  globs = [ "**/*.js", "**/*.cjs" ]
```

* `type` required  
   * The module type. Acceptable values are `ESModule`, `CommonJS`, `Text`, `Data`, and `CompiledWasm`.
* `globs` required  
   * UNIX-style [glob rules ↗](https://docs.rs/globset/0.4.6/globset/#syntax) that are used to determine the module type to use for a given file in `dir`. Globs are matched against the module's relative path from `build.upload.dir` without the `./` prefix. Rules are evaluated in order, starting at the top.
* `fallthrough` optional  
   * When set to `true`, later rules for this module type are still considered. If omitted or set to `false`, later rules for this module type are ignored.
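For example, to also import `.txt` files as Text modules (a hypothetical addition alongside the defaults), a rule like the following could be appended:

```
[[build.upload.rules]]
type = "Text"
globs = [ "**/*.txt" ]
```

Matching files can then be imported as strings in the Worker, for example `import notes from "./notes.txt";` (the filename is illustrative).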

---

## Example

To illustrate how these levels are applied, here is a Wrangler file using multiple environments:

* [  wrangler.jsonc ](#tab-panel-8518)
* [  wrangler.toml ](#tab-panel-8519)

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  // top level configuration
  "type": "javascript",
  "name": "my-worker-dev",
  "account_id": "12345678901234567890",
  "zone_id": "09876543210987654321",
  "route": "dev.example.com/*",
  "usage_model": "unbound",
  "kv_namespaces": [
    {
      "binding": "FOO",
      "id": "b941aabb520e61dcaaeaa64b4d8f8358",
      "preview_id": "03c8c8dd3b032b0528f6547d0e1a83f3"
    },
    {
      "binding": "BAR",
      "id": "90e6f6abd5b4f981c748c532844461ae",
      "preview_id": "e5011a026c5032c09af62c55ecc3f438"
    }
  ],
  "build": {
    "command": "webpack",
    "upload": {
      "format": "service-worker"
    }
  },
  "site": {
    "bucket": "./public",
    "entry-point": "workers-site"
  },
  "dev": {
    "ip": "0.0.0.0",
    "port": 9000,
    "local_protocol": "http",
    "upstream_protocol": "https"
  },
  "env": {
    // environment configuration
    "staging": {
      "name": "my-worker-staging",
      "route": "staging.example.com/*",
      "kv_namespaces": [
        {
          "binding": "FOO",
          "id": "0f2ac74b498b48028cb68387c421e279"
        },
        {
          "binding": "BAR",
          "id": "068c101e168d03c65bddf4ba75150fb0"
        }
      ]
    },
    // environment configuration
    "production": {
      "workers_dev": true,
      "kv_namespaces": [
        {
          "binding": "FOO",
          "id": "0d2ac74b498b48028cb68387c421e233"
        },
        {
          "binding": "BAR",
          "id": "0d8c101e168d03c65bddf4ba75150f33"
        }
      ]
    }
  }
}
```

```
"$schema" = "./node_modules/wrangler/config-schema.json"
type = "javascript"
name = "my-worker-dev"
account_id = "12345678901234567890"
zone_id = "09876543210987654321"
route = "dev.example.com/*"
usage_model = "unbound"

[[kv_namespaces]]
binding = "FOO"
id = "b941aabb520e61dcaaeaa64b4d8f8358"
preview_id = "03c8c8dd3b032b0528f6547d0e1a83f3"

[[kv_namespaces]]
binding = "BAR"
id = "90e6f6abd5b4f981c748c532844461ae"
preview_id = "e5011a026c5032c09af62c55ecc3f438"

[build]
command = "webpack"

  [build.upload]
  format = "service-worker"

[site]
bucket = "./public"
entry-point = "workers-site"

[dev]
ip = "0.0.0.0"
port = 9_000
local_protocol = "http"
upstream_protocol = "https"

[env.staging]
name = "my-worker-staging"
route = "staging.example.com/*"

  [[env.staging.kv_namespaces]]
  binding = "FOO"
  id = "0f2ac74b498b48028cb68387c421e279"

  [[env.staging.kv_namespaces]]
  binding = "BAR"
  id = "068c101e168d03c65bddf4ba75150fb0"

[env.production]
workers_dev = true

  [[env.production.kv_namespaces]]
  binding = "FOO"
  id = "0d2ac74b498b48028cb68387c421e233"

  [[env.production.kv_namespaces]]
  binding = "BAR"
  id = "0d8c101e168d03c65bddf4ba75150f33"
```


---

---
title: Install / Update
description: Assuming you have Rust’s package manager, Cargo, installed, run:
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Install / Update

Warning

This page is for Wrangler v1, which has been deprecated. [Learn how to update to the latest version of Wrangler](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/).

## Install

### Install with `npm`

Terminal window

```
npm i @cloudflare/wrangler -g
```

`EACCES` error

An `EACCES` error may be thrown while installing Wrangler. This is related to how some systems install the npm binary. It is recommended that you reinstall npm using a Node version manager like [nvm ↗](https://github.com/nvm-sh/nvm#installing-and-updating) or [Volta ↗](https://volta.sh/).

### Install with `cargo`

Assuming you have Rust’s package manager, [Cargo ↗](https://github.com/rust-lang/cargo), installed, run:

Terminal window

```
cargo install wrangler
```

If you do not have Cargo, install it by first installing `rustup`. On Linux and macOS systems, `rustup` can be installed as follows:

Terminal window

```
curl https://sh.rustup.rs -sSf | sh
```

Additional installation methods are available [on the Rust site ↗](https://forge.rust-lang.org/other-installation-methods.html).

Windows users will need to install Perl as a dependency for `openssl-sys` — [Strawberry Perl ↗](https://www.perl.org/get.html) is recommended.

After Cargo is installed, you may now install Wrangler:

Terminal window

```
cargo install wrangler
```

Customize OpenSSL

By default, a copy of OpenSSL is bundled to simplify installation, but this increases the binary size. To use your system's OpenSSL installation instead, provide the `sys-openssl` feature flag when running the install:

Terminal window

```
cargo install wrangler --features sys-openssl
```

### Manual install

1. Download the binary tarball for your platform from the [releases page ↗](https://github.com/cloudflare/wrangler-legacy/releases). You do not need the `wranglerjs-*.tar.gz` download – Wrangler will install that for you.
2. Unpack the tarball and place the Wrangler binary somewhere on your `PATH`, preferably `/usr/local/bin` for Linux/macOS or `Program Files` for Windows.

## Update

To update [Wrangler ↗](https://github.com/cloudflare/wrangler-legacy), run one of the following:

### Update with `npm`

Terminal window

```
npm update -g @cloudflare/wrangler
```

### Update with `cargo`

Terminal window

```
cargo install wrangler --force
```


---

---
title: Webpack
description: Learn how to migrate from Wrangler v1 to v2 using webpack. This guide covers configuration, custom builds, and compatibility for Cloudflare Workers.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# Webpack

Warning

This page is for Wrangler v1, which has been deprecated. [Learn how to update to the latest version of Wrangler](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/).

Wrangler allows you to develop modern ES6 applications with support for modules. This support is possible because of Wrangler's [webpack ↗](https://webpack.js.org/) integration. This document describes how Wrangler uses webpack to build your Workers and how you can bring your own configuration.

Configuration and webpack version

Wrangler includes `webpack@4`. If you want to use `webpack@5`, or another bundler like esbuild or Rollup, you must set up [custom builds](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#build) in your Wrangler file.

You must set `type = "webpack"` in your Wrangler file to use Wrangler's webpack integration. If you are encountering warnings about specifying `webpack_config`, refer to [backwards compatibility](#backwards-compatibility).

## Sensible defaults

This is the default webpack configuration that Wrangler uses to build your Worker:

JavaScript

```
module.exports = {
  target: "webworker",
  entry: "./index.js", // inferred from "main" in package.json
};
```

The `"main"` field in the `package.json` file determines the `entry` configuration value. When undefined or missing, `"main"` defaults to `index.js`, meaning that `entry` also defaults to `index.js`.

The default configuration sets `target` to `webworker`. This is the correct value because Cloudflare Workers are built to match the [Service Worker API ↗](https://developer.mozilla.org/en-US/docs/Web/API/Service%5FWorker%5FAPI). Refer to the [webpack documentation ↗](https://webpack.js.org/concepts/targets/) for an explanation of this `target` value.

## Bring your own configuration

You can tell Wrangler to use a custom webpack configuration file by setting `webpack_config` in your Wrangler file. Always set `target` to `webworker`.

### Example

JavaScript

```
module.exports = {
  target: "webworker",
  entry: "./index.js",
  mode: "production",
};
```

* [  wrangler.jsonc ](#tab-panel-8520)
* [  wrangler.toml ](#tab-panel-8521)

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "type": "webpack",
  "name": "my-worker",
  "account_id": "12345678901234567890",
  "workers_dev": true,
  "webpack_config": "webpack.config.js"
}
```

```
"$schema" = "./node_modules/wrangler/config-schema.json"
type = "webpack"
name = "my-worker"
account_id = "12345678901234567890"
workers_dev = true
webpack_config = "webpack.config.js"
```

### Example with multiple environments

It is possible to use different webpack configuration files within different [Wrangler environments](https://developers.cloudflare.com/workers/wrangler/environments/). For example, the `"webpack.development.js"` configuration file is used during `wrangler dev` for development, but other, more production-ready configurations are used when building for the staging or production environments:

* [  wrangler.jsonc ](#tab-panel-8522)
* [  wrangler.toml ](#tab-panel-8523)

```
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "type": "webpack",
  "name": "my-worker-dev",
  "account_id": "12345678901234567890",
  "workers_dev": true,
  "webpack_config": "webpack.development.js",
  "env": {
    "staging": {
      "name": "my-worker-staging",
      "webpack_config": "webpack.staging.js"
    },
    "production": {
      "name": "my-worker-production",
      "webpack_config": "webpack.production.js"
    }
  }
}
```

```
"$schema" = "./node_modules/wrangler/config-schema.json"
type = "webpack"
name = "my-worker-dev"
account_id = "12345678901234567890"
workers_dev = true
webpack_config = "webpack.development.js"

[env.staging]
name = "my-worker-staging"
webpack_config = "webpack.staging.js"

[env.production]
name = "my-worker-production"
webpack_config = "webpack.production.js"
```

JavaScript

```
// webpack.development.js
module.exports = {
  target: "webworker",
  devtool: "cheap-module-source-map", // avoid "eval": the Workers environment does not allow it
  entry: "./index.js",
  mode: "development",
};
```

JavaScript

```
// webpack.production.js (webpack.staging.js would look similar)
module.exports = {
  target: "webworker",
  entry: "./index.js",
  mode: "production",
};
```

### Using with Workers Sites

Wrangler commands are run from the project root, so ensure your `entry` and `context` are set appropriately. For a project with the following structure:

```
.
├── public
│   ├── 404.html
│   └── index.html
├── workers-site
│   ├── index.js
│   ├── package-lock.json
│   ├── package.json
│   └── webpack.config.js
└── wrangler.toml
```

The corresponding `webpack.config.js` file should look like this:

JavaScript

```
module.exports = {
  context: __dirname,
  target: "webworker",
  entry: "./index.js",
  mode: "production",
};
```

## Shimming globals

When you want to bring your own implementation of an existing global API, you may [shim ↗](https://webpack.js.org/guides/shimming/#shimming-globals) a third-party module in its place as a webpack plugin.

For example, you may want to replace the `URL` global class with the `url-polyfill` npm package. After defining the package as a dependency in your `package.json` file and installing it, add a plugin entry to your webpack configuration.

### Example with webpack plugin

JavaScript

```
const webpack = require("webpack");

module.exports = {
  target: "webworker",
  entry: "./index.js",
  mode: "production",
  plugins: [
    new webpack.ProvidePlugin({
      URL: "url-polyfill",
    }),
  ],
};
```

## Backwards compatibility

If you are using `wrangler@1.6.0` or earlier, a `webpack.config.js` file at the root of your project is loaded automatically. This is not always obvious, which is why versions of Wrangler after `wrangler@1.6.0` require you to specify a `webpack_config` value in your Wrangler file.

When [upgrading from wrangler@1.6.0](https://developers.cloudflare.com/workers/wrangler/migration/v1-to-v2/wrangler-legacy/install-update/), you may encounter webpack configuration warnings. To resolve this, add `webpack_config = "webpack.config.js"` to your Wrangler file.


---

---
title: System environment variables
description: Local environment variables that can change Wrangler's behavior.
image: https://developers.cloudflare.com/dev-products-preview.png
---


# System environment variables

System environment variables are local environment variables that can change Wrangler's behavior. There are three ways to set system environment variables:

1. Create an `.env` file in your project directory. Set the values of your environment variables in your [.env](https://developers.cloudflare.com/workers/wrangler/system-environment-variables/#example-env-file) file. This is the recommended way to set these variables, as it persists the values between Wrangler sessions.
2. Inline the values in your Wrangler command. For example, `WRANGLER_LOG="debug" npx wrangler deploy` will set the value of `WRANGLER_LOG` to `"debug"` for this execution of the command.
3. Set the values in your shell environment. For example, if you are using Z shell, adding `export CLOUDFLARE_API_TOKEN=...` to your `~/.zshrc` file will set this token as part of your shell configuration.

Note

To set different system environment variables for each environment, create files named `.env.<environment-name>`. When you use `wrangler <command> --env <environment-name>`, the corresponding environment-specific file will be loaded instead of the `.env` file, so the two files are not merged.
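For example, a project might keep a default file and a per-environment override (the values here are placeholders):

```
# .env — loaded by default
CLOUDFLARE_ACCOUNT_ID=<DEFAULT_ACCOUNT_ID>

# .env.staging — loaded instead by `wrangler deploy --env staging`
CLOUDFLARE_ACCOUNT_ID=<STAGING_ACCOUNT_ID>
```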

Note

During local development, the values in `.env` files are also loaded into the `env` object in your Worker, so you can access them in your Worker code.

For example, if you set `API_HOST="localhost:3000"` in your `.env` file, you can access it in your Worker like this:

JavaScript

```
// During local development, values from your .env file are available
// on the `env` parameter of your Worker's handler:
const apiHost = env.API_HOST; // "localhost:3000"
```

See the [Environment variables and secrets](https://developers.cloudflare.com/workers/development-testing/environment-variables/) page for more information on how to use `.env` files in local development.

## Supported environment variables

Wrangler supports the following environment variables:

* `CLOUDFLARE_ACCOUNT_ID` ` string ` optional  
   * The [account ID](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/) for the Workers related account.
* `CLOUDFLARE_API_TOKEN` ` string ` optional  
   * The [API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) for your Cloudflare account. It can be used for authentication in situations like CI/CD and other automation.
* `CLOUDFLARE_API_KEY` ` string ` optional  
   * The API key for your Cloudflare account, used with `CLOUDFLARE_EMAIL` for the older authentication method.
* `CLOUDFLARE_EMAIL` ` string ` optional  
   * The email address associated with your Cloudflare account, used with `CLOUDFLARE_API_KEY` for the older authentication method.
* `CLOUDFLARE_ACCESS_CLIENT_ID` ` string ` optional  
   * The Client ID of a [Cloudflare Access Service Token](https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/), used to authenticate with Access-protected domains in non-interactive environments such as CI/CD pipelines. Must be set together with `CLOUDFLARE_ACCESS_CLIENT_SECRET`. When both variables are set, Wrangler authenticates using the service token instead of launching `cloudflared access login`.
* `CLOUDFLARE_ACCESS_CLIENT_SECRET` ` string ` optional  
   * The Client Secret of a [Cloudflare Access Service Token](https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/), used together with `CLOUDFLARE_ACCESS_CLIENT_ID` to authenticate with Access-protected domains in non-interactive environments.
* `CLOUDFLARE_ENV` ` string ` optional  
   * The [environment](https://developers.cloudflare.com/workers/wrangler/environments/) to use for Wrangler commands. This allows you to select an environment without using the `--env` flag. For example, `CLOUDFLARE_ENV=production wrangler deploy` will deploy to the `production` environment. The `--env` command line argument takes precedence over this environment variable.
* `WRANGLER_SEND_METRICS` ` string ` optional  
   * Options for this are `true` and `false`. Defaults to `true`. Controls whether Wrangler can send anonymous usage data to Cloudflare for this project. You can learn more about this in our [data policy ↗](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler/telemetry.md).
* `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME>` ` string ` optional  
   * The [local connection string](https://developers.cloudflare.com/hyperdrive/configuration/local-development/) for your database to use in local development with [Hyperdrive](https://developers.cloudflare.com/hyperdrive/). For example, if the binding for your Hyperdrive is named `PROD_DB`, this would be `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_PROD_DB="postgres://user:password@127.0.0.1:5432/testdb"`. Each Hyperdrive is uniquely distinguished by the binding name.
* `CLOUDFLARE_API_BASE_URL` ` string ` optional  
   * The default value is `"https://api.cloudflare.com/client/v4"`.
* `WRANGLER_LOG` ` string ` optional  
   * Options for logging levels are `"none"`, `"error"`, `"warn"`, `"info"`, `"log"` and `"debug"`. Levels are case-insensitive and default to `"log"`. If an invalid level is specified, Wrangler will fall back to the default. Logs can include requests to Cloudflare's API, any usage data being collected, and more verbose error logs.
* `WRANGLER_LOG_PATH` ` string ` optional  
   * A file or directory path where Wrangler will write debug logs. If the path ends in `.log`, Wrangler will consider this the path to a file where all logs will be written. Otherwise, Wrangler will treat the path as a directory where it will write one or more log files using a timestamp for the filenames.
* `FORCE_COLOR` ` string ` optional  
   * Set this to `0` (for example, `FORCE_COLOR=0`) to disable Wrangler's colorized output, which can make the output easier to read with some terminal setups.
* `WRANGLER_HTTPS_KEY_PATH` ` string ` optional  
   * Path to a custom HTTPS certificate key when running `wrangler dev`, to be used with `WRANGLER_HTTPS_CERT_PATH`.
* `WRANGLER_HTTPS_CERT_PATH` ` string ` optional  
   * Path to a custom HTTPS certificate when running `wrangler dev`, to be used with `WRANGLER_HTTPS_KEY_PATH`.
* `DOCKER_HOST` ` string ` optional  
   * Used for local development of [Containers](https://developers.cloudflare.com/containers/local-dev). Wrangler will attempt to automatically find the correct socket to use to communicate with your container engine. If that does not work (usually surfacing as an `internal error` when attempting to connect to your Container), you can try setting the socket path using this environment variable.
* `WRANGLER_R2_SQL_AUTH_TOKEN` ` string ` optional  
   * API token used for executing queries with [R2 SQL](https://developers.cloudflare.com/r2-sql).
* `WRANGLER_OUTPUT_FILE_PATH` ` string ` optional  
   * Specifies a file path where Wrangler will write output data in [ND-JSON ↗](https://github.com/ndjson/ndjson-spec) (newline-delimited JSON) format. Each line in the file is a separate JSON object containing information about Wrangler operations such as deployments, version uploads, and errors. This is useful for CI/CD pipelines and automation tools that need to programmatically access deployment information. If both `WRANGLER_OUTPUT_FILE_PATH` and `WRANGLER_OUTPUT_FILE_DIRECTORY` are set, `WRANGLER_OUTPUT_FILE_PATH` takes precedence.
* `WRANGLER_OUTPUT_FILE_DIRECTORY` ` string ` optional  
   * Specifies a directory where Wrangler will create a randomly-named file (format: `wrangler-output-<timestamp>-<random>.json`) to write output data in [ND-JSON ↗](https://github.com/ndjson/ndjson-spec) format. This is useful when you want to keep output files organized in a specific directory but do not need to control the exact filename. If both `WRANGLER_OUTPUT_FILE_PATH` and `WRANGLER_OUTPUT_FILE_DIRECTORY` are set, `WRANGLER_OUTPUT_FILE_PATH` takes precedence.

### Example output file

When these environment variables are set, Wrangler writes one JSON object per line to the output file. Each entry includes a `timestamp` field and a `type` field indicating the kind of operation. Here is an example of what the file might contain after running `wrangler deploy`:

```
{"type":"wrangler-session","version":1,"wrangler_version":"3.78.0","command_line_args":["deploy"],"log_file_path":"/path/to/logs/wrangler-2024-11-03_12-00-00_abc.log","timestamp":"2024-11-03T12:00:00.000Z"}
{"type":"deploy","version":1,"worker_name":"my-worker","worker_tag":"abc123def456","version_id":"v1-abc123","targets":["https://my-worker.example.workers.dev"],"worker_name_overridden":false,"wrangler_environment":"production","timestamp":"2024-11-03T12:00:05.000Z"}
```

The `wrangler-session` entry is written when Wrangler starts and contains information about the command being run. The `deploy` entry is written when a deployment completes successfully and includes the worker name, version ID, and deployment URLs.

Other entry types include:

* `version-upload` \- Written by `wrangler versions upload` with version ID and preview URLs
* `version-deploy` \- Written by `wrangler versions deploy` with deployment information
* `pages-deploy` \- Written by `wrangler pages deploy` with Pages deployment details
* `command-failed` \- Written when a command fails, including error code and message
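Because each line is a standalone JSON object, the output file is straightforward to consume from a script. The following is a minimal Node.js sketch (the function name is ours; the entry shapes follow the example above):

```javascript
// Parse Wrangler ND-JSON output and extract the deployment targets.
function parseWranglerOutput(ndjson) {
  // Each non-empty line is one JSON object describing a Wrangler event.
  const entries = ndjson
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
  const deploy = entries.find((entry) => entry.type === "deploy");
  return deploy ? deploy.targets : [];
}

// In CI, you would read the file Wrangler wrote, for example:
//   const fs = require("node:fs");
//   const targets = parseWranglerOutput(
//     fs.readFileSync(process.env.WRANGLER_OUTPUT_FILE_PATH, "utf8"),
//   );
```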

## Example `.env` file

The following is an example `.env` file:

Terminal window

```
CLOUDFLARE_ACCOUNT_ID=<YOUR_ACCOUNT_ID_VALUE>
CLOUDFLARE_API_TOKEN=<YOUR_API_TOKEN_VALUE>
CLOUDFLARE_EMAIL=<YOUR_EMAIL>
WRANGLER_SEND_METRICS=true
CLOUDFLARE_API_BASE_URL=https://api.cloudflare.com/client/v4
WRANGLER_LOG=debug
WRANGLER_LOG_PATH=../Desktop/my-logs/my-log-file.log
WRANGLER_R2_SQL_AUTH_TOKEN=<YOUR_R2_API_TOKEN_VALUE>
```

## Deprecated global variables

The following variables are deprecated. Use the variables listed above instead to avoid issues or deprecation warnings.

* `CF_ACCOUNT_ID`
* `CF_API_TOKEN`
* `CF_API_KEY`
* `CF_EMAIL`
* `CF_API_BASE_URL`

