---
title: Reference Architectures
description: Whether you know Cloudflare well or are just starting out, these documents help you understand how our connectivity cloud is architected and how its services can be integrated with your own infrastructure. Read How to use to understand how the documentation is structured, then either navigate by type from the menu or find by solution area.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Reference Architectures

![Hero image](https://developers.cloudflare.com/_astro/reference-architecture-hero.Eeeva8Wz_1QTsOE.svg) 

The documents in this section help you understand how Cloudflare and its products are designed and architected, and describe how you can leverage our platform to create solutions based on your business needs.

Whether you know Cloudflare well or are just starting out, these documents help you understand how our connectivity cloud is architected and how its services can be integrated with your own infrastructure. Read [How to use](https://developers.cloudflare.com/reference-architecture/how-to-use/) to understand how the documentation is structured, then either navigate by type from the menu or [find by solution](https://developers.cloudflare.com/reference-architecture/by-solution/) area.

* [ How to use ](https://developers.cloudflare.com/reference-architecture/how-to-use/)
* [ Find by solution ](https://developers.cloudflare.com/reference-architecture/by-solution/)
* [ Reference Architectures ](https://developers.cloudflare.com/reference-architecture/architectures/)
* [ Reference Architecture Diagrams ](https://developers.cloudflare.com/reference-architecture/diagrams/)
* [ Design Guides ](https://developers.cloudflare.com/reference-architecture/design-guides/)
* [ Implementation Guides ](https://developers.cloudflare.com/reference-architecture/implementation-guides/)

---

## More resources

[Cloudflare blog](https://blog.cloudflare.com/) 

Read articles and announcements about the latest Cloudflare products and features.

[Learning Paths](https://developers.cloudflare.com/learning-paths/) 

Module-based guidance on Cloudflare product workflows.


---

---
title: How to use
description: The different types of architecture content are described below. Information is organized from high-level reference architectures, to design guides with best practices and guidelines, to implementation guides which provide detailed steps to deploy a specific solution.
image: https://developers.cloudflare.com/core-services-preview.png
---


# How to use

The reference architecture content in our documentation is designed to help you understand how Cloudflare has been designed and built, and how our products and services integrate with your current IT architecture.

The different types of architecture content are described below. Information is organized from high-level reference architectures, to design guides with best practices and guidelines, to implementation guides which provide detailed steps to deploy a specific solution.

## Reference architectures

[Reference architectures](https://developers.cloudflare.com/reference-architecture/architectures/) provide foundational knowledge of the Cloudflare platform and products while describing how they relate to your existing infrastructure and business challenges. They are high-level, conceptual documents that walk through an area of our platform, mapping our network, products, and features to the typical architecture of a customer's environment. Detailed diagrams with supporting content explain how our technology works and how it can be integrated with your own infrastructure. The goals of a reference architecture are to:

* Present thought leadership for a broad technology area
* Visualize the architecture of Cloudflare and understand how it's been designed
* Explain integration points between Cloudflare and your infrastructure

## Reference architecture diagrams

A [reference architecture diagram](https://developers.cloudflare.com/reference-architecture/diagrams/) focuses on a specific solution or use case where Cloudflare can be used. One or more diagrams form the primary content, with a supporting introduction and summary. These documents can also expand on sections of a reference architecture that are not fully developed there. The goals of this type of document are to:

* Visualize the components of a specific solution's architecture
* Provide a quick answer to a specific question around a use case

## Design guides

These [guides](https://developers.cloudflare.com/reference-architecture/design-guides/) are typically aimed at architects, developers, and IT professionals who are tasked with designing and deploying systems that leverage Cloudflare's technologies. They usually focus on a specific solution that is a subset of the greater architecture. For example, if you have read our [SASE Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/), but are a startup, you may want to understand the details of using a [SASE approach for a small startup](https://developers.cloudflare.com/reference-architecture/design-guides/zero-trust-for-startups/). These documents are:

* Intended to help you think through how to design a deployment of Cloudflare as part of an overall solution.
* More prescriptive than reference architectures, sharing best practices and guidelines.
* Focused on a solution design that you are trying to achieve, such as connecting private networks to Cloudflare, or using a web application firewall to secure a public website.
* Not a replacement for product documentation and do not describe specific product configuration or commands to run.

## Implementation guides

Implementation guides provide [step-by-step instructions](https://developers.cloudflare.com/reference-architecture/implementation-guides/) and practical guidance for deploying and configuring specific solutions or services. Each implementation guide is focused on a specific implementation goal. While a design guide provides the overall best practices for designing a solution, an implementation guide details the actual steps to deploy in the context of a specific job-to-be-done. These documents are:

* Focused on a specific implementation outcome, such as connecting a remote office using the Cloudflare One Appliance.
* Prescriptive, providing the exact commands and configuration steps to take.


---

---
title: Find by solution
description: Use the list below for reference architecture documentation that relates to a solution area you are interested in.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Find by solution

Use the list below for reference architecture documentation that relates to a solution area you are interested in.

### Cloudflare Connectivity Cloud

Content that pertains to the Cloudflare platform in general.

#### Reference architectures

* [Cloudflare security reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/security/)
* [Multi-vendor Application Security and Performance Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/multi-vendor/)
* [Protect network infrastructure with Magic Transit](https://developers.cloudflare.com/reference-architecture/architectures/magic-transit/)
* [Protect Hybrid Cloud Networks with Cloudflare Magic Transit](https://developers.cloudflare.com/reference-architecture/diagrams/network/protect-hybrid-cloud-networks-with-cloudflare-magic-transit/)

#### Reference architecture diagrams

* [Protecting ISP and telecommunications networks from DDoS attacks](https://developers.cloudflare.com/reference-architecture/diagrams/network/protecting-sp-networks-from-ddos/)

#### Design guides

* [Extend Cloudflare's Benefits to SaaS Providers' End-Customers](https://developers.cloudflare.com/reference-architecture/design-guides/extending-cloudflares-benefits-to-saas-providers-end-customers/)

### Zero Trust / SASE

Architecture documentation related to using Cloudflare for Zero Trust, SSE and SASE initiatives for protecting your applications, data, employees and the corporate network.

#### Reference architectures

* [Evolving to a SASE architecture with Cloudflare](https://developers.cloudflare.com/reference-architecture/architectures/sase/)
* [Using Cloudflare SASE with Microsoft](https://developers.cloudflare.com/reference-architecture/architectures/cloudflare-sase-with-microsoft/)

#### Reference architecture diagrams

* [Access to private apps without having to deploy client agents](https://developers.cloudflare.com/reference-architecture/diagrams/sase/sase-clientless-access-private-dns/)
* [Securing data at rest](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-at-rest/)
* [Securing data in transit](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-transit/)
* [Securing data in use](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-use/)
* [Extend ZTNA with external authorization and serverless computing](https://developers.cloudflare.com/reference-architecture/diagrams/sase/augment-access-with-serverless/)
* [DNS filtering solution for Internet service providers](https://developers.cloudflare.com/reference-architecture/diagrams/sase/gateway-dns-for-isp/)
* [Cloudflare One Appliance deployment options](https://developers.cloudflare.com/reference-architecture/diagrams/sase/cloudflare-one-appliance-deployment/)
* [Deploy self-hosted VoIP services for hybrid users](https://developers.cloudflare.com/reference-architecture/diagrams/sase/deploying-self-hosted-voip-services-for-hybrid-users/)

#### Design guides

* [Designing ZTNA access policies for Cloudflare Access](https://developers.cloudflare.com/reference-architecture/design-guides/designing-ztna-access-policies/)
* [Building zero trust architecture into your startup](https://developers.cloudflare.com/reference-architecture/design-guides/zero-trust-for-startups/)
* [Network-focused migration from VPN concentrators to Zero Trust Network Access](https://developers.cloudflare.com/reference-architecture/design-guides/network-vpn-migration/)
* [Using a zero trust framework to secure SaaS applications](https://developers.cloudflare.com/reference-architecture/design-guides/zero-trust-for-saas/)

#### Implementation guides

* [Secure your Internet traffic and SaaS apps](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/concepts/)
* [Replace your VPN](https://developers.cloudflare.com/learning-paths/replace-vpn/concepts/)
* [Deploy clientless access](https://developers.cloudflare.com/learning-paths/clientless-access/concepts/)
* [Secure your email with Email security](https://developers.cloudflare.com/learning-paths/secure-your-email/concepts/)

### Networking

#### Reference architecture diagrams

* [Protect public networks with Cloudflare](https://developers.cloudflare.com/reference-architecture/diagrams/network/protect-public-networks-with-cloudflare/)
* [Bring your own IP space to Cloudflare](https://developers.cloudflare.com/reference-architecture/diagrams/network/bring-your-own-ip-space-to-cloudflare/)
* [Protect hybrid cloud networks with Cloudflare Magic Transit](https://developers.cloudflare.com/reference-architecture/diagrams/network/protect-hybrid-cloud-networks-with-cloudflare-magic-transit/)
* [Protect ISP and telecommunications networks from DDoS attacks](https://developers.cloudflare.com/reference-architecture/diagrams/network/protecting-sp-networks-from-ddos/)

### Application Performance

Content related to DNS, caching, load balancing and other Cloudflare services designed to improve application reliability and performance.

#### Reference architectures

* [Content Delivery Network](https://developers.cloudflare.com/reference-architecture/architectures/cdn/)
* [Load Balancing](https://developers.cloudflare.com/reference-architecture/architectures/load-balancing/)

#### Reference architecture diagrams

* [Designing a distributed web performance architecture](https://developers.cloudflare.com/reference-architecture/diagrams/content-delivery/distributed-web-performance-architecture/)

### Application Security

Content related to protecting your applications from threats such as DDoS attack, SQL injection, exploiting application vulnerabilities, scraping API data and more.

#### Reference architecture diagrams

* [Bot management](https://developers.cloudflare.com/reference-architecture/diagrams/bots/bot-management/)

#### Design guides

* [Secure application delivery](https://developers.cloudflare.com/reference-architecture/design-guides/secure-application-delivery/)

#### Implementation guides

* [Use mTLS with Cloudflare protected resources](https://developers.cloudflare.com/learning-paths/mtls/concepts/)

### Developer Platform

Architecture content for our developer platform.

#### Reference architecture diagrams

##### AI

* [Automatic captioning for video uploads](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-video-caption/)
* [Composable AI architecture](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-composable/)
* [Content-based asset creation](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-asset-creation/)
* [Multi-vendor AI observability and control](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-multivendor-observability-control/)
* [Retrieval Augmented Generation (RAG)](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/)
* [Ingesting BigQuery Data into Workers AI](https://developers.cloudflare.com/reference-architecture/diagrams/ai/bigquery-workers-ai/)

##### Serverless

* [Optimizing Image Delivery with Cloudflare Image Resizing and R2](https://developers.cloudflare.com/reference-architecture/diagrams/content-delivery/optimizing-image-delivery-with-cloudflare-image-resizing-and-r2/)
* [A/B-testing using Workers](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/a-b-testing-using-workers/)
* [Fullstack Applications](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/fullstack-application/)
* [Serverless ETL pipelines](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-etl/)
* [Serverless global APIs](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-global-apis/)
* [Serverless image content management](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-image-content-management/)
* [Programmable Platforms](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/programmable-platforms/)

##### Storage

* [Egress-free object storage in multi-cloud setups](https://developers.cloudflare.com/reference-architecture/diagrams/storage/egress-free-storage-multi-cloud/)
* [On-demand Object Storage Data Migration](https://developers.cloudflare.com/reference-architecture/diagrams/storage/on-demand-object-storage-migration/)
* [Event notifications for storage](https://developers.cloudflare.com/reference-architecture/diagrams/storage/event-notifications-for-storage/)
* [Storing User Generated Content](https://developers.cloudflare.com/reference-architecture/diagrams/storage/storing-user-generated-content/)
* [Control and data plane architectural pattern for Durable Objects](https://developers.cloudflare.com/reference-architecture/diagrams/storage/durable-object-control-data-plane-pattern/)


---

---
title: Implementation Guides
description: Implementation guides provide step-by-step instructions and practical guidance for how to effectively deploy and configure specific solutions or services. Implementation guides are focused on a specific implementation goal.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Implementation Guides

Implementation guides provide [step-by-step instructions](https://developers.cloudflare.com/reference-architecture/implementation-guides/) and practical guidance for how to effectively deploy and configure specific solutions or services. Implementation guides are focused on a specific implementation goal.

## Zero Trust

* [Secure your Internet traffic and SaaS apps](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/concepts/)
* [Replace your VPN](https://developers.cloudflare.com/learning-paths/replace-vpn/concepts/)
* [Deploy Zero Trust Web Access](https://developers.cloudflare.com/learning-paths/clientless-access/concepts/)
* [Secure your email with Email security](https://developers.cloudflare.com/learning-paths/secure-your-email/concepts/)

## Application Security

* [Use mTLS with Cloudflare protected resources](https://developers.cloudflare.com/learning-paths/mtls/concepts/)


---

---
title: Use mTLS with Cloudflare protected resources
image: https://developers.cloudflare.com/core-services-preview.png
---


# Use mTLS with Cloudflare protected resources


---

---
title: Zero Trust
description: Zero Trust implementation guides walk you through the steps to deploy a Zero Trust solution with Cloudflare.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Zero Trust

Zero Trust implementation guides walk you through the steps to deploy a Zero Trust solution with Cloudflare.

## Zero Trust

* [Secure your Internet traffic and SaaS apps](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/concepts/)
* [Replace your VPN](https://developers.cloudflare.com/learning-paths/replace-vpn/concepts/)
* [Deploy Zero Trust Web Access](https://developers.cloudflare.com/learning-paths/clientless-access/concepts/)
* [Secure your email with Email security](https://developers.cloudflare.com/learning-paths/secure-your-email/concepts/)


---

---
title: Holistic AI Security with Cloudflare One
image: https://developers.cloudflare.com/core-services-preview.png
---


# Holistic AI Security with Cloudflare One


---

---
title: Replace your VPN
image: https://developers.cloudflare.com/core-services-preview.png
---


# Replace your VPN


---

---
title: Secure your Internet traffic and SaaS apps
image: https://developers.cloudflare.com/core-services-preview.png
---


# Secure your Internet traffic and SaaS apps


---

---
title: Secure your email with Email security
image: https://developers.cloudflare.com/core-services-preview.png
---


# Secure your email with Email security


---

---
title: Deploy clientless access
image: https://developers.cloudflare.com/core-services-preview.png
---


# Deploy clientless access


---

---
title: AI Security for Apps Reference Architecture
description: This article highlights how Cloudflare's AI Security for Apps complements Cloudflare WAF by providing an AI protection layer for detecting and mitigating threats to AI-powered applications.
image: https://developers.cloudflare.com/core-services-preview.png
---


# AI Security for Apps Reference Architecture

**Last reviewed:**  28 days ago 

## Abstract

The purpose of this document is to highlight how Cloudflare's AI Security for Apps complements Cloudflare WAF by providing an AI protection layer for detecting and mitigating threats to AI-powered applications. Additionally, use cases, specific AI threats, and architecture are discussed.

### Who is this document for and what will you learn?

This document is designed for IT and security professionals who want to understand the need for AI security and how to protect their AI-powered applications. It highlights how Cloudflare's AI Security for Apps complements Cloudflare WAF by providing an AI security layer for detecting and mitigating threats to AI-powered applications. Additionally, use cases, specific AI threats, and the architecture and traffic flow are discussed. It is aimed primarily at Chief Information Security Officers (CSO/CISO) and their direct teams who are responsible for the overall web application security program at their organizations.

This document is specific to security for AI-powered applications. For a deeper understanding of Cloudflare's overall architecture and breadth of Application Performance and Security services, Network Services, Zero Trust / SASE, and Developer Services, refer to the [Architecture Center](https://developers.cloudflare.com/reference-architecture/).

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)
* Ebook: [How Cloudflare strengthens security everywhere you do business ↗](https://cf-assets.www.cloudflare.com/slt3lc6tev37/is7XGR7xZ8CqW0l9EyHZR/1b4311823f602f72036385a66fb96e8c/Everywhere%5FSecurity-Cloudflare-strengthens-security-everywhere-you%5Fdo-business.pdf) (10 minute read)
* For an understanding of Cloudflare's underlying security architecture and base services, refer to the [Cloudflare Security Architecture](https://developers.cloudflare.com/reference-architecture/architectures/security/)
* [AI Security for Apps product web page ↗](https://cfl.re/4b24QX5)
* For a video walkthrough of AI Security for Apps and a demo, refer to [Cloudflare AI Security Suite: Protect AI-powered apps with AI Security for Apps ↗](https://www.youtube.com/watch?v=LoGaySHVGu8) (16 minutes)

## Introduction

AI is accelerating innovation across a broad range of industries. Rapid innovation often raises new, sometimes overlooked, security challenges: security is frequently an afterthought and attack surfaces aren't fully understood. In this environment, users may intentionally or inadvertently expose vulnerabilities, issues, or confidential information, opening enterprises to harmful consequences and legal liability.

![Banner image for AI security](https://developers.cloudflare.com/_astro/banner-ai-security.DahM0Djk_Z8QbOF.webp) 

For example, applications using AI are probabilistic in nature, whereas traditional applications are largely deterministic. You can't write a regex to identify and block a prompt injection attack: users can phrase the attack in too many ways, and the model can respond unpredictably. Instead, AI models must be secured by other LLMs that can fully understand the context and intent of interactions and provide mitigations accordingly. If appropriate security measures are not taken, enterprises can be exposed to new vulnerabilities, threats, reputational issues, and even legal liability.

With the Cloudflare AI Security Suite, Cloudflare offers a comprehensive solution for enterprise AI security needs, whether securing your workforce's use of generative AI, governing AI agents, protecting AI-powered applications, or building securely with AI.

![Diagram showing Cloudflare's holistic approach to AI security](https://developers.cloudflare.com/_astro/fig01-holistic-approach.CRWUmyjU_1XTkNl.webp "Figure 1: Cloudflare provides a holistic approach to AI security")

Figure 1: Cloudflare provides a holistic approach to AI security

Enterprises need to protect their employees and customers from AI-specific threats; these threats can arise from human-to-AI interactions or from AI access to corporate and third-party resources. To implement a unified policy layer, it's important for customers to choose a vendor that provides a holistic AI security solution. This also lets organizations benefit from operational simplicity and cross-product innovation.

![Diagram showing the different components of Cloudflare AI Security Suite and how they interact](https://developers.cloudflare.com/_astro/fig02-ai-security-suite.CB_2jHa6_Zzcn12.webp "Figure 2: Cloudflare AI Security Suite provides robust solutions for public and private apps")

Figure 2: Cloudflare AI Security Suite provides robust solutions for public and private apps

Cloudflare offers a layered security detection and mitigation approach across its security products, including WAF. AI Security for Apps complements WAF by adding another security threat detection and mitigation layer specific to AI threats.

AI Security for Apps can help protect your services powered by large language models (LLMs) against abuse. This model-agnostic capability currently detects and mitigates multiple AI threats, such as PII exposure, unsafe topics, prompt injection, and jailbreak attempts.

AI Security for Apps provides three main functions, as highlighted in Figure 3: LLM Discovery, visibility, and protection and mitigation.

![The main functions of Cloudflare AI Security for Apps: LLM discovery, visibility, and protection and mitigation](https://developers.cloudflare.com/_astro/fig03-ai-sec-main-functions.CzSw3EBn_Z1gVnRU.webp "Figure 3: Cloudflare AI Security for Apps protects applications and agents powered by LLMs")

Figure 3: Cloudflare AI Security for Apps protects applications and agents powered by LLMs

Since [Cloudflare also runs AI inference across its network ↗](https://workers.cloudflare.com/product/workers-ai/) and can reach about 95% of the world's population within approximately 50 ms, having AI security deployed so close to both the model and the end user allows Cloudflare to identify attacks early and protect both end users and customer models from abuse and attacks.

![Request flow diagram showing how Cloudflare AI Security for Apps protects applications from AI security threats](https://developers.cloudflare.com/_astro/fig04-ai-security-inline.D6ZT9o0K_Z1N40XT.webp "Figure 4: Cloudflare AI Security for Apps sits inline to protect applications from AI security threats")

Figure 4: Cloudflare AI Security for Apps sits inline to protect applications from AI security threats

## Definitions

* **Deep learning:** machine learning that uses artificial neural networks to learn from data similar to the way humans learn
* **LLMs (Large Language Models):** AI models designed for a specific purpose like understanding and generating data sets; typically use a massive amount of data for deep learning
* **LLM or AI Discovery:** automated process of discovering LLM or AI endpoints
* **Generative AI:** AI that creates new content from deep learning based on existing data
* **AI Inference:** operational stage of AI where a trained model applies its knowledge

## AI Security for Apps Diagram and Traffic Flow

AI Security for Apps leverages [Cloudflare's reverse proxy architecture](https://developers.cloudflare.com/reference-architecture/architectures/security/) and sits inline with all of the other Cloudflare application performance and security capabilities. AI Security for Apps is app location and AI model agnostic. It complements WAF by adding AI-specific threat detection and mitigation capabilities that protect AI-powered applications and APIs using large language models (LLMs). For example, generative AI applications require this type of AI-specific security. Applications and LLMs can sit in Cloudflare, a third-party cloud, or on-premises.

![Diagram showing the flow of requests protected by Cloudflare AI Security for Apps, which is AI model agnostic](https://developers.cloudflare.com/_astro/fig05-ai-security-model-agnostic.A9Bh93co_Z2pHIhA.webp "Figure 5: Cloudflare AI Security for Apps sits inline and is app location and AI model agnostic")

Figure 5: Cloudflare AI Security for Apps sits inline and is app location and AI model agnostic

This has several benefits:

* **Operational simplicity:** users continue with the same operational model they already use for creating WAF policies. There are no new constructs, operations, or dashboards to learn.
* **Single unified security policy dashboard:** all security policies follow the same operational model and can be updated and applied in one place.
* **Layered security:** because AI Security for Apps is inline with all other performance and security products, customers can reap the benefits of layered security across products, leveraging the power of the entire Cloudflare platform for a complete end-to-end security posture for all apps and APIs.
* **Cross-product innovation:** customers benefit from cross-product innovation and integration, such as automatic LLM Discovery via API Security capabilities.

![Diagram showing how Cloudflare secures and processes AI-specific traffic](https://developers.cloudflare.com/_astro/fig06-secure-ai-traffic.D4Nouiea_M8FoW.webp "Figure 6: How Cloudflare secures and processes AI-specific traffic")

Figure 6: How Cloudflare secures and processes AI-specific traffic

1. The client request is sent to the closest Cloudflare data center via anycast, ensuring low latency. Via LLM Discovery, Cloudflare detects LLM or AI traffic by looking at LLM-specific heuristics. Discovered LLM endpoints are automatically labeled with the `cf-llm` label.
2. Cloudflare AI-specific threat detections, such as PII exposure and unsafe content, run on all traffic to LLM-specific endpoints regardless of whether any security policies are in place. These analytics are viewable in **Security Analytics**, and suspicious activity is also bubbled up in **Security Overview**.
3. Any mitigation policies configured by the user are automatically applied to all discovered LLM endpoints. If desired, users can be selective about where to enforce the security policies based on many different request attributes and headers.
4. Sensitive data protection can log sensitive data in the response. Enforcing AI-specific security policies on incoming traffic protects the model from learning PII or unsafe topics, which in turn prevents future PII exposure.
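The four steps above can be sketched conceptually as follows. This is a minimal, illustrative model of the flow, not a Cloudflare API; all class and function names here are hypothetical:

```python
from dataclasses import dataclass, field

# Conceptual, runnable sketch of the four-step flow described above.
# All names are illustrative; this is not Cloudflare code.

@dataclass
class Endpoint:
    path: str
    labels: set = field(default_factory=set)

def discover(endpoint: Endpoint, looks_like_llm: bool) -> Endpoint:
    # Step 1: LLM Discovery auto-labels detected endpoints with cf-llm.
    if looks_like_llm:
        endpoint.labels.add("cf-llm")
    return endpoint

def run_detections(prompt: str) -> dict:
    # Step 2: crude stand-ins for the real model-based detections,
    # which always run on traffic to cf-llm endpoints.
    return {
        "pii_detected": "@" in prompt,               # toy email heuristic
        "unsafe_topic_detected": "weapon" in prompt.lower(),
    }

def mitigate(detections: dict, policies: dict) -> str:
    # Step 3: apply any user-configured mitigation policies.
    for signal, action in policies.items():
        if detections.get(signal):
            return action
    return "allow"  # Step 4: forward to the origin/model

ep = discover(Endpoint("/v1/chat"), looks_like_llm=True)
policies = {"pii_detected": "block"}
print(ep.labels)                                                   # {'cf-llm'}
print(mitigate(run_detections("my email is a@b.com"), policies))   # block
```

The real detections are LLM-based and far more nuanced; the point of the sketch is only the ordering: discovery, always-on detection, policy-driven mitigation, then forwarding.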

## AI Security for Apps Architecture

AI Security for Apps architecture provides security without sacrificing performance. [All AI threat detections run in parallel, leveraging LLM models specific to the threat being detected ↗](https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/); this architecture allows additional AI detections to be added without a significant impact on latency, since the detections run in parallel instead of sequentially. [Cloudflare leverages its own AI inference as a service, Workers AI, for this capability ↗](https://www.cloudflare.com/developer-platform/products/workers-ai/), ensuring maximum performance and security.

Cloudflare's reverse proxy architecture leveraging anycast, its inline security approach, and parallel processing via AI-specific threat models all lead to maximum performance compared to other solutions, which rely on third-party components or are architected around AI security wrappers and hairpinning.

![Diagram showing the parallel execution of multiple threat detections at Cloudflare](https://developers.cloudflare.com/_astro/fig07-parallel-execution._dDJtw5N_Z1ER1Tg.webp "Figure 7: Cloudflare AI threat detections run in parallel for maximum performance")

Figure 7: Cloudflare AI threat detections run in parallel for maximum performance

## LLM Discovery

Cloudflare conducts heuristic checks to identify LLM traffic and respective endpoints.

* LLM-specific heuristics are used.
* Known false positives (from analysis of millions of requests) are filtered out.

For example, LLM endpoints mostly need more than 1 second to respond, while the majority of other endpoints take less than 1 second. We know that [80% of LLM endpoints have an effective bitrate operating at slower than 4 KB/s ↗](https://blog.cloudflare.com/take-control-of-public-ai-application-security-with-cloudflare-firewall-for-ai/).

Based on traffic data across Cloudflare's global network, we know there are other traffic patterns that can also operate at this bitrate, and we filter these false positives out. Examples include: GraphQL endpoints; device heartbeat or health check endpoints; and generators (for QR codes, one-time passwords, invoices, and so on).
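A minimal sketch of the bitrate heuristic described above, assuming the thresholds cited (response time over 1 second, effective bitrate under 4 KB/s); the sample byte counts and durations are hypothetical, and the real discovery pipeline layers false-positive filtering on top of this:

```python
def effective_bitrate_kbps(response_bytes: int, duration_s: float) -> float:
    """Effective bitrate in KB/s: bytes transferred over response duration."""
    return (response_bytes / 1024) / duration_s

def looks_like_llm_endpoint(response_bytes: int, duration_s: float) -> bool:
    # Heuristic from the text: most LLM endpoints take more than 1 s to
    # respond and operate at an effective bitrate slower than 4 KB/s.
    return duration_s > 1.0 and effective_bitrate_kbps(response_bytes, duration_s) < 4.0

# Hypothetical samples: a streamed LLM completion vs. a static asset.
print(looks_like_llm_endpoint(6_000, 3.2))    # slow, low-bitrate response: True
print(looks_like_llm_endpoint(250_000, 0.4))  # fast static response: False
```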

![Chart showing the low bitrate of most LLM traffic](https://developers.cloudflare.com/_astro/fig08-llm-traffic-bitrate.BwbPxtWw_1apWg0.webp "Figure 8: LLM traffic has a bitrate of less than 4 KB/s")

Figure 8: LLM traffic has a bitrate of less than 4 KB/s

Once LLM endpoints are identified, Cloudflare API security capabilities automatically label the endpoints with a `cf-llm` label; this allows for easy filtering in analytics and for easily applying security policies to all LLM endpoints.

![Diagram outlining the LLM discovery process](https://developers.cloudflare.com/_astro/fig09-llm-discovery.XknsQk_Q_1r3NwJ.webp "Figure 9: Cloudflare AI Security for Apps LLM Discovery")

Figure 9: Cloudflare AI Security for Apps LLM Discovery

The diagram below highlights the overall LLM discovery and AI threat mitigation flow. Once LLM endpoints are discovered, detections automatically run on those endpoints. Mitigation is done by creating a WAF security policy with the AI-specific context and fields AI Security for Apps provides.

![LLM discovery and AI threat mitigation at Cloudflare with API Shield, WAF, and AI Security for Apps](https://developers.cloudflare.com/_astro/fig10-ai-threat-mitigation.CUA53ZFB_Yp5l8.webp "Figure 10: Cloudflare AI Security for Apps LLM discovery and AI threat mitigation")

Figure 10: Cloudflare AI Security for Apps LLM discovery and AI threat mitigation

### LLM Prompt Detection

Cloudflare analyzes incoming requests for specific patterns, detecting and extracting LLM prompts within the request body. Currently, the detection only handles requests with a JSON content type (`application/json`). Cloudflare will populate the existing [Security for AI Apps fields ↗](https://cfl.re/435SvOO) based on the scan results. You can see these results in the **Security Analytics** dashboard by filtering on the `cf-llm` managed endpoint label and reviewing the detection results on your traffic.

Additionally, the respective populated fields can be used in security rule expressions (custom rules and rate limiting rules) to protect your application against AI-specific threats like PII exposure.
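As a hedged sketch of how such security rules might be assembled, the snippet below builds a custom-rules payload using the documented `cf.llm.prompt.pii_detected` and `cf.llm.prompt.unsafe_topic_detected` fields; the rule descriptions and chosen actions are illustrative choices, not prescribed configuration:

```python
import json

# Illustrative WAF custom rules built from AI Security for Apps fields.
# Descriptions and actions are examples; tune both to your own policy.
rules = [
    {
        "description": "Block LLM prompts with detected PII",
        "action": "block",
        "expression": "(cf.llm.prompt.pii_detected)",
    },
    {
        "description": "Challenge LLM prompts touching unsafe topics",
        "action": "managed_challenge",
        "expression": "(cf.llm.prompt.unsafe_topic_detected)",
    },
]

# Serialize the payload as you would when submitting rules via the API.
payload = json.dumps({"rules": rules}, indent=2)
print(payload)
```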

## AI Security Threat Detections with AI Security for Apps

AI Security for Apps currently provides detection and mitigation for critical AI security threats. The threats AI Security for Apps helps mitigate map to the following risks in the [OWASP Top 10 for LLM Applications ↗](https://genai.owasp.org/llm-top-10/), as shown below.

![Top 3 LLM risks and how AI Security for Apps helps mitigate them](https://developers.cloudflare.com/_astro/fig11-top-llm-risks.BgqEOq3q_Z2k3XzK.webp "Figure 11: AI Security for Apps helps mitigate top LLM risks")

Figure 11: AI Security for Apps helps mitigate top LLM risks

When enabled, the AI security detections run on incoming traffic, searching for any LLM prompts attempting to exploit the model. Security policies can be created via both WAF custom rules and rate limiting rules.

### PII Exposure

Prevent data leaks of personally identifiable information (PII) — for example, phone numbers, email addresses, social security numbers, and credit card numbers.

AI Security for Apps helps prevent PII from being sent in requests, and thereby prevents AI models from being trained on this data, which could expose PII in subsequent responses.

![Example request flow showing PII exposure detection and mitigation](https://developers.cloudflare.com/_astro/fig12-pii-exposure-mitigation.B1E13KkH_1aRQq9.webp "Figure 12: Cloudflare AI Security for Apps - PII exposure detection and mitigation")

Figure 12: Cloudflare AI Security for Apps - PII exposure detection and mitigation

### Unsafe Topics

Detect and moderate unsafe or harmful prompts – for example, prompts potentially related to violent crimes.

AI Security for Apps helps prevent AI models from receiving harmful requests, keeping the model from learning and responding to content that could be deemed harmful and for which enterprises could even be held liable.

![Example request flow showing unsafe topics detection and mitigation](https://developers.cloudflare.com/_astro/fig13-unsafe-topics-detection.BVrdr_a9_1kJTTa.webp "Figure 13: Cloudflare AI Security for Apps - Unsafe topics detection and mitigation")

Figure 13: Cloudflare AI Security for Apps - Unsafe topics detection and mitigation

### Prompt Injection and Jailbreak

Detect prompts intentionally designed to subvert the intended behavior of the LLM as specified by the developer.

AI Security for Apps detects attempts to manipulate, misuse, or elicit unintended outputs. A prompt injection score signifying the likelihood of a prompt injection or jailbreak attempt is given to every request routed to an LLM endpoint. A score of less than 20 signifies a prompt injection attack.
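A minimal sketch of how this threshold could be applied, both locally and in a rule expression. The field name `cf.llm.prompt.injection_score` is an assumption modeled on the other documented `cf.llm.prompt.*` fields; verify it against the current field reference before use:

```python
# Illustrative handling of the prompt injection score described above.
# Lower scores indicate a likely attack; per the text, a score below 20
# signifies a prompt injection or jailbreak attempt.
ATTACK_THRESHOLD = 20

def is_prompt_injection(score: int) -> bool:
    return score < ATTACK_THRESHOLD

# A WAF custom rule expression applying the same threshold.
# The field name here is an assumption, not confirmed documentation.
expression = "(cf.llm.prompt.injection_score lt 20)"

print(is_prompt_injection(7))   # True: low score, likely injection/jailbreak
print(is_prompt_injection(85))  # False: likely benign prompt
```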

![Example request flow showing prompt injection and jailbreak detection and mitigation](https://developers.cloudflare.com/_astro/fig14-prompt-injection.DSrbviyx_25MJzX.webp "Figure 14: Cloudflare AI Security for Apps - Prompt injection and jailbreak detection and mitigation")

Figure 14: Cloudflare AI Security for Apps - Prompt injection and jailbreak detection and mitigation

## Analytics and Prompt Logging

AI Security for Apps provides always-on detection and continuous visibility into all AI security threats via analytics, regardless of whether a security policy is in place. Once an LLM endpoint has been discovered via LLM discovery, all detections run on traffic to that endpoint and any detected attacks are logged. The diagram below demonstrates this.

![Example request flow showing how the always-on detection provides feedback about suspicious activity](https://developers.cloudflare.com/_astro/fig15-always-on-detection.RC40SxpS_ZtJgwK.webp "Figure 15: Cloudflare AI Security for Apps - Always-on detection")

Figure 15: Cloudflare AI Security for Apps - Always-on detection

Suspicious activity is also quickly bubbled up under **Security Overview** and **Security Analytics**, where users can easily review it and take action.

![Security Analytics dashboard showing suspicious activity alerts for AI-specific threats](https://developers.cloudflare.com/_astro/fig16-suspicious-activity-alerts.DRTD9GVg_28RTRy.webp "Figure 16: Suspicious activity alerts for AI-specific threats")

Figure 16: Suspicious activity alerts for AI-specific threats

The powerful analytics capabilities allow users to jump to immediate threats like PII exposure and unsafe topics and within each of these even filter down further based on specific categories within the identified threat. There are categories for both [PII exposure](https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii%5Fcategories/) and [unsafe topics](https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe%5Ftopic%5Fcategories/). For example, below we are filtering the logs with PII detected further based on the specific category of **Credit Card**.

![How to filter logs in the Cloudflare dashboard based on an AI-specific threat - "Credit Card"](https://developers.cloudflare.com/_astro/fig17-filtering-logs.D5DXPRjr_qsGfQ.webp "Figure 17: Filtering logs based on AI-specific threats")

Figure 17: Filtering logs based on AI-specific threats

Within discovered endpoints, under the **Endpoints** tab and within **Security** \> **Web assets**, users can also easily filter on the `cf-llm` label for discovered LLM-specific endpoints as shown below.

Here, the power of the Cloudflare platform and cross-product integration is on full display. Not only are the respective discovered LLM endpoints labeled with `cf-llm`, but [Cloudflare API Security capabilities have also automatically attached managed risk labels](https://developers.cloudflare.com/api-shield/management-and-monitoring/endpoint-labels/) of `cf-risk-missing-auth` and `cf-risk-missing-schema`, signifying identified risks associated with the respective endpoint.

![The Cloudflare dashboard showing an endpoint that was automatically labelled with "cf-llm", "cf-risk-missing-auth", and "cf-risk-missing-schema"](https://developers.cloudflare.com/_astro/fig18-auto-endpoint-labeling.Cdr7M0br_Z2rcUje.webp "Figure 18: LLM discovery and auto labeling of API endpoint security risks")

Figure 18: LLM discovery and auto labeling of API endpoint security risks

Users can also log the exact prompts in requests via prompt logging. Logged request details, including the request body, are easily accessible via **Security Analytics**. In the figure below, notice that only users with the respective private key configured can decrypt and view the payload contents.

![The details of a logged event due to detected PII categories with an encrypted payload](https://developers.cloudflare.com/_astro/fig19-prompt-logging-encrypted.DFJCu81S_Z2caBCW.webp "Figure 19: AI Security for Apps - Prompt logging with payload encrypted")

Figure 19: AI Security for Apps - Prompt logging with payload encrypted

Once decrypted, users can view the exact LLM prompt and even the specific category detected as shown below.

![The details of a logged event due to detected PII categories showing the decrypted payload](https://developers.cloudflare.com/_astro/fig20-prompt-logging-decrypted.aJAN6CqF_vdaMf.webp "Figure 20: AI Security for Apps - Prompt logging with payload decrypted")

Figure 20: AI Security for Apps - Prompt logging with payload decrypted

## Summary

AI is powerful and organizations continue to adopt AI at a rapid pace, but without protections in place, it's risky. Cloudflare provides a layered security approach incorporating AI Security to protect your AI-powered applications.

AI Security for Apps complements WAF, providing the same operational model, and can detect and mitigate threats like PII exposure, unsafe content, and prompt injection/jailbreak. Further, Cloudflare's powerful LLM discovery, analytics, and prompt logging capabilities provide users the deep visibility needed to understand and take appropriate action to secure AI-powered applications.

## Related Resources

* [Cloudflare AI Security for Apps Product Page ↗](https://cfl.re/4b24QX5)
* [Cloudflare Blog: AI Security for Apps ↗](https://cfl.re/ai-sec-apps-blog-ga)
* [Cloudflare Developer Docs: AI Security for Apps ↗](https://cfl.re/435SvOO)
* [Self-guided Product Tour: AI Security for Apps ↗](https://cfl.re/49T8nXg)
* [Video: Cloudflare AI Security Suite: Protect AI-powered apps with AI Security for Apps ↗](https://www.youtube.com/watch?v=LoGaySHVGu8)


---

---
title: Content Delivery Network (CDN) Reference Architecture
description: This reference architecture discusses the traditional challenges customers face with web applications, how the Cloudflare CDN resolves these challenges, and CDN architecture and design.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Content Delivery Network (CDN) Reference Architecture

**Last reviewed:**  over 3 years ago 

## Introduction

Every day, users of the Internet enjoy the benefits of performance and reliability provided by [content delivery networks ↗](https://www.cloudflare.com/learning/cdn/what-is-a-cdn/) (CDNs). CDNs have become a must-have to combat latency and a requirement for any major company delivering content to users on the Internet. While providing performance and reliability for customers, CDNs also enable companies to further secure their applications and cut costs. This document discusses the traditional challenges customers face with web applications, how the Cloudflare CDN resolves these challenges, and CDN architecture and design.

### Who is this document for and what will you learn?

This reference architecture is designed for IT or network professionals with some responsibility over or familiarity with their organization's existing infrastructure. It is useful to have some experience with technologies and concepts important to content delivery, including caching, DNS and firewalls.

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)
* What is a CDN? | [Website ↗](https://www.cloudflare.com/learning/cdn/what-is-a-cdn/) (5 minute read)
* Analyst Report: [Cloudflare named Leader in 2024 GigaOm Radar for Content Delivery Networks ↗](https://www.cloudflare.com/lp/gigaom-radar-cdn/) (20 minute read)

Those who read this reference architecture will learn:

* How Cloudflare CDN can significantly improve the delivery of content to your customers
* How anycast IP routing is important in ensuring reliable CDN performance
* The range of tiered caching options and how to choose the one for your needs

## Traditional challenges deploying web applications

Over the last several years, especially with the advent of the COVID-19 pandemic and the focus on remote work, there has been a significant growth in Internet traffic, further growing the need to efficiently manage network traffic, cut latency, and increase performance.

Companies running their applications in the cloud or on-premises face the challenges of:

1. Implementing solutions to increase performance
2. As demand grows, scaling out their architecture to meet availability and redundancy concerns
3. Securing their environments and applications from growing Internet threats
4. Reining in growing costs related to doing all of the above

With companies serving customers across the globe, the above challenges require a significant undertaking. Traditionally, a website/application is deployed centrally and replicated to another region for availability, or the website/application is deployed across a handful of servers, sometimes across multiple data centers for resiliency.

The servers hosting the websites are called origin servers. When clients access a website, they make a request for resources from the server. Navigating to one website can generate hundreds of requests from the browser for HTML, CSS, images, videos, etc. With versions of HTTP prior to HTTP/2, each of these HTTP requests would also require a new TCP connection.

Enhancements in HTTP/2 and HTTP/3 allow for multiplexing multiple requests to the same server over a single TCP connection, thus saving server resources. However, compute and network resources are still consumed as servers respond to these requests. As more clients access the website, the following can result:

* The origin server starts to become overloaded with requests, impacting availability; companies start looking at scaling out to handle the additional load
* As each request has to make its way to the origin server, performance and user experience are impacted due to latency
* The latency for end users becomes proportional to the distance between the client and origin server, resulting in varying experiences based on client location. This is especially true for traffic to or from certain countries, such as China.
* As origin servers respond to the increasing requests, bandwidth, egress, and compute costs increase drastically
* Even as customers scale out to handle the increased demand in traffic, they are left exposed to both infrastructure-level and application-level distributed denial-of-service (DDoS) attacks

In Figure 1 below, there is no CDN present and there is an origin server sitting in the US. As clients access the website, the first step is DNS resolution, typically done by the user’s ISP. The next step is the HTTP request sent directly to the origin server. The user experience will vary depending on their location. For example, you can see the latency is much lower for users in the US, where the origin server is located. For users outside the US, the latency increases, thus resulting in a higher round-trip time (RTT).

As more clients make requests to the origin server, the load on the network and server increases, resulting in higher latency and higher costs for resource and bandwidth use.

From a security perspective, the origin server is also vulnerable to DDoS attacks at both the infrastructure and application layer. A DDoS attack could be initiated from a botnet sending millions of requests to the origin server, consuming resources and preventing it from serving legitimate clients.

Further, in terms of resiliency, if the origin server temporarily goes offline, all content is inaccessible to users.

![Figure 1: Diagram of HTTP web requests between DNS and origin server without a CDN.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure1.BH2E9Wnc_2oxdBw.svg "Figure 1: HTTP Request with no CDN")

Figure 1: HTTP Request with no CDN

## How a CDN tackles web application challenges

A CDN helps address the challenges customers face around latency, performance, availability, redundancy, security, and costs. A CDN's core goal is to decrease latency and increase performance for websites and applications by caching content as close as possible to end users or those accessing the content.

CDNs decrease latency and increase performance by having many data center locations across the globe that cache the content from the origin. The goal is to have content cached as close as possible to users, so content is cached at the edge of the CDN provider's network.

### Impacts

* **Improved website load time**: Instead of every client making a request to the origin server, which could be located a considerable distance away, the request is routed to a local server that responds with cached content, thus decreasing latency and increasing overall performance. Regardless of where the origin server and clients are located, performance will be more consistent for all users, as the CDN will serve locally cached content when possible.
* **Increased content availability and redundancy:** Because every client request no longer needs to be sent to the origin server, CDNs provide not only performance benefits, but also availability and redundancy. Requests are load balanced over local servers with cached content; these servers respond to local requests, significantly decreasing overall load on the origin server. The origin server is only contacted when needed (when content is not cached or for dynamic, non-cacheable content).
* **Improved website security:** A CDN acts as a reverse proxy and sits in front of origin servers. Thus it can provide enhanced security such as DDoS mitigation, improvements to security certificates, and other optimizations.
* **Reduced bandwidth costs:** Because CDNs use cached content to respond to requests, the number of requests sent to the origin server is reduced, thus also reducing associated bandwidth costs.

### Routing requests to CDN nodes

An important difference in some CDN implementations is how they route traffic to the respective local CDN nodes. Routing requests to CDN nodes can be done via two different methods:

**DNS unicast routing**

In this method, recursive DNS queries redirect requests to CDN nodes: the client's DNS resolver forwards requests to the CDN's authoritative nameserver. CDNs based on DNS unicast routing are not ideal in that clients may be geographically dispersed from their DNS resolver, and decisions on the closest CDN node are based on the location of the client's DNS resolver rather than the client's IP address. Also, if any changes are needed for the DNS response, there is a dependency on DNS time to live (TTL) expiration.

Further, since DNS routing uses unicast addresses, traffic is routed directly to a specific node, creating possible concerns when there are traffic spikes, as in a DDoS attack.

Another challenge with DNS-based CDNs is that DNS is not very graceful upon failover. Typically a new session or application must be started for the DNS resolver with a different IP address to take over.
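To illustrate why TTL makes DNS-based failover slow, here is a minimal, self-contained sketch of a caching stub resolver (not a real DNS implementation; names, IPs, and TTLs are hypothetical): clients keep receiving the old IP until the cached record's TTL expires.

```python
# Toy stub resolver that caches answers for their TTL, modeling why
# DNS-based failover is delayed by caching.
class CachingResolver:
    def __init__(self):
        self.cache = {}  # name -> (ip, expires_at)

    def resolve(self, name: str, now: float, authoritative: dict) -> str:
        cached = self.cache.get(name)
        if cached and now < cached[1]:
            return cached[0]                  # serve the cached answer
        ip, ttl = authoritative[name]         # cache miss or expired: re-query
        self.cache[name] = (ip, now + ttl)
        return ip

auth = {"app.example.com": ("192.0.2.10", 300)}  # 300-second TTL
r = CachingResolver()
print(r.resolve("app.example.com", now=0, authoritative=auth))    # 192.0.2.10

# The origin fails over; the authoritative record now points elsewhere,
# but clients with a warm cache do not see the change until TTL expiry.
auth["app.example.com"] = ("198.51.100.7", 300)
print(r.resolve("app.example.com", now=120, authoritative=auth))  # 192.0.2.10
print(r.resolve("app.example.com", now=301, authoritative=auth))  # 198.51.100.7
```

With anycast, by contrast, the same IP is announced from many locations, so rerouting around a failed node happens in BGP without waiting on client-side DNS caches.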

**Anycast routing**

The Cloudflare CDN, which is discussed in more detail in the next section, uses anycast routing. Anycast allows for nodes on a network to have the same IP address. The same IP address is announced from multiple nodes in different locations, and client redirection is handled via the Internet’s routing protocol, BGP.

Using an anycast-based CDN has several advantages:

* Incoming traffic is routed to the nearest data center with the capacity to process the requests efficiently.
* Availability and redundancy is inherently provided. Since multiple nodes have the same IP address, if one node were to fail, requests are simply routed to another node in close proximity.
* Because anycast distributes traffic across multiple data centers, it increases the overall surface area, thus preventing any one location from becoming overwhelmed with requests. For this reason, anycast networks are very resilient to DDoS attacks.

## Introducing the Cloudflare CDN

Cloudflare provides a Software as a Service (SaaS) model for CDN. With Cloudflare’s SaaS model, customers benefit from the Cloudflare CDN without having to manage or maintain any infrastructure or software.

The benefits of the Cloudflare CDN can be attributed to the following two points, discussed in more detail in this section:

1. CDNs inherently increase performance by caching content on servers close to the user
2. The unique Cloudflare architecture and integrated ecosystem

Figure 2 shows a simplified view of the Cloudflare CDN. Clients are receiving their response back from a server on Cloudflare’s global anycast network closest to where the clients are located, thus drastically reducing the latency and RTT. The diagram depicts a consistent end-user experience regardless of the physical location of the clients and origin.

![Figure 2 is a diagram representing the traffic between a client and a server on Cloudflare's global anycast network at different client locations.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure2.DP9jXMC9_Z135xXW.svg "Figure 2: HTTP request to Cloudflare CDN with anycast")

Figure 2: HTTP request to Cloudflare CDN with anycast

## Cloudflare CDN architecture and design

Figure 3 is a view of the Cloudflare CDN on the global anycast network. In addition to using anycast for network performance and resiliency, the Cloudflare CDN leverages Tiered Cache to deliver optimized results while saving costs for customers. Customers can also [enable Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/get-started/) to find the fastest network path to route requests to the origin server. These capabilities are discussed in detail in the remainder of this document.

![Figure 3: Diagram representing requests coming from an end user, protected by Cloudflare products including WAF and DDoS protection, and traveling through the anycast Network to reach the origin server using Smart Tiered Cache.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure3.CcIfEHZq_STCJW.svg "Figure 3: Cloudflare CDN with Tiered Cache on global anycast network")

Figure 3: Cloudflare CDN with Tiered Cache on global anycast network

In the above diagram, there are a few key points to understand about the Cloudflare CDN and the global anycast network it resides on:

* An important differentiator is that Cloudflare utilizes one global network and runs every service on every server in every Cloudflare data center, thus providing end users the closest proximity to Cloudflare’s services, with the highest scale, resiliency, and performance.
* Cloudflare is a reverse proxy, meaning it receives requests from clients and proxies the requests back to the customer’s origin servers. Thus, every request traverses through Cloudflare’s network before reaching the customer’s network. Since Cloudflare has hardened and protected its infrastructure at the edge (ingress), all customers are consequently also protected from infrastructure-level and volumetric DDoS attacks. Requests and traffic must go through the protected Cloudflare network before reaching the customer’s origin server.
* The Cloudflare CDN leverages the Cloudflare global anycast network. Thus the incoming request is routed to and answered by the node closest to the user.
* The inherent benefits of anycast are decreased latency, network resiliency, higher availability, and increased security due to larger surface area for absorbing both legitimate traffic loads and DDoS attacks. Cloudflare’s global anycast network spans [hundreds of cities worldwide ↗](https://www.cloudflare.com/network/), reaching 95% of the world’s Internet-connected population within 50 milliseconds while providing over 405 Tbps network capacity and DDoS protection capability.
* Edge nodes within the Cloudflare network cache content from the origin server and are able to respond to requests via a cached copy. Cloudflare also provides [DNS](https://developers.cloudflare.com/dns/), [DDoS protection](https://developers.cloudflare.com/ddos-protection/), [WAF](https://developers.cloudflare.com/waf/), and other performance, reliability, and security services using the same edge architecture.
* [Argo](https://developers.cloudflare.com/argo-smart-routing/) uses optimized routing and caching technology across the Cloudflare network to deliver responses to users more quickly, reliably, and securely. Argo includes Smart Routing and [Tiered Cache](https://developers.cloudflare.com/cache/how-to/tiered-cache/). Cloudflare leverages Argo to provide an enhanced CDN solution.

### Tiered Cache

Once a site is onboarded, standard caching is configured by default. With standard caching, each data center acts as a direct reverse proxy for the origin servers. A cache miss in any data center results in a request being sent to the origin server from the ingress data center.

Although standard caching works, it is not the most efficient design — cached content closer to the client may already exist in other Cloudflare data centers, and origin servers can be unnecessarily overloaded as a result. Thus, it is best to enable Tiered Cache, which is included with every Cloudflare plan. With Tiered Cache, certain data centers act as reverse proxies to the origin for other data centers, resulting in more cache hits and faster response times.

Tiered Cache leverages the scale of Cloudflare’s network to minimize requests to customer origins. When a request comes into a Cloudflare data center, if the requested content is not locally cached, other Cloudflare data centers are checked for the cached content.

Cloudflare data centers have shorter distances and faster paths between them than the connections between data centers and customer origin servers, optimizing the response to the client with a significant improvement in cache hit ratio. The Cloudflare CDN leverages Argo Smart Routing data to determine the best upper tier data centers to use for Tiered Cache. Argo Smart Routing can also be enabled as an add-on to provide the fastest paths between data centers and origin servers for cache misses and other types of dynamic traffic.
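As a rough sketch of the lookup order just described (in-memory dicts stand in for data centers; all names are invented), a cache miss at a lower tier is served from the upper tier, and only an upper-tier miss reaches the origin:

```python
# Tiered Cache sketch: lower tier -> upper tier -> origin.
# Each tier caches the response on the way back to the client.

origin_fetches = 0

def fetch_origin(key):
    global origin_fetches
    origin_fetches += 1
    return f"content-for-{key}"

def get(key, lower, upper):
    """Serve from the lower tier, then the upper tier, then the origin."""
    if key in lower:
        return lower[key]               # cache HIT at the ingress data center
    if key not in upper:                # cache MISS at the upper tier too:
        upper[key] = fetch_origin(key)  # only the upper tier contacts origin
    lower[key] = upper[key]             # populate the lower tier on the way back
    return lower[key]

upper = {}
dc1, dc3 = {}, {}                       # two lower-tier data centers

get("/logo.png", dc1, upper)            # MISS everywhere -> one origin fetch
get("/logo.png", dc3, upper)            # MISS at dc3, HIT at upper -> no fetch
print(origin_fetches)                   # 1
```

Two requests from different lower tiers cost the origin only one fetch, which is the bandwidth and load saving Tiered Cache provides.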

The Cloudflare CDN allows customers to configure tiered caching. Note that depending on the Cloudflare plan, different topologies are available for Tiered Cache. By default, tiered caching is disabled and can be enabled under the Caching tab of the dashboard.

#### Tiered Cache topologies

The different cache topologies allow customers to control how Cloudflare interacts with origin servers to help ensure higher cache hit ratios, fewer origin connections, and reduced latency.

| **Smart Tiered Cache Topology (all plans)** | **Generic Global Tiered Topology (Enterprise only)** | **Custom Tiered Cache Topology (Enterprise only)** |
| --- | --- | --- |
| Recommended for most deployments. It is the default configuration once Tiered Cache is enabled. | Recommended for those who have high traffic that is spread across the globe and desire the highest cache usage and best performance possible. | Recommended for customers who have additional data on their user base and have specific geographic regions they would like to focus on. |
| Ideal for customers who want to leverage CDN for performance but minimize requests to origin servers and bandwidth utilization between Cloudflare and origin servers. | Generic Global Tiered Topology balances cache efficiency and latency. Instructs Cloudflare to use all Tier 1 data centers as upper tiers. | Custom Tiered Cache Topology allows customers to set a custom topology that fits specific needs (for example, upper tiers in specific geographic locations serving more customers). |
| Cloudflare will dynamically find the single best upper tier for an origin using Argo performance and routing data. | | Engage your account team to build a custom topology. |

### Traffic flow: Tiered Cache, Smart Tiered Cache topology

In Figure 4, Tiered Cache is enabled with Smart Tiered Cache Topology. The diagram depicts two separate traffic flows, summarized below. The first traffic flow (Client 1) is a request from a client that comes into Data Center 1. The second traffic flow (Client 2) is a subsequent request for the same resource into a different data center, Data Center 3.

![Figure 4: The same diagram as Figure 3 demonstrating requests between end users and origin server over the anycast Network, with bidirectional arrows indicating traffic flow enabled by Smart Tiered Cache.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure4.kIutXMs6_Z239rdF.svg "Figure 4: HTTP requests and traffic flow through Cloudflare CDN")

Figure 4: HTTP requests and traffic flow through Cloudflare CDN

| Request 1                                                                                                                                                                                                                                                                                                                                     | Request 2                                                                                                                                                            |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| First request received in Data Center 1 results in cache miss, as request had not been made previously by any client.                                                                                                                                                                                                                         | Second request by a different client received in Data Center 3 results in cache miss, as request had not been made previously by any client served by Data Center 3. |
| No cached content found, so Data Center 1 checks with its upper tier data center to request a copy of the content.                                                                                                                                                                                                                            | No cached content found, so Data Center 3 checks with the upper tier data center to request a copy of the content.                                                   |
| Upper tier data center also does not have content cached locally, so it makes a request to the origin server for content. Upon receiving the content, the upper tier data center caches it locally and relays the content to the requesting lower tier data center. The lower tier data center caches the content and responds to the client. | Cached content found at the upper tier data center. Data Center 3 retrieves and caches this content locally and responds to the client.                              |

In Figure 4, the top end user traffic flow shows what happens when a client request is received by the data center closest to the client, Data Center 1. Since nothing is cached locally at the ingress data center and tiered caching is enabled, a request is sent to the upper tier data center for a copy of the content to cache. Because the upper tier data center also does not have the content cached, it sends the request to the origin server, caches the received content upon response, and responds to the lower tier data center with the content. The lower tier data center caches the content and responds to the client.

Notice that when a new request for the same content is made to another data center (bottom end user traffic flow), Data Center 3, the content is not locally cached; however, the content is retrieved from the upper tier data center, where it was cached from the first request for the same content.

With the upper tier data center returning the cached content for the second request, the trip to the origin server is prevented, resulting in higher cache hit ratios, faster response times, saved bandwidth cost between the Cloudflare network and the origin server, and reduced load on the origin server responding to requests.

### Regional Tiered Cache

The main difference between Smart Tiered Cache and Generic Global Tiered Cache is the number of upper tiers that can talk to the origin servers. With Smart Tiered Cache, the closest upper tier to the origin is selected using Argo performance and routing data. All requests that experience a cache `MISS` at a lower tier funnel through this single upper tier, giving them a higher chance of a cache `HIT` and avoiding traffic to the origin server. The downside of this architecture is that the lower tier could be located across the globe from the upper tier: even if the upper tier can fulfill the request from its cache, the distance between the two tiers can add latency to the response. To summarize, Smart Tiered Cache routes all cache-fill requests through a single upper tier location, which increases cache `HIT` percentages and reduces requests to the origin server, but it can add latency to responses when the upper tier is far from the lower tier that originated the request.

With Generic Global Tiered Cache, Cloudflare uses its largest data centers around the globe as upper tier caches, which means, in general, that an upper tier cache is much closer to the lower tier cache. This can greatly reduce latency when lower tiers need to pass requests to upper tiers. However, it ultimately increases the number of requests serviced by the origin, as each upper tier cache must populate from the origin. To summarize, Generic Global Tiered Cache can improve response times when the cache is populated, but will also increase load on the origin servers.

Regional Tiered Cache combines the best of both strategies by adding another layer of cache to the architecture. Using the Regional Tiered Cache option with Smart Tiered Cache means that while a single upper tier cache location still exists closest to the origin, a regional tier is added between the upper and lower tiers that is geographically closer to the lower tier. Requests from lower tiers now check a regional tier for cached content before being sent to an upper tier. A single regional tier can accept requests from several lower tier caches and can therefore greatly improve performance and latency for globally available applications.
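Extending the earlier two-tier idea, a regional tier can be modeled as one more layer in the lookup chain. This is an illustrative sketch only; the tier structure and names are invented:

```python
# Regional Tiered Cache sketch: lower tier -> regional tier -> upper tier
# -> origin. Every tier below the tier that served the request is
# populated on the way back.

def get(key, tiers, fetch_origin):
    """tiers is ordered lower -> regional -> upper (dicts used as caches)."""
    for i, tier in enumerate(tiers):
        if key in tier:
            value = tier[key]           # HIT at tier i
            break
    else:
        i, value = len(tiers), fetch_origin(key)  # MISS everywhere
    for tier in tiers[:i]:              # populate every tier below the hit
        tier[key] = value
    return value

calls = []
lower, regional, upper = {}, {}, {}
fetch = lambda k: calls.append(k) or "v"   # records each origin fetch

get("/a", [lower, regional, upper], fetch)   # MISS everywhere -> origin
get("/a", [{}, regional, upper], fetch)      # fresh lower tier: regional HIT
print(len(calls))                            # 1
```

The second request, arriving at a different lower tier, is answered by the nearby regional tier without ever reaching the upper tier or the origin.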

Regional Tiered Cache is recommended for use with Smart Tiered Cache and Custom Tiered Cache. It is not beneficial for topologies that already have many upper tiers in many regions, such as Generic Global Tiered Cache.

#### Traffic flow: Tiered Cache, Smart Tiered Cache with Regional Tiered Cache

In Figure 5, Tiered Cache is enabled with Smart Tiered Cache Topology. The diagram depicts the topology of Smart Tiered Cache with Regional Tiered Cache enabled. When lower tier caches experience a cache `MISS`, they first send those requests to a more local, regional hub data center to see if it can serve the request from cache. If not, the request continues on to the upper tier and then, if necessary, to the origin server.

![Figure 5: Diagram illustrating requests between an end user and origin server with lower, regional and upper tiered caching enabled.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure5.B3Tq_F2z_Z239rdF.svg "Figure 5: Cloudflare CDN with Tiered Cache and Regional Tiered Cache")

Figure 5: Cloudflare CDN with Tiered Cache and Regional Tiered Cache

### Argo Smart Routing

Argo Smart Routing is a service that finds optimized routes across the Cloudflare network to deliver responses to users more quickly. As discussed earlier, Cloudflare CDN leverages Argo Smart Routing to determine the best upper tier data centers for Tiered Cache.

In addition, Argo Smart Routing can be enabled to ensure the fastest paths over the Cloudflare network are taken between upper tier data centers and origin servers at all times. Even without Argo Smart Routing, communication between upper tier data centers and origin servers is still intelligently routed around problems on the Internet to ensure origin reachability.

Argo Smart Routing accelerates traffic by taking into account real-time data and network intelligence from routing nearly 50 million HTTP requests per second; it ensures the fastest and most reliable network paths are traversed over the Cloudflare network to the origin server. On average, Argo Smart Routing accounts for 30% faster performance on web assets.

#### Traffic Flow: Tiered Cache, Smart Tiered Cache Topology with Argo Smart Routing

Figure 6 details the traffic flow when Tiered Cache and Argo Smart Routing are not enabled. The request comes into the closest data center, and, because content is not locally cached and Tiered Cache is not enabled, the request is sent directly to the origin server for the content. Also, since Argo Smart Routing is not enabled, a reliable, but perhaps not the fastest, path is taken when communicating with the origin server.

![Figure 6: Diagram with bidirectional arrows indicating a request between an end user and origin server without Argo Smart Routing enabled.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure6.CUGfxAW8_Z239rdF.svg "Figure 6: Cloudflare CDN without Tiered Cache or Argo Smart Routing")

Figure 6: Cloudflare CDN without Tiered Cache or Argo Smart Routing

Figure 7 articulates the traffic flow with both Tiered Cache and Argo Smart Routing enabled. When a request is received by Data Center 1 and there is a cache miss, the cache of the upper tier data center, Data Center 6, is checked. If the cached content is not found at the upper tier data center, with Argo Smart Routing enabled, the request is sent on the fastest path from the upper tier data center to the origin.

The fastest path is determined by the Argo network intelligence capabilities, which take into account real-time network data such as congestion, latency, and RTT.
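As a loose illustration of that idea (the scoring formula and numbers below are invented, not Argo's actual algorithm), path selection from real-time measurements might be sketched as:

```python
# Pick the best path to the origin from live measurements, penalizing
# congested links: packet loss forces retransmissions, which inflate
# the effective latency a client experiences.

def path_score(latency_ms, loss_rate):
    # Invented scoring: a 1% loss rate costs a 10% latency penalty.
    return latency_ms * (1 + 10 * loss_rate)

paths = {
    "transit-a": path_score(90, 0.05),   # lower latency, but congested
    "transit-b": path_score(120, 0.0),   # longer path, but clean
}
best = min(paths, key=paths.get)
print(best)  # transit-b: 90 ms * 1.5 = 135 effective ms vs 120
```

The shortest path on paper (`transit-a`) loses to the clean path once congestion is priced in, which is why real-time telemetry matters more than static topology.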

**With the Cloudflare CDN, Argo Smart Routing is used when:**

1. There is a cache miss and the request needs to be sent to the origin server to retrieve the content.
2. There is a request for non-cacheable content, such as dynamic content (ex: APIs), and the request must go to the origin server.

![Figure 7: Diagram with bidirectional arrows indicating a request between an end user and origin server, with Argo Smart Routing enabled to improve speed.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure7.Cxfbf7KH_Z1eobh2.svg "Figure 7: Cloudflare CDN with Tiered Cache and Argo Smart Routing")

Figure 7: Cloudflare CDN with Tiered Cache and Argo Smart Routing

### Cache Reserve

Expanding on the idea of Tiered Cache, Cache Reserve further leverages the scale and speed of the Cloudflare network and uses R2, Cloudflare’s persistent object storage, to cache content for even longer. Cache Reserve helps customers reduce bills by eliminating egress fees from origins, while its multiple layers of resiliency and protection help ensure content is reliably available and loads faster. In essence, Cache Reserve is an additional, higher tier of cache with a longer retention duration.

While Cache Reserve can function without Tiered Cache enabled, it is recommended that the two be enabled together. Tiered Cache funnels, and potentially eliminates, requests to Cache Reserve, which avoids redundant read operations and redundant storage of cached content, reducing egress and storage fees. When enabling Cache Reserve via the Cloudflare dashboard, a warning is shown if Tiered Cache is not enabled.

Cache Reserve has a retention period of 30 days, meaning it holds cached content for 30 days regardless of cache headers or TTL policy. The TTL policy still governs the content’s freshness: when the content’s cache TTL expires inside Cache Reserve, the content must be revalidated by checking the origin for updates. The TTL policy can be set by any number of methods, such as `Cache-Control` or `CDN-Cache-Control` response headers, Edge Cache TTL, cache TTL by status code, or Cache Rules. Every time content is read from Cache Reserve, the retention timer resets to 30 days. If cached content has not been read from Cache Reserve for 30 days, it is deleted.
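The two clocks described above, the freshness TTL and the 30-day retention timer, can be sketched as follows (class and method names are invented for illustration):

```python
# Freshness vs. retention in Cache Reserve:
#  - the TTL clock decides when content must be revalidated with the origin;
#  - the retention clock deletes content only if it goes unread for 30 days,
#    and every read resets it.

DAY = 86400
RETENTION = 30 * DAY

class ReserveEntry:
    def __init__(self, ttl, now):
        self.expires_at = now + ttl       # freshness clock
        self.last_read = now              # retention clock

    def read(self, now):
        if now - self.last_read > RETENTION:
            return "evicted"              # unread for over 30 days: deleted
        self.last_read = now              # every read resets retention
        if now > self.expires_at:
            return "revalidate"           # stale: check origin for updates
        return "fresh"

e = ReserveEntry(ttl=12 * 3600, now=0)
print(e.read(now=6 * 3600))   # fresh: within TTL
print(e.read(now=2 * DAY))    # revalidate: TTL expired, but still retained
print(e.read(now=40 * DAY))   # evicted: 38 days since the last read
```

Note how the second read is stale but still held in the reserve; only a long gap between reads triggers deletion.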

There are three main criteria content must meet to be eligible for Cache Reserve:

1. The content must be cacheable. See the [Cache documentation](https://developers.cloudflare.com/cache/) for more details on cacheable content.
2. TTL is set to at least 10 hours. This can be set by any method from the previous paragraph.
3. The `Content-Length` header must be present in the response. Note that this means responses using `Transfer-Encoding: chunked` will prevent Cache Reserve from being populated.
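A simplified sketch of these three checks (function and parameter names invented) might look like:

```python
# Eligibility checks for admitting a response into Cache Reserve.

MIN_TTL = 10 * 3600  # TTL must be at least 10 hours

def cache_reserve_eligible(cacheable, ttl_seconds, headers):
    if not cacheable:                 # 1. content must be cacheable
        return False
    if ttl_seconds < MIN_TTL:         # 2. TTL set to at least 10 hours
        return False
    # 3. Content-Length is required, so chunked transfer encoding
    #    (which omits it) keeps the response out of Cache Reserve.
    return "Content-Length" in headers

print(cache_reserve_eligible(True, 12 * 3600, {"Content-Length": "1024"}))
print(cache_reserve_eligible(True, 12 * 3600, {"Transfer-Encoding": "chunked"}))
```

The first response qualifies; the chunked response fails the third check even though it is cacheable with a long TTL.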

When combined with Tiered Caching and Argo Smart Routing, Cache Reserve can be a powerful tool for increasing cache hits and in turn reducing load on origin servers while also improving performance by bringing the content closer to the end user.

Note

Using [Image Resizing](https://developers.cloudflare.com/images/transform-images/) with Cache Reserve will not result in resized images being stored in Cache Reserve since Image Resizing takes place after reading from Cache Reserve. Resized images will be cached in other available tiers when they are served after resizing.

### Traffic flow: Cache Reserve topology

Figure 8 illustrates how Cache Reserve can help reduce load on an origin server while also helping repopulate cache stores in both upper and lower tier data centers.

![Figure 8: Traffic between end users and an origin server showing Cache Reserve as the final step in the architecture of the Cloudflare CDN solution.](https://developers.cloudflare.com/_astro/ref-arch-cdn-figure8.B8u-UV7X_Z239rdF.svg "Figure 8: Cloudflare CDN with Tiered Cache and Cache Reserve")

Figure 8: Cloudflare CDN with Tiered Cache and Cache Reserve

### China Network & Global Acceleration for clients in China

Latency depends not just on how far the client is from the origin or cache, but can also be significantly affected by the geographic region of the traffic — like China. To address these latency challenges, Cloudflare provides two key solutions:

1. [China Network](https://developers.cloudflare.com/china-network/) provides in-China caching for end users located in China, regardless of the origin location. This solution is provided in collaboration with JD Cloud and uses their data centers to deliver faster, more reliable cache performance for users in China than data centers outside of China can offer.
2. [Global Acceleration](https://developers.cloudflare.com/china-network/concepts/global-acceleration/) offers reliable and secure connectivity to streamline content from origins to JD Cloud data centers in China. This is particularly beneficial for dynamic content like web applications and API calls.

## Summary

To summarize, the Cloudflare CDN is a SaaS offering that addresses the challenges customers face around latency, performance, availability, redundancy, security, and cost. The Cloudflare CDN leverages Cloudflare’s global anycast network and Tiered Cache to deliver optimized results while saving costs for customers. Customers can also [enable Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/get-started/) to ensure the fastest network path is used to route requests to the origin server, and can enable Cache Reserve to increase cache hits, further saving costs and increasing the performance of their website or application.


---

---
title: CrowdStrike and Cloudflare - A unified security ecosystem for automated, risk-based protection
description: This reference architecture outlines how Cloudflare and CrowdStrike solutions integrate to create a unified security ecosystem that combines endpoint protection with zero trust network access, threat intelligence sharing, and automated remediation workflows. Organizations can leverage this integration to implement risk-based access policies, improve threat detection, and orchestrate security responses across both platforms.
image: https://developers.cloudflare.com/core-services-preview.png
---


# CrowdStrike and Cloudflare - A unified security ecosystem for automated, risk-based protection

**Last reviewed:**  about 2 months ago 

## Abstract

This reference architecture outlines how Cloudflare and CrowdStrike solutions integrate to create a unified security ecosystem that combines endpoint protection with zero trust network access, threat intelligence sharing, and automated remediation workflows. Organizations can leverage this integration to implement risk-based access policies, improve threat detection, and orchestrate security responses across both platforms.

## Introduction

Today's cybersecurity landscape presents organizations with a complex set of challenges. The expanding attack surface created by remote work, cloud migration, and sophisticated threats requires a cohesive approach that spans endpoint protection, network security, and identity management.

Cloudflare One and CrowdStrike Falcon® provide a powerful integrated solution to these challenges. By combining CrowdStrike's industry-leading security platform with Cloudflare's secure network and zero trust capabilities, organizations can implement comprehensive protection that secures both their devices and network traffic while simplifying management through automation and policy consistency.

### Why integrate Cloudflare and CrowdStrike?

**Context-aware zero trust:** Identity alone is no longer sufficient for trust. Cloudflare Access ingests real-time Falcon Zero Trust Assessment (ZTA) scores to enforce dynamic, risk-based policies. This ensures that only devices verified as healthy and compliant can access sensitive resources, effectively blocking compromised endpoints even if user credentials are valid.
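Conceptually, such a risk-based rule combines identity with device health. The sketch below is illustrative only; the threshold and score scale are invented, and real policies are configured in the Cloudflare dashboard:

```python
# Risk-based access decision: valid credentials are necessary but not
# sufficient; the device's health score must also clear a threshold.

ZTA_THRESHOLD = 70  # invented cutoff for this example

def allow_access(identity_verified, zta_score):
    # A valid user on a compromised device is still denied.
    return identity_verified and zta_score >= ZTA_THRESHOLD

print(allow_access(True, 85))   # True:  healthy device, valid user
print(allow_access(True, 40))   # False: valid user, risky device
print(allow_access(False, 95))  # False: healthy device, unknown user
```

Because the score is evaluated per request, a device that degrades mid-session loses access without any manual policy change.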

**Unified visibility and extended detection and response (XDR):** Network and endpoint data often reside in disconnected silos. This integration streams Cloudflare's rich network logs (from Cloudflare Gateway, Cloudflare Web Application Firewall (WAF), and Cloudflare Email Security services) directly into CrowdStrike Falcon® Next-Gen SIEM. This unified view allows analysts to correlate network blocks with specific endpoint processes, providing a complete picture of the attack chain.

**Automated remediation:** By connecting enforcement points across Cloudflare and CrowdStrike, security teams can move from manual reaction to automated protection. A threat detected on the endpoint can trigger an immediate block at the network edge (and vice versa), drastically reducing risk and mean time to respond (MTTR) without increasing operational overhead.

### Key integration points

The integration between Cloudflare and CrowdStrike creates a powerful security ecosystem where device security posture directly influences access decisions. When a user attempts to access an application, the Cloudflare One platform verifies the request by checking multiple factors: the CrowdStrike Falcon® agent's security assessment, user identity from supported providers, and additional contextual information. Access is granted only when all policy requirements are met, ensuring that only secure devices can reach sensitive resources.

This continuous verification process is enhanced by bidirectional data sharing between the platforms:

1. **Device posture assessment:** CrowdStrike's real-time Zero Trust Assessment (ZTA) telemetry informs Cloudflare Zero Trust access decisions.
2. **Unified security logging:** Cloudflare forwards security telemetry to CrowdStrike's Falcon Next-Gen SIEM.
3. **Email security intelligence:** Cloudflare Email Security alerts feed into CrowdStrike's logging and analysis tools.
4. **Automated remediation workflows:** Security events trigger coordinated, automated responses across both platforms, orchestrated via CrowdStrike Falcon Fusion SOAR.

## Integration architecture overview

The integration between Cloudflare and CrowdStrike establishes a comprehensive security architecture centered on a bi-directional intelligence exchange. This ecosystem connects device endpoint security with zero trust network access and automated response.

The architecture is defined by the following key flows:

* **Zero trust access control:**  
   * The user's endpoint runs both the Cloudflare One Client and the CrowdStrike Falcon agent.  
   * CrowdStrike Falcon Device Posture and ZTA scores are shared with Cloudflare via a service-to-service API.  
   * Cloudflare uses this real-time device health information as a critical factor in its Cloudflare Access decisions, enforcing zero trust policies for both public and private applications.
* **Unified security telemetry:**  
   * Cloudflare sends network and security logs (via Logpush) to CrowdStrike Falcon Next-Gen SIEM for centralized correlation, analysis, and threat detection.
* **Automated remediation:**  
   * Security events and threat detections within the CrowdStrike platform trigger automated containment and response workflows, orchestrated via Falcon Fusion SOAR (security orchestration, automation, and response), which leverages API automation to take bi-directional action across both platforms.
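The service-to-service posture sharing in the first flow is configured once between the two platforms. The payload below sketches the general shape of such a configuration; the field names, values, and endpoint path are illustrative assumptions, so consult the current Cloudflare device posture API reference before use.

```python
import json

# Sketch of a CrowdStrike service-to-service posture integration payload,
# roughly as it might be sent to Cloudflare's device posture integration API.
# All values are placeholders and field names are illustrative.
integration = {
    "name": "CrowdStrike Falcon ZTA",
    "type": "crowdstrike_s2s",   # service-to-service provider type
    "interval": "10m",           # how often Cloudflare polls Falcon for scores
    "config": {
        "api_url": "https://api.crowdstrike.com",
        "client_id": "FALCON_CLIENT_ID",
        "client_secret": "FALCON_CLIENT_SECRET",
        "customer_id": "FALCON_CUSTOMER_ID",
    },
}

payload = json.dumps(integration)
```

Once the integration exists, posture rules and Access policies can reference the Falcon ZTA score without any per-device configuration.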

This integrated approach enables secure access to various application types:

* Internet applications (SaaS, web apps)
* Self-hosted applications (on premises, data center)
* SaaS applications (protected through identity proxy)

![High level architecture of integration between Cloudflare and CrowdStrike](https://developers.cloudflare.com/_astro/Main_Arch.COvXoOw2_Z96otW.svg "Figure 1: High level architecture - Integration")

Figure 1: High level architecture - Integration

### Key use cases

The integration between Cloudflare and CrowdStrike enables six use cases that address critical security challenges:

#### 1. [Zero trust access with device posture and user risk score](#use-case-detail-zero-trust-with-user-and-device-risk-posture)

**Challenge:** With a hybrid workforce, users access sensitive applications from personal or infected devices outside the corporate perimeter, bypassing traditional firewall controls.

**Solution:** Integrate CrowdStrike Falcon ZTA scores directly into Cloudflare Access policies to enforce real-time conditional access.

#### 2. [Unified threat hunting](#use-case-detail-unified-threat-hunting)

**Challenge:** Security analysts struggle to correlate network alerts (e.g., a blocked malicious domain) with specific endpoint behavior because data resides in separate silos.

**Solution:** Stream Cloudflare Gateway, WAF, and Email Security logs via Logpush to CrowdStrike Falcon Next-Gen SIEM for centralized analysis.

#### 3. [Automated edge remediation](#use-case-detail-automated-edge-remediation)

**Challenge:** Manual incident response is too slow to stop automated attacks. By the time an analyst sees an alert, the adversary may have already moved laterally or exfiltrated data.

**Solution:** Leverage CrowdStrike Falcon Fusion SOAR to automatically trigger remediation actions within Cloudflare based on detected threats.

#### 4. [Compromised user lifecycle: Detection and response](#use-case-detail-compromised-user-lifecycle--detection-and-response)

**Challenge:** A user's laptop is infected with malware. While an endpoint detection and response (EDR) tool might detect it, the user still has valid session tokens allowing them to access SaaS apps and sensitive data.

**Solution:** A closed-loop response where endpoint detection immediately revokes network access and triggers investigation.

#### 5. [Insider threat and data protection](#use-case-detail-insider-threat-and-data-protection)

**Challenge:** A departing employee attempts to upload proprietary source code to a personal cloud storage site. The traffic is encrypted, and the device is "healthy," bypassing standard checks.

**Solution:** Combine Cloudflare Data Loss Prevention (DLP) inspection with CrowdStrike behavioral analytics to detect and block data theft.

#### 6. [Proactive application defense](#use-case-detail-proactive-application-defense)

**Challenge:** Attackers use automated botnets to scan applications for vulnerabilities. WAFs block known signatures, but low-and-slow attacks can slip through regular filters.

**Solution:** Use endpoint data to inform application security, creating an immune system for web assets.

## Use case detail: Zero trust with user and device risk posture

This use case demonstrates how the integration helps prevent compromised or unmanaged devices from accessing corporate resources.

### Phase 1: Device and user risk assessment

The CrowdStrike Falcon agent continuously monitors the endpoint, calculating a ZTA score (1–100) based on OS health, patch levels, and threat activity. In parallel, Cloudflare continuously updates the user risk score based on user and entity behavior analytics (UEBA).

### Phase 2: Policy evaluation

When a user requests access to an application, Cloudflare Access intercepts the request and queries the CrowdStrike API for the device's current ZTA score.

### Phase 3: Access enforcement

Cloudflare permits the connection only if the ZTA score meets the minimum threshold defined in the zero trust policy; otherwise, the user is presented with a Cloudflare Access block page, typically instructing them to remediate the device.
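The three phases can be condensed into a single policy check. The function below is a simplified model of the decision logic, not Cloudflare's implementation; the threshold and risk levels are assumptions chosen for illustration.

```python
def evaluate_access(zta_score: int, user_risk: str, min_zta: int = 60,
                    blocked_risk: tuple = ("high",)) -> bool:
    """Model of the access decision: the ZTA score (1-100) must meet the
    policy threshold, and the user risk level (from UEBA) must not be in
    the blocked set. Both signals are re-checked on each request."""
    if zta_score < min_zta:
        return False   # unhealthy device -> Access block page
    if user_risk.lower() in blocked_risk:
        return False   # risky user behavior -> blocked
    return True

# A healthy device with a low-risk user is allowed; a low ZTA score is not.
assert evaluate_access(85, "low") is True
assert evaluate_access(40, "low") is False
assert evaluate_access(90, "high") is False
```

Because the check runs per request rather than per session, a drop in either signal takes effect on the user's very next access attempt.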

![Zero Trust access flow showing device posture and user risk score evaluation](https://developers.cloudflare.com/_astro/UseCase01.BoX0v3_H_Z96otW.svg "Figure 2: Zero Trust access with device posture and user risk score")

Figure 2: Zero Trust access with device posture and user risk score

## Use case detail: Unified threat hunting

This use case focuses on providing comprehensive visibility, eliminating blind spots between network traffic and endpoint activity.

### Phase 1: Data ingestion

Cloudflare Logpush filters and forwards HTTP requests, DNS queries, and firewall events to the Falcon Next-Gen SIEM data intake API.
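A Logpush job of this kind is defined once per dataset. The sketch below shows the general shape of such a job for HTTP request logs; the destination URL, token, and field list are placeholders, so verify them against the Cloudflare Logpush API reference and the Falcon Next-Gen SIEM connector documentation.

```python
import json

# Sketch of a Logpush job forwarding HTTP request logs to an HEC-style
# ingest endpoint such as a Falcon Next-Gen SIEM data connector.
# The destination hostname, token, and field names are placeholders.
logpush_job = {
    "name": "http-to-falcon-ngsiem",
    "dataset": "http_requests",
    "enabled": True,
    "destination_conf": (
        "https://ingest.example.com/services/collector/raw"
        "?header_Authorization=Bearer%20INGEST_TOKEN"
    ),
    "output_options": {
        "field_names": ["ClientIP", "ClientRequestHost",
                        "ClientRequestURI", "EdgeStartTimestamp"],
        "timestamp_format": "rfc3339",
    },
}

body = json.dumps(logpush_job)
```

Similar jobs can be created for the DNS and firewall event datasets so that all three log types land in the same SIEM index.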

### Phase 2: Correlation

Falcon Next-Gen SIEM indexes this data alongside endpoint telemetry, allowing analysts to query a single dataset.

### Phase 3: Investigation

An analyst investigating an endpoint alert can instantly pivot to see every network request that device made through Cloudflare, identifying the phishing site or C2 server that caused the infection.

![Unified threat hunting workflow between Cloudflare and CrowdStrike](https://developers.cloudflare.com/_astro/UseCase02.DNAdCPJO_Z96otW.svg "Figure 3: Unified threat hunting")

Figure 3: Unified threat hunting

## Use case detail: Automated edge remediation

This use case demonstrates how implementing CrowdStrike Falcon Fusion SOAR helps reduce the mean time to respond (MTTR) to rapidly evolving threats.

### Phase 1: Threat detection

CrowdStrike Falcon detects a specific indicator of compromise (IOC), such as a malicious IP address attacking multiple endpoints.

### Phase 2: Orchestration

A Falcon Fusion SOAR workflow is triggered by the detection.

### Phase 3: Edge mitigation

The workflow calls the Cloudflare API to add the malicious IP to a blocklist in Cloudflare WAF or Gateway, instantly protecting the entire organization from that threat source.
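A SOAR workflow step of this kind typically appends the IOC to a Cloudflare IP list that a WAF or Gateway block rule already references. The helper below builds such a request against the Cloudflare Lists API; the account and list identifiers are placeholders, and the exact endpoint shape should be confirmed against the current API reference.

```python
import json

ACCOUNT_ID = "YOUR_ACCOUNT_ID"   # placeholder
LIST_ID = "YOUR_IP_LIST_ID"      # placeholder: list referenced by a block rule

def block_ioc(ip: str, detection_id: str):
    """Build the URL and JSON body a SOAR workflow could send to append
    a malicious IP to a Cloudflare IP list. Rules that reference the list
    pick up the new entry without any rule changes."""
    url = (f"https://api.cloudflare.com/client/v4/accounts/"
           f"{ACCOUNT_ID}/rules/lists/{LIST_ID}/items")
    body = [{"ip": ip, "comment": f"Falcon Fusion detection {detection_id}"}]
    return url, json.dumps(body)

url, body = block_ioc("203.0.113.7", "det-1234")
```

Keeping the block rule static and mutating only the list keeps the automation simple and auditable: every appended item carries a comment tying it back to the originating detection.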

![Automated edge remediation workflow from threat detection to edge mitigation](https://developers.cloudflare.com/_astro/UseCase03.Cmq89bjl_Z96otW.svg "Figure 4: Automated edge remediation")

Figure 4: Automated edge remediation

## Use case detail: Compromised user lifecycle — Detection and response

This use case outlines how the combined integration pillars are leveraged to contain active endpoint compromise and prevent lateral movement.

### Phase 1: Detection and signal sharing

The Falcon agent detects malware execution. It immediately drops the device's ZTA score to "Critical" and sends an alert to the SIEM.

### Phase 2: Instant access revocation

Cloudflare Access, checking the ZTA score on the very next request, blocks the user from accessing Salesforce, email, or internal tools, effectively quarantining the device from the network.

### Phase 3: Investigate and remediate

Falcon Fusion SOAR automates a response playbook: it isolates the endpoint (network containment) and adds the user to a custom list in Cloudflare, tagging them in the logs for deeper retrospective analysis in Falcon Next-Gen SIEM and enforcing any additional policies attached to that list.

![Compromised user lifecycle showing detection, access revocation, and remediation](https://developers.cloudflare.com/_astro/UseCase04.BoAr7B_A_Z96otW.svg "Figure 5: Compromised user lifecycle - detection and response")

Figure 5: Compromised user lifecycle - detection and response

## Use case detail: Insider threat and data protection

This use case demonstrates how the unified approach helps prevent and respond to data exfiltration by trusted insider actors.

### Phase 1: DLP monitoring

Cloudflare DLP scans upload traffic. When it detects source code markers, it blocks the specific request and logs the event to Falcon Next-Gen SIEM via Logpush.

### Phase 2: Risk scoring and correlation

Falcon Next-Gen SIEM correlates this DLP event with endpoint activity (e.g., recent USB usage or large file copies). This behavior triggers a "High Risk" user tag.

### Phase 3: Adaptive control

Falcon Fusion SOAR updates the Cloudflare Zero Trust policy to require "step-up authentication" or remote browser isolation (RBI) for this specific user, preventing further data movement even for legitimate tasks until cleared by HR or security.

![Insider threat and data protection workflow with DLP monitoring and adaptive controls](https://developers.cloudflare.com/_astro/UseCase05.BSXF1uGi_Z96otW.svg "Figure 6: Insider threat and data protection")

Figure 6: Insider threat and data protection

## Use case detail: Proactive application defense

This use case explores the power of the integrated solutions to defend public applications against botnets and zero-day exploits.

### Phase 1: Attack identification

Cloudflare WAF blocks a series of SQL injection attempts from a specific subnet. These logs are sent to Falcon Next-Gen SIEM.

### Phase 2: Cross-domain analysis

CrowdStrike Threat Intelligence enriches the log data, identifying the subnet as part of a known targeted ransomware group.

### Phase 3: Defensive tuning

Falcon Fusion SOAR triggers a workflow to update Cloudflare WAF rules: It increases the "Bot Fight Mode" sensitivity for that region and creates a proactive block rule for the entire autonomous system number (ASN) associated with the attack, hardening the application before the main assault begins.
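The ASN-wide block in this phase can be expressed as a single WAF custom rule. The sketch below shows the rule definition a SOAR workflow might push via the Cloudflare Rulesets API; the ASN is a documentation example value, and the filter field name should be verified against the current Rules language reference.

```python
import json

# Sketch of a WAF custom rule blocking an entire ASN, as a SOAR workflow
# might add it through the Cloudflare Rulesets API. ASN 64496 is a
# reserved documentation value; the expression field name is illustrative.
asn_block_rule = {
    "action": "block",
    "description": "Proactive block: ASN linked to ransomware group",
    "expression": "(ip.src.asnum eq 64496)",
    "enabled": True,
}

payload = json.dumps(asn_block_rule)
```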

![Proactive application defense workflow from attack identification to defensive tuning](https://developers.cloudflare.com/_astro/UseCase06.C2xAjdoT_Z96otW.svg "Figure 7: Proactive application defense")

Figure 7: Proactive application defense

## Implementation components

The integration between Cloudflare and CrowdStrike leverages several key components from each platform to create a cohesive security ecosystem.

### Cloudflare components

1. **Zero Trust Network Access (ZTNA)**: Controls access to applications based on identity, device posture, and other contextual signals  
   * Application access policies  
   * Private network access  
   * Service token authentication  
   * Device posture verification
2. **Secure Web Gateway (SWG)**: Inspects and filters Internet-bound traffic  
   * URL filtering  
   * Malware protection  
   * Content categories  
   * File type controls
3. **Data Loss Prevention (DLP)**: Prevents unauthorized data exfiltration  
   * Built-in data profiles (PII, financial data, secrets)  
   * Custom data patterns  
   * Exact data matching  
   * Context awareness
4. **Remote Browser Isolation (RBI)**: Executes web content in a secure cloud environment  
   * File upload/download controls  
   * Clipboard restrictions  
   * Keyboard input controls  
   * Visual presentation only
5. **Email Security**: Prevents email-based threats  
   * Phishing protection  
   * Malicious attachment scanning  
   * Business email compromise detection  
   * Link isolation
6. **API-driven Cloud Access Security Broker (CASB)**: Monitors SaaS usage and security  
   * SaaS posture management  
   * Permission monitoring  
   * Data security scanning  
   * Public share detection
7. **Web Application Firewall (WAF)**  
   * Machine learning (ML) detection and blocking  
   * Custom rule creation  
   * Managed rule sets  
   * Rate limiting

### CrowdStrike components

1. **Falcon Endpoint Agent**: Provides comprehensive endpoint protection  
   * Behavior monitoring  
   * Malware prevention  
   * Device security posture assessment  
   * Vulnerability management
2. **Zero Trust Assessment (ZTA)**: Evaluates device security in real time  
   * OS security assessment  
   * Sensor status monitoring  
   * Overall device health scoring  
   * Continuous evaluation
3. **Falcon Next-Gen SIEM**: Centralizes security monitoring and analysis  
   * Log ingestion, correlation, and real-time searching  
   * Threat detection rules and alert triggering  
   * Security visualization with customizable dashboards  
   * Alert management and long-term data storage
4. **Falcon Insight XDR**: Provides extended detection and response capabilities  
   * Cross-domain detection  
   * Automated investigation  
   * Threat hunting  
   * Guided remediation
5. **Falcon Fusion SOAR**: Orchestrates and automates complex security workflows across the Cloudflare and CrowdStrike platforms for unified incident response  
   * Security orchestration  
   * Playbook execution  
   * Automated containment and enrichment  
   * Bi-directional actioning

## Summary

The integration between Cloudflare and CrowdStrike provides organizations with a comprehensive security solution that combines endpoint security, zero trust network access, and application protection. By leveraging the strengths of both platforms, organizations can achieve better visibility into their security posture, automate responses to threats, and more effectively protect their applications and data.

This reference architecture demonstrates how these solutions work together to address key security challenges, including zero trust adoption, application protection, and data security. By implementing this integrated approach, organizations can enhance their security posture while reducing the operational burden on their security teams.

## Resources

* [Cloudflare One - CrowdStrike](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/crowdstrike/)
* [CrowdStrike Marketplace - Cloudflare ↗](https://marketplace.crowdstrike.com/partners/cloudflare/)
* [CrowdStrike Falcon Fusion SOAR with Cloudflare SASE ↗](https://blog.cloudflare.com/integrating-crowdstrike-falcon-fusion-soar-with-cloudflares-sase-platform/)


---

---
title: Reference Architecture using Cloudflare SASE with Microsoft
description: This reference architecture explains how Microsoft and Cloudflare can be integrated together. By leveraging Cloudflare's secure network access, risky user isolation, and application and data visibility, organizations can consolidate management.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Reference Architecture using Cloudflare SASE with Microsoft

**Last reviewed:**  almost 2 years ago 

## Introduction

In today's rapidly evolving digital landscape, organizations are increasingly embracing cloud migration to modernize their environments and enhance productivity. Microsoft has emerged as a leading provider of cloud applications and services, offering a comprehensive suite of solutions to support hybrid work. However, this shift to the cloud also presents new challenges and risks that must be addressed to ensure the security and integrity of an organization's resources.

As organizations migrate to hybrid and multi-cloud environments, they often face the complexity of managing a combination of Software as a Service (SaaS), self-hosted, and non-web applications. This heterogeneous ecosystem can complicate the process of securing and controlling access to these resources. Additionally, relying on legacy, often on-premises, Virtual Private Network (VPN) solutions to securely connect users to applications can introduce security gaps and hinder employee productivity. To overcome these challenges and achieve greater security outcomes, organizations can benefit from partnering with Cloudflare, a leading provider of cloud security and performance solutions. Cloudflare offers seamless integration with Microsoft's cloud ecosystem, enabling customers to eliminate security gaps, enhance performance, and ensure reliability across their hybrid work environments.

In this reference architecture diagram, we will explore how the combination of Cloudflare's Secure Access Service Edge (SASE) platform and Microsoft's cloud applications and services can help you attain a Zero Trust security posture and accelerate cloud modernization and productivity while providing comprehensive security for hybrid work. By leveraging Cloudflare's secure network access, risky user isolation, and application and data visibility, organizations can consolidate management through a unified interface and enable secure access to any resource, regardless of location.

### Who is this document for and what will you learn?

This reference architecture is designed for IT or security professionals with some responsibility over or familiarity with their organization's Microsoft deployments. It is designed to help you understand the different ways in which Microsoft and Cloudflare can be integrated together in terms of your Zero Trust and SASE programs.

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)
* Solution Brief: [Cloudflare One ↗](https://cfl.re/SASE-SSE-platform-brief) (3 minute read)
* Whitepaper: [Reference Architecture for Internet-Native Transformation ↗](https://cfl.re/internet-native-transformation-wp) (10 minute read)
* Blog: [Zero Trust, SASE, and SSE: foundational concepts for your next-generation network ↗](https://blog.cloudflare.com/zero-trust-sase-and-sse-foundational-concepts-for-your-next-generation-network/) (14 minute read)

Those who read this reference architecture will learn:

* How Cloudflare and Microsoft can be integrated together to protect users, devices, applications and networks from a Zero Trust perspective

This document is also accompanied by a reference architecture with a more in-depth look at [Cloudflare and SASE](https://developers.cloudflare.com/reference-architecture/architectures/sase/).

While this document examines Cloudflare at a technical level, it does not offer fine detail about every product in the platform. Visit the [developer documentation ↗](https://developers.cloudflare.com/) for further information specific to a product area or use case.

## Integration of Cloudflare with Microsoft

Cloudflare's [Zero Trust Network Access ↗](https://www.cloudflare.com/zero-trust/products/access/) (ZTNA) provides a faster and safer alternative to traditional VPNs. It replaces on-premises VPN infrastructure and protects any application, regardless of whether it is hosted in an on-premises network, public cloud, or as Software as a Service (SaaS). By integrating with Microsoft Intune and Microsoft Entra ID (formerly Azure Active Directory), Cloudflare's ZTNA service enables organizations to enforce default-deny, Zero Trust rules and provide conditional access to internal resources based on user identity and device posture.

Microsoft and Cloudflare can be integrated in the following ways.

* Using Microsoft [Entra ID ↗](https://learn.microsoft.com/en-us/entra/fundamentals/whatis) for authentication to all Cloudflare protected resources
* Leveraging Microsoft [Intune ↗](https://learn.microsoft.com/en-us/mem/intune/fundamentals/what-is-intune) device posture in Cloudflare policies to ensure only managed, trusted devices have access to protected resources
* Using Cloudflare [CASB](https://developers.cloudflare.com/cloudflare-one/integrations/cloud-and-saas/) to inspect your [Microsoft 365 ↗](https://www.microsoft.com/en-us/microsoft-365/what-is-microsoft-365) tenants and alert on security findings for incorrectly configured accounts and shared files containing sensitive data
* Using Cloudflare's [Secure Web Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) to control access to Microsoft SaaS applications such as Outlook, OneDrive and Teams
* Using Cloudflare's [Email security](https://developers.cloudflare.com/email-security/) service to increase protection of email from phishing attacks and business email compromise.

### Microsoft Entra ID with Cloudflare

Cloudflare's integration with Entra ID allows you to leverage your identities in Entra for authentication to any Cloudflare protected application. Groups can also be imported via SCIM to be used in access policies, simplifying management and abstracting access control by managing group membership in Entra ID.

* Entra ID enables administrators to create and enforce policies on both applications and users using Conditional Access policies.
* It offers a wide range of parameters to control user access to applications, such as user risk level, sign-in risk level, device platform, location, client apps, and more.
* Security teams can define their security controls in Entra ID and enforce them at the network layer, for every request, with Cloudflare's ZTNA service.

![Figure 1: Microsoft Entra ID integrates with Cloudflare for ZTNA access to SaaS and self hosted applications.](https://developers.cloudflare.com/_astro/cloudflare-sase-with-microsoft-fig1.DLUixQrQ_Z1qvAIq.svg "Figure 1: Microsoft Entra ID integrates with Cloudflare for ZTNA access to SaaS and self hosted applications.")

Figure 1: Microsoft Entra ID integrates with Cloudflare for ZTNA access to SaaS and self hosted applications.

### Microsoft Intune with Cloudflare

Cloudflare is able to enforce access policies that include information about device posture. Intune can be integrated into Cloudflare so that information about Intune managed and protected devices can be used to enforce access control to Cloudflare protected resources.

* With a device connected using our [agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/), Cloudflare's ZTNA service can leverage the enhanced telemetry and context provided by Intune regarding a user's device posture and compliance state.
* Intune provides detailed information about the security status and configuration of user devices, enabling more informed access control decisions.
* This integration allows administrators to ensure that only compliant and secure devices are granted access to critical networks and applications.

![Figure 2: Using Intune and Cloudflare device posture data for secure application access.](https://developers.cloudflare.com/_astro/cloudflare-sase-with-microsoft-fig2.B-u59e7U_Z1vBimS.svg "Figure 2: Using Intune and Cloudflare device posture data for secure application access.")

Figure 2: Using Intune and Cloudflare device posture data for secure application access.

### Cloudflare CASB for Microsoft 365

As companies adopt numerous SaaS applications, maintaining consistent security, visibility, and performance becomes increasingly difficult. With each application having unique configurations and security requirements, IT teams face challenges in staying compliant and protecting sensitive data across the diverse landscape.

Cloudflare CASB (Cloud Access Security Broker) addresses these challenges by providing extensive visibility across Microsoft 365 and other popular SaaS applications. This visibility enables organizations to quickly identify misconfigurations, exposed files, user access, and third-party access, ensuring a secure and compliant SaaS environment.

Learn more about how our CASB solution can [protect data at rest here](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-at-rest/).

### Cloudflare's Secure Web Gateway for improved security to Microsoft SaaS applications

Cloudflare's Secure Web Gateway (SWG) can help organizations achieve safe and secure access to Microsoft 365 in the following ways:

1. Traffic inspection and filtering: Cloudflare's SWG inspects all user and device traffic destined for the Internet, including traffic to Microsoft 365. This allows organizations to apply security policies, content filtering, and threat prevention measures to ensure that only legitimate and authorized traffic reaches Microsoft 365 services. As seen above, policies can be designed so that only managed, secure devices can access any part of the Microsoft 365 and Azure platform.
2. Data protection with DLP profiles: Traffic is not only inspected based on device posture and identity information, but our DLP engine can also examine the content of the request and allow/block downloads/uploads of confidential information to and from Microsoft 365 and Azure.
3. Enforce Cloudflare Gateway: Microsoft 365 can be configured to accept user traffic only from a specific range of IP addresses, and Cloudflare can assign dedicated egress IP addresses to all traffic leaving the SWG. Organizations can therefore configure Microsoft 365 to accept only traffic from that range, ensuring that all traffic has been inspected and approved by Cloudflare's security policies before reaching Microsoft 365.
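As a concrete illustration of point 2, a Gateway HTTP policy can block uploads matching a DLP profile when the destination is a Microsoft 365 domain. The payload below is a sketch only: the field names, filter expression syntax, and profile UUID are illustrative assumptions, so check the Gateway policies API reference for the exact schema.

```python
import json

# Sketch of a Gateway HTTP policy blocking DLP-matched uploads to
# Microsoft 365 domains. Expression syntax, field names, and the
# profile UUID are placeholders, not a verified schema.
dlp_policy = {
    "name": "Block confidential uploads to Microsoft 365",
    "action": "block",
    "filters": ["http"],
    "traffic": (
        'any(dlp.profiles[*] in {"PROFILE_UUID"}) '
        'and any(http.request.domains[*] in {"sharepoint.com" "onedrive.com"})'
    ),
}

payload = json.dumps(dlp_policy)
```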

By leveraging Cloudflare SWG as a secure gateway for Microsoft 365 access, organizations can benefit from advanced threat protection, granular access controls, traffic inspection, and centralized visibility, ensuring a safe and secure experience for their users while mitigating risks and maintaining compliance.

### Cloudflare's Email security for improved email protection

Phishing is the root cause of upwards of 90% of breaches that lead to financial loss and brand damage. Cloudflare's email security solution sits in front of all email going to your Microsoft 365 tenant, filtering out spam, bulk, malicious, and spoofed content. The solution can leverage Microsoft [rules for quarantine actions](https://developers.cloudflare.com/email-security/deployment/inline/setup/office-365-area1-mx/use-cases/four-user-quarantine-admin-quarantine/), allowing you to fine-tune how different email detections are handled.

![Figure 3: Cloud email security protects all Microsoft 365 inboxes.](https://developers.cloudflare.com/_astro/cloudflare-sase-with-microsoft-fig3.B5Jderoc_F6Odd.svg "Figure 3: Cloud email security protects all Microsoft 365 inboxes.")

Figure 3: Cloud email security protects all Microsoft 365 inboxes.

It is also possible to configure cloud email security to scan [Microsoft 365 inboxes via API](https://developers.cloudflare.com/email-security/deployment/api/), avoiding the need to make changes to existing DNS records.

## Summary

By leveraging Cloudflare and its integrations with Microsoft, organizations can establish a Zero Trust security posture that goes beyond the limitations of traditional network security models. With Cloudflare's Zero Trust Network Access (ZTNA), organizations can replace self-hosted VPNs and enforce conditional access based on user identity and device posture. The integration with Microsoft Entra ID allows for authentication and access control, while Microsoft Intune provides device posture information. Additionally, Cloudflare's CASB offers visibility into the security of Microsoft 365 configuration, the Secure Web Gateway inspects and filters traffic to Microsoft 365, and Email security protects against phishing attacks, ensuring a secure and compliant environment. This approach enables faster and more secure access to applications, while providing granular control over user access based on identity and device posture.

![Figure 4: A summary of Cloudflare SASE and Microsoft integrations.](https://developers.cloudflare.com/_astro/cloudflare-sase-with-microsoft-fig4.DEjQxEbH_ZdDpCU.svg "Figure 4: A summary of Cloudflare SASE and Microsoft integrations")

Figure 4: A summary of Cloudflare SASE and Microsoft integrations

## Related resources

* [Overview of Microsoft and Cloudflare partnership ↗](https://www.cloudflare.com/partners/technology-partners/microsoft/)
* [Set up Microsoft Entra ID (formerly Azure Active Directory) as an identity provider](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/entra-id/#set-up-entra-id-as-an-identity-provider)


---

---
title: Enhancing security posture with SentinelOne and Cloudflare One
description: The integration between Cloudflare One and SentinelOne provides organizations with a comprehensive security solution. The integration works through a service-to-service posture check that identifies devices based on their serial numbers.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enhancing security posture with SentinelOne and Cloudflare One

**Last reviewed:**  11 months ago 

## Introduction

The integration between Cloudflare One and SentinelOne provides organizations with a comprehensive security solution that combines endpoint protection with [Zero Trust Network Access ↗](https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/). This integration enables organizations to make access decisions based on device security posture, ensuring that only healthy and compliant devices can access protected resources. This reference architecture describes how organizations can implement and leverage this integration to enhance their security posture. The integration can also help advance an organization's or agency's Zero Trust maturity, with the goal of eventually achieving Advanced or Optimal across all five pillars of [CISA's Zero Trust Maturity Model ↗](https://www.cisa.gov/sites/default/files/2023-04/CISA%5FZero%5FTrust%5FMaturity%5FModel%5FVersion%5F2%5F508c.pdf).

## Who is this document for and what will you learn?

This reference architecture is designed for IT and security professionals who are implementing or planning to implement a Zero Trust security model using Cloudflare and SentinelOne. It provides detailed guidance on integration setup, configuration options, and common deployment scenarios. To build a stronger baseline understanding of these technologies, we recommend reviewing both platforms' core documentation.

Recommended resources for a stronger understanding of Cloudflare's SentinelOne integration:

* [SentinelOne device posture integration](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/sentinelone/)

## Integration overview

Cloudflare One can integrate with SentinelOne to enforce device-based access policies for applications and resources. The integration works through a service-to-service posture check that identifies devices based on their serial numbers. This allows organizations to ensure that only managed and secure devices can access sensitive resources.

## Technical components

### SentinelOne components

The SentinelOne platform provides critical endpoint security capabilities. The SentinelOne agent must be deployed on all managed devices and provides real-time security monitoring and threat detection. Key posture data points include:

* Infection status of the device
* Number of active threats detected
* Agent activity status
* Network connectivity status
* Operational state of the agent

The SentinelOne Management Console provides centralized control and visibility, including the APIs necessary for integration with Cloudflare.

### Cloudflare components

Cloudflare's Zero Trust infrastructure provides the policy enforcement layer:

The Cloudflare One Client must be deployed alongside the SentinelOne agent on managed devices. This client creates the secure connection to Cloudflare's network and enables device posture checking.

The Cloudflare dashboard provides the configuration interface for:

* Service provider integration settings
* Device posture policies
* Access policies that incorporate device posture checks

## Implementation architecture

### Authentication and authorization flow

![Figure 1: SentinelOne is used in Cloudflare policies as part of authorization flow.](https://developers.cloudflare.com/_astro/figure1.DqycNoJs_Z20rPBo.svg "Figure 1: SentinelOne is used in Cloudflare policies as part of authorization flow.")

Figure 1: SentinelOne is used in Cloudflare policies as part of authorization flow.

When a user attempts to access a protected resource, the following sequence occurs:

1. The user's device connects to Cloudflare's network through the Cloudflare One Client.
2. Cloudflare queries the SentinelOne API to check the device's security posture.
3. The SentinelOne platform returns current device status including infection state, threats, and agent health.
4. Cloudflare evaluates this information against configured policies.
5. Access is granted or denied based on policy evaluation.
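The policy evaluation in steps 3 through 5 can be thought of as a predicate over the posture attributes listed earlier. The sketch below is illustrative only; the field names (`infected`, `active_threats`, `agent_active`, `network_status`) are placeholders, not the exact keys returned by the SentinelOne API:

```python
from typing import Any, Dict

def is_device_compliant(agent: Dict[str, Any]) -> bool:
    """Evaluate posture attributes against a simple example policy.

    Field names are illustrative placeholders, not the exact keys
    returned by the SentinelOne API.
    """
    return (
        not agent.get("infected", True)              # device must not be infected
        and agent.get("active_threats", 1) == 0      # no unresolved threats
        and agent.get("agent_active", False)         # agent is operational
        and agent.get("network_status") == "connected"  # agent can report in
    )

healthy = {"infected": False, "active_threats": 0,
           "agent_active": True, "network_status": "connected"}
compromised = dict(healthy, infected=True)
```

A device failing any single check would be denied during policy evaluation in step 5.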

### Integration setup

The integration requires specific configuration steps:

First, a service account must be created in SentinelOne with appropriate permissions. This involves generating an API token and noting the REST API URL for your instance.

Next, SentinelOne must be configured as a service provider in the Cloudflare Zero Trust dashboard. This includes:

* Providing the API token and REST API URL
* Setting an appropriate polling frequency
* Testing the connection to ensure proper communication

Finally, device posture checks must be configured to define the security requirements for access. For detailed setup instructions, refer to [SentinelOne device posture integration](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/sentinelone/).
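The connection test in the steps above amounts to verifying that the API token can reach your instance's REST API. A hedged sketch of assembling that request follows; the `/web/api/v2.1/system/info` path and the `ApiToken` authorization scheme are assumptions based on SentinelOne's public API documentation, so confirm them against your own console's API reference:

```python
def build_connection_test(console_url: str, api_token: str) -> dict:
    """Assemble the HTTP request used to verify SentinelOne connectivity.

    The endpoint path and "ApiToken" scheme are assumptions drawn from
    SentinelOne's public API docs; verify against your console before use.
    """
    return {
        "method": "GET",
        "url": console_url.rstrip("/") + "/web/api/v2.1/system/info",
        "headers": {"Authorization": f"ApiToken {api_token}"},
    }

req = build_connection_test("https://example-tenant.sentinelone.net/", "YOUR_API_TOKEN")
```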

## Security capabilities

### Device posture verification

The integration enables robust device security verification through multiple attributes:

Infection Status monitoring ensures that compromised devices cannot access sensitive resources. Active Threat Detection prevents devices with ongoing security incidents from maintaining access. Agent Health Monitoring confirms that the security stack remains functional and properly configured.

### User risk detection

SentinelOne provides [endpoint detection and response (EDR) ↗](https://www.sentinelone.com/cybersecurity-101/endpoint-security/what-is-endpoint-detection-and-response-edr/) signals that help determine user risk scores. This allows organizations to identify and manage users who may present security risks, enabling proactive security measures before incidents occur.

## Core architecture

![Figure 2: SentinelOne and Cloudflare Zero Trust technical architecture.](https://developers.cloudflare.com/_astro/figure2.BaY3MgFK_Z1A6Acu.svg "Figure 2: SentinelOne and Cloudflare Zero Trust technical architecture.")

Figure 2: SentinelOne and Cloudflare Zero Trust technical architecture.

_Note: Labels in this image may reflect a previous product name._

The integration architecture begins at the managed endpoint device level, where two critical components coexist. The SentinelOne agent serves as the primary security enforcer, continuously monitoring the device for threats, assessing device health, and providing real-time security status updates. Alongside it, the Cloudflare One Client establishes secure connectivity and manages the device's interaction with Cloudflare's Zero Trust infrastructure. These components work in tandem to ensure both endpoint security and secure network access.

When a user attempts to access protected resources, the architecture initiates a sophisticated verification process. The Cloudflare One Client first establishes a secure tunnel to Cloudflare's global network, creating an encrypted channel for all communications. This connection ensures that all traffic between the device and protected resources remains secure and can be properly evaluated against security policies.

### Cloudflare Zero Trust platform operations

At the heart of the architecture lies the Cloudflare Zero Trust platform, which consists of three main engines working in concert. The **Device Posture Engine** serves as the first line of defense, actively querying the SentinelOne platform to verify the device's security status. It checks multiple attributes including infection status, active threats, agent health, and network connectivity state. This information forms the foundation for access decisions.

The **Access Policy Engine** then takes this device posture information and combines it with other contextual factors to make access decisions. It evaluates predefined policies that can include criteria such as device security status, user identity, location, and other risk factors. This engine ensures that only devices meeting all security requirements can access protected resources.

The **Secure Web Gateway** adds another layer of protection by filtering all traffic, preventing access to malicious sites, and enforcing data loss prevention policies. This component ensures that even after access is granted, all traffic is continuously monitored and protected.

### SentinelOne platform integration

The SentinelOne platform plays a crucial role in this architecture through three main components. The **Management Console** provides centralized control over all endpoints, allowing security teams to configure policies, monitor device status, and respond to security events. The **API Services** component facilitates real-time communication with Cloudflare, providing critical security information about managed devices.

The **Security Analytics** component continuously processes security telemetry from all endpoints, identifying threats, assessing risks, and providing detailed security insights. This information flows to Cloudflare through **API Services**, enabling dynamic access decisions based on the latest security intelligence.

### Authentication and access flow

When a user requires access to protected resources, the architecture follows a specific flow:

First, the device's security status is evaluated through the **SentinelOne agent**, which reports detailed health and security information to the SentinelOne platform. Simultaneously, the **Cloudflare One Client** initiates the access request to Cloudflare's Zero Trust platform.

Next, Cloudflare's **Device Posture Engine** queries the SentinelOne platform through its **API Services** to verify the device's security status. This check includes all current security metrics, threat status, and compliance information. The **Access Policy Engine** then evaluates this information against defined security policies.

If all security requirements are met, access is granted through the secure tunnel established by the Cloudflare One Client. Throughout the session, continuous monitoring ensures that any change in device security status can trigger immediate reevaluation of access permissions.

### Security and monitoring capabilities

The architecture provides comprehensive security through multiple mechanisms. At the endpoint level, the SentinelOne agent provides advanced threat detection and response capabilities. The **Security Analytics** component processes this security telemetry in real-time, enabling quick identification of threats and security issues.

Cloudflare's **Secure Web Gateway** provides network-level protection, filtering traffic and preventing access to malicious resources. This component works in conjunction with the **Access Policy Engine** to ensure that all traffic, both to internal and external resources, meets security requirements.

## Operational benefits

This integrated architecture delivers several key operational benefits. It enables organizations to implement true Zero Trust access control, where every access request is verified based on current security status. The integration between SentinelOne and Cloudflare provides seamless security enforcement, combining endpoint protection with network-level access control.

The architecture also supports dynamic policy enforcement, where changes in device security status can automatically trigger access restrictions. This ensures that compromised or non-compliant devices can be quickly isolated from sensitive resources, maintaining organizational security.

## Deployment considerations

### Network architecture

Organizations should consider their network architecture when implementing this integration. Key factors include:

* Distribution of endpoints across different networks
* Bandwidth and latency requirements for posture checks
* Integration with existing security tools and workflows

The integration between Cloudflare One and SentinelOne requires thoughtful planning to ensure successful implementation. At its foundation, organizations need to prepare their environment by having the SentinelOne agent and Cloudflare One Client deployed on all devices that will be subject to posture checks. This foundational step ensures that both security monitoring and secure network connectivity are in place before building additional security controls.

When implementing the integration, organizations should approach it as a service provider relationship where SentinelOne acts as a trusted source of device security information. This relationship is established through secure API communications, with careful attention paid to proper credential management and regular verification of the connection between the platforms. The integration relies on SentinelOne's ability to provide real-time device security status, which Cloudflare then uses to make access decisions.

### Policy design

Effective policy design is crucial for security and usability. Consider implementing policies that:

* Start with basic hygiene requirements and gradually increase security requirements
* Account for different user roles and access needs
* Include fallback options for exceptional circumstances

Policy configuration represents another crucial aspect of the deployment. Organizations can leverage SentinelOne's detailed device posture information to create nuanced access policies. These policies can take into account multiple factors such as device infection status, active threats, and agent health. By monitoring these various attributes, organizations can ensure that only devices meeting their security requirements can access protected resources.

Regular testing and monitoring play vital roles in maintaining the effectiveness of the integration. Through Cloudflare's logging and testing capabilities, organizations can verify that posture checks are functioning as intended and that policies are being enforced correctly. This ongoing verification helps ensure that the security benefits of the integration are consistently realized.

## Conclusion

The integration between Cloudflare One and SentinelOne provides organizations with a powerful tool for implementing Zero Trust security principles. By combining endpoint protection with access control, organizations can ensure that only secure and compliant devices can access sensitive resources. This approach significantly reduces the risk of compromised devices accessing corporate resources while maintaining user productivity through seamless authentication and authorization processes.

## Related resources

* [Overview of SentinelOne and Cloudflare partnership ↗](https://www.cloudflare.com/partners/technology-partners/sentinelone/)


---

---
title: Understanding Email Security Deployments
description: This reference architecture describes the key architecture of Cloudflare Email security.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Understanding Email Security Deployments

**Last reviewed:**  4 months ago 

## Introduction

Email continues to be a mission-critical method of communication between people and organizations. This also makes email an ideal channel for attackers to exploit in their attempts to take over accounts, steal data, and gain access to internal systems. Over 90% of cybersecurity incidents begin with an email attack, so being able to reduce spam and defeat phishing and malware attacks is critical for the security of your organization.

Cloudflare's Email security service is a market-leading solution that can be deployed in a variety of ways to support the different needs of each organization. This document outlines the different methods to deploy Email security and why you would choose a specific model.

## Strengthen your email infrastructure with Cloudflare Email security

Email remains a critical communication channel for businesses of all sizes. However, email also serves as a prime target for cyber attacks, including phishing, spam, and malware. To safeguard your organization's sensitive data and reputation, a robust email security solution is essential.

Cloudflare Email security offers a comprehensive suite of tools and technologies designed to protect your email infrastructure from a wide range of threats. By implementing Cloudflare Email security, you can significantly enhance your organization's security posture and mitigate the risks associated with email-borne attacks.

This reference architecture provides a detailed overview of how to deploy and configure Cloudflare Email security to optimize your email security posture. It delves into the key components and best practices that ensure seamless integration of this solution into your existing IT infrastructure.

### Who is this reference architecture for and what will you learn?

This reference architecture is designed for IT and security professionals who are looking to use Cloudflare to secure aspects of their business. It is written for a broad audience, including:

* **IT security professionals**: Security engineers, architects, and administrators responsible for designing, implementing, and managing Email security solutions.
* **Network engineers**: Network engineers who manage network infrastructure and email gateways.
* **Cloud architects**: Cloud architects who design and implement cloud-based Email security solutions.
* **Security and IT decision-makers**: Managers and executives who need to understand the technical aspects of Email security and make informed decisions.

Whether you are a seasoned security expert or a newcomer to Email security, this document will provide you with the necessary information to effectively deploy and manage Cloudflare Email security.

To build a stronger understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (five-minute read) or [Video ↗](https://www.cloudflare.com/what-is-cloudflare/video) (two minutes)
* [Cloudflare Blog ↗](https://blog.cloudflare.com/tag/cloud-email-security/) | [Email security ↗](https://blog.cloudflare.com/tag/cloud-email-security/) and [Phishing ↗](https://blog.cloudflare.com/tag/phishing/)
* CISA | [Phishing Guidance: Stopping the Attack Cycle at Phase One ↗](https://www.cisa.gov/publications/phishing-guidance-stopping-attack-cycle-phase-one)

By the end of this reference architecture, you will have learned how Cloudflare protects your email and what considerations should be made for choosing how to deploy. You will learn about the specific components, technologies, and configurations involved in the Cloudflare Email security solution. This includes how it integrates with existing email infrastructure and leverages cloud-based services.

## Email security deployment options

Cloudflare Email security is a modern approach to solving phishing attacks. Cloudflare's solution is built upon AI and machine learning running on elastic services, in addition to benefiting from Cloudflare's expansive threat intelligence network. Cloudflare Email security was designed as the only true cloud-elastic service with shared intelligence and [supervised ML ↗](https://www.ibm.com/think/topics/supervised-learning) capable of any deployment method available for email. However, choosing the right deployment model is crucial for maximizing the benefits of Email security.

This document will discuss the following methods to deploy and where you would use them:

* [Inline or MX](https://developers.cloudflare.com/cloudflare-one/email-security/setup/pre-delivery-deployment/mx-inline-deployment/)
* [Microsoft 365 API integration](https://developers.cloudflare.com/cloudflare-one/email-security/setup/post-delivery-deployment/api/m365-api/)
* [Journaling](https://developers.cloudflare.com/cloudflare-one/email-security/setup/post-delivery-deployment/bcc-journaling/journaling-setup/m365-journaling/) or [BCC](https://developers.cloudflare.com/cloudflare-one/email-security/setup/post-delivery-deployment/bcc-journaling/bcc-setup/gmail-bcc-setup/enable-gmail-integration/) with auto-move
* Mixed deployment

## Choose a deployment model

Before you choose a deployment option, it is important to consider your needs and desired experience. Our best practice is typically to go with an MX deployment when Cloudflare is the primary phishing protection in place. The key reasons for this are as follows:

* [Pre-delivery](https://developers.cloudflare.com/cloudflare-one/email-security/setup/pre-delivery-deployment/mx-inline-deployment/) remediation allows us to tune how messages are delivered by appending to the subject/body, applying URL Rewriting to Cloudflare [Remote Browser Isolation](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/), and delivering messages into the junk folder or downstream email quarantine. This enables you to design with a specific user experience in mind.
* We can remove messages before they are consumed by systems that ingest emails such as a ServiceNow or an Archiving Solution.
* We remove the risk of dwell time issues where there is a time difference between delivery to the inbox and when the message is moved from the inbox.
* We can support mixed deployments such as a mix of Microsoft 365 and Microsoft Exchange or Microsoft 365 and Google Workspace.

If those needs are not important or you are using layered security that does not include another API-based solution, then our API method is quick and efficient to deploy with no changes to your mail flow. If you want the benefits of API without the risk of API Throttling, then Journal/BCC is the best choice as the ingestion method does not use API calls. However, if you want the protection of an MX deployment along with the benefits of API for internal messaging, then our mixed deployment is ideal.

Should your needs change, know that you have the flexibility to change deployment methods as you see fit without having to repurchase our solution. The only caveat is that Advantage and CyberSafe customers are limited to Inline deployments while Enterprise licensing benefits from all capabilities.

Before you commit to a specific deployment, Cloudflare suggests you review all of the options, weigh your needs, and consult with your account team as needed.

## Deployment options

### Inline

With an Inline deployment, all emails destined for one or more domains are filtered through Cloudflare before they are delivered to the user's inbox. Cloudflare can be deployed anywhere in your email processing chain. When deployed as the first hop, you will need to update the domain's DNS MX records to point to Cloudflare. If you prefer Cloudflare to inspect messages after your existing SEG (Secure Email Gateway), Cloudflare can be inserted as a hop in the processing chain, and will then forward processed messages downstream to the next hop. Based on policies, messages are blocked and/or quarantined if they are marked as Spam, Malicious, Bulk, and more.

![Inline deployment](https://developers.cloudflare.com/_astro/Inline_MX.W7ooc9mD_1wuIMp.svg) 

The diagram above describes the following:

1. Email arrives at Cloudflare based on [MX records ↗](https://www.cloudflare.com/en-gb/learning/dns/dns-records/dns-mx-record/).
2. Cloudflare inspects email body, header, and attachments and assigns the appropriate disposition:  
   * Malicious  
   * Spam  
   * Bulk  
   * Suspicious  
   * Spoof  
   * Clean
3. Apply any policy, such as allowing or blocking certain domains.
4. Quarantine high-risk emails.
5. All messages that received a [disposition](https://developers.cloudflare.com/cloudflare-one/email-security/reference/dispositions-and-attributes/#dispositions) by Cloudflare will have the header `X-CFEmailSecurity-Disposition` added. This header can be used by downstream systems to enact any special handling (rerouting, external quarantining, and more).
6. Forward on all valid email traffic.
7. Subject and/or body modifications can be applied to the messages to add visible information for the end-user about the disposition.
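Step 5 above notes that downstream systems can key off the `X-CFEmailSecurity-Disposition` header. A minimal downstream check might look like the following, using Python's standard library; the routing decisions themselves are hypothetical examples, not Cloudflare defaults:

```python
from email import message_from_string

def route_for_disposition(raw_message: str) -> str:
    """Decide downstream handling from the Cloudflare disposition header."""
    msg = message_from_string(raw_message)
    disposition = (msg.get("X-CFEmailSecurity-Disposition") or "CLEAN").upper()
    # Hypothetical routing rules; real handling belongs in your MTA or SEG.
    if disposition in ("MALICIOUS", "SPAM"):
        return "quarantine"
    if disposition in ("SUSPICIOUS", "SPOOF", "BULK"):
        return "junk-folder"
    return "inbox"

RAW = (
    "From: sender@example.com\n"
    "To: user@example.net\n"
    "Subject: Quarterly report\n"
    "X-CFEmailSecurity-Disposition: SUSPICIOUS\n"
    "\n"
    "Please review the attached figures.\n"
)
```

A message with no disposition header falls through to normal inbox delivery.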

From a security perspective, the Inline deployment is the preferred method of deployment because it scans every email and stops malicious content before it reaches the user's inbox, greatly reducing users' exposure to risk.

#### Benefits of Inline deployment

* Messages are processed and blocked before delivery to the user mailbox.
* Inline deployment allows you to modify the message, adding subject or body mark-ups such as appending \[SPAM\] or \[EXTERNAL SPAM\] to the subject.
* Provides high availability and adaptive message pooling: Cloudflare continues to accept and queue incoming emails even when downstream services are unavailable. When the downstream services are restored, queued messages resume delivery.
* Messages with an assigned [disposition](https://developers.cloudflare.com/cloudflare-one/email-security/reference/dispositions-and-attributes/#dispositions) that are not quarantined receive an `X-header` that may be used for advanced handling downstream.
* Compatible with all mail systems including Microsoft Exchange On-Prem, Postfix, Lotus Notes, Google Workspace, Microsoft 365, and more.

#### Considerations

Before deploying Email security via Inline deployment, you will need to consider the following:

1. Redirecting deployments where mail flows into Microsoft Exchange or Microsoft 365 first, then to an Email security solution by way of mail flow rules for scanning/remediation, and then back into Microsoft 365 is not supported by Microsoft. While Cloudflare is technically capable of this deployment, it creates attribution (recognizing the original sender) and delivery issues.
2. If Cloudflare is going to be the MX, this will require DNS changes. If there are many domains, each DNS zone needs to be updated.
3. Inline deployment can increase complexity in the SMTP architecture if Cloudflare is not deployed as the MX, such as Inline behind a traditional SEG (Mimecast/Proofpoint).
4. Inline deployment may require policy duplication on multiple solutions and the MTA. For example, Cloudflare, SEGs, and MTA treat allow policies in significantly different ways and may all need exception handling for the same message.
5. In a layered deployment, some vendors such as Mimecast and Barracuda can only function as the MX. In this scenario, you would configure Cloudflare Inline behind those vendors.
6. When using Mimecast, it is recommended to disable URL Rewriting as it makes it impossible for Cloudflare to decode and crawl URLs. If this feature remains enabled, our link following capabilities are limited to domain reputation and age.
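Consideration 2 above notes that pointing MX records at Cloudflare means updating every DNS zone. Where many domains are involved, a simple audit helps. This sketch checks whether each zone's MX hosts already match an expected suffix; the suffix shown is a placeholder (substitute the MX hostnames assigned during your onboarding), and the zone-to-MX mapping would come from your own DNS tooling:

```python
from typing import Dict, List

# Placeholder: substitute the MX hostnames assigned during onboarding.
EXPECTED_SUFFIX = ".mx.cloudflare.example"

def zones_needing_update(zone_mx: Dict[str, List[str]],
                         expected_suffix: str = EXPECTED_SUFFIX) -> List[str]:
    """Return zones whose MX hosts do not all end with the expected suffix."""
    return sorted(
        zone
        for zone, hosts in zone_mx.items()
        if not hosts
        or not all(h.lower().rstrip(".").endswith(expected_suffix) for h in hosts)
    )

audit = {
    "example.com": ["mail1.mx.cloudflare.example."],
    "example.org": ["legacy-seg.example.org."],
}
```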

#### Inline (Cisco Connector)

Cisco offers a unique capability to integrate with Cloudflare using a connector as MX or behind Cloudflare with a supportable Hairpin deployment. This deployment functions the same as Inline in all other considerations. Refer to Cisco as MX Record and Cisco - Email security as MX Record.

### API

An alternative approach is to integrate via the Graph [API](https://developers.cloudflare.com/cloudflare-one/email-security/setup/post-delivery-deployment/api/m365-api/) in Microsoft 365. In this model, emails are delivered directly to the user inbox, where Cloudflare then receives copies of messages, scans them, and moves them as configured by [disposition](https://developers.cloudflare.com/cloudflare-one/email-security/reference/dispositions-and-attributes/#dispositions).

This is performed by subscribing to all user mailboxes on the authorized domains. During the authorization process, you can choose whether the scope is restricted to the inbox only or covers all folders. Upon delivery to the mailbox, the subscription triggers an action within Microsoft 365 that sends Cloudflare a copy of the email to be scanned and assigned a disposition. Once the disposition has been assigned, our solution will consult the [auto-move](https://developers.cloudflare.com/cloudflare-one/email-security/settings/auto-moves/) policy and perform the desired action.

![API deployment](https://developers.cloudflare.com/_astro/API.D-5LzkKL_6O1bn.svg) 

The diagram above describes the following:

1. An email is delivered directly to the user inbox via an existing route.
2. Cloudflare retrieves messages for inspection via email vendors API. Cloudflare inspects email body, header, and attachments and assigns the appropriate disposition:  
   * Malicious  
   * Spam  
   * Bulk  
   * Suspicious  
   * Spoof  
   * Clean
3. Apply any policy, such as allow or block certain domains.
4. Messages are moved per policy in the Cloudflare solution. The following actions are available:  
   * Inbox  
   * Junk  
   * Trash  
   * Soft Delete (User Recoverable)  
   * Hard Delete (Admin Recoverable)

Under normal circumstances, this process typically completes within 2-3 seconds from inbox delivery to the move request. There is no SLA from Google or Microsoft 365 on how long they will take to perform the action. If the move action is not successful, our solution will retry repeatedly, every five minutes.
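The auto-move behavior described above can be pictured as a simple mapping from disposition to the actions listed in step 4. The mapping below is an illustrative example only; the actual policy is configured in the Cloudflare dashboard, not in code:

```python
# Illustrative pairings of dispositions (step 2) to move actions (step 4).
AUTO_MOVE_POLICY = {
    "MALICIOUS": "hard_delete",  # admin recoverable
    "SPAM": "junk",
    "BULK": "junk",
    "SUSPICIOUS": "junk",
    "SPOOF": "soft_delete",      # user recoverable
    "CLEAN": "inbox",
}

def move_action(disposition: str) -> str:
    """Look up the configured action, defaulting to leaving mail in the inbox."""
    return AUTO_MOVE_POLICY.get(disposition.upper(), "inbox")
```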

#### Benefits of API deployment

* Easy way to add protection in complex email architectures with no changes to mail flow operations.
* Agentless deployment for Microsoft 365.
* Microsoft 365 Defender/ATP operates on the message first.
* This method can be used for a Proof of Value to collect and report on emails without requiring changes to mail flow. In this scenario, you would leave the remediation policy unconfigured so that no actions are taken.

#### Considerations

Before deploying Email security via [API deployment](https://developers.cloudflare.com/cloudflare-one/email-security/setup/post-delivery-deployment/api/m365-api/), you will need to consider the following:

* Depending on the API infrastructure, Microsoft 365 or Google outages and maintenance windows will increase message dwell time in the inbox as emails cannot be scanned or remediated until after delivery to the user. This is a limitation of all API vendors.
* Microsoft 365 may throttle API requests to the Graph API on a service-by-service basis; the Mail API falls under the Outlook service limits. These limits could be abused by a threat actor to effectively disable any API-based deployment, granting an additional window for attack. The limits are as follows:  
   * 10,000 API requests in a 10 minute period  
   * Four concurrent requests  
   * 150 megabytes (MB) upload (PATCH, POST, PUT) in a five-minute period  
   * Refer to [Outlook service limits ↗](https://learn.microsoft.com/en-us/graph/throttling-limits#outlook-service-limits)
* The Gmail API is subject to a daily usage limit that applies to all requests made from your application, as well as per-user rate limits. Each limit is expressed in quota units, an abstract unit of measurement representing Gmail resource usage. The main request limits are as follows:  
   * Per-user rate limit of 250 quota units per user per second, as a moving average (short bursts are allowed).  
   * Per-method quota usage depends on the method called; for example, `messages.get` and `messages.attachments.get` each consume five quota units. Refer to [Per-method quota usage ↗](https://developers.google.com/gmail/api/reference/quota#per-method%5Fquota%5Fusage).
* Requires read/write access into mailboxes which some security/email teams may not allow.
* Only Microsoft 365 has true API support. Google allows for API remediation but still requires a Compliance Rule to deliver emails using SMTP for scanning. On-prem Exchange requires PowerShell and does not have APIs for auto-moves.
* Messages cannot be modified after delivery as per Microsoft 365/Google requirements. This means we cannot perform URL Rewriting to Cloudflare [email link isolation](https://developers.cloudflare.com/cloudflare-one/email-security/investigation/search-email/#open-links) or append text to the email subject or body. Those features are only available using an Inline deployment.
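The Graph and Gmail limits above translate into straightforward client-side behavior: honor the `Retry-After` header when throttled, and budget quota units per user. A minimal sketch, assuming a simplified response shape rather than any particular HTTP client:

```python
def backoff_seconds(status_code: int, headers: dict,
                    attempt: int, base: float = 2.0) -> float:
    """Seconds to wait before retrying a throttled request (0 = no retry).

    Prefers a server-supplied Retry-After header on 429/503 responses,
    falling back to exponential backoff.
    """
    if status_code not in (429, 503):
        return 0.0
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)
    return base ** attempt  # 2, 4, 8, ... seconds

def max_calls_per_second(per_user_quota: int, units_per_call: int) -> int:
    """Sustainable per-user Gmail call rate under the quota-unit model."""
    return per_user_quota // units_per_call

# At 250 units/user/second and 5 units per messages.get call, a client can
# sustain 50 such calls per user per second (short bursts may exceed this,
# since the limit is a moving average).
```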

### BCC/Journaling

BCC/Journaling is very similar to API deployments, with the exception of how emails are delivered to Cloudflare. As with API deployments, the email is delivered to the mailbox first, but at the same time an account-specific email address is added to the email so a copy is transmitted via SMTP to Cloudflare for evaluation.

Once Cloudflare receives the email, it will scan it and determine its [disposition](https://developers.cloudflare.com/cloudflare-one/email-security/reference/dispositions-and-attributes/#dispositions). Once an email has a disposition, our solution will consult the API authorizations and [auto-move](https://developers.cloudflare.com/cloudflare-one/email-security/settings/auto-moves/) policy and perform the desired action. This method is less at risk of API throttling, as the APIs for Microsoft 365 and Google are only used to remediate emails.

![BCC/Journaling deployment](https://developers.cloudflare.com/_astro/Journaling_Diagram.yvbQDbEw_2qLegl.svg) 

During a proof of value, this deployment can be configured alongside any email security solution or mail platform that allows adding a BCC recipient, providing visibility into what those solutions are missing that Cloudflare would block.

#### Benefits of BCC/Journaling deployment

* Easy way to add protection in complex email architectures with no changes to mail flow operations.
* Agentless deployment for Microsoft 365. Microsoft 365 transmits emails to Cloudflare after delivery, and the API Authorization can be configured with a Remediation policy to move emails with a disposition out of the inbox.
* Google makes use of Compliance Rules for BCC which can be combined with an API Authorization to move emails after delivery. This provides for the same outcome as the API deployment detailed above.
* Microsoft 365 and Google operate on the message first. This provides a more layered approach taking advantage of the security capabilities of Microsoft 365/Google in addition to Cloudflare.
* You can control the scope of messages inspected (external, internal, or both).
* This method can be used for a Proof of Value to collect and report on emails without requiring changes to mail flow, and it does not require an API Authorization to be in place. If the API is configured for Microsoft 365 or Google, leave the Remediation policy unconfigured to prevent actions from being taken.

#### Considerations

Before deploying Email security via BCC/Journaling deployment, you will need to consider the following:

* Same limitations as the API deployment.
* Depends on Google or Microsoft 365 to deliver messages via SMTP.
* May require a Connector in Microsoft 365 to facilitate direct communication.
* Messages cannot be modified after delivery, per Microsoft 365/Google requirements. This means we cannot perform URL rewriting for Cloudflare Email Link Isolation or append text to the email subject or body. Those features are only available using an Inline deployment.

### Mixed

Mixed utilizes an Inline deployment for external emails and BCC/Journaling for internal emails. This is facilitated by using both deployment methods but configuring Cloudflare for two hops in BCC/Journal mode. This scenario provides all of the added benefits of an MX delivery for external messages, while also providing remediation of bad emails from internal sources. Here are some scenarios where this is helpful.

Consider mailboxes where emails are consumed by services such as ticketing systems, CRMs (Customer Relationship Management systems), or legal archiving. Each of these integrations runs the risk of malicious emails being delivered into those systems, where no API-based email security solution can remediate the problem. The only deployment capable of protecting you would be Inline. If you also had concerns about malware being spread internally or compromised accounts being used to phish users internally, you would have a gap requiring the purchase of both an Inline solution and an API solution. This would create other problems, as you may need to manage policies related to email delivery in three different solutions (MX, API, and Microsoft 365/Google).

Cloudflare's mixed deployment allows us to collapse all of those use cases into a single solution by allowing quarantining of messages at the Cloudflare edge in addition to evaluating internal email and removing them when needed. This improves security while decreasing vendor spend, management overhead, and risk due to the complexity of managing three different policy sets.

#### Benefits

A mixed deployment combines the benefits of Inline deployment for external emails with BCC/Journaling for internal emails.

#### Considerations

When you choose a mixed deployment, you need to consider the following:

* Internal email detections are limited due to a lack of information such as Email Authentication, Sending Server, and Delivery Path. Only the content within the body of the email can be analyzed.
* Internal emails may have a higher false positive rate when protecting users with the impersonation registry.

## Automated Post Delivery

Cloudflare offers automated workflows based on continuous analysis and submissions. These features enable Cloudflare to move messages using the API [auto-move](https://developers.cloudflare.com/cloudflare-one/email-security/settings/auto-moves/) policy after delivery. This is best paired with the phish submissions or third-party user submissions.

### Submission Handling

Cloudflare prioritizes administrator submissions of false positives and false negatives made through the Cloudflare dashboard. This approach enables faster review times and helps Cloudflare proactively identify and correct issues that may affect multiple users, improving the overall product experience. We recommend that administrators review user submissions, identify all related messages, and submit them as verified false positives/false negatives via the Cloudflare dashboard. These submissions are reviewed and used to improve machine learning models, detections, and engines.

## Summary

To summarize, Email security offers three core deployment models: API, BCC/Journaling, and Inline (or MX). Inline is the preferred deployment model as it filters and remediates malicious messages before they reach the user inbox, thereby removing dwell time risk and allowing for features like URL Rewriting and message modification.

API and BCC/Journaling models are post-delivery solutions, integrating directly with platforms like Microsoft 365 or Google Workspace to inspect and [auto-move](https://developers.cloudflare.com/cloudflare-one/email-security/settings/auto-moves/) emails after they have landed in the user mailbox. While these post-delivery methods are easier to deploy and require no mail flow changes, they face limitations such as API throttling risks and the inability to modify message content (like subjects or body text).

Finally, the mixed deployment combines the benefits of Inline for external email protection (critical for systems like CRM or ticketing that ingest email) with BCC/Journaling for internal email evaluation.


---

---
title: Load Balancing Reference Architecture
description: This reference architecture is for organizations looking to deploy both global and local traffic management load balancing solutions. It is designed for IT, web hosting, and network professionals with some responsibility over or familiarity with their organization's existing infrastructure.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Load Balancing Reference Architecture

**Last reviewed:** about 2 years ago

## Introduction

Cloudflare Load Balancing is a SaaS offering that allows organizations to host applications for a global user base while vastly reducing concerns of maintenance, failover, resiliency, and scalability. Using Cloudflare Load Balancing allows organizations to address the following challenges:

* Efficiently handling large volumes of incoming traffic, especially during unexpected surges or spikes.
* Ensuring applications and services remain accessible to users.
* Maintaining quick response times and optimal performance for all users, especially during high traffic periods.
* Adapting to changing traffic demands and ensuring the infrastructure can accommodate growth.
* Helping applications and services resist Distributed Denial of Service (DDoS) attacks.

Cloudflare Load Balancing is built on Cloudflare’s connectivity cloud, ​​a unified, intelligent platform of programmable cloud-native services that enable secure any-to-any connectivity between all networks (enterprise and Internet), cloud environments, applications, and users. It is one of the largest global networks, with data centers spanning over 330 cities and interconnection with over 13,000 network peers. It also has a greater presence in core Internet exchanges than many other large technology companies.

As a result, Cloudflare operates within \~50 ms of \~95% of the world’s Internet-connected population. And since all Cloudflare services are designed to run across every network location, all requests are routed, inspected, and filtered close to their source, resulting in strong performance and consistent user experiences.

This document describes a reference architecture for organizations looking to deploy both global and local traffic management load balancing solutions.

### Who is this document for and what will you learn?

This reference architecture is designed for IT, web hosting, and network professionals with some responsibility over or familiarity with their organization's existing infrastructure. It is useful to have some experience with networking concepts such as routing, DNS, and IP addressing, as well as basic understanding of load balancer functionality.

To build a stronger baseline understanding of Cloudflare and its load balancing solution, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)
* Solution Brief: [Cloudflare Private Network Load Balancing ↗](https://cf-assets.www.cloudflare.com/slt3lc6tev37/4mn2dtdw7TvSwCUJw8mMf5/f1fa6269f4468c432560b2c9f5ebd38a/Cloudflare%5FLocal%5FTraffic%5FManager%5FSolution%5FBrief.pdf) (5 minute read)
* Solution Brief: [Cloudflare GTM Load Balancing ↗](https://cf-assets.www.cloudflare.com/slt3lc6tev37/5OWUduF4YBKYADj3zREAX6/5241a81a3fc4ff1db7c9bade14991b23/Cloudflare%5FGlobal%5FTraffic%5FManager%5F%5FGTM%5F%5FSolution%5FBrief.pdf) (5 minute read)
* Blog: [Elevate load balancing with Private IPs and Cloudflare Tunnels: a secure path to efficient traffic distribution ↗](https://blog.cloudflare.com/elevate-load-balancing-with-private-ips-and-cloudflare-tunnels-a-secure-path-to-efficient-traffic-distribution/) (13 minutes)

Those who read this reference architecture will learn:

* How Cloudflare Load Balancing can address both Private Network Load Balancing and global traffic management use cases.
* How Cloudflare’s global network enhances the functionality of Cloudflare Load Balancing.
* The capabilities of Cloudflare Load Balancers, and how they apply to various use cases.
* The structure of Cloudflare Load Balancers and their various configurations.

## Handling dynamic workloads in modern applications

### Concepts and terminology

#### Endpoint

In this document, the term “endpoint” is any service or hardware that intercepts and processes incoming public or private traffic. Since load balancing can be used for more than just web servers, the term endpoint has been chosen to represent all possible types of origins, hostnames, private or public IP addresses, virtual IP addresses (VIPs), servers, and other dedicated hardware boxes. It could be on-premises or hosted in a public or private cloud — and could even be a third-party load balancer.

#### Steering

Steering is a load balancer’s main function — the process of handling, sending, and forwarding requests based on a set of policies. These policies generally take many factors into account, including request URL, URL path, HTTP headers, configured weights, priority, and endpoint latency, responsiveness, capacity, and load.
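As a minimal illustration of one such policy, weight-based steering, the sketch below picks an endpoint in proportion to configured weights. The `pick_endpoint` helper and the endpoint names are hypothetical.

```python
import random

def pick_endpoint(endpoints, rng=random):
    """Weighted-random steering: an endpoint with weight 0.9 receives roughly
    nine times the traffic of one with weight 0.1. Unhealthy endpoints would
    simply be removed from `endpoints` before calling.

    `endpoints` maps endpoint name -> weight.
    """
    names = list(endpoints)
    weights = [endpoints[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

Production steering combines many more signals (latency, capacity, load), but each reduces to the same shape: score the candidates, then select one.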

#### Layer 7

[Layer 7 ↗](https://www.cloudflare.com/learning/ddos/what-is-layer-7/) of the [OSI model ↗](https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/), also known as the application layer, is where protocols such as SSH, FTP, NTP, SMTP, and HTTP(S) reside. When this document refers to layer 7 or layer 7 load balancers, it means HTTP(S)-based services. The Cloudflare layer 7 stack allows Cloudflare to apply services like DDoS protection, Bot Management, WAF, CDN, and Load Balancing to a customer's website to improve performance, availability, and security.

#### Layer 4

Layer 4 of the [OSI model ↗](https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/) — also called the transport layer — is responsible for end-to-end communication between two devices. Network services that operate at layer 4 can support a much broader set of services and protocols. Cloudflare’s public layer 4 load balancers are enabled by a product called Spectrum, which works as a layer 4 reverse proxy. In addition to offering load balancing, Spectrum provides protection from [DDoS attacks ↗](https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/) and can conceal the endpoint IP addresses.

#### SSL/TLS Offloading

SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are cryptographic protocols used to secure connections over the Internet. SSL and TLS offloading, also known as SSL/TLS termination or SSL/TLS acceleration, is a technique used in load balancers and web servers to handle the SSL/TLS encryption and decryption process without affecting an endpoint’s performance. SSL/TLS offloading improves server performance, simplifies certificate management, and enhances scalability by offloading the resource-intensive encryption and decryption tasks to dedicated devices, helping endpoints remain dedicated to serving content and application logic.

### Challenges addressed by load balancers

Modern websites, or any applications for that matter, face three main challenges:

1. **Performance:** Ensuring that the application responds to users' requests and input in a timely manner.
2. **Availability:** Maintaining the uptime for the application, so it is always able to respond to user requests.
3. **Scalability:** Growing, shrinking, or relocating application resources based on user behavior or demand.

#### Performance

Application performance can be affected by several factors, but the most common cause of performance issues is the amount of usage or load placed on an endpoint. An endpoint generally has a finite amount of compute resources it can provide. If too many requests arrive at once, or if the type of requests cause increased CPU/memory usage, the endpoint will respond slower or fail to respond at all.

To address these challenges, endpoints can be upgraded with more compute resources. But during idle or low-usage times, the organization ends up paying for underutilized resources. Organizations may also deploy multiple endpoints, but to steer traffic between them in a way that is seamless to the end user, a load balancing solution is needed.

Figure 1 shows how load might be distributed without a load balancer:

![Endpoint load is not distributed evenly without a load balancer](https://developers.cloudflare.com/_astro/lb-ref-arch-1.D0yttOOR_Z2ojHfp.svg "Figure 1: Endpoint performance can suffer without a load balancer")

Figure 1: Endpoint performance can suffer without a load balancer

Load balancers allow organizations to host several endpoints and portion out traffic between them, ensuring no single endpoint gets overwhelmed. The load balancer handles all incoming requests and forwards them to the appropriate endpoint. The client doesn’t need any knowledge of endpoint availability or load — it just needs to send the request to the load balancer and the load balancer handles the rest. Figure 2 shows how a load balancer can evenly distribute traffic from users across a set of endpoints.

![A load balancer helps evenly distribute requests across multiple endpoints](https://developers.cloudflare.com/_astro/lb-ref-arch-2.DiqlVt64_ZyggkG.svg "Figure 2: Load balancers help distribute load across endpoints")

Figure 2: Load balancers help distribute load across endpoints
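The even distribution shown in Figure 2 can be illustrated with the classic round-robin strategy. This `RoundRobinBalancer` is a teaching sketch, not Cloudflare's implementation.

```python
import itertools

class RoundRobinBalancer:
    """Cycle through endpoints so each receives an equal share of requests."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        # A real load balancer would forward the request here; this sketch
        # only returns which endpoint would receive it.
        return next(self._cycle)
```

With three endpoints, six consecutive requests land on each endpoint exactly twice, so no single endpoint gets overwhelmed.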

Another performance-related issue has to do with the distance between a client and an endpoint. Whether due to the mere fact of traveling farther, or having to make more network hops, a request that travels a longer distance generally has a higher round-trip time (RTT).

RTT becomes important at scale. For example, if a client and endpoint are both located in the United States, it would be reasonable to expect an RTT of 25ms. If the client has 20 requests it needs responses to, the total time required to handle them sequentially (not including compute time) would be 500ms (20 x 25ms). And if the same client connected from the APAC region, the RTT might be upwards of 150ms, resulting in an undesirable total loading time of 3000ms (20 x 150ms). (Certainly, request streaming enhancements in HTTP/2 and HTTP/3 might change this math — but in websites with dynamic or interactive content, where a response’s information is used to generate additional requests, the example still holds in general.) Figure 3 illustrates how this happens.

![Latency compounds based on the number of requests](https://developers.cloudflare.com/_astro/lb-ref-arch-3.D0FbXMvI_tMVbA.svg "Figure 3: How latency can compound and affect the total time it takes to load a resource")

Figure 3: How latency can compound and affect the total time it takes to load a resource
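The arithmetic in the example above is simple enough to capture in a few lines; `sequential_load_time_ms` is just an illustrative helper, not part of any API.

```python
def sequential_load_time_ms(rtt_ms, num_requests):
    """Total time for dependent requests issued one after another,
    ignoring compute time, as in the example in the text."""
    return rtt_ms * num_requests
```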

In the same way a load balancer can pass traffic to a less-busy endpoint, it can also pass traffic to a geographically closer endpoint, resulting in a more responsive experience for the client. Specifically, the load balancer performs a lookup of the IP address that sent the request, determines its location, and selects the closest or most region-appropriate endpoint to send it to (this is similar to functionality provided by DNS solutions like GeoDNS).
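A geo-proximity lookup like the one described can be sketched with a great-circle distance calculation. The `closest_endpoint` helper and the coordinates in the test are illustrative; real GTM steering also weighs endpoint health and configured policy, not just distance.

```python
import math

def closest_endpoint(client, endpoints):
    """Return the name of the endpoint geographically closest to the client.

    `client` and each endpoint value are (latitude, longitude) in degrees;
    distance is the haversine great-circle distance in kilometers.
    """
    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(h))

    return min(endpoints, key=lambda name: haversine_km(client, endpoints[name]))
```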

#### Availability

Service availability encompasses both unintentional and intentional downtime of endpoints behind a load balancer. Several factors can contribute to unintentional downtime, including hardware failure, software bugs, network issues, and ISP or other vendor issues. Even for the most advanced organizations, these issues are inevitable.

Load balancers solve these issues by continuously monitoring the health of endpoints. If an endpoint is slow to respond to a health check, or fails to respond entirely, the endpoint is marked as unhealthy. Several monitoring methods exist, including basic health tests like ICMP (ping) and TCP connection tests. More advanced health tests can be used, like issuing an HTTP GET request and ensuring a specific response code and response body are returned from the endpoint. Once an endpoint is in a degraded state, the load balancer will send fewer or no requests its way in favor of healthier endpoints. As the endpoint becomes operational again and the load balancer receives responses to its health checks, the endpoint is marked as operational and has traffic steered towards it once more.
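The health-state transitions described above are commonly implemented with consecutive-failure and consecutive-success thresholds. The `HealthMonitor` sketch below assumes that model; the thresholds and state shape are illustrative, and `probe_ok` stands in for the result of a probe such as an HTTP GET with a status-code and body check.

```python
class HealthMonitor:
    """Track endpoint health from probe results, marking an endpoint
    unhealthy after `fail_threshold` consecutive failures and healthy
    again after `rise_threshold` consecutive successes."""

    def __init__(self, endpoints, fail_threshold=3, rise_threshold=2):
        self.fail_threshold = fail_threshold
        self.rise_threshold = rise_threshold
        self.state = {e: {"healthy": True, "fails": 0, "rises": 0}
                      for e in endpoints}

    def record(self, endpoint, probe_ok):
        """Feed in one probe result for an endpoint."""
        s = self.state[endpoint]
        if probe_ok:
            s["fails"], s["rises"] = 0, s["rises"] + 1
            if not s["healthy"] and s["rises"] >= self.rise_threshold:
                s["healthy"] = True  # endpoint has recovered
        else:
            s["rises"], s["fails"] = 0, s["fails"] + 1
            if s["healthy"] and s["fails"] >= self.fail_threshold:
                s["healthy"] = False  # stop steering traffic here

    def healthy_endpoints(self):
        return [e for e, s in self.state.items() if s["healthy"]]
```

The steering layer then only selects from `healthy_endpoints()`, which is exactly the "send fewer or no requests its way" behavior described above.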

Intentional downtime comes in a few different forms, including capacity changes, hardware or infrastructure upgrades, and software updates. Load balancers gracefully remove traffic from one or more endpoints to allow for such maintenance.

#### Scale

Effective application scaling helps organizations meet customer or user demand and avoid unnecessary billing or charges. During traffic increases, organizations may need to temporarily deploy more endpoints to ensure the service stays performant and available. However, constantly having enough endpoints online to meet your maximum possible traffic could be costly, regardless of whether the endpoints are located on-premises or with a cloud provider like AWS, GCP, or Azure. Load balancers allow for dynamic increases or decreases in capacity by monitoring requests, connections, and latency to the endpoints.

Another type of scale to consider is geographic scale. As services grow in popularity, endpoint location becomes more important. Users in a different geographic region than an endpoint may have slower response times and receive a lower quality of service than users in the same region. As organizations deploy new endpoints in different regions, they have to decide how they want to distribute their traffic. This challenge has been met by different layers of load balancing called global traffic management (GTM) and Private Network Load Balancing. This document describes both of these in detail in the following section — but in summary, the GTM load balancer handles the initial request (typically via DNS) and then selects and steers traffic to the Private Network Load Balancer that is deployed close to endpoints in the appropriate geographic region.

### Types of traffic management

As mentioned, load balancing for global applications and services comes in two layers. The first layer is called Global Traffic Management or Manager (GTM), which may also be called Global Server Load Balancing (GSLB). The second layer is called Private Network Load Balancing, which may also be referred to as Server Load Balancing (SLB). This section will define the purpose of these different types of load balancing and how they work together.

#### Global traffic manager / global traffic management (GTM)

A Global Traffic Manager is responsible for routing requests, generally from the Internet, to the proper region or data center. Many GTM load balancers operate at the DNS layer, allowing them to:

* Resolve a DNS request to an IP address based on geographic region or physical location.
* Provide the IP of the endpoint or service closest to the client, so it can connect.

Figure 4 shows how a GTM load balancer is used to select a data center based on the client location or region.

![Global traffic management steers traffic to the proper region or data center](https://developers.cloudflare.com/_astro/lb-ref-arch-4.OnwMof7d_11aC7L.svg "Figure 4: Global traffic management load balancer overview")

Figure 4: Global traffic management load balancer overview

Global Traffic Managers can also proxy traffic and perform a variety of inspections, including reading/changing/deleting headers in HTTP requests and modifying URLs based on region or geographic location. GTM functionality is best implemented by cloud-based load balancers (like Cloudflare) since the goal is to steer traffic from anywhere in the world. Hardware load balancers exist in a single physical location, which means the farther traffic originates from the load balancer, the slower the end-user experience. A cloud-based load balancer can run in many different geographic locations, helping it provide a performant solution for DNS-only, layer 4, and layer 7 contexts.

#### Private Network Load Balancing

Private Network Load Balancing steers traffic within a data center or geographic location. A Private Network Load Balancer can be responsible for load balancing, SSL/TLS offloading, content switching, and other application delivery functions. Private Network Load Balancing ensures efficient distribution of client requests across multiple endpoints to improve performance and ensure high availability. Private Network Load Balancers are usually placed inside private networks and are used to load balance publicly or privately accessible resources. In Figure 5 below, the GTM load balancer has selected the Europe data center and directs the request to that data center’s Private Network Load Balancer, which then steers it to the appropriate endpoint.

![Private Network Load Balancing is responsible for steering to the final endpoint or destination](https://developers.cloudflare.com/_astro/lb-ref-arch-5.F19YgVWw_15nz5k.svg "Figure 5: Private Network Load Balancer overview")

Figure 5: Private Network Load Balancer overview

Private Network Load Balancers and their endpoints usually sit behind firewalls. But while endpoints may be protected on private networks, accessibility to the Private Network Load Balancer can be either public or private depending on deployment requirements. A Private Network Load Balancer monitors total requests, connections, and endpoint health to ensure requests are steered towards endpoints capable of responding in a timely manner.

#### On-premises vs cloud-based load balancers

There are two main load balancer architectures:

* On-premises load balancers  
   * Typically hardware-based, but also can be virtualized or software-based  
   * Focused on maximum performance
* Cloud-based load balancers  
   * Software deployed on public cloud infrastructure  
   * Handle requests closer to the originator of the request

Each approach has advantages and disadvantages. On-premises load balancers usually exist inside private networks completely controlled by the organization. These load balancers are colocated with the endpoints they are load balancing, so latency and RTT should be minimal. The disadvantage of on-premises load balancers is that they are restricted to a single physical location, which means traffic from other regions can experience long RTTs and high response latency. Adding another data center also requires purchasing and deploying all new equipment, and on-premises load balancers typically still require cloud-based load balancers for geographic traffic steering, so requests are routed to a geographically local or region-appropriate data center. The advantage of cloud-based load balancers is that they can operate in almost any geographic region without concern for rack space, power, cooling, or maintenance, and can scale without concern for new chassis, modules, or larger network connections. Cloud-based load balancers do, however, increase latency and RTT between the load balancer and the endpoints, as they are not typically colocated with the endpoints they steer traffic toward.

## Cloudflare Load Balancing architecture and design

Cloudflare has offered cloud-based GTM since 2016 and started adding Private Network Load Balancing capabilities in 2023. This section will review the entire Cloudflare Load Balancing architecture and dive deep into the different configurations and options available. First, however, it's important to understand the benefits that Cloudflare Load Balancers have simply by running on Cloudflare’s global network.

### Inherent advantages in the Cloudflare architecture

Cloudflare Load Balancing is built on Cloudflare’s connectivity cloud, ​​a unified, intelligent platform of programmable cloud-native services that enable any-to-any connectivity between all networks (enterprise and Internet), cloud environments, applications, and users. It is one of the largest global networks, with data centers spanning over 330 cities and interconnection with over 13,000 network peers. It also has a greater presence in core Internet exchanges than many other large technology companies.

As a result, Cloudflare operates within \~50 ms of \~95% of the world’s Internet-connected population. And since all Cloudflare services are designed to run across every network location, all traffic is connected, inspected, and filtered close to the source for the best performance and consistent user experience.

Cloudflare’s load balancing solution benefits from our use of anycast technology. Anycast allows Cloudflare to announce the IP addresses of our services from every data center worldwide, so traffic is always routed to the Cloudflare data center closest to the source. This means traffic inspection, authentication, and policy enforcement take place close to the end user, leading to consistently high-quality experiences.

Using anycast ensures the Cloudflare network is well balanced. If there is a sudden increase in traffic on the network, the load can be distributed across multiple data centers, which in turn helps maintain consistent and reliable connectivity for users. Further, Cloudflare’s large network capacity and AI/ML-optimized smart routing also help ensure that performance is constantly optimized.

By contrast, many other SaaS-based load balancing providers use Unicast routing in which a single IP address is associated with a single endpoint and/or data center. In many such architectures, a single IP address is then associated with a specific application, which means requests to access that application may have very different network routing experiences depending on how far that traffic needs to travel. For example, performance may be excellent for employees working in the office next to the application’s endpoints, but poor for remote employees or those working overseas. Unicast also complicates scaling traffic loads — that single service location must ramp up resources when load increases, whereas anycast networks can share traffic across many data centers and geographies.

Figure 6 shows how using the Cloudflare network allows geographically disparate users to connect to their resources as fast as possible.

![Cloudflare’s global anycast network ensures that the closest data center is always selected](https://developers.cloudflare.com/_astro/lb-ref-arch-6.Bw_DeAYw_VJ60J.svg "Figure 6: Load balancers hosted on Cloudflare’s global anycast network")

Figure 6: Load balancers hosted on Cloudflare’s global anycast network

Figure 6, above, also shows other Cloudflare services running in each of these data centers: Cloudflare runs every service in every data center so users have a consistent experience everywhere. For example, Cloudflare’s layer 7 load balancer can also take advantage of other services such as DDoS protection, CDN/Cache, Bot Management, or WAF. These additional services can help protect your service from unnecessary traffic, whether malicious requests (blocked by DDoS protection, Bot Management, or WAF) or requests that can be served from cache rather than sent to an endpoint. All of these services can be combined as needed to make a service or offering as protected, resilient, and performant as possible.

![Cloudflare Layer 7 features can be used together to further secure a service](https://developers.cloudflare.com/_astro/lb-ref-arch-7.BB-S-4sn_HuCjE.svg "Figure 7: Some of the processes a HTTP request passes through in the Cloudflare layer 7 stack")

Figure 7: Some of the processes a HTTP request passes through in the Cloudflare layer 7 stack

Cloudflare also has a [network optimization service ↗](https://blog.cloudflare.com/orpheus-saves-internet-requests-while-maintaining-speed/) that is constantly running at all data centers to provide the best path between Cloudflare data centers and to track all the available paths to endpoints. This allows Cloudflare to ensure that endpoints can always be reached, rerouting traffic through alternate Cloudflare data centers if necessary. After the load balancer has decided which endpoint to steer the traffic to, the traffic is forwarded to Cloudflare’s network optimization service to determine the best path to the destination. The path can be affected by a feature called Argo Smart Routing, which, when enabled, uses timed TCP connections to find the Cloudflare data center with the fastest RTT to the endpoint. Figure 8 shows how Argo Smart Routing can help improve connection time to endpoints.

![Argo Smart Routing finds the fastest path between requester and endpoint](https://developers.cloudflare.com/_astro/lb-ref-arch-8.DxPypMMy_1yPSyw.svg "Figure 8: Argo Smart Routing reduces latency to endpoints")

Figure 8: Argo Smart Routing reduces latency to endpoints

Another way traffic flow can be affected is by the use of Cloudflare Tunnels. This document covers Cloudflare Tunnels in depth in the following section. Because Cloudflare Tunnels connect endpoints to specific Cloudflare data centers, traffic destined for those endpoints must traverse those data centers to reach the endpoint. Figure 9 shows how connections to private endpoints connected via Cloudflare Tunnel must pass through the data center where the tunnel terminates.

![Requests take different paths depending on whether the endpoint is public or connected over Cloudflare Tunnel](https://developers.cloudflare.com/_astro/lb-ref-arch-9.coisSp9H_1cdDiM.svg "Figure 9: Paths to endpoints differ when connecting endpoints via Cloudflare Tunnel")

Figure 9: Paths to endpoints differ when connecting endpoints via Cloudflare Tunnel

Traditionally, GTM and Private Network Load Balancing are delivered as separate components: either separate hardware appliances, or a SaaS GTM service paired with hardware Private Network Load Balancers. Cloudflare combines GTM and Private Network Load Balancing capabilities into a single SaaS offering, which greatly simplifies configuration and management. There is no need to create a GTM load balancer and then steer traffic to more local Private Network Load Balancers. All endpoints can be directly connected to Cloudflare, and traffic is steered to the correct region, data center, and endpoint from a single load balancer configuration. While the concepts of GTM and Private Network Load Balancing persist, Cloudflare implements them in a way that keeps load balancer configurations as simple and straightforward as possible. Figure 10 illustrates how global traffic can be steered from any geographic region to a specific endpoint as needed.

![Combining GTM and Private Network Load Balancing functions into a single load balancer configuration](https://developers.cloudflare.com/_astro/lb-ref-arch-10.BICXl4Ld_Z2sioWk.svg "Figure 10: Cloudflare combines the function of GTM and Private Network Load Balancing")

Figure 10: Cloudflare combines the function of GTM and Private Network Load Balancing

### The structure of a Cloudflare Load Balancer

A Cloudflare Load Balancer, often referred to as a Virtual IP (VIP), is configured with an entrypoint. Typically, this entrypoint is a DNS record. The load balancer first applies a defined traffic steering algorithm to select an endpoint pool, which is a group of endpoints selected based on function, geographic area, or region. A load balancer configuration can have one or multiple endpoint pools, and each endpoint pool can have one or many endpoints. After selecting an endpoint pool, the load balancer applies an endpoint steering algorithm to the list of endpoints and selects an endpoint to steer the traffic towards. Figure 11 shows the basic steps from client request to endpoint within a Cloudflare Load Balancer.

![The steps within a Cloudflare Load Balancer](https://developers.cloudflare.com/_astro/lb-ref-arch-11.Bx2sEYiV_Z2ociBh.svg "Figure 11: The basic process flow through a Cloudflare Load Balancer")

Figure 11: The basic process flow through a Cloudflare Load Balancer
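To make this two-stage flow concrete, it can be sketched in Python. This is a minimal illustration with hypothetical `traffic_steer` and `endpoint_steer` functions, not Cloudflare's implementation:

```python
# Hypothetical sketch of the two-stage selection a load balancer performs:
# first pick a healthy endpoint pool, then pick an endpoint within it.
def route(request_ip, pools, traffic_steer, endpoint_steer):
    healthy_pools = [p for p in pools if p["healthy"]]
    pool = traffic_steer(request_ip, healthy_pools)
    return endpoint_steer(request_ip, pool["endpoints"])

# "Off - failover" style traffic steering: take the first healthy pool in priority order.
def first_pool(request_ip, pools):
    return pools[0]

# Trivial endpoint steering, purely for illustration: always take the first endpoint.
def first_endpoint(request_ip, endpoints):
    return endpoints[0]

pools = [
    {"healthy": False, "endpoints": ["primary-1", "primary-2"]},
    {"healthy": True, "endpoints": ["backup-1", "backup-2"]},
]
print(route("203.0.113.7", pools, first_pool, first_endpoint))  # → backup-1
```

Because the first pool is unhealthy, traffic fails over to the backup pool, mirroring the active/passive scenario described later under steering methods.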

The definition of a Cloudflare Load Balancer is divided into three main components:

1. Health monitors: these components are responsible for observing the health of endpoints and categorizing them as healthy or critical (unhealthy).
2. Endpoint pools: this is where endpoints are defined and where health monitors and endpoint steering are applied.
3. Load balancers: in this component, lists of endpoint pools and traffic steering policies are applied.

The following sections detail the options available and considerations for configuring a Cloudflare Load Balancer, starting with steering, which is utilized in both endpoint pool and load balancer configurations.

### Steering types and methods

Steering is the core function of a load balancer and steering methods ultimately determine which endpoint is going to be selected when a load balancer is engaged. From the load balancer’s perspective, steering can be applied in two key areas.

The first is called ‘traffic steering’, and it is responsible for determining which endpoint pool will handle incoming requests, typically based on proximity or geographic region of the requester. The concept of traffic steering closely aligns with the idea of global traffic management.

The second area where steering is applied is after a region, data center, or endpoint pool has been selected. At this point, the load balancer needs to select the single endpoint responsible for handling the request or connection, referred to as ‘endpoint steering’. Steering at both of these levels is done by applying steering methods tailored to the specific needs of the customer deploying the load balancer. There are several different algorithms to choose from, but not all algorithms are applicable to both steering types.

Below is an in-depth review of all the steering methods Cloudflare offers. At the end of this section, there is a quick reference table which can be helpful in understanding which algorithms are applicable to which use cases.

#### Traffic steering

Traffic steering selects the group of endpoints, also called an endpoint pool. The most common use of traffic steering is to select the endpoint pool based on the least latent response times, geographic region, or physical location. Traffic steering is closely aligned to global traffic management and serves as the initial step in directing traffic to an endpoint.

#### Endpoint steering

Endpoint steering is responsible for selecting which endpoint will receive the request or connection. Endpoint steering can randomly select an endpoint, a previously selected endpoint (if session affinity is enabled), or it can be used to select the least utilized, fastest responding, endpoint for a request or connection. Endpoint steering is closely related to Private Network Load Balancing, as it is responsible for selecting the final destination of a request or connection.

#### Weighted steering

Weighted steering takes into account the differences in endpoint pools and endpoints that will be responsible for handling requests from a load balancer. Endpoint weight, which is a required field for every endpoint, is only used when specific steering methods are chosen. Similarly, endpoint pool weight is only needed when specific steering methods are selected. Please see the [steering options overview](#steering-options-overview) section for a quick reference for when weights are applied.

Weight influences the randomness of endpoint pool or endpoint selection for a single request or connection within a load balancer. Weight does not consider historical data or current connection information, which means that weight may have variations in distribution over shorter timeframes. However, over longer periods of time and with significant traffic, the distribution will more closely resemble the desired weights applied in configuration. It’s important to note that session affinity will also override weight settings after the initial connection, as session affinity is intended to direct subsequent requests to the same endpoint pool or endpoint. Figure 12 shows a weight example for two endpoint pools with equal capacity and probability of being selected.

![A pair of endpoint pools with equal probability of being selected](https://developers.cloudflare.com/_astro/lb-ref-arch-12.Buje8NxO_Z1t79ta.svg "Figure 12: A pair of endpoint pools with equal capacity")

Figure 12: A pair of endpoint pools with equal capacity

Specific algorithms, such as Least Outstanding Requests steering, take into account the number of open requests and connections. Weight is used to determine which endpoints or endpoint pools can handle a greater number of open requests or connections. Essentially, weight defines the capacity of endpoints or endpoint pools, regardless of the selected steering method.

Weight is defined as any number between 0.00 and 1.00. It’s important to note that the weights of the endpoint pools, or of the endpoints within an endpoint pool, do not need to sum to 1. Instead, the weights are added together, and an individual weight value is divided by that sum to get the probability of that endpoint being selected.

Weight to percentage equation: (endpoint weight) ÷ (sum of all weights in the pool) = (% of traffic to endpoint)

Below are some examples with diagrams to help in understanding how weight is used for distributing traffic. In these examples, it is assumed that the goal is to evenly distribute traffic across all endpoints with the same capacity or compute resources. [Random](#random-steering) traffic steering will be used to demonstrate traffic distribution across three endpoint pools.

Example 1:

* There are three endpoint pools defined, all with a weight of 1
* Each endpoint pool has a 33% probability of being selected

Example math for weight of 1: (1) ÷ (1 + 1 + 1) = (.3333) (or 33.33%)

![A set of three endpoint pools all with equal probability](https://developers.cloudflare.com/_astro/lb-ref-arch-13.BIZS6w9__ygYRV.svg "Figure 13: Three endpoint pools with equal weight")

Figure 13: Three endpoint pools with equal weight

In this example, it was simple to apply 1 to all the weight values for each of the endpoint pools. However, it should be noted that any number between 0.01 and 1.00 could have been used as long as the same number was used across all three endpoint pools. For instance, setting all three pools to .1 or even .7 would have resulted in an equal probability that each pool would be selected to receive traffic.

Since the sum of the weights is used to calculate the probability, organizations can use any number of values to make these inputs easier to understand. In the following examples, since each endpoint has the same capacity, a value of .1 weight is assigned to each endpoint, and the sum of these values is used as the weight for the endpoint pool.

Example 2

* There are three endpoint pools defined
* Each endpoint pool has a different number of endpoints, but all endpoints have equal capacity
* To evenly distribute load across endpoints, each endpoint pool needs a different probability

![Three endpoint pools with different numbers of endpoints](https://developers.cloudflare.com/_astro/lb-ref-arch-14.ChU-xE19_zNzhL.svg "Figure 14: Illustrates how to use weight to balance load across endpoint pools with different capacity")

Figure 14: Illustrates how to use weight to balance load across endpoint pools with different capacity

Example math for weight of .4 : (.4) ÷ (.4 + .5 + .6) = (.2667) (or 26.67%)

Example math for weight of .5 : (.5) ÷ (.4 + .5 + .6) = (.3333) (or 33.33%)

Example math for weight of .6 : (.6) ÷ (.4 + .5 + .6) = (.4000) (or 40.00%)

It is possible that endpoints do not all have the same capacity. In the following example, one of the endpoint pool’s endpoints has twice the capacity of the endpoints in the other two endpoint pools.

Example 3

* There are three endpoint pools defined
* Endpoint pool 1 has endpoints that have double the capacity compared to those in endpoint pool 2 and endpoint pool 3
* The goal is to place double the amount of traffic to endpoint pool 1 per endpoint
* Endpoint pool 1 has 4 endpoints but with double capacity, the weight of each endpoint will be valued at .2 for a total of .8 for the endpoint pool

![Three endpoint pools with different numbers of endpoints and endpoints of different capacity](https://developers.cloudflare.com/_astro/lb-ref-arch-15.CJwKtgsv_2tvur8.svg "Figure 15: Using weight to balance load across endpoint pools with different capacities and endpoints")

Figure 15: Using weight to balance load across endpoint pools with different capacities and endpoints

Example math for weight of .8 : (.8) ÷ (.8 + .5 + .6) = (.4211) (or 42.11%)

Example math for weight of .5 : (.5) ÷ (.8 + .5 + .6) = (.2632) (or 26.32%)

Example math for weight of .6 : (.6) ÷ (.8 + .5 + .6) = (.3158) (or 31.58%)

In this final example, since the four endpoints in endpoint pool 1 have double the capacity of the other endpoints, the calculation treats endpoint pool 1 as if it essentially has 8 endpoints instead of 4. Therefore, the weight value is .8 instead of the .4 shown in example 2.
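The weight-to-probability calculation used throughout these examples can be expressed as a short Python sketch (illustrative only, not Cloudflare's implementation):

```python
# Convert a list of configured weights into selection probabilities by
# dividing each weight by the sum of all weights in the pool.
def selection_probabilities(weights):
    total = sum(weights)
    return [round(w / total, 4) for w in weights]

# Example 3: three pools weighted .8, .5, and .6
print(selection_probabilities([0.8, 0.5, 0.6]))  # → [0.4211, 0.2632, 0.3158]

# Example 1: any equal weights produce equal probabilities
print(selection_probabilities([0.7, 0.7, 0.7]))  # → [0.3333, 0.3333, 0.3333]
```

The second call shows why the absolute weight value does not matter when all pools share it; only the ratios between weights affect the outcome.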

These are just three simple examples illustrating how weight can be used to distribute load across endpoint pools or endpoints; the same calculations apply to weights on endpoints within an endpoint pool. The impact of weight is similar under other steering methods, although with slightly modified calculations, as covered in the sections below.

Weights are most useful when one endpoint pool might have more resources than another endpoint pool or when endpoints within an endpoint pool do not have equal capacity. Weight helps to ensure that all resources are used equally given their capabilities.

#### Steering methods

##### Off - failover

Off - failover is the most basic of traffic steering policies. It uses the order of the endpoint pools as a priority list for selecting which pool to direct traffic towards. If the first pool in the list is healthy and able to receive traffic, that is the pool that will be selected. Since off - failover isn’t available for endpoint steering, another steering method will be used to select an endpoint. Off - failover is commonly used in active/passive failover scenarios where a primary data center or group of endpoints is used to handle traffic, and only under failure conditions, is traffic steered towards a backup endpoint pool.

##### Random steering

Random steering is available for both traffic steering and endpoint steering. Random spreads traffic across resources based on the weight defined at both the load balancer configuration and within the endpoint pool. The weight values set at the load balancer for each endpoint pool can differ from the weight value set per endpoint within that endpoint pool. For example, within a load balancer configuration, 70% of traffic can be sent to one of two endpoint pools, then within that endpoint pool, the traffic can be evenly distributed across four endpoints. The previous section, [weighted steering](#weighted-steering), provides a detailed explanation of how weight is used and the calculations that determine the selection of an endpoint pool or endpoint.

##### Hash steering

Hash steering is an endpoint steering algorithm that uses endpoint weight and the request’s source IP address to select an endpoint. The result is that every request from the same IP address will always steer to the same endpoint. It’s important to note that altering the order of endpoints or adding or removing endpoints from the endpoint pool could result in different outcomes when using the hash algorithm.
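That behavior can be illustrated with a simple modulo-based hash. The hash function Cloudflare actually uses is not specified here; this sketch only demonstrates why the same IP maps to the same endpoint, and why changing the endpoint list can change the mapping:

```python
import hashlib

# Illustrative IP-hash endpoint selection: hash the source IP and use the
# result as an index into the endpoint list.
def hash_steer(source_ip, endpoints):
    digest = hashlib.sha256(source_ip.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(endpoints)
    return endpoints[index]

endpoints = ["endpoint-1", "endpoint-2", "endpoint-3"]
# Repeated requests from the same IP always select the same endpoint...
assert hash_steer("198.51.100.4", endpoints) == hash_steer("198.51.100.4", endpoints)
# ...but because the modulus depends on the list, adding, removing, or
# reordering endpoints can map the same IP to a different endpoint.
```

The modulo step is what makes the selection sensitive to the size and order of the endpoint list, which is the caveat noted above.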

##### Geo steering

Geo steering is a traffic steering algorithm available to enterprise plan customers that is used to tie endpoint pools to specific countries or geographic regions. This option can be useful for improving performance by steering traffic to endpoints closer to users. It also aids in complying with laws and regulations by steering requests from users in specific regions to resources within the same region or to resources designed to meet specific regulatory requirements.

##### Dynamic steering

Dynamic steering is a traffic steering algorithm available to enterprise plan customers that creates round trip time (RTT) profiles. RTT values are collected each time a health probe request is made and based on the response from the endpoint to the monitor request. When a request is made, Cloudflare inspects the RTT data and sorts pools by their RTT values. If there is no existing RTT data for your pool in a region or colocation center, Cloudflare directs traffic to the pools in failover order. When enabling dynamic steering the first time for an endpoint pool, allow 10 minutes for the change to take effect as Cloudflare builds an RTT profile for that pool. Dynamic steering doesn’t use geographic boundaries in its decision making process and solely focuses on selecting the lowest RTT endpoint pool.
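A simplified sketch of this ordering logic (hypothetical, for illustration only): pools with RTT data are sorted fastest-first, and pools without an RTT profile keep their configured failover order.

```python
# Sort pools by measured RTT (lowest first); pools with no RTT profile yet
# remain in their configured failover order at the end of the list.
def dynamic_order(pools, rtt_ms):
    measured = [p for p in pools if p in rtt_ms]
    unmeasured = [p for p in pools if p not in rtt_ms]
    return sorted(measured, key=lambda p: rtt_ms[p]) + unmeasured

pools = ["pool-us", "pool-eu", "pool-apac"]
# pool-apac has no RTT profile yet, so it stays in failover position.
print(dynamic_order(pools, {"pool-us": 32, "pool-eu": 11}))
# → ['pool-eu', 'pool-us', 'pool-apac']
```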

##### Proximity steering

Proximity steering is a traffic steering algorithm available to enterprise plan customers that steers traffic to the closest physical data center based on where the request originated.

Cloudflare determines the requester’s physical location using the following methods, in this order:

1. [EDNS Client Subnet ↗](https://developers.google.com/speed/public-dns/docs/ecs) information, if provided in the DNS request
2. Geolocation information of the resolver used to reach Cloudflare
3. GPS location of the Cloudflare data center handling the request

Proximity steering requires providing GPS coordinates for all endpoint pools, allowing Cloudflare to calculate the closest endpoint pool based on the requesting IP, DNS resolver, or Cloudflare data center.

##### Least outstanding requests steering (LORS)

Least outstanding request steering (LORS) is available to enterprise plan customers and can be used for both traffic and endpoint steering.

LORS uses the number of unanswered HTTP requests to influence steering and is only functional with layer 7 proxied Cloudflare Load Balancers. If LORS is assigned to any other type of load balancer, its behavior is equivalent to random steering. LORS uses the count of open requests, along with weight, to create a transformed weight that is used for the steering decision.

Equation for LORS transformed weight:

* weight / (count + 1) = transformedWeight

Reminder for random weight calculation:

* weight / (total weight) = probability of being selected

Here’s an example of LORS:

* Pool A has a weight of 0.4
* Pool B has a weight of 0.6
* Pool A has 3 open requests
* Pool B has 0 open requests
* Relevant equation  
   * weight / (count + 1) = transformedWeight
* Pool A's transformed weight: 0.4 / (3 + 1) = 0.1
* Pool B's transformed weight: 0.6 / (0 + 1) = 0.6
* Relevant equation  
   * weight / (total weight) = probability of being selected
* Pool A’s probability of being steered toward: 0.1 / (0.1+0.6) = .1429 (14.29%)
* Pool B’s probability of being steered toward: 0.6 / (0.1+0.6) = .8571 (85.71%)

In this example, the next connection has a 14.29% probability of being steered to Pool A and an 85.71% probability of being steered to Pool B. While it’s likely that traffic will be steered towards Pool B, it is still possible for it to be steered to Pool A. In situations with lighter load conditions, there will be more variation in the steering results, which may not precisely match the configured weights. However, as the load increases, the actual steering results will closely match the configured weights.
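The arithmetic in this example can be reproduced with a short sketch (illustrative only, not Cloudflare's implementation):

```python
# LORS: transform each weight by its pool's open request count, then
# normalize the transformed weights into selection probabilities.
def lors_probabilities(weights, open_requests):
    transformed = [w / (count + 1) for w, count in zip(weights, open_requests)]
    total = sum(transformed)
    return [round(t / total, 4) for t in transformed]

# Pool A: weight 0.4 with 3 open requests; Pool B: weight 0.6 with 0 open requests.
print(lors_probabilities([0.4, 0.6], [3, 0]))  # → [0.1429, 0.8571]
```

Note how Pool A's three open requests shrink its transformed weight from 0.4 to 0.1, shifting most new traffic toward the idle Pool B.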

When non-L7 proxied load balancers are used with LORS, the open request count information is not available. As a result, the denominator will always be 1. Since dividing any number by 1 doesn’t change the numerator, and in this case the numerator is the weight, steering decisions are made solely on weight. This results in the random method described above.

LORS is best used if endpoint pools or endpoints are easily overwhelmed by spikes in concurrent requests. It is well-suited for applications that value endpoint health over factors like latency, geographic alignment, or other metrics. This is especially useful when some or all requests put a heavy load on an endpoint and take a significant amount of time to generate a response.

#### Steering options overview

| Steering Method            | Traffic Steering | Endpoint Steering | Weight-based | Enterprise-only |
| -------------------------- | ---------------- | ----------------- | ------------ | --------------- |
| Off - Failover             | X                |                   |              |                 |
| Random                     | X                | X                 | X            |                 |
| Hash                       | X                | X                 | X            |                 |
| Geo                        | X                | X                 |              |                 |
| Dynamic                    | X                | X                 |              |                 |
| Proximity                  | X                | X                 |              |                 |
| Least Outstanding Requests | X                | X                 | X            | X               |

All traffic steering methods marked above as Enterprise-only can also be obtained as a self-service add-on. All endpoint steering methods marked as Enterprise-only require an enterprise plan with Cloudflare.

### Health monitors

A health monitor determines the health of endpoints once they are configured inside an endpoint pool. Health monitors generate probes, which are connection attempts to endpoints, and use the responses to those probes to record endpoint health. Health monitors serve as templates that include service type, path, and port, as well as advanced features such as interval, timeout, and protocol-specific settings for evaluating endpoint health. The health monitor template is then applied to the endpoint pool, which contains endpoints hosting similar services. Once a health monitor is attached to the endpoint pool, the endpoint address is used as the destination for the health monitor probe. A single health monitor can be used across many endpoint pools, and health monitors are account-level objects, allowing them to be leveraged by multiple zones within the same Cloudflare account.

By default, health monitor probes are sent directly to the endpoint address, bypassing the entire layer 7 stack. This means that actual traffic to the endpoint through the load balancer will receive different treatment than the health monitor probe. Depending on the configuration, this could result in a health monitor reporting an endpoint as healthy, even if actual connections or requests are failing.

The Simulate Zone feature ensures that health monitor probes follow the same path as actual requests, passing through the entire layer 7 stack. This ensures health monitors take the exact same path through the network and through other layer 7 processes to reach the endpoint.

The Simulate Zone feature is required for health monitors when certain features are enabled, such as [Authenticated Origin Pulls (AOP)](https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/), where probes would fail if they weren’t being provided with the proper mTLS certificate for authentication on the origin. Simulate Zone also ensures health monitor probes use the same path provided by [Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/) and the same [Dedicated CDN Egress IPs](https://developers.cloudflare.com/smart-shield/configuration/dedicated-egress-ips/) when organizations leverage [Smart Shield Advanced](https://developers.cloudflare.com/smart-shield/get-started/#packages-and-availability) to restrict the edge IP addresses that Cloudflare uses to reach their endpoints.

![HTTPS health monitor to monitor the status of an endpoint](https://developers.cloudflare.com/_astro/lb-ref-arch-16.BYSozQzy_Z1LA0T2.webp "Figure 16: HTTPS health monitor configuration")

Figure 16: HTTPS health monitor configuration

Health monitor probes can be configured as one of the following types:

* HTTP
* HTTPS
* TCP
* UDP ICMP
* ICMP Ping
* SMTP
* LDAP

Once a health monitor is defined, it can be assigned to an endpoint, and probes will be sent to that endpoint at the defined interval. There are two additional settings to note regarding the health monitor configuration within the endpoint pool. The first is the Health Threshold, which determines how many endpoints within the pool need to be healthy for the endpoint pool to be considered healthy or degraded.

* Endpoint pool in healthy state  
   * Contains only healthy endpoints
* Endpoint pool in degraded state  
   * Contains at least one critical endpoint but remains at or above the health threshold setting
* Endpoint pool in critical state  
   * Contains healthy endpoints below the health threshold  
   * Not capable of handling traffic; removed from all steering decisions.

![Comparison of three endpoint pools with different numbers of healthy endpoints](https://developers.cloudflare.com/_astro/lb-ref-arch-17.BM3mVtFf_Z1UUgUA.svg "Figure 17: When endpoint pools are considered healthy, degraded, or critical")

Figure 17: When endpoint pools are considered healthy, degraded, or critical

The second setting after defining the health monitor in the endpoint pool is to define which regions the health monitor probes should source from inside the Cloudflare global network. The available selections are listed below:

* All Regions (Default)
* All Data Centers (Enterprise Only)
* Western North America
* Eastern North America
* Western Europe
* Eastern Europe
* Northern South America
* Southern South America
* Oceania
* Middle East
* Northern Africa
* Southern Africa
* Southern Asia
* Southeast Asia
* Northeast Asia

![Endpoint pool settings to further customize the health monitors](https://developers.cloudflare.com/_astro/lb-ref-arch-18.BeeIf21t_16mIgt.webp "Figure 18: Health Threshold and region selection for an endpoint pool configuration")

Figure 18: Health Threshold and region selection for an endpoint pool configuration

With the exception of “All Regions” and “All Data Centers”, health monitor probes will only originate from data centers in the selected region or regions. For locally relevant services, it may not matter whether or not a data center on the other side of the world can reach the endpoints. Therefore, limiting checks to a specific region or a set of regions may make sense. The selection of “All Regions” or “All Data Centers” is intended to be used for globally available services where reaching a set of endpoints could be crucial to the function of the application.

### Endpoints and endpoint pools

Endpoints are the actual servers that handle connections and requests after a load balancer has applied all its policies. Endpoints can be physical servers, virtual machines, or serverless applications. As long as they can handle a request or connection from a user or client, they can be considered an endpoint. There are several different methods of defining and connecting endpoints to Cloudflare and the next section details those methods.

#### Connecting endpoints to Cloudflare

Cloudflare endpoints can be defined in two ways: by IP address or by hostname. IP addresses are the most straightforward connection method; hostnames offer a few options to consider. A hostname can be defined in Cloudflare DNS, where it can be proxied or DNS-only (unproxied). Alternatively, the hostname may belong to a domain for which Cloudflare is not the authoritative DNS server, in which case Cloudflare relies on outside DNS servers to resolve that hostname to an IP address. Cloudflare Tunnel can also be used and offers two options of its own. These methods are discussed below in this section.

##### Cloudflare proxied, DNS, IP, and non-Cloudflare endpoints

As mentioned in the “HTTP(S) Load Balancing” section above, load balancing is the very last process run before a request is sent to an endpoint. Even if an endpoint is proxied via Cloudflare’s edge, after the load balancer the request is forwarded directly to the endpoint without passing through the layer 7 stack again. This doesn’t mean the endpoint is unprotected or uncached, however. As long as the load balancer itself is proxied, all those protections are applied to the load balancer rather than to the endpoints. Any direct communication with the endpoint can still be proxied and treated with Cloudflare’s layer 7 stack, but communication through the load balancer places all of that processing in front of the load balancer, not the endpoint. Figure 19 illustrates where the Cloudflare layer 7 stack is placed in relation to the endpoint(s).

![Load balancing is the last process before dispatching to the endpoint](https://developers.cloudflare.com/_astro/lb-ref-arch-19.CKZfc_hA_Z18MGx.svg "Figure 19: Differences in the Layer 7 paths between load balancer and endpoint")

Figure 19: Differences in the Layer 7 paths between load balancer and endpoint

There are very few differences from a load balancer perspective when it comes to what type of endpoint is defined as part of an endpoint pool. Once the traffic and endpoint steering policies and the load balancer rules are applied, the Cloudflare Load Balancing service instructs the L7 stack where to forward the incoming request or connection, and the request is sent directly to the endpoint. Depending on the type of connection to the endpoint, the path may differ: features like Argo Smart Routing, or tunnel-connected endpoints terminated at different Cloudflare data centers, will route traffic across Cloudflare's network rather than sending the request out of the Cloudflare edge, over the internet, directly to the endpoint. Regardless of the path, load balancing is the last process in the stack, so traffic doesn’t receive any additional treatment. While the connection type can change the path from Cloudflare to the endpoint, the treatment or processing doesn’t change once an endpoint is selected.

##### Cloudflare Tunnel

Cloudflare Tunnel is an outbound connection that enables organizations to simplify their firewall configurations, reduce complexity, enhance security, and more easily join their assets to the Cloudflare network. The executable that creates these tunnels is called cloudflared and may be referenced in this document and diagrams that follow.

Cloudflare Tunnel (cloudflared) can be installed directly on the endpoint or on any server with IP connectivity to the endpoint. And because the connection is initiated from where Cloudflare Tunnel is installed out to Cloudflare, the only access needed is outbound access to Cloudflare. A single Cloudflare Tunnel can transport traffic to one or many different endpoints in one of two ways: one which results in the endpoint being publicly accessible, and one which keeps the endpoint accessible only privately.

Cloudflare Tunnel can be installed on the endpoint itself or on any server with layer 3 (IP) connectivity to the endpoint or endpoints that need to be connected to Cloudflare. The decision to separate cloudflared could be made for many different reasons including but not limited to isolating the endpoint(s) and ensuring their performance, having separate teams that manage network level connectivity and endpoints, or separation for architectural simplicity where servers have segregated roles or responsibilities.

![A single cloudflared instance tunnels traffic for multiple endpoints](https://developers.cloudflare.com/_astro/lb-ref-arch-20.BehqGz1M_2po7El.svg "Figure 20: A shared cloudflared deployed on a separate server tunnels traffic for multiple endpoints")

Figure 20: A shared cloudflared deployed on a separate server tunnels traffic for multiple endpoints

A single cloudflared instance creates four connections, two to each of two different Cloudflare data centers. This model ensures high availability and mitigates the risk of individual connection failures: in the event a single connection, server, or data center goes offline, the endpoints remain available. Cloudflare Tunnel also allows organizations to deploy additional instances of cloudflared, called replicas, for availability and failover scenarios. Each replica establishes four new connections which serve as additional points of ingress to the endpoint(s), and each replica points to the same tunnel. This ensures that your network remains up in the event a single host running cloudflared goes down. By design, replicas do not offer any level of traffic steering (random, hash, or round-robin).

###### Public hostname

The public hostname method allows organizations to define a tunnel that points to a specific service or port running on an endpoint. The tunnel can terminate on the endpoint or on any server with IP connectivity to the endpoint. Using this method requires that each service that will be accessed over the tunnel is defined in the tunnel configuration. When configured, a unique tunnel ID, such as d74b3a46-f3a3-4596-9049-da7e72c876f5, is created for the IP and port or service to which the tunnel connects traffic. This tunnel ID then becomes a unique public hostname under the Cloudflare-owned domain cfargotunnel.com, resulting in a DNS record that points directly to that service, i.e., d74b3a46-f3a3-4596-9049-da7e72c876f5.cfargotunnel.com. While this hostname is public, it can only be accessed or utilized by traffic that is sent through the account that owns the Cloudflare Tunnel configuration. No other account can access or send traffic directly to this DNS address, and a DNS CNAME record created outside of the account that owns the cfargotunnel.com hostname will not be able to send traffic through that specific Cloudflare Tunnel.

When configured via the Dashboard, Cloudflare automatically creates a CNAME record in the DNS zone that refers to the cfargotunnel.com hostname. For example, a CNAME record of myTunnelService.example.com could be created to point to d74b3a46-f3a3-4596-9049-da7e72c876f5.cfargotunnel.com. The main benefit is ease of use and administration, as the CNAME record is much more suggestive of its purpose and belongs to the customer's DNS zone.

Another option is to define these tunnels and services on the host running cloudflared. This is called a [locally-managed tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/do-more-with-tunnels/local-management/). With locally-managed tunnels, however, the CNAME entry is not created automatically, so the organization must configure it manually after the tunnel and service are defined.

From a load balancer perspective, it is very important to understand how these tunnels can be used as an endpoint. An endpoint can only be defined using the cfargotunnel.com hostname; a public CNAME record that points to the cfargotunnel.com address will not work properly and is not supported. This is especially important for endpoint services that don't operate on ports 80 or 443. Cloudflare Load Balancers default to these two ports to access the services running on the endpoints. If an organization has services running on other ports, it will need to configure a Cloudflare Tunnel with a [catch-all rule](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/do-more-with-tunnels/local-management/configuration-file/#how-traffic-is-matched) to reach that port. This configuration allows a Cloudflare Load Balancer to reach the service via port 443 while Cloudflare Tunnel proxies the connection to the desired port on the endpoint.
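The first-match, catch-all-last behavior described above can be sketched as a simple rule walk. This is a hypothetical illustration only (the `match_ingress` helper, hostnames, and services are invented for the example), not cloudflared's actual implementation:

```python
# Hypothetical sketch of ingress-rule matching: rules are checked
# top to bottom, and a final catch-all rule (no hostname) matches
# any traffic that fell through the earlier rules.
def match_ingress(hostname, rules):
    for rule in rules:
        if rule.get("hostname") in (None, hostname):
            return rule["service"]
    raise ValueError("ingress must end with a catch-all rule")

# Example rules (assumed names/ports): the catch-all maps unmatched
# traffic to a service listening on a non-standard port.
rules = [
    {"hostname": "app.example.com", "service": "http://localhost:8000"},
    {"service": "tcp://localhost:5432"},  # catch-all
]
```

With rules like these, a request for `app.example.com` reaches the web service, while anything else falls through to the catch-all service on the non-standard port.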

###### Private IP

The second method is for private subnets. It allows organizations to define private IP addresses and a subnet mask, which are used to create a private virtual network within the Cloudflare global network. The private subnet method does not allow a port to be defined; once a subnet and mask are defined, the entire subnet can be reached over that tunnel, but only by users within the organization who are allowed access via defined Zero Trust policies.

This subnet is then added to the virtual network inside of Cloudflare, where the customer controls which users can access it and how. The subnet can be defined for any desired subnetting or routing, including a 32-bit mask (a single IP address, i.e., 10.0.0.1/32). The allowed subnet does not need to exist on the host running the cloudflared process, either. All that is required is layer 3 (IP) connectivity between the host running cloudflared and the subnet that will be reachable over Cloudflare Tunnel.

#### Endpoint pool details

Within the endpoint pool, there are several configuration options. This section details what these configuration options are and how they alter the behavior of a Cloudflare Load Balancer.

##### Endpoint steering

The first configuration, besides defining a name and description of the endpoint pool, is to determine the endpoint steering method. Endpoint steering is responsible for ultimately selecting the endpoint or endpoints that will receive the request or connection attempt (please refer to the [Steering methods](#steering-methods) section for a detailed description of each method).

##### Endpoints

Individual endpoints are defined within endpoint pools; each pool can contain one or more endpoints.

* The _endpoint name_ is primarily used for reference, reporting, and analytics; it does not affect the function of the load balancer or endpoint pool.
* The _endpoint address_, however, defines a resource that the load balancer can use to handle a request or connection.  
   * Endpoints within an endpoint pool must be accessible over port 80 or 443. If the endpoint is not listening on port 80 or 443, then either a proxy service or network port forwarding device needs to be placed in front of the endpoint to map port 80 or 443 to the port that the service is actually listening on.  
   * Another method for mapping ports of endpoints to 80 or 443 is to connect to the endpoint service using [Cloudflare Tunnel](#cloudflare-tunnel), and then use the hostname created through that process as the endpoint address. This will automatically map the intended endpoint port to port 443.

_Endpoint address_ can be defined in one of the following ways:

* Publicly routable IP address
* Cloudflare-proxied publicly reachable hostname
* Publicly reachable non-Cloudflare hostname
* Private, non-publicly routable IP address with the selection of a virtual network

##### Virtual networks

Using public IPs and hostnames of any type require no additional configuration. In those scenarios, the virtual network should be set to the default value of “_none_”. The “_none_” setting signals that these resources will be accessible on the public Internet, routed via Cloudflare’s global edge network.

The use of the _virtual network_ option is reserved for private IP resources. This setting maps to IP subnets that are hosted behind [Cloudflare Tunnel configurations](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/). A virtual network should be selected that has a route to the IP address of the endpoint. To navigate to this setting in the Cloudflare Dashboard, select _Networks - Routes_ from the Zero Trust page.

##### Endpoint weight

_Endpoint weight_ is only used for the random, hash, and least outstanding request steering methods; it must always be defined as part of the endpoint definition. (Please refer to the [Weighted Steering](#weighted-steering) section for more information on how weights are used for endpoint selection.)
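As a rough illustration of how a weight influences selection, the sketch below picks an endpoint with probability proportional to its weight. The endpoint names, weights, and `pick_endpoint` helper are invented for the example; this is not Cloudflare's exact algorithm:

```python
import random

# Weighted random selection sketch: each endpoint's chance of being
# picked is proportional to its weight.
def pick_endpoint(endpoints, rng=random):
    total = sum(weight for _, weight in endpoints)
    roll = rng.uniform(0, total)
    for name, weight in endpoints:
        roll -= weight
        if roll <= 0:
            return name
    return endpoints[-1][0]  # guard against floating-point edge cases

# Example (assumed values): endpoint-a should receive ~75% of picks.
endpoints = [("endpoint-a", 0.75), ("endpoint-b", 0.25)]
```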

##### Host header modification

Endpoint pools allow for the host header to be modified before dispatching a request to an endpoint. This configuration only applies to the HTTP(S) layer 7 load balancer (it will be ignored when used with layer 4 load balancers, including private IP and Spectrum).

Within a layer 7 load balancer where requests are HTTP(S)-based, the Host header tells the endpoint which website is being requested, as a single endpoint may host several different web domains. When an endpoint is specifically configured to host a web domain, it may either not respond or return a failure response if it does not believe it hosts the resource named in the Host header (i.e., if there are mismatched Host headers).

For example:

* Say a user tries to reach `www.example.com`. The load balancer will be configured with the hostname of `www.example.com` to receive all the requests.
* Since the endpoints can’t have the same public hostname in DNS, the endpoint’s hostname is `endpoint1.example.com`.
* When the user makes a request to `www.example.com`, the Host header will be set to `www.example.com` as well. The endpoint will need to be configured to respond to Host headers of `www.example.com`.
* In some cases (such as with certain cloud or SaaS applications), however, endpoints aren’t configurable in that manner, so the endpoint may receive a request with an unknown Host header and fail to respond appropriately.
* In this example, setting the Host header for the endpoint to the endpoint address of `endpoint1.example.com` in the endpoint configuration will replace the Host header of `www.example.com` with `endpoint1.example.com`, allowing the endpoint to properly respond to the request.
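The rewrite in this example amounts to a simple header substitution before the request is dispatched. The `rewrite_host` helper and header dictionary below are illustrative, using the hostnames from the example:

```python
# Sketch of a Host header rewrite: if the endpoint configuration
# specifies a Host override, replace the client-facing Host with
# the hostname the endpoint expects before forwarding.
def rewrite_host(headers, endpoint_host_override=None):
    headers = dict(headers)  # copy; do not mutate the caller's headers
    if endpoint_host_override:
        headers["Host"] = endpoint_host_override
    return headers

request = {"Host": "www.example.com", "Accept": "text/html"}
rewritten = rewrite_host(request, "endpoint1.example.com")
```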

Figure 21 highlights the potential problem of mismatched Host headers:

![Mismatched Host headers may result in the endpoint rejecting the request](https://developers.cloudflare.com/_astro/lb-ref-arch-21.Bs0qP_r-_2hJASL.svg "Figure 21: How the load balancer can rewrite the Host header to match the endpoint")

Figure 21: How the load balancer can rewrite the Host header to match the endpoint

Also, at the endpoint pool, GPS coordinates for the pool (which are used with proximity traffic steering) can be defined. If proximity steering is not being used, then these coordinates are not required (please refer to the [Proximity Steering](#proximity-steering) section).

##### Load shedding

[Load shedding](https://developers.cloudflare.com/load-balancing/additional-options/load-shedding/) — a real-time response available to administrators to protect against endpoints in a pool that are [becoming unhealthy ](https://developers.cloudflare.com/load-balancing/understand-basics/health-details/) — is also configured on the endpoint pool.

The load shedding setting is not intended to be enabled unless an administrator is actively trying to protect an endpoint pool from becoming unhealthy. It would be activated, for example, when an endpoint that is still responding to requests is experiencing increased CPU or memory usage, increased response times, or occasional failures to respond at all.

When an endpoint pool’s health begins to degrade, load shedding can help direct some of the existing load from one endpoint pool to another.

Depending on the health of the endpoint pool, it may be enough to simply shed, or redirect, new requests and connections away from the endpoint pool. This applies to traffic that is not subject to any session affinity rules, since these are new connections that have not yet had an endpoint pool or endpoint selected (and, therefore, will not affect the end user experience).

Should an endpoint pool approach critical failure due to load, the next option is to also shed session affinity traffic. This will start to redirect requests and connections that are bound to endpoints through session affinity as well. However, please note that because this process can ultimately change the user’s endpoint, it could impact the end user’s experience. Ultimately, the impact is determined by the application that is being load balanced, and how much connection context is shared between endpoints.
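One way to picture the two shedding policies is as deterministic, per-client percentages. The hashing scheme, function, and parameter names below are assumptions for illustration, not Cloudflare's implementation:

```python
import hashlib

# Load shedding sketch: a configured fraction of new traffic, and
# a separate fraction of session-affinity traffic, is redirected
# away from a degrading pool. Hashing the client IP into a 0-99
# bucket keeps the shed/keep decision deterministic per client.
def shed(client_ip, has_affinity, new_pct, affinity_pct):
    bucket = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16) % 100
    pct = affinity_pct if has_affinity else new_pct
    return bucket < pct  # True -> redirect to another pool
```

Keeping the affinity percentage at zero while shedding new traffic mirrors the gentler first step described above; raising it corresponds to the more disruptive second step.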

##### Health monitors

Health monitors are attached to endpoints at the endpoint pool, along with the health threshold and the health check region selection. Details of these options can be found in the [health monitor](#health-monitors) section.

### Load balancers

Load balancing within Cloudflare combines both GTM and Private Network Load Balancing into a single load balancer configuration. While certain features or terms may align more with GTM or Private Network Load Balancing, for Cloudflare customers, both are combined into a single, easy-to-manage instance.

Depending on their specific use case, organizations can leverage different types of Cloudflare Load Balancers. The following section highlights the main differences between the deployment models, and articulates when each type of load balancer should be implemented.

Figure 22 highlights all the possible combinations of load balancers and endpoints supported by Cloudflare:

![All the possible combinations of load balancer and endpoint types](https://developers.cloudflare.com/_astro/lb-ref-arch-22-ALT.DPr9OdxY_1kYKMO.svg "Figure 22: The combinations of public and private load balancers and endpoints and how they connect")

Figure 22: The combinations of public and private load balancers and endpoints and how they connect

#### Deployment models

Cloudflare offers three load balancing deployment models, each of which support different use cases, functionality, and privacy requirements.

* [Layer 7 HTTP(S) load balancing](#layer-7-https-load-balancing)
* [DNS-only load balancing](#dns-only-load-balancing)
* [Spectrum load balancing](#spectrum-load-balancing)

Except for the DNS-only load balancing option described in more detail below, all of the deployment models anchor traffic through the load balancer. This means the user or client creating the request or connection is never aware of the endpoints that are being used to service the request or connection. Endpoint information can certainly be exposed — if desired — through the use of headers, but this is not default behavior for any of these anchored deployment models.

The following explores the three deployment models (and their differences) in more detail.

##### Layer 7 HTTP(S) load balancing

First, the most common model is the **HTTP(S)-based layer 7 proxied load balancer**. These load balancers exist on Cloudflare’s edge and are publicly reachable. Amongst other features, this model supports [WebSockets](https://developers.cloudflare.com/network/websockets/), which are open connections between the client and endpoint allowing for data to be passed back and forth between the two.

Because this same layer 7 security stack also provides WAF, DDoS protection, Bot Management, Zero Trust, and other services, accessing these public load balancers can be restricted to authenticated and authorized users as needed. (Please refer to [Securing Load Balancers](#protecting-and-securing-load-balancers) for more information.)

In this layer 7 stack, load balancing can further improve the performance, reliability, and reachability of an organization’s public-facing web assets. The endpoints for these load balancers may be deployed in public cloud, private cloud, on-premises, or any combination thereof within the same load balancer. (Please refer to [Connecting endpoints to Cloudflare](#connecting-endpoints-to-cloudflare) for more details about how to connect endpoints to Cloudflare’s edge network).

![Layer 7 load balancing request flow to two different types of endpoints](https://developers.cloudflare.com/_astro/lb-ref-arch-23-ALT.DRZo2XIF_1kYKMO.svg "Figure 23: How Cloudflare’s Layer 7 load balancers can steer traffic to both public and private endpoints")

Figure 23: How Cloudflare’s Layer 7 load balancers can steer traffic to both public and private endpoints

As illustrated in Figure 23 above, the load balancing component of the layer 7 stack is the last process run on a request as it moves towards the endpoint. This can have a large positive impact on increasing performance and reducing load on endpoints.

For example, caching can prevent requests from ever reaching the endpoint and can be responded to without ever having to engage the load balancers. Also, WAF, DDoS protection, and Bot Management can eliminate attack traffic altogether — leaving more capacity for legitimate traffic.

Once a request reaches the load balancer process, the request is always sent directly to the endpoint that was selected. This means that even if the endpoint is proxied through Cloudflare, the request will be sent directly to the endpoint and receives no further processing.

For customized treatment after the load balancer selects an endpoint, the load balancer’s Custom Rules are applied. (This is covered in detail in the [Custom rules](#custom-rules) section below).

**Important notes about Layer 7 HTTP(S) load balancers:**

* Layer 7 HTTP(S) load balancers support both public and private endpoints
* Layer 7 HTTP(S) load balancers will only support HTTP(S) and WebSocket traffic
* Zero trust policies can be applied to Layer 7 HTTP(S) load balancers

##### DNS-only load balancing

Cloudflare’s DNS-only load balancer is an unproxied load balancer. This means that only the initial DNS request for the resource — not the actual traffic — passes through the Cloudflare edge. Therefore, instead of a DNS request resolving to a Cloudflare IP and then moving through the layer 7 stack as seen earlier in Figure 7, Cloudflare receives a DNS request for a DNS-only load balancer, applies all the appropriate load balancing policies, then returns an IP address to the requesting client to reach out directly.

Because all the traffic between the client and the endpoint will travel directly between the two and not through Cloudflare’s layer 7 stack, any type of IP traffic can be supported by a DNS-only load balancer.

![The orange cloud icon represents a proxied Layer 7 Cloudflare Load Balancer](https://developers.cloudflare.com/_astro/lb-ref-arch-24.Bw_izDOL_114CG5.webp "Figure 24: A proxied load balancer configuration")

Figure 24: A proxied load balancer configuration

![The gray cloud icon represents an unproxied \(DNS-only\) load balancer](https://developers.cloudflare.com/_astro/lb-ref-arch-25.Dz4ThM-k_2oDFUF.webp "Figure 25: An unproxied (DNS-only) load balancer configuration")

Figure 25: An unproxied (DNS-only) load balancer configuration

Even though Cloudflare does not proxy these types of load balancer connections, the health monitor service is still monitoring the health on all the endpoints in the pool. Based on the health or availability of an endpoint, a Cloudflare DNS-only load balancer will either add or remove an applicable endpoint to a DNS response to ensure that traffic is being steered to healthy endpoints.

![DNS-only load balancers only use Cloudflare to respond to a DNS request](https://developers.cloudflare.com/_astro/lb-ref-arch-26.BB1TuXz__Zaj07b.svg "Figure 26: How Cloudflare’s DNS-only load balancer functions")

Figure 26: How Cloudflare’s DNS-only load balancer functions

After a DNS-only load balancer has selected an endpoint pool via traffic steering, one or many IP addresses may be returned in the DNS response.

The decision to send one or many IP addresses within the DNS response is based on the weight assigned to the endpoints within the selected endpoint pool:

* If all the weights are equal across all endpoints, all IP addresses of all the endpoints will be returned in the DNS response.
* If at least one endpoint is specified with a unique weight within the endpoint pool, then only a single IP address will be returned in the DNS response — regardless of the endpoint steering method selected on the endpoint pool.

This gives organizations the flexibility to allow applications to be aware of all the endpoints and perform local failover, or to allow Cloudflare to provide a single IP for an application to utilize.
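The weight-dependent response behavior can be sketched as follows. The addresses are documentation placeholders, and the single-answer selection is simplified (a real weighted selection would apply; the sketch just returns the highest-weighted endpoint):

```python
# DNS-only response sketch: equal weights return every endpoint IP;
# any unique weight collapses the answer to a single IP.
def dns_answer(endpoints):
    weights = {weight for _, weight in endpoints}
    if len(weights) == 1:
        return [ip for ip, _ in endpoints]
    # Simplified stand-in for weighted single-answer selection.
    return [max(endpoints, key=lambda e: e[1])[0]]

equal = [("203.0.113.10", 1), ("203.0.113.11", 1)]
mixed = [("203.0.113.10", 2), ("203.0.113.11", 1)]
```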

Figure 27 shows how the defined weight within an endpoint pool can affect how a DNS-only load balancer responds.

![DNS-only load balancers can respond to DNS requests with one or many IP addresses](https://developers.cloudflare.com/_astro/lb-ref-arch-27.CJr7dL0T_Zrfoln.svg "Figure 27: How weight affects the DNS response from a DNS-only load balancer")

Figure 27: How weight affects the DNS response from a DNS-only load balancer

Please note that DNS-only load balancers have a few limitations compared to proxied load balancers:

* The load balancer no longer hides the endpoint’s IP address from the client as it is sent back to the client directly.
* They do not have the built-in layer 7 stack services mentioned in the previous model; i.e., DNS-only load balancers do not include caching, WAF, DDoS protection, or Zero Trust support.
* Session affinity is limited to `ip_cookie`, which will select an endpoint deterministically and then map that endpoint to the client IP address for all subsequent requests.
* Finally, because connections are not proxied through DNS-only load balancers, certain steering methods will not work. For example, [LORS](#least-outstanding-requests-steering-lors) will not work, since Cloudflare is not aware of the connections to the endpoints. These steering methods revert to random weighted steering.

For more information on additional steering methods, please refer to the [Steering](#steering) section.

There are also client and resolver DNS cache considerations when using DNS-only load balancers. The cache life is determined by the DNS server answering the request. The [Time-to-Live (TTL) ↗](https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/) value tells a DNS requester how long the response is valid before the client should send a new DNS request to see if the destination has changed. The TTL is expressed in seconds, so, for example, a TTL value of 3600 equates to one hour. Standard DNS TTL values, however, are usually 12 or 24 hours (43200 or 86400 seconds, respectively).

The TTL of a DNS-only load balancer is set to 30 (seconds). This ensures that as endpoint health changes or endpoints are added or deleted, the DNS-only load balancer is queried more often to provide the most accurate list of available endpoints possible.
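The TTL arithmetic above, as a tiny sketch (the helper name is invented):

```python
# DNS TTLs are expressed in seconds.
def ttl_seconds(hours):
    return int(hours * 3600)

# Cloudflare sets DNS-only load balancer TTLs to 30 seconds, far
# below the common 12- or 24-hour records, so clients re-query often.
DNS_ONLY_LB_TTL = 30
```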

**Important notes about DNS-only load balancers:**

* DNS-only load balancers support only public endpoints
* DNS-only load balancers do not proxy traffic and, as such, are not involved in the connections to endpoints
* DNS-only load balancers only respond to a DNS request with an IP address or set of IP addresses

##### Spectrum load balancing

Cloudflare also offers another ingress method via the [Spectrum](https://developers.cloudflare.com/spectrum/) product.

Where the layer 7 stack only supports HTTP(S) and WebSockets, Spectrum offers support for any TCP- or UDP-based protocol. A Cloudflare Load Balancer using Spectrum as an ingress for traffic operates at layer 4, where both TCP and UDP protocols exist. Any service that utilizes TCP or UDP for transport can leverage Spectrum with a Cloudflare Load Balancer, including SSH, FTP, NTP, SMTP, and more.

Given the breadth of services and protocols this represents, the treatment provided is more generalized than what is offered with the layer 7 HTTP(S) stack. For example, Cloudflare Spectrum supports features such as TLS/SSL offloading, DDoS protection, IP Access lists, Argo Smart Routing, and session persistence with our layer 4 load balancers.

![Spectrum-based load balancing supports public endpoints](https://developers.cloudflare.com/_astro/lb-ref-arch-28-ALT.Dwf-s8s__1kYKMO.svg "Figure 28: Spectrum Layer 4 load balancers support both TCP and UDP protocols")

Figure 28: Spectrum Layer 4 load balancers support both TCP and UDP protocols

Cloudflare layer 4 Spectrum load balancers are publicly accessible. Access to these load balancing resources can be managed using a Spectrum configuration called _IP Access Rules,_ which can be defined as part of a WAF configuration, but are limited to rules created with the “allow” or “block” action for specific IP addresses, subnets, countries, or [Border Gateway Protocol (BGP) ↗](https://www.cloudflare.com/learning/security/glossary/what-is-bgp/) Autonomous System Numbers (ASNs).

In addition to being public, Spectrum load balancers are always proxied. The proxy setting shown earlier (Figures 24 and 25) will be ignored when Spectrum is configured as the ingress path for the load balancer. All traffic destined for Spectrum-based load balancers will always pass through the Cloudflare edge.

**Important notes about Spectrum load balancers:**

* Spectrum load balancers support both public and private endpoints
* Spectrum load balancers are initially created as Layer 7 HTTP(S) load balancers. A Spectrum application is then created with a Load Balancer endpoint type, and the load balancer that has already been created is selected.
* Spectrum load balancers are always proxied, regardless of the proxy setting on the load balancer configuration
* There is no ability to change the ingress port from the Internet via Spectrum to the endpoint; i.e., if the traffic comes in on port 22 to Spectrum, it will be steered to port 22 on the endpoint
* Spectrum load balancers only support session affinity using the hash endpoint steering method
* Spectrum load balancers do not support Custom Rules

##### Deployment models at-a-glance

| Load Balancer Model | Public | Proxied | OSI Layer | Traffic Type |
| ------------------- | ------ | ------- | --------- | ------------ |
| Layer 7 HTTP(S)     | X      | X       | 7         | HTTP(S)      |
| DNS-Only            | X      |         | 7 (DNS)   | IP-Based     |
| Spectrum            | X      | X       | 4         | TCP/UDP      |

#### Load balancer details

##### Hostname

The hostname setting is the publicly-reachable hostname for the load balancer. The hostname must be created within the zone for which the load balancer is being created.

##### Proxy status

The proxy setting determines whether Cloudflare will proxy traffic for the load balancer or simply provide a DNS reply with the endpoints for the client to directly connect. This is covered in detail in the [Deployment models](#deployment-models) section.

##### Session affinity

Session affinity, also known as session persistence or sticky sessions, keeps a client connected to the same endpoint for all subsequent requests after the first request or connection. This can be an important feature for applications that don’t share session data — the context of a user’s interaction with a web application — between endpoints. For example, if a new endpoint were selected in the middle of a client session and information about the session (e.g. the contents of a user’s shopping cart) were lost, the user experience for that application would be poor.

Cloudflare offers three methods for enabling session affinity:

1. **By Cloudflare cookie only (cookie):** On the first request to a proxied load balancer, a cookie is generated, encoding information of which endpoint the request will be forwarded to. Subsequent requests (by the same client to the same load balancer) will be sent to the endpoint that the cookie encodes for a) the duration of the cookie and b) as long as the endpoint remains healthy. If the cookie has expired or the endpoint is unhealthy, a new endpoint will be calculated and used.
2. **By Cloudflare cookie and Client IP fallback (ip\_cookie):** This behaves similarly to the cookie method above, except that the cookie is generated based on the client IP address. In this case, requests from the same IP address always get steered towards the same endpoint for a) the duration of the cookie and b) as long as the endpoint remains healthy. If the cookie has expired or the endpoint is unhealthy, a new endpoint will be calculated and used.
3. **By HTTP header (header):** On the first request to a proxied load balancer, a session key is generated based on the configured HTTP headers. Subsequent requests to the load balancer with the same headers will be sent to the same endpoint, for a) the duration of the session or b) as long as the endpoint remains healthy. If the session has been idle for the duration of session affinity time-to-live (TTL) seconds or the endpoint is unhealthy, then a new endpoint will be calculated and used.

These three session affinity options only apply to layer 7 HTTP(S) load balancers. Session affinity requires a TTL, which determines how long the load balancer will route subsequent requests to a specific endpoint. The default TTL is 82,800 seconds (23 hours), but it can be set anywhere from 1,800 seconds (30 minutes) to 604,800 seconds (seven days).

For cookie-based session affinity, the expiration timer is never reset, meaning that the timer is counting down from the start of the session — regardless of the session being idle or active. HTTP header-based session affinity will reset the expiration timer every time there is activity in the session.
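A minimal sketch of affinity with a TTL, assuming a session record that stores the pinned endpoint and its expiry (all names here, including the `route` helper, are illustrative, not Cloudflare's implementation):

```python
import time

# Session affinity sketch: the first request pins an endpoint and
# stamps an expiry; later requests reuse the pinned endpoint until
# the TTL lapses or the endpoint turns unhealthy.
def route(session, healthy, pick, ttl, now=None):
    now = time.time() if now is None else now
    endpoint = session.get("endpoint")
    if endpoint in healthy and now < session.get("expires", 0):
        return endpoint  # still pinned; expiry timer is NOT reset
    endpoint = pick(healthy)  # pick a fresh endpoint
    session.update(endpoint=endpoint, expires=now + ttl)
    return endpoint
```

Note that, as with cookie-based affinity described above, the sketch never resets the expiry on reuse; header-based affinity would instead refresh `expires` on each request.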

##### Endpoint draining

Endpoint draining is a subfeature of session affinity. It allows for sessions to gracefully expire from an endpoint while not allowing new sessions to be created on that same endpoint. Endpoint draining is useful for maintenance, as it does not require administrators to arbitrarily or abruptly cut off user sessions in order to remove all active sessions from an endpoint.

The endpoint drain TTL is the amount of time that endpoints will be allowed to maintain active sessions before being forcefully terminated. Once the endpoint drain TTL is set, endpoint draining is started by disabling an endpoint (or multiple endpoints) within an endpoint pool. As seen in the image below, administrators can monitor the time remaining on an endpoint draining operation from the load balancer UI.

![Endpoint draining in process from web user interface](https://developers.cloudflare.com/_astro/lb-ref-arch-30.todYN9Ax_1LLmJE.webp "Figure 30: Endpoint draining occurring within a Cloudflare Load Balancer")

Figure 30: Endpoint draining occurring within a Cloudflare Load Balancer

Endpoint draining is only applicable for session affinity because without session affinity, subsequent requests or connections are not guaranteed to be steered to the same endpoint. Thus, disabling an endpoint does not have an impact on user experience.

##### Zero-downtime failover

Zero-downtime failover automatically sends traffic to endpoints within an endpoint pool during transient network issues. 

Zero-downtime failover will trigger a single retry only if there is another healthy endpoint in the pool and a [521, 522, 523, 525 or 526 error code](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-521/) is occurring. No other error codes will trigger a zero-downtime failover operation.

These response codes are not returned by the endpoint itself; they are generated when requests made by upstream Cloudflare services to an organization's endpoints fail. 
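The retry condition can be summarized in a few lines. The function and variable names are illustrative; only the set of triggering status codes comes from the text:

```python
# Zero-downtime failover sketch: a single retry is attempted only
# for the listed Cloudflare 52x codes, and only when another
# healthy endpoint exists in the pool to retry against.
FAILOVER_CODES = {521, 522, 523, 525, 526}

def should_retry(status, tried_endpoint, healthy_endpoints):
    alternatives = [e for e in healthy_endpoints if e != tried_endpoint]
    return status in FAILOVER_CODES and bool(alternatives)
```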

Zero-downtime failover has three modes of operation:

1. **None (Off):** No failover will take place and users may receive error messages or a poor user experience.
2. **Temporary:** Traffic will be sent to other endpoint(s) until the original endpoint is available again.
3. **Sticky:** The session affinity cookie is updated and subsequent requests are sent to the new endpoint going forward. This is not supported when session affinity is using HTTP header mode.

##### Adaptive routing - failover across pools

_Adaptive routing - failover across pools_ extends the functionality of zero-downtime failover by allowing failover to extend to endpoints in another endpoint pool, rather than only failing over to an endpoint in the _same_ pool.

##### Endpoint pools

Endpoint pools are configured in a priority order and can be rearranged as needed. This priority order is only considered when using _Off - Failover traffic steering;_ otherwise, endpoint pools will be selected based on the criteria outlined in the [Steering methods](#steering-methods) section.

The endpoint pools assigned to a load balancer represent the entire collection of endpoints that could possibly handle requests or connections through the load balancer. An endpoint pool typically contains endpoints that all have the same capabilities and are in the same data center or geographic region. All endpoints in a pool should be capable of handling any request directed to the endpoint pool. For more information about endpoint pools, please refer to the [Endpoint pools](#endpoint-pools) section.

##### Fallback pools

A fallback pool is the pool of last resort. When all endpoint pools are unavailable or unhealthy, the fallback pool will be used for all requests and connections. While health monitor data is always considered when steering traffic within a load balancer, a fallback pool does not rely on this data and is not subject to it.
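A minimal sketch of pool selection with a pool of last resort, assuming hypothetical pool identifiers and a health-check callback:

```python
# Illustrative sketch of priority-order pool selection with a fallback pool,
# as described above. The data structures are hypothetical.
def select_pool(pools_in_priority_order, fallback_pool, is_healthy):
    """Return the first healthy pool in priority order. If none are healthy,
    use the fallback pool, which is used regardless of its own health data."""
    for pool in pools_in_priority_order:
        if is_healthy(pool):
            return pool
    return fallback_pool
```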

##### Health monitors

Health monitors are usually configured as part of the endpoint pool. Health monitors can be added, changed, or deleted as part of the load balancer configuration. Please see the [Health monitors](#health-monitors) section for more information.

##### Traffic steering

Traffic steering is the method of steering between endpoint pools. For help understanding which traffic steering method to select, please see the [Steering types and methods](#steering-types-and-methods) section.

##### Custom rules

[Custom rules](https://developers.cloudflare.com/load-balancing/additional-options/load-balancing-rules/) allow users to perform actions on requests or connections before the load balancer finishes its decision process. Custom rules are configured with expressions that match certain [fields](https://developers.cloudflare.com/load-balancing/additional-options/load-balancing-rules/reference/) in requests or connections. Once the expression is created to match traffic, an [action](https://developers.cloudflare.com/load-balancing/additional-options/load-balancing-rules/actions/) is assigned for when a request or connection matches the expression.

Custom rules are a powerful tool for customizing the steering and output from a load balancer before the request or connection is sent to the endpoint. For example, the HTTP method (e.g. GET, PUT, POST) could be matched to ensure that POST messages are sent to a specific endpoint pool dedicated to receiving information from clients.

Alternatively, the session affinity TTL could be reset based on a request going to a specific URL path to ensure that the client has enough time to complete the transaction.

It is not possible to document all of the potential combinations of fields that can be matched and actions that can be taken. However, the following resources describe all of the fields and actions that are currently available:

* [Supported fields and operators](https://developers.cloudflare.com/load-balancing/additional-options/load-balancing-rules/reference/)
* [Load Balancing actions](https://developers.cloudflare.com/load-balancing/additional-options/load-balancing-rules/actions/)

If the default behavior of a load balancer is not covered in the documents listed above, it is likely that a custom rule can help meet unique use case requirements.
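As a hedged sketch, the earlier POST-steering example could be configured through the Cloudflare v4 API roughly as follows. The IDs, token, pool ID, and exact payload shape are assumptions to verify against the API reference before use:

```python
# Hypothetical sketch: attaching a custom rule to a load balancer via the
# Cloudflare v4 API. All identifiers below are placeholders.
import json
import urllib.request

ZONE_ID = "your-zone-id"          # hypothetical
LB_ID = "your-load-balancer-id"   # hypothetical
API_TOKEN = "your-api-token"      # hypothetical

rule = {
    "name": "steer-posts",
    "condition": 'http.request.method eq "POST"',      # expression to match
    "overrides": {"default_pools": ["post-pool-id"]},  # action: steer to pool
}

req = urllib.request.Request(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/load_balancers/{LB_ID}",
    data=json.dumps({"rules": [rule]}).encode(),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="PATCH",
)
# urllib.request.urlopen(req)  # left commented: requires real credentials
```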

### Protecting and securing load balancers

#### Inherent security

All Cloudflare Load Balancer deployment models come with inherent protections. The following section briefly highlights the default security Cloudflare provides, as well as optional protections that can be added in front of Cloudflare Load Balancers:

* Proxied HTTP layer 7 load balancer (Public)  
   * [DDoS protection](https://developers.cloudflare.com/ddos-protection/managed-rulesets/http/) to protect against attacks  
   * WAF with [Cloudflare managed ruleset](https://developers.cloudflare.com/waf/managed-rules/reference/cloudflare-managed-ruleset/) and [OWASP ruleset](https://developers.cloudflare.com/waf/managed-rules/reference/owasp-core-ruleset/) to block known vulnerabilities and exploits
* DNS-only load balancer (Public)  
   * [DNS DDoS protection ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/) to ensure a DNS-only load balancer is always available
* Spectrum layer 4 load balancer (Public)  
   * [DDoS Protection](https://developers.cloudflare.com/spectrum/about/ddos-for-spectrum/) to protect against layer 4 attacks

#### Additional options

Cloudflare offers additional security layers that can be used in conjunction with load balancing to protect any services — including websites, APIs, HTTP(S)-based services, and more:

* Proxied HTTP layer 7 load balancer (Public)  
   * [Bot management](https://developers.cloudflare.com/bots/) to control which bots can access resources  
   * [WAF](https://developers.cloudflare.com/waf/) for creating custom rules for web applications  
   * [Client-side security](https://developers.cloudflare.com/client-side-security/) for monitoring script usage on web applications  
   * [API Shield](https://developers.cloudflare.com/api-shield/) for protecting APIs
* DNS-only load balancer (Public)  
   * [DNSSEC](https://developers.cloudflare.com/dns/dnssec/) to ensure authenticity of DNS records
* Spectrum layer 4 load balancer (Public)  
   * [IP Access Rules](https://developers.cloudflare.com/spectrum/reference/configuration-options/#ip-access-rules) for controlling access to public layer 4 load balancers

## Summary

The Cloudflare global anycast network is a powerful platform for load balancing. A load balancing configuration in Cloudflare is accessible in over 330 cities across the world and has virtually unlimited capacity and bandwidth.

These load balancers operate within approximately 50ms of about 95% of the Internet-connected population, including endpoints that allow Cloudflare Load Balancers to perform both GTM and Private Network Load Balancing. Cloudflare now combines these two distinct load balancing concepts into a single load balancer. This helps enable organizations to steer traffic to geographically-relevant data centers, then select the proper endpoint to handle the request.

With Cloudflare Tunnel, endpoints can be located within private networks and still be utilized by Cloudflare Load Balancers. Cloudflare offers public layer 7 load balancers that support both HTTP(S) and WebSockets, as well as public layer 4 load balancers that can steer any TCP or UDP traffic. This means that Cloudflare can offer load balancing services to all organizations and users, no matter their location, use cases, or existing configurations.


---

---
title: Magic Transit Reference Architecture
description: This reference architecture describes the key architecture, functionalities, and network deployment options of Cloudflare Magic Transit.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Magic Transit Reference Architecture

**Last reviewed:**  over 3 years ago 

## Introduction

The purpose of this document is to describe the key architecture, functionalities, and network deployment options of [Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/) — a BGP-based DDoS protection and traffic acceleration service for Internet-facing network infrastructure.

### Who is this document for and what will you learn?

This reference architecture is designed for IT or network professionals with some responsibility over or familiarity with their organization's existing network infrastructure. It is useful to have some experience with technologies and concepts important to content delivery, including routers, DNS and firewalls.

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)
* Blog: [Magic Transit makes your network smarter, better, stronger, and cheaper to operate ↗](https://blog.cloudflare.com/magic-transit) (14 minute read)

Those who read this reference architecture will learn:

* How Cloudflare Magic Transit protects your network infrastructure from denial of service attacks (DDoS)
* How to architect Magic Transit into your existing network infrastructure

## What is Magic Transit?

Protecting network infrastructure from DDoS attacks demands a unique combination of strength and speed. Volumetric attacks can easily overwhelm hardware boxes and their bandwidth-constrained Internet links. And most cloud-based solutions redirect traffic to centralized scrubbing centers, which impacts network performance significantly.

Cloudflare Magic Transit provides DDoS protection and traffic acceleration for on-premise, cloud, and hybrid networks. With data centers spanning [hundreds of cities ↗](https://www.cloudflare.com/network/) and offering hundreds of Tbps in mitigation capacity, Magic Transit can detect and mitigate attacks close to their source of origin in under three seconds globally on average — all while routing traffic faster than the public Internet.

![Figure 1: Magic transit overview](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-1.BqSmsUYf_ZgdSYQ.webp "Figure 1: Magic transit overview")

Figure 1: Magic transit overview

At a high level, Magic Transit works as follows:

* **Connect:** Using Border Gateway Protocol (BGP) route announcements to the Internet, and the Cloudflare anycast network, customer traffic is ingested at a Cloudflare data center closest to the source.
* **Protect and Process:** All customer traffic is inspected for attacks. Advanced and automated mitigation techniques are applied immediately upon detecting an attack. Additional functions such as load balancing, next-generation firewall, content caching, and serverless compute are also available as a service.
* **Accelerate:** Clean traffic is routed over Cloudflare’s low-latency network links for optimal throughput and handed off over IP tunnels (either GRE or IPsec) or private network interconnects (PNI) to the origin network. Magic Transit uses anycast IP addresses for Cloudflare’s tunnel endpoints, meaning that any server in any data center is capable of encapsulating and decapsulating packets for the same tunnel. For more details specifically on tunnels and encapsulation, refer to [GRE and IPsec tunnels](https://developers.cloudflare.com/magic-transit/reference/gre-ipsec-tunnels/).

### Baking resilience into our network using anycast

Magic Transit uses anycast IP addresses for the Cloudflare side of its tunnel endpoints — so a single tunnel configured from a customer’s network to Cloudflare connects to all Cloudflare global data centers (excluding the [China Network](https://developers.cloudflare.com/china-network/)). This does not add strain on the router; from the router’s perspective, it is a single tunnel to a single IP endpoint.

This works because while the tunnel endpoint is technically bound to an IP address, it need not be bound to a specific device. Any device that can strip off the outer headers and then route the inner packet can handle any packet sent over the tunnel.

In the event of a network outage or other issues, tunnels fail over automatically — with no impact to a customer’s network performance.

## Deployment architectures for Magic Transit

### Default configuration (ingress only, direct server return)

By default, Magic Transit processes traffic in the ingress direction only (from the Internet to the customer network). The server return traffic back to the clients is routed by the customer's DC edge router via its uplinks to the Internet/ISP based on the edge router’s default routing table. This server return traffic will not transit through Cloudflare via tunnels. This is referred to as Direct Server Return (DSR).

The network diagram in Figure 2 illustrates such a Magic Transit setup, and the end-to-end packet flow of Magic Transit-protected traffic. The tunnel in this setup uses GRE for encapsulation.

![Figure 2: Reference Configuration of Magic Transit anycast Tunnel \(GRE\) With Default DSR Option](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-2.XvKY3pME_2an4s8.webp "Figure 2: Reference Configuration of Magic Transit anycast Tunnel (GRE) With Default DSR Option")

Figure 2: Reference Configuration of Magic Transit anycast Tunnel (GRE) With Default DSR Option

* Cloudflare provides the customer with a pair of anycast IP addresses for the Cloudflare end of the tunnel endpoints. These are publicly routable IP addresses from Cloudflare-owned address space. The pair of anycast IP addresses can be used to configure two tunnels for network redundancy, although only one is required for a basic configuration. The above configuration shows a single tunnel, with the Cloudflare end of the tunnel endpoint address being 192.0.2.1.
* The customer end of the anycast GRE tunnel needs to be a publicly routable address. It is typically the IP address of the WAN interface on the customer edge router. In this example it is 192.0.2.153.
* The IP addresses of the tunnel interfaces are RFC 1918 private addresses. These addresses are only "locally significant" within the particular Magic Transit service instance that they are part of. Therefore, the customer can select any RFC 1918 addresses they desire, as long as they do not overlap with those of other tunnels configured within the same Magic Transit service instance.
* As a best practice, given that tunnels are point-to-point connections, a /31 subnet is sufficient for allocating the two IP addresses required for a given tunnel. In the above example, the 10.10.10.0/31 subnet is chosen, with the Cloudflare end of the tunnel interface being 10.10.10.0/31 and the customer's DC edge router side being 10.10.10.1/31.
* Once the tunnel is configured, a route is configured in the Magic Transit service instance to forward traffic destined to a given customer prefix onto the correct tunnel.
* Traffic destined to customer prefix 203.0.113.0/24 is routed onto the tunnel whose remote end (i.e. the customer’s end, from the Cloudflare network's perspective) of the tunnel interface is 10.10.10.1.
* Given this is a Direct Server Return (DSR) setup, the server return traffic follows the default route (ip route 0/0) configured on the customer edge router and is sent to its uplink peer (i.e. customer’s ISP's router), en route back to the clients over the Internet. This return traffic does not traverse the Cloudflare network.
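The customer side of the tunnel described above might look like the following Cisco IOS-style sketch, using the example addresses from Figure 2. The interface name, MTU value, and platform syntax are assumptions to verify against the router vendor's documentation:

```txt
! Hypothetical customer edge router GRE tunnel to Cloudflare (IOS-style)
interface Tunnel1
 description Magic Transit anycast GRE tunnel
 ip address 10.10.10.1 255.255.255.254   ! customer side of the /31
 tunnel source 192.0.2.153               ! public WAN interface address
 tunnel destination 192.0.2.1            ! Cloudflare anycast tunnel endpoint
 ip mtu 1476                             ! assumes 24 bytes of GRE+IP overhead
```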

**Note:** The smallest IP prefix size (i.e. with the longest IP subnet mask) that most ISPs accept in each other's BGP advertisements is /24; e.g. x.x.x.0/24 or y.y.y.0/23 are okay, but z.z.z.0/25 is not. Therefore, the smallest IP prefix size Cloudflare Magic Transit can advertise on behalf of the customers is /24.

### Magic Transit with egress option enabled

When Magic Transit is deployed with the Egress option enabled, egress traffic from the customer's network flows over the Cloudflare network as well. This deployment option provides symmetry to the traffic flow, where both client-to-server and server-return traffic flow through the Cloudflare network. This implementation provides added security and reliability to the server-return traffic, as afforded by the Cloudflare network.

The following network diagram illustrates the end-to-end packet flow between the end client and customer network when the Magic Transit Egress option is enabled.

![Figure 3: Magic Transit With Egress Option Enabled](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-3._h1mIh77_Z2pXG3o.webp "Figure 3: Magic Transit With Egress Option Enabled")

Figure 3: Magic Transit With Egress Option Enabled

* The ingress traffic flow is the same as in the Default Configuration use case above.
* For egress traffic to be received and processed by Magic Transit, the source IP addresses of the traffic need to be in the range of the Magic Transit-protected IP prefixes, and the destination IP addresses need to be public Internet routable, i.e. non-RFC 1918 addresses.
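The two egress-eligibility conditions above can be sketched with Python's standard `ipaddress` module; the protected prefix is the documentation's example address block:

```python
# Sketch of the egress-eligibility check described above: source must be in a
# Magic Transit-protected prefix, destination must not be RFC 1918 space.
import ipaddress

PROTECTED = [ipaddress.ip_network("203.0.113.0/24")]  # example prefix
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def egress_eligible(src: str, dst: str) -> bool:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    in_protected = any(s in net for net in PROTECTED)
    dst_private = any(d in net for net in RFC1918)
    return in_protected and not dst_private
```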

It is worth noting that for customers who bring their own public IP addresses ([BYOIP](https://developers.cloudflare.com/byoip/)) for cloud-hosted services, the Magic Transit Egress option can provide additional value by eliminating the need for them to purchase and implement BYOIP services with their cloud providers, reducing their cloud bill and lowering operational costs.

To accomplish this, the IP tunnels that on-ramp traffic to Magic Transit are configured between the cloud providers' VPCs and the Cloudflare network. With the Magic Transit Egress option, both directions of client-server traffic would flow through these tunnels. The BYOIP addresses in the tunneled packets are hidden behind the outer tunnel endpoint IP addresses and the tunnel header, making them "invisible" to the underlying cloud provider network elements between the VPCs and the Cloudflare network.

### Magic Transit over Cloudflare Network Interconnect (CNI)

[Cloudflare Network Interconnect (CNI)](https://developers.cloudflare.com/network-interconnect/) allows customers to connect their network infrastructure directly to Cloudflare – bypassing the public Internet – for a more reliable, performant, and secure experience.

* CNI is provisioned by the cross-connect providers as a set of layer 2 connections, and Cloudflare allocates a pair of IP addresses from Cloudflare’s own Internet-routable IP address block for each connection.
* Cloudflare coordinates with the customer to configure these links and to establish a BGP peering session over the links during CNI onboarding.
* Once the BGP session is up between the Cloudflare network and the customer edge router that are connected via CNI, Cloudflare-owned prefixes will be advertised over this CNI link to the customer edge router.

Figure 4 illustrates a reference configuration for Magic Transit over CNI, and its associated packet flow.

**Note:** The example demonstrated here is for the default Magic Transit service without the Egress option enabled. As described in earlier sections, in Magic Transit Direct Server Return mode (i.e. Ingress only), the server return traffic will be routed by the customer edge router to the clients via their ISP through the public Internet.

![Figure 4: Reference Configuration of Magic Transit Over CNI \(Default DSR Option\)](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-4.CCh1ixzi_ZlhE2p.webp "Figure 4: Reference Configuration of Magic Transit Over CNI (Default DSR Option)")

Figure 4: Reference Configuration of Magic Transit Over CNI (Default DSR Option)

When the Magic Transit Egress option is enabled and utilized, the server return traffic can be sent back to the clients through the Cloudflare network, via the IP tunnels that are configured over the CNI connections. Figure 5 illustrates one such example.

![Figure 5: Reference Configuration of Magic Transit Over CNI with Egress Option Enabled](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-5.Dru7wSdW_lR5Sr.webp "Figure 5: Reference Configuration of Magic Transit Over CNI with Egress Option Enabled")

Figure 5: Reference Configuration of Magic Transit Over CNI with Egress Option Enabled

### Magic Transit protecting public cloud-hosted services

Magic Transit protects services hosted on-premise and in the cloud. This use case illustrates the configuration for a cloud-hosted deployment.

![Figure 6: Protect Multi-Cloud-Based Services With Magic Transit \(Egress Option Enabled\)](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-6.Cik4bTwC_Z2l472d.webp "Figure 6: Protect Multi-Cloud-Based Services With Magic Transit (Egress Option Enabled)")

Figure 6: Protect Multi-Cloud-Based Services With Magic Transit (Egress Option Enabled)

* In this example, a given customer has two cloud VPC deployments spread across two different cloud providers, and in two different geographical regions.
* In this example, the customer’s /24 or larger prefix is split into multiple smaller (i.e. longer subnet mask length) prefixes (e.g. /26) and assigned to the various VPCs in different locations. Upon establishing the tunnels from the Cloudflare network to each of the VPCs, the customer can configure routes centrally in the Magic Transit configuration to route traffic to the respective VPCs. Such configuration can be made via API or UI dashboard.
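The prefix split described above can be sketched with Python's standard `ipaddress` module (example addresses only):

```python
# Splitting the example /24 into four /26s, one per VPC, as described above.
import ipaddress

prefix = ipaddress.ip_network("203.0.113.0/24")
subnets = list(prefix.subnets(new_prefix=26))
# four /26s: 203.0.113.0/26, 203.0.113.64/26, 203.0.113.128/26, 203.0.113.192/26
```

Each resulting /26 would then be mapped to its own tunnel route in the Magic Transit configuration, while the parent /24 is what Cloudflare advertises to the Internet.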

Note that with the Magic Transit Egress option, the customer can bypass each cloud provider's BYOIP services, its associated fees, and the configuration and operations complexity, by sending egress traffic (i.e. server return or server-to-Internet traffic from the protected prefix) through the Cloudflare global network via the Magic Transit tunnels.

### Magic Transit and Cloudflare WAN

In addition to protecting and routing traffic for external-facing services of an enterprise (i.e. north-south Internet-routable traffic) with the Cloudflare Magic Transit service, customers can protect east-west "intra-enterprise" internal traffic (e.g. RFC 1918 private addresses), interconnecting all the sites of an enterprise, using [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/) (formerly Magic WAN).

Cloudflare WAN replaces legacy WAN architectures with the Cloudflare network, providing global connectivity, cloud-based security, performance, and control through one simple user interface.

The Cloudflare Magic Transit and Cloudflare WAN services combined provide a holistic, secure, reliable, and performant global network-as-a-service solution for an entire enterprise, protecting and accelerating north-south as well as east-west traffic.

Both services can either be deployed in the same service instance, or, for customers who prefer to keep the administration and traffic flow of external, Internet-facing networks and internal corporate networks completely separate, different service instances can be deployed for Magic Transit and Cloudflare WAN.

Figure 7 illustrates an example of deploying Magic Transit and Cloudflare WAN services in separate service instances.

![Figure 7: Magic Transit + Cloudflare WAN Provide Network-as-a-Service for the Entire Enterprise](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-7.DESTWgck_Z1mgu04.webp "Figure 7: Magic Transit + Cloudflare WAN Provide Network-as-a-Service for the Entire Enterprise")

Figure 7: Magic Transit + Cloudflare WAN Provide Network-as-a-Service for the Entire Enterprise

_Note: Labels in this image may reflect a previous product name._

* In the example, GRE tunnels are used to connect the customer's various sites over the Cloudflare global anycast network. The Cloudflare anycast IP address for the Magic Transit service instance is 192.0.2.1, while the one for the Cloudflare WAN service instance is 192.0.2.2. The Magic Transit service is enabled with the Egress option.
* The Magic Transit service protects and routes external-facing front-end client-server traffic. The Cloudflare WAN service protects and routes enterprise internal traffic such as that of internal applications, back-end database sync, and branch-to-DC and branch-to-branch traffic.

### Cloudflare Network Firewall: control and filter unwanted traffic before it reaches the enterprise network

While Magic Transit protects customers' services from DDoS attacks, many network administrators want to be able to control and block other unwanted or potentially malicious traffic. [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) enforces consistent network security policies across the entire customer WAN, including headquarters, branch offices, and virtual private clouds, and allows customers to deploy fine-grained filtering rules globally in seconds — all from a common dashboard.

Cloudflare Network Firewall is deployed and configured as part of Magic Transit. All ingress traffic flowing through Cloudflare edge data centers, whose destination prefixes are protected by Magic Transit, can be filtered by Cloudflare Network Firewall.

![Figure 8: Cloudflare Network Firewall Blocks Unwanted and Malicious Traffic at the Internet Edge](https://developers.cloudflare.com/_astro/magic-transit-ref-arch-8.BRW-6GQa_22TJ4T.webp "Figure 8: Cloudflare Network Firewall Blocks Unwanted and Malicious Traffic at the Internet Edge")

Figure 8: Cloudflare Network Firewall Blocks Unwanted and Malicious Traffic at the Internet Edge

_Note: Labels in this image may reflect a previous product name._

In Cloudflare Network Firewall rules, administrators can match and filter network traffic not only based on the typical 5-tuple (source/destination IP, source/destination port, protocol) information carried in the IP packet header but also other packet information such as IP packet length, IP header length, TTL, etc. In addition, geographical information such as the name of the Cloudflare data center/colo, the region, and the country the data centers are located in can also be used in configuring Network Firewall rules (geo-blocking).
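As an illustration, a Network Firewall rule might combine 5-tuple and geographic fields in a single expression. The wirefilter-style syntax and field names below are assumptions; consult the product's field reference for the exact names:

```txt
# Hypothetical rule expression (field names are illustrative): block UDP
# traffic from a given country to a protected prefix, while allowing DNS.
ip.dst in {203.0.113.0/24} and ip.geoip.country == "XX" and udp.dstport != 53
```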

For further details on Cloudflare Network Firewall and its configuration, refer to [Introducing Magic Firewall ↗](https://blog.cloudflare.com/introducing-magic-firewall/) and [Cloudflare Network Firewall documentation](https://developers.cloudflare.com/cloudflare-network-firewall/).

## A note on always-on and on-demand deployments

A cloud DDoS mitigation service provider can monitor traffic for threats at all times (the always-on deployment model) or reroute traffic only when an attack is detected (on-demand). This decision affects response time and time-to-mitigation. In some cases, it also has repercussions for latency.

In an on-demand deployment model, inbound traffic is monitored and measured at the network edge to detect volumetric attacks. During normal operations, or "peacetime," all traffic directly reaches applications and infrastructure without any delay or redirection. Traffic is diverted to the cloud scrubbing provider only in the case of an active DDoS attack. In many cases, a customer is required to call the service provider to redirect traffic, thereby increasing the response time.

The always-on mode is a hands-off approach to DDoS mitigation that does not require the customer to do anything in the event of an attack. The organization’s traffic is always routed through the cloud provider’s data centers for threat inspection, even during peacetime. This minimizes the time from detection to mitigation, and there is no service interruption.

Of all approaches and deployment options, the always-on method provides the most comprehensive protection.

However, depending on the provider, diverting all traffic through the DDoS mitigation provider’s cloud might add latency that is suboptimal for business-critical applications. Cloudflare is architected so that customers do not incur a latency penalty as a result of attacks — even for always-on deployments. Analyzing traffic at the edge is the only way to mitigate at scale without impacting performance.

This is because ingesting traffic via anycast ensures that traffic travels only to the nearest Cloudflare data center for inspection. With data centers in [hundreds of cities worldwide ↗](https://www.cloudflare.com/network/), it is likely to be a short distance. This eliminates the trombone effect.

In many cases, [traffic is faster when routed over Cloudflare ↗](https://www.cloudflare.com/static/360e550c8890054d5e5835efb9fb8dd1/Magic%5FTransit%5Fprotects%5Fnetworks%5Fwhile%5Falso%5Fimproving%5Fperformance%5F%5F1%5F.pdf) than over the public Internet. We believe customers should not have to sacrifice performance to achieve comprehensive security.

## Summary

Cloudflare offers comprehensive network services to connect and protect on-premise, cloud-hosted, and hybrid enterprise networks. Cloudflare provides various connectivity and deployment options to suit customers' unique architectures.

* Cloudflare Magic Transit is a cloud-native network security solution that uses the power of the Cloudflare global network to protect organizations against DDoS attacks.
* Magic Transit comes with a built-in network firewall that helps customers phase out on-premise firewalls and deploy network security as-a-service that scales.
* In addition to protecting and routing traffic for external-facing services of an enterprise (i.e. north-south Internet-routable traffic), customers can connect and protect east-west “intra-enterprise” internal traffic using Cloudflare WAN.

If you would like to learn more about Magic Transit, Cloudflare WAN, or Cloudflare Network Firewall, [contact us for a demo ↗](https://www.cloudflare.com/magic-transit/).


---

---
title: Multi-vendor Application Security and Performance Reference Architecture
description: This reference architecture describes how a multi-vendor approach for application security and performance can be accomplished.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Multi-vendor Application Security and Performance Reference Architecture

**Last reviewed:**  over 2 years ago 

## Introduction

As the application security and performance industries have rapidly evolved, companies have come to deploy multiple vendors to provide these services. Customers sometimes opt for multiple vendors for reasons of regulatory or company compliance, resiliency, performance, or cost.

Although some customers look to implement multi-vendor solutions for various reasons discussed in this document, multi-vendor deployments can introduce additional complexity, higher operational costs due to multiple dashboards and configurations, and a steeper learning curve. Additionally, while trying to establish a baseline of supported features across multiple vendors, customers can end up having a minimum common denominator setup, not taking advantage of the latest capabilities/innovations from a vendor. Customers should carefully consider the goals and requirements, and weigh pros and cons with all stakeholders, before proceeding with a multi-vendor deployment.

This document examines why some customers deploy a multi-vendor or dual-vendor approach and how Cloudflare can be incorporated into such a solution. Specifically, it describes how a multi-vendor approach for application security and performance can be accomplished. It is targeted at architects and those interested in using multi-vendor cloud-based solutions for security and performance.

### Who is this document for and what will you learn?

This reference architecture is designed for IT, security or network professionals with some responsibility over or familiarity with their organization’s existing network infrastructure. It is useful to have some experience with technologies and concepts important to application security and performance, including proxies, DNS and firewalls.

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)

Those who read this reference architecture will learn:

* How Cloudflare application security and performance capabilities can work alongside existing technology vendors
* The decisions to be made when using multiple vendors

## Cloud based security and performance providers

Before discussing multi-vendor security and performance solutions, it’s important to note how cloud-based solutions providing these services work in general and how traffic is routed through them.

Cloud-based security and performance providers like Cloudflare work as a reverse proxy. A reverse proxy is a server that sits in front of web servers and forwards client requests to those web servers. Reverse proxies are typically implemented to help increase security, performance, and reliability.

![Figure 1: Client request to origin server](https://developers.cloudflare.com/_astro/Figure_1.DmJWHu1Y_Z20N9WE.webp "Figure 1")

Figure 1

Normal traffic flow without a reverse proxy would involve a client sending a DNS lookup request, receiving the origin IP address, and communicating directly to the origin server(s). This is visualized in Figure 1.

When a reverse proxy is introduced, the client still sends a DNS lookup request to its resolver, which is the first stop in the DNS lookup. In this case, the DNS resolver returns a vendor’s reverse proxy IP address to the client and the client then makes a request to the vendor’s reverse proxy. The cloud-based proxy solution can now provide additional security, performance, and reliability services like [CDN ↗](https://www.cloudflare.com/cdn/), [WAF ↗](https://www.cloudflare.com/waf/), [DDoS ↗](https://www.cloudflare.com/ddos/), [API Shield ↗](https://www.cloudflare.com/products/api-shield/), [Bot Management ↗](https://www.cloudflare.com/products/bot-management/) capabilities, etc, before deciding, based on security policy, whether to route the client request to the respective origin server(s). This is visualized in Figure 2.
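
As a concrete illustration of the proxied DNS answer, the sketch below checks whether a resolved address falls inside one of Cloudflare's published IPv4 ranges (only a small illustrative subset of the list at cloudflare.com/ips is shown). A match means the client is connecting to the reverse proxy rather than directly to the origin.

```python
import ipaddress

# Illustrative subset of Cloudflare's published IPv4 ranges
# (see cloudflare.com/ips for the authoritative, full list).
CLOUDFLARE_RANGES = [ipaddress.ip_network(n) for n in (
    "104.16.0.0/13",
    "172.64.0.0/13",
    "198.41.128.0/17",
)]

def is_proxied_ip(addr):
    """True if the resolved address falls inside a known proxy range,
    meaning the client talks to the reverse proxy, not the origin."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in CLOUDFLARE_RANGES)

print(is_proxied_ip("104.16.132.229"))  # proxied answer -> True
print(is_proxied_ip("203.0.113.10"))    # direct-to-origin answer -> False
```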

![Figure 2: Client request routed through reverse proxy for additional security and performance services](https://developers.cloudflare.com/_astro/Figure_2.Ca4wC8bv_Z1yv3uT.webp "Figure 2")

Figure 2

In some cases, the vendor providing the reverse proxy also provides DNS services; this is visualized in Figure 3 below. This can be beneficial for managing all services from a single dashboard and for operational simplicity.

![Figure 3: Same vendor providing DNS and security/performance services via proxy](https://developers.cloudflare.com/_astro/Figure_3.CznC1gz__Z1Ljx9F.webp "Figure 3")

Figure 3

## Cloudflare’s reverse proxy architecture and solution

Cloudflare provides a reverse proxy architecture using its global [anycast network ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/) for the security, performance, and reliability services it provides. Anycast is a network addressing and routing method in which incoming requests can be routed to a variety of different locations or ‘nodes’ advertising the same IP address space. Cloudflare is extremely performant and reliable thanks to anycast, as well as its global presence in [hundreds of cities worldwide ↗](https://www.cloudflare.com/network/). Cloudflare is also directly connected to 12,000 networks, including every major ISP, cloud provider, and enterprise, and is within \~50 ms of 95% of the world’s Internet-connected population.

Cloudflare has one global network with every service running on every server in every Cloudflare data center. Since Cloudflare’s network uses anycast, the closest data center to the client will respond to the client request. This decreases latency while improving network resiliency, availability, and security due to the increased overall distribution of traffic across Cloudflare's network.

[Cloudflare’s global anycast network ↗](https://www.cloudflare.com/network/) provides the following advantages:

* Incoming traffic is routed to the nearest data center with the capacity to process the requests efficiently.
* Availability and redundancy is inherently provided. Since multiple nodes advertise the same IP address, if one node were to fail, requests are simply routed to another node in close proximity.
* Because anycast distributes traffic across multiple data centers, it increases overall distribution of traffic across Cloudflare’s network, preventing any one location from becoming overwhelmed with requests. For this reason, anycast networks are very resilient to DDoS attacks.

![Figure 4: Cloudflare providing DNS and security/performance services via global anycast network](https://developers.cloudflare.com/_astro/Figure_4.BQ6xEEwJ_29LseW.webp "Figure 4")

Figure 4

## Cloudflare onboarding options

This section provides a brief overview of the Cloudflare onboarding options, which are useful to understand before looking into the details of a multi-vendor solution. The onboarding method affects how the multi-vendor solution is deployed and configured. If you’re already familiar with the Cloudflare onboarding options, you can jump to the next section discussing multi-vendor solutions.

Cloudflare provides multiple options to easily onboard and consume security, performance, and reliability services. One of the advantages of cloud solutions offered via proxy setup is the ease of onboarding and getting started because it primarily involves DNS configuration to route client requests through the proxy. However, even within the onboarding with DNS configuration, Cloudflare offers multiple options and flexibility.

The core requirement is that traffic must be proxied through Cloudflare; this is also referred to as being ‘orange-clouded,’ because traffic to the site is proxied through Cloudflare. Within the dashboard, a proxied DNS entry shows the status ‘Proxied’ and the orange cloud icon, as shown in Figure 5 below.

![Figure 5: Cloudflare configured to proxy traffic for site https://api2.cf-tme.com](https://developers.cloudflare.com/_astro/Figure_5.BkWvJnng_Z1gEzNj.webp "Figure 5")

Figure 5

There are several methods to proxy traffic through Cloudflare and the method used will depend on customer requirements.

**1\. Full DNS setup - Cloudflare as primary DNS provider**

Cloudflare is configured as the primary DNS provider and A records are configured to proxy traffic through Cloudflare. When the proxy is enabled on a DNS record, the response will be Cloudflare anycast IP addresses allowing for Cloudflare to be the proxy.

**2\. Secondary DNS setup with Secondary DNS override**

Cloudflare is configured as a secondary provider and all DNS records are transferred from the primary provider. Cloudflare provides a feature called [Secondary DNS override](https://developers.cloudflare.com/dns/zone-setups/zone-transfers/cloudflare-as-secondary/proxy-traffic/) that allows customers to override the response served from Cloudflare secondary nameservers. This lets customers leverage zone transfers to automatically sync between DNS providers, while retaining the flexibility to update select records in Cloudflare DNS so that certain traffic is proxied through Cloudflare. In this case, the response will be Cloudflare anycast IP addresses, allowing Cloudflare to be the proxy.

**3\. Partial / CNAME setup**

In this setup, Cloudflare is not the authoritative DNS provider and the customer manages DNS records externally.

Converting to a CNAME setup ensures the hostname eventually resolves to Cloudflare IPs. This is useful when customers don’t want to change their current DNS setup but still want to use other Cloudflare services.

If a customer's current DNS provider doesn’t support CNAME records on the zone apex (sometimes called the "root domain" or "naked domain") the way Cloudflare does with [CNAME Flattening](https://developers.cloudflare.com/dns/cname-flattening/), you must purchase Static IPs from Cloudflare and create A records pointing to those Static IPs in the provider's DNS. In Cloudflare, you can then create an A record to point the zone apex to the origin.
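
CNAME flattening can be sketched as the authoritative server chasing the CNAME chain itself and answering the apex query with the final A record, since a CNAME is not allowed at the zone apex. The zone data and hostnames below are hypothetical:

```python
# Hypothetical zone data; the apex "example.com" holds a CNAME that the
# authoritative server flattens into an A record before answering.
ZONE = {
    "example.com": ("CNAME", "example.com.cdn.cloudflare.net"),
    "example.com.cdn.cloudflare.net": ("A", "104.16.1.1"),
}

def flatten(name):
    """Chase CNAMEs until an A record is found; return its address."""
    rtype, value = ZONE[name]
    while rtype == "CNAME":
        rtype, value = ZONE[value]
    return value

print(flatten("example.com"))  # apex query answered with an A record
```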

Many customers take advantage of Cloudflare’s cross-product integration and innovations, along with the operational simplicity of managing everything from a single UI, and use multiple Cloudflare services together, such as CDN and WAF. Although not recommended, it is also possible to use security services like the WAF with other CDN providers by setting up DNS to forward traffic through Cloudflare via CNAME and disabling Cloudflare caching via Cache Rules.

## Why multi-vendor?

Typically customers opt for a multi-vendor approach for reasons of regulatory/company compliance, resiliency, performance, and cost.

### Regulatory/company compliance

Some customers must comply with a regulatory or company policy of not being dependent on a single vendor for all security, performance, and reliability services. Such policies may exist to mitigate the risk of vendor-specific outages or issues, and/or to provide leverage against vendor price increases. Complying with these policies requires a multi-vendor strategy.

### Resiliency

When a single vendor is used for all security and performance services, this may be perceived as a single point of failure. This can be driven by regulatory pressure to improve reliability in all critical systems, outages experienced with an incumbent vendor, or uncertainty with the long term reliability of a single vendor.

### Performance

In many cases a single vendor may be very well connected and provide the expected level of performance within a certain region, but less so in other regions, for reasons including investment, limited resources, and geopolitics. Customers who want to fully optimize speed for performance-critical applications and media often implement a multi-vendor approach, frequently coupled with real-time performance monitoring that steers traffic to the most optimal vendor based on that data.

### Cost

Just like the performance of a particular vendor can vary based on content, time of day, and location, so can the cost, and sending particular traffic through a particular vendor can help optimize the overall cost of the delivery. Typically these benefits are seen driving a multi-vendor strategy in very specific use cases, such as for high volume media traffic, as the cost of onboarding and managing multiple vendors typically increases monetary and resource costs outside of specific niche use cases. Additionally, adopting a multi-vendor approach helps avoid vendor lock-in with any single provider, offering greater flexibility and negotiating power across vendors.

## Multi-vendor solution considerations

Any multi-vendor architecture will contain several components an organization must decide on prior to implementing, both on the business and technical side. Additionally, there are several things to keep in mind to help optimize your setup to align with Cloudflare’s strengths and unique differentiators.

Optimize for feature set and delivery methodology. Cloudflare is able to offer feature parity with most major vendors, with custom features easily delivered through our serverless compute service. For delivery methodology, Cloudflare’s anycast architecture is unique in that every server can deliver every service that Cloudflare offers, making it an optimal candidate for an active/active approach.

Leverage Cloudflare’s API and rapid deployment capabilities wherever possible. Cloudflare offers every feature API-first, and config changes are typically visible within a few seconds, making it easy for teams to test and deploy changes programmatically without waiting for long deployment times.

Avoid a “stacked” approach, meaning avoid placing Cloudflare in the request flow behind another vendor. Companies often consider stacking vendors in the hope of providing defense in depth by running the same traffic through each layer in a linear fashion. In theory, both vendors' policies would run, and any bad traffic not caught by one vendor is caught by the next. In practice, this setup behaves very differently. The main disadvantage is the loss of full traffic visibility when sitting behind another vendor, which hinders many of Cloudflare’s threat-intelligence-powered services such as Bot Management, Rate Limiting, DDoS mitigation, and IP reputation. It is also highly suboptimal for performance, since traffic must pass through two networks, each with its own processing and connection overhead, before reaching the origin. Finally, it creates unnecessary complexity in operations, management, and support.

One note on a stacked approach is that in certain cases for particular point solutions, it can make sense to place one vendor solution in front of the other, such as particular bot management solutions and API gateways, especially when migrating towards a new vendor/provider. In these scenarios it’s important to understand where each solution falls in the request flow to optimize effectiveness.

While Cloudflare and many providers maintain a high degree of availability and a robust, fault-tolerant architecture, some customers want to further reduce dependency on any single vendor and the corresponding single point of failure. It’s important to plan for a worst-case scenario in which some or all of a vendor's services are down, and for how to work around that in a short timeframe. Customers must consider how to provide redundancy across DNS providers, networks, and origin connectivity to eliminate the risk of a single vendor or component failure cascading into a widespread outage.

While the specifics may vary widely depending on the vendor and business case, the technical considerations for a multi-vendor deployment can be bucketed into three areas: routing logic, configuration management and origin connectivity.

### Routing

The first and likely most important decision that must be made when looking at a multi-vendor strategy is how to route traffic to each provider. This depends on both the business logic driving the multi-vendor strategy and the technical capabilities of each vendor in question. Traffic to each provider will be routed using DNS and shift depending on the current conditions and needs of the business. Cloudflare can support configurations as an authoritative DNS provider, secondary DNS provider, or non-Cloudflare DNS (CNAME) setups for a zone.

![Figure 6: Client request being routed to origin server\(s\) in a multi-vendor setup](https://developers.cloudflare.com/_astro/Figure_6.Bij5Z-XO_CjSqH.webp "Figure 6")

Figure 6

DNS-based load balancing and health checks can be leveraged here so that client requests to the domain/site are distributed across healthy origin server(s). The DNS provider monitors the health of the servers and responds to client requests with the respective healthy IPs, typically in a round-robin fashion.
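
A minimal sketch of that behavior, using hypothetical origin IPs: only origins that passed their last health check are rotated into DNS answers.

```python
from itertools import cycle

# Hypothetical origins and their latest health-check results.
ORIGINS = {
    "203.0.113.10": True,   # healthy
    "203.0.113.11": False,  # failed last health check
    "203.0.113.12": True,   # healthy
}

def answer_pool():
    """Round-robin iterator over origins that passed their health check."""
    healthy = [ip for ip, ok in ORIGINS.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy origins to serve")
    return cycle(healthy)

pool = answer_pool()
answers = [next(pool) for _ in range(4)]
print(answers)  # the unhealthy 203.0.113.11 never appears in answers
```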

If a multi-vendor DNS approach is also desired for DNS-level resiliency, a variety of configurations are possible here with multiple authoritative nameservers from different vendors. See the ‘Multi-vendor DNS setup options’ section in this document for additional details. The key here is ensuring consistent configurations across multiple providers. Depending on the DNS setup/configuration, this consistency can be resolved using different approaches such as zone transfers, automation via tools such as Terraform or OctoDNS, monitoring/automation via scripting, or even manual configuration.

### Configuration

While many vendors can deliver a similar end user experience, configuration and management can differ greatly between providers, which drives up the cost of a successful implementation. Ultimately that means the business must become familiar with each vendor's configuration logic and develop a system to map between them. Wherever possible, seek out vendors that optimize for management simplicity, automation support, and rapid deployment to help minimize the cost and management overhead.

API support for all vendor’s product functionality becomes critical here. Maintaining consistent configuration is important not only in the routing in certain multi-vendor DNS setups but also for maintaining consistency between all of the respective services such as WAF, API security, etc. as traffic can be routed to either provider. Automation tools such as Terraform or custom scripted automation tools will leverage the APIs to maintain this consistency between vendors.
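
As a hedged sketch of what such automation does, the snippet below diffs two rule sets that stand in for configuration fetched from each vendor's API; all rule names and states are hypothetical, and a real script would pull them via API calls rather than hard-coding them.

```python
# Hypothetical WAF rule states, as a script might fetch from each vendor.
cloudflare_rules = {"block-sqli": "enabled", "rate-limit-login": "enabled"}
other_vendor_rules = {"block-sqli": "enabled", "rate-limit-login": "disabled"}

def config_drift(a, b):
    """Return rules whose state differs, or that exist on only one side."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

drift = config_drift(cloudflare_rules, other_vendor_rules)
print(drift)  # {'rate-limit-login': ('enabled', 'disabled')}
```

Drift detected this way would then be reconciled by pushing the desired state to the lagging vendor through its API or a tool like Terraform.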

### Connectivity

Another important decision that must be made is how each provider will connect back into your organization. This will largely depend on the vendor's capabilities plus the technical and security requirements of your organization.

Clients will make requests over the Internet and the requests will be routed to the respective vendor’s proxy service on the vendor’s cloud. In the most basic scenario, the proxy will simply route the traffic over the Internet to the origin; this is the default setup.

If the customer wants more security or additional performance benefits, they may decide to also leverage vendor-offered connectivity options such as encrypted tunnels to origin, or direct connections from customer data centers to Cloudflare data centers via a cross-connect from the customer’s equipment to Cloudflare. Vendors may also offer accelerated routing capabilities, actively monitoring for the fastest paths over the Internet to ensure the most optimal routes to the origin are used.

Cloudflare offers all of these connectivity options along with Smart Routing to ensure the fastest paths to origin are used. These connectivity options are discussed in more detail in the ‘Cloudflare connectivity options’ section of this document.

### Operations and troubleshooting

Some important considerations when designing a multi-vendor solution are operations and troubleshooting. Having a multi-vendor solution can raise operational costs and also impact troubleshooting as you now have two different environments to manage and troubleshoot.

A primary focus for Cloudflare has always been operational simplicity and providing visibility. Cloudflare provides a single unified dashboard where all security, performance, and reliability services can be accessed from a consistent operationally simple UI.

Additionally, Cloudflare offers logging, analytics and security analytics dashboards. Logs with additional details are also accessible from the UI. Customers have granular data that can be used for analysis and troubleshooting.

Figure 7 below shows a view of Cloudflare Security Analytics, which brings together all of Cloudflare’s detection capabilities in one place. This gives security engineers and admins a quick view of current traffic and security insights regarding their site.

![Figure 7: Cloudflare Security Analytics](https://developers.cloudflare.com/_astro/Figure_7.QuPc0brB_Z17cODf.webp "Figure 7")

Figure 7

In addition to analytics for each product and security analytics shown above, you can also view logs within the UI and export logs to Cloudflare or third party clouds or products for additional analysis.

In Figure 8 below, a Logpush job is being configured to automatically export logs to an external destination.

![Figure 8: Cloudflare Logpush for exporting logs to external destinations](https://developers.cloudflare.com/_astro/Figure_8.DnHWeRK__n7SEI.webp "Figure 8")

Figure 8

When selecting the vendors for a multi-vendor solution, you should ensure the vendors you select meet the following criteria:

* The vendor provides for operational simplicity with a single consistent UI for all operations where users can easily manage and get things done in one place.
* The vendor has useful security analytics that give an understanding of a site’s traffic, security insights, and useful data for troubleshooting.
* The vendor has the ability to export logs/request data to third party clouds/applications.
* The vendor has an API first approach and provides APIs for all operations so tasks can be easily automated.
* The vendor is reputable and can provide effective support and help when needed.
* Employees are trained and have expertise or are comfortable using the vendor’s products.

## Common deployments

### Multi-vendor active-active security and different provider for DNS

The below diagram describes a typical multi-vendor setup in which both vendors are ‘active’ meaning they are both serving traffic for the same resource (`www.example.com`) and traffic is split between the two.

On the routing front, this example shows the authoritative DNS living outside of the two providers and load balancing between them. This DNS provider could be self-hosted or live with another third-party provider. Traffic is directed to each provider by responding to queries for `www.example.com` with a provider-specific CNAME record, or a static IP for apex domain traffic. To achieve this traffic split, the third-party DNS provider needs some ability to load balance the traffic. Most major DNS providers have some mechanism to perform DNS-based load balancing, with varying degrees of complexity and configurability. This could mean simple round-robin between records, or varying the response based on client location, health check data, and more.

![Figure 9: Multi-vendor setup with Cloudflare and another vendor and different provider for DNS](https://developers.cloudflare.com/_astro/Figure_9.yGPacbGy_Z270dyN.webp "Figure 9")

Figure 9

Depending on the authoritative DNS provider, traffic can be evenly split between the two or adjusted dynamically. Often, customers choose to inform the DNS routing with performance/availability data sourced from a third-party monitoring service such as ThousandEyes or Catchpoint, and adjust DNS responses based on that data. Third-party monitoring services are often used to capture full HTTP request/response metrics to route based on real-time performance. Traffic can easily be shifted away from a provider by updating the authoritative DNS and waiting for the record TTL to expire.
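
The steering decision itself can be sketched as below; the endpoint hostnames and latency samples are hypothetical stand-ins for data a monitoring service would report.

```python
# Hypothetical end-to-end latency samples (ms) per vendor proxy endpoint,
# as reported by an external monitoring service.
measurements_ms = {
    "cloudflare-proxy.example.net": [38, 41, 40],
    "other-vendor-proxy.example.net": [55, 80, 61],
}

def steer(metrics):
    """Pick the vendor endpoint with the lowest average observed latency."""
    return min(metrics, key=lambda host: sum(metrics[host]) / len(metrics[host]))

# The authoritative DNS record would be updated to point at this CNAME;
# the change takes effect as cached records expire per their TTL.
print(steer(measurements_ms))
```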

It’s important to note here that the third party services are looking at end-to-end application performance metrics, not just DNS response time or limited data used by DNS resolvers. The DNS records will be updated based on the performance data to reflect the correct security vendor’s proxy to point to.

Both providers’ configurations are kept in sync by the administrators, pushing out changes via Terraform which makes calls to each provider's API. Keep in mind that while Cloudflare does have full API support for every feature, this may not be the case for every provider.

If only one external DNS provider is used, it does create a single point of failure if that DNS provider has an outage. A way to mitigate this risk is to implement a multi-vendor DNS solution; this is discussed in more detail in the [Multi-vendor DNS options](#multi-vendor-dns-setup-options) section in this document.

Another challenge of a parallel approach is keeping configurations in sync across providers to deliver a consistent end user experience. This means the administrators need to be familiar with the configuration management of both vendors and understand how feature parity can be achieved.

Once traffic is routed to the security and performance service provider via DNS, all security and performance services and respective policies are applied, and the traffic is then routed over the Internet back to the origin where the customer’s firewall is allowing IPs specified by each provider.

### Multi-vendor active-active security with multi-vendor DNS from same providers

The below example describes a setup where the DNS providers are also the security proxy vendors, and DNS records are kept in sync via zone transfers. A multi-vendor DNS solution like this is recommended as the most resilient approach.

There are different setups possible between the different DNS vendors; these are discussed in more detail in the ‘Multi-vendor DNS setup options’ section of this document, along with the advantages and disadvantages of each.

In this example, multiple authoritative DNS providers are used, with one as primary and the other as secondary. Using the standard secondary DNS mechanism, zone transfers allow the DNS configuration to remain synced between the different providers.

In order to point requests to both providers (for the same hosts) in this model, the vendor set up as secondary must be able to overwrite records intended to go through a proxy. Without the ability to overwrite records as a secondary, the destination for all primary records would remain static, reducing the flexibility and resilience of the overall setup; Cloudflare provides this capability with [Secondary DNS override](https://developers.cloudflare.com/dns/zone-setups/zone-transfers/cloudflare-as-secondary/proxy-traffic/). For example, if a provider such as Cloudflare is set up as secondary, DNS is automatically synced to Cloudflare from the primary via zone transfer, and Cloudflare can use Secondary DNS override to update the A record to point to its own proxy/services.

While DNS-based load balancing isn’t required here, it’s helpful to have at each provider so requests can be predictably split across multiple vendors; otherwise, the traffic split is largely dictated by the client resolver’s nameserver selection.

![Figure 10: Multi-vendor setup with Cloudflare and another vendor with multi-vendor DNS from same providers.](https://developers.cloudflare.com/_astro/Figure_10.C8edWi-O_1SI8n1.webp "Figure 10")

Figure 10

At the authoritative DNS provider, each vendor has their NS records listed, and the client’s resolver selects which nameserver to query. The resolver receives the full set of authoritative nameservers upon request. The logic used by most resolvers typically takes into account resolution time as well as availability. In this scenario, the resolvers decide which nameserver to use based on performance and availability data they already have.

It’s important to note here that typically the DNS resolvers have already seen queries and responses associated with the nameservers used. For example, the nameserver the vendor assigns to the customer may already be used by other sites for their authoritative DNS and the resolvers already have a strong historical baseline of performance data to start leveraging immediately.

In this example, we are also seeing records being kept in sync via periodic zone transfers. Cloudflare is able to support both outgoing and incoming zone transfers. Traffic is directed to each proxy by either a provider specific CNAME record or static IP.

The configuration on the DNS side can vary; the different options are discussed in more detail in the next section. DNS can be set up with one provider acting as primary and the other acting as secondary. The DNS provider acting as primary is where all the DNS configuration is done and the secondary DNS receives the configuration copy via zone transfer.

Some DNS providers like [Cloudflare](https://developers.cloudflare.com/dns/zone-setups/zone-transfers/cloudflare-as-secondary/proxy-traffic/) offer the capability for the secondary DNS to overwrite A and AAAA records. This allows the provider to rewrite the A/AAAA record to proxy traffic through a different vendor as desired. In this case, the secondary DNS provider will return a different response than the primary for the same hostname. This means that depending on which nameserver a client resolver queries, the request will be routed to the respective vendor’s network. This allows for flexibility and reduced complexity by relying on the client resolver for traffic steering and failover if nameservers are slow or unreachable, at the cost of direct control and predictability over which provider a client selects.
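
The override step can be sketched as follows; the record data and proxy IPs are hypothetical. Records arrive via zone transfer pointing at the primary vendor's proxy, and the secondary rewrites selected A records to point at its own proxy instead, leaving everything else untouched.

```python
# Hypothetical records received from the primary via zone transfer.
transferred = {
    "www.example.com": ("A", "192.0.2.50"),       # primary vendor's proxy IP
    "mail.example.com": ("MX", "mx1.example.com"),
}

# Hosts the secondary should serve through its own proxy instead.
OVERRIDES = {"www.example.com": "104.16.1.1"}

def apply_overrides(zone, overrides):
    """Rewrite A records named in `overrides`; leave all other records intact."""
    out = {}
    for name, (rtype, value) in zone.items():
        if rtype == "A" and name in overrides:
            out[name] = (rtype, overrides[name])
        else:
            out[name] = (rtype, value)
    return out

served = apply_overrides(transferred, OVERRIDES)
print(served["www.example.com"])  # now answers with the secondary's proxy IP
```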

Another variation is to have specific applications/hostnames hosted through specific providers. That could mean, in the above example, both the primary and secondary DNS servers have `www.example.com` mapped to a Cloudflare address, regardless of which provider resolves the initial DNS query.

## Multi-vendor DNS setup options

The important routing decision is dictated by DNS. As discussed, there are multiple configurations possible for a multi-DNS setup. The below assumes you are using two DNS providers which are also the providers for the security solution.

**1\. Two authoritative - one primary and one secondary**

This setup involves setting one provider as a primary and the second provider as a secondary. The purpose of secondary DNS is to support multi-DNS solutions where synchronization between the configurations of primary and secondary is automated.

In this setup, both DNS providers are authoritative, but only one is primary: it is the source of truth and where DNS configuration changes are made. Changes on the primary are synced to the secondary DNS provider via zone transfers managed by the provider. Both providers’ nameservers answer DNS queries.

The main advantage of this deployment model is that it uses a standard created for exactly this purpose, syncing DNS across multiple providers, with the DNS provider responsible for the zone transfers. This option keeps DNS synchronization between providers simple.

Sometimes customers may decide to use another option due to the following:

* A requirement to be able to update DNS records even when the record management and zone transfer pipeline is down.
* Not wanting to rely on a third party/vendor for the DNS synchronization and desiring more control.
* Having specific restrictions/regulations excluding this option.

This setup is recommended for customers who desire simplicity offered by a secondary DNS and provider for maintaining synchronization.

Pros:

* Uses standards (AXFR, IXFR) to keep DNS synced automatically via zone transfers.
* Simplicity as the DNS provider is responsible for DNS synchronization.

Cons:

* If the record management and zone transfer pipeline is down, DNS records cannot be updated.
* Some customers do not want to rely on a vendor/3rd party for DNS sync and desire more control and flexibility.

**2\. Two authoritative - both primary**

Some customers may also want to have the added assurance of being able to update DNS records when the record management and zone transfer pipeline is down. They also may not want to rely on a third party/vendor for DNS synchronization and desire more control. In this case, both DNS providers can be used as primary.

In this setup each DNS provider is authoritative and primary. There is no secondary DNS and changes/updates to DNS can be made at either provider; also, both DNS providers answer DNS queries.

Synchronization of the DNS configuration between providers is critical, and in this setup it becomes the customer’s responsibility to keep DNS in sync at both providers. Customers typically do this with automation tools like OctoDNS or Terraform, or via custom automation leveraging the vendors’ APIs.
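
In the spirit of such tools, a customer-managed sync can be sketched as diffing a desired record set against what a provider currently serves and emitting the changes to push via its API; all record names and addresses below are hypothetical.

```python
# Hypothetical desired state and a provider's current records
# (hostname -> address), as a sync script might fetch via API.
desired = {"www": "104.16.1.1", "api": "104.16.1.2"}
provider_b = {"www": "104.16.1.1", "api": "192.0.2.9", "old": "192.0.2.1"}

def plan(desired, current):
    """Compute the create/update/delete operations that converge
    `current` onto `desired`."""
    return {
        "create": sorted(set(desired) - set(current)),
        "update": sorted(k for k in desired if k in current and desired[k] != current[k]),
        "delete": sorted(set(current) - set(desired)),
    }

print(plan(desired, provider_b))
# {'create': [], 'update': ['api'], 'delete': ['old']}
```

Running the same plan against each provider keeps both primaries converged on one source of truth without relying on zone transfers.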

This setup is recommended for customers who desire the most flexible and resilient option that supports updating DNS records even when the record management and zone transfer pipeline is down and/or customers who want more control over DNS synchronization.

Pros:

* If control plane is down on one provider, DNS records can still be updated at the other.
* More control and no reliance on DNS provider for DNS synchronization.

Cons:

* More complexity in keeping DNS between providers synced.
* Customer is responsible for DNS synchronization, whether via automation tools, custom scripts using vendor APIs, or manual updates.

**3\. One or more authoritative - hidden primary and multiple secondary**

In a hidden primary setup, users establish an unlisted primary server to store all zone files and changes, then enable one or more secondary servers to receive and resolve queries. Although most of the time the primary is authoritative, it doesn’t have to be. In this option, the primary is not listed with the registrar. The primary does not respond to queries and its main purpose is being the single source of truth.

Although the secondary servers essentially fulfill the function of a primary server, the hidden setup allows users to hide their origin IP and shield it from attacks. Additionally, the primary can be taken offline for maintenance without causing DNS service to be disrupted.

This setup is recommended for customers who want the simplicity of a secondary DNS setup, with the DNS provider responsible for maintaining synchronization. It also provides the flexibility to take the primary offline as needed with less impact.

Pros:

* Allows customers to maintain DNS record management on their infrastructure and use standard protocols (AXFR, IXFR) to keep DNS synced automatically via zone transfers.
* Primary serves only as the source of truth for maintaining DNS records and can be taken offline for maintenance/administration.

Cons:

* If the record management and zone transfer pipeline is down, DNS records cannot be updated.
* Some customers do not want to rely on a vendor/3rd party for DNS sync and desire more control.

## Configuration and management best practices

![Figure 11: Configuration via Terraform for multi-vendor setup with Cloudflare and other vendor](https://developers.cloudflare.com/_astro/Figure_11.Dt7KSeKt_Z1dldBq.webp "Figure 11")

Figure 11

Figure 11 depicts a typical pattern seen when managing configurations across both Cloudflare and other providers in parallel. In this example, we are assuming that the same workloads are being split through both providers and the admin team is updating both configurations via API through Terraform. This can also be tied into an internal CI/CD pipeline to match your typical developer workflow. All Cloudflare functions can be configured via API and are delivered first via API. This diagram also depicts logs being sent to a common SIEM and native alerting functions that can be delivered via e-mail, webhook, or PagerDuty for alerts based on performance, security or administrative criteria.

With the wide variety of customization options Cloudflare provides (Ruleset Engine, native features, Worker customizations), Cloudflare can likely meet feature parity with most other major vendors in the market; however, it's not guaranteed that these features will be configurable in the same manner. This is where working closely with your Cloudflare account team becomes critical to understanding the key differences in operation and the best practices for aligning your workflow with Cloudflare.

## Connectivity options

For a multi-vendor offering it's important to consider the methods that each provider offers for connectivity to the origin(s) and the trade-offs in security, performance, and resiliency. Cloudflare offers several options that fit most use cases and can be deployed in parallel with per-application (hostname/DNS record) granularity to fit a hybrid customer environment.

### Internet (default)

In the most basic scenario, the proxy will simply route the traffic over the Internet to the origin; this is the default setup for all vendors. In this setup the client and origin are both endpoints directly connected to the Internet via their respective ISPs. The request is routed over the Internet from the client to the vendor proxy (via DNS configuration) before the proxy routes the request over the Internet to the customer's origin.

The below diagram describes the default connectivity to origins as requests flow through the Cloudflare network. When a request hits a proxied DNS record and needs to reach the origin, Cloudflare will send traffic from the network over the Internet from a set of Cloudflare owned addresses.

![Figure 12: Connectivity from Cloudflare to origin server\(s\) via Internet](https://developers.cloudflare.com/_astro/Figure_12.D0NtsXlk_Znplc.webp "Figure 12")

Figure 12

Optionally, customers can also choose to leverage [Dedicated CDN Egress IPs](https://developers.cloudflare.com/smart-shield/configuration/dedicated-egress-ips/), which allocates customer-specific IPs that Cloudflare will use to connect back to your origins. We recommend allowlisting traffic from only these networks to avoid direct access. In addition to IP blocking at the origin side firewall, we also strongly recommend additional verification of traffic via either the "Full (Strict)" SSL setting or mTLS auth to ensure all traffic is sourced from requests passing through the customer configured zones.
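At the origin firewall, the allowlisting step reduces to checking whether the source address of each inbound connection falls inside the Cloudflare (or dedicated egress) ranges. A minimal sketch using Python's standard `ipaddress` module (the CIDRs below are documentation placeholders, not real Cloudflare ranges; fetch the actual list from Cloudflare):

```python
import ipaddress

# Placeholder CIDRs standing in for the Cloudflare / dedicated egress
# ranges you would allowlist -- substitute the real published ranges.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_allowed(source_ip: str) -> bool:
    """True if the connecting IP falls inside an allowlisted range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

# Note: IP allowlisting alone is not sufficient -- pair it with
# Full (Strict) TLS or mTLS so traffic is also cryptographically verified.
```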

Cloudflare also supports [Bring Your Own IP (BYOIP)](https://developers.cloudflare.com/byoip/). When BYOIP is configured, the Cloudflare global network will announce a customer’s own IP prefixes and the prefixes can be used with the respective Cloudflare Layer 7 services.

### Private connection - tunnel or VPN

Another option is a private tunnel/connection over the Internet for additional security. Some vendors offer private connectivity via tunnels or VPNs, which can be encrypted or unencrypted; these vary in complexity and management overhead, and require additional security/firewall changes to allow connectivity. A traditional VPN setup is also limited to a centralized vendor location connecting back to the origin.

Cloudflare offers [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) which is tunneling software that provides an encrypted tunnel between your origin(s) and Cloudflare’s network. Also, since Cloudflare leverages anycast on its global network, the origin(s) will, like clients, connect to the closest Cloudflare data center(s).

When you run a tunnel, a lightweight daemon in your infrastructure, cloudflared, establishes four outbound-only connections between the origin server and the Cloudflare network. These four connections are made to four different servers spread across at least two distinct data centers providing robust resiliency. It is possible to install many cloudflared instances to increase resilience between your origin servers and the Cloudflare network.

Cloudflared creates an encrypted tunnel between your origin web server(s) and Cloudflare’s nearest data center(s), all without opening any public inbound ports. This provides for simplicity and speed of implementation as there are no security changes needed on the firewall. This solution also lowers the risk of firewall misconfigurations which could leave your company vulnerable to attacks.

The firewall and security posture is hardened by locking down all origin server ports and protocols via your firewall. Once Cloudflare Tunnel is in place and respective security applied, all requests on HTTP/S ports are dropped, including volumetric DDoS attacks. Data breach attempts, such as snooping of data in transit or brute force login attacks, are blocked entirely.

![Figure 13: Connectivity from Cloudflare to origin server\(s\) via Cloudflare Tunnel](https://developers.cloudflare.com/_astro/Figure_13.CsKShnx8_nURrh.webp "Figure 13")

Figure 13

The above diagram describes the connectivity model through Cloudflare Tunnel. Note, this option provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address. Cloudflare Tunnel can connect HTTP web servers, SSH servers, remote desktops, and other protocols safely to Cloudflare.

### Direct connection

Most vendors also provide an option of directly connecting to their network. Direct connections provide security, reliability, and performance benefits over using the public Internet. These direct connections are done at peering facilities, Internet Exchanges (IXs) where Internet Service Providers (ISPs) and Internet networks can interconnect with each other, or through vendor partners.

![Figure 14: Connectivity from Cloudflare to origin server\(s\) via Cloudflare Network Interconnect \(CNI\)](https://developers.cloudflare.com/_astro/Figure_14.pA3d5-ag_2uI3x1.webp "Figure 14")

Figure 14

The above diagram describes origin connectivity through [Cloudflare Network Interconnect (CNI) ↗](https://blog.cloudflare.com/cloudflare-network-interconnect/) which allows you to connect your network infrastructure directly with Cloudflare and communicate only over those direct links. CNI allows customers to interconnect branch and headquarter locations directly with Cloudflare. Customers can interconnect with Cloudflare in one of three ways: over a private network interconnect (PNI) available at [Cloudflare peering facilities ↗](https://www.peeringdb.com/net/4224), via an IX at any of the [many global exchanges Cloudflare participates in ↗](https://bgp.he.net/AS13335#%5Fix), or through one of our [interconnection platform partners ↗](https://blog.cloudflare.com/cloudflare-network-interconnect-partner-program).

Cloudflare’s global network allows for ease of connecting to the network regardless of where your infrastructure and employees are.

## Additional routing and security options

Most vendors also provide additional capabilities for enhanced/optimized routing and additional security when communicating with the origin. If you expect parity in performance and security capabilities, check the respective vendor's documentation to confirm support.

Cloudflare offers [Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/) for finding and using optimized routes across the Cloudflare network to deliver responses to users more quickly, and Authenticated Origin Pulls (mTLS) to ensure requests to your origin server come from the Cloudflare network.

### Argo Smart Routing

Argo Smart Routing is a service that finds optimized routes across the Cloudflare network to deliver responses to users more quickly.

Argo Smart Routing accelerates traffic by taking into account real-time data and network intelligence from routing over 28 million HTTP requests per second; it ensures the fastest and most reliable network paths are traversed over the Cloudflare network to the origin server. On average, Argo Smart Routing accounts for 30% faster performance on web assets.

In addition, Cloudflare CDN leverages Argo Smart Routing to determine the best upper tier data centers for Argo Tiered Cache. Argo Smart Routing can be enabled to ensure the fastest paths over the Cloudflare network are taken between upper tier data centers and origin servers at all times. Without Argo Smart Routing, traffic between upper tier data centers and origin servers is still intelligently routed around problems on the Internet to ensure origin reachability. For more information on Argo Smart Routing as it relates to CDN, see the [Cloudflare CDN Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/cdn/).

### Authenticated Origin Pulls (mTLS)

Authenticated Origin Pulls helps ensure requests to your origin server come from the Cloudflare network, which provides an additional layer of security on top of [Full](https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/) or [Full (strict)](https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/) SSL/TLS encryption modes Cloudflare offers.

This authentication becomes particularly important with the [Cloudflare Web Application Firewall (WAF)](https://developers.cloudflare.com/waf/). Together with the WAF, you can make sure that all traffic is evaluated before receiving a response from your origin server.
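On the origin side, Authenticated Origin Pulls amounts to the web server requiring and validating a client certificate on every TLS handshake. A sketch of that posture using Python's standard `ssl` module (the CA path parameter is a placeholder; production origins typically configure this in nginx or Apache rather than application code):

```python
import ssl

def make_mtls_context(ca_cert_path: str = None) -> ssl.SSLContext:
    """Build a server-side TLS context that refuses any connection not
    presenting a valid client certificate -- the essence of
    Authenticated Origin Pulls (mTLS) at the origin."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED  # client certificate is mandatory
    if ca_cert_path:
        # Trust only the CA that signs Cloudflare's pull certificates
        # (path is a placeholder for your deployment).
        ctx.load_verify_locations(cafile=ca_cert_path)
    return ctx
```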

If you want your domain to be [FIPS ↗](https://en.wikipedia.org/wiki/Federal%5FInformation%5FProcessing%5FStandards) compliant, you must upload your own certificate. This option is available for both [zone-level](https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/set-up/zone-level/) and [per-hostname](https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/set-up/per-hostname/) authenticated origin pulls.

## Summary

To summarize, a successful multi-vendor strategy for application security and performance requires careful consideration of your business objectives, infrastructure requirements, and vendor capabilities. There are several options to choose from when deploying a multi-vendor strategy, each with its own advantages and limitations. Cloudflare can support these configurations by delivering services through the Cloudflare global network that are highly resilient, performant, and cost effective to fit your organization's multi-vendor strategy.

[ Download this page as a PDF ](https://developers.cloudflare.com/reference-architecture/static/multi-vendor-application-security-performance.pdf) 


---

---
title: Evolving to a SASE architecture with Cloudflare
description: This reference architecture explains how organizations can work towards a SASE architecture using Cloudflare.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Evolving to a SASE architecture with Cloudflare

**Last reviewed:**  over 1 year ago 

Download a [PDF version](https://developers.cloudflare.com/reference-architecture/static/cloudflare-evolving-to-a-sase-architecture.pdf) of this reference architecture.

## Introduction

Cloudflare One is a secure access service edge (SASE) platform that protects enterprise applications, users, devices, and networks. By progressively adopting Cloudflare One, organizations can move away from their patchwork of hardware appliances and other point solutions and instead consolidate security and networking capabilities on one unified control plane. Such network and security transformation helps address key challenges modern businesses face, including:

* Securing access for any user to any resource with Zero Trust practices
* Defending against cyber threats, including multi-channel phishing and ransomware attacks
* Protecting data in order to comply with regulations and prevent leaks
* Simplifying connectivity across offices, data centers, and cloud environments

Cloudflare One is built on Cloudflare's [connectivity cloud ↗](https://www.cloudflare.com/connectivity-cloud/), a unified, intelligent platform of programmable cloud-native services that enable any-to-any connectivity between all networks (enterprise and Internet), cloud environments, applications, and users. It is one of the [largest global networks ↗](https://www.cloudflare.com/network/), with data centers spanning [hundreds of cities worldwide ↗](https://www.cloudflare.com/network/) and interconnection with over 13,000 network peers. It also has a greater presence in [core Internet exchanges ↗](https://bgp.he.net/report/exchanges#%5Fparticipants) than many other large technology companies.

As a result, Cloudflare operates within \~50 ms of \~95% of the world's Internet-connected population. And since all Cloudflare services are designed to run across every network location, all traffic is connected, inspected, and filtered close to the source for the best performance and consistent user experience.

This document describes a reference architecture for organizations working towards a SASE architecture, and shows how Cloudflare One enables such security and networking transformation.

### Who is this document for and what will you learn?

This reference architecture is designed for IT or security professionals with some responsibility over or familiarity with their organization's existing infrastructure. It is useful to have some experience with technologies important to securing hybrid work, including identity providers (IdPs), user directories, single sign on (SSO), endpoint security or management (EPP, XDR, UEM, MDM), firewalls, routers, and point solutions like packet or content inspection hardware, threat prevention, and data loss prevention technologies.

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)
* Solution Brief: [Cloudflare One ↗](https://cfl.re/SASE-SSE-platform-brief) (3 minute read)
* Whitepaper: [Overview of Internet-Native SASE Architecture ↗](https://cfl.re/internet-native-sase-architecture-whitepaper) (10 minute read)
* Blog: [Zero Trust, SASE, and SSE: foundational concepts for your next-generation network ↗](https://blog.cloudflare.com/zero-trust-sase-and-sse-foundational-concepts-for-your-next-generation-network/) (14 minute read)

Those who read this reference architecture will learn:

* How Cloudflare One protects an organization's employees, devices, applications, data, and networks
* How Cloudflare One fits into your existing infrastructure, and how to approach migration to a SASE architecture
* How to plan for deploying Cloudflare One

While this document examines Cloudflare One at a technical level, it does not offer fine detail about every product in the platform. Instead, it looks at how all the services in Cloudflare One enable networking and network security to be consolidated on one architecture. Visit the [developer documentation ↗](https://developers.cloudflare.com/) for further information specific to a product area or use case.

## Disintegration of the traditional network perimeter

Traditionally, most employees worked in an office and connected locally to the company network via Ethernet or Wi-Fi. Most business systems (e.g. file servers, printers, applications) were located on and accessible only from this internal network. Once connected, users would typically have broad access to local resources. A security perimeter was created around the network to protect against outsider threats, most of which came from the public Internet. The majority of business workloads were hosted on-premises and only accessible inside the network, with very little or no company data or applications existing on the Internet.

However, three important trends created problems for this "castle and moat" approach to IT security:

1. **Employees became more mobile**. Organizations increasingly embrace remote / hybrid work and support the use of personal (i.e. not company-owned) devices.
2. **Cloud migration accelerated**. Organizations are moving applications, data, and infrastructure from expensive on-premises data centers to public or private cloud environments in order to improve flexibility, scalability, and cost-effectiveness.
3. **Cyber threats evolved**. The above trends expand an organization's attack surface. For example, attack campaigns have become more sophisticated and persistent in exploiting multiple channels to infiltrate organizations, and cybercriminals face lower barriers to entry with the popularity of the "cybercrime-as-a-service" black market.

Traditional perimeter-based security has struggled to adapt to these changes. In particular, extending the "moat" outwards has introduced operational complexity for administrators, poor experiences for users, and inconsistency in how security controls are applied across users and applications.

![With many different methods to connect networks and filter/block traffic, managing access to company applications is costly and time consuming.](https://developers.cloudflare.com/_astro/cf1-ref-arch-1.DR89R8uB_Z1SsQpq.svg) 

The diagram above shows an example of this adapted perimeter-based approach, in which a mix of firewalls, WAN routers, and VPN concentrators are connected with dedicated WAN on-ramps consisting of MPLS circuits and/or leased lines. The diagram also demonstrates common problem areas. In an effort to centralize policy, organizations sometimes force all employee Internet traffic through their VPN infrastructure, which results in slow browsing and user complaints. Employees then seek workarounds — such as using non-approved devices — which increases their exposure to Internet-borne attacks when they work from home or on public Wi-Fi. In addition, IT teams are unable to respond quickly to changing business needs due to the complexity of their network infrastructure.

Such challenges are driving many organizations to prioritize goals like:

* Accelerating business agility by supporting remote / hybrid work with secure any-to-any access
* Improving productivity by simplifying policy management and by streamlining user experiences
* Reducing cyber risk by protecting users and data from phishing, ransomware, and other threats across all channels
* Consolidating visibility and controls across networking and security
* Reducing costs by replacing expensive appliances and infrastructure (e.g. VPNs, hardware firewalls, and MPLS connections)

## Understanding a SASE architecture

In recent years, [secure access service edge ↗](https://www.cloudflare.com/learning/access-management/security-service-edge-sse/), or SASE, has emerged as an aspirational architecture to help achieve these goals. In a SASE architecture, network connectivity and security are unified on a single cloud platform and control plane for consistent visibility, control, and experiences from any user to any application.

SASE platforms consist of networking and security services, all underpinned by supporting operational services and a policy engine:

* Network services forward traffic from a variety of networks into a single global corporate network. These services provide capabilities like firewalling, routing, and load balancing.
* Security services apply to traffic flowing over the network, allowing for filtering of certain types of traffic and control over who can access what.
* Operational services provide platform-wide capabilities like logging, API access, and comprehensive Infrastructure-as-Code support through providers like Terraform.
* A policy engine integrates across all services, allowing admins to define policies which are then applied across all the connected services.

![Cloudflare's SASE cloud platform offers network, security, and operational services, as well as policy engine features, to provide zero trust connectivity between a variety of user identities, devices and access locations to customer applications, infrastructure and networks.](https://developers.cloudflare.com/_astro/cf1-ref-arch-2.BMHjAM9W_2btPiQ.svg) 

## Cloudflare One: single-vendor, single-network SASE

Most organizations move towards a SASE architecture progressively rather than all at once, prioritizing key security and connectivity use cases and adopting services like [Zero Trust Network Access ↗](https://www.cloudflare.com/learning/access-management/what-is-ztna/) (ZTNA) or [Secure Web Gateway ↗](https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/) (SWG). Some organizations choose to use SASE services from multiple vendors. For most organizations, however, the aspiration is to consolidate security with a single vendor, in order to achieve simplified management, comprehensive visibility, and consistent experiences.

[Cloudflare One ↗](https://www.cloudflare.com/cloudflare-one/) is a single-vendor SASE platform where all services are designed to run across all locations. All traffic is inspected closest to its source, which delivers consistent speed and scale everywhere. And thanks to composable and flexible on-ramps, traffic can be routed from any source to reach any destination.

Cloudflare's connectivity cloud also offers many other services that improve application performance and security, such as [API Gateway ↗](https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/), [Web Application Firewall ↗](https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/), [Content Delivery ↗](https://www.cloudflare.com/learning/cdn/what-is-a-cdn/), or [DDoS mitigation ↗](https://www.cloudflare.com/learning/ddos/ddos-mitigation/), all of which can complement an organization's SASE architecture. For example, our Content Delivery Network (CDN) features can be used to improve the performance of a self hosted company intranet. Cloudflare's full range of services are illustrated below.

![Cloudflare's anycast network provides services on all connected servers to enable secure connections on public and home networks and at corporate offices.](https://developers.cloudflare.com/_astro/cf1-ref-arch-4.Bjts0g1J_Z1YR1dx.svg) 

### Cloudflare's anycast network

Cloudflare's SASE platform benefits from our use of [anycast ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/) technology. Anycast allows Cloudflare to announce the IP addresses of our services from every data center worldwide, so traffic is always routed to the Cloudflare data center closest to the source. This means traffic inspection, authentication, and policy enforcement take place close to the end user, leading to consistently high-quality experiences.

Using anycast ensures the Cloudflare network is well balanced. If there is a sudden increase in traffic on the network, the load can be distributed across multiple data centers – which in turn, helps maintain consistent and reliable connectivity for users. Further, Cloudflare's large [network capacity ↗](https://www.cloudflare.com/network/) and [AI/ML-optimized smart routing ↗](https://blog.cloudflare.com/meet-traffic-manager/) also help ensure that performance is constantly optimized.

By contrast, many other SASE providers use Unicast routing in which a single IP address is associated with a single server and/or data center. In many such architectures, a single IP address is then associated with a specific application, which means requests to access that application may have very different network routing experiences depending on how far that traffic needs to travel. For example, performance may be excellent for employees working in the office next to the application's servers, but poor for remote employees or those working overseas. Unicast also complicates scaling traffic loads — that single service location must ramp up resources when load increases, whereas anycast networks can share traffic across many data centers and geographies.

![Cloudflare's anycast network ensures fast and reliable connectivity, whereas Unicast routing often sends all traffic to a single IP address, resulting in slower and failure prone connections.](https://developers.cloudflare.com/_astro/cf1-ref-arch-5.DVAtCA4Y_1d5wQ8.svg) 

## Deploying a SASE architecture with Cloudflare

To understand how SASE fits into an organization's IT infrastructure, see the diagram below, which maps out all the common components of said infrastructure. Subsequent sections of this guide will add to the diagram, showing where each part of Cloudflare's SASE platform fits in.

![Typical enterprise IT infrastructure may consist of different physical locations, devices and data centers that require connectivity to multiple cloud and on-premises applications.](https://developers.cloudflare.com/_astro/cf1-ref-arch-6.CZw0spTE_Z1gHcKU.svg) 

In the diagram's top half there are a variety of Internet resources (e.g. Facebook), SaaS applications (e.g. ServiceNow), and applications running in an [infrastructure-as-a-service (IaaS) ↗](https://www.cloudflare.com/learning/cloud/what-is-iaas/) platform (e.g. AWS). This example organization has already deployed cloud-based [identity providers ↗](https://www.cloudflare.com/learning/access-management/what-is-an-identity-provider/) (IdP), [unified endpoint management ↗](https://www.cloudflare.com/learning/security/glossary/what-is-endpoint/) (UEM) and endpoint protection platforms (EPP) as part of a Zero Trust initiative.

In the bottom half are a variety of users, devices, networks, and locations. Users work from a variety of locations: homes, headquarters and branch offices, airports, and others. The devices they use might be managed by the organization or may be personal devices. In addition to the cloud, applications run in a data center in the organization's headquarters and in a data center operator's colo facility ([Equinix ↗](https://www.equinix.com/), in this example).

A SASE architecture will define, secure, and streamline how each user and device will connect to the various resources in the diagram. Over the following sections, this guide will show ways to integrate Cloudflare One into the above infrastructure:

* **Applications and services**: Placing access to private applications and services behind Cloudflare
* **Networks**: Connecting entire networks to Cloudflare
* **Forwarding device traffic**: Facilitating access to Cloudflare-protected resources from any device
* **Verifying users and devices**: Identifying which user an access request comes from, and which device that user is on

### Connecting applications

This journey to a SASE architecture starts with an organization needing to provide remote access to non-Internet facing, internal-only web applications and services (e.g. SSH or RDP). Organizations typically deploy VPN appliances to connect users to the company network where the applications are hosted. However, many applications now live in cloud Infrastructure-as-a-Service platforms, where traditional VPN solutions are hard to configure. This often results in poor application and connectivity performance for users.

#### Tunnels to self-hosted applications

[Zero Trust Network Access ↗](https://www.cloudflare.com/learning/access-management/what-is-ztna/) (ZTNA) is a SASE service that secures access to self-hosted applications and services. ZTNA functionality can be divided broadly into two categories: 1) establishing connectivity between Cloudflare's network and the environments where the applications are running, and 2) setting policies to define how users are able to access these applications. In this section, we first examine the former — how to connect apps to Cloudflare.

Connectivity to self-hosted applications is facilitated through tunnels that are created and maintained by a software connector, [cloudflared](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/). `cloudflared` is a lightweight daemon installed in an organization's infrastructure that creates a tunnel via an outbound connection to Cloudflare's global network. The connector can be installed in a variety of ways:

* In the OS installed on the bare metal server
* In the OS that is running in a virtualized environment
* In a [container ↗](https://hub.docker.com/r/cloudflare/cloudflared) running in a Docker or Kubernetes environment

`cloudflared` runs on Windows, Linux, or macOS operating systems and creates an encrypted tunnel using QUIC, a modern protocol that uses UDP (instead of TCP) for fast tunnel performance and modern encryption standards. Generally speaking, there are two approaches for how users can deploy `cloudflared` in their environment:

1. **On the same server and operating system where the application or service is running**. This is typically in high-risk or compliance deployments where organizations require independent tunnels per application. `cloudflared` consumes a small amount of CPU and RAM, so impact to server performance is marginal.
2. **On one or more dedicated servers in the same network where the applications run**. This often takes the form of multiple containers in a Docker or Kubernetes environment.

`cloudflared` manages multiple outbound connections back to Cloudflare and usually requires no changes to network firewalls. Those connections are spread across servers in more than one Cloudflare data center for reliability and failover. Traffic destined for a tunnel is forwarded to the connection that is geographically closest to the request, and if a `cloudflared` connection isn't responding, the tunnel will automatically fail over to the next available connection.

For more control over the traffic routed through each tunnel connection, users can integrate with the Cloudflare [load balancing](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/public-load-balancers/) service. To ensure reliable local connectivity, organizations should deploy more than one instance of `cloudflared` across their application infrastructure. For example, with ten front-end web servers running in a Kubernetes cluster, you might deploy three Kubernetes services [running cloudflared replicas](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/deployment-guides/kubernetes/).
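The failover behavior described above can be sketched in a few lines. This is an illustrative model only, not Cloudflare's implementation; the data center names, distances, and health flags below are hypothetical:

```python
# Illustrative sketch of tunnel connection failover: traffic prefers the
# geographically closest connection, and an unresponsive connection is
# skipped in favor of the next-closest healthy one.

def pick_connection(connections):
    """Pick the closest healthy cloudflared connection."""
    healthy = [c for c in connections if c["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy tunnel connections available")
    return min(healthy, key=lambda c: c["distance_km"])

connections = [
    {"colo": "AMS", "distance_km": 40, "healthy": False},  # closest, but down
    {"colo": "FRA", "distance_km": 360, "healthy": True},
    {"colo": "LHR", "distance_km": 540, "healthy": True},
]

# The closest data center (AMS) is not responding, so traffic fails
# over to the next-closest healthy connection.
print(pick_connection(connections)["colo"])  # FRA
```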

![Using cloudflared, multiple outbound connections are created back to Cloudflare across multiple data centers to improve overall performance and reliability.](https://developers.cloudflare.com/_astro/cf1-ref-arch-7.Dk3BnKM8_UmiKN.svg) 

Once tunnels have been established, there are two methods for how user traffic is forwarded to your application or service. Each method below is protected by policies managed by the ZTNA service that enforces authentication and access (which will be explored in further depth [later in this document](#secure-access-to-self-hosted-apps-and-services)).

##### Public hostname

Each public hostname is specific to an address, protocol, and port associated with a private application, allowing for narrow access to a specific service when there might be multiple applications running on the same host.

For example, organizations can define a public hostname (`mywebapp.domain.com`) to provide access to a web server running on `https://localhost:8080`, while ensuring no access to local Kubernetes services.
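The hostname-to-service mapping can be pictured as a simple lookup in which only explicitly published hostnames resolve to a local service. This is a hypothetical sketch of the behavior, not a real `cloudflared` configuration; the hostnames and services are made up for the example:

```python
# Each public hostname maps to exactly one local service address and
# port; anything not explicitly published stays unreachable, even if it
# runs on the same host.

ingress = {
    "mywebapp.domain.com": "https://localhost:8080",
    "ssh.domain.com": "ssh://localhost:22",
}

def route(hostname):
    # Unpublished services (e.g. a local Kubernetes dashboard) are not
    # reachable through the tunnel.
    return ingress.get(hostname, "HTTP 404: not found")

print(route("mywebapp.domain.com"))       # https://localhost:8080
print(route("k8s-dashboard.domain.com"))  # HTTP 404: not found
```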

Key capabilities:

* A hostname is created in a public DNS zone, and all requests to that hostname are first routed to the Cloudflare network and inspected against configured security and access policies before being routed through the tunnel to the secured private resource
* Multiple hostnames can be defined per tunnel, with each hostname mapping to a single application (service address and port)
* Support for HTTP/HTTPS protocols
* Access to resources only requires a browser
* When Cloudflare's device client is deployed on a user device, policies can leverage additional contextual signals (e.g. determining whether the device is managed or running the latest OS) in policy enforcement
* For access to SSH/VNC services, Cloudflare renders an SSH/VNC terminal using WebAssembly in the browser

Applications exposed this way receive all of the benefits of Cloudflare's leading DNS, CDN, and DDoS protection services, as well as its web application firewall (WAF), API security, and bot management services, all without exposing application servers directly to the Internet.

##### Private network

In some cases, users may want to leverage ZTNA policies to provide access to many applications on an entire private network. This allows for greater flexibility over the ways clients connect and how services are exposed. It also enables communication to resources over protocols other than HTTP. In this scenario, users specify the subnet for the private network they wish to be accessible via Cloudflare.

Key capabilities:

* `cloudflared`, combined with the Cloudflare device agent, provides access to private networks, allowing for arbitrary L4 TCP, UDP, or ICMP connections
* One or many networks can be configured using CIDR notation (e.g. 172.21.0.16/28)
* Access to resources on the private network requires the Cloudflare device agent to be installed on clients, and at least one Cloudflare Tunnel server on the connecting network
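As a minimal illustration of the CIDR semantics, the example subnet above (172.21.0.16/28) only spans addresses .16 through .31, so a membership check over the configured networks might look like this sketch (Python's standard `ipaddress` module is used purely for illustration):

```python
# Check whether a destination address falls inside a routed private
# network, using the example CIDR from the text.
import ipaddress

private_networks = [ipaddress.ip_network("172.21.0.16/28")]

def is_routed(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in private_networks)

print(is_routed("172.21.0.20"))  # True: inside 172.21.0.16/28
print(is_routed("172.21.0.40"))  # False: the /28 only spans .16-.31
```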

For both methods, it is important to note that `cloudflared` only proxies inbound traffic to a private application or network. It does not become a gateway or "on-ramp" back to Cloudflare for the network that it proxies inbound connections to. This means that if the web server starts its own connection to another Internet-based API, that connection will not be routed via Cloudflare Tunnel and will instead be routed via the host server's default route and gateway.

This is the desirable outcome in most network topologies, but there are some instances in which network services need to communicate directly with a remotely-connected user, or with services on other segmented networks.

If users require connections that originate from the server or network to be routed through Cloudflare, there are multiple on-ramps through which to achieve this, which will be explained further in the "Connecting Networks" section.

#### SaaS applications

SaaS applications are inherently always connected to and accessed via the public Internet. As a result, the aforementioned tunnel-and-app-connector approach does not apply. Instead, organizations with a SASE architecture inspect and enforce policies on Internet-bound SaaS traffic via a [secure web gateway ↗](https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/) (SWG), which serves as a cloud-native forward proxy.

The SWG includes policies that examine outbound traffic requests and inbound content responses to determine if the user, device, or network location has access to resources on the Internet. Organizations can use these policies to control access to approved SaaS applications, as well as detect and block the use of unapproved applications (also known as [shadow IT ↗](https://www.cloudflare.com/learning/access-management/what-is-shadow-it/)).

Some SaaS applications allow organizations to configure an IP address allowlist, which limits access to the application based on the source IP address of the request. With Cloudflare, organizations can obtain dedicated [egress IP](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/) addresses, which can be used as the source address for all traffic leaving their network. When combined with an allowlist in a SaaS application, organizations can ensure that users are only able to access applications if they are first connected to Cloudflare. (More detail on this approach is outlined in a later section about connecting user devices.)
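The allowlist pattern can be sketched from the SaaS application's point of view; the dedicated egress addresses below are fabricated for the example:

```python
# Sketch of a SaaS-side IP allowlist: the application only accepts
# requests whose source address is one of the organization's dedicated
# egress IPs, so users must be connected through Cloudflare.
import ipaddress

DEDICATED_EGRESS = [ipaddress.ip_address("203.0.113.5"),
                    ipaddress.ip_address("203.0.113.6")]

def saas_allows(source_ip):
    return ipaddress.ip_address(source_ip) in DEDICATED_EGRESS

print(saas_allows("203.0.113.5"))   # True: egressed via Cloudflare
print(saas_allows("198.51.100.9"))  # False: direct from a home ISP
```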

Another method to secure access to SaaS applications is to configure single sign-on (SSO) so that Cloudflare becomes an identity proxy — acting as the identity provider (IDP) — as part of the authentication and authorization process.

Key capabilities:

* Apply consistent access policies across both self-hosted and SaaS applications
* Layer device security posture into the authentication process (e.g. users can ensure that only managed devices, running the latest operating system and passing all endpoint security checks, are able to access SaaS applications)
* Ensure that certain network routes are used for access (e.g. users can require that devices are connected to Cloudflare using the device agent, which allows them to filter traffic to the SaaS application and prevent downloads of protected data)
* Centralize SSO for applications in Cloudflare with a single SSO integration from Cloudflare to the IdP, making both infrastructure and access policies IdP-agnostic (e.g. users can allow access to critical applications only when MFA is used, no matter which IdP is used to authenticate)

When Cloudflare acts as the SSO service to an application, user authentication is still handled by an organization's existing identity provider, but is proxied via Cloudflare, where additional access restrictions can be applied. The diagram below is a high-level example of a typical request flow:

![The flow of SSO requests is proxied through Cloudflare, where the IdP is still used to authenticate, but Cloudflare provides additional access controls.](https://developers.cloudflare.com/_astro/cf1-ref-arch-8.B5wnNeFj_asbcF.svg) 

The last method of connecting SaaS applications to Cloudflare's SASE architecture is with an API-based [cloud access security broker ↗](https://www.cloudflare.com/learning/access-management/what-is-a-casb/) (CASB). The Cloudflare CASB integrates via API to [popular SaaS suites](https://developers.cloudflare.com/cloudflare-one/integrations/cloud-and-saas/) — including Google Workspace, Microsoft 365, Salesforce, and more — and continuously scans these applications for misconfigurations, unauthorized user activity, and other security risks.

Native integration with the Cloudflare [data loss prevention ↗](https://www.cloudflare.com/learning/access-management/what-is-dlp/) (DLP) service enables CASB to scan for sensitive or regulated data that may be stored in files with incorrect permissions — further risking leaks or unauthorized access. CASB reports findings that alert IT teams to items such as:

* Administrative accounts without adequate MFA
* Company-sensitive data in files stored with public access permissions
* Missing application configurations (e.g. domains missing SPF/DMARC records)
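The kind of finding that CASB surfaces when combined with DLP can be illustrated with a toy scan; the file metadata below is fabricated for the example and does not reflect any real API:

```python
# Flag files whose sharing permission is public while a DLP scan has
# marked their contents as sensitive -- the second finding type listed
# above.

files = [
    {"name": "payroll.xlsx", "sharing": "public", "dlp_sensitive": True},
    {"name": "lunch-menu.pdf", "sharing": "public", "dlp_sensitive": False},
    {"name": "roadmap.docx", "sharing": "internal", "dlp_sensitive": True},
]

findings = [f["name"] for f in files
            if f["sharing"] == "public" and f["dlp_sensitive"]]
print(findings)  # ['payroll.xlsx']
```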

#### Checkpoint: Connecting applications to Cloudflare

This is what the architecture of a typical organization might look like once it has integrated with Cloudflare services. It is important to note that Cloudflare is designed to secure organizations' existing applications and services in the following ways:

* All self-hosted applications and services are only accessible through Cloudflare and controlled by policies defined in the Cloudflare ZTNA service
* SaaS application traffic is filtered and secured via the Cloudflare SWG
* SaaS services are scanned via the Cloudflare CASB to check configurations and the permissions of data at rest
![Access to all applications is now only available via Cloudflare.](https://developers.cloudflare.com/_astro/cf1-ref-arch-9.DbbzPtNJ_Z1xm3bo.svg) 

### Connecting networks

Once an organization's applications and services have been integrated, it is time to connect Cloudflare to their existing networks. Regional offices, corporate headquarters, retail locations, data centers, and cloud-hosted infrastructure all need to forward traffic to the new corporate SASE network.

When all traffic flows through Cloudflare, SASE services perform the following actions:

* Grant application access
* Filter general Internet-bound traffic (e.g. block access to sites that host malware)
* Isolate websites to protect users from zero-day or unknown harmful Internet content
* Filter traffic to identify data defined by DLP policies, then block the download/upload of that data to insecure devices or applications
* Provide visibility into the use of non-approved applications, and allow admins to either block or apply policies around their use

There are several approaches for connecting networks to Cloudflare, which can provide further flexibility in how an organization provides access to SASE-protected resources:

1. **Use software agents to create tunnels from host machines back to Cloudflare**. This is typically the method favored by users who own their own servers and applications.
2. **Set up IPsec or GRE tunnels from network routers and firewalls to connect them to the Cloudflare WAN service**. This is the approach that network administrators use when they want to forward traffic to and from entire networks.
3. **Connect a network directly to Cloudflare**. This method works best when an organization's network resides in a supported data center, usually one that is colocated with a Cloudflare data center.

These methods will be explained further in the next sections.

#### Using software agents

There are two software-based methods of connecting networks to Cloudflare, depending on the type of applications that currently exist on the network.

##### Client-to-server connectivity

As described in the previous section, [cloudflared](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/) proxies requests to applications and services on private networks. It installs on servers in the private network and creates secure tunnels to Cloudflare over the Internet. These connections are balanced across multiple Cloudflare data centers for reliability and can be made via multiple connectors, which helps increase the capacity of the tunnels.

Using `cloudflared`, Cloudflare Tunnel supports client-to-server connections over the tunnel. Any service or application running behind the tunnel will use the default routing table when initiating outbound connectivity.

This model is appropriate for the majority of scenarios, in which external users need to access resources within a private network and bidirectionally initiated communication is not required.

![Requests initiated from a client are securely tunneled to Cloudflare via a device agent, while requests from inside the private network follow the default route.](https://developers.cloudflare.com/_astro/cf1-ref-arch-10.PVIlTF5F_2l0MEM.svg) 

For bidirectional or mesh connectivity, organizations should use the WARP Connector.

##### Mesh connectivity

The [WARP Connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/) is a lightweight solution for site-to-site, bidirectional, and mesh networking connectivity that does not require changes to underlying network routing infrastructure. WARP Connector software is installed on a Linux server within an organization's network, which then becomes a gateway for other local networks that need to on-ramp traffic to Cloudflare.

This provides a lightweight solution to support services such as Microsoft's System Center Configuration Manager (SCCM), Active Directory server updates, VoIP and SIP traffic, and developer workflows with complex CI/CD pipeline interaction. It can run either alongside `cloudflared` and Cloudflare WAN (formerly Magic WAN), or as a standalone remote access and site-to-site connector to the Cloudflare network.

The WARP Connector can proxy both user-to-network and network-to-network connectivity, or can be used to establish an overlay network of Carrier Grade NAT ([CGNAT ↗](https://en.wikipedia.org/wiki/Carrier-grade%5FNAT)) addressed endpoints to provide secure, direct connectivity to established resources using CGNAT IP ranges. This helps address overlapping IP range challenges, point-solution access problems, and gradual changes to network design that must not impact the wider underlying system.

![In an example scenario, a developer might push code to a git repository, which ends up in a Kubernetes cluster in a staging network. From staging, it is accessed by a QA tester. All of this traffic is routed and protected via WARP Connector.](https://developers.cloudflare.com/_astro/cf1-ref-arch-11.CZ1ltr0Y_Z1RiCFP.svg) 

Cloudflare Tunnel via `cloudflared` is the primary method for connecting users to applications and services on private networks because it is a simpler, more granular and agile solution for many application owners (vs. IP tunnel based connectivity technology, like [IPsec ↗](https://www.cloudflare.com/learning/network-layer/what-is-ipsec/) and [GRE ↗](https://www.cloudflare.com/learning/network-layer/what-is-gre-tunneling/)). Cloudflare Tunnel via WARP Connector is the preferred method for mesh or other software-defined networking — most of which require bidirectional connectivity — when organizations do not want to make changes to the underlying network routing or edge infrastructure.

#### Using network equipment

Where it is not optimal or possible to install software agents, networks can also be connected to Cloudflare using existing network equipment, such as routers and network firewalls. To do this, organizations create IPsec or GRE tunnels that connect to Cloudflare's cloud-native [Cloudflare WAN ↗](https://www.cloudflare.com/network-services/products/magic-wan/) service. With Cloudflare WAN, existing network hardware can connect and route traffic from their respective network locations to Cloudflare through a) secure IPsec-based tunnels over the Internet, or b) across [Cloudflare Network Interconnect ↗](https://www.cloudflare.com/network-services/products/network-interconnect/) (CNI) — private, direct connections that link existing network locations to the nearest Cloudflare data center.

Cloudflare's WAN service uses a "light-branch, heavy-cloud" architecture that represents the evolution of software-defined WAN (SD-WAN) connectivity. With Cloudflare WAN, as depicted in the network architecture diagram below, the Cloudflare global network functions as a centrally-managed connectivity hub that securely and efficiently routes traffic between all existing network locations:

![Cloudflare's Connectivity Cloud securely links a variety of network locations to the Internet through products such as Firewall, ZTNA, CASB and Load Balancer.](https://developers.cloudflare.com/_astro/cf1-ref-arch-12.D-EXKLBe_2c1ypU.svg) 

As previously described, Cloudflare uses a routing technique called [anycast ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/) to globally advertise all of the services and endpoints on the Cloudflare network, including the endpoints for WAN IP tunnels.

With [anycast IPsec ↗](https://blog.cloudflare.com/anycast-ipsec/) or anycast GRE tunnels, each tunnel configured from an organization's network device (e.g. edge router, firewall appliance, etc.) connects to hundreds of global Cloudflare data centers. Traffic sourced from an organization's network location is sent directly over these tunnels and always routes to the closest active Cloudflare data center. If the closest Cloudflare data center is unavailable, the traffic is automatically rerouted to the next-closest data center.

![In an example scenario, IPsec traffic from an office network's router would be sent to the closest Cloudflare data center.](https://developers.cloudflare.com/_astro/cf1-ref-arch-13.5dK35i5D_Z1Fn4Lh.svg) 

To further improve network resiliency, Cloudflare WAN also supports Equal Cost Multi-Path (ECMP) routing between the Cloudflare network and an organization's network location(s). With ECMP, traffic can be load balanced across multiple anycast IP tunnels, which helps increase throughput and maximize network reliability. In the event of a network path failure on one or more tunnels, traffic automatically fails over to the remaining healthy tunnels.
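ECMP-style spreading of flows across tunnels can be sketched as a flow hash over the set of healthy tunnels; the hashing scheme and tunnel names here are illustrative, not Cloudflare's actual implementation:

```python
# A flow hash spreads traffic over the healthy tunnels; removing an
# unhealthy tunnel automatically shifts its flows to the survivors.
import hashlib

def pick_tunnel(flow, tunnels):
    healthy = [t for t in tunnels if t["healthy"]]
    digest = hashlib.sha256("|".join(map(str, flow)).encode()).digest()
    return healthy[int.from_bytes(digest[:4], "big") % len(healthy)]["name"]

tunnels = [
    {"name": "ipsec-1", "healthy": True},
    {"name": "ipsec-2", "healthy": True},
]
flow = ("10.1.2.3", 50123, "198.51.100.7", 443, "tcp")  # 5-tuple

print(pick_tunnel(flow, tunnels) in ("ipsec-1", "ipsec-2"))  # True

# If one tunnel fails, the same flow still lands on a healthy tunnel.
tunnels[0]["healthy"] = False
print(pick_tunnel(flow, tunnels))  # ipsec-2
```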

The simplest and easiest way to on-ramp existing network locations to the Cloudflare WAN service is to deploy the Cloudflare One Appliance (also referred to as the WAN Connector), a lightweight appliance you can install in corporate network locations to automatically connect, steer, and shape any IP traffic through secure IPsec tunnels. When the WAN Connector is installed into a network, it will automatically establish communication with the Cloudflare network, download and provision relevant configurations, establish resilient IPsec tunnels, and route connected site network traffic to Cloudflare.

The WAN Connector can be deployed as either a hardware or virtual appliance, making it versatile for a variety of user network environments — on-premises, virtual, or public cloud. Management, configuration, observability, and software updates for WAN Connectors are centrally managed from Cloudflare via either the dashboard or the Cloudflare API. As of 2023, the WAN Connector is best suited for connecting small and medium-sized networks to Cloudflare (for example, small offices and retail stores).

In situations where deploying the Cloudflare One Appliance is not feasible or desirable, organizations can securely connect their site networks to Cloudflare by configuring IPsec tunnels from their existing IPsec-capable network devices, including WAN or SD-WAN routers, firewalls, and cloud VPN gateways. Please refer to the Cloudflare [documentation](https://developers.cloudflare.com/cloudflare-wan/configuration/manually/third-party/) for up-to-date examples of validated IPsec devices.

There may also be situations where network-layer encryption is not necessary — for example, when a site's WAN-bound traffic is already encrypted at the application layer (via TLS), or when an IPsec network device offers very limited throughput performance as it encrypts and decrypts IPsec traffic. Under these circumstances, organizations can connect to the Cloudflare network using [GRE tunnels](https://developers.cloudflare.com/cloudflare-wan/configuration/manually/how-to/configure-tunnel-endpoints/).

Organizations may also connect their network locations directly to the Cloudflare network via [Cloudflare Network Interconnect ↗](https://www.cloudflare.com/network-services/products/network-interconnect/) (CNI). Cloudflare [supports a variety of options](https://developers.cloudflare.com/network-interconnect/) to connect your network to Cloudflare:

* Direct CNI for Cloudflare WAN and Magic Transit
* Classic CNI for Magic Transit
* Cloud CNI for Cloudflare WAN and Magic Transit
* Peering via either an Internet exchange or a private network interconnect (PNI)

The following table summarizes the different methods of connecting networks to Cloudflare:

| **Use case**                                                                                                                                           | **Recommended**                             | **Alternative solution**                                                                                      |
| ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
| Remote users connecting to applications on private networks in a Zero Trust model (e.g. most VPN replacement scenarios)                                | **Cloudflare Tunnel (with cloudflared)**    | **Cloudflare WAN** Alternative option if cloudflared not suitable for environment                             |
| Site-to-site connectivity between branches, headquarters, and data centers                                                                             | **Cloudflare WAN**                          | **Cloudflare Tunnel (with WARP Connector)** Alternative option if routing changes cannot be made at perimeter |
| Egress traffic from physical sites or cloud environments to cloud security inspection (e.g. most common SWG and branch firewall replacement scenarios) | **Cloudflare WAN**                          | **N/A**                                                                                                       |
| Service-initiated communication with remote users (e.g. AD or SCCM updates, DevOps workflows, VOIP)                                                    | **Cloudflare Tunnel (with WARP Connector)** | **Cloudflare WAN** Alternative option if inbound source IP fidelity not required                              |
| Mesh networking and peer-to-peer connectivity                                                                                                          | **Cloudflare Tunnel (with WARP Connector)** | **N/A**                                                                                                       |

Each of these methods of connecting and routing traffic can be deployed concurrently from any location. The following diagram highlights how different connectivity methods can be used in a single architecture.

Note the following traffic flows:

* All traffic connected via a WARP Connector or device agent can communicate with each other over the mesh network  
   * Developers working from home can communicate with the production and staging servers in the cloud  
   * The employee in the retail location, as well as the developer at home, can receive VOIP calls on their laptop
* An HPC cluster in AWS represents a proprietary solution in which no third-party software agents can be installed; as a result, it uses an IPsec connection to Cloudflare WAN
* In the retail location, the Cloudflare One Appliance routes all traffic to Cloudflare via an IPsec tunnel  
   * An employee's laptop running the device agent creates its own secure connection to Cloudflare that is routed over the IPsec tunnel
* The application owner of the reporting system maintains a connection to Cloudflare using `cloudflared` and doesn't require any networking help to expose their application to employees
![Connecting and routing traffic can be created using various methods such as Cloudflare Network Interconnect, IPSEC tunnels, WARP Connector and cloudflared.](https://developers.cloudflare.com/_astro/cf1-ref-arch-14.BMsYJBWD_1UbvIi.svg) 

_Note: Labels in this image may reflect a previous product name._

_Note: All of the endpoints connected via the WARP Connector or device agent are automatically assigned IP addresses from the 100.96.0.0/12 address range, while endpoints connected to Cloudflare WAN retain their assigned RFC1918 private IP addresses. `cloudflared` can be deployed in any of the locations by an application owner to provide hostname-based connectivity to the application._
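The addressing claim in the note above can be checked directly: 100.96.0.0/12 sits inside the carrier-grade NAT block 100.64.0.0/10 (RFC 6598), so overlay addresses cannot collide with the RFC 1918 private ranges typically used on-site:

```python
# Verify that the overlay range is valid CGNAT space and disjoint from
# RFC 1918 private networks.
import ipaddress

overlay = ipaddress.ip_network("100.96.0.0/12")
cgnat = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared space
rfc1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

print(overlay.subnet_of(cgnat))                   # True
print(any(overlay.overlaps(n) for n in rfc1918))  # False
```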

Once the networks, applications, and user devices are connected to Cloudflare — regardless of the connection methods and devices used — all traffic can be inspected, authenticated, and filtered by the Cloudflare SASE services, then securely routed to their intended destinations. Additionally, consistent policies can be applied across all traffic, no matter how it arrives at Cloudflare.

#### Checkpoint: Connecting networks to Cloudflare

This is what a SASE architecture looks like when corporate network traffic from everywhere is forwarded to and processed by Cloudflare. In this architecture, it is possible to make a network connection from any remote location, office, or data center and connect to applications and services living in SaaS infrastructure, cloud-hosted infrastructure, or an organization's own on-premises data centers.

![Traffic from all networks, North and South, as well as East and West, is now flowing through and secured by Cloudflare.](https://developers.cloudflare.com/_astro/cf1-ref-arch-15.BL6UWZPA_3hLzV.svg) 

_Note: Labels in this image may reflect a previous product name._

### Forwarding device traffic

The previous sections explain using ZTNA to secure access to self-hosted applications and using an SWG to inspect and filter traffic destined for the Internet. When a user works on a device in any of the company networks connected to Cloudflare's connectivity cloud, all of that traffic is inspected and policies are applied without disrupting the user's workflow. Yet, users are not always (or ever) in the office; they work from home, on the road, or from other public networks. How do you ensure they have reliable access to your internal applications? How do you ensure their Internet browsing is secure no matter their work location?

There are several approaches to ensure that traffic from a user device that is not connected to an existing Cloudflare-protected network is still forwarded through Cloudflare and protected:

* [Install an agent on the device](#connecting-with-a-device-agent)
* [Modify browser proxy configuration](#browser-proxy-configuration)
* [Direct the user to a remote browser instance](#using-remote-browser-instances)
* [Modify DNS configuration](#agentless-dns-filtering)

#### Connecting with a device agent

The preferred method of ensuring device traffic is forwarded to Cloudflare is to install the device agent (also referred to as [Cloudflare One Client](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/)). The agent runs on Windows, macOS, Linux, iOS, and Android/ChromeOS, and creates a secure connection to Cloudflare where all non-local traffic is sent. Because of Cloudflare's use of anycast networking, the device agent always connects to the nearest Cloudflare server to ensure the best performance for the user. The device agent also collects local machine and network information, which is sent with each request to enrich policy evaluation in Cloudflare.

To allow for flexibility in how different devices and users connect, there are multiple [deployment modes](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/modes/):

* A full L4 traffic proxy
* An L7 DNS proxy
* An L7 HTTP proxy
* A posture-only mode that just collects device posture information

For example, organizations might have an office that continues to use an existing [DNS filtering ↗](https://www.cloudflare.com/learning/access-management/what-is-dns-filtering/) service, so they can configure the agent to just proxy network and HTTP traffic.

The agent can also be configured with flexible routing controls that allow for scenarios in which traffic destined for office printers is not sent to the Cloudflare network but, instead, routed to the local network. These [split tunnel configurations](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/route-traffic/split-tunnels/) can be made specific to groups of users, types of device operating system, or networks. By default, traffic destined for all private [IPv4 and IPv6 ranges ↗](https://datatracker.ietf.org/doc/html/rfc1918) is sent to the device's default gateway. If the application the user is attempting to reach is not in public DNS, you can configure the hostname and domain to be resolved with [local DNS services](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/cloudflared/private-dns/), so that the device agent does not attempt to resolve these using Cloudflare DNS.
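The default split-tunnel decision described above can be sketched as a simple routing check. This only mirrors the decision logic for illustration; real split-tunnel rules are configured in the Cloudflare dashboard:

```python
# Traffic for private (RFC 1918) IPv4 ranges goes to the device's
# default gateway; everything else is sent through the Cloudflare
# tunnel.
import ipaddress

SPLIT_TUNNEL_EXCLUDES = [ipaddress.ip_network(n) for n in
                         ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def next_hop(destination_ip):
    addr = ipaddress.ip_address(destination_ip)
    if any(addr in net for net in SPLIT_TUNNEL_EXCLUDES):
        return "local default gateway"
    return "Cloudflare tunnel"

print(next_hop("192.168.1.50"))  # local default gateway (e.g. a printer)
print(next_hop("203.0.113.10"))  # Cloudflare tunnel
```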

![Using the device agent allows Internet and company application bound traffic to be secured by Cloudflare's SWG and ZTNA services.](https://developers.cloudflare.com/_astro/cf1-ref-arch-16.DBOEvI3k_Z1Cgds4.svg) 

The agent is more than just a network proxy; it is able to examine the device's security posture, such as if the operating system is fully up-to-date or if the hard disk is encrypted. Cloudflare's integrations with [CrowdStrike ↗](https://www.cloudflare.com/partners/technology-partners/crowdstrike/endpoint-partners/), [SentinelOne ↗](https://www.cloudflare.com/partners/technology-partners/sentinelone/), and other third-party services also provide additional data about the security posture of the device. All of this information is associated with each request and, therefore, available for use in company policies — as explained in the "Unified Management" section.

The agent can be [deployed](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/) to a device either manually or using existing unified endpoint management (UEM) technologies. Using the agent, users register and authenticate their device to Cloudflare with the integrated identity providers. Identity information — combined with information about the local device — is then used in your SWG and ZTNA policies (including inline CASB capabilities shared across these Cloudflare services).
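Combining identity and posture signals in an access decision might look like the following sketch; the signal names and the all-must-pass rule are hypothetical and do not represent Cloudflare's policy engine:

```python
# Toy access decision over identity and device posture signals attached
# to a request.

def allow_access(request):
    checks = (
        request.get("identity_verified", False),  # user passed the IdP
        request.get("device_managed", False),     # enrolled via UEM
        request.get("os_up_to_date", False),      # posture from the agent
        request.get("disk_encrypted", False),     # posture from the agent
    )
    return all(checks)

managed = {"identity_verified": True, "device_managed": True,
           "os_up_to_date": True, "disk_encrypted": True}
byod = {"identity_verified": True, "device_managed": False}

print(allow_access(managed))  # True: identity and posture checks pass
print(allow_access(byod))     # False: valid login, but unmanaged device
```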

#### Browser proxy configuration

When it is not possible to install software on the device, there are agentless approaches.

One option is to configure the browser to forward HTTP requests to Cloudflare by configuring proxy server details in the browser or OS. Although this can be done manually, it is more common for organizations to automate the configuration of browser proxy settings using Internet-hosted [Proxy Auto-Configuration](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/proxy-endpoints/) (PAC) files. The browser identifies the PAC file location in several ways:

* MDM software configuring the setting in the browser
* In Windows domains, Group Policy Objects (GPO) can configure the browser's PAC file
* Browsers can use [Web Proxy Auto-Discovery ↗](https://datatracker.ietf.org/doc/html/draft-ietf-wrec-wpad-01) (WPAD)

From there, configure a proxy endpoint to which the browser will send all HTTP requests. If using this method, note that:

* Filtering HTTPS traffic will also require [installing and trusting Cloudflare root certificates](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/user-side-certificates/) on the devices.
* A proxy endpoint will only proxy traffic sourced from a set of known IP addresses, such as the pool of public IP addresses used by a site's NAT gateway, that the administrator must specify.
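As an illustration, a proxy endpoint definition is small: a name plus the set of allowed source IPs. The Python sketch below builds that request body; the API path and field names reflect our reading of the current Gateway API and may need adjusting, and the account ID and IP range are placeholders.

```python
import json

# Placeholder values -- substitute your own account ID and egress ranges.
ACCOUNT_ID = "your-account-id"

# Body for creating a Gateway proxy endpoint: a name plus the known source
# IPs (for example, a site's NAT gateway pool) allowed to use the proxy.
payload = {
    "name": "branch-office-proxy",
    "ips": ["203.0.113.0/24"],  # documentation range, not a real office
}

# API path as we understand the current Gateway API (request shown, not sent).
url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/proxy_endpoints"

print(json.dumps(payload))
```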

#### Using remote browser instances

Another option to ensure device traffic is sent to Cloudflare is to use [remote browser isolation ↗](https://www.cloudflare.com/learning/access-management/what-is-browser-isolation/) (RBI). When a remote user attempts to visit a website, the corresponding requests and responses are handled by a headless remote browser running in the Cloudflare network that functions as a "clone" of the user device's local browser. This shields the user's device from potentially harmful content and code execution that may be downloaded from the website it visits.

RBI renders the received content in an isolated and secure cloud environment. Instead of executing the web content locally, the user device receives commands for how to "draw" the final rendered web page over a highly optimized protocol supported by all HTML5-compliant browsers on all operating systems. Because the remote browser runs on Cloudflare's servers, SWG policies are automatically applied to all browser requests.

Ensuring access to sites is protected with RBI does not require any local software installation or reconfiguring the user's browser. Below are [several ways](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/setup/) to accomplish this:

* Typically, a remote browser session is started as the result of an SWG policy — the user just requests websites without being notified that the content is loading in a remote browser.
* Organizations can also provide users with a link that automatically ensures RBI always processes each request.
* Organizations can also opt to use the ZTNA service to redirect all traffic from self-hosted applications via RBI instances.

All requests via a remote browser pass through the Cloudflare SWG; therefore, policies can enforce certain website access limitations. For instance, browser isolation policies can be established to:

* Disable copy/paste between a remote web page and the user's local machine; this can prevent the employee from pasting proprietary code into third-party chatbots.
* Disable printing of remote web content to prevent contractors from printing confidential information.
* Disable file uploads/downloads to ensure sensitive company data is not sent to — or downloaded from — certain websites.
* Disable keyboard input (in combination with other policies) to limit data being exposed, such as someone typing in passwords to a phishing site.

Isolating web applications and applying policies to risky websites helps organizations limit data loss from cyber threats or user error. And, like many Cloudflare One capabilities, RBI can be leveraged across other areas of the SASE architecture. Cloudflare's [email security ↗](https://www.cloudflare.com/learning/email-security/what-is-email-security/) service, for example, can automatically rewrite and isolate suspicious links in emails. This "email link isolation" capability helps protect the user from potential malicious activity such as credential harvesting phishing.

#### Agentless DNS filtering

Another option for securing traffic via the Cloudflare network is to configure the device to forward DNS traffic to Cloudflare to be inspected and filtered. First, [DNS locations](https://developers.cloudflare.com/cloudflare-one/traffic-policies/get-started/dns/#connect-dns-locations) are created, which allow policies to be applied based on different network locations. A location can be identified either by the source IP address of the request, or by the use of "[DNS over TLS ↗](https://www.cloudflare.com/learning/dns/dns-over-tls/)" or "[DNS over HTTPS ↗](https://www.cloudflare.com/learning/dns/dns-over-tls/)".

When using source IP addresses, either the device must be told which DNS servers to use, or the local DNS server on the network the device is connected to must forward all DNS queries to Cloudflare. DNS over TLS and DNS over HTTPS both require device configuration, and support varies by operating system. Our recommendation is DNS over HTTPS, which has wider operating system support.
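For illustration, the sketch below constructs a DNS-over-HTTPS query against a hypothetical Gateway DNS location. The "yourteam" subdomain is a placeholder, and we assume the Gateway resolver accepts the same `application/dns-json` query format as Cloudflare's public 1.1.1.1 resolver.

```python
from urllib.parse import urlencode

# Placeholder: replace "yourteam" with the subdomain shown for your DNS
# location in the Zero Trust dashboard.
doh_endpoint = "https://yourteam.cloudflare-gateway.com/dns-query"

# JSON-format DoH query for the A record of example.com (assumes the
# Gateway resolver accepts the same application/dns-json format as 1.1.1.1).
params = {"name": "example.com", "type": "A"}
headers = {"accept": "application/dns-json"}

request_url = f"{doh_endpoint}?{urlencode(params)}"
print(request_url)
```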

All of the above methods result in only the DNS requests — not all traffic — being sent to Cloudflare. SWG DNS policies are then implemented at this level to manage access to corporate network resources.

#### Summary of SWG capabilities for each traffic forwarding method

The following table summarizes SWG capabilities for the various methods of forwarding traffic to Cloudflare (as of Oct 2023):

|                                | IP tunnel or Interconnect (Cloudflare WAN) | Device Agent (WARP)\*1 | Remote Browser | Browser proxy | DNS proxy |
| ------------------------------ | ------------------------------------------ | ---------------------- | -------------- | ------------- | --------- |
| Types of traffic forwarded     | TCP/UDP                                    | TCP/UDP                | HTTP           | HTTP          | DNS       |
| **Policy types**               |                                            |                        |                |               |           |
| DNS                            | Yes                                        | Yes                    | Yes            | Yes           | Yes       |
| HTTP/S\*2                      | Yes                                        | Yes                    | Yes            | Yes           | N/A       |
| Network (L3/L4 parameter)      | Yes                                        | Yes                    | Yes            | Yes           | No        |
| **Data available in policies** |                                            |                        |                |               |           |
| Identity information           | No                                         | Yes                    | Yes            | No            | No\*3     |
| Device posture                 | No                                         | Yes                    | No             | No            | No        |
| **Capabilities**               |                                            |                        |                |               |           |
| Remote browser isolation       | Yes                                        | Yes                    | Yes            | Yes           | N/A       |
| Enforce egress IP              | Yes                                        | Yes                    | Yes            | Yes           | N/A       |

Notes:

1. Running the device agent in DNS over HTTPS mode provides user identity information, in addition to the same capabilities as connecting via DNS.
2. To filter HTTPS traffic, the Cloudflare [certificate](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/user-side-certificates/) needs to be installed on each device. This can be automated when using the device agent.
3. If configuring DNS over HTTPS, it is possible to inject a [service token](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/dns-over-https/#filter-doh-requests-by-user) into the request, which associates the query with an authenticated user.

#### Checkpoint: Forwarding device traffic to Cloudflare

By connecting entire networks or individual devices, organizations can now route user traffic to Cloudflare for secure access to privately-hosted applications and secure public Internet access.

Once traffic from all user devices is forwarded to the Cloudflare network, it is time for organizations to revisit their high-level SASE architecture:

![With all devices and networks connected, any traffic destined for company applications and services all flows through Cloudflare, where policies are applied to determine access.](https://developers.cloudflare.com/_astro/cf1-ref-arch-17.Cv4XcukK_ZUwUrV.svg) 

_Note: Labels in this image may reflect a previous product name._

### Verifying users and devices

At this point in implementing SASE architecture, organizations have the ability to route and secure traffic beginning from the point a request is made from a browser on a user's device, all the way through Cloudflare's network to either a company-hosted private application/service or to the public Internet.

But, before organizations define policies to manage that access, they need to know who is making the request and determine the security posture of the device.

#### Integrating identity providers

The first step in any access decision is to determine who is making the request – i.e., to authenticate the user.

Cloudflare integrates with identity providers that manage secure access to resources for organizations' employees, contractors, partners, and other users. This includes support for integrations with any [SAML](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/generic-saml/)- or OpenID Connect ([OIDC](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/generic-oidc/))-compliant service; Cloudflare One also includes pre-built integrations with [Okta](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/okta/), [Microsoft Entra ID (formerly Azure Active Directory)](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/entra-id/), [Google Workspace](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/google-workspace/), as well as consumer IdPs such as [Facebook](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/facebook-login/), [GitHub](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/github/) and [LinkedIn](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/linkedin/).

Multiple IdPs can be integrated, allowing organizations to apply policies to a wide range of both internal and external users. When a user attempts to access a Cloudflare secured application or service, they are redirected to authenticate via one of the integrated IdPs. When using the device agent, users must also authenticate to one of their organization's configured IdPs.

![Users are presented with a list of integrated identity providers before accessing protected applications.](https://developers.cloudflare.com/_astro/cf1-ref-arch-18.dg0Dmn3U_Z1aBTIk.svg) 

Once a user is authenticated, Cloudflare receives that user's information, such as username, group membership, authentication method (password, whether MFA was involved and what type), and other associated attributes (i.e., the user's role, department, or office location). This information from the IdP is then made available to the policy engine.

In addition to user identities, most corporate directories also contain groups to which those identities are members. Cloudflare supports the importing of group information, which is then used as part of the policy. Group membership is a critical part of aggregating single identities so that policies can be less complex. It is far easier — for example — to set a policy allowing all employees in the sales department to access Salesforce, than to identify each user in the sales organization.

Cloudflare also supports authentication of devices that are not typically associated with a human user – such as an IoT device monitoring weather conditions at a factory. For those secure connections, organizations can generate [service tokens](https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/) or create [Mutual TLS ↗](https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/) (mTLS) certificates that can be deployed to such devices or machine applications.
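For example, a headless device or machine application can authenticate with a service token by attaching it as a pair of request headers. The token values below are placeholders; a real token is generated in the Zero Trust dashboard.

```python
# Placeholder credentials -- a real service token is generated in the
# Zero Trust dashboard, and its client ID ends in ".access".
CLIENT_ID = "your-token-id.access"
CLIENT_SECRET = "your-token-secret"

# A machine or IoT device authenticates to an Access-protected service by
# sending the token in these two headers instead of a browser login flow.
headers = {
    "CF-Access-Client-Id": CLIENT_ID,
    "CF-Access-Client-Secret": CLIENT_SECRET,
}

print(sorted(headers))
```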

#### Trusting devices

Not only does the user identity need to be verified, but the security posture of the user's device needs to be assessed. The device agent is able to provide a range of device information, which Cloudflare uses to build comprehensive security policies.

The following built-in posture checks are available:

* [Application check](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/application-check/): Checks that a specific application process is running
* [File check](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/file-check/): Checks for the presence of a file
* [Firewall](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/firewall/): Checks if a firewall is running
* [Disk encryption](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/disk-encryption/): Checks if/how many disks are encrypted
* [Domain joined](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/domain-joined/): Checks if the device is joined to a Microsoft Active Directory domain
* [OS version](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/os-version/): Checks what version of the OS is running
* [Unique Client ID](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/device-uuid/): When using an MDM tool, organizations can assign a verifiable UUID to a mobile, desktop, or laptop device
* [Device serial number](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/corp-device/): Checks to see if the device serial matches a list of company desktop/laptop computers

Cloudflare One can also integrate with any deployed endpoint security solution, such as [Microsoft Endpoint Manager](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/microsoft/), [Tanium](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/taniums2s/), [Carbon Black](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/carbon-black/), [CrowdStrike](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/crowdstrike/), [SentinelOne](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/sentinelone/), and more. Any data from those products can be passed to Cloudflare for use in access decisions.

All of the above device information, combined with data on the user identity and also the network the device is on, is available in Cloudflare to be used as part of the company policy. For example, organizations could choose to only allow administrators to SSH into servers when all of the following conditions are met: their device is free from threats, running the latest operating system, and joined to the company domain.

Because this information is available for every network request, any time a device posture changes, its ability to connect to an organization's resources is immediately impacted.

#### Integrating email services

Email — the #1 communication tool for many organizations and the most common channel by which phishing attacks occur — is another important corporate resource that should be secured via a SASE architecture. Phishing is the root cause of upwards of 90% of breaches that lead to financial loss and brand damage.

Cloudflare's email security service scans for signs of malicious content or attachments before they can reach the inbox, and also proactively scans the Internet for attacker infrastructure and attack delivery mechanisms, looking for programmatically-created domains that are used to host content as part of a planned attack. Our service uses all this data to also protect against business and vendor email compromises ([BEC ↗](https://www.cloudflare.com/learning/email-security/business-email-compromise-bec/) / [VEC ↗](https://www.cloudflare.com/learning/email-security/what-is-vendor-email-compromise/)), which are notoriously hard to detect due to their lack of payloads and ability to look like legitimate email traffic.

Instead of deploying tunnels to manage and control traffic to email servers, Cloudflare provides two methods of email security [setup](https://developers.cloudflare.com/email-security/deployment/):

* [Inline](https://developers.cloudflare.com/email-security/deployment/inline/): Redirect all inbound email traffic through Cloudflare before it reaches a user's inbox by modifying MX records
* [API](https://developers.cloudflare.com/email-security/deployment/api/): Integrate Cloudflare directly with an email provider such as Microsoft 365 or Gmail

Modifying MX records (inline deployment) forces all inbound email traffic through our cloud email security service where it is scanned, and — if found to be malicious — blocked from reaching a user's inbox. Because the service works at the MX record level, it is possible to use the email security service with any [SMTP-compliant ↗](https://www.cloudflare.com/learning/email-security/what-is-smtp/) email service.

![Protecting email with Cloudflare using MX records ensures all emails are scanned and categorized.](https://developers.cloudflare.com/_astro/cf1-ref-arch-19.B4iJKLu2_IWNy0.svg) 

Organizations can also opt to integrate email security directly with their email service via APIs. Note that this approach has two drawbacks: Cloudflare supports fewer integrations this way, and there is always a small delay between an email arriving at the provider and Cloudflare detecting it via the API.

![Protecting email with Cloudflare using APIs avoids the need to change DNS policy, but introduces delays into email detection and limits the types of email services that can be protected.](https://developers.cloudflare.com/_astro/cf1-ref-arch-20.CpqyyvgC_w1wri.svg) 

#### Checkpoint: A complete SASE architecture with Cloudflare

The steps above provide a complete view of evolving to SASE architecture using Cloudflare One. As the diagram below shows, secure access to all private applications, services, and networks — as well as ensuring the security of users' general Internet access — is now applied to all users in the organization, internal or external.

![A fully deployed SASE solution with Cloudflare protects every aspect of your business. Ensuring all access to applications is secured and all threats from the Internet mitigated.](https://developers.cloudflare.com/_astro/cf1-ref-arch-21.B4dzMu9Q_Z2pc5vA.svg) 

_Note: Labels in this image may reflect a previous product name._

For ease of use, the entire Cloudflare One platform can be configured via [API](https://developers.cloudflare.com/api/); and with Cloudflare's [Terraform provider ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs), organizations can manage the Cloudflare global network using the same tools they use to automate the rest of their infrastructure. This allows IT teams to fully manage their Cloudflare One infrastructure, including all the policies detailed in the next section, using code. There are also (as of Oct 2023) more than 500 [GitHub ↗](https://github.com/cloudflare) repositories, many of which allow IT teams to use and build tools to manage their Cloudflare deployment.
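As a minimal sketch of driving Cloudflare from code, the Python below builds an authenticated request to the v4 token-verification endpoint, a useful first step before automating any configuration. The request is constructed, not sent, and the token value is a placeholder.

```python
import urllib.request

API_TOKEN = "your-api-token"  # placeholder; create one under My Profile > API Tokens

# Build (but do not send) an authenticated request that verifies an API
# token against the Cloudflare v4 API.
req = urllib.request.Request(
    "https://api.cloudflare.com/client/v4/user/tokens/verify",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)

print(req.full_url)
```

A valid token would return a JSON body with `"success": true` when the request is actually sent.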

## Unified management

Now that all users, devices, applications, networks, and other components are seamlessly integrated within a SASE architecture, Cloudflare One provides a centralized platform for comprehensive management. Because of the visibility Cloudflare has across the entire IT infrastructure, Cloudflare can aggregate signals from various sources, including devices, users, and networks. These signals can inform the creation of policies that govern access to organization resources.

Before we go into the details of how policies can be written to manage access to applications, services, and networks connected to Cloudflare, it's worth taking a look at the two main enforcement points in Cloudflare's SASE platform that control access: SWG and the ZTNA services. These services are configured through a single administrative dashboard, simplifying policy management across the entire SASE deployment.

The following diagram illustrates the flow of a request through these services, including the application of policies and the source of data for these policies. In the diagram below, the user request can either enter through the SWG or ZTNA depending on the type of service requested. It's also possible to combine both services, such as implementing a SWG HTTP policy that uses DLP service to inspect traffic related to a privately hosted application behind a ZTNA Cloudflare Tunnel. This configuration enables organizations to block downloads of sensitive data from internal applications that organizations have authorized for external access.

![User requests to the Internet or self hosted applications go through our SWG and/or ZTNA service. Administrators have a single dashboard to manage policies across both.](https://developers.cloudflare.com/_astro/cf1-ref-arch-23.By2O_HTZ_Z24JfLW.svg) 

In the following sections, we introduce examples of how different policies can be configured to satisfy specific use cases. While these examples are not exhaustive, the goal is to demonstrate common ways Cloudflare One can be configured to address the challenges organizations encounter in their transition to a SASE architecture.

Connecting an IdP to Cloudflare provides the ability to make access decisions based on factors such as group membership, authentication method, or specific user attributes. Cloudflare's device agent also supplies additional signals for policy considerations, such as assessing the operating system or verifying the device's serial number against company-managed devices. Beyond these built-in signals, several features let organizations incorporate their own data into policies.

### Lists

Cloudflare's vast intelligent network continually monitors billions of web assets and [categorizes them](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/) based on our threat intelligence and general knowledge of Internet content. You can use our free [Cloudflare Radar ↗](https://radar.cloudflare.com/) service to examine what categories might be applied to any specific domain. Policies can then include these categories to block known and potential security risks on the public Internet, as well as specific categories of content.

Additionally, Cloudflare's SWG offers the flexibility to create and maintain customized [lists of data](https://developers.cloudflare.com/cloudflare-one/reusable-components/lists/). These lists can be uploaded via CSV files, manually maintained, or integrated with other processes and applications using the Cloudflare API. A list can contain the following data:

* URLs
* Hostnames
* Serial numbers (macOS, Windows, Linux)
* Emails
* IP addresses
* Device IDs (iOS, Android)

For example, organizations can maintain lists of the IP addresses of all remote office locations, of short-term contractors' email addresses, or of trusted company domains. These lists can be used in a policy to allow contractors access to a specific application if their traffic is coming from a known office IP address.
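A sketch of what creating such a list might look like via the API: the payload below reflects our understanding of the Gateway lists endpoint (`POST /accounts/{account_id}/gateway/lists`), and the IPs are documentation-range placeholders.

```python
import json

# Documentation-range placeholders; replace with your offices' egress IPs.
office_ips = ["198.51.100.10", "198.51.100.11"]

# Body for creating a Zero Trust list, per our understanding of the Gateway
# lists API (POST /accounts/{account_id}/gateway/lists).
payload = {
    "name": "Remote Office Egress IPs",
    "type": "IP",
    "items": [{"value": ip} for ip in office_ips],
}

print(json.dumps(payload, indent=2))
```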

### DLP profiles and datasets

Cloudflare looks at various aspects of a request, including the source IP, the requested domain, and the identity of the authenticated user initiating the request. Cloudflare also offers a DLP service which has the ability to detect and block requests based on the presence of sensitive content. The service has built-in DLP profiles for common data types such as financial information, personally identifiable information (PII), and API keys.

There is even a profile for source code, so users can detect and block the transfer of C++ or Python files. Organizations can create customized DLP profiles and use regular expressions to define the patterns of data they are looking for. For data that is hard to define a pattern for, datasets can be used which match exact data values. These datasets allow for the bulk upload of any data to be matched, such as lists of customer account IDs or sensitive project names. These profiles and datasets can be incorporated into policies to prevent users from downloading large files containing confidential customer data.

To reduce the risk of false positives, administrators have the option to establish a match count on the profile. This means that a specific number of matches within the data are required before the profile triggers. This approach prevents scenarios where a random string resembling PII or a credit card number would trigger the profile unnecessarily. By requiring multiple data elements to align with the profile, a match count significantly increases its accuracy.

Organizations can further increase the accuracy of the DLP profile by enabling context analysis. This feature requires certain proximity keywords to exist within approximately 1000 characters of a match. For example, the string "123-45-6789" will only count as a detection if it is in proximity to keywords such as "ssn". This contextual requirement bolsters the accuracy of the detection process.
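To make the match-count and proximity ideas concrete, here is a conceptual Python illustration (not Cloudflare's implementation): a profile only triggers when at least two SSN-like strings appear, each within roughly 1000 characters of a keyword such as "ssn".

```python
import re

# Conceptual illustration only -- not Cloudflare's implementation. A match
# counts only if a proximity keyword appears within the surrounding window,
# and the profile triggers only once MATCH_COUNT such matches are found.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PROXIMITY_KEYWORDS = ("ssn", "social security")
PROXIMITY_WINDOW = 1000  # characters, as described above
MATCH_COUNT = 2          # require at least two detections

def dlp_triggers(text: str) -> bool:
    lowered = text.lower()
    hits = 0
    for m in SSN_PATTERN.finditer(text):
        window = lowered[max(0, m.start() - PROXIMITY_WINDOW):m.end() + PROXIMITY_WINDOW]
        if any(keyword in window for keyword in PROXIMITY_KEYWORDS):
            hits += 1
    return hits >= MATCH_COUNT
```

With this model, a lone number with no nearby keyword (an order reference, say) does not trigger the profile, while a form listing two numbers next to the label "SSN" does.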

The DLP service seamlessly integrates with both Cloudflare's SWG and API-driven CASB services. In the case of the API CASB, DLP profiles are selected for scanning each integration with each SaaS application. This customization allows tailored detection criteria based on the type of data you wish to secure within each application.

For the SWG service, DLP profiles can be included into any policy to detect the existence of sensitive data in any request passing through the gateway. The most common action associated with this detection is to block the request, providing a robust layer of security.

### Access Groups

Access Groups are a powerful tool in the ZTNA service for aggregating users or devices into a unified entity that can be referenced within a policy. Within Cloudflare, multiple pieces of information can be combined into a single Access Group, efficiently reusing data across multiple policies while maintaining it in one centralized location.

Consider an Access Group designed to manage access to critical server infrastructure. The same Access Group can be used in a device agent policy that prevents administrators from disabling their connection to Cloudflare. This approach streamlines policy management and ensures consistency across various policy implementations.

Below is a diagram featuring an Access Group named "Secure Administrators," which uses a range of attributes to define the characteristics of secure administrators. The diagram shows the addition of two other Access Groups within "Secure Administrators". The groups include devices running on either the latest Windows or macOS, along with the requirement that the device must have either FileVault or BitLocker enabled.

![Access Groups can group many device, network, or user attributes into a single entity that can be reused across application policies.](https://developers.cloudflare.com/_astro/cf1-ref-arch-24.aWooHqll_22Jt0n.svg) 

Consistent with Cloudflare's overarching flexibility, Access Groups can be created, updated, and applied to policies through Cloudflare API or using Terraform. This allows a seamless integration with existing IT systems and processes, ensuring a cohesive approach to access management.
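As a sketch, an Access Group like "Secure Administrators" might be expressed through the Access API roughly as below. The rule shapes follow our reading of the Access groups endpoint (`POST /accounts/{account_id}/access/groups`), and all IDs and UIDs are placeholders, not a verbatim API reference.

```python
import json

# Hypothetical sketch of an Access Group body; rule shapes are our
# assumption of the current Access API and all IDs/UIDs are placeholders.
group = {
    "name": "Secure Administrators",
    # "include" rules are ORed: matching any one is enough...
    "include": [
        {"ip": {"ip": "203.0.113.0/24"}},              # office egress range
        {"group": {"id": "okta-it-admins-group-id"}},  # imported IdP group
    ],
    # ..."require" rules are ANDed: every one must also pass.
    "require": [
        {"device_posture": {"integration_uid": "disk-encryption-check-uid"}},
        {"device_posture": {"integration_uid": "latest-os-check-uid"}},
    ],
}

print(json.dumps(group, indent=2))
```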

Now that we have a solid understanding of all the components available, let's zoom in and take a look at some common use cases and how they are configured. Keep in mind that Cloudflare's policy engines are incredibly powerful and flexible, so these examples are just a glimpse into the capabilities of Cloudflare's SASE platform.

### Example use cases

#### Secure access to self-hosted apps and services

One common driver for moving to a SASE architecture is replacing existing VPN connectivity with a more flexible and secure solution. The Cloudflare One SASE architecture enables high-performance, secure access to self-hosted applications from anywhere in the world. The next step is to define the policies that control access to those resources.

In this example, consider two services: a database administration application ([pgadmin ↗](https://www.pgadmin.org/) for example) and an SSH daemon running on the database server. The diagram below illustrates the flow of traffic and highlights the ZTNA service. It's important to note that all other services still retain the ability to inspect the request. For instance, the contractor using their personal cell phone in Germany should only have access to the db admin tool, while the employee on a managed device can access both the db admin tool and SSH into the database server.

![An employee working on a managed device at home can access both the db admin tool as well as the SSH service. However a contractor in Germany only has access to the db admin tool.](https://developers.cloudflare.com/_astro/cf1-ref-arch-25.DbM82XF7_NBUE1.svg) 

The policies that enable access rely on two Access Groups.

* Contractors  
   * Users who authenticate through Okta and are part of the Okta group labeled "Contractors"  
   * Authentication requires the use of a hardware token
* Database and IT administrators  
   * Users who authenticate through Okta and are in the Okta groups "IT administrators" or "Database administrators"  
   * Authentication requires the use of a hardware token  
   * Users should be on a device with a serial number in the "Managed Devices" list

Both of these groups are then used in two different access policies.

* Database administration tool access  
   * Database and IT admins are allowed access  
   * Members of the "Contractor" access group are allowed access, but each authenticated session requires the user to complete a justification request  
   * The admin tool is rendered in an isolated browser on Cloudflare's Edge network and file downloads are disabled
* Database server SSH access  
   * "Database and IT administrators" group is allowed access  
   * Their device must pass a CrowdStrike risk score of at least 80  
   * Access must come from a device that is running our device agent and is connected to Cloudflare

These policies show that contractors are only allowed access to the database administration tool and do not have SSH access to the server. IT and database administrators can access the SSH service only when their devices are securely connected to Cloudflare via the device agent. Every element of the access groups and policies is evaluated for every login, so an IT administrator using a compromised laptop or a contractor unable to authenticate with a hardware token will be denied access.
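The logic of these policies can be modeled in a few lines of code. The sketch below is a conceptual illustration of how identity, group, and device signals combine on every login, not Cloudflare's actual policy engine; all names, groups, and values are hypothetical.

```python
# Conceptual model of the SSH access policy -- an illustration only, not
# Cloudflare's policy engine. All names and values below are hypothetical.
MANAGED_DEVICES = {"C02ABC123", "C02DEF456"}  # serial-number list

def can_access_ssh(user: dict, device: dict) -> bool:
    return (
        user["idp"] == "okta"
        and user["group"] in ("IT administrators", "Database administrators")
        and user["hardware_token"]
        and device["serial"] in MANAGED_DEVICES
        and device["crowdstrike_score"] >= 80
        and device["agent_connected"]
    )

admin = {"idp": "okta", "group": "IT administrators", "hardware_token": True}
laptop = {"serial": "C02ABC123", "crowdstrike_score": 91, "agent_connected": True}
contractor = {"idp": "okta", "group": "Contractors", "hardware_token": True}
```

Because every field is re-evaluated on each login, a dropped agent connection or a lowered risk score immediately revokes access.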

Both user groups will connect to Cloudflare through the closest and fastest access point of Cloudflare's globally distributed network, resulting in a high quality experience for all users no matter where they are.

#### Threat defense for distributed offices and remote workers

Another reason for using a SASE solution is to apply company security policies consistently across all users (whether they are employees or contractors) in the organization, regardless of where they work. The Cloudflare One SASE architecture shows that all user traffic, whether routed directly on the device or through the connected network, will go through Cloudflare. Cloudflare's SWG then handles inspection of this traffic. Depending on the connection method, policies can be applied either to the HTTP or DNS request. For example:

![Blocking high risk websites can be done by selecting a few options in the SWG policy](https://developers.cloudflare.com/_astro/cf1-ref-arch-26.CctZYYxb_Zudxsc.svg) 

A single policy like this secures and protects all users. Administrators can then write another policy that allows access to social media websites while isolating every session in a remote browser hosted on Cloudflare's network.

![Isolating all social media websites can be done by identifying the application or website name and selecting what actions the user can take, such as stopping them from copy and pasting or printing.](https://developers.cloudflare.com/_astro/cf1-ref-arch-27.BlDxrRwj_2nRDyn.svg) 
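A rule like the one shown can also be created programmatically. The sketch below illustrates the general shape of an HTTP Gateway rule payload; the field names loosely follow Cloudflare's Gateway rules API, but treat the filter expression and the admin-control keys as illustrative assumptions rather than a definitive reference:

```python
# Hedged sketch: an HTTP Gateway rule that isolates social media traffic.
# Field names mirror the shape of Cloudflare's Gateway rules API, but the
# exact filter expression and settings here are illustrative assumptions.
isolate_social_media = {
    "name": "Isolate social media",
    "enabled": True,
    "action": "isolate",          # serve the session via remote browser isolation
    "filters": ["http"],          # evaluate against HTTP traffic
    # Match requests whose destination is categorized as social networking
    # (hypothetical expression for illustration).
    "traffic": 'any(http.request.uri.content_category[*] in {"Social Networking"})',
    "rule_settings": {
        "biso_admin_controls": {  # restrict what users can do in the isolated session
            "dcp": True,          # disable copy/paste
            "dp": True,           # disable printing
            "dd": True,           # disable downloads
        }
    },
}

print(isolate_social_media["action"])
```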

With this setup, every request to a social media website is subject to the following security measures:

* Any content on the social media website that contains harmful code is prevented from executing on the local device
* Users are restricted from downloading content from the site that could be infected with malware or spyware

#### Data protection for regulatory compliance

Because Cloudflare One has visibility over every network request, Cloudflare can create policies that apply to the data in the request. This means that the DLP services can be used to detect the download of content from an application and block it for specific groups of users. Let's look at the following policy.

![Our DLP policies allow for the inspection of content in a request and blocking it.](https://developers.cloudflare.com/_astro/cf1-ref-arch-28.DKy2S5nx_2nRDyn.svg) 

This policy would prevent contractors from downloading a file containing customer accounts information. Furthermore, Cloudflare can configure an additional policy to block the same download if the user's device does not meet specific security posture requirements. This ensures the consistent enforcement of a common rule: no sensitive customer data can be downloaded onto a device that does not meet the required security standards.

DLP policies can also be applied in the other direction, ensuring that sensitive company documents are not uploaded to non-approved cloud storage or social media.

![A DLP policy can also examine if a HTTP PUT, i.e. a file upload, is taking place to a non approved application where the request contains sensitive data.](https://developers.cloudflare.com/_astro/cf1-ref-arch-29.BGL4hCeF_2nRDyn.svg) 
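Conceptually, a DLP detection scans the request or response body for sensitive patterns and combines the match with direction and user context. A minimal toy illustration (a simple regex detection, not Cloudflare's DLP engine; the account-number format is hypothetical):

```python
import re

# Toy illustration of DLP-style content matching: scan an HTTP body for a
# sensitive pattern and decide whether to block. Not Cloudflare's DLP engine;
# the "ACCT-########" account-number format is a hypothetical example.
ACCOUNT_NUMBER = re.compile(r"\bACCT-\d{8}\b")

def dlp_verdict(direction, body, user_is_contractor):
    """Return 'block' when sensitive data moves where policy forbids it."""
    contains_sensitive = bool(ACCOUNT_NUMBER.search(body))
    if not contains_sensitive:
        return "allow"
    # Block contractors downloading customer account data, and block any
    # upload of sensitive data to a non-approved destination.
    if direction == "download" and user_is_contractor:
        return "block"
    if direction == "upload":
        return "block"
    return "allow"

print(dlp_verdict("download", "customer ACCT-12345678 balance", True))
print(dlp_verdict("download", "quarterly roadmap", True))
```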

### Visibility across the deployment

At this point in the SASE journey, you have re-architected your IT network and security infrastructure to fully leverage the capabilities of the Cloudflare One SASE platform. A critical element of a long-term deployment is establishing complete visibility into the organization and the ability to diagnose and quickly resolve issues.

For quick analysis, Cloudflare provides built-in dashboards and analytics that offer a daily overview of the deployment's operational status. As traffic flows through Cloudflare, the dashboard surfaces the most frequently used SaaS applications, enabling quick action if any unauthorized applications are being accessed. Moreover, logging information from all Cloudflare One services is accessible and searchable from the administrator's dashboard. This makes it efficient to filter for specific blocked requests, with each log containing useful information such as the user's identity, device information, and the specific rule that triggered the block. This is very handy in the early stages of deployment, when rules often need tweaking.

However, many organizations rely on existing dedicated tools to manage long-term visibility over the performance of their infrastructure. To support this, Cloudflare allows the export of all logging information into such tools. Every aspect of Cloudflare One is logged and can be exported. Cloudflare offers built-in integrations for continuous transmission of small data batches to a variety of platforms, including AWS, Google Cloud Storage, Sumo Logic, Azure, Splunk, Datadog, and any S3-compatible service. This flexibility allows organizations to selectively choose which fields to export, controlling the type and volume of data incorporated into existing tools.
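Log export is configured as Logpush jobs. The sketch below shows the general shape of a job definition that sends a subset of log fields to an S3-compatible bucket; the structure follows Cloudflare's Logpush API, but the dataset name, bucket, and selected fields are illustrative assumptions:

```python
# Hedged sketch of a Logpush job definition. The structure follows the shape
# of Cloudflare's Logpush API, but the dataset name, destination bucket, and
# field list here are illustrative assumptions, not a definitive reference.
logpush_job = {
    "name": "gateway-http-to-s3",
    "dataset": "gateway_http",  # which log stream to export (assumed name)
    "destination_conf": "s3://example-logs/cloudflare?region=us-east-1",
    "output_options": {
        # Export only the fields you need, controlling data type and volume.
        "field_names": ["Datetime", "Email", "URL", "Action", "PolicyID"],
        "output_type": "ndjson",
    },
    "enabled": True,
}

print(logpush_job["dataset"])
```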

In addition to logs related to traffic and policies, Cloudflare also audits management activity. All administrative actions and changes to Cloudflare Tunnels are logged. This allows for change management auditing and, like all other logs, can be exported into other tools as part of a wider change management monitoring solution.

#### Digital Experience Monitoring

Cloudflare has [deep insight ↗](https://radar.cloudflare.com/) into the performance of the Internet and connected networks and devices. This knowledge empowers IT administrators with visibility into minute-by-minute experiences of their end-users, enabling swift resolution of issues that impact productivity.

The Digital Experience Monitoring (DEM) service enables IT to run constant tests against user devices to determine the quality of the connection to company resources. The results of these tests are available on the Cloudflare One dashboard, enabling IT administrators to review and identify root causes when a specific user encounters difficulties accessing an application. These issues could stem from the user's local ISP or a specific underperforming SaaS provider. This data is invaluable in helping administrators diagnose and address poor user experiences, leading to faster issue resolution.

The dashboard shows a comprehensive summary of the entire device fleet, displaying real-time and historical connectivity metrics for all organization devices. IT admins can then drill down into specific devices for further analysis.

## Summary

Having acquired a comprehensive understanding of Cloudflare's SASE platform, you are now well-equipped to integrate it with existing infrastructure. This system efficiently secures access to applications for both employees and external users, starting from the initial request on the device and extending across every network to the application, regardless of its location. This powerful new model for securing networks, applications, devices, and users is built on the massive Cloudflare network and managed through an intuitive management interface.

It's worth noting that many of the capabilities described in this document can be used for free, without any time constraints, for up to 50 users. [Sign up ↗](https://dash.cloudflare.com/sign-up) for an account and head to the [Cloudflare One ↗](https://one.dash.cloudflare.com/) section. While this document has provided an overview of the platform as a whole, for those interested in delving deeper into specific areas, we recommend exploring the following resources.

| Topic                     | Content                                                                                                                                                                                                             |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Cloudflare Tunnels        | [Understanding Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) \- [Open source repository for cloudflared ↗](https://github.com/cloudflare/cloudflared) |
| WAN as a Service          | [Cloudflare WAN documentation](https://developers.cloudflare.com/cloudflare-wan/) \- [WAN transformation](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-wan/wan-transformation/)  |
| Secure Web Gateway        | [How to build Gateway policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/)                                                                                                                 |
| Zero Trust Network Access | [How to build Access policies](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/)                                                                                                          |
| Remote Browser Isolation  | [Understanding browser isolation](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/)                                                                                                       |
| API-Driven CASB           | [Scanning SaaS applications](https://developers.cloudflare.com/cloudflare-one/integrations/cloud-and-saas/)                                                                                                         |
| Email security            | [Understanding Cloudflare Email security](https://developers.cloudflare.com/email-security/)                                                                                                                        |
| Replacing your VPN        | [Using Cloudflare to replace your VPN](https://developers.cloudflare.com/learning-paths/replace-vpn/concepts/)                                                                                                      |

If you would like to discuss your SASE requirements in greater detail and connect with one of our architects, please visit [https://www.cloudflare.com/cloudflare-one/ ↗](https://www.cloudflare.com/cloudflare-one/) and request a consultation.


---

---
title: Cloudflare Security Architecture
description: This document provides insight into how this network and platform are architected from a security perspective, how they are operated, and what services are available for businesses to address their own security challenges.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Cloudflare Security Architecture

**Last reviewed:** almost 2 years ago

## Introduction

Today, everything and everyone needs to be connected to everything everywhere, all the time, and everything must be secure. However, many businesses are not built on infrastructure that supports this reality. Historically, employees worked in an office where most business systems (file servers, printers, applications) were located on and accessible only from the private office network. A security perimeter was created around the network to protect against outsider threats, most of which came from the public Internet.

However, as Internet bandwidth increased and more people needed to do work outside of the office, VPNs allowed employees access to internal systems from anywhere they could get an Internet connection. Applications then started to move beyond the office network, living in the cloud either as SaaS applications or hosted in IaaS platforms. Companies rushed to expand access to their networks and invest in new, dynamic methods to detect, protect, and manage the constantly evolving security landscape. But this has left many businesses with complex policies and fragile networks with many point solutions trying to protect different points of access.

Since 2010, Cloudflare has been building a unique, large-scale network on which we run a set of security services that allow organizations to build improved connectivity and better protect their public and private networks, applications, users, and data. This document provides insight into how this network and platform are architected from a security perspective, how they are operated, and what services are available for businesses to address their own security challenges. The document comprises two main sections:

* How Cloudflare builds and operates its secure global network.
* How to protect your business infrastructure and assets using Cloudflare services built on the network.

### Who is this document for and what will you learn?

This document is designed for IT and security professionals who are looking to use Cloudflare to secure aspects of their businesses. It is aimed primarily at Chief Information Security Officers (CSOs/CISOs) and their direct teams who are responsible for the overall security program at their organizations. Because the document covers the security of the entire Cloudflare platform, it does not go into deep detail about any particular service. Instead, please visit our [Architecture Center ↗](https://www.cloudflare.com/architecture/) to find specific information for a service or product.

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://youtu.be/XHvmX3FhTwU?feature=shared) (2 minutes)
* [How Cloudflare strengthens security everywhere you do business ↗](https://cf-assets.www.cloudflare.com/slt3lc6tev37/is7XGR7xZ8CqW0l9EyHZR/1b4311823f602f72036385a66fb96e8c/Everywhere%5FSecurity-Cloudflare-strengthens-security-everywhere-you%5Fdo-business.pdf) (10 minutes)

## Secure global network

Any cloud security solution needs to be fast and always available. Our network protects over 20% of Internet web properties, operates in over 330 cities, and is 50 ms away from 95% of the Internet-connected population. Each server in each data center runs every service, so that traffic is inspected in one pass and acted upon close to the end user. These servers are connected together by over 13,000 network peers with over 405 Tbps network capacity. Cloudflare’s network is also connected to [every Internet exchange ↗](https://bgp.he.net/report/exchanges#%5Fparticipants) (more than Microsoft, AWS, and Google) to ensure that we are able to peer traffic from any part of the Internet.

With millions of customers using Cloudflare, the network serves over [57 million HTTP requests ↗](https://radar.cloudflare.com/traffic) per second on average, with more than 77 million HTTP requests per second at peak. As we analyze all this traffic, we detect and block an average of [209 billion cyber threats each day ↗](https://radar.cloudflare.com/security-and-attacks). This network runs at this massive scale to ensure that customers using our security products experience low latency, access to high bandwidth, and a level of reliability that ensures the ongoing security of their business. (Note metrics are correct as of June 2024.)

### Architecture

#### Network

The Cloudflare network is not like a traditional enterprise network. It has been designed from the ground up using a service isolation, least privilege, and zero trust architecture. Public-facing edge servers, and the data centers they reside in, can be seen as islands in a vast lake of connectivity — where nothing trusts anything without strong credentials and tight access policies.

![The Cloudflare network has data centers in over 320 major cities.](https://developers.cloudflare.com/_astro/security-ref-arch-1.WLeUmjWV_lr8J1.svg) 

A unique aspect of the network's security architecture is how we use anycast networking. In every data center we broadcast the entire Cloudflare network range (IPv6 and IPv4) for both UDP and TCP. [Border Gateway Protocol ↗](https://www.cloudflare.com/learning/security/glossary/what-is-bgp/) (BGP) ensures routers all around the Internet provide the shortest possible path for any user to the nearest Cloudflare server where traffic is inspected. From a security perspective, this is very important. During distributed denial-of-service (DDoS) attacks to customers behind our network, a combination of high bandwidth capacity and distribution of requests across thousands of local servers helps ensure our network stays performant and available, even during some of the largest attacks in [Internet history ↗](https://blog.cloudflare.com/cloudflare-mitigates-record-breaking-71-million-request-per-second-ddos-attack).

Server updates, such as access policies, rate limiting, and firewall rules, are performed by our [Quicksilver service ↗](https://blog.cloudflare.com/introducing-quicksilver-configuration-distribution-at-internet-scale). Customer changes are reflected across the entire network in seconds, allowing customers to respond to changing business requirements and ensuring policies are quickly implemented globally.

Every level of the network conforms to strict hardened security controls. Processes running on the edge are designed with a need-to-know basis and run with least privilege. We have our own key management system to ensure keys are secured at rest and in transit and that the right access to keys is given at the right time. To ensure tight control over and detailed visibility of changes to the network, all infrastructure is managed via code ([IaC ↗](https://en.wikipedia.org/wiki/Infrastructure%5Fas%5Fcode)).

#### Servers

Cloudflare designs and owns all the servers in our network. There are two main types.

* **Private core servers**: The control plane where all customer configuration, logging, and other data lives.
* **Public edge servers**: Where Internet and privately tunneled traffic terminates to the Cloudflare network, to be inspected and then routed to its destination.

Server hardware is designed by Cloudflare and built by industry-respected manufacturers that complete a comprehensive supply chain and security review. Every server runs an identical software stack, allowing for consistent hardware design. The operating system on edge servers is also a single design and built from a highly modified Linux distribution, tailored for the scale and speed of our platform. Cloudflare is a significant contributor to the Linux kernel, and we regularly share information on how we secure our [servers and services ↗](https://blog.cloudflare.com/the-linux-kernel-key-retention-service-and-why-you-should-use-it-in-your-next-application), helping the Linux community and the rest of the Internet benefit from our [engineering ↗](https://blog.cloudflare.com/linux-kernel-hardening).

#### Services

Every server runs all Cloudflare products and services that customers use to secure their networks and applications. Later in this document we provide an overview of these services, but for the moment it's important to provide insight into the development of the software. From the initial design of every product, the engineering team works hand in hand with security, compliance, and risk teams to review all aspects of the service. These teams can be viewed as part of the engineering and product teams, not an external group. They are essential to the development of everything we do at Cloudflare and we have some of the most respected professionals in the industry. Code is reviewed by security teams at every stage of development, and we implement many automated systems to analyze software looking for vulnerabilities. Threat modeling and penetration testing frameworks such as [OWASP ↗](https://owasp.org/www-project-web-security-testing-guide/latest/3-The%5FOWASP%5FTesting%5FFramework/), [STRIDE ↗](https://en.wikipedia.org/wiki/STRIDE%5F%28security%29), and [DREAD ↗](https://en.wikipedia.org/wiki/DREAD%5F%28risk%5Fassessment%5Fmodel%29) are used during design, development, and the release process.

Many of our products run on our [serverless runtime](https://developers.cloudflare.com/workers/) environment, which leverages the very latest techniques in service isolation. We anticipated this secure runtime environment could be very valuable to our customers, so we productized it, allowing them to [build](https://developers.cloudflare.com/workers/reference/how-workers-works/) and [run ↗](https://blog.cloudflare.com/cloud-computing-without-containers) their own applications on our network. More about that at the very end of this document.

#### Innovation

To ensure we are delivering the most secure network and platform possible, we are always innovating. New technologies need to be created to solve the ever-increasing range of security threats and challenges. Cloudflare leads many initiatives, such as further securing BGP using [RPKI ↗](https://isbgpsafeyet.com/), and we regularly contribute to working IETF groups on many common Internet security protocols. We strive to help increase and monitor [IPv6 adoption ↗](https://radar.cloudflare.com/adoption-and-usage), which inherently creates a more secure Internet, and we stay ahead of future challenges by deploying technologies such as [post-quantum cryptography ↗](https://blog.cloudflare.com/post-quantum-for-all) before any increase in computing power from quantum computers threatens existing cryptographic techniques.

### Operational security

Not only must the design of the network be secure, but so must the way we run and maintain it. We operate at a massive scale, and the common design of our servers helps optimize software deployments and monitoring. Defining who has access to maintain the network is fully automated, following infrastructure-as-code practices, with role-based access control (RBAC) and least-privilege controls used everywhere.

Customers send sensitive information to our products and services. The mission for the Cloudflare compliance team is to ensure the underlying infrastructure that supports these services meets [industry compliance standards ↗](https://www.cloudflare.com/trust-hub/compliance-resources/) such as FedRAMP, SOC 2, ISO, PCI certifications, C5, privacy, and regulatory frameworks. The compliance team works with all engineering organizations to help integrate these requirements as part of the way we work. From a compliance perspective, our areas of focus include:

* Privacy and security of customer data
* Maintaining compliance validations
* Helping customers with their own compliance
* Monitoring the changes to the regulatory landscape
* Providing feedback to regulatory bodies on upcoming changes

We also run a [bug bounty program ↗](https://hackerone.com/cloudflare), giving incentives for the community to find and report vulnerabilities to us for financial reward.

In summary, Cloudflare not only has built the right technology to secure our network, but also has well-staffed and mature teams ensuring that the right processes are created, followed, and monitored. As Cloudflare has grown over the past decade, we've accrued some of the best security knowledge in the industry, which in turn has attracted top talent to come work with us. This effect compounds each year, bringing our security skills and knowledge to greater heights. We are also very transparent about how Cloudflare runs and secures its network, and we [often blog ↗](https://blog.cloudflare.com/secure-by-design-principles) about our processes and evolving approach to security.

## Using Cloudflare to protect your business

The reason the Cloudflare network exists is to provide services to customers to protect their own assets, such as users, applications, and data. The following section details what these services are, their basic architecture, and how they are used by customers. Note that this section does not go into extensive detail on each service. Instead, please refer to our [Architecture Center ↗](https://cloudflare.com/architecture) or [product documentation](https://developers.cloudflare.com/directory/) to understand more about a specific product, service, or solution. The goal in this document is to provide information about the overall set of security services available and the general use cases they are designed for. As such, we provide a table of contents so you can jump to a section of interest.

1. [Securing public and private resources](#securing-public-and-private-resources)
2. [Protecting public resources](#protecting-public-resources)  
   1. [Common attacks and protection](#common-attacks-and-protection)  
         1. [DDoS attacks](#ddos-attacks)  
         2. [Zero-day attacks](#zero-day-attacks)  
         3. [Unauthorized access](#unauthorized-access)  
         4. [Client-side attacks](#client-side-attacks)  
         5. [Data exfiltration](#data-exfiltration)  
         6. [Credential stuffing](#credential-stuffing)  
         7. [Brute force attacks](#brute-force-attacks)  
         8. [Credit card skimming](#credit-card-skimming)  
         9. [Inventory hoarding](#inventory-hoarding)  
         10. [Fuzzing (vulnerability scanning)](#fuzzing-vulnerability-scanning)  
         11. [Cross-Site Scripting (XSS) attacks](#cross-site-scripting-xss-attacks)  
         12. [Remote Code Execution (RCE) attacks](#remote-code-execution-rce-attacks)  
         13. [SQL injection (SQLi) attacks](#sql-injection-sqli-attacks)  
         14. [Malware](#malware)  
   2. [Cloudflare application security products](#cloudflare-application-security-products)  
         1. [Security Analytics](#security-analytics)  
         2. [Web Application Firewall (WAF)](#web-application-firewall-waf)  
         3. [Rate limiting](#rate-limiting)  
         4. [L7 DDoS](#l7-ddos)  
         5. [API Shield](#api-shield)  
         6. [Bot Management](#bot-management)  
         7. [Client-side security](#client-side-security)  
         8. [SSL/TLS](#ssltls)  
         9. [Security Center](#security-center)  
         10. [Cloudflare for SaaS](#cloudflare-for-saas)  
   3. [Cloudflare network security products](#cloudflare-network-security-products)  
         1. [Magic Transit](#magic-transit)  
         2. [Cloudflare WAN](#cloudflare-wan)  
         3. [Cloudflare Network Firewall](#cloudflare-network-firewall)  
         4. [Network Flow](#network-flow)  
         5. [Spectrum](#spectrum)
3. [Protecting private resources](#protecting-private-resources)  
   1. [Securing connectivity to private resources](#securing-connectivity-to-private-resources)  
   2. [User connectivity](#user-connectivity)  
   3. [Integrating identity systems](#integrating-identity-systems)  
   4. [Access control](#access-control)  
   5. [Protecting data](#protecting-data)  
   6. [Securing Internet access](#securing-internet-access)
4. [Observability](#observability)
5. [Developer platform](#developer-platform)

In general, what customers need to effectively combat and protect against the growing breadth and complexity of threats is a unified security solution that provides visibility, analytics, detection, and mitigation in an operationally consistent and efficient manner. Cloudflare addresses these needs in several ways:

* Operational consistency: Cloudflare has a single dashboard/UI for all administrative tasks.
* Operational simplicity: Cloudflare is well-known for minimizing operational complexity with well-designed user interfaces that minimize manual configurations and UI workflows. Additionally, cross-product integrations allow for automating configurations and policies.
* Continuous innovation: Cloudflare continues to innovate across its broad security portfolio with unique differentiating capabilities such as its CAPTCHA replacement product, Turnstile, and the industry-first API Sequence Mitigation capability.
* Workload location agnostic: Cloudflare was built first and foremost around performance and security services. As such, it was built from the ground up to be workload location agnostic with multi-cloud inherently being a top use case. Customers can deploy workloads in multiple clouds and/or on-prem and get the same operational consistency.
* Performance and scale: All Cloudflare services run on every server in every data center on the same global cloud, allowing for maximum performance in terms of global reachability and latency and ability to scale out, leveraging the full capacity of Cloudflare’s global infrastructure.
* API first: Cloudflare is API first. All configurations and capabilities available from the UI/dashboard are also available from the API. Cloudflare can easily be configured with Terraform to support automation for customer workflows/processes.
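As a minimal sketch of the API-first point: every capability is reachable under the v4 REST API base URL with a bearer token. The helper below only constructs an authenticated request (the `/zones` path is a real endpoint; the token is a placeholder):

```python
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def build_api_request(path, token):
    """Construct an authenticated request against the Cloudflare v4 API.

    Dashboard capabilities map onto endpoints under this base URL; bearer-token
    auth is the standard scheme. This sketch only builds the request object
    and does not send it.
    """
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# For example, listing the zones in an account (placeholder token, not sent here):
req = build_api_request("/zones", "YOUR_API_TOKEN")
print(req.full_url)
```

The same API surface is what the Terraform provider drives underneath, which is why dashboard, API, and Terraform configurations stay consistent.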

Cloudflare’s security services that protect networks, applications, devices, users, and data can be grouped into the following categories.

![Cloudflare has a wide range of security services across SASE/SSE, application and network security.](https://developers.cloudflare.com/_astro/security-ref-arch-2.40SWzQcS_ZH96Uh.svg) 

Note this list is focused on security and doesn't include products such as our content delivery network (CDN), load balancing, and domain name services (DNS).

### Securing public and private resources

There are two main types of resources our customers are trying to secure:

* **Public resources** are defined as any content, asset, or infrastructure that has an interface available and accessible to the general Internet, such as brand websites, ecommerce sites, and APIs. They can also be defined by the fact they are accessible by anonymous users or people who register themselves to gain access, such as social media websites, video streaming services, and banking services.
* **Private resources** are defined as content, assets, or infrastructure with the intended set of users constrained to a single company, organization, or set of customers. These services typically require accounts and credentials to gain access. Examples of such resources are the company HR system, source code repositories, and a point of sale (POS) system residing on a retail branch network. These resources are typically accessible only by employees, partners, and other trusted, known identities.

Public and private resources can also include both infrastructure-level components like servers and consumed resources like websites and API endpoints. Communication over networks and the Internet happens in different stages and levels as shown in the Open Systems Interconnection (OSI) model diagram below.

![The network OSI model describes network communication from the physical through to the application layer.](https://developers.cloudflare.com/_astro/security-ref-arch-3.D6GGUlec_Z11MYkq.svg) 

Cloudflare can protect at multiple layers of the OSI model, and in this document we are primarily concerned with protecting resources at layers 3, 4, and 7.

* Layer 3, referred to as the “network layer,” is responsible for facilitating data transfer between two different networks. The network layer breaks up segments from the transport layer into smaller units, called packets, on the sender’s device and reassembles these packets on the receiving device. The network layer is where routing takes place — finding the best physical path for the data to reach its destination.
* Layer 4, referred to as the “transport layer,” is responsible for end-to-end communication between the two devices. This includes taking data from the session layer and breaking it up into chunks called “segments” before sending it to layer 3.

Cloudflare products for L3 and L4 security come from its network services portfolio: [Magic Transit](https://developers.cloudflare.com/magic-transit/), [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/), [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/), [Network Flow](https://developers.cloudflare.com/network-flow/) (formerly Magic Network Monitoring), and [Spectrum](https://developers.cloudflare.com/spectrum/).

* Layer 7, referred to as the “application layer,” is the top layer of the data processing that occurs just below the surface or behind the scenes of the software applications that users interact with. HTTP and API requests/responses are layer 7 events.

Cloudflare has a suite of application security products that includes [Web Application Firewall](https://developers.cloudflare.com/waf/) (WAF), [Rate Limiting](https://developers.cloudflare.com/waf/rate-limiting-rules/), [L7 DDoS](https://developers.cloudflare.com/ddos-protection/managed-rulesets/http/), [API Shield](https://developers.cloudflare.com/api-shield/), [Bot Management](https://developers.cloudflare.com/bots/), and [client-side security](https://developers.cloudflare.com/client-side-security/).
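
The layering described above can be illustrated with a short sketch. This is an illustrative model only, with made-up helper names, not Cloudflare code: application data (L7) is split into transport segments (L4), which are wrapped in routed packets (L3).

```python
def segment(data: bytes, mss: int) -> list[bytes]:
    """Split application-layer data into transport-layer segments (L4)."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

def packetize(segments: list[bytes], src: str, dst: str) -> list[dict]:
    """Wrap each segment in a network-layer packet with routing info (L3)."""
    return [{"src": src, "dst": dst, "payload": s} for s in segments]

# An HTTP request is layer 7 data; example addresses are illustrative.
http_request = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
segments = segment(http_request, mss=16)
packets = packetize(segments, src="198.51.100.7", dst="203.0.113.10")

# The receiver reverses the process: reassemble segments, then rebuild data.
reassembled = b"".join(p["payload"] for p in packets)
assert reassembled == http_request
```

Protections at L3 and L4 act on the packet and segment level, while L7 protections inspect the reassembled application request.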

Note that SaaS applications could be considered both public and private. For example, Salesforce has direct Internet-facing access but contains very private information and is usually only accessible by employee accounts that are provisioned by IT. For the purpose of this document, we will consider SaaS applications as private resources.

These are general guidelines; with Cloudflare, it is possible for very sensitive internal applications to be protected by publicly accessible remote access services. We will explain more as we continue through this document.

### Protecting public resources

Businesses rely on public websites and API endpoints for daily ecommerce transactions and brand awareness, and often the entire business is an online service. High availability, performance, and security are top concerns, and customers use Cloudflare to ensure their businesses stay up and running. Cloudflare security services help prevent fraud, data exfiltration, and attacks that can create liability, cause losses and brand damage, and slow down or halt business.

Public assets need to be protected on multiple fronts and from various attacks; therefore, multiple different security capabilities need to be implemented. Customers must also consider the operational efficiency of the solutions they implement. Managing multiple point products to mitigate different attacks, or multiple vendors to meet company security objectives and requirements, creates operational inefficiencies and issues such as multiple UIs/dashboards, additional training, and a lack of cross-product integration.

The diagram below shows a typical request for a public asset passing through the Cloudflare network. Cloudflare acts as a reverse proxy: requests are routed to the closest data center, and performance and security services are applied before the request is forwarded on to its destination. These services can easily be consolidated and used together regardless of where workloads are deployed, and operations and implementation remain consistent. Note: the diagram does not detail all of Cloudflare's services.

![Every request through Cloudflare passes once for inspection across all security products.](https://developers.cloudflare.com/_astro/security-ref-arch-4.PP-9vg85_1jncS8.svg) 

The diagram highlights the following:

* The [world's fastest DNS service ↗](https://www.dnsperf.com/) provides fast resolution of public hostnames
* Ensure data compliance by [choosing geographic locations ↗](https://www.cloudflare.com/data-localization/) for the inspection and storage of data
* Spectrum extends Cloudflare security capabilities to all UDP/TCP applications
* Security services inspect a request in one pass
* Application performance services also act on the request in the same pass
* [Smart routing](https://developers.cloudflare.com/argo-smart-routing/) finds the lowest latency path between Cloudflare and the public destination

#### Common attacks and protection

Cloudflare's broad product portfolio protects against a wide variety of attacks. Several common attacks are described in more detail below and include a reference to the Cloudflare products that are used to mitigate the specific attack.

##### DDoS attacks

A [distributed denial-of-service (DDoS) attack ↗](https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/) is a malicious attempt to disrupt the availability of a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of traffic. The goal is to slow down or crash a program, service, computer, or network, or to fill up capacity so that no one else can use or receive the service. DDoS attacks can occur at L3, L4, or L7, and Cloudflare provides protections at all these different layers.

![DDoS attacks are prevented at layers 3, 4 and 7.](https://developers.cloudflare.com/_astro/security-ref-arch-5.Dk00_Til_Z1zB2tT.svg) 

Cloudflare’s L7 DDoS Protection prevents denial of service at layer 7; Spectrum protects at layer 4; and Magic Transit protects at layer 3. In addition to these core DDoS-specific security products, Cloudflare provides advanced rate limiting capabilities to throttle traffic based on very granular request data, including header information and API tokens. Cloudflare’s Bot Management capabilities can also limit denial-of-service attacks by effectively mitigating bot traffic.

Products: [L7 DDoS](https://developers.cloudflare.com/ddos-protection/managed-rulesets/http/), [Spectrum](https://developers.cloudflare.com/spectrum/), [Magic Transit](https://developers.cloudflare.com/magic-transit/)

##### Zero-day attacks

A zero-day exploit (also called a zero-day threat) is an attack that takes advantage of a security vulnerability that does not have a fix in place. It is referred to as a "zero-day" threat because once the flaw is discovered, the developer or organization has "zero days" to then come up with a solution.

Web Application Firewall (WAF) [Managed Rules](https://developers.cloudflare.com/waf/managed-rules/) allow you to deploy pre-configured managed rulesets that provide immediate protection against the following:

* Zero-day vulnerabilities
* Top 10 attack techniques
* Use of stolen/exposed credentials
* Extraction of sensitive data

WAF checks incoming web requests and filters undesired traffic based on sets of rules (rulesets) deployed at the edge. These managed rulesets are maintained and regularly updated by Cloudflare. From the extensive threat intelligence obtained from across our global network, Cloudflare is able to quickly detect and classify threats. As new attacks/threats are identified, Cloudflare will automatically push WAF rules to customers to ensure they are protected against the latest zero-day attacks.

Additionally, Cloudflare provides [WAF Attack Score](https://developers.cloudflare.com/waf/detections/attack-score/), which complements Cloudflare managed rules by detecting attack variations. Malicious actors typically produce these variations using fuzzing techniques, trying to identify ways to bypass existing security policies. The WAF classifies each request using a machine learning algorithm, assigning an attack score from 1 to 99 based on the likelihood that the request is malicious. Rules can then be written that use these scores to determine what traffic is permitted to the application.
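
As a sketch of how a score-based rule behaves, consider the following. The score bands and action names here are illustrative thresholds, not Cloudflare's published cutoffs:

```python
# Illustrative sketch: acting on a WAF attack score (1-99, where a lower
# score means the request is more likely malicious). Thresholds are
# example values, not Cloudflare's documented score bands.

def action_for_score(attack_score: int) -> str:
    if attack_score <= 20:       # very likely an attack
        return "block"
    if attack_score <= 50:       # suspicious; challenge the client
        return "managed_challenge"
    return "allow"               # likely clean traffic

assert action_for_score(5) == "block"
assert action_for_score(35) == "managed_challenge"
assert action_for_score(90) == "allow"
```

In practice, such thresholds are expressed as WAF rule expressions rather than application code.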

![Machine learning maintains lists of managed rules to determine if the request should be let through the WAF or not.](https://developers.cloudflare.com/_astro/security-ref-arch-6.DGieuMIT_Z7OIzr.svg) 

Products: [WAF - Cloudflare Managed Rules](https://developers.cloudflare.com/waf/managed-rules/)

##### Unauthorized access

Unauthorized access can result from broken authentication or broken access control due to vulnerabilities in authentication, weak passwords, or easily bypassed authorization. Cloudflare mTLS (mutual TLS) and JWT (JSON Web Tokens) validation can be used to bolster authentication. Clients or API requests that don’t have a valid certificate or JWT can be denied access via security policy. Customers can create and manage mTLS certificates from the Cloudflare dashboard or an API. Cloudflare’s WAF and [Exposed Credentials Check](https://developers.cloudflare.com/waf/managed-rules/check-for-exposed-credentials/) managed ruleset can be used to detect compromised credentials being used in authentication requests. WAF policies can also be used to restrict access to applications/paths based on different request criteria.
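
To illustrate what JWT validation involves, here is a minimal HS256 verification sketch using only the Python standard library. In practice you would rely on API Shield's JWT validation or a maintained JOSE library rather than hand-rolled code; the key and claims below are illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def make_token(claims: dict, secret: bytes) -> str:
    """Create an HS256 JWT (only to demonstrate verification below)."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url_encode(sig)}"

def verify_jwt(token: str, secret: bytes):
    """Return the claims if the signature is valid and unexpired, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    expected = hmac.new(secret, f"{header}.{payload}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        return None
    claims = json.loads(_b64url_decode(payload))
    if "exp" in claims and claims["exp"] < time.time():
        return None
    return claims

secret = b"demo-secret"
token = make_token({"sub": "user-123", "exp": time.time() + 60}, secret)
assert verify_jwt(token, secret)["sub"] == "user-123"
assert verify_jwt(token, b"wrong-secret") is None
```

Requests whose tokens fail this kind of check can then be denied by security policy before they ever reach the origin.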

Products: [SSL/TLS - mTLS](https://developers.cloudflare.com/ssl/client-certificates/enable-mtls/), [API Shield (JWT Validation)](https://developers.cloudflare.com/api-shield/security/jwt-validation/), [WAF](https://developers.cloudflare.com/waf/)

##### Client-side attacks

Client-side attacks like [Magecart ↗](https://blog.cloudflare.com/detecting-magecart-style-attacks-for-pageshield) involve compromising third-party libraries, compromising a website, or exploiting vulnerabilities in order to exfiltrate sensitive user data to an attacker-controlled domain. Client-side security leverages Cloudflare’s position in the network as a reverse proxy to receive information directly from the browser about:

1. What JavaScript files/modules are being loaded
2. Outbound connections made
3. Inventory of cookies used by the application

Client-side security uses threat-feed detections of malicious JavaScript domains and URLs. In addition, it can download JavaScript source files and run them through a machine learning classifier to identify malicious behavior and activity; the result is a JS Integrity Score designating if the JavaScript file is malicious. Client-side security can also detect changes to JavaScript files. Alerts using emails, webhooks, and PagerDuty can be set based on different criteria such as new resources identified, code changes, and malicious code/domains/URLs.

[Content security rules](https://developers.cloudflare.com/client-side-security/rules/) can be created and applied to add an additional level of security that helps detect and mitigate certain types of attacks, including:

* Content/code injection
* Cross-site scripting (XSS)
* Embedding malicious resources
* Malicious iframes (clickjacking)
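
As an illustration of the model behind such rules, the sketch below composes a Content-Security-Policy header that restricts where scripts and frames may load from. The directive values are example origins only, not recommendations:

```python
# Illustrative sketch: building a Content-Security-Policy header. Sources
# such as static.example.com are placeholders for your own allowed origins.

def build_csp(directives: dict[str, list[str]]) -> str:
    return "; ".join(f"{name} {' '.join(srcs)}"
                     for name, srcs in directives.items())

csp = build_csp({
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://static.example.com"],  # limit injected scripts (XSS)
    "frame-ancestors": ["'none'"],                           # block malicious iframes (clickjacking)
})

assert "frame-ancestors 'none'" in csp
```

A browser enforcing this header refuses to load scripts or frames from origins outside the allowed lists, which is the positive security model behind content security rules.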

Products: [Client-side security](https://developers.cloudflare.com/client-side-security/)

##### Data exfiltration

Data exfiltration is the process of acquiring sensitive data through malicious tactics or through misconfigured services. Cloudflare Sensitive Data Detection addresses common data loss threats. Within the WAF, these rules monitor the download of specific sensitive data (for example, financial and personally identifiable information). Specific patterns of sensitive data are matched and logged. Sensitive Data Detection is also integrated with API Shield, so customers are alerted on any API responses returning sensitive data matches.
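
To give a flavor of this kind of detection, the sketch below matches one class of sensitive data, payment card numbers validated with the Luhn checksum. Cloudflare's managed detections cover far more patterns than this single illustrative rule:

```python
import re

# Illustrative sketch: detect payment card numbers in a response body.
# The regex and Luhn check are a stand-in for one detection pattern.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used to validate card number candidates."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(body: str) -> list[str]:
    hits = []
    for m in CARD_RE.finditer(body):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

assert find_card_numbers("card: 4111 1111 1111 1111 ok") == ["4111111111111111"]
```

A matching pattern in a response can then be logged or blocked before the data leaves the network.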

Products: [WAF - Sensitive Data Detection](https://developers.cloudflare.com/waf/managed-rules/)

##### Credential stuffing

Credential stuffing is a cyberattack in which credentials obtained from a data breach on one service are used to attempt to log in to another unrelated service. Usually, automation tools or scripting are used to loop through a vast number of stolen credentials, sometimes augmented with additional data in the hopes of achieving account takeover.

Cloudflare Bot Management can be used to detect potentially malicious bots. Cloudflare challenges can also be used to challenge suspect requests and stop automated attempts to gain access. WAF policies can be used with specific request criteria to prevent attacks. Additionally, Cloudflare’s WAF and the Exposed Credentials Check managed ruleset can be used to detect compromised credentials being used in authentication requests. Rate limiting can also throttle requests, reducing the effectiveness of credential stuffing techniques.
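
The general idea behind checking credentials against a breach corpus without revealing them can be sketched as follows: hash the password and look up only a short hash prefix, then compare suffixes locally (the k-anonymity scheme popularized by Have I Been Pwned; Cloudflare's checks use a similar privacy-preserving approach). The breach dataset here is a stand-in:

```python
import hashlib

# Stand-in breach corpus keyed by 5-character SHA-1 prefix; the entry below
# is the well-known SHA-1 of the string "password".
BREACHED_SHA1 = {
    "5BAA6": {"1E4C9B93F3F0682250B6CF8331B7EE68FD8"},
}

def is_breached(password: str) -> bool:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the short prefix would ever leave the client; suffixes are
    # compared locally, so the full hash is never disclosed.
    return suffix in BREACHED_SHA1.get(prefix, set())

assert is_breached("password") is True
```

Authentication requests carrying a breached credential can then be flagged or challenged by policy.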

Products: [Bot Management](https://developers.cloudflare.com/bots/), [WAF](https://developers.cloudflare.com/waf/), [Rate Limiting](https://developers.cloudflare.com/waf/rate-limiting-rules/)

##### Brute force attacks

Brute force attacks attempt to guess passwords by trying random character combinations, sometimes combined with lists of common passwords. Usually, automation tools or scripting are used to loop through a vast number of possibilities in a short amount of time.

Cloudflare Bot Management can be used to detect potentially malicious bots. Cloudflare challenges can also be used to challenge suspect requests and stop automated brute force attacks. WAF and rate limiting policies can be used with specific request criteria to apply granular policies on application login pages to block or throttle traffic.

Products: [Bot Management](https://developers.cloudflare.com/bots/), [WAF](https://developers.cloudflare.com/waf/), [Rate Limiting](https://developers.cloudflare.com/waf/rate-limiting-rules/)

##### Credit card skimming

Credit card skimming is a fraudulent technique for stealing payment information from websites. Client-side security can be used to detect clients using malicious JavaScript libraries or making connections to known malicious domains or URLs. Client-side security will also detect changes to files/code being used on a site and give JavaScript files a JS Integrity Score assessing whether the code is malicious. Content Security Policies (CSPs) can be deployed to enforce a positive security model. These capabilities can prevent compromised code from performing malicious behavior such as credit card skimming.

Products: [Client-side security](https://developers.cloudflare.com/client-side-security/)

##### Inventory hoarding

Inventory hoarding is when malicious bots are used to buy large quantities of products online, preventing legitimate consumers from purchasing them. This can cause many issues for businesses, including creating artificial scarcity, causing inflated prices, and disrupting access for legitimate customers. Cloudflare Bot Management can be used to detect potentially malicious bots. Cloudflare challenges can also be used to challenge suspect requests and stop automated processes. WAF policies can be used with specific request criteria to prevent attacks.

Products: [Bot management](https://developers.cloudflare.com/bots/), [WAF](https://developers.cloudflare.com/waf/)

##### Fuzzing (vulnerability scanning)

[Fuzzing ↗](https://owasp.org/www-community/Fuzzing) is an automated testing method used by malicious actors that uses various combinations of data and patterns to inject invalid, malformed, or unexpected inputs into a system. The malicious user hopes to find defects and vulnerabilities that can then be exploited. Cloudflare WAF leverages machine learning to detect fuzzing-based attempts to bypass security policies. The WAF Attack Score complements managed rules and indicates the likelihood of an attack.

Bot Management can detect potentially malicious bots that automate vulnerability scanning. With API Shield, customers can employ schema validation and sequence mitigation to prevent automated scanning and fuzzing of APIs.

Products: [WAF](https://developers.cloudflare.com/waf/), [Bot Management](https://developers.cloudflare.com/bots/), [API Shield](https://developers.cloudflare.com/api-shield/)

##### Cross-Site Scripting (XSS) attacks

Cross-Site Scripting (XSS) attacks are a type of injection attack in which malicious scripts are injected into websites and then used by the end user’s browser to access sensitive user information such as session tokens, cookies, and other information.

Cloudflare WAF leverages machine learning to detect attempts to bypass security policies and provides a specific WAF Attack Score indicating the likelihood that the request is an XSS attack.

Products: [WAF](https://developers.cloudflare.com/waf/)

##### Remote Code Execution (RCE) attacks

In a remote code execution (RCE) attack, an attacker runs malicious code on an organization’s computers or network. The ability to execute attacker-controlled code can be used for various purposes, including deploying additional malware or stealing sensitive data.

Cloudflare WAF leverages machine learning to detect attempts to bypass security policies and provides a specific WAF Attack Score indicating the likelihood that the request is an RCE attack.

Products: [WAF](https://developers.cloudflare.com/waf/)

##### SQL injection (SQLi) attacks

Structured Query Language Injection (SQLi) is a code injection technique used to modify or retrieve data from SQL databases. By inserting specialized SQL statements into an entry field, an attacker is able to execute commands that allow for the retrieval of data from the database, the destruction of sensitive data, or other manipulative behaviors.

Cloudflare WAF leverages machine learning to detect attempts to bypass security policies and provides a specific WAF Attack Score indicating the likelihood that the request is an SQLi attack.

Products: [WAF](https://developers.cloudflare.com/waf/)

##### Malware

Malware can refer to viruses, worms, trojans, ransomware, spyware, adware, and other types of harmful software. A key distinction of malware is that it needs to be intentionally malicious; any software that unintentionally causes harm is not considered to be malware.

When Uploaded Content Scanning is enabled, Cloudflare detects uploaded content, such as files, and scans it for malicious signatures like malware. The scan results, along with additional metadata, are exposed as fields available in WAF custom rules, allowing customers to implement fine-grained mitigation rules.

Products: [WAF - Uploaded Content Scanning](https://developers.cloudflare.com/waf/detections/malicious-uploads/)

#### Cloudflare application security products

This document has covered some common attacks and Cloudflare products used to detect and mitigate respective threats. Below we highlight and provide some additional details on each product across Cloudflare’s application and network security portfolio.

##### Security Analytics

Security Analytics brings together all of Cloudflare’s security detection capabilities in one dashboard. Customers get a quick view of and insight into mitigated and unmitigated traffic, attack traffic, bot traffic, malicious content upload attempts, and details around rate limiting analysis and account takeover analysis. From the dashboard displaying detected threats, customers can put mitigation policies in place with the click of a button.

![All security detection can be seen from a single dashboard.](https://developers.cloudflare.com/_astro/security-ref-arch-7.BelBfrod_Z12bNrP.svg) 

##### Web Application Firewall (WAF)

Using Cloudflare [WAF](https://developers.cloudflare.com/waf/), customers can deploy custom rules based on very granular request criteria to mitigate specific threats or to block requests with certain HTTP anomalies. In addition, customers can deploy Cloudflare managed rules to mitigate zero-day attacks, common OWASP Top 10 attacks, requests using known leaked credentials, and requests extracting sensitive data.

[WAF Managed Rules](https://developers.cloudflare.com/waf/managed-rules/) allow customers to deploy pre-configured managed rulesets that provide immediate protection against:

* Zero-day vulnerabilities
* Top 10 attack techniques
* Use of stolen/exposed credentials
* Extraction of sensitive data

##### Rate limiting

[Rate limiting](https://developers.cloudflare.com/waf/rate-limiting-rules/) can be used to mitigate various attacks, including volumetric attacks, credential stuffing, web scraping, and DoS attacks. Cloudflare rate limiting allows customers to define rate limits for requests matching an expression, and the action to perform when those rate limits are reached. Rate limiting can be granular based on specific request or header criteria and can also be based on sessions or API tokens. Customers can configure actions including logging, blocking, and challenges for when the specified rate is exceeded.

Customers can also configure which request criteria are used as a counter for determining when to throttle or block after a limit is exceeded. Customers can implement two different behaviors for rate limiting:

1. **Block for the selected duration**. Once the rate is exceeded, the WAF will block all requests during the selected duration before the counter is reset.
![All actions are blocked once the rate limit is reached.](https://developers.cloudflare.com/_astro/security-ref-arch-8.DyW4Rkuf_ZVhqMl.svg) 
2. **Throttle requests over the maximum configured rate**. The WAF will block any requests exceeding the configured rate and allow the remaining requests; this behavior acts like a sliding window.
![Requests over the configured rate are blocked while the remaining requests are allowed.](https://developers.cloudflare.com/_astro/security-ref-arch-9.CXEx1mEx_1WjhMC.svg) 
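
The two behaviors can be sketched in a few lines of Python. This is a conceptual model of fixed-window blocking versus sliding-window throttling for a single client, not Cloudflare's implementation:

```python
from collections import deque

class BlockForDuration:
    """Once the limit is hit, deny everything until the window resets."""
    def __init__(self, max_requests: int, period: float):
        self.max, self.period = max_requests, period
        self.count, self.window_start = 0, 0.0

    def allow(self, now: float) -> bool:
        if now - self.window_start >= self.period:
            self.count, self.window_start = 0, now  # reset the counter
        self.count += 1
        return self.count <= self.max

class ThrottleOverRate:
    """Sliding window: only the requests over the rate are denied."""
    def __init__(self, max_requests: int, period: float):
        self.max, self.period = max_requests, period
        self.times: deque = deque()

    def allow(self, now: float) -> bool:
        while self.times and now - self.times[0] >= self.period:
            self.times.popleft()  # drop requests outside the window
        if len(self.times) < self.max:
            self.times.append(now)
            return True
        return False
```

With a limit of 2 requests per 10 seconds, the first model denies everything after the limit until the counter resets, while the second admits new requests as soon as older ones slide out of the window.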

##### L7 DDoS

The Cloudflare [HTTP DDoS Attack Protection](https://developers.cloudflare.com/ddos-protection/managed-rulesets/http/) managed ruleset is a set of pre-configured rules used to match known DDoS attack vectors at layer 7 (application layer) on the Cloudflare global network. The rules match known attack patterns and tools, suspicious patterns, protocol violations, requests causing large amounts of origin errors, excessive traffic hitting the origin/cache, and additional attack vectors at the application layer. Cloudflare updates the list of rules in the managed ruleset on a regular basis.

##### API Shield

[API Shield](https://developers.cloudflare.com/api-shield/) is Cloudflare’s API management and security product. API Shield delivers visibility via API discovery and analytics, provides endpoint management, implements a positive security model, and prevents API abuse.

![API Shield provides API discovery, endpoint management, a positive security model, and abuse prevention.](https://developers.cloudflare.com/_astro/security-ref-arch-10.B6IOqcpe_Z1cxdgt.svg) 

API Gateway’s API Discovery uses machine learning to identify all API endpoints in a customer’s environment. Customers can then save endpoints to Endpoint Management so that additional API performance and error information can be collected and security policies can be applied.

Customers can enable a positive security model using mTLS, JWT validation, and schema validation and protect against additional API abuse with rate limiting and volumetric abuse protection as well as sequence mitigation and GraphQL protections.
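
A minimal sketch of schema validation's positive security model is shown below. The schema format is a simplified stand-in for the OpenAPI schemas API Shield consumes, and the field names are illustrative:

```python
# Illustrative positive security model: only request bodies matching the
# declared schema are allowed; everything else is rejected by default.
SCHEMA = {
    "required": ["item_id", "quantity"],
    "properties": {"item_id": str, "quantity": int},
}

def validate(body: dict, schema: dict) -> bool:
    if not isinstance(body, dict):
        return False
    for field in schema["required"]:
        if field not in body:
            return False
    for field, expected in schema["properties"].items():
        if field in body and not isinstance(body[field], expected):
            return False
    return True

assert validate({"item_id": "sku-1", "quantity": 2}, SCHEMA) is True
assert validate({"item_id": "sku-1", "quantity": "two"}, SCHEMA) is False
```

Because only known-good shapes are admitted, malformed or fuzzed API requests never reach the origin.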

![API Shield has many stages, discovery, review, using a positive security model, abuse protection, data protection and endpoint management/monitoring.](https://developers.cloudflare.com/_astro/security-ref-arch-11.CCbosnqv_oq720.svg "Common user workflow for API Shield")

Common user workflow for API Shield

##### Bot Management

[Bot Management](https://developers.cloudflare.com/bots/) is used to mitigate various malicious activities, including web scraping, price scraping, inventory hoarding, and credential stuffing. Cloudflare has multi-layered bot mitigation capabilities that include heuristics, machine learning, anomaly detection, and JS fingerprinting. Bot management also assigns a bot score to every request. WAF rules can be created around bot scores to create very granular security policies.
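
A hypothetical policy built on bot scores might look like the following sketch. The thresholds, paths, and action names are illustrative, not Cloudflare's rule syntax:

```python
# Hypothetical sketch of a WAF-style policy using a bot score (1-99, where
# a lower score means the request is more likely automated).

def bot_policy(bot_score: int, path: str, verified_bot: bool) -> str:
    if verified_bot:                      # for example, known search engine crawlers
        return "allow"
    if bot_score < 30 and path.startswith("/login"):
        return "block"                    # likely credential stuffing
    if bot_score < 30:
        return "managed_challenge"        # suspect automation elsewhere
    return "allow"

assert bot_policy(10, "/login", verified_bot=False) == "block"
assert bot_policy(10, "/products", verified_bot=False) == "managed_challenge"
assert bot_policy(95, "/login", verified_bot=False) == "allow"
```

In production, this logic is expressed as WAF rule expressions referencing the bot score rather than application code.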

![Bot management can filter good and bad bots.](https://developers.cloudflare.com/_astro/security-ref-arch-12.8OEt5sGB_1ltaUy.svg) 

Additionally, Cloudflare can take the action of challenging clients if it suspects undesired bot activity. Cloudflare offers its [challenge](https://developers.cloudflare.com/cloudflare-challenges/) platform where the appropriate type of challenge is dynamically chosen based on the characteristics of a request. This helps avoid CAPTCHAs, which result in a poor customer experience.

Depending on the characteristics of a request, Cloudflare will choose an appropriate type of challenge, which may include but is not limited to:

* A non-interactive challenge.
* A custom interactive challenge (such as clicking a button).
* Private Access Tokens (using recent Apple operating systems).

With [Turnstile](https://developers.cloudflare.com/turnstile/), Cloudflare has completely moved away from CAPTCHA. Turnstile is Cloudflare’s smart CAPTCHA alternative. It can be embedded into any website without sending traffic through Cloudflare and works without showing visitors a CAPTCHA. Turnstile allows you to run challenges anywhere on your site in a less intrusive way and uses APIs to communicate with Cloudflare’s Managed Challenge platform.
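
On the server side, a Turnstile token is verified by posting it to Cloudflare's documented siteverify endpoint. The sketch below uses only the Python standard library; the secret key and token values are placeholders you would supply yourself:

```python
import json
import urllib.parse
import urllib.request

# Cloudflare's documented Turnstile verification endpoint.
VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def outcome_ok(outcome: dict) -> bool:
    """Interpret the JSON body returned by siteverify."""
    return bool(outcome.get("success"))

def token_is_valid(secret_key: str, token: str) -> bool:
    """POST the widget token to siteverify and return whether it passed."""
    data = urllib.parse.urlencode(
        {"secret": secret_key, "response": token}).encode()
    request = urllib.request.Request(VERIFY_URL, data=data)
    with urllib.request.urlopen(request) as resp:
        return outcome_ok(json.load(resp))
```

A failed verification (for example, an `error-codes` entry of `invalid-input-response`) means the visitor did not complete the challenge and should be denied.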

![Turnstile can be deployed to totally avoid presenting users with a CAPTCHA.](https://developers.cloudflare.com/_astro/security-ref-arch-13.Dw5VEN0r_kv0N3.svg) 

##### Client-side security

[Client-side security](https://developers.cloudflare.com/client-side-security/) (formerly known as Page Shield) ensures the safety of website visitors’ browser environment and protects against client-side attacks like Magecart. By using a Content Security Policy (CSP) deployed with a report-only directive to collect information from the browser, client-side security tracks loaded resources like scripts and detects new resources or connections being made by the browser. Additionally, client-side security alerts customers if it detects scripts from malicious domains or URLs — or connections being made from the browser to malicious domains or URLs.

Client-side security can download JavaScript source files and run them through a machine learning classifier to identify malicious behavior and activity; the result is a JS Integrity Score designating if the JavaScript file is malicious.

##### SSL/TLS

Cloudflare’s [SSL/TLS](https://developers.cloudflare.com/ssl/) provides a number of features to meet customer encryption requirements and certificate management needs. An SSL/TLS certificate is what enables websites and applications to establish secure connections. With SSL/TLS, a client — such as a browser — can verify the authenticity and integrity of the server it is connecting with, and use encryption to exchange information.

Cloudflare’s global network is at the core of several products and services that Cloudflare offers. In terms of SSL/TLS, this means instead of only one certificate, there can actually be two certificates involved in a single request: an edge certificate and an origin certificate.

![SSL/TLS can be used for both Cloudflare to user, and origin server to Cloudflare security.](https://developers.cloudflare.com/_astro/security-ref-arch-14.JS7QlPBw_pntod.svg) 

Edge certificates are presented to clients visiting the customer’s website or application. Origin certificates guarantee the security and authentication on the other side of the network, between Cloudflare and the origin server of the customer's website or application. [SSL/TLS encryption modes](https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/) control whether and how Cloudflare will use both these certificates, and you can choose between different modes.

Customers can also enable [mutual Transport Layer Security (mTLS)](https://developers.cloudflare.com/ssl/client-certificates/enable-mtls/) for hostnames and API endpoints to bolster security for authentication, enforcing that only devices with valid certificates can gain access. Additional security features like [Authenticated Origin Pulls](https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/) can be configured to help ensure requests to the origin server come from the Cloudflare network. [Keyless SSL](https://developers.cloudflare.com/ssl/keyless-ssl/) allows security-conscious clients to upload their own custom certificates and benefit from Cloudflare, but without exposing their TLS private keys. With [Cloudflare for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/), customers can also issue and validate certificates for their own customers.

##### Security Center

[Cloudflare Security Center](https://developers.cloudflare.com/security-center/) offers attack surface management (ASM) that inventories IT assets, enumerates potential security issues, controls phishing and spoofing risks, and enables security teams to investigate and mitigate threats in a few clicks. The Security Center is a great starting point for security analysts to get a global view of all potential issues across all applications/domains.

Key capabilities offered:

* Inventory and review IT infrastructure assets like domains, ASNs, and IPs.
* Manage an always up-to-date list of misconfigurations and risks in Cloudflare IT assets.
* Query threat data gathered from the Cloudflare network to investigate and respond to security risks.
* Gain full control over who sends email on your organization's behalf with DMARC Management.

##### Cloudflare for SaaS

If you build and host your own SaaS product offering, then [Cloudflare for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/) might be of interest. It allows customers to extend the security and performance benefits of Cloudflare’s network to their customers via their own custom or vanity domains. Cloudflare for SaaS offers multiple configuration options. In the below diagram, custom hostnames are routed to a default origin server called “fallback origin”.

![Bring Cloudflare security to customer domains using your SaaS application.](https://developers.cloudflare.com/_astro/security-ref-arch-15.BuEBz4JW_sCl4H.svg) 

#### Cloudflare network security products

##### Magic Transit

[Magic Transit](https://developers.cloudflare.com/magic-transit/) protects entire IP subnets from DDoS attacks, providing for sub-second threat detection while also accelerating network traffic. It uses Cloudflare’s global network to mitigate attacks, employing standards-based networking protocols like BGP, GRE, and IPsec for routing and encapsulation.

All network assets, whether on-premises or in private or public cloud environments, can easily be protected by sitting behind, and being advertised from, the Cloudflare network, which provides over 405 Tbps of capacity.

![Magic Transit can secure your private network links.](https://developers.cloudflare.com/_astro/security-ref-arch-16.D6MVHn2o_Z1CKeYi.svg) 

##### Cloudflare WAN

With [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/), customers can securely connect any traffic source — data centers, offices, devices, cloud properties — to Cloudflare’s network and configure routing policies to get the bits where they need to go. Cloudflare WAN supports a variety of on-ramps, including anycast GRE and IPsec tunnels, Cloudflare Network Interconnect, Cloudflare Tunnel, the Cloudflare One Client, and a variety of network on-ramp partners. Cloudflare WAN can help end reliance on traditional SD-WAN appliances and securely connect users, offices, data centers, and hybrid cloud over the Cloudflare global network without relying on vendor-specific hardware or software.

##### Cloudflare Network Firewall

[Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) is Cloudflare's firewall-as-a-service solution delivered from Cloudflare's global network and is integrated with Magic Transit and Cloudflare WAN. It allows for enforcing consistent network security policies across customers' entire WAN, including headquarters, branch offices, and virtual private clouds. Customers can deploy granular rules that globally filter on protocol, port, IP addresses, packet length, and bit field match.

##### Network Flow

[Network Flow](https://developers.cloudflare.com/network-flow/) (formerly Magic Network Monitoring) is a cloud network flow monitoring solution that gives customers end-to-end network traffic visibility, DDoS attack type identification, and volumetric traffic alerts. When a DDoS attack is detected, an alert can be received via email, webhook, or PagerDuty.

##### Spectrum

[Spectrum](https://developers.cloudflare.com/spectrum/) is a reverse proxy product that extends the benefits of Cloudflare to all TCP/UDP applications, providing L4 DDoS protection. Spectrum also provides an IP firewall, allowing customers to deny IPs or IP ranges for granular control of traffic to application servers. Customers can also configure rules to block visitors from a specified country or even an Autonomous System Number (ASN).

### Protecting private resources

Private resources typically contain highly sensitive, company confidential information and either by way of laws and regulations, or by the nature of the confidentiality of the data, access to them is much more restricted. Traditionally, private applications were only accessible on private networks in company buildings that users had to have physical access to. But as we all know today, access to private resources needs to take place from a wide range of locations, and paradoxically, private applications can live in very public locations. Most SaaS applications are exposed to the public Internet.

The following are typical attributes of private resources:

* Users have been pre-authorized and provisioned. They can't just sign up. They need to be given specific access to the resource either directly or via access control mechanisms such as certificates, group membership, or role assignment.
* Network access to a self-hosted resource typically travels over managed, private network routes and is not reachable from the general Internet.
* Private resources live in data centers (physical or virtual) and are connected to networks that are hosted and managed by the business, either on-premises or as virtual private networks running in public cloud infrastructure.

As mentioned, traditional access to private resources required physical access to the network by being in the office connected via Ethernet. As remote access needs increased, companies installed on-premises VPN servers that allowed users and devices to "dial in" to these private networks. Many applications have left these private networks and instead migrated to SaaS applications or are hosted in public cloud infrastructure. This traditional approach has become unmanageable and costly, with a variety of technologies providing network connectivity and access control.

Another important thing to note is that many of the services used for securing and providing connectivity for public resources can also be used for private resources. The most obvious here is Cloudflare WAN and Cloudflare Network Firewall. Customers also use our WAF in front of privately hosted applications that are only accessible through private networks. The idea is that even if access to an application is only from trusted private connections, it is still possible for an attacker to compromise what seems to be a trusted device; therefore, application injection attacks and other vulnerabilities can be exploited by devices with existing trusted network access. This is exactly in line with the idea of a Zero Trust security program. Read more about the approaches to Zero Trust using a SASE platform in our [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/).

As we describe the following Cloudflare services, you will learn how the Cloudflare network and our methods of connecting it to your own private networks provides greater security, flexibility, and a more centralized control plane for access to private resources. The following diagram illustrates the sort of environment that represents a typical customer's private infrastructure.

![Cloudflare's SASE platform can protect users and devices no matter where in your enterprise network, or not, they reside.](https://developers.cloudflare.com/_astro/security-ref-arch-18.D5ODORV0_Z1gHcKU.svg) 

Protecting internal resources can be broken down into the following areas.

* Securing connectivity between the user and the application/network.
* Identity systems providing authentication and maintaining user identities and group membership.
* Policies controlling user access to applications/data.
* Data protection controls to identify and protect sensitive and confidential data.
* Protecting users and devices from attacks (malware, phishing, etc.) that originate from access to the Internet.
* Operational visibility to IT and security teams.

#### Securing connectivity to private resources

Many privately hosted applications and networks do not have direct connectivity to the Internet. As mentioned previously, access traditionally has been enabled by one of two methods. One is when users connect physically to the same networks the private resources reside on, i.e. walking into the office and connecting to the office WiFi. The other is creating a virtual private network (VPN) connection over the Internet and "dialing in" to the private company network.

However, the need today is still the same. You have private networks with private applications — and remote users need access. You should regard Cloudflare as your new enterprise network, where all authorized users (employees, contractors, partners) can connect to any private application from anywhere. This means your network topology will feature Cloudflare in the middle, providing connectivity from all networks to each other.

![Cloudflare's SASE platform can also connect a wide variety of networks together into one larger, new corporate network.](https://developers.cloudflare.com/_astro/security-ref-arch-19.DZCNQ04z_ZymtGH.svg) 

In the above diagram you can see a variety of private networks and end user devices connected to Cloudflare, which then facilitates the routing and access controls between those networks, and therefore the applications and other resources. This is often regarded as East-West traffic, because it originates from, and is destined for, a privately managed network.

Because all network traffic routes through Cloudflare, security controls are defined and apply to all traffic as it flows between networks. As long as a network, device, or user is connected to Cloudflare, you can identify it and apply policy. It also means things like data protection can be simplified — one single rule can be implemented to detect the transfer of and access to sensitive data and can be applied across the entire network with ease.

Existing private infrastructure can be complex. Cloudflare provides a variety of methods by which businesses can connect their networks and user devices into this new enterprise network. We often call these methods "on-ramps," which describes how traffic for a specific network or device is routed into Cloudflare. The following table outlines these different methods.

| Method                                                                                                                               | Description                                                                                                                                                  | Common Use                                                                                                                                                                          |
| ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/)                                                                  | IPsec or GRE tunnel from networking devices to Cloudflare, routing entire network traffic.                                                                   | Connecting existing network routers to Cloudflare. Allowing all traffic into and out of the network to go through Cloudflare.                                                       |
| [Cloudflare One Appliance](https://developers.cloudflare.com/cloudflare-wan/configuration/appliance/)                                | Appliance-based IPsec or GRE tunnel from networking devices to Cloudflare, routing entire network traffic.                                                   | Uses the same technology as Cloudflare WAN; however, instead of using existing networking devices, a dedicated appliance or virtual machine is used — the Cloudflare One Appliance. |
| [cloudflared](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/)                               | Software agent deployed on servers or alongside services like Kubernetes for creating a tunnel for incoming connections to private applications or networks. | IT admins or application owners can easily install this tunnel software to expose their application to the Cloudflare network.                                                      |
| [WARP Connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/) | Software agent deployed on servers for creating a tunnel for incoming and outgoing connections to private applications or networks.                          | Similar to cloudflared, but supports East to West traffic and is often used in place of Cloudflare WAN when there is no ability to create an IPsec tunnel from existing devices.    |
| [WARP Desktop Agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/)             | Software agent deployed on user devices, creating a tunnel for traffic to and from private applications and networks.                                        | Connecting end user devices like phones and laptops to be part of the Cloudflare network.                                                                                           |
| [Cloudflare Network Interconnect ↗](https://www.cloudflare.com/network-services/products/network-interconnect/)                      | Direct connection between your physical networks and Cloudflare.                                                                                             | When your applications live in the same data centers we operate in, we can connect those networks directly to Cloudflare.                                                           |

For more details on how these methods work, please refer to our [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/).

#### User connectivity

All the above methods are for connecting networks and applications to Cloudflare, and some users will be on devices connected directly to those networks. They might be in the corporate headquarters or working from a branch or retail location. However, many users are working from home, sitting in a coffee shop, or working on a plane. Cloudflare provides the following methods for connecting users to Cloudflare. This is similar in concept to installing a VPN client on a user device, with the difference that the connection is made to our global network rather than to your own VPN appliances.

##### Device agent

For the best user experience and the greatest degree of access control, we recommend deploying our [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/) to devices. Supported on Windows, macOS, Linux, iOS, and Android, the agent performs two main roles. First, it routes all traffic from the device to Cloudflare, allowing for access to all your existing connected private networks and applications. Second, the agent provides device posture information such as operating system version, encrypted storage status, and other details. This information is then associated with the authenticated user and can be used as part of access control policy. The agent can be installed manually, but most enterprises deploy it using their device management (MDM) software.

##### Browser proxy

There may be instances where you cannot install software on end user devices. In those instances, Cloudflare provides a proxy endpoint where browsers can be configured to on-ramp their traffic to Cloudflare. This is either done manually by the end user, or by using [automated browser configuration](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/proxy-endpoints/) files.
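Browser proxy configuration is typically distributed as a PAC (proxy auto-config) file: a single JavaScript function the browser calls for every request to decide where to send it. The sketch below shows the general shape of such a file (annotated with TypeScript types for clarity; a real PAC file is plain JavaScript). The proxy hostname and the internal domain are placeholders, not real values; your account-specific proxy endpoint comes from your Cloudflare configuration.

```typescript
// Sketch of a PAC file for on-ramping browser traffic to a proxy endpoint.
// The endpoint hostname and internal domain below are placeholders.
function FindProxyForURL(url: string, host: string): string {
  // Keep local and intranet hosts off the proxy (illustrative exception).
  if (host === "localhost" || host.endsWith(".internal.example.com")) {
    return "DIRECT";
  }
  // Send everything else to the proxy endpoint over HTTPS.
  return "HTTPS abcd1234.proxy.example.com:443";
}
```

Browsers evaluate this function per request, so exceptions for internal-only hosts can be expressed directly in the file.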

##### Isolated browser

In some situations, you have no ability to modify the end device in any way. In those instances we provide the ability for a user to access a browser that runs directly on our edge network. This [browser isolation service](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/) requires users to point their browser at a Cloudflare URL, which in turn runs a headless, secure browser on one of our edge servers. The rendered page is then securely streamed to the user's local browser over HTTPS and WebRTC connections. For more information, refer to [this architecture](https://developers.cloudflare.com/reference-architecture/diagrams/sase/sase-clientless-access-private-dns/).

#### Integrating identity systems

Users cannot simply sign up and access your private resources; their identities and associated credentials are typically created and managed in an enterprise identity provider (IdP). Cloudflare integrates with both [enterprise and consumer-based identity services](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/), and also offers a simple email-based one-time password (OTP) service for when you need to authenticate a user with only an email address.

Cloudflare supports integrations with multiple identity providers, including several of the same type. So if you manage an Okta instance for your employees but have acquired another company with its own Okta instance, both can be integrated with Cloudflare. Cloudflare then acts as a proxy for the SSO process. Applications are configured using SAML or OIDC to use Cloudflare for authentication, and Cloudflare in turn redirects users through the authentication flow of an integrated IdP. Group information can also be synchronized into Cloudflare via SCIM for use in access control policies.
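To make the redirect step concrete: in an OIDC flow, the application sends the user to an authorization endpoint with a standard set of query parameters, and the authenticating party (here, Cloudflare proxying to the IdP) takes over from there. The sketch below builds such an authorization request URL per OpenID Connect conventions. The endpoint, client ID, and redirect URI are hypothetical placeholders, not values from any real configuration.

```typescript
// Build a standard OIDC authorization request URL (OAuth 2.0 / OpenID Connect).
// All concrete values passed in by callers are hypothetical placeholders.
function buildAuthorizationUrl(opts: {
  authorizationEndpoint: string;
  clientId: string;
  redirectUri: string;
  state: string; // opaque value to correlate the response with this request
}): string {
  const params = new URLSearchParams({
    response_type: "code",          // authorization code flow
    client_id: opts.clientId,
    redirect_uri: opts.redirectUri,
    scope: "openid email profile",  // identity claims requested
    state: opts.state,
  });
  return `${opts.authorizationEndpoint}?${params.toString()}`;
}
```

The application only ever needs to know one authorization endpoint; which IdP ultimately handles the login is decided behind that endpoint, which is what makes swapping IdPs a single-integration change.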

![Many different IdP's can be integrated, from Google, Microsoft and Github as well as any SAML or OAuth system.](https://developers.cloudflare.com/_astro/security-ref-arch-20.CGOXN25S_Z20rPBo.svg) 

This centralization of identity into a common access control layer allows you to build clearly defined and easily managed policies that can be applied across the entire network. If you then decide to migrate from one IdP to another vendor, you only need to change one identity integration with Cloudflare, and all your downstream applications and existing policies will continue to work.

#### Access control

The focus of this document is security, and now that applications, devices, identities, and networks are all connected, every request to and from any resource on the network, including requests to the Internet, is subject to Cloudflare's access control and firewall services. Two services apply policy-based controls to traffic.

* **Zero Trust Network Access**: Our [Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) product manages access to specific networks or applications that are deemed private. It enforces authentication either for users via an existing identity provider, or for other applications via service tokens or mTLS.
* **Secure Web Gateway**: Our [Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) product is used to analyze traffic and apply policies, no matter the destination. It is most commonly used to allow, block, or isolate traffic that is destined for the Internet. This can be used to apply access controls to SaaS applications, but any traffic flowing through Cloudflare can be inspected and acted upon by Gateway. Therefore it can also be used to add additional access controls to non-Internet, private tunneled applications.
![Cloudflare's ZTNA and SWG services can be combined to secure both private and Internet access.](https://developers.cloudflare.com/_astro/security-ref-arch-21.CYH5oM7H_Bgt5p.svg) 

Both of these technologies can be combined to ensure appropriate access to private applications. For users with our [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/) installed, the policies can also include device-level requirements. When combined with identity data, policies such as the following can be written to control access to, for example, an internal database administration tool.

* User must have authenticated via the company IdP, and used MFA as part of the authentication
* User must be in the "Database Administrators" group in the IdP
* User device must have a CrowdStrike risk score above 70
* User device must be on the very latest release of the operating system
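Expressed as logic, a policy like the one above is simply a conjunction of identity and device-posture checks. The TypeScript sketch below models that evaluation; the field names and structure are illustrative only, not Cloudflare's policy schema.

```typescript
// Illustrative model of a ZTNA access decision (not Cloudflare's policy schema).
interface SessionContext {
  authenticatedViaIdp: boolean;
  usedMfa: boolean;
  groups: string[];          // group memberships synchronized from the IdP
  postureRiskScore: number;  // 0-100 device score reported by a posture provider
  osUpToDate: boolean;
}

// Every condition must hold for access to be granted: deny by default.
function canAccessDbAdminTool(ctx: SessionContext): boolean {
  return (
    ctx.authenticatedViaIdp &&
    ctx.usedMfa &&
    ctx.groups.includes("Database Administrators") &&
    ctx.postureRiskScore > 70 &&
    ctx.osUpToDate
  );
}
```

The key property is that a single failed condition (for example, an MFA-less login) denies access, mirroring the default-deny posture of Zero Trust.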

It is possible to define access groups of users that can be applied across multiple policies. This allows IT and security administrators to create a single definition of what a secure administrator looks like, which is then reusable across many policies.

![Policies can easily be written which define tight access groups to private resources.](https://developers.cloudflare.com/_astro/security-ref-arch-22.DQuxIF4A_18eRkk.svg) 

#### Protecting data

All traffic flows through Cloudflare, and therefore so does all data. This allows you to apply data controls to that traffic. Typically, employees are allowed access to sensitive applications and data only on managed devices, where the device agent installs Cloudflare certificates that allow Cloudflare to terminate TLS connections on our network. This in turn allows inspection of the contents of HTTPS web traffic, and policies can be written to manage and secure that data.

Cloudflare has a [data loss prevention](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/) (DLP) service that defines profiles that can be used to identify sensitive data. These profiles are then used in Gateway policies to match specific traffic and either allow, block, or isolate it.
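A DLP profile can be thought of as a named set of detection patterns, which gateway policies then consult to choose an action. The sketch below models that relationship with a simple regex-based profile for U.S. Social Security numbers. This is a deliberate simplification; real DLP detection also uses validation logic, confidence thresholds, and surrounding context to reduce false positives.

```typescript
// Simplified model of a DLP profile: a named set of detection regexes.
interface DlpProfile {
  name: string;
  patterns: RegExp[];
}

type GatewayAction = "allow" | "block" | "isolate";

// Decide the action for an HTTP body: block when any profile pattern matches.
function inspectBody(body: string, profiles: DlpProfile[]): GatewayAction {
  for (const profile of profiles) {
    if (profile.patterns.some((pattern) => pattern.test(body))) {
      return "block";
    }
  }
  return "allow";
}

const ssnProfile: DlpProfile = {
  name: "U.S. SSN (illustrative)",
  patterns: [/\b\d{3}-\d{2}-\d{4}\b/],
};
```

Because the profile is defined once and referenced by policies, updating a detection pattern immediately applies everywhere the profile is used.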

The same DLP profiles can also be used in our Cloud Access Security Broker (CASB) service, where Cloudflare is integrated via APIs to SaaS applications. We then scan the storage and configuration of those applications looking for misconfiguration or sensitive data that's publicly exposed.

#### Securing Internet access

A lot of this section has focused on protecting access to private networks and applications, but a business must also protect their employees and their devices. Our [secure web gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) (SWG) service sits between users connected to Cloudflare and any resource they are attempting to access, both public and private. Policies can be written to prevent employees from accessing high-risk websites or known sites that distribute malware. Policies can also be written to mitigate phishing attacks by blocking access to domains and websites known to be part of phishing campaigns. Protecting users and their devices from Internet threats also reduces associated risks of those same users and devices accessing private resources.

Another critical private resource to secure is email. It is often the most sensitive resource of all, as it contains confidential communications across your entire organization. It's also a common attack surface, mostly by way of phishing attacks. Cloudflare's [Email security ↗](https://www.cloudflare.com/zero-trust/products/email-security/) service (CES) examines all emails in your employees' inboxes, detects spoofed, malicious, or suspicious emails, and can be configured to act accordingly. CES can be integrated by changing your domain MX records to redirect all email via Cloudflare. Another option, for Microsoft and Google, is to integrate via API and inspect email already in a user's inbox. For suspicious emails, links are rewritten to use Cloudflare's [browser isolation service](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/), so that when a user follows a link, their local machine is protected against any malicious code that might run in the browser.

![Cloud email security filters unwanted email traffic from your users' inboxes.](https://developers.cloudflare.com/_astro/security-ref-arch-23.DIu_T4WS_Z197s7j.svg) 

### Observability

No matter if your resources are private or public, visibility into what's going on is critical. The Cloudflare administrative dashboard has a wide range of built-in dashboards and reports to get a quick overview. Notifications can also be configured to inform admins, either via email or services such as PagerDuty, of important events.

All Cloudflare services provide detailed logs into activity. These logs can also be exported into other security monitoring or SIEM tools via our log shipping integrations. There are built-in integrations for common services such as AWS, Datadog, Splunk, New Relic, and Sumo Logic. But we also support generic distribution of logs into Azure and Google storage as well as Amazon S3 and S3-compatible services.
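Log export is configured as a job: a dataset to ship plus a destination to ship it to. The sketch below builds the kind of JSON body such a job-creation API call would carry, shipping HTTP request logs to an S3 bucket. The field names follow the general shape of Cloudflare's Logpush jobs as an assumption, and the bucket, region, and field list are placeholders; confirm the exact schema against the current API reference before use.

```typescript
// Sketch of a Logpush-style job payload for shipping HTTP request logs to S3.
// Field names are assumed from Logpush's general job shape; verify against
// the current API reference. Bucket, region, and field list are placeholders.
function buildLogpushJob(bucket: string): Record<string, unknown> {
  return {
    name: "http-requests-to-s3",
    dataset: "http_requests",
    enabled: true,
    // Destination string encodes the storage target and its options.
    destination_conf: `s3://${bucket}/logs?region=us-east-1`,
    output_options: {
      field_names: ["ClientIP", "EdgeStartTimestamp", "RayID"],
      timestamp_format: "rfc3339",
    },
  };
}
```

The same payload shape, with a different destination string, would target the other supported sinks (Azure, Google Cloud Storage, or S3-compatible services).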

In summary, the following diagram details how Cloudflare's SASE services can connect and secure access to your private resources. For a more in-depth review, please read our [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/).

![Cloudflare's SASE services connect and secure access to your private resources.](https://developers.cloudflare.com/_astro/security-ref-arch-24.DyfzYaJH_Z2pc5vA.svg) 

## Developer platform

Many of Cloudflare's security services are built on a highly optimized serverless compute platform based on [V8 Isolates ↗](https://blog.cloudflare.com/cloud-computing-without-containers) which powers our developer platform. Like all our services, serverless compute workloads run on all servers in our global network. While our security services offer a wide range of features, customers always want the ultimate flexibility of writing their own custom solution. Customers therefore can use Cloudflare Workers and its accompanying services (R2, D1, KV, Queues) to interact with network traffic as it flows to and from their resources, as well as implementing complex security logic.

The following use cases show how our customers’ security teams have used our [developer platform ↗](https://workers.cloudflare.com/):

* In our ZTNA service, Cloudflare Access, when a request is made to access a private resource, that request can include a call to a Cloudflare Worker, passing in everything known about the user. Custom business logic can then be implemented to determine access. For example:  
   * Only allow access during employee working hours. Check via API calls to employee systems.  
   * Allow access only if an incident has been declared in PagerDuty.
* Implement honeypots for bots: Because Workers can be attached to routes of any Cloudflare-protected resource, you can examine the bot score of a request and then redirect or modify the request if you suspect it's not legitimate traffic. For example, execute the request but modify the response to redact information or change values to protect data.
* Write complex web application firewall (WAF) type rules: As described above, our WAF is very powerful for protecting your public-facing applications. But with Workers, you can write incredibly complex rules based on information provided in the [IncomingRequestCfProperties](https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties), which expose metadata for every request. These properties contain extensive information and can be expressed as code for effective rule implementation.
* Enhance traffic with extra security information: Your downstream application may have other security products in front of it, or maybe provides other security if certain HTTP headers exist. Using Workers, you can enhance any requests to the application and add in headers to help the downstream application implement greater security controls.
* Write your own authentication service: Some customers have extreme requirements, and the power of Workers allows you, as we have with our own product suite, to write entire authentication stacks. One such customer [did just this ↗](https://www.cloudflare.com/case-studies/epam/). While this isn't common, it's an example of the flexibility of using Cloudflare. You can mix complex code that you write with our own products to fine-tune exactly the right security outcome.
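The first use case above, gating access by working hours, can be sketched in a few lines of Worker-style TypeScript. This is a simplified illustration of the decision logic only: a production integration with Access would also verify the signed identity token it receives and sign its response, and might call out to an HR or scheduling API rather than use a fixed schedule, all of which is omitted here. The 09:00 to 18:00 UTC weekday window is an arbitrary example.

```typescript
// Simplified access-decision logic: allow only during working hours
// (09:00-17:59 UTC, Monday through Friday). A real integration would also
// verify and sign identity tokens; only the pure decision is shown so the
// logic is directly testable.
export function isWithinWorkingHours(now: Date): boolean {
  const day = now.getUTCDay();    // 0 = Sunday ... 6 = Saturday
  const hour = now.getUTCHours();
  const isWeekday = day >= 1 && day <= 5;
  return isWeekday && hour >= 9 && hour < 18;
}

// Shape of the answer a decision endpoint would return to the caller.
export function decide(now: Date): { success: boolean; status: number } {
  const allowed = isWithinWorkingHours(now);
  return { success: allowed, status: allowed ? 200 : 403 };
}
```

A Worker's `fetch` handler would wrap `decide` and serialize the result as its HTTP response; keeping the decision as a pure function makes it easy to unit test, which is one of the DevSecOps advantages listed below.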

Using Workers for implementing some of your security controls has the following advantages:

* **Advanced logic and testability**: Enables the implementation of highly sophisticated logic that's easily testable through unit tests.
* **Accessibility to developers**: Security features are accessible to a broader audience due to native support in languages like JavaScript, TypeScript, Rust, and Python, catering to developers' familiarity.
* **Granularity and flexibility**: Offers unparalleled granularity, with support for regex, JSON parsing, and easy access to request/response headers and bodies enriched by Cloudflare. Policies can be designed based on any feature of the request/response.
* **Response modification**: While traditional security stacks often focus solely on requests, Workers empowers effortless modification of responses. For instance, verbose error messages can be obscured to enhance security.
* **Implement DevSecOps lifecycles**: Workers makes it very easy to adhere to DevSecOps best practices like version control, code audits, automated tests, gradual roll-outs, and rollback capabilities.

However, you should also consider the following:

* **Cost**: Adding Workers into the request path incurs extra cost. However, this is often acceptable where the security outcome justifies the expense.
* **Latency**: While minimal, there will always be some impact on traffic latency because you are running your own logic on every request.
* **Requires developer skill set**: This is a bit obvious, but worth mentioning. Using Workers requires a development team to create, test, and maintain whatever code is implemented.

You can review some examples of how our Workers platform can be used for [security](https://developers.cloudflare.com/workers/examples/?tags=Security) or [authentication](https://developers.cloudflare.com/workers/examples/?tags=Authentication) use cases.

## Summary

You should now have a good understanding of the massive scale of the Cloudflare network, how it's secured and operated, and the broad range of services available to you for protecting your business assets. We have built the future of networking and security, and we invite you to consider using our services to better secure your business.

In summary, the benefits of using Cloudflare for your business’s security are:

* Protect all your business assets, public or private.
* Leverage a comprehensive range of security services on a single platform.
* Rely on a massively scaled network with high performance and reliability.
* Implement security controls once, in a single dashboard, and impact traffic from anywhere.
* Empower DevSecOps teams with full API and Terraform support.

We have a very simple [self-service signup ↗](https://dash.cloudflare.com/sign-up), where many of our services can be evaluated for free. If you wish to work with our expert team to evaluate Cloudflare, please [reach out ↗](https://www.cloudflare.com/plans/enterprise/contact/).


---

---
title: Designing ZTNA access policies for Cloudflare Access
description: This guide is for customers looking to deploy Cloudflare's ZTNA service. It provides best practices and guidelines for how to effectively build the right policies.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Designing ZTNA access policies for Cloudflare Access

**Last reviewed:**  over 1 year ago 

## Introduction

Organizations today are increasingly adopting a [Zero Trust security ↗](https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/) posture to safeguard company assets and infrastructure in a constantly evolving threat landscape. The traditional security associated with legacy network design assumes trust within the corporate network perimeter. In contrast, Zero Trust operates on the principle of "Never trust, always verify" and implements continuous [authentication and strict access controls ↗](https://www.cloudflare.com/learning/access-management/what-is-access-control/) for all users, devices, and applications, regardless of their location or network.

Typically two technologies play a role in a Zero Trust architecture. First, a [Secure Web Gateway (SWG) ↗](https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/) filters outbound traffic destined for the Internet and blocks users from accessing high risk websites such as those involved in phishing campaigns. Then, to enable remote access for users to SaaS apps, internally-hosted applications and networks, Zero Trust Network Access ([ZTNA ↗](https://www.cloudflare.com/learning/access-management/what-is-ztna/)) services are used to create secure tunnels and provide access for remote users into private applications.

This guide is for customers looking to deploy Cloudflare's ZTNA service ([Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/)) and provides best practices and guidelines for how to effectively build the right policies. If you have not already done so, we recommend also reading Cloudflare's [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/), which goes into detail on all aspects of how to use Cloudflare as part of your Zero Trust initiatives.

### Who is this document for and what will you learn?

This document is aimed at administrators who are evaluating or have adopted Cloudflare to replace existing VPN services or provide new remote access to internal resources. This serves as a starting point for designing your first ZTNA policies and as an ongoing reference. This guide covers three main sections:

* **Technical prerequisites**: What needs to be in place before you can secure access to your first application and define access policies.
* **Building policies**: The main components of an access policy and how they are combined.
* **Use cases**: Common use cases and policies that can serve as blueprints for your own policy designs.

This design guide assumes you have a basic understanding of Cloudflare's ZTNA solution, [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/access-controls/). Therefore, this guide focuses on designing effective access policies and assumes you have already configured [DNS](https://developers.cloudflare.com/cloudflare-one/traffic-policies/get-started/dns/), [identity](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/) and [device posture providers](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/) as well as [created connectivity](https://developers.cloudflare.com/cloudflare-one/networks/) to self-hosted applications and related networks.

By the end of this guide, you will be equipped to implement granular access policies that enforce Zero Trust principles across various common enterprise scenarios.

## Prerequisites

This section covers the essential architectural components and concepts to understand before you can design granular access policies.

Note

We recommend reading the [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/) to get a deeper understanding of connecting applications, identity providers, and device posture providers.

Cloudflare allows organizations to facilitate application access using our [connectivity cloud ↗](https://www.cloudflare.com/connectivity-cloud/), which securely connects users, applications and data regardless of their location. Core to the platform is Cloudflare's [extensive global network ↗](https://www.cloudflare.com/network/) which delivers low-latency connectivity for users worldwide. By running every service in every data center, Cloudflare applies networking, performance and security functions in a single pass, eliminating the need to route traffic through multiple, specialized security servers, and therefore reduces latency and avoids performance bottlenecks.

![Figure 1 shows the basic components involved in remote access with Cloudflare's ZTNA service.](https://developers.cloudflare.com/_astro/figure1.CjKTWbna_Z1Cgds4.svg "Figure 1 shows the basic components involved in remote access with Cloudflare's ZTNA service.")

Figure 1 shows the basic components involved in remote access with Cloudflare's ZTNA service.

There are two main ways to provide access to private applications and networks: by public hostname, where requests are proxied to the application, or by private IP, where the user's device or network is connected to the private corporate network via Cloudflare.

### Active domain in Cloudflare

To use public hostnames, you need to have an [active domain](https://developers.cloudflare.com/fundamentals/manage-domains/add-site/) in Cloudflare. Most customers use Cloudflare as their primary DNS service, but it is possible to configure domains for use with Access and maintain [DNS records elsewhere](https://developers.cloudflare.com/dns/zone-setups/partial-setup/).

### Network route to applications

For Cloudflare to control access, it needs to be in front of the application and have a secure and reliable network route for successfully authenticated users. Requests for application access come to Cloudflare first, where policy is applied, and then if successful, user requests are routed to the application.

Cloudflare supports access to the following types of applications:

* SaaS applications on the Internet
* Self-hosted applications accessed via public hostname
* Self-hosted applications accessed via private IP

For SaaS and other Internet-facing applications, access from Cloudflare is simple — it is already on the Internet. But for self-hosted applications, you create a tunnel from Cloudflare to the private network where the application is running. There are two methods for doing this:

* Our recommended approach is to use [software agents](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) such as [cloudflared](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/) or [WARP connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/). (Note that only cloudflared currently supports proxying public hostnames to private applications.)
* For network-based connectivity, [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/) (formerly Magic WAN) uses IPsec or GRE tunnels connecting Cloudflare to existing network appliances that are connected to the private networks, and [Network Interconnect](https://developers.cloudflare.com/network-interconnect/) creates direct connectivity if your applications run on servers in a data center Cloudflare operates in. (For migrating from existing legacy VPN solutions to network-based tunnels, you may find [this guide](https://developers.cloudflare.com/reference-architecture/design-guides/network-vpn-migration/) useful.)

Once connectivity to your applications is established, it is time to facilitate user access. Depending on your policy requirements (more on this later), users can access the application directly over an Internet connection to a public hostname. For greater security, we recommend using our [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/), the Cloudflare One Client, which creates a tunnel directly to Cloudflare and also provides information about the device for use in access policies.

### Identity

A critical part of application access is authenticating a user. Cloudflare has a [built-in authentication](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/one-time-pin/) method based on email, but we highly recommend configuring a third-party identity provider. We support both consumer and enterprise [identity providers](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/), and any SAML- or OIDC-compliant service can be used. Group membership is one of the most common attributes for defining application access; groups can be defined manually or imported using the System for Cross-Domain Identity Management ([SCIM](https://developers.cloudflare.com/cloudflare-one/team-and-resources/users/scim/)).

### Device posture

The final prerequisite for building truly effective access policies is to configure [device posture](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/). When using the [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/), Cloudflare has access to a [variety of information](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/) about the device which can then be used in an access policy. When using an [agentless method](https://developers.cloudflare.com/reference-architecture/diagrams/sase/sase-clientless-access-private-dns/) to access applications, only the user identity information is available. We also support using device posture information from [other vendors](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/), such as Microsoft, CrowdStrike, and SentinelOne.

![Figure 2 - two employees with different devices trying to access the same corporate application. Only the user with the device agent can access the SSH service.](https://developers.cloudflare.com/_astro/figure2.BibmIt2I_NBUE1.svg "Figure 2 - two employees with different devices trying to access the same corporate application. Only the user with the device agent can access the SSH service.")

Figure 2 - two employees with different devices trying to access the same corporate application. Only the user with the device agent can access the SSH service.

## Building policies

To quickly summarize the architecture described so far, Cloudflare is:

* In front of network access to the application.
* Integrated with your identity providers.
* Aware of device posture details for your users using our device agent or a third party vendor.

When a user makes a request to access an application, they must first authenticate, then, before access is granted, policies in the application are evaluated based on the data associated with the requesting user. Policies and other application specific settings are defined in an Access application.

### Access application types

Cloudflare Access supports four main types of applications:

* **Self-hosted** refers to applications that your organization hosts and manages, either on premises or in the cloud. Cloudflare creates a public hostname which it uses to proxy traffic through a secure tunnel to the application. While Access can protect a server that is already reachable on the public Internet, we recommend using `cloudflared` to create a secure, outbound-only connection from your application to Cloudflare's edge. Cloudflare then reverse proxies the target application to your users.
* **Private IP** applications are similarly privately hosted, but lack fully qualified public hostnames. Access can be facilitated via `cloudflared`, WARP Connector, Cloudflare WAN, or Cloudflare Network Interconnect. Remote users who are not on a network already connected to Cloudflare will need the device client to reach the application by private IP; to avoid exposing raw IP addresses to users, use [internal DNS services](https://developers.cloudflare.com/cloudflare-one/traffic-policies/resolver-policies/#use-cases) to resolve private hostnames to private IP addresses. It is also possible to provide access without any software deployed to the client by using our agentless [browser isolation service](https://developers.cloudflare.com/reference-architecture/diagrams/sase/sase-clientless-access-private-dns/).
* **SaaS** applications are accessed over the public Internet, and therefore do not require any tunnel connectivity to Cloudflare. Instead, Access acts as an identity proxy between users and the SaaS application. When a user attempts to access the SaaS app, they are first authenticated by Cloudflare, which redirects to your main identity service. SaaS applications are then configured via SAML or OAuth to trust Cloudflare. This allows organizations to implement additional security layers (like device posture checks) and centralize access control for their SaaS applications, even if the SaaS or identity provider does not natively support these features.
* **Infrastructure** applications enable you to control access to individual servers, clusters or databases in a private network. Infrastructure apps work by defining a 'target' proxied over `cloudflared`, and allow you to group multiple machines under the same target, essentially defining common access policies across potentially disparate infrastructure resources. Built-in access and command logging capabilities mean organizations can maintain detailed audit trails for compliance and security investigation purposes.

Note

It is possible to configure SaaS applications to accept traffic only coming from Cloudflare. This forces all SaaS application traffic to be proxied and routed via Cloudflare Gateway which, in turn, allows for the use of security controls to inspect and filter traffic such as downloads of sensitive company data from SaaS applications. The second use case below will describe how to achieve this.

Access applications typically map directly to a single application. However, it is possible to have an Access application, and its associated policies, sit in front of more than one application endpoint. This might be a range of IPs related to multiple Windows RDP servers where you wish to implement a common access policy. The same idea applies to public hostnames, where you might have more than one hostname referring to several applications to which you wish to apply the same policies. For instance, you might have wiki.domain.com and wiki.domain.co.uk: different application instances, but with common access policy requirements.
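To make this concrete, the sketch below shows what a single self-hosted Access application covering both hostnames might look like as an API payload. The field names mirror Cloudflare's public API, but treat the exact schema (in particular `self_hosted_domains`) as an assumption and confirm it against the current API reference.

```python
# Illustrative sketch only: one Access application fronting two hostnames
# that share the same access policies. Field names mirror Cloudflare's
# public API but should be verified against the current schema.

app = {
    "name": "Wiki",
    "type": "self_hosted",
    "domain": "wiki.domain.com",      # primary hostname
    "self_hosted_domains": [          # every hostname the application covers
        "wiki.domain.com",
        "wiki.domain.co.uk",
    ],
    "session_duration": "24h",
}

# Policies attached to this one application now govern both hostnames.
```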

Next, we examine the main elements of a ZTNA-protected application that need to be understood to create effective access policies, then later in the document we will examine some use cases that apply those specific elements.

### Authentication

Authenticating a user's identity is a key component of any Zero Trust policy. When attempting to log into an application, a user will be redirected to a configured identity provider. If a user fails to authenticate with the identity provider, Cloudflare will not accept their request for the application.

As mentioned above, Cloudflare can be integrated with all your identity providers (IdPs), both enterprise and consumer. Then at the application and policy level, you choose which IdPs you want to allow for authentication. For example, you may have an application that only a limited number of employees can access. Therefore, you would only enable your corporate IdP. For another application, you may wish to allow access to a wider group of non-employee users, such as contractors or third-party partners. Some of those users you might authenticate via their GitHub or LinkedIn credentials.

When a user attempts to access an application they will be presented with a sign-in page where they choose which IdP to authenticate with. For applications with only a single IdP, you can automatically redirect the user to that IdP. It is also possible to configure the application to display every possible IdP that has been configured, allowing you to add new providers in the future without the need to update the policy.

![Figure 3 - How employees from different parts of the organization authenticate to the same application.](https://developers.cloudflare.com/_astro/figure3.eRr6LFPW_Z1aBTIk.svg "Figure 3 - How employees from different parts of the organization authenticate to the same application.")

Figure 3 - How employees from different parts of the organization authenticate to the same application.

After authentication, the IdP is going to send information about the identity back to Cloudflare. Depending on the IdP, this information may include [Authentication Method Reference ↗](https://datatracker.ietf.org/doc/html/rfc8176) (amr) values, IdP groups, SAML attributes or OIDC claims which can then be used in policies.

When using our device agent, users must also authenticate and can be presented with a custom list of IdPs. Once the agent is authenticated, the device can connect to Cloudflare, and applications can be configured to skip authentication, instead trusting the existing authentication session associated with the device agent.

### Policies

Now we arrive at the main focus of this guide: the policies which define access to applications. This is where the real work is done to define who has access, and how. Before looking at example use cases, here is a breakdown of how policies work.

![Figure 4 - Our ZTNA service Access can use a wide variety of attributes in an access policy.](https://developers.cloudflare.com/_astro/figure4.Hsz5t8u9_1QdwfX.svg "Figure 4 - Our ZTNA service Access can use a wide variety of attributes in an access policy.")

Figure 4 - Our ZTNA service Access can use a wide variety of attributes in an access policy.

Each application can contain multiple policies, which are evaluated in order. Because multiple policies, each with multiple sets of rules, can get quite complex, there is a policy tester where you provide a username and see how that user is evaluated against all the policies and rules. Policies consist of the following elements:

#### Name

While it seems obvious what this is for, we highly recommend having a strategy for naming your policies. This is because you will likely create similar policies across multiple applications, such as "Allow all full-time employees" or "Block high-risk users". Using the same naming scheme across all applications will vastly streamline your ability to review application access and to understand the full list of policies in the future.

#### Action

The Action field in a policy determines what happens when a user or service matches the policy's criteria. There are four main types of actions:

* **Allow** grants access to the application. A login page will be presented to a user on initial access request.
* **Block** denies access to the application. This is generally not required because Access denies by default. The main reasons to implement a Block policy are testing a specific policy condition or short-circuiting policy evaluation: if a Block policy has higher precedence than an Allow and a user matches the Block policy, all other policy evaluation ceases.
* **Bypass** disables any Access enforcement for traffic that matches the policy before it reaches the application. For example, a specific endpoint in an application may need to be broadly accessible over the Internet.
* **Service Auth** allows you to authenticate requests from other services or applications using [mTLS](https://developers.cloudflare.com/ssl/client-certificates/enable-mtls/) or [service tokens](https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/). No login page will be presented to the user or service if they meet this policy criteria. This is designed so that non-user requests, such as those from other applications, can access secured resources.
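In API terms, the action is carried by a policy's `decision` field. The sketch below assembles such a payload in Python; the field names follow Cloudflare's public API (where `non_identity` corresponds to Service Auth), but verify the exact schema against the current API reference, and note that the group ID and auth-method rule shown are placeholders.

```python
# Hypothetical sketch of an Access policy payload; "decision" carries the Action.
# Field names follow Cloudflare's public API, but verify against the current
# API reference before use. The group ID below is a placeholder.

def build_policy(name, decision, include, require=None, exclude=None):
    """Assemble the JSON body for creating an Access policy."""
    assert decision in {"allow", "deny", "bypass", "non_identity"}
    return {
        "name": name,
        "decision": decision,      # "non_identity" corresponds to Service Auth
        "include": include,        # OR logic
        "require": require or [],  # AND logic
        "exclude": exclude or [],  # overrides Include/Require
    }

policy = build_policy(
    "Allow all full-time employees",
    "allow",
    include=[{"group": {"id": "<employee-group-id>"}}],
    require=[{"auth_method": {"auth_method": "mfa"}}],
)
```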

Note

Cloudflare Access is a deny by default service, which means if a request does not match any policy action, the default action is "Block."

#### Session duration

Session duration refers to the length of time a user's authentication remains valid after they have successfully logged in to an application. Typically, the session duration is set for 24 hours, but you can also set durations for sensitive applications to expire immediately. This approach aligns with the core Zero Trust principle of "never trust, always verify." Even if a user initially presents the appropriate device posture and identity context, continuous verification ensures that access rights are reassessed with each new request. This method significantly reduces the risk window, as it removes the assumption that the initial authentication and authorization state remains valid over an extended period.

#### Rules

These are the main focus of a policy. Rules define all the attributes that dictate whether the policy allows or denies access, or renders the application in an isolated browser. They are composed of a selector and a value: the attribute you wish to evaluate and the data to compare it against.

Each rule is a filter to determine which users this policy is going to affect. There are several categories of rules:

* **Include** rules define who or what is eligible for access. When a user matches an "Include" rule, they become a candidate for access, subject to the other rule types in the policy. These rules use OR logic; satisfying any one is sufficient. For example, you may make an application available to a specific group but also need to include contractors from an email list; as long as the user matches one of these (group membership, or a listed email), they are included in the rule. Every policy must have at least one Include rule.
* **Require** rules set mandatory conditions that must be met for access to be granted. Unlike Include rules, "Require" rules use AND logic; every Require rule must be met. This is typically used to layer security on top of the basic access criteria defined by Include rules. For example, administrators can require that anyone trying to access an application use specific MFA methods.
* **Exclude** rules define exceptions to access, overriding the other rule types. If a user matches an "Exclude" rule, they are denied access regardless of other policy conditions. For example, a user may meet a requirement to use an MFA method during login, but if their specific [multifactor authentication (MFA) method](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/mfa-requirements/) is defined in an Exclude rule, they will be blocked by the policy. Alternatively, if a user is associated with a 'high risk' IdP group, they can be excluded on that basis even if they meet all the other posture requirements.

A useful way to picture how these rule types combine is as a funnel. Include rules define which users, traffic, or devices fall within the scope of the policy. Require rules then filter that set down to those that meet every mandatory condition, and Exclude rules remove anyone who matched both the Include and Require rules.

![Figure 5 - Policies and rules are evaluated in a funnel. With Include rules aggregating all users, Require rules mandating specific requirements and Exclude rules removing user identities from the policy evaluation.](https://developers.cloudflare.com/_astro/figure5.DEijf6Ia_10A5dl.svg "Figure 5 - Policies and rules are evaluated in a funnel. With Include rules aggregating all users, Require rules mandating specific requirements and Exclude rules removing user identities from the policy evaluation.")

Figure 5 - Policies and rules are evaluated in a funnel. With Include rules aggregating all users, Require rules mandating specific requirements and Exclude rules removing user identities from the policy evaluation.

The above diagram visualizes an example of the policy "All employees and contractors on secure devices using strong MFA". Anyone in the group "All Employees", or a contractor who has authenticated with a username in the company domain, matches this policy. They are required to be using a device that has the latest OS and encrypted storage. They must have authenticated with an MFA factor, but not SMS. They must also be accessing the application via Cloudflare's secure web gateway.
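The funnel logic reduces to three boolean checks: Include rules combine with OR, Require rules with AND, and any Exclude match overrides the rest. Below is a minimal sketch of that evaluation using the example policy from the diagram; the attribute names are illustrative placeholders, not Cloudflare's actual selector schema.

```python
# Sketch of Access rule evaluation: Include (OR), Require (AND), Exclude (override).
# The attribute names below are illustrative placeholders, not Cloudflare's schema.

def evaluate_policy(user, include, require, exclude):
    """Return True if the user matches the policy's rules."""
    if not any(rule(user) for rule in include):   # OR: any Include rule suffices
        return False
    if not all(rule(user) for rule in require):   # AND: every Require rule must hold
        return False
    if any(rule(user) for rule in exclude):       # Exclude overrides everything
        return False
    return True

# Example: "All employees and contractors on secure devices using strong MFA"
include = [
    lambda u: "All Employees" in u.get("groups", []),
    lambda u: u.get("email", "").endswith("@company.com"),
]
require = [
    lambda u: u.get("os_up_to_date", False),
    lambda u: u.get("disk_encrypted", False),
    lambda u: u.get("mfa_used", False),
]
exclude = [
    lambda u: u.get("mfa_method") == "sms",
]

employee = {"groups": ["All Employees"], "os_up_to_date": True,
            "disk_encrypted": True, "mfa_used": True, "mfa_method": "totp"}
print(evaluate_policy(employee, include, require, exclude))  # True
```

A user who satisfies Include and Require but authenticated with SMS-based MFA would still be denied by the Exclude rule, matching the behavior the funnel diagram describes.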

There are many different [types of selectors](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/#selectors). While every possible selector is not listed here, the following lists specific outcomes that organizations using Cloudflare Access typically desire when building policies. This will help you understand how to achieve a specific outcome.

* **Is user traffic coming over Cloudflare Gateway?** Guaranteeing that a user only accesses an application over our SWG, Cloudflare Gateway, is a great way to prevent unauthorized access due to phishing or credential theft. Additionally, you can ensure all traffic bound to the application is logged and filtered by Cloudflare Gateway.  
You can configure this control by enabling the "gateway" device posture check and then requiring "gateway" in your application policies. Requiring "gateway" is more flexible than relying solely on the device agent because users can also on-ramp from Browser Isolation or a Cloudflare WAN-connected site, both of which provide traffic logging and filtering. Additionally, when using the device agent, this allows you to guarantee that a user is coming from a compliant device that has passed a set of device posture checks.  
Requiring the gateway is enforced continuously for [self-hosted applications](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/self-hosted-public-app/). For SaaS apps, it is only enforced at the time of login. However, a dedicated egress IP can be leveraged in tandem to enforce that traffic always goes via Cloudflare Gateway.
* **Does the user belong to an existing group, or have specific identity attributes?** If your IdP supports SCIM, group membership information can be imported into Cloudflare, where it can be used in policies. Group information can also come from the SAML or OAuth data sent as part of authentication. In fact, when OIDC or SAML is used and claims are sent, they can be used in a policy. So if your users authenticate to your IdP using SAML, and the resulting token contains their "role", you can query that value in the rule.
* **Which identity service was used for authentication?** Similar to IdP groups and attributes, this "Login methods" selector asks which identity service was used, and, like IdP groups, this is better suited to an access group rather than a specific line item on an access policy. Login methods allow you to apply different policies to specific users who authenticated with certain identity providers. For example, you might only allow users who have authenticated with a consumer identity such as GitHub or LinkedIn to gain access if their authentication method included a hard token-based MFA.  
This is an atypical scenario, but if you do need to enable multiple IdPs for authentication, then you can use this selector to make sure users are authenticating with a specific service. The value of this requirement becomes clearer when dealing with multiple layered security policies, where you need to define different levels of access based on the login.
* **Individual or organizational emails** All identity services provide an email address, which in many cases matches the individual's username. Using an email in a policy can be useful when you want to allow access for an entire domain of users who might authenticate via a consumer IdP that allows any email. For example, you might only allow access for users who have authenticated via GitHub using their @company.com email address.  
Another good use of this selector is if you are managing a [list of emails](https://developers.cloudflare.com/cloudflare-one/reusable-components/lists/) of users that might be high risk or have been blocked from a specific application. You can use an Exclude rule, with your list to ensure a subset of users cannot access an application.
* **How did the user authenticate?** When an identity provider authenticates a user and then redirects them back to Cloudflare, it includes information about what authentication method was used. This is typically sent as [Authentication Method Reference ↗](https://datatracker.ietf.org/doc/html/rfc8176) data. Using this, you can check if MFA was used and what type.  
This can be useful to define different levels of credential requirements for different applications. For example, a general company application might just require that MFA was used and not care how. But a really sensitive administration tool might require a FIDO2 hardware-based security key, and therefore explicitly deny access if only an OTP via SMS is used as part of the authentication process.
* **What country is the request coming from?** You can set rules based on the geographic lookup of the incoming request. This could be useful for restricting access to certain countries where you do business.
* **What IP range is the request coming from?** You can set rules based on the IP range of the incoming request. For example, you could allow access only from your corporate network IP ranges.
* **Is it possible to verify device or user information from a list?** Sometimes, you might want to grant or restrict access based on specific device or user characteristics that do not fit neatly into other categories. This is where [lists](https://developers.cloudflare.com/cloudflare-one/reusable-components/lists/) come in handy: you can define or import a list of contractor emails, or a list of approved device serial numbers and use those as criteria within an Access policy. These lists can be updated manually or via our [API](https://developers.cloudflare.com/api/resources/zero%5Ftrust/subresources/gateway/subresources/lists/methods/create/), allowing for integration with other device or user management systems.
* **Is the device's security posture adequate?** This is where the device client provides telemetry on the native device making the access request. It accomplishes this by performing device-level scans. Is the device's hard drive encrypted? The agent can check if technologies like BitLocker or FileVault are active, in addition to checking for specific volume names. If you are protecting a sensitive application, or something that holds critical information, this is an effective requirement to enforce.
* **Is the request being made by another process or application?** It is not always a real human on a device attempting to access an application. This makes it useful to leverage Cloudflare Access to manage communication to APIs by other software. The request may contain service tokens, mutual TLS certificates, or SSH certificates, which enable logins for automated processes and machine-to-machine communication. Using service auth options within Cloudflare also centralizes the storage and lifecycle management of these tokens and certificates.
* **What does your third-party tool say about your device?** Many organizations use other specialized tools for endpoint security, such as CrowdStrike, SentinelOne, or Microsoft Intune, to provide telemetry regarding the security posture of the device making the application request. Rather than require the user to navigate multiple UIs, you can integrate these tools into Cloudflare One via their API, and apply their insights as device posture attributes that can be enforced during an application login.
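For the machine-to-machine case mentioned above, a non-human client typically presents its service token in the `CF-Access-Client-Id` and `CF-Access-Client-Secret` request headers. A minimal sketch, where the URL and credential values are placeholders:

```python
# Sketch of a non-human client calling an Access-protected API with a service token.
# CF-Access-Client-Id / CF-Access-Client-Secret are the service token header names;
# the URL and credential values below are placeholders.
import os
import urllib.request

def service_token_request(url, client_id, client_secret):
    """Build a request that authenticates via an Access service token."""
    return urllib.request.Request(url, headers={
        "CF-Access-Client-Id": client_id,
        "CF-Access-Client-Secret": client_secret,
    })

req = service_token_request(
    "https://api.internal.example.com/status",
    os.environ.get("CF_ACCESS_CLIENT_ID", "<client-id>"),
    os.environ.get("CF_ACCESS_CLIENT_SECRET", "<client-secret>"),
)
# urllib.request.urlopen(req) would then reach the application, provided a
# Service Auth policy matches the presented token.
```

Keeping the credentials in environment variables (rather than in code) matches the usual practice for secrets used by automated processes.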

Note

Some third-party device posture integrations can be used even when the user device does not have our agent installed. Instead, the third party integration matches the user based on email and provides information directly to Cloudflare.

#### Additional settings

Below are a few additional application settings to consider that help improve security.

##### Isolate application

Sometimes you want to manage access to a self-hosted application for less trusted, third-party users such as contractors or partners. You might want to allow them to read content in an application, but limit their ability to download files, copy and paste data, and print the page. Cloudflare Access allows you to render the application in a remote [browser](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/isolate-application/) (using [remote browser isolation, or RBI](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/)) so that the application is rendered using a headless browser on our network versus sending all the content down to the user's browser. This allows Cloudflare to then enforce a range of controls over how the user can interact with the content.

The setting is at the policy level, so one policy can allow trusted users (such as employees) to access applications normally, while another policy with browser isolation enabled can apply the RBI service for contractors.

This setting forces traffic to an isolated browser before being delivered to the end user, which means all traffic is then inspected and managed by Cloudflare Gateway. To limit what the user can do, you need to create an accompanying Gateway policy that identifies the same users and enforces the [controls](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/isolation-policies/#policy-settings) you wish to apply. Note that it is important to scope the Gateway policy to the same group of users accessing the application that the Cloudflare Access policy applies to. Otherwise, the policy will default to enforcing browser isolation for all users.

It is also possible to enforce RBI for the same set of users only when they attempt to access the application from a non-secured device. In this case, you would continue to define a policy for employees in Cloudflare Access, but also create a policy in Cloudflare Gateway to isolate the application if users going to the same application URL have failed a device posture check that deems the device unmanaged or insecure. This could be because the device does not have the company endpoint security client (CrowdStrike or SentinelOne, for example) installed, or has failed a security check. We will demonstrate this in the use cases below.

Conversely, isolating the browser also protects the local device from anyone attempting to exploit vulnerabilities in, or execute malicious code through, the application.

##### Justification

You may wish to audit every authentication event for an application and capture justification details. This setting creates a more well-defined audit trail of user access, and allows administrators to review and analyze access patterns and justifications. When enabled, users will be prompted to provide a brief explanation before gaining access. This can be particularly useful for sensitive applications or during specific time periods, such as outside normal business hours.

##### Temporary authentication

Add an additional layer of access control by requiring users to obtain "temporary authentication" approval from designated authorizers before accessing the application. When enabled, users requesting access will trigger a notification to authorized approvers.

### Access Groups

One of the most important parts of defining ZTNA policies is to leverage reusable elements called [Access Groups](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/groups/). Each access group uses the same rules we've just described to define users, traffic or devices. These groups can then be used across many policies to allow, deny, bypass, or isolate access to an application.

For example, you can define "Employees" once as an Access Group, and then use that in every application policy where you want to refer to employees. Updates to this Access Group would then be reflected in every policy. This is also a good way to include nested logic (for example, users with a Linux device that has antivirus software enabled).
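To illustrate how this nested logic composes, here is a minimal Python sketch. This is not Cloudflare's implementation; the rule structure and user fields are assumptions for illustration only.

```python
# Illustrative model of an Access Group: "include" defines scope
# (any rule may match), "require" narrows it (all rules must match).
def matches_group(user, include, require):
    in_scope = any(rule(user) for rule in include)
    meets_requirements = all(rule(user) for rule in require)
    return in_scope and meets_requirements

# Hypothetical "Employees" group: in the directory group, on a
# Linux device that has antivirus software enabled.
employees = dict(
    include=[lambda u: "Full-Time Employees" in u["idp_groups"]],
    require=[lambda u: u["os"] == "linux" and u["antivirus_enabled"]],
)

user = {"idp_groups": ["Full-Time Employees"], "os": "linux", "antivirus_enabled": True}
print(matches_group(user, **employees))
```

Because the group is defined once and referenced by name, every policy that reuses it picks up changes automatically.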

Below is a diagram featuring an Access Group named "Secure Administrators," which uses a range of attributes to define the characteristics of secure administrators. The diagram shows the addition of two other Access Groups within "Secure Administrators". The groups include devices running on either the latest Windows or macOS, along with the requirement that the device must have either FileVault (macOS) or BitLocker (Windows) disk encryption enabled.

![Figure 6 - An access group that matches to IT administrators on secure systems.](https://developers.cloudflare.com/_astro/cf1-ref-arch-24.aWooHqll_22Jt0n.svg "Figure 6 - An access group that matches to IT administrators on secure systems.")

Figure 6 - An access group that matches to IT administrators on secure systems.

## Use cases

Now that the basic infrastructure to secure access to an application and the policy systems have been covered, let's dive into some common use cases.

### Only allow company wiki access to users on trusted devices

Many companies host some sort of internal content system where confidential company information resides. Wikis are a common type of application that allows employees to collaborate easily with anyone. But because this information is confidential, it is important both to validate the user's authentication with strong credentials, and to ensure that their access is from a secure device over a secure connection.

However, sometimes company users use non-company devices and need to access the wiki. You may wish to set up a policy that allows this, but limits the user's actions. For instance, prevent them from editing the data, or from copying and pasting it to their unmanaged device. This use case explains how to set up a Cloudflare Access application that defines secure access for employees, giving them fully functional access when they are on a secured device over a secured connection, while still allowing limited access from a non-secure device.

First, create an Access application with the following parameters:

| Name            | Company Wiki                                                   |
| --------------- | -------------------------------------------------------------- |
| Type            | Self-hosted                                                    |
| Public Hostname | wiki.mycustomerexample.com                                     |
| Authentication  | Company Microsoft Entra IdP                                    |
| Policies        | Employees on trusted devices, Employees using untrusted devices |
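If you automate this configuration, the application above could be created with a payload along these lines. This is a sketch modeled on the Cloudflare v4 API's Access "apps" resource; treat the exact field names as assumptions and verify them against the current API reference before use.

```python
# Sketch of an Access application payload; field names follow the
# Cloudflare v4 API's Access "apps" resource, but verify against the
# current API reference before relying on them.
wiki_app = {
    "name": "Company Wiki",
    "type": "self_hosted",
    "domain": "wiki.mycustomerexample.com",
    # Restrict login to the company Microsoft Entra IdP
    # (the UUID here is a placeholder).
    "allowed_idps": ["<entra-idp-uuid>"],
}

# Policies attach to the application and are evaluated in order.
policy_order = ["Employees on trusted devices", "Employees using untrusted devices"]
print(wiki_app["domain"], policy_order)
```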

Before we examine how the two policies are defined, note the example Access Group below, created to identify employees on approved devices running the latest operating system version.

#### Access Group: Secure employees

This access group is going to be used in both policies, and its sole goal is to identify what a "Secure Employee" is.

| Name            | Secure Employees                                                                          |
| --------------- | ----------------------------------------------------------------------------------------- |
| **Include**     |                                                                                           |
| Azure AD Groups | "Full-Time Employees"                                                                     |
| **Require**     |                                                                                           |
| Azure AD Groups | "Completed security training"                                                             |
| OS Version      | "Latest version of macOS", "Latest version of Windows", "Latest Kernel version for Linux" |

This is a very simple Access Group, with just two group selectors. Note that because we are checking membership based on groups from a specific directory, it also implies that the user must have authenticated to that directory. This means that if, in the future, you move to another identity provider or change the group membership requirements that define a Full-Time Employee, you only need to change this Access Group once.

As you can see, it defines that employees are those in the Azure AD group "Full-Time Employees" who are also in the group "Completed security training." The first selector defines the initial scope of the Access Group, and the second requires that they must also be in that specific group.

This Access Group requires that three [device posture checks](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/) have been created for the [OS version](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/os-version/). For example, the posture check "Latest version of macOS" is defined as "macOS version is greater than or equal to 15.1" and reflects the latest version the company considers stable and secure (vs. the very latest OS version). Once included in the Access policy, this will enforce the logic we’ve established here - if any user wants to sign in as a 'Secure Employee', they'll need to meet these requirements.
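The OS version check itself reduces to a version comparison. Below is a hypothetical sketch of the "greater than or equal to" logic, using the macOS 15.1 minimum from the example above; real posture checks are configured in the dashboard, not written in code.

```python
# Sketch of an OS-version posture check: pass when the reported
# version is at or above the minimum the company considers secure.
def os_version_ok(reported: str, minimum: str) -> bool:
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(reported) >= to_tuple(minimum)

print(os_version_ok("15.1.1", "15.1"))  # a patch release still satisfies ">= 15.1"
print(os_version_ok("14.6", "15.1"))    # an older major version fails the check
```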

#### Employees on trusted devices

Now we define the first policy in the application. First, select the Access Group that has already been defined. Then, define the following rules to determine how users authenticate and how they connect to the application.

| Policy name             | Employees on trusted devices         |
| ----------------------- | ------------------------------------ |
| Action                  | Allow                                |
| Access groups           | Include - Secure employees           |
| **Rules**               |                                      |
| Require                 |                                      |
| Authentication Method   | MFA - Multi-Factor Authentication    |
| Gateway                 | On                                   |
| **Additional settings** |                                      |
| Isolate Application     | No                                   |

This policy ensures that users can gain full access to your company wiki only if they have passed the following requirements:

* They are full-time employees on devices with the latest operating system.
* Users have authenticated using MFA.
* Users are accessing the application via a device that has the Cloudflare device agent running.

#### Employees using untrusted devices

The second policy should handle users who are not on secure devices. Note that this policy is second in the list of policies in the application and therefore will be evaluated when users do not meet the requirements of the first policy.

| Policy name             | Employees using untrusted devices    |
| ----------------------- | ------------------------------------ |
| Action                  | Allow                                |
| Access groups           | Include - All Employees              |
| **Rules**               |                                      |
| Require                 |                                      |
| Authentication Method   | MFA - Multi-Factor Authentication    |
| **Additional settings** |                                      |
| Isolate Application     | Yes                                  |

Although this policy is very similar to the first, it removes the requirement to have a device on the latest operating system and also using our device agent. The user is still required to be a full-time employee authenticated with strong, MFA-backed credentials.
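Because Access evaluates an application's policies in order, the pair above behaves like a first-match list: the trusted-device policy wins when its requirements are met, and the untrusted-device policy acts as the fallback with isolation enabled. A minimal sketch of that ordering (policy and user fields here are hypothetical):

```python
# First matching policy determines the session's settings, so the
# untrusted-device policy acts as a fallback with isolation turned on.
def evaluate(policies, user):
    for policy in policies:
        if all(rule(user) for rule in policy["require"]):
            return policy
    return None  # no match: access denied

policies = [
    {"name": "Employees on trusted devices",
     "require": [lambda u: u["mfa"], lambda u: u["gateway"], lambda u: u["os_latest"]],
     "isolate": False},
    {"name": "Employees using untrusted devices",
     "require": [lambda u: u["mfa"]],
     "isolate": True},
]

# A personal device: MFA passes, but no device agent and an older OS.
byod = {"mfa": True, "gateway": False, "os_latest": False}
print(evaluate(policies, byod)["name"])
```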

But notice we now enable "Isolate Application." What does this mean? This forces all requests to the application to now be rendered on our RBI technology. RBI will prevent the wiki UX from loading directly in the end user's browser, and instead renders the content in a headless browser running on a server in Cloudflare's global cloud network. Then, the results of that render are securely and efficiently communicated down to the end user's browser. Because of this, the request is also sent via our SWG service, which enables you to write a policy that controls how users can interact with the wiki.

**Gateway HTTP Policy**

| Isolate company applications for users on insecure devices |                            |
| ---------------------------------------------------------- | -------------------------- |
| Action                                                     | Isolate                    |
| **Traffic**                                                |                            |
| Domain in                                                  | wiki.mycustomerexample.com |
| **Device Posture**                                         |                            |
| Passed device posture not in                               | WARP Check (Mac OS)        |
| **Settings**                                               |                            |
| Disable copy / paste                                       | Yes                        |
| Disable file downloads                                     | Yes                        |
| Disable file uploads                                       | Yes                        |
| Disable keyboard                                           | Yes                        |
| Disable printing                                           | Yes                        |

In the example above, the SWG policy is matching any traffic heading to your company wiki, then enforcing RBI (to match the ZTNA application policy) and then disabling all interaction with the wiki.

It also adds the device posture check "WARP Check (Mac OS)" to scan the user's device for the presence of our device agent. If the user's device does not have the agent installed and enabled, then the device posture check cannot occur and they will automatically fail to meet the policy requirements. If the user does have the device agent enabled, then they will pass the posture check and be granted full wiki access. Note that "WARP" is the previous name for the Cloudflare One Client, which is Cloudflare's device agent.
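The "Passed device posture not in" condition can be read as: isolate wiki traffic when the WARP check is absent from the device's set of passed checks. A sketch of that decision (the check and domain names are taken from the tables above; the function itself is illustrative):

```python
# Isolate wiki traffic only for devices that did not pass the WARP
# presence check (that is, no Cloudflare device agent running).
def should_isolate(domain: str, passed_checks: set) -> bool:
    return (domain == "wiki.mycustomerexample.com"
            and "WARP Check (Mac OS)" not in passed_checks)

print(should_isolate("wiki.mycustomerexample.com", set()))                       # unmanaged device
print(should_isolate("wiki.mycustomerexample.com", {"WARP Check (Mac OS)"}))     # managed device
```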

Essentially, the employee on an insecure device is permitted to view the wiki in a "read-only" mode, but is restricted from further interactions like uploading/downloading or copying/pasting confidential information.

This policy approach accomplishes several objectives:

1. It enforces the use of trusted devices for full access to the wiki, aligning with your Zero Trust security goals.
2. It provides a fallback option for employees using personal devices, allowing them to access the wiki in a limited, secure manner through browser isolation.
3. It incentivizes employees to use their company devices and/or keep the Cloudflare One Client enabled, which is a net positive for an organization's security posture.
4. It demonstrates the power and flexibility of more granular security controls achieved by combining Cloudflare Access policies with Cloudflare Gateway HTTP policies.

This approach both secures your wiki and establishes a model for protecting other applications — allowing your organization to maintain strong cyber hygiene while adapting to the realities of hybrid work scenarios.

### Secure access to Salesforce

The second use case implements a secure access strategy that also requires the use of the device client. However, the implementation is slightly more involved than the previous wiki example.

Before addressing the specifics, you will learn about the benefits of securing access to SaaS apps through Cloudflare. After all, Salesforce and other major SaaS providers already offer robust security features, including their own access controls, MFA, and audit logs. So why do some organizations still choose to route their SaaS traffic through Cloudflare?

The key benefit here is centralizing security policy enforcement across your entire IT ecosystem. By routing Salesforce access through Cloudflare, you are not just securing Salesforce – you are integrating it into a broader Zero Trust strategy that includes a single point of visibility for all user activity, and reduces the complexity of managing multiple security systems. It also allows you to enforce the use of many different IdPs for access to a single SaaS application.

In the context of this use case, it is important to protect Salesforce — which contains sensitive customer data — against misuse, and to secure access only to authorized users. We are going to design a secure access policy that can cover both of these objectives.

The first step is to configure an [egress IP policy under Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/). This allows you to purchase and assign specific IPs to your users that have their traffic filtered via Gateway. Then in Salesforce, you can enforce that access is only permitted for traffic with a source IP that matches the one in your egress policy. This combination ensures that the only way to get access to Salesforce is via Cloudflare.

| Egress Policy                       |                  |
| ----------------------------------- | ---------------- |
| **Identity**                        |                  |
| User Group Names                    | All Employees    |
| **Select Egress IP**                |                  |
| Use dedicated Cloudflare Egress IPs | \[203.0.113.88\] |
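On the Salesforce side, the login IP range restriction reduces to a source-IP membership test: only sessions arriving from your dedicated egress IP are accepted. A sketch using Python's `ipaddress` module, with the placeholder address from the table above:

```python
import ipaddress

# Salesforce-side view: only requests arriving from the dedicated
# Cloudflare egress IP(s) are allowed to establish a session.
ALLOWED_EGRESS = {ipaddress.ip_address("203.0.113.88")}

def from_company_egress(source_ip: str) -> bool:
    return ipaddress.ip_address(source_ip) in ALLOWED_EGRESS

print(from_company_egress("203.0.113.88"))  # traffic filtered via Gateway
print(from_company_egress("198.51.100.7"))  # direct-to-Salesforce attempt
```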

This is important not only for securing access to Salesforce, but also for adequately protecting its contents while in use. The next step is to examine the access policy, which is similar to the one we just created for the wiki. However, this policy limits access to members of the Sales or Executives groups. We are also using our Crowdstrike integration to ensure that users are on company-managed devices.

| Policy name                    | Account executives on trusted devices |
| ------------------------------ | ------------------------------------- |
| Action                         | Allow                                 |
| **Include**                    |                                       |
| Member of group                | Sales, Executives                     |
| **Require**                    |                                       |
| Authentication method          | MFA - multi-factor authentication     |
| Gateway                        | On                                    |
| Crowdstrike Service to Service | Overall Score above 80                |

The second policy now applies to all employees but we are going to apply a few more steps before access is granted.

| Policy name                    | Employees on trusted devices                                        |
| ------------------------------ | ------------------------------------------------------------------- |
| Action                         | Allow                                                               |
| **Include**                    |                                                                     |
| Member of group                | All Employees                                                       |
| **Require**                    |                                                                     |
| Authentication method          | MFA - multi-factor authentication                                   |
| Gateway                        | On                                                                  |
| Crowdstrike Service to Service | Overall Score above 80                                              |
| **Additional Settings**        |                                                                     |
| Purpose justification          | On                                                                  |
| Temporary authentication       | On                                                                  |
| Email addresses of approvers   | [salesforce-admin@company.com](mailto:salesforce-admin@company.com) |

We are going to add in temporary authentication to this second policy. That means if Cloudflare determines that the incoming request is from someone outside of the Sales or Executives department, an administrator will need to explicitly grant them temporary access. In context, this policy could be used to secure access to Salesforce for employees outside the Sales department, as the customer information could be sensitive and confidential.
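Conceptually, temporary authentication inserts a pending state between the access request and the session: no session is issued until a designated approver signs off. A simplified sketch of that flow (the states and field names are hypothetical, not Cloudflare's data model):

```python
# Minimal state machine for a temporary-authentication request: the
# session is only issued once a designated approver signs off.
def request_access(user, approvers):
    return {"user": user, "approvers": approvers, "state": "pending"}

def approve(request, approver):
    if approver in request["approvers"]:
        request["state"] = "approved"
    return request

req = request_access("analyst@company.com", ["salesforce-admin@company.com"])
req = approve(req, "salesforce-admin@company.com")
print(req["state"])
```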

This approach is important for several reasons:

* It allows for human oversight on potentially risky access attempts, reducing the chance of unauthorized access through compromised or insecure devices.
* It provides flexibility for legitimate users to access the application even when their device fails to meet the highest security standards. This encourages users to maintain good security practices on their devices.
* In addition, since all user traffic is routed through Cloudflare, we can enforce additional security measures (such as preventing the download of sensitive data) via web traffic policies.

### Only allow secure admins access to database tools

This scenario covers protecting a PostgreSQL database administration tool. This represents a privately-hosted, high-value target due to its access to sensitive data. It also requires taking extra care in designing secure access for it. Given the nature of database tools, access policies will not be layered for this use case.

| Policy name                         | Only IT admin access                  |
| ----------------------------------- | ------------------------------------- |
| Action                              | Allow                                 |
| **Include**                         |                                       |
| Assign a group                      | IT Admins                             |
| **Require**                         |                                       |
| Authentication method               | MFA - multi-factor authentication     |
| Gateway                             | On                                    |
| Device Posture - Serial Number List | Company Managed Device Serial Numbers |
| OS Version                          | Latest version of Windows             |
| Domain Joined                       | Joined to corporate AD domain         |
| **Exclude**                         |                                       |
| Authentication method               | SMS                                   |
| **Additional Settings**             |                                       |
| Purpose justification               | On                                    |

Here, we are introducing a high number of security posture checks, starting with MFA. There are two expressions regarding MFA: the first requires that users authenticate with an MFA method; the second is an 'exclude' expression specifying that SMS is not considered a valid authentication method. We do this because SMS is one of the easier methods for attackers to exploit and subvert, and is therefore [considered less secure ↗](https://sec.okta.com/articles/2020/05/sms-two-factor-authentication-worse-just-good-password) than other MFA methods. As a result, we only allow access when the user provides stronger credentials, such as a hardware key or an OTP from an authenticator app. Enforcing these stricter MFA requirements reduces the risk of credential-based attacks, and makes it much more challenging for potential attackers to gain unauthorized access to this critical database, even if they have obtained the user's password.
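The require/exclude pair on authentication method maps onto the AMR (Authentication Methods References, RFC 8176) values reported by the IdP, where "sms", "otp", and "hwk" (hardware key) are standard claim values. A sketch of the resulting check; the exact evaluation Cloudflare performs may differ:

```python
# Require MFA, but treat SMS-based factors as invalid: the user must
# present at least one acceptable second factor beyond their password.
WEAK_FACTORS = {"sms"}

def auth_method_ok(amr: list) -> bool:
    second_factors = {m for m in amr if m != "pwd"}
    return bool(second_factors - WEAK_FACTORS)

print(auth_method_ok(["pwd", "hwk"]))  # hardware key: accepted
print(auth_method_ok(["pwd", "sms"]))  # SMS only: rejected
```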

Other posture elements here include:

* Requiring the latest OS.
* The user's device is joined to a Microsoft Active Directory domain.
* The user's device is explicitly a company-managed device (shown by referencing a list of managed device serial numbers).

These combined posture checks ensure that only up-to-date, company-controlled devices within your managed environment can access the database, further reducing the attack surface and the risk of access from potentially compromised or uncontrolled endpoints.

Under additional settings, we are also requiring that users enter a purpose justification for accessing the database. This allows your security teams to analyze access patterns and identify potentially suspicious behavior. This set of security controls also ensures that access to your critical database is tightly regulated, logged, and justified — significantly reducing the risk of unauthorized access or misuse.

This level of protection and visibility would be significantly more complex and resource-intensive to achieve with disparate, standalone security solutions. Centralizing security policy enforcement via Cloudflare allows you to simplify how you implement fine-grained access to critical internal resources.

### Secure RDP access

This final use case centers on securing remote access to devices via RDP in two ways — self-hosted or private IP. Both options offer unique benefits, but ultimately it comes down to your priorities: is it more important to simplify access, or to tightly monitor activity?

We will start with the self-hosted option — proxying port 3389 over a tunnel, mapping it to a hostname.

| Application Configuration |                                     |
| ------------------------- | ----------------------------------- |
| Application Name          | RDP service on database server      |
| Hostname                  | rdp.databaseserver.company.internal |

Define the policy:

| Policy name                         | Admin Access                          |
| ----------------------------------- | ------------------------------------- |
| Action                              | Allow                                 |
| **Include**                         |                                       |
| Member of Group                     | IT Admins                             |
| **Require**                         |                                       |
| Authentication method               | MFA - multi-factor authentication     |
| Gateway                             | On                                    |
| WARP                                | On                                    |
| Device Posture - Serial Number List | Company Managed Device Serial Numbers |
| External Evaluation                 | \[Time Evaluator URL\]                |

Inside the policy, we have made this application available to our new access group for IT Admins. Under "Require," we are enforcing the use of the Cloudflare One Client specifically (as opposed to only Cloudflare Gateway). The user must be on a company-managed device, with an active device client that is authenticated to the company's instance of Cloudflare, MFA must be used during login, and there is an additional option below for external evaluation.

[External evaluation](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/external-evaluation/) means we have an API endpoint containing some sort of [access logic ↗](https://github.com/cloudflare/workers-access-external-auth-example) — in this case, time of day access. We are making an API call to this endpoint, and defining the key that Cloudflare is using to verify that the response came from the API. This is useful for several reasons:

* External evaluation allows you to create bespoke security posture checks based on criteria that may not be covered by the default set of posture checks. For this example, we will be using a service built on [Cloudflare Workers ↗](https://workers.cloudflare.com/).
* Restricting access to the terminal outside of business hours implements a form of time-based access control. This adds an extra layer of security by limiting the window of opportunity for potential attackers.
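The access logic behind such an endpoint can be very small. The linked example runs as a Cloudflare Worker in JavaScript; the sketch below expresses the equivalent time-of-day rule in Python, with business hours of 09:00 to 18:00 Monday through Friday as an assumption:

```python
from datetime import datetime

# Grant access only during business hours (09:00-17:59, Monday-Friday).
# The external evaluation endpoint would return this verdict in a
# signed response that Access verifies with the endpoint's public key.
def within_business_hours(now: datetime) -> bool:
    return now.weekday() < 5 and 9 <= now.hour < 18

print(within_business_hours(datetime(2024, 6, 3, 10, 30)))  # Monday morning
print(within_business_hours(datetime(2024, 6, 1, 10, 30)))  # Saturday
```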

Now, you will learn how to secure RDP access as a private IP application:

| Application Configuration |                 |
| ------------------------- | --------------- |
| Application Name          | RDP             |
| Destination IP            | 169.254.255.254 |

As mentioned before, private IP applications work because Cloudflare proxies the IP range across its network. The nature of this application necessitates the use of the device client: unless the user is connected to Cloudflare (and, more specifically, can take advantage of the Client-to-Tunnel connectivity), they will not be able to reach private IP addresses outside their local network.

| Traffic                                            |                                                                 |
| -------------------------------------------------- | --------------------------------------------------------------- |
| Destination IP                                     | 169.254.255.254                                                 |
| Destination Port                                   | 3389                                                            |
| **Identity**                                       |                                                                 |
| User Group Names                                   | Server Admins                                                   |
| **Device Posture**                                 |                                                                 |
| Passed Device Posture Checks                       | WARP Check (Mac OS) (File) Latest Version of macOS (OS version) |
| **Action**                                         | Allow                                                           |
| **Enforce Cloudflare One Client session duration** | 60m0s                                                           |

Defining the application here is simple, as Cloudflare automatically fills in the IP range, and you need to limit the detected protocol to RDP. However, the rules for private IP applications are slightly different. You will notice they appear as network policies under the Cloudflare Gateway menu, despite managing them in Access. Certain options, such as checking for MFA and external evaluation, do not appear here. However, these attributes can be verified when the user activates their device client and authenticates to their organization.

One option available here is enforcing the device agent client session duration. This means that after a certain amount of time, the user will be forced to reauthenticate. This feature allows you to take a Zero Trust approach to protecting private IP applications as well. It ensures that even if a user's credentials are compromised or their device is left unattended, the potential window for unauthorized access is limited. By regularly requiring reauthentication, we are continuously verifying the user's identity and authorization status, aligning with the core Zero Trust principle of "never trust, always verify."
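Session-duration enforcement is effectively a timestamp comparison at request time, using the 60-minute duration from the table above. A sketch:

```python
from datetime import datetime, timedelta

SESSION_DURATION = timedelta(minutes=60)

def needs_reauth(authenticated_at: datetime, now: datetime) -> bool:
    # Force a fresh login once the session is older than the allowed duration.
    return now - authenticated_at > SESSION_DURATION

start = datetime(2024, 1, 1, 9, 0)
print(needs_reauth(start, start + timedelta(minutes=59)))  # session still valid
print(needs_reauth(start, start + timedelta(minutes=61)))  # must reauthenticate
```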

By combining granular access controls with detailed activity logging, Cloudflare provides a comprehensive security solution for protecting and monitoring access to critical resources in a Zero Trust methodology.

## Summary

Successful ZTNA implementation is about more than just technical configuration — it requires careful consideration of your organization's specific needs, user workflows, and security requirements. Cloudflare's flexibility allows you to start with basic secure access policies, then evolve them as your organization's needs change and security requirements mature. By following the principles and practices outlined in this guide, you can create a robust security posture that protects you as precisely and transparently as possible.

If you are interested in learning more about ZTNA, SASE, or other aspects of the Cloudflare One platform, please visit our [reference architecture library](https://developers.cloudflare.com/reference-architecture/) or our [developer docs](https://developers.cloudflare.com/) to get started.

Related resources

* [Cloudflare SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/)
* [Using Cloudflare SASE with Microsoft](https://developers.cloudflare.com/reference-architecture/architectures/cloudflare-sase-with-microsoft/)
* [How to deploy Cloudflare ZTNA](https://developers.cloudflare.com/learning-paths/clientless-access/concepts/)


---

---
title: Extend Cloudflare's benefits to SaaS providers' end-customers
description: Learn how to use Cloudflare to extend performance, security, and data localization to your end users.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Extend Cloudflare's benefits to SaaS providers' end-customers

**Last reviewed:**  over 1 year ago 

## Introduction

A key aspect of developing a Software-as-a-service (SaaS) application is ensuring its security against the wide array of potential attacks it faces on the Internet. Cloudflare's network and security services can be used to protect your customers using your SaaS application, off-loading the risk to a vendor with experience in [protecting applications ↗](https://radar.cloudflare.com/reports/ddos).

This design guide illustrates how providers, building and hosting their own product/application offering, can leverage Cloudflare to extend the security, performance, and compliance benefits of Cloudflare's network to their end-customers.

The following diagrams visualize the use of the following services:

* Data Localization Suite (specifically, [Regional Services](https://developers.cloudflare.com/data-localization/regional-services/))
* [Cloudflare for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/)
* [Cloudflare Tunnels](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) to securely expose web applications (with [public hostnames](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/) and [private networks](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/))
* Load Balancers to manage traffic and ensure reliability and performance, implementing Global Traffic Management (GTM) and [Private Network Load Balancing](https://developers.cloudflare.com/load-balancing/private-network/).

This setup is ideal for SaaS providers who need minimal downtime, automatic renewal of SSL/TLS certificates, efficient distribution of traffic to healthy endpoints, and regional traffic management for compliance and performance optimization.

This document assumes that the provider's application DNS is registered and managed through Cloudflare as the primary and authoritative DNS provider. You can find details on how to set this up in the [Cloudflare DNS Zone Setup Guide](https://developers.cloudflare.com/dns/zone-setups/full-setup/).

This solution supports subdomains under your own zone while also allowing your customers to use their own domain names (vanity or custom domains) with your services. For example, for each customer you may create the custom hostname `mycustomer.myappexample.com` but also want to allow them to use their own domain, `app.mycustomerexample.com` to point to their tenant on your service. Each subdomain (`mycustomer.myappexample.com`) can be created on the main domain (`myappexample.com`) through the [Cloudflare API](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/#create-dns-records), allowing you to easily automate the creation of DNS records when your customers create an account on your service.
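As a sketch of that automation, the following Python snippet builds and submits such a proxied CNAME record through the Cloudflare v4 API. The zone ID, API token, and hostnames are placeholders you would supply from your own account:

```python
import json
from urllib.request import Request, urlopen

API_BASE = "https://api.cloudflare.com/client/v4"

def build_cname_record(customer_slug: str, fallback: str) -> dict:
    """Build the payload for a proxied CNAME record pointing a
    per-customer subdomain at the fallback origin."""
    return {
        "type": "CNAME",
        "name": customer_slug,   # e.g. "mycustomer" -> mycustomer.myappexample.com
        "content": fallback,     # e.g. "fallback.myappexample.com"
        "proxied": True,
        "ttl": 1,                # 1 = automatic TTL
    }

def create_record(zone_id: str, token: str, payload: dict) -> dict:
    """POST the record to the Cloudflare DNS API."""
    req = Request(
        f"{API_BASE}/zones/{zone_id}/dns_records",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urlopen(req) as resp:
        return json.load(resp)
```

You could call `create_record()` from your customer-onboarding flow, with the zone ID and token read from your secrets store.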

## Benefits

Before looking at how Cloudflare can be configured to protect your SaaS application through your custom hostnames, it's worth reviewing the benefits of taking this approach.

| Benefit                  | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| ------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Minimized Downtime       | Ensure [minimal downtime](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/certificate-management/issue-and-validate/validate-certificates/#minimize-downtime) not only during custom hostname migrations to Cloudflare for SaaS but also throughout the entire lifecycle of the application.                                                                                                                                                                                                                                                     |
| Security and Performance | Extends Cloudflare's [security](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/waf-for-saas/) and [performance](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/) benefits to end-customers through their custom domains.                                                                                                                                                                                                                                                                            |
| Auto-Renewal             | Automates the [renewal](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/certificate-management/issue-and-validate/renew-certificates/) and management process for custom hostname certificates.                                                                                                                                                                                                                                                                                                                                                  |
| Apex Proxying            | Supports end-customers using the [domain apex](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/hostname-validation/realtime-validation/#apex-proxying) (also known as the root domain) as a custom hostname. Used where the end-customer's DNS service doesn't allow [CNAMEs for root domains](https://developers.cloudflare.com/dns/cname-flattening/); instead, a [static IP](https://developers.cloudflare.com/byoip/address-maps/#static-ips-or-byoip) allows an A record to be used.                                                           |
| Smart Load Balancing     | Use the load balancer as [custom origins](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/start/advanced-settings/custom-origin/) to steer traffic with [session affinity](https://developers.cloudflare.com/load-balancing/understand-basics/session-affinity/). In the context of Cloudflare for SaaS, a custom origin lets you send traffic from one or more custom hostnames to somewhere besides your default proxy fallback origin.                                                                                                                 |
| O2O                      | For end-customers who already proxy traffic through Cloudflare, [O2O](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/saas-customers/how-it-works/) may be required. Generally, it's recommended for those end-customers to [not proxy](https://developers.cloudflare.com/dns/proxy-status/#dns-only-records) the hostnames used by the SaaS provider. If O2O functionality is required, please review the [product compatibility](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/saas-customers/product-compatibility/). |
| Regional Services        | Allows [regional traffic management](https://developers.cloudflare.com/data-localization/regional-services/) to comply with data localization requirements.                                                                                                                                                                                                                                                                                                                                                                                                                              |

## Products included in this guide

The following products are used to deliver this solution.

| Product                                                                                                                           | Function                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| --------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Cloudflare for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/)                            | Extends the security and performance benefits of Cloudflare’s network to your customers through their own custom or vanity domains. This includes [Certificate Management](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/certificate-management/), [WAF for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/waf-for-saas/), [Early Hints for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/early-hints-for-saas/) and [Cache for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/cache-for-saas/). |
| [DDoS Protection](https://developers.cloudflare.com/ddos-protection/)                                                             | Volumetric attack protection is automatically enabled for [proxied](https://developers.cloudflare.com/dns/proxy-status/) hostnames.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
| [Regional Services](https://developers.cloudflare.com/data-localization/regional-services/) (part of the Data Localization Suite) | Restrict inspection of data (processing) to only those data centers within jurisdictional boundaries.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| [Load Balancer](https://developers.cloudflare.com/load-balancing/)                                                                | Distributes traffic across your endpoints, which reduces endpoint strain and latency and improves the experience for end users.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/)                      | Secure method to connect to customers' networks and servers without creating holes in [firewalls](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/tunnel-with-firewall/). cloudflared is the daemon (software) installed on origin servers to create a secure tunnel from applications back to Cloudflare.                                                                                                                                                                                                                                                                                                                            |

## Cloudflare for SaaS examples

The primary objective of using Cloudflare is to ensure that all requests to your application's custom hostnames are routed through Cloudflare first, so that security controls, routing, and load balancing can be applied. Since the origin server often needs to be publicly accessible, securing the connection between Cloudflare and the origin server is crucial. For comprehensive guidance on securing origin servers, refer to Cloudflare's documentation: [Protect your origin server](https://developers.cloudflare.com/fundamentals/security/protect-your-origin-server/).

The diagrams below begin by illustrating the simplest approach to achieving this goal, followed by more complex configurations.

### Standard fallback origin setup

This standard Cloudflare for SaaS setup is the most commonly used and the easiest to implement. Typically, these providers are SaaS companies that develop and deliver software-as-a-service solutions. This setup requires only a single DNS record to direct requests to Cloudflare, which then proxies the traffic to your application via the fallback origin's A record.

![Figure 1: Standard fallback origin setup.](https://developers.cloudflare.com/_astro/standard-fallback-origin-setup.DrGJNOUB_1wPLqh.svg "Figure 1: Standard fallback origin setup.")

Figure 1: Standard fallback origin setup.

1. The custom hostname (`custom.example.com`) is configured as a CNAME record pointing to the fallback origin of the provider. The fallback origin is the server or servers that Cloudflare will route traffic to by default when a request is made to the custom hostname. This DNS record does not need to be managed within Cloudflare; it just needs to point to the Cloudflare-hosted record from the provider (`fallback.myappexample.com`).
2. The Fallback Origin is set up as an A record that points to the public IP address of the origin server. Cloudflare will route traffic sent to the custom hostnames to this origin server by default.
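
Put together, the two records above might look like this in zone-file notation (the IP address is a documentation-range placeholder):

```txt
; In the end-customer's DNS (hosted anywhere):
custom.example.com.         CNAME  fallback.myappexample.com.

; In the SaaS provider's Cloudflare zone (proxied):
fallback.myappexample.com.  A      203.0.113.10
```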

The origin server receives the details of the custom domain through either the [host header or SNI](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/reference/connection-details/). This enables the origin server to determine which application to direct the request to. This method is applicable for both custom hostnames (for example, `app.mycustomerexample.com`) and vanity domains (for example, `customer1.myappexample.com`). Since all requests for your application are now routed through the Cloudflare network, you can leverage a range of security and performance services for every request, including:

* [Web Application Firewall](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/waf-for-saas/)
* [Access control policies](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/secure-with-access/)
* [Caching of application content](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/cache-for-saas/)
* [Browser Early Hints support](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/early-hints-for-saas/)
* [Image Transformations](https://developers.cloudflare.com/images/)
* [Waiting Room](https://developers.cloudflare.com/waiting-room/)
* [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/)

For implementation details to get started, review the [developer documentation](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/start/getting-started/).
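To illustrate the Host-header routing described above, here is a minimal sketch of how an origin server might map the incoming hostname to a tenant. The tenant map and hostnames are illustrative placeholders, not part of any Cloudflare API:

```python
# Minimal sketch of Host-header tenant routing at the origin.
# Hostnames and tenant IDs below are illustrative placeholders.
TENANTS = {
    "mycustomer.myappexample.com": "tenant-a",  # vanity subdomain
    "app.mycustomerexample.com": "tenant-b",    # customer-owned custom hostname
}

def resolve_tenant(host_header):
    """Map the incoming Host header to a tenant, ignoring any port
    suffix, letter case, and a trailing dot."""
    host = host_header.split(":", 1)[0].lower().rstrip(".")
    return TENANTS.get(host)
```

In a real application this lookup would typically hit a tenant database rather than an in-memory dictionary.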

### Standard fallback origin setup with regional services

This approach introduces Cloudflare's [Regional Services](https://developers.cloudflare.com/data-localization/regional-services/) solution to regionalize TLS termination and HTTP processing, helping you comply with regulations that dictate where your service processes data. This ensures that traffic destined for the origin server is handled exclusively within the chosen region.

![Figure 2: Standard fallback origin setup with regional services.](https://developers.cloudflare.com/_astro/standard-fallback-origin-setup-regional-services.DgKfyYv8_1GmwVA.svg "Figure 2: Standard fallback origin setup with regional services.")

Figure 2: Standard fallback origin setup with regional services.

1. The custom hostname (`custom.example.com`) is configured as a CNAME record that points to a regionalized SaaS hostname (`eu-customers.myappexample.com`). This configuration ensures that all processing, including TLS termination, occurs exclusively within the specified geographic region.
2. The regionalized SaaS hostname is set up as a CNAME record that directs traffic to the standard [Fallback Origin](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/start/getting-started/#1-create-fallback-origin) of the SaaS provider (`fallback.myappexample.com`).
3. The fallback origin is set up as an A record that points to the public IP address of the origin server. Cloudflare will route traffic sent to the custom hostnames to this origin server by default.

### Cloudflare Tunnel as fallback origin setup with regional services

For enhanced security, rather than exposing your application servers directly to the Internet via public IPs, SaaS providers can use [Cloudflare Tunnels](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/). These tunnels connect your network to Cloudflare's nearest data centers, allowing SaaS applications to be accessed through [public hostnames](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/). As a result, Cloudflare becomes the sole entry point for end-customers from the public Internet into your application network.

![Figure 3: Cloudflare Tunnel as Fallback Origin Setup with Regional Services.](https://developers.cloudflare.com/_astro/cloudflare-tunnel-fallback-origin-setup-regional-services.h18fhKDd_Z2kIyIF.svg "Figure 3: Cloudflare Tunnel as Fallback Origin Setup with Regional Services.")

Figure 3: Cloudflare Tunnel as Fallback Origin Setup with Regional Services.

1. The custom hostname (`custom.example.com`) is configured as a CNAME record that points to a regionalized SaaS hostname (`eu-customers.myappexample.com`). This configuration ensures that all processing, including TLS termination, occurs exclusively within the specified geographic region.
2. The regionalized SaaS hostname is set up as a CNAME record that directs traffic to the standard [Fallback Origin](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/start/getting-started/#1-create-fallback-origin) of the SaaS provider (`fallback.myappexample.com`).
3. The fallback origin is a CNAME DNS record that points to a [public hostname](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/) exposed by Cloudflare Tunnel. This public hostname should be configured to route traffic to your application, for example, `localhost:8080`.

This setup is ideal for SaaS providers that do not need granular load balancing, such as [geo-based traffic steering](https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/), across multiple origin servers. It's also well-suited for simple testing and development environments, where [protecting your origin server](https://developers.cloudflare.com/fundamentals/security/protect-your-origin-server/) by only allowing requests through the Cloudflare Tunnel is sufficient. However, for distributed applications requiring load balancing at both global and local levels, we recommend using [Cloudflare's Load Balancer](https://developers.cloudflare.com/load-balancing/) with global and private network load balancing capabilities.
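The public hostname mapping from step 3 might look like the following `cloudflared` configuration sketch, where the tunnel ID and hostnames are placeholders:

```yaml
# cloudflared config.yml sketch -- tunnel ID and hostnames are placeholders.
tunnel: <TUNNEL-UUID>
credentials-file: /etc/cloudflared/<TUNNEL-UUID>.json

ingress:
  # Route the fallback-origin public hostname to the local application.
  - hostname: fallback.myappexample.com
    service: http://localhost:8080
  # Required catch-all rule.
  - service: http_status:404
```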

### Global Traffic Management (GTM) & Private Network Load Balancing as custom origin setup

Cloudflare offers a powerful set of load balancing capabilities. These allow you to reliably steer traffic to different origin servers where your SaaS applications are hosted, whether through public hostnames (as described above) or private IP addresses. This setup helps prevent origin overload by distributing traffic across multiple servers and enhances security by only permitting requests through the Cloudflare Tunnel.

![Figure 4: Global Traffic Management \(GTM\) & Private Network Load Balancing as custom origin setup.](https://developers.cloudflare.com/_astro/gtm-ltm-custom-origin-setup.C_l8lMsz_60Ayr.svg "Figure 4: Global Traffic Management (GTM) & Private Network Load Balancing as custom origin setup.")

Figure 4: Global Traffic Management (GTM) & Private Network Load Balancing as custom origin setup.

1. The custom hostname (`custom.example.com`) is configured as a CNAME record pointing to a Cloudflare [regionalized Load Balancer](https://developers.cloudflare.com/data-localization/how-to/load-balancing/) (`eu-lb.myappexample.com`). This ensures that all processing, including TLS termination, takes place within a specified geographic region. Additionally, the SaaS provider needs to set up the load balancer as the [custom origin](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/start/advanced-settings/custom-origin/) for the custom hostname.
2. The regional load balancer is set up with [origin pools](https://developers.cloudflare.com/load-balancing/pools/) to distribute requests across multiple downstream servers. Each pool can be configured to use either [public hostnames](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/) with Global Traffic Management (GTM) or [private network](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/) addresses with Private Network Load Balancing. In the diagram above, we utilize both options:  
   * Origin pool 1 uses the [Cloudflare Tunnel hostname](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/dns/) (`<UUID>.cfargotunnel.com`) as the endpoint or origin server for handling those requests. When using a public hostname, it is necessary to set the [HTTP host header value](https://developers.cloudflare.com/load-balancing/additional-options/override-http-host-headers/) to match the public hostname configured and exposed by the [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/). This ensures that the origin server can correctly route the incoming requests.  
   * Origin pool 2 uses the private IP address or private network (that is, `10.0.0.5`) within the SaaS provider's internal network, where the SaaS application resides. This pool must be configured to operate within the specified [virtual network](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/cloudflared/tunnel-virtual-networks/) to ensure proper routing of requests.
3. Cloudflare Tunnel exposes both [public hostnames](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/) with GTM and [private networks](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/) (private IPs) with Private Network Load Balancing.

For enhanced granularity in application serving and scalability, it is generally recommended to use private networks rather than public hostnames. Private networks enable Cloudflare to preserve and accurately pass the host header to the origin server. In contrast, when using public hostnames, providers must configure the [header value](https://developers.cloudflare.com/load-balancing/additional-options/override-http-host-headers/) on the load balancer, which is restricted to one public hostname per load balancer endpoint, potentially limiting flexibility.
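As an illustration of the two pool options above, the following sketch builds origin pool payloads in the shape the Load Balancing API expects. IDs, hostnames, and addresses are placeholders, and exact field support may vary by plan:

```python
# Sketch of Load Balancer origin pool payloads for the two options above.
# All values are placeholders; field names follow the Cloudflare v4 API.

def tunnel_pool(tunnel_uuid: str, public_hostname: str) -> dict:
    """Pool 1: a Cloudflare Tunnel public hostname as the origin.
    The Host header is overridden to match the hostname the tunnel
    exposes, so the origin can route the request correctly."""
    return {
        "name": "pool-tunnel-public",
        "origins": [{
            "name": "tunnel-origin",
            "address": f"{tunnel_uuid}.cfargotunnel.com",
            "enabled": True,
            "header": {"Host": [public_hostname]},
        }],
    }

def private_pool(private_ip: str, virtual_network_id: str) -> dict:
    """Pool 2: a private IP reached over the tunnel's virtual network;
    the original Host header is preserved end to end."""
    return {
        "name": "pool-private-network",
        "origins": [{
            "name": "private-origin",
            "address": private_ip,
            "enabled": True,
            "virtual_network_id": virtual_network_id,
        }],
    }
```

Each payload would be submitted to the account-level load balancer pools endpoint via your automation of choice.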

Be aware of the Zero Trust [Tunnel limitations](https://developers.cloudflare.com/cloudflare-one/account-limits/#cloudflare-tunnel), Cloudflare for SaaS [connection request details](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/reference/connection-details/), and the Custom Origin [SNI specification](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/start/advanced-settings/custom-origin/#sni-rewrites). For further information about the Cloudflare Load Balancer, review its [reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/load-balancing/).

## Automation

As a SaaS provider, it is advisable to automate most, if not all, of these processes using [APIs](https://developers.cloudflare.com/fundamentals/api/), [SDKs](https://developers.cloudflare.com/fundamentals/api/reference/sdks/), scripts, [Terraform](https://developers.cloudflare.com/terraform/), or other automation tools.
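For example, a custom hostname could be provisioned with a Terraform sketch like the following (argument names may vary by provider version; values are placeholders):

```hcl
# Sketch only: provisions a Cloudflare for SaaS custom hostname.
resource "cloudflare_custom_hostname" "customer" {
  zone_id  = var.saas_zone_id
  hostname = "app.mycustomerexample.com"

  ssl {
    method = "http"   # HTTP-based Domain Control Validation
    type   = "dv"
  }
}
```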

An example of a high-level migration plan can be [downloaded here](https://developers.cloudflare.com/reference-architecture/static/example-cloudflare-saas-migration-plan.pdf).

It is highly recommended to migrate to Cloudflare for SaaS in phases and address any issues as they arise, particularly with [Domain Control Validation (DCV)](https://developers.cloudflare.com/ssl/edge-certificates/changing-dcv-method/troubleshooting/). Be sure to review the [validation status](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/hostname-validation/validation-status/) and relevant [documentation](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/hostname-validation/) during the process.

## Summary

By leveraging Cloudflare's infrastructure, SaaS providers can deliver secure, reliable, and performant services to their end-customers. This ensures a seamless and secure user experience while meeting compliance requirements, such as data regionalization.

Several Cloudflare customers are currently using the Cloudflare for SaaS solution (formerly known as SSL for SaaS). Notable public use cases include:

* [Shopify ↗](https://www.cloudflare.com/case-studies/shopify/)
* [Porsche Informatik ↗](https://www.cloudflare.com/case-studies/porsche-informatik/)
* [Divio ↗](https://www.cloudflare.com/case-studies/divio/)
* [mogenius ↗](https://www.cloudflare.com/case-studies/mogenius/)
* [Quickbutik ↗](https://www.cloudflare.com/case-studies/quickbutik/)

Additionally, when migrating to Cloudflare for SaaS, it is crucial to have a runbook and clear public documentation to communicate relevant details to your end-customers. Excellent public examples of this are the [Salesforce CDN ↗](https://help.salesforce.com/s/articleView?id=sf.community%5Fbuilder%5Fcdn.htm&type=5) and [Shopify ↗](https://help.shopify.com/en/manual/domains/add-a-domain/connecting-domains) documentation.


---

---
title: Leveraging Cloudflare for your SaaS applications
description: This document provides a reference and guidance for using Cloudflare for Platforms. It is designed for SaaS application owners, engineers, or architects who want to learn how to make their application more scalable and secure.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Leveraging Cloudflare for your SaaS applications

**Last reviewed:**  over 1 year ago 

## Introduction

When building a SaaS application, it is common to create unique hostnames for each customer account (or tenant), for example `app.customer.com`. It is important to ensure that all communication to this application hostname uses SSL/TLS, and therefore a certificate must be created for each customer's hostname on your application. Certificate management is hard, so application architects and developers often use a [multi-domain certificate ↗](https://www.cloudflare.com/learning/ssl/types-of-ssl-certificates/) (MDC), allowing them to buy and manage just one certificate that lists hundreds of domains. However, this does not scale well when your application reaches thousands or millions of customers.

A customer of your application might also wish to have their main website domain hosted directly on your application, so that, for example, `www.customer.com` actually delivers content directly from your SaaS application.

Many SaaS applications have caching and security solutions, such as Cloudflare, in front of their applications and therefore need to onboard these hostnames. This is often done using a "zone" model, where inside Cloudflare, or another vendor such as Amazon CloudFront, a "zone" is created for `app.customer.com`. This means that, as each new customer is onboarded, a new zone must be created. This might be manageable with tens or hundreds of customers, but when you reach thousands or millions, managing all these zones and their configurations is hard.

Cloudflare for Platforms goes far beyond this traditional edge-provider model by managing traffic across many hostnames and domains in one zone. You can manage `www.customer1.com`, `www.customer2.net`, and millions more hostnames through the same configuration while also customizing features as needed.

This document provides a reference and guidance for using Cloudflare for Platforms. The document is split into three main sections.

* Overview of the SaaS model and the common challenges Cloudflare for Platforms solves
* SSL certificate issuance in a SaaS model
* Customizing the experience for each of your clients

### Who is this document for and what will you learn?

This reference architecture is designed for SaaS application owners, engineers, or architects who want to learn how to make their application more scalable and secure through Cloudflare.

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://www.youtube.com/watch?v=XHvmX3FhTwU) (2 minutes)
* [Cloudflare Ruleset Engine](https://developers.cloudflare.com/ruleset-engine/) \- We will discuss integrations with the ruleset engine. Familiarity with that feature will be helpful.
* [Cloudflare Workers](https://developers.cloudflare.com/workers/) \- We will also discuss integrations with Cloudflare Workers, our serverless application platform. A basic familiarity with this platform will be helpful.

Those who read this reference architecture will learn:

* How Cloudflare's unique offering can solve key challenges for SaaS applications
* How to customize the Cloudflare experience for each of your end customers
* Tools to integrate serverless applications, for each of your clients, through Workers for Platforms

## Why Cloudflare for Platforms?

### The SaaS model

Software as a Service (SaaS) has been a key innovation of the cloud computing era. Legacy on-premises enterprise software - such as accounting, HR, and CRM systems - required dedicated attention from IT personnel to establish a platform (whether dedicated hardware, VMs, or cloud instances) for each application in the enterprise. The SaaS model allows providers, like Shopify and Salesforce, to extend their own platform to their customers instead. The customer no longer has to provision hardware or consider any other infrastructure concerns; instead, they subscribe to the SaaS platform, which is always up to date, secure, and available.

### Third party hostname challenges

For many SaaS applications, it is important to provide a service under the client's own domain. Their domain is important for branding, security, and organization; and many clients have heavily invested in the right `.com` to represent their business. Many clients with domains linked to their brand will push back against deploying their applications on the provider's domain.

This is especially true for customer-facing applications like an e-commerce solution. You would want to expose this as `shop.example.com`, not `example.shop.com`. To secure traffic to the SaaS application, the provider ("shop") needs a certificate for their customer, `example.com`.

![Figure 1: eCommerce flow through a SaaS platform.](https://developers.cloudflare.com/_astro/figure1.T_DPd5f7_Z1n6Xge.svg "Figure 1: eCommerce flow through a SaaS platform.")

Figure 1: eCommerce flow through a SaaS platform.

This is a challenge for SaaS solutions, as certificate issuance is tightly controlled through the [Domain Control Validation (DCV) process](https://developers.cloudflare.com/ssl/edge-certificates/changing-dcv-method/dcv-flow/). The owner of a domain needs to authorize any certificates, and traditional methods of validation are driven by the domain owner and deliver the certificate only to them.

![Figure 2: Certificates cannot be automatically renewed on legacy platforms. They will expire and break traffic without manual action.](https://developers.cloudflare.com/_astro/figure2.BYh8B09n_sBMB3.svg "Figure 2: Certificates cannot be automatically renewed on legacy platforms. They will expire and break traffic without manual action.")

Figure 2: Certificates cannot be automatically renewed on legacy platforms. They will expire and break traffic without manual action.

This poses a dilemma: the SaaS model offers clear advantages but introduces a new challenge of its own. A novel solution would let providers and end customers both get the most out of the SaaS model.

## Issuing SSL certificates through Cloudflare for Platforms

### Manage certificates for any hostname on the Internet

Cloudflare for SaaS provides a unique solution to these common challenges for SaaS providers. By leveraging Cloudflare's position as a low-latency, global network, we can transparently manage certificate issuance for end clients while also providing several other benefits to a SaaS platform.

### Secure and powerful validation modes

Cloudflare has a unique ability to manage the Domain Control Validation (DCV) process in a SaaS scenario. In a traditional model, certificate issuers ask domain owners to place a [particular token](https://developers.cloudflare.com/ssl/edge-certificates/changing-dcv-method/dcv-flow/#dcv-tokens) (either a DNS TXT record or a small text file) at their origin in order to validate that they are authorized for that domain. This has to be repeated at every certificate renewal, and renewals have become more frequent as the industry moves to shorter certificate lifetimes.

![Figure 3: The DCV process.](https://developers.cloudflare.com/_astro/figure3.DZ4GG0vx_1j8azE.svg "Figure 3: The DCV process.")

Figure 3: The DCV process.

Since Cloudflare's network can easily sit in between the client and the SaaS provider, we can automatically respond with the correct DCV token on behalf of any domain that points traffic to the SaaS provider on Cloudflare.

![Figure 4: Certificates automatically renew on Cloudflare-enabled platforms.](https://developers.cloudflare.com/_astro/figure4.TeeqPEfC_Z1MnSQY.svg "Figure 4: Certificates automatically renew on Cloudflare-enabled platforms.")

Figure 4: Certificates automatically renew on Cloudflare-enabled platforms.

Instead of repeatedly performing a complex process at every certificate renewal, the client performs a much simpler process only once.
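To make the mechanics concrete, here is a minimal sketch of what HTTP-based DCV looks like: a responder that serves the certificate authority's validation token for a hostname. The token path and body below are hypothetical examples; with Cloudflare for SaaS, Cloudflare serves these tokens on your behalf, so you would not implement this yourself.

```javascript
// Hypothetical tokens issued by a certificate authority for each hostname.
// Cloudflare for SaaS maintains and serves these automatically.
const dcvTokens = new Map([
  ["app.customer.com", {
    path: "/.well-known/pki-validation/example-token.txt",
    body: "example-validation-body",
  }],
]);

// Answer a validation request only if it matches the token the CA expects
// for that exact hostname and path.
function handleDcvRequest(hostname, path) {
  const token = dcvTokens.get(hostname);
  if (token && token.path === path) {
    return { status: 200, body: token.body };
  }
  return { status: 404, body: "" };
}
```

Because Cloudflare sits in the traffic path for every onboarded hostname, this lookup can succeed for any customer domain without the customer hosting anything.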

## Customize your customers' Cloudflare experience

### Managed features in Cloudflare for Platforms

Cloudflare for Platforms gives you much more than just SSL certificate management. It provides built-in features to control security and performance capabilities, at scale, for each of your clients. Cloudflare's security features, such as [DDoS](https://developers.cloudflare.com/ddos-protection/), [WAF](https://developers.cloudflare.com/waf/), [Bot Management](https://developers.cloudflare.com/bots/), and [Rate Limiting](https://developers.cloudflare.com/waf/rate-limiting-rules/), are seamlessly extended to clients on your platform. Security posture can be customized within [Managed Rules](https://developers.cloudflare.com/waf/managed-rules/) for individual customers, to exempt good traffic or tighten security. On the [performance](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/) side, [Cache](https://developers.cloudflare.com/cache/), [Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/), and protocol features like [Early Hints](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/early-hints-for-saas/) provide scalable and customizable behavior for all of your customers. Customizable cache rules let you drive high hit rates across all of your customers.

If you need even more flexibility than our rules provide, to give individual behavior to thousands or millions of customers, [Custom Metadata](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/custom-metadata/) allows complete per-client flexibility. By setting tags like `WAF: On` or `Performance: Premium` for each customer, you can customize their security and performance feature set. We have built features like [WAF for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/waf-for-saas/) which interface with this metadata directly, as well as an API within our Workers serverless environment to use it within custom code.
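As an illustrative sketch (not a complete implementation), the snippet below derives a per-customer feature set from hypothetical metadata tags like those above. In a Worker, metadata set on a custom hostname is surfaced on the incoming request as `request.cf.hostMetadata`; here the metadata object is passed in directly so the logic stands alone.

```javascript
// Map hypothetical custom metadata tags such as
//   { "WAF": "On", "Performance": "Premium" }
// to the features this request should receive. The tag names and values
// are assumptions for illustration; you define your own schema.
function featuresFor(metadata = {}) {
  return {
    waf: metadata.WAF === "On",
    smartRouting: metadata.Performance === "Premium",
    earlyHints: metadata.Performance === "Premium",
  };
}
```

In a deployed Worker you would call something like `featuresFor(request.cf.hostMetadata)` and branch on the result.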

### Scalable serverless applications with Workers for Platforms

If you need more customization than even metadata can provide, or are running a service where your customers write or generate their own application code, [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) lets you deploy a complete serverless application for each of your customers.

We provide several key features such as the [Dispatch Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/), which gives you complete flexibility in deciding which customer application to route to. For example, you can run security checks, then decode an HTTP header telling you the user's ID, and then load the appropriate serverless application for this user's request. [Outbound Workers](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/) give you additional visibility into, and control over, the Internet resources your customers' applications can access, providing a familiar security model in a distributed deployment.
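A minimal Dispatch Worker might look like the following sketch. The dispatch namespace binding name (`DISPATCHER`) and the `X-Tenant-Id` header are assumptions for illustration; in practice you choose how a request maps to a customer, whether by hostname, header, or path.

```javascript
// Identify which customer's application should handle this request.
// Reading a header is just one option; hostname or path also work.
function tenantFrom(request) {
  return request.headers.get("X-Tenant-Id");
}

// Sketch of a Dispatch Worker. When deployed, this object would be the
// Worker's default export; "DISPATCHER" is an assumed dispatch namespace
// binding configured for the Worker.
const dispatchWorker = {
  async fetch(request, env) {
    const tenant = tenantFrom(request);
    if (!tenant) return new Response("Unknown tenant", { status: 404 });
    // Load that customer's User Worker from the dispatch namespace and
    // forward the request to it.
    const userWorker = env.DISPATCHER.get(tenant);
    return userWorker.fetch(request);
  },
};
```

The important property is that one small, trusted routing layer fronts millions of isolated, per-customer applications.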

We also provide features for [observability](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/observability/), [configuration](https://developers.cloudflare.com/terraform/), and many other tools needed for a production-grade platform deployment. These are detailed in other [reference architectures](https://developers.cloudflare.com/reference-architecture/) and function the same way for platform cases as for the more standard models described in those guides.

## Use cases

Let's review three common use cases where Cloudflare for Platforms can enable providers to seamlessly extend SSL, performance, and security to their end customers.

### SSL issuance at scale for your platform

In this common design, Cloudflare enables your platform to issue SSL certificates and provide performance and security features. We will not customize the features for each of your clients, but will provide common capabilities for everyone who uses the platform.

1. Cloudflare secures traffic from your clients to your platform, at global scale, by validating and distributing SSL certificates.
2. In this design, you will use the same L7 configuration - that is, all of the features that act on your traffic after SSL termination - for each of your clients.
3. Just set up a [Cloudflare for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/start/getting-started/) zone and [order a custom hostname](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/create-custom-hostnames/) for each client hostname. The system will take you through an easy flow to point each client's traffic to your platform, and order their certificate.  
   1. You can almost always use our default settings through this process, but bespoke SSL customization is also possible.  
   2. Origin traffic routing is also handled through the SSL for SaaS process. Our default configuration is secure for most needs.  
         * For highly secure use cases, you can use [Authenticated Origin Pulls](https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/), [Dedicated CDN Egress IPs](https://developers.cloudflare.com/smart-shield/configuration/dedicated-egress-ips/), or an advanced design with [Tunnels](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/).
![Figure 5](https://developers.cloudflare.com/_astro/figure5.C5V4KUCx_LQsqe.svg) 
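As a sketch of step 3, ordering a custom hostname can also be automated against the Cloudflare API (`POST /zones/{zone_id}/custom_hostnames`). The payload below follows the documented shape for HTTP-based DCV with a domain-validated certificate; the zone ID and API token are placeholders you would supply.

```javascript
// Build the request body for creating a custom hostname with HTTP-based
// DCV and a domain-validated (DV) certificate.
function customHostnamePayload(hostname) {
  return {
    hostname,
    ssl: { method: "http", type: "dv" },
  };
}

// Order a certificate for a client hostname. zoneId and apiToken are
// placeholders for your SaaS zone ID and an API token with SSL and
// Certificates permissions.
async function orderCustomHostname(zoneId, apiToken, hostname) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/custom_hostnames`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(customHostnamePayload(hostname)),
    }
  );
  return res.json();
}
```

A typical onboarding flow calls this once per new client hostname; renewals then happen automatically as described above.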

### Feature Customization for your Platform customers

Here, we are not just provisioning a certificate for each client - we are giving each of them a custom configuration. For example, your Basic tier gets only the essential WAF protections, while your Advanced tier also gets Bot Management. You can still run common features across all customers.

1. In addition to securing SSL traffic, use an additional field provided when you add each customer ([Custom Metadata](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/custom-metadata/)) to tag the correct feature set.
2. Cloudflare features read the Metadata to customize for each client. [WAF features are the key security customization](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/waf-for-saas/). Provide different levels of security, or even customized WAF rulesets.
3. [On the performance side,](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/performance/) you can also add Argo Smart Routing, Cache, and Early Hints to level up the performance for chosen customers.
![Figure 6](https://developers.cloudflare.com/_astro/figure6.CCbjP4Rl_2n2SNF.svg) 

### Serverless application platform for your customers

In the most advanced design, we are customizing a full serverless application in our Workers runtime for each of your customers. Simple Workers perform similar tasks to feature customization. Advanced Workers can run your entire platform on the Cloudflare network.

1. Instead of deploying customized Cloudflare capabilities, each customer has their own "User Worker" JavaScript serverless application containing custom code.
2. You retain control through Dispatch Workers, which determine which code to run, and Outbound Workers, which restrict the access of customer code.
3. Use advanced Developer Platform capabilities like D1, Workers KV, and Queues to build your entire business on Cloudflare.
![Figure 7](https://developers.cloudflare.com/_astro/figure7.1flW0nWM_ZTtK6n.svg) 

## Summary

With Cloudflare for SaaS, you can easily solve the common challenges that come with a growing platform business. From SSL certificate issuance, through security, to custom serverless applications, Cloudflare for SaaS lets you extend our entire platform to your customers - at the scale of millions.

You can find further details on all of the features we have discussed here in the following links:

* [Cloudflare for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/)
* [Custom metadata](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/custom-metadata/)
* [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/)


---

---
title: Network-focused migration from VPN concentrators to Zero Trust Network Access
description: The traditional approach of installing and maintaining hardware for remote access to private company networks is no longer secure or cost effective. IT teams are recognizing the cost and effort to install and maintain their own hardware can be offset with more modern, and more secure cloud hosted services.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Network-focused migration from VPN concentrators to Zero Trust Network Access

**Last reviewed:**  over 1 year ago 

## Introduction

Over the past few years, the traditional approach of installing and maintaining hardware for remote access to private company networks has become neither secure nor cost effective. Due to an increase in [vulnerabilities ↗](https://www.networkworld.com/article/2114694/new-vpn-risk-report-finds-nearly-half-of-enterprises-attacked-via-vpn-vulnerabilities.html) found in on-premises VPN products, security and IT teams are looking for solutions that don't require them to monitor for and respond to [CVE alerts ↗](https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=vpn). These same systems also limit user bandwidth because they route all user Internet traffic through a single piece of infrastructure, resulting in a poor user experience. IT teams are recognizing that the cost and effort of installing and maintaining their own hardware can be offset by more modern, and more secure, cloud-hosted services, while user expectations for application performance are exposing the limits of bandwidth-constrained, self-hosted VPN solutions. In summary, running your own VPN is expensive, high risk, and doesn't deliver a great user experience.

![Diagram showing suboptimal traffic paths for traffic to Internet resources.](https://developers.cloudflare.com/_astro/traditional-vpn.BpH8a1pr_19JYSW.svg "Figure 1: A traditional VPN deployment, where all user traffic destined for the Internet must route through the company hosted and managed VPN service.")

Figure 1: A traditional VPN deployment, where all user traffic destined for the Internet must route through the company hosted and managed VPN service.

As such, many organizations are looking to move to a [zero trust ↗](https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/) security posture using [Zero Trust Network Access ↗](https://www.cloudflare.com/learning/access-management/what-is-ztna/) (ZTNA) services as part of a [Secure Access Service Edge ↗](https://www.cloudflare.com/learning/access-management/what-is-sase/) (SASE) architecture to provide remote access to private resources. With all the critical software running as a cloud service, organizations are relieved of the duty of keeping servers and software up to date. Cloud platforms are also architected for massive scale which significantly increases available bandwidth for end users, therefore improving their experience.

![Diagram showing traffic paths directly flowing to Internet resources.](https://developers.cloudflare.com/_astro/sase-remote-access.CybpgS2A_Z2dxYlP.svg "Figure 2: SASE platforms do not degrade user Internet access experience, and provide fast, secure global access to self hosted resources.")

Figure 2: SASE platforms do not degrade user Internet access experience, and provide fast, secure global access to self hosted resources.

In the old model, the VPN hardware had direct access to the networks the applications resided on, and typically users had access to the entire network. New SASE methods of remote access create connectivity from the cloud platform to the networks where applications live, but expose access only to a specific application or network address. Cloudflare's recommended approach is to install software agents, similar to those on end user devices, that create secure tunnels from the cloud to private networks. However, this isn't always an easy path to take. For network administrators trying to quickly replace legacy remote access hardware, deploying new servers or going through lengthy change control to install software on existing application servers may not be possible in acceptable time frames. Instead, network administrators might be more familiar with, and have more control over, creating secure tunnels from cloud SASE platforms to existing network hardware using familiar protocols such as GRE or IPsec. This might even mean using the same hardware appliances that were being used for VPN access, but simply dumbing them down to secure tunnel connectors, and switching off (or removing licenses for) any expensive and vulnerable remote access capabilities.

This design guide is for organizations in that situation, where they need a fast way to quickly replace or mitigate their use of self hosted remote access hardware and then gradually move to the recommended software agent approach where appropriate.

Audience for this guide

This guide is specifically aimed at network architects or IT admins who want to use familiar protocols and leverage existing network hardware, potentially the same equipment used for current VPN services, but wish to use those devices as tunnel termination devices and move the VPN and access controls into the cloud as part of a longer term migration away from managing their own hardware.

### Who is this document for and what will you learn?

This guide is written for network and security experts considering a replacement of their current VPN vendor, while preparing their organization for a zero trust or SASE architecture. It assumes familiarity with networking concepts such as IPsec tunnels, routing tables and split tunneling.

What you will learn:

* How Cloudflare can replace a traditional VPN-like implementation
* How to get visibility into VPN network traffic
* What you need to consider to implement a Cloudflare solution at scale
* Steps to take to move to a recommended Zero Trust Network Access implementation

The solution this guide describes requires you have a contract with Cloudflare that includes:

* Cloudflare One licenses for the number of users you are looking to onboard
* Cloudflare WAN (formerly Magic WAN)

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

1. What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (five-minute read) or [video ↗](https://www.youtube.com/watch?v=XHvmX3FhTwU) (two minutes)
2. Blog: [What is SASE? | Secure access service edge | Cloudflare ↗](https://www.cloudflare.com/learning/access-management/what-is-sase/) (14-minute read)
3. Reference architecture: [Evolving to a SASE architecture with Cloudflare](https://developers.cloudflare.com/reference-architecture/architectures/sase/) (three-hour read)

## Benefits of a SASE platform

Traditional VPN approaches typically provide the following types of access.

* Allowing remote users access to self hosted private applications running on a corporate network
* Routing all user Internet traffic through a single, concentrated VPN access point where security policies are applied

A SASE platform replaces traditional VPN hardware by offering two key services. First, it maps user access directly to internal applications hosted on corporate networks or in the cloud, unlike a self-hosted VPN service, which typically grants broad access to the entire corporate network. Second, it filters Internet traffic close to the user, allowing users to securely access the Internet without routing all traffic through the corporate network, thereby improving efficiency while maintaining security.

### Zero Trust Network Access (ZTNA)

Remote users authenticate and connect to a cloud hosted Zero Trust Network Access (ZTNA) service, which in turn has connectivity into the networks where the private applications reside. Cloudflare's [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/) describes three methods for connecting Cloudflare to your existing applications and networks:

1. Software connectors ([cloudflared](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/cloudflared/) or [WARP Connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/))
2. IPsec or GRE tunnels using [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/)
3. Direct network connections using [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/)

All three methods have their specific advantages, however, software connectors are usually preferred when considering a modern Zero Trust implementation for three reasons.

1. They deliver a network connectivity model that is flexible and easy to replicate across environments. You can move the applications and servers with little to no changes in configuration.
2. Software daemon architecture simplifies scaling to increased traffic demands, just install more agents on more servers.
3. Because daemons run close to your applications (as opposed to at your network edge), you can build isolated network or application segments in which to enforce policy, preventing lateral movement and getting the full benefits of the zero trust model.

Note

This guide will initially describe the use of Cloudflare WAN to create IPsec tunnels from Cloudflare to existing network hardware, and then recommend a migration path to move to a software agent based approach.

### Secure Web Gateway (SWG)

Traffic destined for the general Internet is routed via a cloud Secure Web Gateway (SWG). Policies are written that filter requests to malicious websites and allow access to SaaS applications based on user identity and device security posture.

Cloudflare's [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/) describes different methods for connecting user devices to Cloudflare: some require the installation of device agents, while others simply require the user to point their web browser at a URL. In this document, because most traditional VPNs require some client software on the device, we will describe a solution using the Cloudflare [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/).

### Why a phased approach?

In situations where existing remote access hardware is vulnerable and there is an urgent need to replace it, speed is key. Also, the team tasked with moving away from existing VPN hardware might be more familiar with networks than with installing software on servers. A full project to replace existing hardware with a radically different model - that is, deploying software agents - may take weeks rather than days. This guide walks through quickly removing or mitigating existing VPN solutions, and then proposes later steps to take full advantage of all aspects of a SASE platform.

This approach allows network and security teams to get up-and-running quickly, while gaining experience in modern zero trust deployments to allow for remote access to internal applications. The added visibility into network traffic will also enable teams to gain insight into application usage, and plan for a successful and secure zero trust migration.

This guide describes the following phases at a high level. If you need help with specific details related to your environment, please [contact Cloudflare ↗](https://www.cloudflare.com/products/zero-trust/plans/enterprise/).

* Phase 1: Quickly replace existing traditional/vulnerable VPN hardware with cloud-based remote access while gaining insight into application traffic.
* Phase 2: Scaling up and offloading traditional IPsec tunnels.
* Phase 3: Improving security posture by segmenting application access and enabling clientless access.

## Phase 1: Connectivity and network-based policies

Consider an organization with global IT infrastructure: three data centers deployed in Europe, the USA, and Asia, each with its own VPN service. To get the best performance, this VPN implementation requires employees to make a conscious decision to connect to one of the VPN clusters depending on their location. In this example, all user Internet traffic is routed through the VPN service, where firewalls apply a level of security protecting users from the dangers of the general Internet.

![A traditional VPN deployment using VPN concentrators spread across three DCs.](https://developers.cloudflare.com/_astro/vpn-concentrators.B1KJmuAT_Z1ehVVl.svg "Figure 3: A traditional VPN deployment using VPN concentrators spread across three DCs.")

Figure 3: A traditional VPN deployment using VPN concentrators spread across three DCs.

During this first phase, network connectivity will be created between user devices and the private networks they currently access via existing network infrastructure. This is achieved in two ways.

* On employee devices install the Cloudflare [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/). This replaces the use of existing VPN client software.
* Using existing network hardware in the data center, create IPsec tunnels to Cloudflare which are managed using Cloudflare WAN service.

Both employee devices and data center networks will connect to their closest Cloudflare server. This is thanks to [Cloudflare's anycast architecture ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/), and ensures the most optimal path for user traffic without any effort by employees or IT support staff. Users no longer need to choose which VPN service region to connect to, as Cloudflare will always ensure they connect to the closest and most responsive service for the best access performance to their private applications.

### Connecting networks to Cloudflare

Figure 4 shows traffic from end user devices to Cloudflare and tunnels routing traffic to private data centers. When user traffic reaches the closest Cloudflare access point, Cloudflare will route traffic destined for private applications directly to the data centers, while processing Internet-bound traffic through Cloudflare's [Secure Web Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) (SWG). It is possible to leverage existing DNS services to resolve requests to private addresses using Cloudflare [Gateway DNS policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/). [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/on-ramps/) is used to create IPsec tunnels between Cloudflare and data centers and is configured with static routes that determine how traffic reaches each existing network and applications.

![A high level design of Cloudflare traffic routing for phase 1 of the migration.](https://developers.cloudflare.com/_astro/phase-1.ghshUb-E_Z8l0n4.svg "Figure 4: A high level design of Cloudflare traffic routing for phase 1 of the migration.")

Figure 4: A high level design of Cloudflare traffic routing for phase 1 of the migration.

By using existing network or security appliances to terminate IPsec tunnels, secure off-ramps can be created with limited impact on the current infrastructure. These IPsec tunnels also allow for outbound server-initiated traffic to continue flowing. However, depending on the scale of the deployment, the existing appliances might run into bandwidth limitations. It is best to consider this first phase a 'pilot' or low-scale deployment to get up and running quickly and validate user-application connectivity. The next phase will improve on the design using the insights gathered during this phase.

With such a design in place, Cloudflare will be able to filter traffic based on the identity of the requesting user. For example, users authenticated to the corporate identity provider who are members of the "Engineering" group will only be allowed access to the internally hosted source code repository. Furthermore, the user device may need to pass [certain posture checks](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/) before connecting. There are [example network policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/network-policies/common-policies/#restrict-access-to-private-networks) in the zero trust documentation you can use as a reference. In essence, this enables you to define network access policies using user identities instead of their associated IP address ranges. Getting rid of traditional 5-tuple ACLs is a first step towards a zero trust model.

### Device agent deployment

Now that we've connected your networks to Cloudflare, we need to get traffic from employee devices to the Cloudflare network which requires the [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/). When the agent is initially installed, users are prompted to authenticate via an identity provider (IdP) configured with Cloudflare. The IdP will ensure users authenticate using an existing identity and can also import group membership information used in access policies. [Device enrollment policies](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/device-enrollment/) are used to ensure only the right users, authenticated with the right methods and using secure devices can connect new devices to your organization’s Cloudflare Zero Trust instance before they even get access to any applications.

Use [device profiles](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/device-profiles/) to apply different device agent configurations to different users – or to the same users in different locations using [Managed networks](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/managed-networks/). For companies that don't route Internet traffic via their VPN server, device profiles allow you to [configure the device agent to exclude Internet traffic](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/route-traffic/split-tunnels/) from the Cloudflare tunnel and connect directly to the Internet. Note that this guide strongly recommends sending Internet-bound traffic via Cloudflare, where you have greater control over its security. However, you can selectively bypass Cloudflare for bandwidth-heavy traffic such as video conference calls.

Traffic from employees using the device agent destined for internal resources will have a source IP in the 100.96.0.0/12 range. This is a sub-range of the [RFC 6598 Carrier-grade NAT space ↗](https://datatracker.ietf.org/doc/html/rfc6598) and should be added as a route in the data center regions so that traffic can flow back to these users. For more information, see the [Cloudflare WAN with WARP integration](https://developers.cloudflare.com/cloudflare-wan/zero-trust/cloudflare-one-client/) documentation.
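As a sketch of how this range behaves, the check below uses Python's standard `ipaddress` module to confirm that a device agent source IP falls inside the 100.96.0.0/12 sub-range from the text, while ordinary RFC 1918 addresses do not. The sample IPs are illustrative:

```python
import ipaddress

# Device agent traffic arrives with a source IP from this sub-range of
# the RFC 6598 carrier-grade NAT space, per the guide above.
WARP_CLIENT_RANGE = ipaddress.ip_network("100.96.0.0/12")

def is_warp_client(source_ip: str) -> bool:
    """Return True if the source IP belongs to the device agent range."""
    return ipaddress.ip_address(source_ip) in WARP_CLIENT_RANGE

print(is_warp_client("100.96.0.5"))   # inside 100.96.0.0/12
print(is_warp_client("10.20.56.7"))   # RFC 1918, not device agent traffic
```

A route for 100.96.0.0/12 pointing back towards the tunnel in each data center region is what allows return traffic to reach these source addresses.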

### Deploying software connectors for DNS

Although this phase focuses on using the Cloudflare WAN service and IPsec tunnels for the bulk of the employee traffic, the Cloudflare software connectors play a key role in DNS resolution of internal hostnames. Getting experience with using these software connectors will also help in the next phase, so efforts to define the processes to deploy and manage them should start in this first phase.

Cloudflare offers two types of software connectors:

* [cloudflared](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/)
* [WARP connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/)

As discussed in the introduction, `cloudflared` is the preferred method for Zero Trust Network Access, but it only supports inbound connectivity to your networks and application servers; any server-initiated connection will not go via the tunnel and instead follows the server's default network path. WARP connector is designed to create tunnels that facilitate both inbound and outbound connectivity, but it does not currently offer the same level of failover support and ease of configuration. For this guide, we will use `cloudflared`, as it supports the internal DNS use case described.

For large remote access use cases, Cloudflare recommends deploying connectors to dedicated hosts. See the [System Requirements documentation](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/tunnel-availability/system-requirements/) for more deployment recommendations and server sizing. Where to deploy these servers depends on the access they need and the internal firewall rules and segmentation of the network. Some customers start with their first deployment in their DMZ, while others install it deeper in their network and evolve from there.

Installing `cloudflared` is best done in an automated manner, so we recommend deploying using a virtualization technology such as Docker, or deploying as VMware guests and configuring via Ansible. Ideally, as traffic using `cloudflared` tunnels increases, such systems can scale the deployment automatically based on real-time metrics collected from the hosts. `cloudflared` instances can be monitored using the [Prometheus metrics endpoint](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/monitor-tunnels/metrics/). Prometheus is an HTTP-based monitoring and alerting system, similar in function to SNMP, that polls metrics exposed by the resource being monitored. Most monitoring systems on the market today support the Prometheus format for collecting the metrics needed for alerting and automatically scaling the deployment.
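To illustrate what a scrape of such an endpoint looks like, the sketch below parses the simple `name value` lines of the Prometheus text exposition format. The sample payload and metric names are hypothetical stand-ins for what a given `cloudflared` version might expose, and labels are ignored for brevity:

```python
def parse_prometheus_metrics(text: str) -> dict:
    """Parse bare `name value` lines of Prometheus text exposition format,
    skipping HELP/TYPE comments. Labelled series are out of scope here."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Hypothetical sample scrape; exact metric names vary by cloudflared version.
sample = """
# HELP cloudflared_tunnel_ha_connections Number of active HA connections
# TYPE cloudflared_tunnel_ha_connections gauge
cloudflared_tunnel_ha_connections 4
cloudflared_tunnel_total_requests 1523
"""

metrics = parse_prometheus_metrics(sample)
print(metrics["cloudflared_tunnel_ha_connections"])  # 4.0
```

In practice you would point an existing Prometheus-compatible collector at the endpoint rather than parse it by hand; the sketch only shows the format being consumed.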

For more information about deploying `cloudflared` connectors at scale:

* [Various guides to deploy and update](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/deployment-guides/) connectors in environments such as Ansible, Terraform and Kubernetes
* High availability using [replicas](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/tunnel-availability/#cloudflared-replicas)
* [Monitor tunnels with Grafana](https://developers.cloudflare.com/cloudflare-one/tutorials/grafana/)

### DNS resolution with Resolver Policies

As you can see in Figure 4, both DNS and general network traffic flow from the employee device to Cloudflare. By default, the device agent forwards all DNS queries to Cloudflare for inspection and filtering based on [DNS policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/). This is valuable because it allows administrators to configure [DNS policies to block potential security threats](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/common-policies/#block-security-threats) and immediately start protecting employees as they go online. This also applies when Internet traffic is excluded from the tunnel to Cloudflare but the client still resolves hostnames via Cloudflare DNS services.

For internal domains, however, Cloudflare will need to know how to resolve them. This is where [resolver policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/resolver-policies/) come into play. After the DNS policies are applied to incoming DNS requests, customers can choose to forward requests for internal DNS hostnames to their internal DNS servers. For example, the domain `example.local` might be hosted on a DNS server running at 10.10.10.123. A resolver policy will ensure requests for hostnames in that domain are sent to that IP.

A tunnel exposing a route to the internal DNS server is needed. `cloudflared` should be deployed on a host that can route DNS traffic to the 10.10.10.123 IP address. Requests for internal domains via the DNS gateway will then be redirected to this DNS server, via the tunnel.
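The resolver-policy decision described above amounts to split-horizon DNS: match the query against configured internal domains, and forward matches to the internal server reachable through the tunnel. A minimal sketch, using the example domain and IPs from the text (the `cloudflare-gateway` label is just a placeholder for Cloudflare's own resolution path):

```python
# Internal domains mapped to the internal DNS server behind the tunnel.
INTERNAL_RESOLVERS = {
    "example.local": "10.10.10.123",
}
DEFAULT_RESOLVER = "cloudflare-gateway"  # placeholder for Cloudflare's resolver

def pick_resolver(hostname: str) -> str:
    """Return the resolver that should handle this DNS query."""
    labels = hostname.lower().rstrip(".").split(".")
    # Walk suffixes so wiki.example.local matches example.local.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in INTERNAL_RESOLVERS:
            return INTERNAL_RESOLVERS[suffix]
    return DEFAULT_RESOLVER

print(pick_resolver("wiki.example.local"))  # 10.10.10.123
print(pick_resolver("www.example.com"))     # cloudflare-gateway
```

The actual forwarding to 10.10.10.123 rides the `cloudflared` tunnel route described above; the sketch only shows the matching logic.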

### Analytics and logging

As this first phase progresses and the first users start accessing applications, the need for proper monitoring and logging becomes apparent. Visibility into the traffic flowing through Cloudflare will help with:

* Operational activities such as troubleshooting by your support staff.
* Monitoring for potential threats by a SOC, possibly using a security information and event management ([SIEM ↗](https://www.cloudflare.com/learning/security/what-is-siem/)) service.
* Visibility into application traffic to see where potential security and performance improvements can be made (see also phase 2).

Cloudflare provides visibility at different levels, available through the dashboard or exported using [Logpush](https://developers.cloudflare.com/logs/logpush/). For traffic flowing over Cloudflare WAN IPsec tunnels, [Network Analytics](https://developers.cloudflare.com/analytics/network-analytics/) can be found in the dashboard and through the [GraphQL API](https://developers.cloudflare.com/analytics/graphql-api/). This will show sampled statistics of the traffic and can be used for trend and traffic flow analysis.

Next are more detailed [network session logs](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/zero%5Ftrust%5Fnetwork%5Fsessions/) that collect information on all network connections/sessions going through Cloudflare's secure web gateway, including unsuccessful requests. These are followed by [Gateway activity logs](https://developers.cloudflare.com/cloudflare-one/insights/logs/dashboard-logs/gateway-logs/), which contain information about triggered policies as traffic gets inspected by the gateway engine. A combination of these logs will enable full visibility into all network flows, including users' identities. Using this information, network and security teams can run their analysis on what type of traffic flows where, and use that to plan for the next steps.

Finally, for real-time alerting, [Cloudflare Notifications](https://developers.cloudflare.com/notifications/get-started/) can be configured for events such as IPsec and `cloudflared` tunnel health, as well as Cloudflare infrastructure status in general.

## Phase 2: Scaling up and offloading IPsec

In most environments the IPsec termination points are limited in throughput, and sooner or later this could pose a problem when scaling up to carry traffic for the entire business. The final step of phase 1 provides insight into application traffic flows. Although you might not have completely mapped your application landscape, you will probably have identified some applications that place significant load on the current IPsec tunnels.

Fortunately, most of these applications can be migrated one-by-one to the more scalable software connector based tunnels. Any application which doesn't rely on server-initiated traffic is eligible for this type of migration. With the experience gained during the initial deployment of `cloudflared` in phase 1:

1. Deploy two or more `cloudflared` instances in the relevant environment, the USA datacenter in the example below.
2. Add [Private Networks to the tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/cloudflared/) to define routing and access that is scoped more specifically to the network and applications it handles traffic for. For example, expose the 10.20.56.0/24 subnet via the software connector tunnel, instead of the larger 10.20.0.0/16 exposed by the Cloudflare WAN managed IPsec tunnel.
3. Traffic from employees will now be routed via the software connector tunnel for the /24 subnet instead of the /16 route going over the IPsec tunnel, thereby offloading the reliance on the IPsec termination device.
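The offloading in step 3 is plain longest-prefix-match routing, which can be sketched with Python's `ipaddress` module using the subnets from the example (tunnel names are illustrative labels):

```python
import ipaddress

# Routes advertised to Cloudflare: the broad /16 via the IPsec tunnel,
# and the more specific /24 via the cloudflared software connector.
ROUTES = {
    ipaddress.ip_network("10.20.0.0/16"): "ipsec-tunnel",
    ipaddress.ip_network("10.20.56.0/24"): "cloudflared-tunnel",
}

def next_hop(destination: str):
    """Return the tunnel handling this destination, or None if unrouted."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if dest in net]
    if not matches:
        return None
    # The most specific (longest) prefix wins, as in normal IP routing.
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.20.56.42"))  # cloudflared-tunnel
print(next_hop("10.20.1.1"))    # ipsec-tunnel
```

Traffic for the /24 now bypasses the IPsec termination device entirely, while the rest of the /16 continues over the existing tunnel.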

![An evolved architecture diagram showing software connector based tunnels offloading \(or replacing\) the IPsec tunnels.](https://developers.cloudflare.com/_astro/phase-2.DT29_r7n_Z1R2m70.svg "Figure 5: An evolved phase 2 architecture diagram showing software connector based tunnels offloading (or replacing) the IPsec tunnels.")

Figure 5: An evolved phase 2 architecture diagram showing software connector based tunnels offloading (or replacing) the IPsec tunnels.

In some cases (such as the Asia datacenter above) this might mean that the IPsec tunnels are not needed anymore and software connectors are the sole connection into the infrastructure. In that case, the whole 10.30.0.0/16 subnet can be managed by `cloudflared` and the IPsec tunnel (and its related hardware) decommissioned. It is likely that this phase will be an ongoing effort: as more applications are mapped and traffic flows deemed eligible for software connector based tunnels, they will be migrated as needed.

## Phase 3: Application-based policies

The first two phases of this guide have resulted in a design very similar to traditional VPNs leveraging VPN concentrators, where policy enforcement happens at the perimeter. Although we've done so for reasons laid out in the introduction, the promise of a zero trust architecture is to improve security posture by defining smaller application/network segments for which security policies are applied as close to the resource as possible.

This phase is about making the resources exposed behind the tunnels smaller and more isolated to prevent lateral movement within internal networks. Use the visibility gained in the previous phases to select an application (or set of applications) and its associated IP addresses, and deploy a dedicated software connector instance. See [the documentation on how to deploy connectors and expose private networks](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/create-remote-tunnel/) and, in step 3, configure the IP addresses of the application.

![Example architecture of tunnels deployed per application to improve security posture by reducing lateral movement within data centers.](https://developers.cloudflare.com/_astro/phase-3.CMITQCmp_Z109DDG.svg "Figure 6: Example phase 3 architecture of tunnels deployed per application to improve security posture by reducing lateral movement within data centers.")

Figure 6: Example phase 3 architecture of tunnels deployed per application to improve security posture by reducing lateral movement within data centers.

Because each software connector instance is dedicated to the application, it can be configured as the sole entry point. Traffic to and from the network segment where the application resides can be fully blocked off, preventing any internal lateral movement. All that is required is a valid outbound route to the Internet for the software connector to create the tunnel, and a path for the software connector's host to reach the network/application. The access controls operate not just at the IP routing level, but also at the protocol level. With this approach you can grant access only to HTTPS on a server that may also be running SSH and other services, defining access specifically to that application port.
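Conceptually, the protocol-level control is a tuple match on destination IP, port, and protocol rather than a route. A minimal sketch, using one of the example application servers from this guide and a policy that permits only its HTTPS port:

```python
# Only the application's HTTPS port is permitted, even though the same
# host also runs SSH. The tuple below uses an example server from the text.
ALLOWED = {("10.20.56.1", 443, "tcp")}

def is_allowed(dest_ip: str, dest_port: int, protocol: str) -> bool:
    """Return True if policy permits this destination tuple."""
    return (dest_ip, dest_port, protocol.lower()) in ALLOWED

print(is_allowed("10.20.56.1", 443, "tcp"))  # the application port
print(is_allowed("10.20.56.1", 22, "tcp"))   # SSH on the same host is blocked
```

In Cloudflare's Gateway network policies this selector-based matching is combined with identity and posture signals, so the real policy is richer than this port check.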

In the example above, subnets X and Y are completely segmented from the rest of the data center. Traffic to the applications running in those subnets (10.10.45.1 and 10.20.56.1, respectively) can only flow through Cloudflare, with the associated authentication and authorization policies applied. A one-to-one deployment of software connectors per application is not always the right approach; you might have several applications running on a private network and deploy multiple servers running `cloudflared` to handle traffic for all of them.

### Clientless access

In addition to routing traffic for private IP addresses, `cloudflared` can expose internal applications via publicly resolvable hostnames. This makes it possible to connect to such applications without using any software on the device. This can be very useful for use cases where you are unable to install software on the device, such as giving application access to contractors or partners.

In the example below, `erp.example.com` is added as [Public Hostname](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/) to the tunnel, routing traffic to port 80 and/or 443 to a specific IP address on the internal subnet Y. Access to this resource from the Internet is then protected using [Cloudflare Access security policies](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) which also rely on the IdP connection you've set up for onboarding your employees.

![Adding a public hostname to a tunnel for clientless access to internal applications.](https://developers.cloudflare.com/_astro/clientless-access.Cnw_KhKM_Z109DDG.svg "Figure 7: Adding a public hostname to a tunnel for clientless access to internal applications.")

Figure 7: Adding a public hostname to a tunnel for clientless access to internal applications.
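The routing step in this example boils down to mapping a public hostname onto an internal service address, with unmatched hostnames falling through to a catch-all response (as `cloudflared` does with a final `http_status:404` ingress rule). A sketch under those assumptions, with the hostname from the example and an illustrative internal address:

```python
# Public hostnames configured on the tunnel, mapped to internal services.
# erp.example.com follows the example above; the target address is illustrative.
PUBLIC_HOSTNAMES = {
    "erp.example.com": "https://10.20.56.1:443",  # app on internal subnet Y
}

def route_request(hostname: str) -> str:
    """Return the internal service for this hostname, or a 404 catch-all."""
    return PUBLIC_HOSTNAMES.get(hostname.lower(), "http_status:404")

print(route_request("erp.example.com"))    # https://10.20.56.1:443
print(route_request("other.example.com"))  # http_status:404
```

Before any request reaches this routing step, the Cloudflare Access policies in front of the hostname decide whether the user is allowed in at all.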

Not all applications will be suitable for this type of access. Only HTTP(S) applications or [applications that can be rendered in the browser](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/non-http/), such as SSH and VNC, are supported. To learn more about such a deployment and additional advanced options such as cookie settings, browser isolation, and using the Access token in your application for authentication, see the [self-hosted application documentation](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/self-hosted-public-app/).

## Summary

This design guide started with a fairly traditional VPN environment with its common features, limitations, and risks. By combining the Cloudflare device agent on user devices with Cloudflare WAN towards the data center networks, phases one and two described a low-risk design for migrating using existing technology and knowledge. This already brought benefits: decommissioning VPN concentrators, improving network visibility, and improving performance for users accessing internal resources.

Phase three improved on the design by introducing identity-based network policies and smaller network segments with software connectors. This has further opened up the opportunity to offer other zero trust access models such as clientless access for web applications and browser-rendered VNC or SSH sessions.

The flexibility of the Cloudflare connectivity cloud to connect any device, application, and network enables this zero trust migration to be taken step by step, reducing risk and allowing network and security teams to adapt their knowledge and architectures at the pace required by their organizations.

### Further reading

* Cloudflare WAN integration: [WARP on-ramp to Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/zero-trust/cloudflare-one-client/)
* Policy configuration: [Gateway Network policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/network-policies/)


---

---
title: Securely deliver applications with Cloudflare
description: Cloudflare provides a complete suite of services around application performance, security, reliability, development, and Zero Trust.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Securely deliver applications with Cloudflare

**Last reviewed:**  over 2 years ago 

## Overview and the Cloudflare advantage

Cloudflare provides a complete suite of services around application performance, security, reliability, development, and Zero Trust. Cloudflare’s global network is approximately 50 ms away from about 95% of the Internet-connected population and consists of services that run on every server in every data center. The global scale of Cloudflare also allows for a robust threat intelligence source which is constantly fed back into Cloudflare security products to enhance the machine learning models and services even further.

![Cloudflare provides application performance and security services that run on every server in every data center, ensuring the highest level of performance regardless of user location.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-1.WZGcpCJi_Z1LLB6S.webp "Figure 1: Cloudflare services run on every server in every data center")

Figure 1: Cloudflare services run on every server in every data center

Another differentiator is that Cloudflare is not a point product, unlike some vendors that only offer API security, zero trust services, or specific performance/security services. Customers have started moving away from the point-product approach due to operational and management complexity, the inability to leverage cross-product innovation and integrations, and the inability to leverage the scale of a network and its resources across all services.

![Cloudflare’s global platform integrates zero trust, network and application services through several product suites including Cloudflare One, Cloudflare’s Developer Platform and our compliance and privacy features.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-2.BYvDdWY__1vj9Qs.webp "Figure 2: Cloudflare Global Cloud Platform.")

Figure 2: Cloudflare Global Cloud Platform.

Additionally, customers do not want to be locked into a specific cloud provider, yet many performance and security vendors lock customers into their platform by optimizing services for their own cloud and making it operationally difficult to adopt a multi-cloud strategy.

Cloudflare is agnostic to where the workloads run or what cloud provider is being used. Customers get the same consistent unified dashboard and operational simplicity whether workloads run in a specific cloud or on-premise. Unlike many vendors, taking advantage of cross-product innovations and integration does not depend on customers using a specific cloud for workloads.

This document demonstrates how easy it is to use Cloudflare’s collective services regardless of where workloads run. For the example in this document, an application workload will use Cloudflare DNS, CDN, WAF, and Access while also using Cloudflare Tunnel to connect securely to the Cloudflare network. It’s rare for a vendor to provide this comprehensive level of security capability in an operationally simple and consistent fashion.

For additional details and reference architectures on specific services, see our [reference architecture documents](https://developers.cloudflare.com/reference-architecture/).

## Onboarding and protecting the application with Cloudflare

Cloud-based security and performance providers like Cloudflare work as a reverse proxy. A reverse proxy is a server that sits in front of web servers and forwards client requests to those web servers. Reverse proxies are typically implemented to help increase security, performance, and reliability.

Normal traffic flow without a reverse proxy would involve a client sending a DNS lookup request, receiving the origin IP address, and communicating directly to the [origin server(s) ↗](https://www.cloudflare.com/learning/cdn/glossary/origin-server/).

When a reverse proxy is introduced, the client still sends a DNS lookup request to its resolver, which is the first stop in the DNS lookup. In some cases, the vendor providing the reverse proxy also provides DNS services; this is visualized in Figure 3 below. The client now communicates with the reverse proxy, and the reverse proxy communicates with the origin server(s). This traffic flow, where all traffic passes through the reverse proxy, makes it easy to implement additional application security, performance, and reliability services.

![Cloudflare provides reverse proxy functionality between clients and origin servers, enabling greater user and application security.](https://developers.cloudflare.com/_astro/Figure_3.CznC1gz__Z1Ljx9F.webp "Figure 3: Same vendor providing DNS and security/performance services via proxy.")

Figure 3: Same vendor providing DNS and security/performance services via proxy.

In this example, we have a website running on one of the major cloud providers and we want to use Cloudflare DNS, CDN, WAF, and Access. We want to start with these services for demonstration purposes; customers can expand these to include other Cloudflare services as desired. Cloudflare provides the benefit of decoupling all services from the cloud provider and if we want to change cloud providers later or protect other applications running in other clouds, the dashboard and operations all stay consistent.

Customers can easily and securely connect their web application to the Cloudflare network and leverage application performance and security services. There are several connectivity options that fit different use cases.

### Connectivity options

#### Public connection over the Internet

In the most basic scenario, the Cloudflare proxy will route the request traffic over the Internet to the origin. In this setup the client and origin are both endpoints directly connected to the Internet via their respective ISPs. The request is routed over the Internet from the client to Cloudflare proxy (via DNS configuration) before the proxy routes the request over the Internet to the customer's origin.

The below diagram describes the default connectivity to origins as requests flow through the Cloudflare network. When a request for the origin resolves to an IP hosted by Cloudflare, that request is then handled by the Cloudflare network and forwarded onto the origin server over the public Internet.

![Cloudflare provides application performance and security services over Internet connectivity.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-4.B97I5-Ti_Zfzpzg.webp "Figure 4: Connectivity from Cloudflare to origin server(s) via Internet")

Figure 4: Connectivity from Cloudflare to origin server(s) via Internet

The origin is connected directly to the Internet and traffic is routed to the origin based on the IP address resolved by Cloudflare DNS. The DNS A record associates the domain name with the IP address of the origin server(s) or typically a load balancer the origin(s) are sitting behind.

In this model, when Cloudflare DNS receives a query for the A record, a Cloudflare anycast IP address is returned, so all traffic is routed through Cloudflare. However, unless additional precautions are taken, it’s possible for the origin to be reached directly bypassing Cloudflare if someone knows the IP address of the origin(s).

Additionally, in this model, the customer has to open firewall rules so the origin(s) or web server(s) are accessible on the respective HTTP/HTTPS ports. However, customers can choose to leverage [Dedicated CDN Egress IPs](https://developers.cloudflare.com/smart-shield/configuration/dedicated-egress-ips/), which allocates customer-specific IPs that Cloudflare will use to connect back to your origins. We recommend allowlisting traffic from only these networks to prevent direct access to the origin.
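Origin-side allowlisting is a simple source-network check: accept a connection only if its source IP falls inside the dedicated egress ranges assigned to you. A sketch with Python's `ipaddress` module, using a TEST-NET placeholder range rather than a real allocation:

```python
import ipaddress

# Dedicated egress ranges assigned by Cloudflare would go here.
# 203.0.113.0/29 is an illustrative TEST-NET-3 placeholder, not a real allocation.
ALLOWED_SOURCES = [
    ipaddress.ip_network("203.0.113.0/29"),
]

def accept_connection(source_ip: str) -> bool:
    """Accept only connections sourced from the allowlisted egress ranges."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in ALLOWED_SOURCES)

print(accept_connection("203.0.113.4"))   # inside the allowlisted range
print(accept_connection("198.51.100.9"))  # direct access attempt, rejected
```

In production this check lives in the origin firewall or load balancer rules, and is paired with the origin-pull verification described below rather than relied on alone.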

In addition to IP blocking at the origin-side firewall, we also strongly recommend additional verification of traffic via either the ["Full (Strict)" SSL setting](https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/) or [mTLS auth](https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/) to ensure all traffic is sourced from requests passing through the customer configured zones.

Cloudflare also supports [Bring Your Own IP (BYOIP)](https://developers.cloudflare.com/byoip/). When BYOIP is configured, the Cloudflare global network announces a customer’s own IP prefixes, and the prefixes can be used with the respective Cloudflare Layer 7 services. This allows customers to proxy traffic through Cloudflare and still have the customer IP address returned in the DNS resolution. This can be [beneficial ↗](https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/) where the customer’s IP prefixes are already allowlisted and updating firewall rules is not desirable or would present an administrative hurdle.

#### Private connection over the Internet - Tunnel

The recommended option when connecting origin(s) over the Internet is to have a private tunnel/connection over the Internet for additional security.

A traditional VPN setup is not optimal due to backhauling traffic to a centralized VPN gateway location which then connects back to the origin; this negatively impacts end-to-end throughput and latency. Cloudflare offers [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) software that provides an encrypted tunnel between your origin(s) and Cloudflare’s network. Also, since Cloudflare leverages anycast on its global network, the origin(s) will, like clients, connect to the closest Cloudflare data center(s) and therefore optimize the end-to-end latency and throughput.

When you run a tunnel, a lightweight daemon in your infrastructure, cloudflared, establishes four outbound-only connections between the origin server and the Cloudflare network. These four connections are made to four different servers spread across at least two distinct data centers providing robust resiliency. It is possible to install many cloudflared instances to increase resilience between your origin servers and the Cloudflare network.

Cloudflared creates an encrypted tunnel between your origin web server(s) and Cloudflare’s nearest data center(s), without the need for opening any public inbound ports. This provides for simplicity and speed of implementation as there are no security changes needed on the firewall. This solution also lowers the risk of firewall misconfigurations which could leave your company vulnerable to attacks.

The firewall and security posture is hardened by locking down all origin server ports and protocols via your firewall. Once Cloudflare Tunnel is in place and respective security applied, all requests on HTTP/S ports are dropped, including volumetric DDoS attacks. Data breach attempts, such as snooping of data in transit or brute force login attacks, are blocked entirely.

![Cloudflare provides application performance and security services securely with Cloudflare Tunnel over the Internet.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-5.CMyrXFd3_Z2aqEj4.webp "Figure 5: Connectivity from Cloudflare to origin server(s) via Cloudflare Tunnel")

Figure 5: Connectivity from Cloudflare to origin server(s) via Cloudflare Tunnel

The above diagram describes the connectivity model through Cloudflare Tunnel. This option provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address. Cloudflare Tunnel can connect HTTP web servers, SSH servers, remote desktops, and other protocols safely to Cloudflare.

#### Direct connection - Cloudflare Network Interconnect (CNI)

Most vendors also provide an option of directly connecting to their network. Direct connections provide security, reliability, and performance benefits over using the public Internet. These direct connections are done at peering facilities, Internet exchanges (IXs) where Internet service providers (ISPs) and Internet networks can interconnect with each other, or through vendor partners.

![Cloudflare provides application performance and security services over a direct connection, Cloudflare Network Interconnect.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-6.Cgv5GAfz_Z1GxNJF.webp "Figure 6: Connectivity from Cloudflare to origin server(s) via Cloudflare Network Interconnect (CNI)")

Figure 6: Connectivity from Cloudflare to origin server(s) via Cloudflare Network Interconnect (CNI)

The above diagram describes origin connectivity through [Cloudflare Network Interconnect (CNI) ↗](https://blog.cloudflare.com/cloudflare-network-interconnect/) which allows you to connect your network infrastructure directly with Cloudflare and communicate only over those direct links. CNI allows customers to interconnect branch and headquarter locations directly with Cloudflare. Customers can interconnect with Cloudflare in one of three ways: over a private network interconnect (PNI) available at [Cloudflare peering facilities ↗](https://www.peeringdb.com/net/4224), via an IX at any of the [many global exchanges Cloudflare participates in ↗](https://bgp.he.net/AS13335#%5Fix), or through one of Cloudflare’s [interconnection platform partners ↗](https://blog.cloudflare.com/cloudflare-network-interconnect-partner-program).

Cloudflare’s global network allows for ease of connecting to the network regardless of where your infrastructure and employees are.

## Routing to the origin

Regardless of which connectivity model is used, DNS resolution is done first and provides Cloudflare the information of where to route to. Cloudflare can support configurations as an authoritative DNS provider, secondary DNS provider, or non-Cloudflare DNS (CNAME) setups for a zone. For Cloudflare performance and security services to be applied, the traffic must be routed to the Cloudflare network.

### Example: Securing your application with Cloudflare Tunnel and Access

#### Securing connectivity with Cloudflare Tunnel

Although there are multiple ways to onboard an application to use Cloudflare services, a common approach is to use Cloudflare DNS as the primary authoritative DNS. The additional benefit for customers here is that Cloudflare is consistently ranked the [fastest available authoritative DNS provider globally ↗](https://www.dnsperf.com/#!dns-providers).

In this example, we’ll connect our origin server to Cloudflare securely with Cloudflare Tunnel. You can configure DNS in the dashboard and enter the site you want to onboard. You’ll receive a pair of Cloudflare nameservers to configure at your domain registrar’s site. Once that’s completed, Cloudflare becomes the primary authoritative DNS provider.

If Cloudflare is configured for just routing over the Internet, the DNS configuration would look something like below, where the A record points to the IP address of the origin server or respective load balancer. Because Cloudflare is acting as a reverse proxy, the status shows as "Proxied," and all Cloudflare services such as CDN, WAF, and Access can be used.

![Typical configuration for directing traffic through Cloudflare network.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-7.DSuS_Zmx_Z1KVrsr.webp "Figure 7: DNS configuration for 'cftestsite3.com' - pointing to IP address of origin or load balancer.")

Figure 7: DNS configuration for 'cftestsite3.com' - pointing to IP address of origin or load balancer.

We can also use Cloudflare Tunnel over the Internet to provide more security and remove the need to open any inbound firewall rules to the origin(s). In this case, instead of an A record in the DNS configuration, we will have a CNAME record pointing to the tunnel we deploy. When we deploy a tunnel from the origin to the Cloudflare network, DNS is configured automatically: a CNAME record pointing to the tunnel is created, which ensures all traffic going to the origin(s) is routed over the Cloudflare Tunnel.

To create and manage tunnels, you need to install and authenticate cloudflared on your origin server. cloudflared is what connects your server to Cloudflare’s global network.

There are two options for creating a tunnel - [via the dashboard](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/create-remote-tunnel/) or [via the command line](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/do-more-with-tunnels/local-management/create-local-tunnel/). We recommend starting with the dashboard, since it allows you to manage the tunnel from any machine.

A remotely-managed tunnel only requires the tunnel token to run. Anyone with access to the token will be able to run the tunnel. You can get a tunnel’s token from the dashboard or via the API as shown below. The command provided in the dashboard will install and configure cloudflared to run as a service using an auth token.
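As a sketch of what fetching the token via the API looks like, the snippet below composes the token endpoint for a remotely-managed tunnel on the v4 API. The account ID, tunnel ID, and `CF_API_TOKEN` are placeholders; substitute your own values.

```shell
# Placeholders - substitute your own account ID and tunnel ID.
ACCOUNT_ID="0123456789abcdef0123456789abcdef"
TUNNEL_ID="f70ff985-a4ef-4643-bbbc-4a0ed4fc8415"

# Token endpoint for a remotely-managed tunnel on the Cloudflare v4 API.
TOKEN_URL="https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/cfd_tunnel/${TUNNEL_ID}/token"
echo "${TOKEN_URL}"

# Fetch the token (requires an API token with Cloudflare Tunnel permissions):
# curl -s -H "Authorization: Bearer ${CF_API_TOKEN}" "${TOKEN_URL}"
```

Anyone holding the returned token can run the tunnel, so treat it like any other credential.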

In the Cloudflare dashboard, navigate to Zero Trust > Networks > Connectors. Select the "Create a tunnel" button, name the tunnel, and save.

![Cloudflare allows for easily creating and naming a tunnel.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-8.Z4WG1c9g_23Uw1c.webp "Figure 8: Cloudflare Tunnel Creation.")

Figure 8: Cloudflare Tunnel Creation.

Next, you’ll be presented with a screen where you select the operating system (OS) of your origin server. You will then be provided a CLI command that you can run on your origin that will automatically download and install the Cloudflare Tunnel software.

![Cloudflare supports tunnel deployment/configuration for all popular operating systems.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-9.CdoD37WQ_1e1PXb.webp "Figure 9: Instructions to install and run a connector.")

Figure 9: Instructions to install and run a connector.

Below, the CLI command has been run to download and install the Cloudflare Tunnel software.

![Cloudflare supports easy deployment/configuration of Cloudflare Tunnel via CLI.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-10.CMYXjvNp_1gq2nY.webp "Figure 10: Downloading and installing Cloudflare Tunnel")

Figure 10: Downloading and installing Cloudflare Tunnel

The connector will now automatically be displayed as connected.

![On successful configuration, Cloudflare displays the Connectors and status of connection to Cloudflare network.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-11.gt8WHsdP_Z2vn9v1.webp "Figure 11: Cloudflare Tunnel Connectors showing in dashboard.")

Figure 11: Cloudflare Tunnel Connectors showing in dashboard.

In the dashboard, you can now continue with the next step, which is to map the tunnel to a service on the origin as shown below. In this case, all HTTPS traffic will be sent over the tunnel to the origin server.

![Cloudflare Tunnel configuration allows for routing traffic to specific services running on the origin.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-12.NvahhCan_ZS8PHs.webp "Figure 12: Cloudflare Tunnel Configuration.")

Figure 12: Cloudflare Tunnel Configuration.

You can now see in the dashboard that the tunnel has been created and is healthy.

![Cloudflare provides health status of deployed tunnels.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-13.-gtSCOhj_1sDTjd.webp "Figure 13: Cloudflare Tunnel is created and healthy.")

Figure 13: Cloudflare Tunnel is created and healthy.

Further, if we look at the DNS configuration, we can see a DNS record was automatically created pointing to the tunnel ID. When you create a tunnel, Cloudflare generates a subdomain of `cfargotunnel.com` with the UUID of the created tunnel. Unlike publicly routable IP addresses, the subdomain will only proxy traffic for a DNS record in the same Cloudflare account. It’s not possible for another user to create a DNS record in another account or system to proxy traffic over this tunnel.

![Cloudflare Tunnel automatically creates a CNAME DNS entry directing traffic to the deployed tunnel](https://developers.cloudflare.com/_astro/secure-app-dg-fig-14.7RsLkGj__ZAP93B.webp "Figure 14: Cloudflare DNS CNAME record automatically created")

Figure 14: Cloudflare DNS CNAME record automatically created
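As a concrete sketch, the CNAME target is simply the tunnel's UUID prefixed to `cfargotunnel.com`; the UUID below is a placeholder.

```shell
# Placeholder tunnel UUID - use the ID of your own tunnel.
TUNNEL_ID="f70ff985-a4ef-4643-bbbc-4a0ed4fc8415"

# The proxied CNAME record for the hostname points at this target.
CNAME_TARGET="${TUNNEL_ID}.cfargotunnel.com"
echo "${CNAME_TARGET}"
```

Because the target only resolves for DNS records in the same Cloudflare account, knowing the UUID alone does not let anyone else proxy traffic over the tunnel.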

We now have secure application access. Users can only access the application through the tunnel connected to the Cloudflare network. Further, since Tunnel uses outbound connections to Cloudflare, and any return traffic from an outbound connection is allowed, no inbound firewall rule is required, which reduces overhead and simplifies operations.

If you were to deploy the tunnel via CLI, after the tunnel install, you would also need to authenticate [cloudflared](https://developers.cloudflare.com/cloudflare-one/glossary/?term=cloudflared) on the origin server. cloudflared is what connects the server to Cloudflare’s global network. This authentication can be done with the `cloudflared tunnel login` command as shown below.

![Cloudflare provides for easily authenticating Cloudflare Tunnel with a Cloudflare account.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-15.SDbZBRZ0_Z1EOknB.webp "Figure 15: Authenticating cloudflared on the origin server.")

Figure 15: Authenticating cloudflared on the origin server.

You’ll be asked to select the zone you want to add the tunnel to as shown below.

![Cloudflare can enforce tunnel-only connections to a specific zone.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-16.HaC4ddok_ZrNh8C.webp "Figure 16: Adding Cloudflare Tunnel to a selected zone.")

Figure 16: Adding Cloudflare Tunnel to a selected zone.

Next, you’ll authorize the tunnel for the zone.

![Users must authorize the zone a tunnel connects to.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-17.Q5VBNA6l_Z23QVho.webp "Figure 17: Authorizing the tunnel for a zone.")

Figure 17: Authorizing the tunnel for a zone.

Finally, you should receive confirmation that a certificate has been installed allowing your origin to create a tunnel on the respective zone.

![Cloudflare provides a confirmation on successfully installing a certificate to origin, allowing it to connect via Tunnel to the Cloudflare network.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-18.BGUm8dv9_sCLxR.webp "Figure 18: Confirmation that certificate has been successfully installed.")

Figure 18: Confirmation that certificate has been successfully installed.

#### Securing the application with Cloudflare Access

The current setup, as described earlier in this document, is shown below, where the origin server(s) are connected to the Cloudflare network via Tunnel. Now, we can start to consume Cloudflare services.

![Cloudflare behaves as a proxy where traffic is directed and performance and security services applied.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-19.BOD18Aay_Z20qBVH.webp "Figure 19: Web app securely connected to Cloudflare network for performance and security services.")

Figure 19: Web app securely connected to Cloudflare network for performance and security services.

Currently the origin is only accessible via Cloudflare Tunnel. Because a public hostname is used, access to the origin is public. The application is secured behind Cloudflare and protected from DDoS and other types of attacks. For additional security, Cloudflare Access can be used to place a layer of authentication and access controls in front of the tunneled application. Access enforces an authentication step before requests to the origin can be served. Many other identity, device, and network attributes can be used in the policy, allowing customers to define access beyond just authentication. For example, customers can require that requests originate from a specific network, or that the user's device is running the latest operating system.

Below, you can see an application has been created for cftestsite3.com.

![Cloudflare Access allows for creating application policies to secure application access.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-20.Uy7D6cRj_Z2mWRLk.webp "Figure 20: Cloudflare Access Policy Configuration.")

Figure 20: Cloudflare Access Policy Configuration.

Looking at the policy configuration below, you can see it requires users to be part of the "Secure Employees" Access group.

![Cloudflare allows assigning multiple Access groups to an application to enforce a set of predefined policies.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-21.Do5840XS_Z1qYz2I.webp "Figure 21 : Access group assigned to the application.")

Figure 21 : Access group assigned to the application.

If we take a deeper look at the "Secure Employees" Access group, it can be seen below that members are from the company’s Okta identity provider (IdP) group called "Employees." Further, the Access group is enforcing multi-factor authentication (MFA).

![Cloudflare Access groups allow for simplicity in defining criteria for certain groups/individuals to access the application.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-22.BkeW7CIH_ZIQ8QU.webp "Figure 22 : Access group configuration with defined group criteria.")

Figure 22 : Access group configuration with defined group criteria.

Looking at the "Image and Video Gallery" application, under "Authentication," customers can also manually select which identity providers users may use to connect to this application.

![Cloudflare Access supports all major Identity Providers \(IdPs\) and users can manually select which IdPs can be used.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-23.Dh6tiJyh_Z1CbWRa.webp "Figure 23 : Manually selecting identity providers users can use.")

Figure 23 : Manually selecting identity providers users can use.

We now have secure access to the origin(s) via Tunnel, as well as authentication and access policies applied to the application via Access. When users try to access the site, they are greeted with a Cloudflare Access page asking them to authenticate with the configured IdP; the page can be customized to the customer's liking as shown below.

![Using Cloudflare Access configured with a company’s IdP, users are forced to authenticate to access the application.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-24.DLtovmiZ_1y5SGp.webp "Figure 24 : Sign-in via IdP configured in Access.")

Figure 24 : Sign-in via IdP configured in Access.

### Using other Cloudflare services (CDN, WAF, Security Analytics, etc.)

In the current setup, the origin server(s) are securely connected to the Cloudflare network via Cloudflare Tunnel and Cloudflare Access via policies enforcing authentication and other security requirements.

Since Cloudflare is already set up and acting as a reverse proxy for the site, traffic is being directed through Cloudflare, so all Cloudflare services can easily be leveraged including CDN, Security Analytics, WAF, API Shield, Bot Management, client-side security, etc.

When a DNS lookup request is made by a client for the respective website, in this case "cftestsite3.com," Cloudflare returns an anycast IP address, so all traffic is directed to the closest data center where all services will be applied before the request is forwarded over Cloudflare Tunnel to the origin server(s).

Cloudflare CDN leverages Cloudflare’s global anycast edge network. In addition to using anycast for network performance and resiliency, the Cloudflare CDN leverages [Argo Tiered Cache](https://developers.cloudflare.com/cache/how-to/tiered-cache/) to deliver optimized results while saving costs for customers. Customers can also enable [Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/) to find the fastest network path to route requests to the origin server. As shown below, the Cloudflare CDN is now caching content globally and granular CDN policies to affect default behavior can be applied.

![Cloudflare provides analytics for visibility into caching data and performance.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-25.NHZVy6aF_Z2dl3G6.webp "Figure 25 : Cloudflare Caching Analytics.")

Figure 25 : Cloudflare Caching Analytics.

There are [different caching topologies and configurations available](https://developers.cloudflare.com/reference-architecture/architectures/cdn/). Below, you can see a Cache Rule has been configured to cache requests to the domain and override the origin TTL.

![Cloudflare Cache Rules allow for granular control of caching.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-26.DeIWbffl_1wiJVI.webp "Figure 26 : Cloudflare rule configuration.")

Figure 26 : Cloudflare rule configuration.
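As a sketch, a rule like the one in Figure 26 pairs a filter expression in the Rules language with an edge TTL override. The hostname and TTL below are illustrative placeholders:

```txt
# Illustrative Cache Rule (values are placeholders)
When incoming requests match:  (http.host eq "cftestsite3.com")
Then:
  Cache eligibility:  Eligible for cache
  Edge TTL:           Override origin TTL, 2 hours
```

Overriding the origin TTL is useful when the origin sends short or missing `Cache-Control` headers but the content is safe to cache longer at the edge.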

[Cloudflare Cache Reserve](https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/) has also been enabled by clicking the "Enable storage sync" button under "Caching > Cache Reserve" in the dashboard. Cache Reserve leverages Cloudflare’s persistent object storage, R2, to eliminate egress costs from other public cloud providers. It improves cache hit ratios by enabling customers to persistently cache data with the push of a single button.

![Cloudflare provides one-click enablement of Cache Reserve which provides persistent object storage for CDN to cut down on egress fees charged by many cloud providers.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-27.B9L-Y7WG_Z24NLlQ.webp "Figure 27 : Cloudflare Cache Reserve.")

Figure 27 : Cloudflare Cache Reserve.

Additionally, as shown below, Cloudflare Security Analytics brings together all of Cloudflare’s detection capabilities and provides a global view and important insights for all traffic going to the respective site. As traffic is being routed through the Cloudflare network, Cloudflare has visibility into threats and insights which are exposed to customers in the dashboard, logs, and reporting.

![Cloudflare Security Analytics brings together all of Cloudflare’s detection capabilities in one place.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-28.bElqNgGP_Z1HDfbx.webp "Figure 28 : Cloudflare Security Analytics.")

Figure 28 : Cloudflare Security Analytics.

Cloudflare WAF rules can be applied to enforce policies on traffic inline. Below, a firewall policy is in place to log all traffic with a bot score below 30 and a WAF attack score below 50. A bot score below 30 signifies traffic classified as either automated or likely automated, and a WAF attack score below 50 signifies traffic classified as either malicious or likely malicious.

![Cloudflare WAF allows for easy configuration of rules with visibility into how often the rule is hit.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-29.JeDDUmel_VW7QL.webp "Figure 29 : Cloudflare WAF.")

Figure 29 : Cloudflare WAF.

Cloudflare WAF allows for granular policies that can leverage many different request criteria including header information. Customers can take a [variety of actions](https://developers.cloudflare.com/firewall/cf-firewall-rules/actions/) including logging, blocking, and challenge.

![Cloudflare allows for matching on a combination of request attributes and Cloudflare data/fields to determine if specific actions should be taken.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-30.Bt_pyY4I_lFI5E.webp "Figure 30 : Cloudflare WAF Rule Configuration.")

Figure 30 : Cloudflare WAF Rule Configuration.
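The rule shown in Figures 29 and 30 could be sketched in the Rules language roughly as follows, using the bot management and WAF attack score fields (thresholds mirror the policy described above):

```txt
# Custom rule expression (action: Log)
(cf.bot_management.score lt 30) and (cf.waf.score lt 50)
```

Starting with a Log action lets you review matches in Security Analytics before switching the rule to Block or Managed Challenge.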

Customers can use WAF to implement and use custom rules, rate limiting rules, and managed rules. A brief description of each is provided below.

* WAF Custom Rules: provide the ability to create custom rules based on different request attributes and header information to block any threat.
* WAF Rate Limiting Rules: prevent abuse, DDoS, and brute-force attempts, and provide API-centric controls.
* WAF Managed Rules  
   * Cloudflare Managed Ruleset: provides advanced zero-day vulnerability protection.  
   * Cloudflare OWASP Core Ruleset: blocks common web application vulnerabilities, including some from the OWASP Top 10.  
   * Cloudflare Leaked Credential Check: checks an exposed-credential database for popular content management system (CMS) applications.

The same methodology applies for all other Cloudflare Application Performance and Security products (API Shield, Bot Management, etc.): once configured to route traffic through the Cloudflare network, customers can start leveraging the Cloudflare services. Figure 31 displays Cloudflare’s Bot Analytics which categorizes the traffic based on bot score, shows the bot score distribution, and other bot analytics. All of the request data is captured inline and all enforcement based on defined policies is also done inline.

![Cloudflare provides analytics and insights into bot traffic including bot score distribution.](https://developers.cloudflare.com/_astro/secure-app-dg-fig-31.B-ExrLSz_2oBbOp.webp "Figure 31 : Cloudflare Bot Management - Bot Analytics.")

Figure 31 : Cloudflare Bot Management - Bot Analytics.

## Summary

Cloudflare offers comprehensive application performance and security services. Customers can easily onboard and start using all performance and security services by routing traffic to their origin server(s) through Cloudflare’s network. Additionally, Cloudflare offers multiple connectivity options including Cloudflare Tunnel for securely connecting origin server(s) to Cloudflare’s network.


---

---
title: Securing guest wireless networks
description: This guide is designed for IT or security professionals who are looking at Cloudflare to help secure their guest wireless networks.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Securing guest wireless networks

**Last reviewed:** over 1 year ago

## Introduction

Many organizations and businesses offer free wireless Internet access to their customers, clients, patients, students, and visitors. In industries like hospitality, providing guest Wi-Fi is often essential. For colleges and universities, having a reliable and secure Wi-Fi service can be a significant factor in attracting potential students and visitors.

Offering free wireless Internet access brings several benefits. Businesses use guest Wi-Fi to enhance customer engagement by directing users to landing pages for marketing campaigns or offering coupons. Additionally, many guest Wi-Fi systems collect valuable user analytics, such as email addresses, browsing behavior, and even dwell time in specific locations. This data can help influence decisions like product placement in stores or drive follow-up email marketing campaigns.

However, providing guest Wi-Fi also introduces risks. Malicious users could exploit your network for illegal activities, such as accessing prohibited content, purchasing contraband, or engaging in cybercrime. In some cases, businesses like hotels, cafes, and libraries have faced lawsuits for allegedly enabling illegal downloads through their guest Wi-Fi. These lawsuits, often filed by copyright holders, claim that businesses facilitated piracy by failing to monitor or control the content accessed or downloaded by their guests.

![Figure 1: Guest networks are often directly connected to the Internet with little security.](https://developers.cloudflare.com/_astro/figure1.BV1Def0b_1Rc2QB.svg "Figure 1: Guest networks are often directly connected to the Internet with little security.")

Figure 1: Guest networks are often directly connected to the Internet with little security.

While it may be unlikely that your organization could face criminal charges, your organization could become part of lengthy investigations, potentially resulting in legal expenses and reputation damage. In this guide, you will learn how Cloudflare can help minimize risk, provide visibility into guest Internet activity and [better secure your guest wireless network ↗](https://www.cloudflare.com/zero-trust/solutions/secure-guest-wifi/).

### Who is this document for and what will you learn?

This reference architecture is designed for IT or security professionals who are looking at Cloudflare to help secure their guest wireless networks. To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (5 minute read) or [video ↗](https://www.youtube.com/watch?v=XHvmX3FhTwU) (2 minutes)
* Cloudflare Zero Trust | [https://www.cloudflare.com/zero-trust/ ↗](https://www.cloudflare.com/zero-trust/)
* SASE Architecture with Cloudflare | [/reference-architecture/architectures/sase/](https://developers.cloudflare.com/reference-architecture/architectures/sase/)

This reference architecture guide will help readers understand:

1. **Cloudflare Gateway DNS**: Learn how to integrate Cloudflare Gateway DNS policies into common guest wireless deployment scenarios.
2. **Best practices for DNS policies**: Discover effective methods for building guest wireless DNS policies to enforce your acceptable use policy and prevent malicious activities.
3. **Enhanced visibility and security**:
   * Use the Cloudflare Zero Trust dashboard to access detailed logs and analytics, offering insights into DNS queries, traffic patterns, and potential security threats.
   * Enable **Logpush** to export logs to external storage solutions for long-term analysis or compliance purposes.
   * Integrate with your SIEM (Security Information and Event Management) platform to correlate Cloudflare logs with other security data, streamlining incident detection and response.

### Gateway DNS

Cloudflare offers an enhanced, protected DNS resolver service for Zero Trust customers. This service utilizes Anycast, a routing technology that enables multiple servers or data centers to share the same IP address. When a request is sent to an Anycast IP address, routers use the Border Gateway Protocol (BGP) to direct the request to the nearest server. As a result, DNS queries are always routed to the closest Cloudflare data center based on your location. With data centers in over 330 cities, Cloudflare operates one of the [largest global networks ↗](https://www.cloudflare.com/network/). This service can also strengthen your organization's security by enabling the creation of policies to filter DNS resolutions for potentially malicious, questionable, or inappropriate destinations. This guide explains how to enable this service and configure your environment to secure guest wireless networks, reducing risks to your organization.

### DNS locations

Cloudflare [DNS locations](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/) are a collection of DNS endpoints which can be mapped to physical entities such as offices, homes or data centers. [Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) identifies locations differently depending on the DNS query protocol. IPv4 traffic is identified from the source IP address from which a DNS query originated. IPv6 traffic can be identified by the unique IPv6 resolver address created in the Cloudflare dashboard. The following sections describe how to ensure DNS queries are appropriately mapped to your physical locations depending on the network environment and protocols being used. Later in this document you will learn how to use the location's IP address as an attribute which you can apply to Gateway DNS policies.

The goal is to have DNS requests from your Wi-Fi networks be sent via Cloudflare's secure DNS and secure web gateway service, where your DNS policies can filter requests and block those you deem risky. This guide walks through the different possible network architectures you might have for guest networks and gives guidance on how to implement Cloudflare to protect devices on those guest Wi-Fi networks.

## Securing guest traffic sourced from a basic wireless router

### Using business Internet and a static IPv4 address

A common method for providing guest wireless access is to set up a completely separate network from the corporate or production network. For example, a branch office or retail store might use a single wireless router to achieve this. The router would broadcast a guest wireless Service Set Identifier (SSID), assign IP addresses to connected devices, and provide Internet connectivity. The public static IPv4 address assigned to the router can then serve as a DNS location attribute in the Cloudflare Zero Trust dashboard. If the router's IP address is dynamically assigned by your ISP, refer to the section "Dedicated DNS resolver IPv4 and IPv6 addresses".

To route all DNS queries through Cloudflare, update your router's DNS settings in the WAN interface to use Cloudflare's resolver IP addresses. The specific resolver IPs for Zero Trust can be found in the DNS location settings in the Cloudflare dashboard. Refer to your router's manufacturer documentation for detailed configuration steps to update the WAN interface. Typically, devices connected via Wi-Fi will use the router's IP address as their DNS server. The router forwards the DNS queries to Cloudflare on their behalf. As a result, DNS queries from the wireless devices will be sent to Cloudflare and originate from the static IP address assigned to the router.

For enhanced security, prevent wireless guests from accessing other DNS services by creating a firewall rule on the router (if supported). This rule should allow access only to Cloudflare's DNS servers and block all other DNS destinations on UDP/TCP port 53. Additionally, some advanced wireless routers support content filtering. If available, enable options to block DNS over TLS (DoT) or DNS over HTTPS (DoH) to ensure endpoints cannot bypass your configured DNS security settings in Cloudflare.

![Figure 2: When DNS queries are forwarded to Cloudflare, policies can be implemented to prevent access to malicious and high risk destinations. Guest-Security-Block and Guest-Content-Block refer to the specific DNS policies applied to the wireless guest devices.](https://developers.cloudflare.com/_astro/figure2.DLXV4yIx_1Rc2QB.svg "Figure 2: When DNS queries are forwarded to Cloudflare, policies can be implemented to prevent access to malicious and high risk destinations.  `Guest-Security-Block` and `Guest-Content-Block` refer to the specific DNS policies applied to the wireless guest devices.")

Figure 2: When DNS queries are forwarded to Cloudflare, policies can be implemented to prevent access to malicious and high risk destinations. `Guest-Security-Block` and `Guest-Content-Block` refer to the specific DNS policies applied to the wireless guest devices.
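The firewall restriction described above can be sketched in vendor-neutral terms as follows; the resolver placeholders stand in for the Gateway resolver IPs listed in your DNS location settings, and exact syntax varies by router or firewall:

```txt
# Illustrative outbound rules for the guest network
allow  src guest-lan  dst <gateway-resolver-ip-1>,<gateway-resolver-ip-2>  udp/tcp 53
deny   src guest-lan  dst any                                              udp/tcp 53
deny   src guest-lan  dst any                                              tcp 853   # block DNS over TLS (DoT)
```

DNS over HTTPS uses TCP 443 and cannot be blocked by port alone, which is why it requires a content-filtering feature on the router where available.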

## Secure guest traffic sourced from an enterprise network

Some companies go beyond consumer or semi-professional grade, all-in-one wireless routers and deploy guest Wi-Fi access on top of an existing enterprise networking solution. For example, the same Wi-Fi access point hardware might broadcast both the enterprise internal network and the guest network.

### Segment internal and guest networks

A common approach to separating internal and guest networks involves the use of distinct SSIDs. The internal corporate SSID and the guest wireless SSID can be linked to separate VLANs (Virtual Local Area Networks) or [Dot1q tags ↗](https://en.wikipedia.org/wiki/IEEE%5F802.1Q), providing virtual segmentation between the networks.

In this configuration:

1. A subnet is assigned to the guest wireless VLAN.
2. The default gateway for that subnet is configured on an interface (or virtual interface) of an upstream network device such as a firewall or router.
3. The device segments guest network traffic from internal network traffic while also acting as a secure gateway to the public Internet.

### Configure DNS for the guest network

Similar to simpler setups, DNS queries from guest wireless devices should be forwarded to Cloudflare's resolver IPs. You can achieve this by:

* Assigning Cloudflare DNS servers in the DHCP scope for guest devices.
* Configuring the upstream network device to proxy DNS queries to Cloudflare.

Note, you might also be providing guest devices access to some internal resources, and as such you might configure clients to use an internal DNS service. You can also set up this service to forward Internet bound DNS requests to Cloudflare.

To enhance security, configure outbound Internet firewall rules to allow DNS queries only to Cloudflare's enterprise resolver IPs on TCP/UDP port 53.
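As an illustration, such an egress rule set on a Linux-based edge device might look like the following (iptables syntax; the guest subnet and resolver addresses are placeholders, since your account is assigned its own Gateway resolver endpoints):

```sh
# Allow guest DNS only to the assigned Gateway resolver endpoint
# (192.168.53.0/24 and 172.64.36.1 are illustrative placeholders),
# then drop all other outbound port 53 traffic from the guest subnet.
iptables -A FORWARD -s 192.168.53.0/24 -d 172.64.36.1 -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -s 192.168.53.0/24 -d 172.64.36.1 -p tcp --dport 53 -j ACCEPT
iptables -A FORWARD -s 192.168.53.0/24 -p udp --dport 53 -j DROP
iptables -A FORWARD -s 192.168.53.0/24 -p tcp --dport 53 -j DROP
```

Refer to your firewall vendor's documentation for the equivalent policy syntax on your platform.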

### Assign a unique Public IPv4 address for guest traffic

To ensure guest traffic is sourced from a unique public IPv4 address:

1. Create a Port Address Translation (PAT) policy on your firewall or edge device specifically for guest traffic.  
   * PAT (or NAT overload) allows multiple devices on the local network to access the Internet using a single public IP address.
2. Define the source address range as the guest subnet in the firewall settings.
3. Specify the translated source address—a public IPv4 address—to be used for all Internet-bound traffic originating from the guest network.

Refer to your firewall manufacturer's documentation for detailed instructions on setting up a PAT or NAT overload rule.
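Conceptually, PAT maintains a translation table keyed by the private source socket, so many guest devices share one public address. A minimal sketch in Python (all addresses and the port range are illustrative, not Cloudflare-specific):

```python
# Sketch of PAT (NAT overload): map many private (ip, port) sockets to a
# single public IP with distinct translated ports.
import ipaddress
import itertools

GUEST_SUBNET = ipaddress.ip_network("192.168.53.0/24")  # example guest subnet
PUBLIC_IP = "203.0.113.10"                              # example translated address

_ports = itertools.count(49152)  # start of the ephemeral port range
_table = {}                      # (private_ip, private_port) -> (public_ip, public_port)

def translate(private_ip: str, private_port: int) -> tuple:
    """Return the public (ip, port) pair for a guest source socket."""
    if ipaddress.ip_address(private_ip) not in GUEST_SUBNET:
        raise ValueError("source is not on the guest subnet")
    key = (private_ip, private_port)
    if key not in _table:
        _table[key] = (PUBLIC_IP, next(_ports))  # allocate a translated port
    return _table[key]
```

Real firewalls implement this in hardware or kernel space with connection tracking; the point here is only that all guest flows egress from one predictable public IPv4 address.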

### Map guest traffic in Cloudflare

Once guest network traffic is assigned a unique public IPv4 address, this address can be used as an attribute in the Cloudflare dashboard to map your DNS location effectively.

![Figure 3: This diagram shows how guest Wi-Fi traffic has different DNS filtering policies versus your use of our Gateway DNS service to secure corporate network traffic.](https://developers.cloudflare.com/_astro/figure3.BJGAREAk_Z6e9pd.svg "Figure 3: This diagram shows how guest Wi-Fi traffic has different DNS filtering policies versus your use of our Gateway DNS service to secure corporate network traffic.")

Figure 3: This diagram shows how guest Wi-Fi traffic has different DNS filtering policies versus your use of our Gateway DNS service to secure corporate network traffic.

## Secure guest wireless at locations with a dynamically assigned public IPv4 or IPv6 address

### Dedicated DNS resolver IPv4 and IPv6 addresses

If you are unable to use a static public IP address on your edge device, Cloudflare offers dedicated IPv4 and IPv6 resolver endpoint addresses that can be assigned specifically to your organization. In this scenario, the destination address to which DNS queries are sent can serve as a method to map your physical location to a Cloudflare DNS endpoint.

Cloudflare provides unique IPv6 resolver endpoint addresses at no cost through the Zero Trust dashboard. However, due to the limited availability of IPv4 addresses, dedicated IPv4 DNS endpoints are only available with Cloudflare Enterprise plans.

For example, if your guest wireless router is dynamically assigned an IPv6 address and an IPv6 DNS server by your ISP, you can modify the IPv6 DNS address to match the IPv6 DNS endpoint address configured in your Cloudflare DNS Location settings.

### Add DNS locations

Now that we have covered various options for sending DNS queries to Cloudflare's DNS resolvers and identifying your organization's guest wireless network—either by its source IP address or a dedicated resolver address—you're ready to create new locations in Zero Trust.

To get started, navigate to **DNS Locations** in the Zero Trust dashboard. For detailed, step-by-step instructions, refer to the [**DNS Locations**](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/) guide. When using IPv4 or IPv6 endpoint filtering and location matching, you can define a network and subnet mask in CIDR notation to represent your location's source IP addresses. For example:

* If all your wireless networks share a public IP address within the same subnet, you can apply a policy to all locations at once using a single DNS location object.
* To assign unique policies to specific locations, use a host address ending in /32 to represent each location individually.
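The /32 versus shared-subnet distinction can be sketched conceptually with Python's standard `ipaddress` module (location names and addresses below are examples, not real configuration):

```python
# Conceptual sketch of matching a DNS query's source IP to a DNS location.
import ipaddress

DNS_LOCATIONS = [
    ("Branch-A-Guest", ipaddress.ip_network("198.51.100.15/32")),  # one specific site
    ("All-Guest-Sites", ipaddress.ip_network("203.0.113.0/24")),   # shared subnet
]

def match_location(source_ip: str):
    """Return the first DNS location whose network contains the source IP."""
    addr = ipaddress.ip_address(source_ip)
    for name, network in DNS_LOCATIONS:
        if addr in network:
            return name
    return None
```

A /32 entry matches exactly one egress address, while the /24 entry applies one policy to every site egressing from that subnet.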

### Create DNS policies

To get started, navigate to firewall policies and select DNS in the Zero Trust dashboard. For detailed, step-by-step instructions, refer to the [DNS Policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/) guide.

To keep your policies organized, we recommend using meaningful names that clearly indicate their purpose. For instance, a policy named **Guest-Security-Block** conveys:

* **Guests**: Who the policy applies to.
* **Security**: The type of content being evaluated.
* **Block**: The action being taken.
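As a rough illustration, a policy like **Guest-Security-Block** created through the Zero Trust API might be described by a JSON body along these lines. Field names and the category IDs are illustrative and should be verified against the current Gateway rules API documentation:

```python
# Hypothetical sketch of a Gateway DNS policy body; category IDs are
# placeholders standing in for security categories such as Malware.
guest_security_block = {
    "name": "Guest-Security-Block",  # who / what / action
    "action": "block",               # the action being taken
    "enabled": True,
    "filters": ["dns"],              # evaluate at the DNS phase
    "traffic": "any(dns.security_category[*] in {80 83})",  # placeholder IDs
}
```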

Cloudflare provides a range of managed categories which you can use to filter many different types of threats. For example, adding the [security category](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/#security-categories) **Malware** to a DNS policy prevents a connected device from making a DNS request to any site that Cloudflare has identified as part of a malware campaign or as potentially hosting malware. In addition to security categories, there are also [content categories](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/#content-categories), which identify sites related to cryptocurrency, peer-to-peer file sharing, or adult content. Cloudflare also manages a list of [applications](https://developers.cloudflare.com/cloudflare-one/traffic-policies/application-app-types/), so you can filter access to public cloud storage or file sharing sites.

Cloudflare also supports [custom feeds](https://developers.cloudflare.com/security-center/indicator-feeds/#publicly-available-feeds), where you can either subscribe to another vendor that provides a list of sites to filter, or use some of the built-in, government-sourced threat feeds. This allows you to be very selective about which sites you wish to filter.

For devices making requests from known DNS locations, you can also add those locations to the policy, creating different policies for different guest Wi-Fi locations. This can help where local laws require you to prevent access to a specific type of Internet site.

Policies can be made up of multiple rules, so a single policy can prevent access to high risk websites as well as inappropriate content.

### Recommended policies

Cloudflare has several additional recommended DNS policies that can be found in the [Secure your Internet traffic implementation guide](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/build-dns-policies/recommended-dns-policies/). These policies are designed to enhance your organization's overall security and should also be factored in when setting up policies for your internal production web traffic.

### Visibility into guest DNS Internet activity

With DNS traffic now routed through Cloudflare and your wireless networks secured, you can gain detailed visibility into your guests' Internet activity using logs and advanced logging tools. Every DNS request is [logged](https://developers.cloudflare.com/cloudflare-one/insights/logs/dashboard-logs/gateway-logs/) in Cloudflare and our dashboard provides a simple search interface. These logs help you understand how your policies are applied and detect trends or patterns in guest Internet usage, providing actionable insights to fine-tune your security configurations.

For advanced telemetry and seamless data management, consider enabling **Logpush** in your Cloudflare dashboard. Sending these logs to an external source, most commonly a SIEM platform, brings the following benefits:

* **Centralized Analysis**: Consolidate logs from multiple Cloudflare services with other organizational data in your SIEM for comprehensive visibility.
* **Enhanced Threat Detection**: Correlate DNS activity with other security events to detect patterns of malicious behavior more effectively.
* **Compliance and Audit Readiness**: Store DNS logs for long-term retention to meet regulatory compliance requirements or support incident audits.
* **Real-Time Alerts**: Leverage SIEM integration to trigger automated alerts and responses based on suspicious DNS activity.
* **Operational Insights**: Gain a deeper understanding of guest browsing behavior to identify performance bottlenecks or optimize content filtering policies.

By leveraging logs, Logpush, and SIEM integrations, you not only enhance visibility into guest Internet activity but also strengthen your organization's overall security posture.

## Going beyond DNS filtering

Up to this point, all methods mentioned have revolved around DNS, mainly because most traffic over guest Wi-Fi networks relies on DNS and these configurations do not require any agents or certificates installed on devices. For this reason, DNS-centric protections are the recommended starting point when securing guest Wi-Fi networks. Unfortunately, there are ways to bypass DNS-based security enforcement, such as:

* Changing the DNS resolver manually.
* Using IP addresses to reach sites (potentially saving IP to fully qualified domain name mappings via a hosts file).
* Using non-sanctioned VPN clients.

For these reasons you should also consider applying security in layers and add network centric enforcement to complement the protections provided via DNS.

![Figure 4: This diagram shows how to connect guest networks to Cloudflare and the high level traffic flow to reach Internet resources.](https://developers.cloudflare.com/_astro/figure4.NuRfhipz_WvTvI.svg "Figure 4: This diagram shows how to connect guest networks to Cloudflare and the high level traffic flow to reach Internet resources.")

Figure 4: This diagram shows how to connect guest networks to Cloudflare and the high level traffic flow to reach Internet resources.

To provide network level filtering, Cloudflare must be in the traffic path for more than just the DNS request. This is achieved by routing Internet-bound traffic over an [IPsec ↗](https://www.cloudflare.com/learning/network-layer/what-is-ipsec/) tunnel to Cloudflare. Cloudflare's [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/) (formerly Magic WAN) service allows third-party devices to establish IPsec or GRE tunnels to the Cloudflare network. It is also possible to just deploy our [Cloudflare One Appliance](https://developers.cloudflare.com/cloudflare-wan/configuration/appliance/), a pre-configured lightweight network appliance that automatically creates the tunnel back to Cloudflare and can be managed remotely. Once traffic reaches Cloudflare multiple security controls can be overlaid such as:

* Cloud based network firewall ([Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/))
* Secure web gateway ([Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/))

Below is the high level traffic flow that correlates to the above diagram:

1. Internet-destined traffic is routed to Cloudflare from connected guest networks. This can be done easily with a policy-based route. In most guest Wi-Fi setups, devices are only expected to generate Internet-bound traffic, so a policy-based route referencing `ANY` as the destination is usually sufficient. For example: source `192.168.53.0/24` to destination `ANY`, next hop Cloudflare IPsec tunnel.
2. Once traffic reaches the Cloudflare edge, it is first inspected by Cloudflare Network Firewall. Cloudflare Network Firewall can be used to create network and transport layer blocks, allowing admins to restrict access to certain destination IPs or ports; a common policy is blocking all DNS traffic not directed towards Cloudflare DNS resolvers. Custom lists can be used to import existing lists customers may already have. [IDS](https://developers.cloudflare.com/cloudflare-network-firewall/about/ids/) can be enabled to monitor whether any guest users are attempting to launch known exploits from your guest network. Managed threat [lists](https://developers.cloudflare.com/waf/tools/lists/managed-lists/#managed-ip-lists) let you use Cloudflare's automatically updated threat intelligence to block known threats such as malware repositories or botnets.
3. Traffic is then forwarded to Cloudflare Gateway. At Gateway, network-based policies can be created using the same content categories and security risks mentioned earlier for DNS-based policies. The benefit is that when these filters are applied at the network level, the policies still apply even if a user bypasses DNS, providing multi-tiered enforcement. It is recommended to mirror your DNS-based rules in accordance with your organization's acceptable use policy. Cloudflare Gateway also acts as a secure outbound proxy and, as such, can SNAT private addresses to Internet-routable public addresses; by default, RFC 1918 addresses are automatically SNATed to shared Cloudflare egress IPs. This removes the need to manage PAT directly on your edge device and also provides a layer of privacy, as traffic sources from Cloudflare-owned IPs when browsing Internet sites. Dedicated egress IPs unique to your account can also be provided, and egress IP selection can be controlled via policy.
4. Traffic is then routed to the final Internet destination; return traffic is routed back through the Cloudflare edge and returned via the corresponding IPsec tunnel.

## Summary

By following these strategies and leveraging Cloudflare Zero Trust, organizations can offer a secure, reliable, and policy-compliant wireless experience for their guests. These measures not only safeguard networks but also enhance visibility and enable proactive threat mitigation.

If you are interested in learning more about Gateway, or other aspects of the Cloudflare SASE platform, refer to our [Reference Architecture library](https://developers.cloudflare.com/reference-architecture/) or our [Developer docs](https://developers.cloudflare.com/) to get started.

## Related Resources

* [Evolving to a SASE architecture with Cloudflare](https://developers.cloudflare.com/reference-architecture/architectures/sase/)
* [Cloudflare One Appliance deployment options · Cloudflare Reference Architecture docs](https://developers.cloudflare.com/reference-architecture/diagrams/sase/cloudflare-one-appliance-deployment/)
* [DNS policies - Cloudflare Zero Trust](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/)


---

---
title: Streamlined WAF deployment across zones and applications
description: Learn how to streamline WAF deployment across different zones and applications.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Streamlined WAF deployment across zones and applications

**Last reviewed:**  over 1 year ago 

## Introduction

Security perimeters have become less defined compared to the traditional "Castle and Moat" deployments that were popular in the past. Within a fixed perimeter, it was relatively easy to secure multiple applications using a single Web Application Firewall (WAF) deployment inside a data center. Today, this approach does not provide enough flexibility as applications and services expand beyond the traditional data center. There are several good reasons to configure networks and services in a hybrid approach and to adopt SaaS platforms, so it is valuable to update the WAF approach to cover this scenario.

Cloud-based WAF solutions can control the perimeter sprawl with a flexible deployment model that covers applications and services deployed on-premises, on cloud-based IaaS and PaaS environments, and in hybrid environments.

At the same time, an incorrect implementation of a cloud-based WAF can lead to security policy fragmentation and duplication, causing increased overheads both in maintenance and in monitoring. Aside from the clear economic impact that such inefficiencies bring, the lower efficiency can also degrade the security posture itself. This ultimately can lead to security incidents of varying degrees of severity depending on the scenario.

### Who is this document for and what will you learn?

This Design Guide is written for security and network administrators / architects that are looking to implement a flexible, cloud-based WAF security configuration. This configuration can span across multiple applications, domains, and services - all deployed in a hybrid environment.

Cloudflare offers comprehensive Application Security & Performance solutions, which include a highly-configurable, cloud-based Web Application Firewall (WAF).

In this guide, you will learn:

* How to implement the Cloudflare WAF and factor common rules.
* How to easily implement common configurations across multiple applications.
* How to deploy exceptions and specific configurations when needed.
* What are the best practices to follow when deploying the Cloudflare WAF.

## Example Scenario

Most Cloudflare customers onboard multiple Cloudflare Zones within a single Account (or Enterprise Organization). Each Cloudflare Zone is usually mapped to a second-level domain (such as `example.com`), and all its subdomains are then handled within that Cloudflare Zone (`web1.example.com`, `web2.example.com` etc.).

In this setup, Cloudflare is a DNS-based reverse proxy. Each Fully Qualified Domain Name (FQDN) is configured in its Cloudflare zone, and it points to a customer origin server. In this respect, Cloudflare does not make particular distinctions on what/where that origin server is located. It could be deployed on-premises, it could be a virtual machine running in the Cloud, or it could be a SaaS service provided by a third-party.

Frequently, multiple FQDNs point to the same shared web infrastructure, reached using an IP address or another FQDN, for example. It is also possible that some FQDNs point at dedicated origin infrastructure or at an external SaaS endpoint.

In many cases, Cloudflare customers end up managing many Cloudflare Zones (such as `example.com`,`example.org`, `myappexample.com` and so on) within a single Cloudflare Account, and many FQDNs within each zone. Frequently, many FQDNs across multiple zones are pointing at a shared web infrastructure behind the scenes.

For example, you could be in the following (or similar) scenario:

* The majority of your web applications run on a newly deployed in-house Content Management System (CMS).
* You also have some legacy web applications that are running on their custom stacks.
* Finally, you may have dedicated infrastructure (managed by a partner) for a few applications.

![Diagram showing the example scenario with multiple domains, subdomains and web applications](https://developers.cloudflare.com/_astro/diagram-1.D8xm98w0_1518JU.svg "Figure 1: An example scenario with multiple domains, subdomains and web applications.")

Figure 1: An example scenario with multiple domains, subdomains and web applications.

### WAF Requirements

From a WAF setup perspective, this scenario raises interesting requirements:

* To create an easily deployable configuration that implements standard WAF rules configuration in front of most applications.
* To have the ability to fine tune and tweak which rules are deployed in front of the legacy applications, which may be more prone to false positives than the others.
* To include a "catch-all" configuration, ensuring that a Cloudflare default WAF setup is always applied to all web traffic that does not fall in the above scenarios.
* To minimize set up time and ongoing maintenance efforts, as applications are added and removed over time.

In this Design Guide we will review how the Cloudflare WAF operates and what tools are provided to achieve all the above architectural requirements.

## Cloudflare Web Application Firewall

The Cloudflare WAF operates at both the zone and the account level. There are different [WAF phases](https://developers.cloudflare.com/ruleset-engine/about/phases/) (`http_request_firewall_custom`, `http_ratelimit` and `http_request_firewall_managed`) that map to Custom Rules, Rate Limiting Rules, and Managed Rules. These phases exist both at the account and the zone level. For more information, please [refer to the following documentation](https://developers.cloudflare.com/waf/reference/phases/). It is important to note that the Account rulesets are evaluated before the zone rulesets.

## Example Use Case - Implementing the Cloudflare Managed Ruleset

For the purposes of this guide, we will build on the example scenario and WAF requirements provided above. You have a single Cloudflare Account (or Enterprise Organization) with two second-level domains onboarded to it.

Let's imagine that there are six applications behind six FQDNs across two domains. For these applications, you want to apply a baseline WAF security posture. However, two of the six applications require special treatment:

* One is implemented on a legacy application server, prone to false positives.
* Another is implemented by a third party on their own infrastructure.

Let's visualize the scenario below:

![Diagram showing how the example scenario can be modelled in a Cloudflare Account with multiple zones](https://developers.cloudflare.com/_astro/diagram-2.DsX9Y3eo_184cRD.svg "Figure 2: The example scenario now included in a Cloudflare Account with multiple zones.")

Figure 2: The example scenario now included in a Cloudflare Account with multiple zones.

### Using Account Level WAF to minimize configuration overheads

We will use the [Cloudflare Managed Ruleset](https://developers.cloudflare.com/waf/managed-rules/reference/cloudflare-managed-ruleset/) as an example, keeping in mind that the approach can also be used for other Cloudflare Managed Rules, Rate Limiting Rules, and Custom Rules.

* For `web1.example.com`, `web2.example.com`, `web3.example.com` and `web5.example.org`: you want to apply the default WAF Managed Ruleset, already tuned by Cloudflare.
* For `special4.example.com`: you want to apply a different subset of the default Managed Ruleset, as you already identified a couple of rules that are causing false positives on the legacy application.
* For `special6.example.org`: you want to apply the Managed Ruleset in logging mode, as this is a newly introduced application from a third party and you need to start evaluating how to protect it.

Then, you can adopt the following approach:

* Deploy one instance of the Cloudflare Managed Ruleset at the Account level. This implements the common subset of rules for the four FQDNs requiring it. This is easier to set up and maintain than replicating the same configuration four times at the Zone level.
* For `special4.example.com` and `special6.example.org`, you will deploy two additional instances of the Managed Ruleset, with the specific tweaks required by the applications behind these particular FQDNs.

In practice, using the [Account Level WAF's Managed rulesets](https://developers.cloudflare.com/waf/account/managed-rulesets/), you can deploy the three instances of our Managed Ruleset. Each instance will have its own [Custom Filter Expression](https://developers.cloudflare.com/ruleset-engine/rules-language/expressions/edit-expressions/), which checks that the HTTPS request's hostname belongs to one of the FQDNs in a list:

* For the first list (`web1.example.com`, `web2.example.com`, `web3.example.com` and `web5.example.org`), you will apply the Cloudflare Managed Ruleset in its `Default` configuration.
* For `special4.example.com`, the same ruleset will be deployed in `Default` mode, but taking care of disabling the specific rules that cause false positives. This can be achieved with the [Rule Overrides](https://developers.cloudflare.com/ruleset-engine/managed-rulesets/override-managed-ruleset/), using the Dashboard or the APIs. [Real examples are available here](https://developers.cloudflare.com/ruleset-engine/managed-rulesets/override-examples/).
* For `special6.example.org`, you repeat the setup done for the first list, this time modifying the Managed Ruleset instance to operate in `Log` mode instead of `Default`.
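Expressed in the Cloudflare Rules language, the three filter expressions might look like the following sketch (the hostnames come from the example scenario; verify the syntax against the expressions documentation):

```txt
# Instance 1 (Default configuration)
http.host in {"web1.example.com" "web2.example.com" "web3.example.com" "web5.example.org"}

# Instance 2 (Default configuration with rule overrides)
http.host eq "special4.example.com"

# Instance 3 (Log mode)
http.host eq "special6.example.org"
```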

Let's visualize the complete configuration in the below diagram:

![Diagram depicting the implemented WAF configuration at the account level](https://developers.cloudflare.com/_astro/diagram-3.DrnYaql1_20FyQx.svg "Figure 3: The Account WAF implementation to protect multiple applications across different hostnames with repeatable configurations.")

Figure 3: The Account WAF implementation to protect multiple applications across different hostnames with repeatable configurations.

This setup will provide three instances of the Managed Ruleset, calibrated for each application group.

If you have additional applications to protect in the future, it is sufficient to add the new application's FQDN to the relevant filter expression. Generally, most will be added to the standard ruleset instance that uses the recommended Cloudflare configuration. Another common strategy is to add new applications to the `Log` mode instance, so that they can be monitored and eventually transitioned to the `Default` mode ruleset, or to a more specific variation if required.

## Additional Considerations

### False Positives Tuning

The rulesets (and in particular the Managed Ruleset) are already finely tuned by Cloudflare to avoid false positives. They can be deployed for most applications with little to no tweaking required. This means that customers work directly with the default ruleset configurations in most cases, with the possibility to customize only when needed.

If this is your scenario, you can simplify the above setup in the following way by using [Exceptions](https://developers.cloudflare.com/waf/managed-rules/waf-exceptions/):

* First, you can identify which applications (FQDNs) require a special treatment by deploying the ruleset in `Log` mode. For example, following testing you find that `special1.example.com` requires disabling a small set of Managed Rules, and `special2.example.org` disabling a similar, but different set of rules.
* Deploy two managed Exceptions, with a filter matching on each FQDN, skipping those rules from the Managed Ruleset.
* Finally, deploy a Default version of the Managed Ruleset, which will match on everything else, and run the Cloudflare recommended settings of the Managed Ruleset.

This approach can be simpler when there are few exceptions to the norm, and when the initial calibration confirms that the fine tuning already done by Cloudflare to minimize false positives is appropriate in your situation.

### Using Lists

Cloudflare provides the ability to create [lists of hostnames](https://developers.cloudflare.com/waf/tools/lists/create-dashboard/). In this case, the Filter expression can be changed to reference such list variables.

You can then update the lists directly and re-use them across multiple rulesets. For example, use the same list for the Cloudflare Managed Rules and also for the OWASP Ruleset and Rate Limiting. Your filters [will reference the lists directly](https://developers.cloudflare.com/waf/tools/lists/use-in-expressions/), meaning a cleaner and maintainable configuration.

When using lists, it is also much easier to adopt a "catch all rule" that runs last in the evaluation order. This could implement, for example, the `Default` Cloudflare Managed Ruleset when the host in the HTTPS request is not included in any of your lists. This ensures that a default WAF Managed Rules configuration is always applied, in case some of your applications are not added by mistake to the lists.
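For example, assuming hypothetical lists named `cms_hostnames` and `legacy_hostnames`, the filter expressions might become (a sketch in the Cloudflare Rules language):

```txt
# Standard instance
http.host in $cms_hostnames

# Catch-all instance, evaluated last
not http.host in $cms_hostnames and not http.host in $legacy_hostnames
```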

### Using automations

The WAF configuration can be managed [via API calls](https://developers.cloudflare.com/api/) and [Terraform ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs). This is particularly useful when you want to scale the approach to many more zones and FQDNs, and to avoid repetitive and manual tasks in the Dashboard.

For example, a default Terraform configuration file could be created to define Rulesets and Lists and then maintained and applied as needed without needing to make changes in the Cloudflare Dashboard.
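A hedged sketch of such a definition, assuming the `cloudflare_ruleset` resource from the Cloudflare Terraform provider (attribute names and the managed ruleset ID should be verified against your provider version; the list name is hypothetical):

```hcl
# Deploys an account-level instance of the Cloudflare Managed Ruleset,
# scoped by a hostname list. Verify the schema against your provider version.
resource "cloudflare_ruleset" "account_managed_waf" {
  account_id = var.account_id
  name       = "Account-level Managed WAF"
  kind       = "root"
  phase      = "http_request_firewall_managed"

  rules {
    action = "execute"
    action_parameters {
      id = "efb7b8c949ac4650a09736fc376e9aee" # Cloudflare Managed Ruleset ID (verify)
    }
    expression  = "http.host in $standard_hostnames" # hypothetical list name
    description = "Default Managed Ruleset for standard applications"
    enabled     = true
  }
}
```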

### Avoid mixing setup at Account and Zone level

When possible, Cloudflare recommends maintaining the configuration at the Account level, in particular when a Cloudflare Zone will contain multiple DNS records, each requiring custom configuration.

With the Zone level WAF, you can deploy only one instance of each ruleset (Managed Rules, OWASP rules, and so on), so handling special scenarios can be more complex, or not possible at all, at this level.

### Custom Rules and Rate Limiting Rules

The approach described above for Managed Rules can be applied also to [Custom Rulesets](https://developers.cloudflare.com/waf/account/custom-rulesets/) and [Rate Limiting](https://developers.cloudflare.com/waf/account/rate-limiting-rulesets/), extending the flexibility to all the WAF security tools at your disposal.

Unless your configuration is specific to a single zone, Cloudflare recommends implementing it at the Account level.

For more information, please refer to the following resources:

* [Create a Rate Limiting Rule at the Account level](https://developers.cloudflare.com/waf/account/rate-limiting-rulesets/create-dashboard/)
* [Create Custom Rulesets at the Account level](https://developers.cloudflare.com/waf/account/custom-rulesets/)

## Summary

In conclusion, this design guide illustrates how you can implement flexible WAF configurations to cover multiple applications and domains. The described approach reduces the effort required to deploy, maintain, and update your WAF security configuration.


---

---
title: Using a zero trust framework to secure SaaS applications
description: Learn how to eliminate the trade-off between security and performance by using Cloudflare's global network.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Using a zero trust framework to secure SaaS applications

**Last reviewed:**  over 1 year ago 

## Introduction

SaaS applications have become crucial in today's business landscape, particularly with the rise of hybrid workforces. As organizations adopt flexible working models, the ability of SaaS apps to provide seamless, global access is essential for maintaining productivity and fostering collaboration across distributed teams.

SaaS applications significantly reduce the burden on IT teams by eliminating the need to manage the underlying infrastructure. By entrusting these responsibilities to the SaaS provider, organizations no longer need to worry about hardware and software lifecycle management or scalability challenges. Furthermore, the subscription-based model of SaaS applications lowers adoption barriers by minimizing upfront costs, and ultimately offers a lower Total Cost of Ownership (TCO) compared to legacy applications.

Along with these advantages, SaaS applications introduce new challenges and security risks. Their Internet accessibility requires greater focus on the security of users and devices to prevent unauthorized access and data leaks. User provisioning (onboarding/offboarding), appropriate access controls, and visibility into device security are essential to ensure only authorized users on trusted devices access company applications. Moreover, IT teams must monitor SaaS applications for misconfiguration and gain visibility into risky user activity. Employees might publicly share files that contain sensitive information or integrate managed SaaS applications with unauthorized third-party apps, all without the IT team's knowledge.

The ease with which users can sign up for new SaaS services, particularly free and popular ones, often leaves IT teams unaware of all the applications employees use -- a trend known as [shadow IT ↗](https://www.cloudflare.com/en-gb/learning/access-management/what-is-shadow-it/). These unmanaged SaaS applications can be misused by employees, either intentionally or accidentally, potentially leading to data leaks due to the upload of sensitive data into applications that are not under the control of the IT team.

A [traditional castle-and-moat security model ↗](https://www.cloudflare.com/en-gb/learning/access-management/castle-and-moat-network-security/) is unsuitable for SaaS applications, as the services and their data are no longer confined to on-premises data centers within an enterprise network. This outdated approach forces a trade-off between security and performance:

* One strategy organizations adopt to enhance security involves shielding SaaS applications from the broader Internet by implementing IP allow lists and routing traffic through the organization's data center where traffic can be inspected and filtered according to security policy. However, this method negatively impacts the user experience, leading to increased latency and reduced bandwidth when routing all traffic through a single data center.
* Conversely, if user traffic is sent directly to the Internet, bypassing a local VPN client by using split tunneling, security and visibility are compromised as enterprise network controls are bypassed (and IP allow lists are no longer feasible).

![Figure 1: Two different routes to a SaaS application, one secure but slow, the other fast but less secure.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-01.exIRfP3T_ZtnRVL.svg "Figure 1: Two different routes to a SaaS application, one secure but slow, the other fast but less secure.")

Figure 1: Two different routes to a SaaS application, one secure but slow, the other fast but less secure.

This is where a [SASE (Secure Access Service Edge) architecture implementing a Zero Trust framework](https://developers.cloudflare.com/reference-architecture/architectures/sase/) becomes essential. By centralizing security in a global cloud network, the trade-off between security and performance is eliminated. User traffic no longer needs to be routed through a single remote data center for security. With Cloudflare, user traffic is routed into our services at the nearest of our hundreds of data centers, where it will undergo the necessary security controls. These security controls are implemented in a single-pass architecture to avoid adding unnecessary latency and are applied consistently across the entire Cloudflare network.

![Figure 2: SASE solutions ensure user traffic is secured and filtered close to the user.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-02.DkyQaTm1_Z1rGeyM.svg "Figure 2: SASE solutions ensure user traffic is secured and filtered close to the user.")

Figure 2: SASE solutions ensure user traffic is secured and filtered close to the user.

This design guide will focus on how Cloudflare's SASE architecture can more effectively and efficiently secure user access to, and the data within, SaaS applications. For a broader understanding of how Cloudflare can be used for an organization's zero trust initiatives, please read our [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/).

### Who is this document for and what will you learn?

This guide is designed for IT and security professionals seeking to safely adopt and deploy SaaS applications within their organization while maintaining a positive user experience. It assumes familiarity with concepts such as identity providers (IdPs), user directories, single sign-on (SSO), and data loss prevention (DLP) technologies.

What you will learn:

* How to secure access to managed SaaS applications and protect their data
* Key considerations when using cloud email solutions
* How to get visibility of and regain control over unmanaged SaaS applications

This guide assumes you have an Enterprise contract with Cloudflare that includes:

* Cloudflare Zero Trust licenses for the number of users you plan to onboard
* Cloudflare Cloud Email Security licenses for the number of users whose cloud inbox emails will be filtered

Free and PayGo capabilities

A lot of the capabilities described in this document [are also available in our free and Pay-as-you-go plans ↗](https://www.cloudflare.com/en-gb/plans/zero-trust-services/).

Recommended resources for a stronger understanding of Cloudflare:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (five-minute read) or [video ↗](https://www.youtube.com/watch?v=XHvmX3FhTwU) (two minutes)
* Blog: [Zero Trust, SASE, and SSE: Foundational Concepts for Your Next-Generation Network ↗](https://blog.cloudflare.com/zero-trust-sase-and-sse-foundational-concepts-for-your-next-generation-network/) (14-minute read)
* Reference Architecture: [Evolving to a SASE Architecture with Cloudflare](https://developers.cloudflare.com/reference-architecture/architectures/sase/) (three-hour read)

## Securing managed SaaS applications

Managed SaaS applications are those procured and approved by IT, forming part of the official suite of tools employees use to perform their tasks. IT typically manages these applications and is responsible for:

1. **Securing access:** Ensuring only authorized users and devices can access SaaS applications. This includes managing the onboarding and offboarding of users. For instance, if an employee leaves the organization, their access is automatically revoked. Typically this involves integrating the SaaS application with the company identity management solution.
2. **Data protection:** Preventing data leaks from within the SaaS application and proactively mitigating risky behaviors by users that may result in data breaches.
3. **Monitor configuration:** Identifying and promptly correcting misconfigurations within the SaaS application to ensure they operate securely and efficiently.
4. **Cloud email security:** IT teams should take special care when dealing with cloud email SaaS solutions. Since email is a primary target for attacks, a specialized approach is required to protect users from phishing and other email-based threats.

Note that a later section of this document will cover how to gain visibility into, and control over, unmanaged applications. For example, where your marketing department signs up for and starts using a new CRM system without engaging the IT or security teams.

### Securing access

#### Using SaaS IP allow lists

One simple method for securing access to SaaS applications is to only allow access from a specific set of IP addresses. This forces users to connect to, and have their traffic exit from, a specific network, ensuring that whatever access controls are in place on that network are applied to their traffic.

Organizations that already use IP allow lists to secure access to SaaS applications can easily migrate to Cloudflare using [dedicated egress IPs](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/). User traffic egresses from Cloudflare to the Internet and onto the SaaS application, sourced from a set of IP addresses unique to the organization. This approach supports various ways in which users access Cloudflare before gaining access to the SaaS application:

* **Hybrid employees:** Connecting to Cloudflare using our Zero Trust client, [WARP](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/).
* **Office-based users:** Connecting to a local network which routes Internet bound traffic to Cloudflare through GRE or IPsec [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/) (formerly Magic WAN) tunnels.
* **Contractors and external users:** Accessing SaaS applications through a [remote browser](https://developers.cloudflare.com/learning-paths/clientless-access/alternative-onramps/clientless-rbi/) hosted in a Cloudflare data center.

Organizations add the new dedicated egress IPs to the existing SaaS IP allow lists so that Cloudflare-sourced traffic is allowed into the SaaS application. This way, organizations can maintain legacy connectivity methods in parallel with Cloudflare and migrate users gradually. Once all users are migrated to access with Cloudflare, the SaaS IP allow lists can be updated by removing the IPs corresponding to legacy infrastructure.

There are several advantages to using Cloudflare's dedicated egress IPs when compared with using IPs from on-prem infrastructure:

* [Dedicated egress IPs can be geolocated](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/#ip-geolocation) to one or more Cloudflare data centers in a geography of your choosing, instead of being restricted to the geographic locations of your existing Internet breakout data centers.
* Users will always connect to Cloudflare [through the closest Cloudflare Data Center and Cloudflare will optimize the path towards the SaaS application](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/#egress-location).
* Dedicated egress IPs are assigned to user traffic using policies that follow zero trust principles. [Egress policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/) can be defined that will only assign a dedicated egress IP to a user if they belong to the correct IdP group and/or pass [device posture](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/) checks. Otherwise, traffic will be sourced from Cloudflare's public IP range, which may not be part of the SaaS IP allowlist, preventing access to the SaaS application while still allowing Internet usage.
* Dedicated egress IPs imply that traffic needs to flow through Cloudflare before reaching the SaaS application. This makes it easy to add secure web gateway policies to protect data in the SaaS applications once users have authenticated.
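As a concrete illustration of the last two points, the zero trust conditions attached to a dedicated egress IP can be expressed as the JSON body of a Gateway egress rule created via the Cloudflare API. This is a minimal sketch: the field names follow the public Gateway rules API (`POST /accounts/{account_id}/gateway/rules`) as understood at the time of writing, and the IdP group name, posture check name, and IP address are placeholders, so verify the exact schema against the current API reference before use.

```python
# Sketch of a Gateway egress rule body. Traffic matching the identity and
# device posture conditions egresses from the dedicated IP; all other traffic
# falls through to Cloudflare's shared public egress range.

def egress_rule(dedicated_ipv4: str, idp_group: str) -> dict:
    """Assign the dedicated egress IP only to compliant users in a group."""
    return {
        "name": "Dedicated egress for SaaS allow list",
        "action": "egress",
        "filters": ["egress"],
        "enabled": True,
        # Wirefilter-style expressions; selector names are illustrative.
        "identity": f'any(identity.groups.name[*] in {{"{idp_group}"}})',
        "device_posture": 'any(device_posture.checks.passed[*] in {"os_version_check"})',
        "rule_settings": {"egress": {"ipv4": dedicated_ipv4}},
    }

rule = egress_rule("203.0.113.10", "saas-users")
```

Because the dedicated IP is only assigned when both conditions match, a user who fails the posture check is sourced from shared IPs and is rejected by the SaaS application's allow list, without blocking their general Internet access.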

![Figure 3: Enforce only traffic that has been secured by Cloudflare is accepted by the SaaS application.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-03.DmqMPB93_1jLnmj.svg "Figure 3: Enforce only traffic that has been secured by Cloudflare is accepted by the SaaS application.")

Figure 3: Enforce only traffic that has been secured by Cloudflare is accepted by the SaaS application.

#### Using Cloudflare as an identity proxy

With Cloudflare, [Zero Trust Network Access (ZTNA) ↗](https://www.cloudflare.com/en-gb/learning/access-management/what-is-ztna/) can be applied to managed SaaS applications. In this scenario, Cloudflare acts as the [Single Sign-On (SSO) ↗](https://www.cloudflare.com/en-gb/learning/access-management/what-is-sso/) service for an application, proxying user authentication requests to the organization's existing identity providers (IdPs). This allows for additional restrictions to be layered on before granting access, such as requiring [multi-factor authentication ↗](https://www.cloudflare.com/en-gb/learning/access-management/what-is-multi-factor-authentication/), implementing [device posture checks](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/), or [evaluating the country](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/#selectors) the request is coming from.

![Figure 4: Cloudflare can act as an identity proxy, providing a consistent authentication experience for all SaaS applications.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-04.ayHv4mW0_Z1VzQus.svg "Figure 4: Cloudflare can act as an identity proxy, providing a consistent authentication experience for all SaaS applications.")

Figure 4: Cloudflare can act as an identity proxy, providing a consistent authentication experience for all SaaS applications.

Most organizations initially use Cloudflare's [ZTNA service](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) for self-hosted applications. Extending it to SaaS applications simplifies IT management in several ways, as both self-hosted and SaaS apps will:

* Use the same access policies
* Leverage the same IdP and device posture integrations
* Consistently audit access requests

IT teams will also benefit from a consistent and automated process for onboarding and offboarding users from applications. Since all access policies leverage authentication from existing IdPs, changes in a user's status will automatically affect the outcome of access requests for both self-hosted and SaaS applications.

Consider a scenario where a user moves to a different group or team within an organization. As soon as the user group information is updated on the IdP, Cloudflare's ZTNA policies will dynamically enforce these changes, ensuring that the user's access to the SaaS applications is immediately adjusted based on their new role. This also helps optimize SaaS application licensing. For example, if an employee is transferred from the sales team, which uses Salesforce, to a team that does not require access to Salesforce, the ZTNA policies will revoke their access to the application. This automated process helps reclaim the license that was previously assigned to the user, ensuring that only those who actually need the application have access to it.

Finally, SaaS applications are accessible over the Internet, allowing any device to access them if a user authenticates successfully. However, with Cloudflare's ZTNA service, IT teams can ensure that only managed devices access a SaaS application by enforcing device posture checks, in addition to identity checks. A common use case is [verifying the presence of an IT-deployed device certificate](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/client-certificate/#configure-the-client-certificate-check) before granting application access.
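The combination of identity and posture checks described above can be sketched as the JSON body of an Access policy (`POST /accounts/{account_id}/access/apps/{app_id}/policies`). The selector names mirror the public Access API, but the group ID and posture integration UID below are invented placeholders, so treat this as illustrative rather than copy-paste ready.

```python
# Sketch of an Access policy for a SaaS application: allow only users in a
# given IdP group whose device passes a posture check (for example, an
# IT-deployed client certificate or endpoint management integration).

def saas_access_policy(group_id: str, posture_uid: str) -> dict:
    return {
        "name": "Sales team on managed devices",
        "decision": "allow",
        # "include" rules are ORed: a user must match at least one.
        "include": [{"group": {"id": group_id}}],
        # "require" rules are ANDed: every rule must also pass.
        "require": [{"device_posture": {"integration_uid": posture_uid}}],
    }

policy = saas_access_policy("placeholder-group-id", "placeholder-posture-uid")
```

The include/require split is what lets identity checks and device checks compose: broadening who may enter is an `include` change, while tightening the device bar for everyone is a `require` change.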

#### Deployment guidelines

For SaaS applications that do not support SSO, or for organizations already using IP allow lists to secure access to SaaS applications, dedicated egress IPs are the most straightforward way to enhance access security without impacting the user experience.

Organizations that would like to simplify their onboarding/offboarding of users to applications and standardize ZTNA policies should consider implementing Cloudflare's ZTNA solution for both self-hosted and SaaS applications. In such scenarios, it might still be relevant to consider dedicated egress IPs for a subset of critical SaaS applications. As egress policies operate at the network and transport layers, their enforcement is almost real-time. [For example](https://developers.cloudflare.com/cloudflare-one/tutorials/m365-dedicated-egress-ips/#%5Ftop), consider an egress policy for a specific SaaS application that accounts for posture status from an external endpoint management solution. If a device becomes compromised and its posture status becomes non-compliant, the egress policy will no longer match. This results in the user of that device losing access to the SaaS application, as traffic will no longer be sourced from the dedicated egress IP.

Finally, organizations that have already integrated all their SaaS applications with an IdP for SSO can still consider adding IP allow lists with dedicated egress IPs for a subset of applications for the same reason as detailed before.

### Data protection for managed SaaS applications

While extending ZTNA principles to managed SaaS applications ensures that only the right users and devices can access these applications, it is crucial to address the risk of authorized users leaking data once they have access.

![Figure 5: Cloudflare can also protect data that's downloaded or uploaded to managed SaaS applications.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-05.SnFY_pU3_16Omiq.svg "Figure 5: Cloudflare can also protect data that's downloaded or uploaded to managed SaaS applications.")

Figure 5: Cloudflare can also protect data that's downloaded or uploaded to managed SaaS applications.

To mitigate these risks, controls should be implemented for both data in transit and data at rest.

#### Data in transit

As mentioned before, all traffic can be forced through Cloudflare using the device agent, Cloudflare WAN (CWAN) tunnels, or the remote browser. This allows [secure web gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) policies to manage and protect data as it is uploaded or downloaded from SaaS applications. Common use cases include:

* Restricting the ability to download [all](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/common-policies/#block-google-drive-downloads) or a [subset of files](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/common-policies/#block-file-types) from managed SaaS applications to specific groups of users within the organization.
* Using [Data Loss Prevention (DLP)](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/#%5Ftop) profiles to limit the download of data containing sensitive information from managed SaaS applications.
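The two policy types above can be sketched as the JSON bodies you would send to the Gateway rules API. The wirefilter-style selector names and the DLP profile UUID are illustrative assumptions, not copied from a real account, so check the Gateway HTTP policy reference for the exact expressions.

```python
# Hypothetical Gateway HTTP rules (POST /accounts/{account_id}/gateway/rules).

# 1. Block a subset of file types from being downloaded.
block_file_downloads = {
    "name": "Block executable downloads from SaaS",
    "action": "block",
    "filters": ["http"],
    "traffic": 'any(http.download.file.types[*] in {"exe" "dll"})',
}

# 2. Block downloads whose content matches a DLP profile; profiles are
#    referenced by UUID in the expression (placeholder UUID below).
block_sensitive_downloads = {
    "name": "Block downloads matching the financial data DLP profile",
    "action": "block",
    "filters": ["http"],
    "traffic": 'any(dlp.profiles[*] in {"00000000-0000-0000-0000-000000000000"})',
}
```

Both rules apply only to traffic that flows through Cloudflare, which is why the on-ramps described earlier (device agent, CWAN tunnels, or remote browser) are a prerequisite.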

For more information about securing data in transit, refer to our [reference architecture center](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-transit/).

#### Data at rest

Cloudflare's [Cloud Access Security Broker (CASB)](https://developers.cloudflare.com/cloudflare-one/integrations/cloud-and-saas/) integrates with [popular SaaS applications](https://developers.cloudflare.com/cloudflare-one/integrations/cloud-and-saas/) through APIs. Once integrated, Cloudflare continuously scans these applications for security risks. This enables IT teams to detect incidents of authorized users oversharing data, such as sharing a file publicly on the Internet. For Google Workspace, Microsoft 365, Box, and Dropbox, the API CASB can also utilize DLP profiles to detect the sharing of sensitive data. For more information about securing data at rest, refer to our [reference architecture center](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-at-rest/).

In addition to the previous measures, IT teams should also consider introducing [User Entity and Behavior Analytics (UEBA) ↗](https://www.cloudflare.com/en-gb/learning/security/what-is-ueba/) controls. Cloudflare can assign a [risk score](https://developers.cloudflare.com/cloudflare-one/team-and-resources/users/risk-score/) to users when detecting activities and behaviors that could introduce risks to the organization. These risk behaviors include scenarios where users trigger an unusually high number of DLP policy matches. By implementing these measures, organizations can significantly reduce the risk of data leaks from managed SaaS applications, even by authorized users.

![Figure 6: Cloudflare can secure data traveling over its network, as well as using SaaS application APIs to examine data stored at rest.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-06.ClpGGJtH_D2MCT.svg "Figure 6: Cloudflare can secure data traveling over its network, as well as using SaaS application APIs to examine data stored at rest.")

Figure 6: Cloudflare can secure data traveling over its network, as well as using SaaS application APIs to examine data stored at rest.

### Monitor configuration

While this design guide has primarily focused on SaaS application users so far, it is important to note that a significant number of SaaS data leaks today are not caused by user behavior but rather by misconfigurations made by IT teams. When these misconfigurations go unchecked, they expose both the SaaS application and the organization to serious security risks.

You can mitigate these risks using Cloudflare's CASB. The API CASB continuously scans for and identifies misconfigurations, enabling swift remediation. It can detect issues such as exposed credentials, keys that need rotation, users with two-factor authentication (2FA) disabled, and unauthorized third-party apps with access to the SaaS application.

### Cloud email security

Phishing and malware campaigns aimed at taking over devices and accessing company data usually use email as the attack channel. The vast majority of companies today have migrated their email from on-premises servers to cloud hosted services. While the built-in security of solutions such as Microsoft 365 and Google Workspace is good, it is unable to keep up with the constant evolution of attack methods. Many organizations therefore deploy advanced email security solutions integrated with existing email platforms.

#### Securing access

As described already, implementing ZTNA to secure your email platform offers numerous benefits. One key advantage is ensuring that email access is restricted to trusted, managed devices, even when using a cloud-based email service. This typically involves using Cloudflare to verify the presence of a [client certificate](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/client-checks/client-certificate/) and confirm that there are no risks detected by an external endpoint management solution, such as [Crowdstrike](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/crowdstrike/) or [SentinelOne](https://developers.cloudflare.com/cloudflare-one/integrations/service-providers/sentinelone/).

#### Tenant control

Organizations with stringent requirements about email communications for compliance or regulatory reasons, operational control or accountability, or to reduce the potential for data leaks can block access to email tenants other than the organization's own. This can be achieved by using [Cloudflare Gateway SaaS tenant controls](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/tenant-control/). Cloudflare injects custom HTTP headers into the traffic flow, informing Microsoft 365 and Google Workspace of the specific tenant users are allowed to authenticate into and blocking any access attempts to any other tenant.
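The tenant-control mechanism can be sketched as a Gateway HTTP rule that injects Microsoft's documented tenant-restriction headers into traffic bound for the Microsoft 365 login endpoint. The header names (`Restrict-Access-To-Tenants`, `Restrict-Access-Context`) come from Microsoft's tenant restrictions documentation; the Gateway field names and the tenant values below are illustrative assumptions to verify against the current API reference.

```python
# Sketch of a Gateway HTTP policy injecting tenant-restriction headers so
# users can only authenticate into the corporate Microsoft 365 tenant.
# Tenant domain and directory ID below are placeholders.

tenant_control_rule = {
    "name": "Allow only the corporate Microsoft 365 tenant",
    "action": "allow",
    "filters": ["http"],
    "traffic": 'any(http.request.domains[*] in {"login.microsoftonline.com"})',
    "rule_settings": {
        "add_headers": {
            "Restrict-Access-To-Tenants": "example.onmicrosoft.com",
            "Restrict-Access-Context": "00000000-0000-0000-0000-000000000000",
        }
    },
}
```

Because Cloudflare sits inline on the traffic path, the headers are added for every user regardless of device or location, and the identity provider rejects sign-ins to any tenant not listed.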

![Figure 7: Cloudflare can enforce access to only specific cloud email tenants.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-07.Dp1tEZPu_RR77s.svg "Figure 7: Cloudflare can enforce access to only specific cloud email tenants.")

Figure 7: Cloudflare can enforce access to only specific cloud email tenants.

#### Filtering inbound emails

While SaaS email solutions offer native security capabilities, their popularity makes them high-value targets for attackers who seek to exploit vulnerabilities and limitations in their inbound filtering capabilities. To mitigate this risk, IT teams should consider supplementing the native capabilities of cloud email solutions with specialized solutions for inbound email filtering.

[Cloudflare's Email security ↗](https://www.cloudflare.com/en-gb/zero-trust/products/email-security/) scans for malicious content or attachments in emails and proactively monitors the Internet for attacker infrastructure and attack delivery mechanisms. It identifies programmatically-created and impersonation domains used to host malicious content as part of planned attacks. This data also helps protect against business and vendor email compromises ([BEC ↗](https://www.cloudflare.com/en-gb/learning/email-security/business-email-compromise-bec/)/[VEC ↗](https://www.cloudflare.com/en-gb/learning/email-security/what-is-vendor-email-compromise/)), which are notoriously difficult to detect due to their lack of payloads and their resemblance to legitimate email traffic, a gap for legacy email security platforms.

Integrating Cloudflare into the existing email infrastructure is both flexible and straightforward, with deployment options available in [inline](https://developers.cloudflare.com/email-security/deployment/inline/) and [API](https://developers.cloudflare.com/email-security/deployment/api/) modes.

In an inline deployment, Cloudflare's Email security will evaluate email messages before they reach users' inboxes (by pointing the email domain MX record to Cloudflare). This allows Cloudflare to [quarantine messages](https://developers.cloudflare.com/email-security/email-configuration/admin-quarantine/) so they never reach the user's inbox or [tag messages with email headers](https://developers.cloudflare.com/email-security/reference/dispositions-and-attributes/#header-structure) to inform the email provider how emails should be handled (for example, [by redirecting bulk emails directly to the spam folder](https://developers.cloudflare.com/email-security/deployment/inline/setup/office-365-area1-mx/use-cases/one-junk-admin-quarantine/)). Cloudflare can also [modify the subject and body of email messages](https://developers.cloudflare.com/email-security/email-configuration/email-policies/text-addons/) to inform a user to be more cautious about a suspicious email and [rewrite links within emails and even isolate those links behind a remote browser](https://developers.cloudflare.com/email-security/email-configuration/email-policies/link-actions/).

In an API deployment, Cloudflare's Email security will see the email messages only after they have reached the users' inboxes, by setting up Journaling/BCC rules in the email provider or through API scans. Then, through integrations with the email provider, Cloudflare can [retract phishing emails](https://developers.cloudflare.com/email-security/email-configuration/retract-settings/) from users' inboxes. Unlike the inline mode, this deployment method does not support quarantining emails or modifying the email messages. However, it is an easy way to add protection in complex email infrastructures with no changes to the existing mail flow operations.

These modes can be used concurrently to enhance email security. The inline mode ensures that Cloudflare's Email security scans and filters emails before they reach users' inboxes. For emails that initially pass through without being flagged as threats, Cloudflare [periodically re-evaluates them](https://developers.cloudflare.com/email-security/email-configuration/retract-settings/office365-retraction/#post-delivery-retractions-for-new-threats). If these emails are later identified as part of a phishing campaign, they are automatically retracted with the API. This proactive approach protects organizations against deferred phishing attacks, where attackers send emails with seemingly benign links that are weaponized after delivery to bypass initial detection.

![Figure 8: Cloudflare can protect email services either inline or by API.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-08.CeM49-0Z_ZWpOCc.svg "Figure 8: Cloudflare can protect email services either inline or by API.")

Figure 8: Cloudflare can protect email services either inline or by API.

#### Ensuring availability

Cloudflare also helps ensure the availability of cloud email services. It auto-scales TCP connections and SMTP traffic to handle message spikes, protecting the organization from email DoS attacks. The service automatically pools and queues messages for extended periods and throttles delivery post-spike according to the downstream email service's capacity. This pooling and queuing capability is beneficial during cloud email service outages.

#### Filtering outbound emails with outbound data loss prevention

Organizations using Microsoft 365 can enhance protection against sensitive information leaks through email by integrating a Cloudflare add-in into their environment. This integration enables IT administrators to establish [outbound Data Loss Prevention (DLP) policies](https://developers.cloudflare.com/cloudflare-one/email-security/outbound-dlp/) that leverage the same DLP profiles used with the Secure Web Gateway (SWG) and API Cloud Access Security Broker (CASB).

Moreover, organizations that utilize [Microsoft Purview Sensitivity Labels](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/dlp-profiles/integration-profiles/) for classifying and safeguarding sensitive documents can incorporate these labels into Cloudflare's DLP profiles. This capability allows the creation of targeted policies, such as blocking emails containing Microsoft Office documents marked as 'Highly Confidential' in Microsoft Outlook from being sent to external recipients. These DLP profiles can also be applied across SWG and API CASB.

## Regain control over unmanaged SaaS applications

Unmanaged SaaS applications are those used by employees without IT's approval or knowledge, commonly referred to as [shadow IT ↗](https://www.cloudflare.com/en-gb/learning/access-management/what-is-shadow-it/). This growing challenge is driven by the proliferation of free or low-cost SaaS applications. While these apps can boost employee satisfaction and productivity, they also pose significant risks, such as:

* **Data breaches:** Employees can upload sensitive data to these applications without any security controls. Without Single Sign-On (SSO) or strong password protocols, the risk of data loss or theft is significantly higher.
* **Compliance violations:** In regulated industries, the use of unauthorized SaaS tools can lead to non-compliance with legal and industry standards, potentially resulting in fines, legal action, and reputational damage.
* **Increased costs:** IT can often secure favorable pricing by managing SaaS subscriptions across the business. However, when employees independently purchase subscriptions with personal credit cards, it can lead to unchecked shadow IT spending and higher overall costs for the organization.

To mitigate these risks, the first step is to discover which SaaS applications employees are using. When all traffic from employee devices is routed through Cloudflare, [reports are generated](https://developers.cloudflare.com/cloudflare-one/insights/analytics/shadow-it-discovery/) showing the usage of common SaaS applications.

![Figure 9: When all user traffic bound for the Internet goes through Cloudflare, it allows IT to monitor for unapproved SaaS applications.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-09.DHrIIpJM_ZDfugU.svg "Figure 9: When all user traffic bound for the Internet goes through Cloudflare, it allows IT to monitor for unapproved SaaS applications.")

Figure 9: When all user traffic bound for the Internet goes through Cloudflare, it allows IT to monitor for unapproved SaaS applications.

With this information, IT teams can analyze and decide how to handle each unmanaged SaaS application:

* **Allow the application:** If the application presents no risk to the organization, it is deemed acceptable for employee use, and no further action is required.
* **Allow the application with data protection controls:** If the application is acceptable but poses a data leak risk, appropriate data protection measures should be implemented.
* **Adopt the application as a managed SaaS application:** If the application is beneficial for the organization, it should be brought under IT management.
* **Block the application:** If the application is deemed unacceptable, it should be blocked using Cloudflare Gateway DNS and/or HTTP policies.
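The decision flow above can be sketched as a simple triage function. The record fields used here (`risk`, `business_value`, `handles_sensitive_data`) are hypothetical labels an IT team might assign after reviewing Shadow IT reports, not fields from any Cloudflare API:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    ALLOW_WITH_DLP = "allow with data protection controls"
    ADOPT = "adopt as a managed SaaS application"
    BLOCK = "block via Gateway DNS/HTTP policy"

def triage(app: dict) -> Action:
    """Decide how to handle a discovered SaaS application.

    `app` is an illustrative record, e.g.
    {"risk": "low", "business_value": "high", "handles_sensitive_data": False}.
    """
    if app["risk"] == "high":
        return Action.BLOCK            # unacceptable: block outright
    if app.get("business_value") == "high":
        return Action.ADOPT            # useful: bring under IT management
    if app.get("handles_sensitive_data"):
        return Action.ALLOW_WITH_DLP   # acceptable, but protect the data
    return Action.ALLOW                # no risk: no further action
```

The ordering matters: risk is evaluated before business value, so a high-risk application is blocked even if it is popular.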

### Data protection for unmanaged SaaS applications

Data protection for unmanaged SaaS applications is similar to that for managed SaaS applications, but the focus shifts from mitigating the downloading of data to preventing the uploading of sensitive information. Policies can be configured using Cloudflare Gateway to address these risks. Common use cases include:

* Restricting the ability to [upload certain file types](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/dlp-policies/common-policies/#block-file-types) to SaaS applications, limiting this capability to specific groups of users within the organization.
* Using Data Loss Prevention (DLP) profiles to block the upload of data containing sensitive information.
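As a rough illustration of the two bullets above, a DLP-style upload check combines a blocked file-type list with a content pattern match. The extensions and the card-number pattern below are made-up stand-ins; real DLP profiles and file-type rules are configured in Cloudflare Zero Trust, not hand-written like this:

```python
import re

# Hypothetical stand-ins for a DLP profile: a detection pattern for
# payment-card-like numbers and a set of file types to block on upload.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
BLOCKED_UPLOAD_TYPES = {".zip", ".db", ".sql"}

def should_block_upload(filename: str, content: str) -> bool:
    """Return True if an upload would be blocked by file type or content match."""
    if any(filename.lower().endswith(ext) for ext in BLOCKED_UPLOAD_TYPES):
        return True
    return bool(CARD_PATTERN.search(content))
```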

In addition to these measures, [remote browser isolation](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/#%5Ftop) can be considered for unmanaged SaaS applications. This approach allows users to access certain unmanaged SaaS applications while [restricting their actions within those applications](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/isolation-policies/#policy-settings) to prevent misuse.

![Figure 10: DLP policies can be combined with browser isolation, to protect company data.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-10.zOip4DKU_ZJaIoL.svg "Figure 10: DLP policies can be combined with browser isolation, to protect company data.")

Figure 10: DLP policies can be combined with browser isolation, to protect company data.

### Adopting a new SaaS application

Many SaaS applications offer a free version as part of their business model to encourage users to integrate them into their work. This helps demonstrate the application's usefulness and facilitates its adoption at the corporate level ([Cloudflare follows this model as well ↗](https://www.cloudflare.com/en-gb/plans/zero-trust-services/)). When a previously unmanaged SaaS application is officially adopted by the organization, IT teams take over its management to ensure proper support and adherence to best practices. This involves aligning the new SaaS application with all the aspects discussed in the Securing Managed SaaS Applications section.

After fully adopting the new SaaS application, access to the consumer version may be restricted. If the corporate SaaS version has a unique domain, access to other tenant domains or the consumer domain can be blocked using Cloudflare DNS and/or HTTP policies. Some SaaS solutions offer [native tenant control](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/tenant-control/) through HTTP headers, which can be enforced by injecting these headers for data in transit using Cloudflare Gateway HTTP policies.
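Tenant control via header injection can be sketched as follows. The header names shown are the tenant-restriction headers documented by Microsoft 365 and Google Workspace, but verify the exact names against each vendor's documentation; the tenant domains are placeholders:

```python
# Tenant-control headers a Gateway HTTP policy would inject for matching
# traffic, keyed by destination host. Domains are placeholders.
TENANT_HEADERS = {
    "login.microsoftonline.com": {
        "Restrict-Access-To-Tenants": "example.onmicrosoft.com",
    },
    "accounts.google.com": {
        "X-GooG-Allowed-Domains": "example.com",
    },
}

def inject_tenant_headers(host: str, headers: dict) -> dict:
    """Merge tenant-control headers into an outbound request for `host`,
    mimicking what an HTTP policy does for data in transit."""
    merged = dict(headers)
    merged.update(TENANT_HEADERS.get(host, {}))
    return merged
```

With headers like these enforced in transit, the SaaS provider itself rejects sign-ins to tenants outside the allowed list, closing off the consumer version.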

## Summary

This design guide described how organizations can enhance their SaaS application security by implementing a Zero Trust framework within a SASE architecture. With Cloudflare, organizations gain access to a comprehensive solution that addresses the challenges posed by both managed and unmanaged SaaS applications. By using techniques like ZTNA, dedicated egress IPs, CASB, and robust email security measures, organizations can ensure secure access, protect sensitive data, and gain control over shadow IT, all while maintaining a positive user experience. These techniques and when to apply them are summarized in the diagram below:

![Figure 11: Techniques for enforcing a zero trust approach in SaaS applications.](https://developers.cloudflare.com/_astro/zero-trust-saas-image-11.qEiUE-gW_2vxGC2.svg "Figure 11: Techniques for enforcing a zero trust approach in SaaS applications.")

Figure 11: Techniques for enforcing a zero trust approach in SaaS applications.

## Related resources

* [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/)
* [Using Cloudflare SASE with Microsoft](https://developers.cloudflare.com/reference-architecture/architectures/cloudflare-sase-with-microsoft/)

---

---
title: Building zero trust architecture into your startup
description: Cloudflare Zero Trust is a simple, (sometimes free!) way for startups to develop a comprehensive Zero Trust strategy. This guide explains how to use Cloudflare to establish the foundation for a Zero Trust architecture.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Building zero trust architecture into your startup

**Last reviewed:**  almost 2 years ago 

## Introduction

Most of Cloudflare's documentation (and, generally, documentation from most vendors in the space) is written with the assumption that adopting Zero Trust products requires shifting away from something. When nothing has been built yet, or no existing tool fulfills the goals your team is trying to accomplish, this framing can be confusing and alienating. New startups are especially underserved: as you focus all your energy on getting your business off the ground, it can be time-consuming to parse documentation angled toward enterprises undergoing network transformation.

This guide explains how to use Cloudflare to establish the foundation for a Zero Trust architecture early in the establishment of your security, networking, and development operations practices — with the goal of creating a sustainable, scalable business built on Zero Trust security principles.

The common principles for building a business have fundamentally changed. Twenty years ago, that may have looked like getting office space (or a garage) and buying hardware: servers, user machines to build on, and, as building continued, stacked firewalls and security appliances to create a corporate perimeter around things that primarily existed in one place. There is plenty of good writing on the evolution of networking and security practices, so we won't belabor the point here; the important detail is to recognize how the 'new' model matters for your startup as you build.

Chances are good that today most of your infrastructure will exist in a public cloud provider. Most of your code will be pushed and reviewed via common repository management tools, most of your developers will write code on macOS or Linux machines, and they will probably rely heavily on some form of containerization for local development. Within this model, Zero Trust security principles are just as relevant — albeit much easier to achieve — when your business grows into multiple complex functions, departments, and an expanding set of assets and data.

Using Cloudflare Zero Trust is a simple, (sometimes free!) way for startups to develop a comprehensive Zero Trust strategy that will grow organically with your business.

## Who is this document for and what will you learn?

Cloudflare has lots of existing content related to migration and implementation of our Zero Trust product set. This document speaks directly to technical founders and founding engineers of young startup organizations who are looking to develop the framework for a modern corporate network, with modern security controls, from their first line of code.

In this document we'll explore:

* Getting started with practical Zero Trust remote access (ZTNA) capabilities
* Establishing sources of truth for identity, device posture, and learning how to use them
* Network building, both traditional and mesh
* Building Zero Trust into internal tooling
* Reviewing threats on the Internet
* TLS decryption and its relevance for your goals
* Exploring Zero Trust for your SaaS tools
* Navigating contractor and customer access
* Building with Infrastructure as Code

A few things explicitly not covered in this document:

* Introduction to basic Zero Trust terminology and concepts
* Recommendations for or against specific third-party vendor usage (while other vendors are mentioned in this document, it's purely illustrative and should not be taken as a formal recommendation from Cloudflare)
* Details on why you should explore adopting a Zero Trust security methodology (we have lots of good resources detailing that in the links below)
* Microsegmentation and autonomous Zero Trust concepts (these may be covered in future updates)
* Passwordless authentication (this is a cool and emerging space, and we'll provide some recommendations here in the future)

To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* What is Cloudflare? | [Website ↗](https://www.cloudflare.com/what-is-cloudflare/) (five-minute read) or [video ↗](https://www.youtube.com/watch?v=XHvmX3FhTwU) (two minutes)
* Blog: [Zero Trust, SASE, and SSE: foundational concepts for your next-generation network ↗](https://blog.cloudflare.com/zero-trust-sase-and-sse-foundational-concepts-for-your-next-generation-network/) (14-minute read)
* Reference architecture: [Evolving to a SASE architecture with Cloudflare](https://developers.cloudflare.com/reference-architecture/architectures/sase/) (three-hour read)

## Getting started — Foundational decisions

### Asset inventory

Before thinking about your remote access or security goals, it's important to take stock of your current assets. Think about the answers to the following questions:

* What already exists and is in need of a sustainable model for security?
* If you have begun building infrastructure in a public cloud provider, how many distinct virtual private clouds (VPCs) have you already established, and how do they communicate with each other? More importantly, how and why do your users access those environments?
* Is it all through the console and browser-based management or terminal tools?
* Have you set up public IP access for some services over HTTPS or SSH?
* Are there resources that may allow access from the Internet that are intended to be entirely private?
* Have you established a traditional VPN to allow remote access to the environment, and how is it gated?

Next, build a map of your physical and virtual private infrastructure (essentially, anything that contains company data). For many startups, this may just be implemented via a single cloud provider. Note all the resources in that environment that are accessed, either by human users, other infrastructure, or public or private APIs — then document the purpose of each service that sees regular traffic. As you do so, try to answer the following questions:

* Is this an internal web-based tool built to monitor your build pipeline?
* Is it a self-hosted analytics tool like Grafana, or a supporting metrics server like Prometheus?
* How are users reaching that service — via a public IP, a private IP, or a local path?
* Are users able to reach the service from other cloud environments or VPCs? If so, how are they connected?

Once you've developed a comprehensive list of your existing resources, this will serve as an asset inventory for your development of a Zero Trust architecture. If you don't know what you need to protect, it'll be difficult to protect it, no matter how many security tools you have.

![A snapshot of the foundational decisions to make when establishing a zero trust architecture](https://developers.cloudflare.com/_astro/zero-trust-design-guide-getting-started-foundational-decisions.BjoDdDt1_Z23yUcD.svg) 

A valuable third step may be to stack-rank these services by risk level in the event of a breach, which later determines how specific your security policies need to be. For example, your internal tool to alert on build status may be a level 3, but your production database for customer information would be a level 1. A level 3 application might be accessible from a user's own device, provided they meet your identity control requirements, while a level 1 application may require access from a corporate device and the use of a specific kind of multi-factor authentication (MFA).
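One way to encode such a stack-ranking is a small table mapping each risk tier to the controls a session must satisfy. The tier numbers and control names below are illustrative, not a Cloudflare policy format:

```python
from dataclasses import dataclass

# Hypothetical access requirements per risk tier, following the
# level 1 / level 3 example: lower tier number = more sensitive.
REQUIREMENTS = {
    1: {"corporate_device", "phishing_resistant_mfa"},
    2: {"corporate_device"},
    3: set(),  # identity checks only; BYOD acceptable
}

@dataclass
class AccessContext:
    identity_verified: bool
    controls: set  # controls satisfied by this session, e.g. {"corporate_device"}

def can_access(tier: int, ctx: AccessContext) -> bool:
    """Identity is always required; sensitive tiers demand extra controls."""
    return ctx.identity_verified and REQUIREMENTS[tier] <= ctx.controls
```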

Note

If you've already grown to the point that documenting your asset inventory is very difficult or time-consuming for your business, you can use tools like our [Private Network Discovery](https://developers.cloudflare.com/cloudflare-one/insights/analytics/shadow-it-discovery/#private-network-origins) capability to build a sense of what your users access in your network space.

### Common goals and outcomes

Many startups that use Cloudflare are encouraged to adopt a Zero Trust security posture by external sources: investors, partners, vendors, risk analysts, or compliance officers. Even if this is a project or evaluation that is driven by outside parties, you can still establish common goals to ensure it drives a measurable, desirable impact.

Some common goals we hear from customers:

* Make internal tooling easy for our users to access securely
* Build security into the development pipeline
* Adopt increased security without sacrificing user and work experience
* Define and execute a bring your own device (BYOD) strategy
* Simplify management of networks and application access
* Protect data in SaaS applications and on the corporate network
* Ensure auditability (“a quick view of what's happening, who's doing it, and if it's okay”)
* Demonstrate security best practices to our customers and end-users

It's also possible that your goals may be simpler or more tactical than this; for instance: adopt a modern remote access tool, securely connect my internal networks, or only allow corporate devices to connect to my GitLab Enterprise tenant. Whatever your goal, the most important element of goal-setting is to establish what you need now and balance it against what you expect to need in the near to mid-term. If you intend to grow significantly, expect to sign customers with demanding security reviews, or plan to pursue a compliance certification such as SOC 2 or PCI DSS, it is crucial to start with a Zero Trust vendor that can layer on additional security tooling and capabilities without exponentially increasing complexity or cost.

Goal-setting is also an important exercise for prioritization. If you know that your primary goal is to _identify and put identity-aware security in front of all our internal services_, but that in the next six months you intend to _restrict BYOD usage to level 3 applications_, your first goal will need to strategically support the execution of the second. Understanding the stack-rank of priorities over the next few months (knowing things change quickly in your startup!) can save you the time spent in re-architecture discussions, or unraveling technical or commercial decisions with vendors that fit your needs in the short term, but not the mid-term.

### Identity

Identity is at the core of every Zero Trust strategy. Ultimately, most customer goals revolve around using a central source of identity to authenticate, validate, and log all actions taken by a user, spanning both 'owned' (hosted, private network) applications and SaaS applications. Identity (through an SSO provider, for example) can then be used to layer additional security controls like multi-factor authentication, or phishing-resistant authentication.

One of the most important things you can do early is to coach users to become accustomed to using multi-factor authentication. Phishing-resistant MFA options like physical keys, local authenticators, and biometric authentication have been credited by Cloudflare as a major factor in [stopping the attempted breach ↗](https://blog.cloudflare.com/2022-07-sms-phishing-attacks) that affected Twilio and other SaaS companies in 2022.

In the context of getting started with Zero Trust, the type of identity provider you decide to use (Google Workspace and Microsoft Entra ID being the most common) is less important than your implementation strategy. As long as you have a directory that is secure, allows for phishing-resistant authentication methods, and is designated as your source of truth, you have the necessary components to integrate with a Zero Trust vendor like Cloudflare and continuously evaluate identity as part of the security posture for all of your corporate tools.

#### SSO integration

Many directory services also provide single sign-on (SSO) solutions for integrating directly with SaaS applications. While this is a simple and logical choice, many enterprise applications make SSO integration a challenge, and onboarding a critical mass of SaaS applications to any one directory service can drive vendor lock-in. As your organization continues to grow, your identity strategy will inevitably change and mature, and it's important to maintain flexibility to address unexpected challenges, like some of the vendor breaches that we saw in 2023.

Along with the challenges related to flexibility, many SSO providers have yet to fully integrate device posture concepts into their 'source of truth' model. Some vendors like Okta offer machine certification as part of an authentication event, but it's limited to Okta's FastPass product and doesn't include signals from other sources or vendors to better determine what constitutes a corporate device.

#### Third-party access

Finally, you will not always own the identities that are used to access your systems. You may hire external auditors who need to use their own company identities to authenticate. You may decide to allow contractors to use their existing GitHub identities to access private GitHub repositories. There may be times where you simply need to provide access to someone with just an email address to access a low risk resource, such as showing customers a preview of a new product interface. So your Zero Trust solution needs to allow identities beyond your central directory to also gain access.

#### Where does Cloudflare fit in?

Later in this document, we'll describe using Cloudflare Zero Trust to protect your internal applications, and how to use Cloudflare as your SSO in front of your SaaS applications to deliver a simple, unified security posture everywhere.

Cloudflare _matters_ in this case because once you've determined a source of truth for your identity provider, you need tooling to perform continuous authentication against your user population. This tooling is difficult to build and maintain, as evidenced by a number of well-known technology companies who retired their internally-built Zero Trust proxy and switched to Cloudflare in 2023, citing management complexity and an inability to add new security functionality.

Cloudflare can simplify your architecture by becoming the single enforcement point for your identity across your private applications, your networks, your developer services, and your SaaS applications. Cloudflare is one of the only vendors able to provide Zero Trust authentication concepts as a web proxy (layer 7 services), as a VPN replacement (layer 3/4 services), and as a secure web gateway.

![The various ways employees, contractors, vendors, or customers may verify their identity to access your company's resources](https://developers.cloudflare.com/_astro/zero-trust-design-guide-getting-started-foundational-decisions-identity.OTP3iPEW_Z20rPBo.svg) 

### Device posture

As your business grows and you begin to operationalize the distribution of endpoints to your user population, device posture is a key component of a strong Zero Trust strategy. Once you've validated your users' identity posture, there are other actions you can take to further reduce the risk of a data breach. Consider this: even if your user is valid and has an active identity session, their device could theoretically be infected, and attackers could benefit from (or _hijack_) their valid identity session.

Companies use device posture to prove that a connection is coming from a trusted device. Let's look at the theory behind device posture before listing some common strategies and approaches to getting started. In this example, you have sensitive data located somewhere in AWS. This data is critical to the operation of your business. It is (rightly) protected behind identity-aware authentication, so you feel confident that it can only be accessed by users with the proper identity posture. Your users are all remote, and connect to AWS from MacBooks that are pre-configured with your endpoint detection and response (EDR) software of choice. Users on those MacBooks, configured with enterprise EDR software, carry a lower risk of potential breach than when they use their personal laptops to access company data. But how do you prove that your users with valid identity posture _only_ access your sensitive data from the devices that carry a lower risk of breach?

As your security organization grows and you begin to implement data loss prevention (DLP) strategies and tools, this becomes doubly important. If your users can theoretically access sensitive data without applying a burden of proof to the device used for access, users may be able to (intentionally or inadvertently) circumvent your security tooling and create the risk of exfiltration, or at a minimum, blind spots for your visibility and auditability.

Common device posture strategies usually rely on a combination of an endpoint management tool (like Jamf or Intune), a corporate certificate, and security tooling like EDR software on the device. Some of this tooling can fingerprint your devices in a way that can be externally validated where supported. To achieve Zero Trust access controls with device posture validation, an endpoint agent from the Zero Trust vendor typically needs to be deployed on each device, where it 'independently' verifies a claim from a third-party vendor before that device state is applied in policy. When evaluating vendors, assess their ability to poll for state relatively frequently, so that they adhere to the Zero Trust philosophy of continuous evaluation of state.
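The 'continuous evaluation' idea can be sketched as a freshness check on cached posture signals plus a compliance test over the required signals. The TTL value and the signal names (`edr_running`, `disk_encrypted`, `os_up_to_date`) are arbitrary choices for illustration, not the names any particular vendor uses:

```python
import time

# Posture signals are cached with a short TTL and re-polled when stale,
# rather than checked once at login. TTL is an illustrative value.
POSTURE_TTL_SECONDS = 300

def posture_is_fresh(last_checked, now=None):
    """True if the cached posture check is recent enough to trust."""
    now = time.time() if now is None else now
    return (now - last_checked) < POSTURE_TTL_SECONDS

def device_compliant(signals: dict) -> bool:
    """All required signals must be present and truthy; missing means failing."""
    required = ("edr_running", "disk_encrypted", "os_up_to_date")
    return all(signals.get(name, False) for name in required)
```

A policy engine would call `device_compliant` on every evaluation, and trigger a re-poll of the endpoint agent whenever `posture_is_fresh` returns false.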

#### Where does Cloudflare fit in?

As you begin to use third-party vendors for Zero Trust security outcomes, those vendors need to ingest first-party signals to help you make the best security decisions. In this case, Cloudflare becomes your point of policy enforcement for device posture — in addition to identity posture. The Cloudflare device agent will evaluate your device ownership or health metrics, and use them in conjunction with policies about user identity to ensure access to sensitive resources both has proper identity verification and is coming from a compliant device with the acceptable level of security control.

![Cloudflare's device posture enforcement in action](https://developers.cloudflare.com/_astro/zero-trust-design-guide-getting-started-foundational-decisions-device-posture.BpvZA4DM_ZYq6xW.svg) 

## Traditional and mesh network building

In the 'old world' model (also known as a castle and moat security architecture), your infrastructure would probably be homogeneous and protected by a firewall. To access network resources, users not in the office (or other third parties, vendors, etc.) would need to connect to the network via a VPN and firewall, or use another available network route via a public IP address. Because most infrastructure now lives in the cloud, and most startups begin remote-first, almost none of the traditional networking concepts will be explicitly relevant as you design the initial phases of your 'corporate network'.

In this more traditional networking model, your infrastructure will probably be structured in several of the following ways:

* It will exist in one or multiple VPCs (which may or may not be connected by cloud provider transit gateways)
* The addressing of your services will probably be managed by your cloud provider
* You will use internal DNS from a cloud provider like AWS' Route53 DNS (most businesses still rely on internal DNS to some extent, no matter how cloud-native they may be)
* There may always be a reason to maintain some concept of a privately networked space, as long as you maintain your own infrastructure
* It's possible that all users won't have a need to understand or navigate using your internal DNS infrastructure (but technical users and services likely will)

_As you begin establishing patterns in the infrastructure that you build, it's likely that you'll consolidate around a single, primary cloud provider. The main concepts relevant for this document will focus on users connecting to your network to access internal resources and services, and the way that your internal services communicate with the Internet broadly. Management of cloud infrastructure permissions and policies, as well as recognition of the ways in which your internal services can communicate with one another is equally relevant to a comprehensive Zero Trust strategy, but will be discussed in depth in future updates to this document._

### Connecting users to networks

This will probably be one of the most common Zero Trust use cases for a majority of startups. You may be asking yourself: how can I give my users access to my internal network or applications without managing VPN hardware or exposing my business to risk? As you navigate the best way to connect your users to your private networks and services — while still adhering to Zero Trust principles — there are two important things to consider:

1. **Limiting exposure** — A Zero Trust philosophy encourages organizations to limit the number of ways in which networks or services can be accessed. Public IP addresses or open ingress paths into your network introduce unwanted risk. This is typically addressed with outbound-only proxies that connect to a Zero Trust vendor, so that only authenticated traffic is proxied into your network and no public ingress of any kind is required.
2. **Limiting lateral movement** — One of the best ways to reduce the radius of a potential data breach is to practice least-privilege access for all resources. Least-privilege access is a core tenet of a Zero Trust architecture, in which users only receive the level of access they need for their role, rather than getting carte blanche access to the entire corporate network. The most analogous concept as it relates to Zero Trust frameworks is that of 'microtunnels' — a recommended approach in which each application or service that needs to be accessed receives its own distinct 'route'. Similar to microtunnels, least-privilege access enables you to build a practice in which only explicit services and users have access to specific resources, helping position future security organizations very favorably.

Defining a clear strategy for infrastructure creation and management — along with a predictable internal IP and DNS record structure — will be invaluable for accessing and protecting your assets as your organization continues to grow. A little later in the document, we'll expand on the ways you can use automated workflows to create infrastructure that integrates instantly with your chosen Zero Trust security provider. It will be significantly easier to layer security policies over your access control models if you have a continued, clear sense of what infrastructure exists and how it is currently addressed.

#### Where does Cloudflare fit in?

Cloudflare Zero Trust can make private networking concepts extensible to your end users with a combination of endpoint software and cloud networking connectors. In this case, you can use Cloudflare as an 'overlay' network to extend secure access to your internal network for end users without exposing public IPs, allowing ingress from your cloud environments, or introducing any sort of additional risk that usually comes with remote access.

With this 'overlay' network, a small piece of software sits in your network and provides both 'network' tunnels (to give users administrative access to services on your internal network, replacing traditional exposed-bastion concepts) and 'application' tunnels (micro-tunnels that will only allow an authenticated user to explicitly reach the singular service defined in the tunnel).

![Cloudflare providing network and application tunnels to access both company and Internet resources](https://developers.cloudflare.com/_astro/zero-trust-design-guide-traditional-and-mesh-network-building-connecting-users-to-networks.DbAc3MuA_ZoUR5r.svg) 

This makes it significantly easier to manage user access to multiple, distinct private networking environments without forcing the user to change their profile, switch settings, or constantly disconnect or reconnect from one or multiple clients. It also gives you the capability to easily expose a single private application or service to specific audiences while adhering to Zero Trust principles.

## Connecting networks to networks

For most startups, networking is not at the top of the list of things to change. Typically, businesses follow the path of least resistance, which involves managing connected VPCs in AWS or GCP, and maybe setting up a few external connections to physical locations. Most businesses, however, find that their growth results in an increasingly complex network topology — a process that tends to happen very quickly.

When simplifying the corporate network, some common extensions may include customer networks, partners, multi-cloud, acquisitions, disaster-recovery planning, and more. As your security organization matures, there will be more and more reasons to spread infrastructure across multiple VPCs (even within the same cloud environment). And, as security groups for those VPCs become increasingly complex, you will find that you are managing multiple internal networks with distinct policies and sometimes distinct operations.

As these network extensions become more relevant for your business, it's worthwhile to review which connectivity options make the most sense, and explore strategies to build a functionally complex, fundamentally secure network.

### Traditional connectivity

The traditional methods of network connectivity still have significant value both in physical and in cloud environments, but using them efficiently while maintaining an effective security perimeter can be a challenge. When businesses only had physical connectivity requirements, like branch offices or supplemental data centers, the framework was much simpler. You'd use either edge devices like routers or firewalls to terminate physical connectivity, or a dedicated head-end device to build VPN ("virtual") network connectivity between the sites. Essentially, you would be connecting two 'networks' together by providing a new route to a new network or subnet for all the machines on your initial site.

In addition to creating WAN connectivity, the end goal of bridging multiple sites is management simplicity. Having a unified network means that it is easier to support network functions like edge routing, gateways, and addressing via DHCP. However, this can also result in overly-broad policy management, and it can be difficult to manage the security implications of increasingly growing networks with increasingly complex edge cases and unique scenarios.

For modern startups, the problems may not be the exact ones described above, but you will likely still have to solve for growing network complexity. The best way to navigate this is to _plan effectively_. If you begin building your corporate network with security and scalability in mind, you will be able to easily solve increasing complexity as your security and IT organizations grow.

### Mesh connectivity

While traditional networking concepts primarily focus on connecting networks to one another, mesh or peer-to-peer networking concepts connect networks to assets or independent endpoints (e.g. end-user devices, like laptops and cell phones, or IoT devices, like smart lights and security cameras).

In a traditional network, you may have a VPN tunnel that creates a site-to-site connection between the IP spaces of 10.0.0.0/8 and 192.168.0.0/24, giving all devices within either network a gateway to communicate locally with devices on either network. Conversely, in a mesh networking model, you may only want certain IP spaces to communicate with each other — for instance, enabling 10.2.3.4 to communicate with the device that has the IP address 192.168.0.50.

If you only operate with 'micro-tunnels' (e.g. discrete X can only reach discrete Y), you massively reduce your opportunities for lateral movement. For example, using a mesh networking model means that IP address 10.2.3.4 would not be able to reach sensitive data on a different 192.168.0.0/24 address (although it might be able to within a traditional network model). However, this increased security posture also results in increased complexity. Not only do you (usually) need to manage agents on each relevant endpoint in a mesh network, but you then need to be prepared to build and manage discrete policies for each asset and connectivity path.
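The contrast between the two models can be sketched as two policy checks. This is an illustrative sketch, not a real routing or vendor policy engine — the IP addresses come from the example above, and the function names are hypothetical:

```python
from ipaddress import ip_address, ip_network

# Mesh model: connectivity is granted per explicit source/destination pair.
MESH_ALLOWLIST = {
    ("10.2.3.4", "192.168.0.50"),  # one asset to one asset, nothing else
}

# Traditional model: a site-to-site route trusts whole prefixes.
SITE_TO_SITE_ROUTES = [
    (ip_network("10.0.0.0/8"), ip_network("192.168.0.0/24")),
]

def mesh_allows(src: str, dst: str) -> bool:
    """Mesh: only explicitly enumerated pairs may talk."""
    return (src, dst) in MESH_ALLOWLIST

def site_to_site_allows(src: str, dst: str) -> bool:
    """Traditional: any host in a routed prefix reaches any host in the peer prefix."""
    s, d = ip_address(src), ip_address(dst)
    return any(s in a and d in b for a, b in SITE_TO_SITE_ROUTES)
```

Note that `site_to_site_allows` returns `True` for every host pair across the two prefixes, which is exactly the lateral-movement surface the mesh model eliminates — at the cost of enumerating and managing each pair.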

Editor's note

In some analyst circles, the mesh connectivity space is beginning to be referred to as 'Secure Networking', and while we appreciate the opportunity for differentiation, Cloudflare believes that there are methods for making both traditional and mesh networking effectively secure.

### Where does Cloudflare fit in?

If both operating models sound complicated and imperfect, it's because they are. Because of this, Cloudflare believes that a blend of the two is typically the right approach for businesses of all sizes.

If your organization is experimenting with mesh connectivity, Cloudflare can help support discrete connectivity models while layering in unique identity concepts and supporting your security and scalability needs as you construct a networking framework to support your future growth.

The Cloudflare products that are typically most relevant for startups are a combination of the Cloudflare One Client (WARP) and Cloudflare Tunnel (via `cloudflared` or the WARP Connector). Together, these allow you to manage remote access, mesh connectivity, and traditional network connectivity from a single dashboard. On a more granular level, this means you can configure device posture information, identity information, client certificates, and common L4 indicators (like port, protocol, and source/destination IP) at a single point of policy enforcement — enabling you to build robust security policies for both human and autonomous network interaction.

![Cloudflare connecting multiple cloud providers, public, and private networks](https://developers.cloudflare.com/_astro/zero-trust-design-guide-traditional-and-mesh-network-building-connecting-networks-to-networks.Du7unmEQ_1hvRm.svg) 

This blend of networking models is designed to support a wide range of use cases, whether you are trying to provide remote access to your corporate network, extend your corporate network to encompass cloud environments or on-premises equipment, or continue building out a model for mesh connectivity between critical infrastructure without introducing additional risk or overhead.

## Building Zero Trust into internal tooling

Among almost all the startups (and mature companies) that Cloudflare has worked with, security for internal tooling continues to be a ubiquitous challenge. You may build tools that you need to accomplish tasks specific to your business, or you may choose to self-host or use open source software — which can also be consumed as popular SaaS applications for services, monitoring, or other functions.

The principles outlined in previous sections address methods of managing remote access to these resources and deal primarily with authentication. In summary, achieving a Zero Trust model in practice requires you to ensure that access to each internal service is controlled by a continuous identity authentication proxy, ideally one that is physically separated from your network perimeter, has clear auditability, and offers the capability to quickly revoke user access as needed.

However, one of the biggest challenges businesses face as they begin to implement a Zero Trust model is not authentication, but authorization. Getting the user to the front door of your application in a Zero Trust model is (relatively) easy, but managing their credentials for both authentication and authorization, ensuring that the two match, and simultaneously maintaining a positive, non-invasive user experience can be very difficult.

In an ideal world, we believe that authentication and authorization should be handled by the same service. This means that while deliberating how to secure your internal applications — whether to build OAuth capabilities into them directly or to integrate directly with the primary SSO that you use for your SaaS applications — you should also consider how your authentication methods may conflict or become duplicative with your identity validation methods. There are two primary ways to use these concepts to set yourself up for scalable success with authentication and authorization.

### Consuming Zero Trust vendor tokens

'Vendor tokens' is a concept that does not exist for every Zero Trust or SSE vendor. This is due to Cloudflare's relatively unique approach; because we're the world's largest provider of authoritative DNS, we provide DNS for the 'external' path to your internal applications, then create tokens for user access.

These tokens are based on the information Cloudflare receives from your identity provider after a successful authentication event, which matches against custom policies for that application. Each token contains all of the content that would be signed in a user's authentication event with their IdP: their name, username, email, group membership, and whatever other values are present. It also gets a unique tag to indicate its relevance to a specific application.

Once the _Cloudflare_ token has been created, it is passed to your internal applications to validate their requests and authorize access to your internal tooling. This takes minimal additional work per application, and can be built into application creation workflows where you would otherwise need a complete OAuth or SSO integration.

By using Cloudflare tokens, your users will have a seamless experience both _authenticating_ through your established Zero Trust proxy and getting _authorized_ directly into your application with the same information.

![How Cloudflare consumes tokens to validate requests and authorize access to internal tools](https://developers.cloudflare.com/_astro/zero-trust-design-guide-building-zero-trust-into-internal-tooling-consuming-tokens.D9KBiyO0_Z1MjIBX.svg) 

### Your Zero Trust vendor as an SSO

Some Zero Trust vendors provide the capability to operate as an SSO provider, integrating directly with your applications (like open-source or self-hosted solutions) which come with a pre-built SSO connector. In this flow, your SSO controls your authorization to the application, and your Zero Trust vendor calls out to your identity provider to make authentication decisions, without needing to manage multiple primary directories.

For Cloudflare users, this offers a number of advantages: it helps streamline authentication (AuthN) and authorization (AuthZ), reduces your reliance on a specific SSO vendor, and allows you to use multiple simultaneous authentication providers. Most importantly, it enables you to easily adopt or switch to a new identity provider. Businesses may not use the same identity provider at 25-50 users that they use at 300-500+, and there is always significant friction in the hard cutover required to move from one SSO integration to another. This transition can be especially difficult given the time and frustration involved in some applications' SSO integrations. Using Cloudflare as an SSO provider can help alleviate that friction by aggregating all of your identity, device posture, and risk integrations within a single policy enforcement point — thereby helping you streamline your AuthN/AuthZ and put additional security controls in front of your self-hosted applications.

### Where does Cloudflare fit in?

We recommend using our Cloudflare Access product for remote access to your internal services (by way of our Cloudflare Tunnel software in your network). With Cloudflare Access, you can [consume the JWT](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/authorization-cookie/validating-json/) created by Cloudflare Access or use [Access for SaaS](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/saas-apps/) to act as a SAML or OAuth proxy for your private, self-hosted applications (which have SSO integrations pre-built into them).

In many cases, you may even use both products for application access. For example, if you're self-hosting [Sentry ↗](https://sentry.io/) — keeping it off the public Internet — follow these steps:

1. Set up a public hostname with Cloudflare Access (which your users would navigate to Sentry on).
2. Install a Cloudflare Tunnel with an associated **Published application** to point to your local Sentry service.
3. Integrate Sentry with Access for SaaS as the SSO provider.

Now, users reaching the application from outside your network will already carry the Cloudflare JWT, and will be seamlessly authenticated into your application.

![Building zero trust into internal tooling and SSO](https://developers.cloudflare.com/_astro/zero-trust-design-guide-building-zero-trust-into-internal-tooling-sso.3OqU4GE9_24dl06.svg) 

## Remote access for contractors, vendors, and customers

Established and accepted patterns for corporate user remote access don't always extend to heterogeneous sets of users, which usually include contractors, third-party vendors, and even customers. All these user groups can have valid reasons for engaging with your private resources. It's possible you may hire development or maintenance contractors that need access to some parts of your network or applications, but providing them complete network access would introduce unnecessary risk.

It's also possible that you may provide hosted or managed services to your customers that they would then deploy within their own networks. In that case, you may need to connect with those services to appropriately manage them. Or, subsequently, you may host private resources for customers within your own environment and need to give them secure access to only access their relevant tenant.

### Establishing scope

Whenever you determine a need for third-party user access to your environment, you should first determine three attributes:

* What they need to access
* What level of authentication is required for that access
* How long this access will be relevant
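The three attributes above map naturally onto a simple access-grant record. This is an illustrative data-structure sketch — the field names, resource hostname, and authentication labels are hypothetical, not a vendor schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ThirdPartyGrant:
    """One record per third-party access decision, answering the three scoping questions."""
    resource: str          # what they need to access
    auth_requirement: str  # what level of authentication, e.g. "partner-idp", "one-time-pin"
    expires_at: datetime   # how long this access stays relevant

    def is_active(self, now: datetime) -> bool:
        return now < self.expires_at

# Example: a maintenance contractor gets 30 days of access to one internal app,
# authenticating with a one-time PIN sent to their email address.
now = datetime.now(timezone.utc)
grant = ThirdPartyGrant(
    resource="grafana.internal.example.com",
    auth_requirement="one-time-pin",
    expires_at=now + timedelta(days=30),
)
```

Writing grants down this explicitly — even in a spreadsheet — makes expiry reviews and audits far easier than ad hoc VPN credentials.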

### Web access for third parties

After determining the scope, you should determine the least-privilege access model appropriate for the user group. This may mean integrating with a secondary identity provider (maybe the customer or vendor's IdP) to use in authentication events, or using a temporary authentication method like a one-time PIN to authenticate against their email address only.

Some businesses also add vendor and contractor users to _their_ identity provider to streamline authentication and to control methods (like the use of MFA and other authentication factors). At a minimum, we recommend working with a Zero Trust security provider who supports multiple, simultaneous methods for authentication, and can apply them via specific policies or applications.

This allows you to keep all of your existing methods of secure remote access consistent. Your external user cohort will use the same paths into your network and will be subject to all of your security controls. Meanwhile, you will receive detailed logging and audit trails documenting exactly what users had access to, how frequently they accessed it, and what kind of actions they took within your network. Least-privilege controls also make it easy to establish an access model while ensuring that users aren't able to move laterally or access resources within your network unnecessarily.

### Administrative or network third-party access

If this access can't be established over a web browser and needs network-level controls, your external users may need to deploy the endpoint agent used for your Zero Trust deployment. For example, contractor users often have multiple endpoint agents installed on a single machine, which can introduce network routing complexity — or even conflicts, if some of these private networks overlap across different businesses.

To keep third-party access simple and manageable, consider the following:

1. **Can your Zero Trust vendor support multiple profiles for endpoint agent deployment?** Contractor users should have tightly-scoped routing controls to ensure limited access to your network and limited risks of conflict with other agents on the device.
2. **Is third-party access materially different from corporate user access?** If not, you can streamline your administrative management activities by building functional identities and integrations for third parties. New policies may not necessarily need to be created, as long as everything can be audited and differentiated.

### Access to customer environments (and vice versa)

In some cases, corporate users need secure (persistent or temporary) access to customer environments, or customers may need similar secure access to unique, hosted environments within your network. This process may include hosting software tenants for customers, running maintenance on customer-hosted software, or providing connectors for product functionality that ties into customers' internal networks.

For these use cases, the traditional recommended model has been a networking configuration like site-to-site VPNs and similar options. These can be scoped appropriately, but often result in overly broad connectivity between your corporate network and your customer network, and can introduce risk or overly-broad access capability.

In a Zero Trust security framework, this kind of access should be explicitly scoped in a least-privilege model. This can be accomplished by setting up identity-aware or service-aware site-to-site connectivity, or by using unidirectional connector models to provide secure access in either direction, which can be scoped to specific actions.

### Where does Cloudflare fit in?

Cloudflare can help provide scoped secure access for both web and network connectivity to your third-party users in a Zero Trust framework.

* **Cloudflare Access can integrate and use [multiple identity providers simultaneously](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/).** This can be scoped to a single application and a singular policy, and can have granular capabilities to 'force' some user access to authenticate in specific ways. There are also many third-party specific workflows — like [purpose justification](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/require-purpose-justification/) — that can ensure that user access is both easy for third parties, and documented and controllable for administrators.
* **Cloudflare Zero Trust can be deployed with flexible endpoint agent parameters and [logical groupings](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/device-profiles/) for contractor and third-party users.** If you have external users with internal access needs, they can be both tightly-scoped and limit potential conflict with other external systems.
* **[Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) can act as a unidirectional access model to provide corporate users access to scoped customer resources.** It is lightweight, easy to deploy, and can even be built into your deployment packages and deployed alongside the services you manage in customer environments.
* **WARP Connector can help you build secure, extensible networks relevant for each of your client controls.** This is particularly helpful when bidirectional (site-to-site) traffic flows are a necessity for the way that you engage with your customers, interact with their applications, or address other management concerns. WARP Connector has all of the same inline security policy application and auditability controls as the rest of your deployment, so you can maintain a Zero Trust security posture while achieving customer connectivity.

![How Cloudflare provides remote access for contractors, vendors, and customers](https://developers.cloudflare.com/_astro/zero-trust-design-guide-remote-access-for-contractors-vendors-and-customers.V8gJYmrW_WuaLX.svg) 

## Protecting against Internet threats (or, _is secure web gateway a part of Zero Trust?_)

Traditionally, the concept of Zero Trust access has been limited to user or machine access to internal or privileged resources. On a functional level, this requires replacing network extension, reducing over-permissioning, and minimizing the lateral movement and threat vectors typically introduced by VPN remote access connectivity. But for many businesses, their VPN didn't only proxy their private network traffic. It also managed their Internet traffic and allowed them to maintain a unified view of threats — typically, either through a module that sends DNS queries to a cloud provider, or by simply backhauling all user traffic to the corporate network to be sent through the corporate firewalls.

The security and complexity challenges introduced by this castle-and-moat model have forced many vendors to address the two primary functions a VPN serves. Now, it is common to hear secure web gateways (SWG) and Zero Trust access (ZTNA) discussed in the same sentence or as part of the same product.

Although this shift was driven by vendors and analysts rather than security researchers, it seems to have improved security manageability for customers while simplifying the buying and deployment process for startups. Namely, deploying a single agent to handle both your corporate and Internet traffic is a significant improvement over using multiple device agents for all sorts of security tooling.

### Long Live The New Perimeter

In the old world, your perimeter was denoted by your public egress IP address, and indicated that you were subject to a series of security controls before your traffic went out to the Internet. Maybe it was a firewall, IPS, IDS, or something else. For that reason, businesses began requiring a specific source IP for traffic before it could be 'trusted'; this was used with vendors, third parties, and SaaS applications. Traffic originating from the corporate network (with your corporate source IPs) was one of the biggest indicators of 'trust'. It's no longer that simple.

Today, it's likely that your business has no central 'perimeter' at all. It likely started in the cloud, ships out user endpoints either raw or with some pre-configured security control, and runs everything remotely and asynchronously. This model is highly impactful for your productivity and ability to scale. However, as your security organization grows and matures, there will be an inherent benefit to setting a baseline security 'posture' that will denote the new perimeter.

#### A perimeter-less model

In a world in which your Zero Trust provider and your SSO should be able to protect most of your private applications, networks, services, and SaaS applications, users should be more empowered than ever to work from anywhere — and your asynchronous, highly-effective style of work shouldn't need to be interrupted if you follow best practices. In other words, **your definition of a 'secure' endpoint becomes your new corporate perimeter.**

A defined secure endpoint with clear measurability is significantly better for security posture because, unlike a source IP address, it's both highly targeted and continually validated. In the old world, this would mean egressing through a firewall and being subject to security controls. In the new world, this typically means verifying encryption, interrogating posture on the device, and determining whether the traffic coming from the machine was inspected by a secure web gateway. It could even still include source IP address as a method of validation, but never as the primary control.

As you think about how you want to manage the usage of BYOD (and how you want to ensure your corporate data is being accessed securely), you just have to make a determination about what constitutes your secure endpoint strategy. Then, consider how you should interrogate requests to sensitive resources to ensure that they are compliant with this strategy. For instance, think about the steps users will need to take in order to access Workday (or another PII-heavy system). Before granting access, you may want to send their traffic through your secure web gateway and apply data loss prevention policies. Now ask yourself, what other steps do you need to take in order to enforce these requirements?
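The Workday example can be sketched as a posture gate: sensitive applications require the full endpoint baseline, while other traffic only needs to have passed the gateway. The signal names and app names here are illustrative, not a real vendor schema:

```python
# Hypothetical posture check: the "new perimeter" is a set of verifiable
# endpoint signals rather than a corporate source IP.
SENSITIVE_APPS = {"workday"}  # PII-heavy systems needing the full baseline

def meets_posture(signals: dict) -> bool:
    """Full baseline: encrypted disk, SWG-inspected traffic, patched OS."""
    return (
        signals.get("disk_encrypted", False)
        and signals.get("gateway_inspected", False)
        and signals.get("os_patched", False)
    )

def allow_request(app: str, signals: dict) -> bool:
    """Sensitive apps require the full posture baseline; everything else
    only needs inspected traffic. Source IP could be added as a secondary
    signal, but never as the primary gate."""
    if app in SENSITIVE_APPS:
        return meets_posture(signals)
    return signals.get("gateway_inspected", False)
```

The useful property is that each signal is re-evaluated per request, so a device that drifts out of compliance loses access to sensitive resources without any network change.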

Within this discussion, we are thinking about Internet security (e.g. secure web gateways, DNS filtering, traffic proxying, and so on) as a set of advanced security signals from which you can apply more accurate, granular Zero Trust policies for your sensitive resources. It's also a good practice to get started with DNS filtering as soon as possible, since deploying software and proxying traffic from your endpoints will only become a more complex process as your business and security needs grow. As you start to think about other advanced security controls, like HTTP filtering and data loss prevention, we recommend reading [Getting Started with TLS Decryption ↗](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/build-http-policies/tls-inspection/) to get a sense of the decisions to be made before decrypting traffic.

### Where does Cloudflare fit in?

In addition to providing Zero Trust security capabilities for internal applications, network remote access, and SaaS applications, Cloudflare also provides the following functionality:

* DNS filtering
* An L4 firewall
* A secure web gateway (SWG) — complete with application-awareness, TLS decryption, data loss prevention, CASB functionality, browser isolation, and the ability to adopt a dedicated egress IP structure directly from the Cloudflare network

All of our SWG functionality is controlled via policy that factors in user identity, device posture, and user risk, and is delivered from the same endpoint agent as your Zero Trust controls — using the same policy engines and policy enforcement opportunities.

Cloudflare allows you to functionally build a new perimeter by identifying, applying policies to, and securing the outbound traffic on your managed endpoint devices. You can achieve the same unified security control as the old castle-and-moat perimeter, while applying independent, granular security evaluation (but without backhauling any user traffic). Then, you can use that security evaluation to apply even stronger controls from your Zero Trust-protected applications, helping you distinguish between low, medium, and high risk users, make determinations about how to handle BYOD traffic, and more.

![How Cloudflare protects against Internet threats](https://developers.cloudflare.com/_astro/zero-trust-design-guide-protecting-against-internet-threats.C7veiXE5_23FcOW.svg) 

## Adopting and securing SaaS applications

The concept of SaaS security means a lot of things to a lot of people. For that reason, it's a somewhat controversial topic, especially as it relates to Zero Trust. SaaS services saw huge user population booms during the first wave of COVID, due in large part to a significant increase in remote work. Almost overnight, it was easier and more practical for users to connect to services that existed outside of corporate infrastructure than it was to connect to internal services.

Some make the argument that SaaS applications are either 1) inherently secure when you've integrated SSO, or 2) are the functional responsibility of the SaaS provider to secure. While these arguments address the way in which your SaaS investment is accessed and secured, they do not contextualize why companies use SaaS — which is typically for storing corporate information. The proliferation of 'places your sensitive data may live' will be an increasingly important factor in your SaaS security decisions.

The above statements all imply that you know what SaaS tooling your users engage with, but often that is not the case. First, we'll address 'sanctioned' SaaS adoption, and then we will discuss concepts related to 'unsanctioned' SaaS (also known as shadow IT).

### Sanctioned SaaS applications

Determining the security posture you require of your end users is an important first step before you build any security policy. If you have applications which contain significant amounts of corporate data, or data subject to compliance laws or other regulations, it may make sense to restrict those exclusively to devices that fit your aforementioned 'perimeter'.

The best way to accomplish this is to find an aggregator of your signal (like Cloudflare's Access for SaaS) that can ensure all of the individual pieces of your security policy are continuously being applied for user access. Can you accomplish all of this with a traditional SSO vendor? Maybe. Okta's FastPass, for example, makes a determination of machine identity by validating a certificate that is installed on local devices, then determining the source IP address of the request. In most cases, however, FastPass would not be able to tell you more about the security inspection events present in that user's traffic, or anything else about the health of the end-user device. To this point, it is worth noting that your SSO provider is only as useful as the amount of data it can consume to make a policy decision.

Deciding that machine certificates alone — or some other single signal — are enough to denote a corporate device is a reasonable choice at any stage of a business's security maturity; in fact, many businesses have yet to adopt device posture of any kind.

Another way to manage your sanctioned SaaS applications is to integrate with your Zero Trust vendor via API. Then, you can scan them for misconfiguration or the presence of unexpected sensitive data. This process is independent of traditional Zero Trust access controls, but is offered by most Zero Trust vendors and can surface ongoing necessary configuration changes for all of your SaaS tools in a single view.

By evaluating the presence of sensitive data in SaaS applications that you manage, you can start to develop a sense of policy priority. Put another way, it may change the way that you think about what should be accessible via BYOD vs. what should require authorized access from a managed endpoint. Or, conversely, how can you quantify the risk of BYOD access so that your users can be effectively guided?

### Unsanctioned SaaS applications (Shadow IT)

The security model significantly changes when you move from SaaS applications you do control (i.e. can integrate with SSO and other third-party tools) to applications you don't control. SaaS apps that fall into this category are often classified as 'unsanctioned' applications — sometimes, because they are managed by a secondary vendor that doesn't support SSO, or because they are services which haven't been explicitly approved by your IT organization for use. These unsanctioned apps are called shadow IT.

How do these apps proliferate within your environment? The logic is simple, especially with a startup. Users like to move quickly and may gravitate toward the most convenient method of getting their work across the finish line. Sometimes that can mean using tools that haven't been vetted or approved for use (or for potentially storing sensitive data).

Shadow IT is typically addressed as part of a general Internet security program, which sometimes falls within the same consideration set (or the same vendors) as a Zero Trust deployment. De-risking unsanctioned SaaS applications is almost always centered around visibility. The most important thing you can do — without having things like SSO or your CASB tool integrated with an application — is understand the breadth of shadow IT usage.

Documenting unsanctioned applications usually requires using a forward-proxy tool like a DNS filter, secure web gateway, or some email-specific tooling. These tools can provide insights into which users have engaged with unsanctioned SaaS apps, and potentially even how they engaged with them (did they upload/download files, how much bandwidth have they transferred, etc.).
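Conceptually, a shadow IT report is an aggregation over those proxy logs: for each destination not on your sanctioned list, count distinct users and data volume. The sketch below is illustrative — the log format, domain names, and sanctioned list are hypothetical, not a real gateway export:

```python
from collections import Counter

SANCTIONED = {"salesforce.com", "workday.com"}  # your approved SaaS tools

# Illustrative gateway log entries: (user, destination_domain, bytes_uploaded)
logs = [
    ("alice", "salesforce.com", 1_200),
    ("alice", "pastebin.com", 50_000),
    ("bob", "pastebin.com", 10),
    ("carol", "notion.so", 4_000),
]

def shadow_it_report(entries):
    """Summarize unsanctioned destinations: distinct users and uploaded bytes."""
    users: dict[str, set] = {}
    upload: Counter = Counter()
    for user, domain, sent in entries:
        if domain not in SANCTIONED:
            users.setdefault(domain, set()).add(user)
            upload[domain] += sent
    return {d: {"users": len(u), "bytes_uploaded": upload[d]} for d, u in users.items()}
```

A report like this directly supports the next paragraph's framework: destinations with many users become procurement candidates, while high-upload destinations become data-security priorities.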

By implementing policies and strategies to document SaaS usage, you can start to form a better understanding of how your sensitive data is stored, moved, or manipulated within SaaS tools. Some businesses limit the use of SaaS to explicitly-approved corporate tools, while others are more lenient. There's no wrong approach, but building an early framework for how to capture usage information can help you work backwards in the event that it becomes a pressing matter for your organization.

This framework can also give your IT organization direction on which tools to consider procurement cycles for. For example, if a critical mass of users already engages with a tool, it can sometimes make sense to get Enterprise capabilities for that tool to reduce the risk of shadow IT or allow your team to implement increased security features, sometimes without dramatically changing cost.

### Where does Cloudflare fit in?

Cloudflare can help set a foundation for visibility and management of your [shadow IT](https://developers.cloudflare.com/cloudflare-one/insights/analytics/shadow-it-discovery/) environment and subsequent discoveries. User traffic to the Internet can be audited and organized from the Cloudflare One Client and our [Secure Web Gateway (SWG)](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), helping you understand where your sensitive data moves outside of your corporate-accepted SaaS tenants.

This can then be an opportunity to further expand your Zero Trust strategy by ensuring those newly-discovered tools are either explicitly blocked or explicitly allowed, setting specific data security controls on them, or integrating them with your Zero Trust vendor (using something like [Access for SaaS](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/saas-apps/aws-sso-saas/) to apply security policies).

## Long-term management with APIs and Infrastructure as Code (IaC)

Many startups we speak to are ultimately concerned with the headcount and expertise required to manage security tooling that appears complex or overprovisioned for their use cases. Much of what they already do for development is managed through orchestration tools, Infrastructure as Code, and directly via API — but they often want to achieve a state of DevSecOps, where all Zero Trust (and other security tooling) projects can be built, managed, and maintained as code.

While this is somewhat of an emerging concept for traditional security tooling, it should still be a critical consideration as you evaluate potential vendors. Keep in mind that although concepts like Terraform are supported by a number of Zero Trust vendors, these vendors may not support (or publish) provider or API endpoints for every concept in the product, which can lead to duplication or division in management efforts.

If your goal as an organization is to manage your networking and security stacks as code, it is important to start that framework early in your Zero Trust journey. While there may be challenges to navigate, getting a head start on network development will pay dividends as your business and security needs become inevitably more complex and difficult to manage.

As you continue to evaluate vendor partners for Zero Trust or general security initiatives, we recommend that you ensure that they have well-documented and complete API endpoints for their entire product portfolio and management controls — as well as documentation for orchestration and Infrastructure as Code tools (like Terraform).

### Where does Cloudflare fit in?

Cloudflare is very passionate about Zero Trust security in the context of DevSecOps. We build API-first as a primary ethos for all our products, and make all relevant API endpoints available to customers on the first day of feature availability, along with our extensive [documentation ↗](https://developers.cloudflare.com/api/).
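As a minimal sketch of what API-first management looks like in practice, the snippet below builds an authenticated call against the Cloudflare v4 REST API. The account ID and token are placeholders, and `listAccessApps` is a hypothetical helper name; the endpoint path and response envelope follow the documented v4 conventions.

```typescript
// Build the URL and headers for an authenticated call to the Cloudflare v4
// REST API. The token and account ID used below are placeholders.
const API_BASE = "https://api.cloudflare.com/client/v4";

function buildApiRequest(path: string, token: string) {
  return {
    url: `${API_BASE}${path}`,
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
  };
}

// Example: list Zero Trust Access applications for an account.
async function listAccessApps(accountId: string, token: string) {
  const { url, headers } = buildApiRequest(`/accounts/${accountId}/access/apps`, token);
  const res = await fetch(url, { headers });
  // v4 responses wrap payloads in { success, errors, result }
  return res.json();
}
```

The same request shape is what tools like Terraform issue under the hood, which is why complete API coverage matters for managing a deployment as code.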

Separately, many of our customers manage their Cloudflare Zero Trust deployment without ever touching our dashboard; instead, they use Terraform or similar tools for their entire management plane. If this is the case for you, we have a comprehensive and complete [Terraform provider ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs) to enable you to accomplish Zero Trust as Code.

## Summary

In conclusion, making a few deliberate choices today about how your company approaches the basics of security and authentication will benefit your startup for years to come. The decisions you make now lay the foundation for a modern security infrastructure that will scale smoothly as your business grows. However you move forward, a few well-informed moves will ensure that your startup is built on sustainable, scalable Zero Trust security principles.

If you would like to discuss your Zero Trust requirements in greater detail and connect with one of our architects, visit [https://www.cloudflare.com/cloudflare-one/ ↗](https://www.cloudflare.com/cloudflare-one/) and request a consultation.

```json
{"@context":"https://schema.org","@type":"BreadcrumbList","itemListElement":[{"@type":"ListItem","position":1,"item":{"@id":"/directory/","name":"Directory"}},{"@type":"ListItem","position":2,"item":{"@id":"/reference-architecture/","name":"Reference Architecture"}},{"@type":"ListItem","position":3,"item":{"@id":"/reference-architecture/design-guides/","name":"Design Guides"}},{"@type":"ListItem","position":4,"item":{"@id":"/reference-architecture/design-guides/zero-trust-for-startups/","name":"Building zero trust architecture into your startup"}}]}
```

---

---
title: Content-based asset creation
description: AI systems combine text-generation and text-to-image models to create visual content from text. They generate prompts, moderate content, and produce images for various applications.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Content-based asset creation

**Last reviewed:**  about 2 years ago 

Combining text-generation models with text-to-image models can lead to powerful AI systems capable of generating visual content based on input prompts. This integration can be achieved through a collaborative framework where a text-generation model generates prompts for the text-to-image model based on input text.

Here's how the process can work:

* Input Text Processing: The input text is provided to the system, which can be anything from a simple sentence to multiple paragraphs. This text serves as the basis for generating visual content.
* Prompt Generation: The text-generation model generates prompts based on the input text. These prompts are specifically crafted to guide the text-to-image model in generating images that are contextually relevant to the input text. The prompts can include descriptions, keywords, or other cues to guide the image generation process.
* Content Moderation: Text-classification models can be employed to ensure that the generated assets comply with content policies.
* Text-to-Image Model: A text-to-image model takes the prompts generated by the text-generation model as input and produces corresponding images. The text-to-image model learns to translate textual descriptions into visual representations, aiming to capture the essence and context conveyed by the input text.

Such compositions of AI models can be employed to generate visual assets for marketing, publishing, presentations, and more.

## Asset generation

![Figure 1 asset generation](https://developers.cloudflare.com/_astro/ai-asset-generation.BN6tfVXY_1MIa7Q.svg "Figure 1: Content-based asset generation")

Figure 1: Content-based asset generation

1. **Client upload**: Send POST request with content to API endpoint.
2. **Prompt generation**: Generate prompt for later-stage text-to-image model by calling [Workers AI](https://developers.cloudflare.com/workers-ai/) [text generation models](https://developers.cloudflare.com/workers-ai/models/) with content as input.
3. **Safety check**: Check for compliance with safety guidelines by calling [Workers AI](https://developers.cloudflare.com/workers-ai/) [text classification models](https://developers.cloudflare.com/workers-ai/models/) with the previously generated prompt as input.
4. **Image generation**: Generate image by calling [Workers AI](https://developers.cloudflare.com/workers-ai/) [text-to-image models](https://developers.cloudflare.com/workers-ai/models/) with the previously generated prompt as input.
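Steps 2-4 above can be sketched as a single pipeline function. Here `run` stands in for a Workers AI binding (`env.AI.run` in a Worker); the model names and the safety-response shape (`{ safe: boolean }`) are assumptions for this sketch, not prescribed by the architecture.

```typescript
// Pipeline sketch: prompt generation -> safety check -> image generation.
type RunFn = (model: string, input: Record<string, unknown>) => Promise<any>;

async function generateAsset(content: string, run: RunFn) {
  // Step 2: prompt generation from the uploaded content.
  const prompt = await run("@cf/meta/llama-3.1-8b-instruct", {
    prompt: `Write a short text-to-image prompt for: ${content}`,
  });

  // Step 3: safety check on the generated prompt before using it.
  // The { safe: boolean } response shape is an assumption for this sketch.
  const verdict = await run("@hf/thebloke/llamaguard-7b-awq", {
    prompt: prompt.response,
  });
  if (!verdict.safe) {
    throw new Error("Prompt rejected by safety check");
  }

  // Step 4: feed the approved prompt to a text-to-image model.
  return run("@cf/stabilityai/stable-diffusion-xl-base-1.0", {
    prompt: prompt.response,
  });
}
```

Injecting `run` rather than hard-coding a binding keeps the pipeline testable outside of a Worker.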

## Related resources

* [Community project: content-based asset creation demo ↗](https://auto-asset.pages.dev/)
* [Workers AI: Text generation models](https://developers.cloudflare.com/workers-ai/models/)
* [Workers AI: Text-to-image models](https://developers.cloudflare.com/workers-ai/models/)
* [Workers AI: llamaguard-7b-awq](https://developers.cloudflare.com/workers-ai/models/llamaguard-7b-awq/)


---

---
title: Composable AI architecture
description: The architecture diagram illustrates how AI applications can be built end-to-end on Cloudflare, or single services can be integrated with external infrastructure and services.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Composable AI architecture

**Last reviewed:**  almost 2 years ago 

## Introduction

The AI market is witnessing a rapid evolution, propelled by the swift pace of technological advancement. With breakthroughs occurring frequently, staying up-to-date with the latest innovations is imperative for organizations aiming to remain competitive. Short iteration cycles and agility have become indispensable in this landscape, allowing businesses to swiftly adopt and leverage the newest advancements in AI technology.

In this dynamic environment, the concept of composability, data portability, and standard APIs emerges as crucial factors in navigating the complexities of the AI ecosystem:

* Composability refers to the ability to assemble various AI components into tailored solutions, enabling organizations to mix and match different technologies to suit their specific needs.
* Data portability plays a pivotal role in facilitating seamless data exchange between different AI systems and platforms, ensuring interoperability and preventing data silos.
* Standard APIs for interoperability serve as the linchpin for integrating diverse AI components, enabling seamless communication and collaboration between disparate systems.

The significance of composability, data portability, and standard APIs becomes particularly pronounced in mitigating vendor lock-in and fostering flexibility. By embracing these principles, organizations can sidestep dependency on single vendors and instead opt for a best-in-class approach, selecting the most suitable solutions for their unique requirements. Overall, these principles pave the way for a more agile, adaptable, and future-proof AI ecosystem.

Cloudflare's AI platform has been designed with these principles in mind. The architecture diagram illustrates how AI applications can be built end-to-end on Cloudflare, or single services can be integrated with external infrastructure and services.

## Composable AI infrastructure

![Figure 1: Composable AI architecture](https://developers.cloudflare.com/_astro/ai-composable.CBIbt7we_Z1j2Kgc.svg "Figure 1: Composable AI architecture")

Figure 1: Composable AI architecture

1. **Compute**: The compute layer is the core of the application. All business logic, as well as use of other components, is defined here. The compute layer interacts with other services such as inference services, vector search, databases and data storage. Serverless solutions such as [Cloudflare Workers](https://developers.cloudflare.com/workers/) offer fast iteration and automatic scaling, which allows developers to focus on the use case instead of infrastructure management. Important for composability is support for standard interfaces such as HTTP and TCP, which the Workers runtime supports via the [fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) and [connect() API](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) respectively.
2. **Inference**: AI inference is responsible for the AI-capabilities of the application. Operational models vary between self-hosting models or consuming Inference-as-a-service providers such as [Workers AI](https://developers.cloudflare.com/workers-ai/). In the latter case, [REST APIs](https://developers.cloudflare.com/api/resources/ai/methods/run/) make interacting with inference services from any service/client easy to implement. Using platform-specific integrations such as [Bindings](https://developers.cloudflare.com/workers-ai/configuration/bindings/) for interaction between Workers and other services enable simplified development as complexity such as authentication is abstracted away.
3. **Vector Search**: Certain use cases such as [RAG](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/) leverage vector search for similarity matching. Operational models vary between self-hosting databases or consuming vector-specific database-as-a-service (DBaaS) providers such as [Vectorize](https://developers.cloudflare.com/vectorize/). In the latter case, [REST APIs](https://developers.cloudflare.com/api/resources/vectorize/subresources/indexes/methods/list/) make interacting with it from any service/client easy to implement. Using platform-specific integrations such as [Bindings](https://developers.cloudflare.com/vectorize/get-started/embeddings/#3-bind-your-worker-to-your-index) for interaction between Workers and other services enable simplified development as complexity such as authentication is abstracted away.
4. **Data & Storage**: Databases and data storage add state to AI applications. User management, session storage and persisting data are common requirements for AI applications. Depending on the use case, different solutions are required, such as relational databases or object storage. A variety of solutions for self-hosted or managed services exist. On Cloudflare, this could be for instance [D1](https://developers.cloudflare.com/d1/) and [R2](https://developers.cloudflare.com/r2/). REST APIs make interacting with these services from any service/client easy to implement. Using platform-specific integrations such as Bindings for interaction between Workers and data and database services enables simplified development, as complexity such as authentication is abstracted away.
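To illustrate the standard-interface point, the sketch below calls Workers AI inference over plain HTTPS + JSON, so the same call works from any HTTP client, inside or outside Cloudflare. The account ID, API token, and model name are placeholders.

```typescript
// Composability via standard interfaces: inference over the documented
// REST endpoint /accounts/{account_id}/ai/run/{model}.
const ACCOUNTS_BASE = "https://api.cloudflare.com/client/v4/accounts";

function buildInferenceUrl(accountId: string, model: string): string {
  return `${ACCOUNTS_BASE}/${accountId}/ai/run/${model}`;
}

async function runInference(accountId: string, token: string, model: string, prompt: string) {
  const res = await fetch(buildInferenceUrl(accountId, model), {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: JSON.stringify({ prompt }),
  });
  return res.json();
}
```

Inside a Worker, the equivalent call via a Binding drops the URL and token entirely, which is the simplification the list above describes.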

## Related resources

* [Workers: Serverless compute](https://developers.cloudflare.com/workers/)
* [Workers AI: Serverless AI inference](https://developers.cloudflare.com/workers-ai/)
* [Vectorize: Serverless Vector database](https://developers.cloudflare.com/vectorize/)
* [D1: Serverless SQLite database](https://developers.cloudflare.com/d1/)
* [R2: Object storage](https://developers.cloudflare.com/r2/)


---

---
title: Multi-vendor AI observability and control
description: By shifting features such as rate limiting, caching, and error handling to the proxy layer, organizations can apply unified configurations across services and inference service providers.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Multi-vendor AI observability and control

**Last reviewed:**  almost 2 years ago 

## Introduction

The AI landscape is rapidly evolving with new models, services, and applications emerging daily. Many developers and organizations seek to enhance agility by opting for inference-as-a-service solutions like [Workers AI](https://developers.cloudflare.com/workers-ai/), rather than developing or managing models themselves.

Inference-as-a-Service is a cloud-based model that allows users to deploy and execute AI models without managing underlying infrastructure. The platform handles all aspects of model serving, including scaling resources based on demand, often supporting both real-time and batch inference. Users can send input data to the model via API calls, with the service provider managing servers, scaling, and maintenance tasks. Typically operating on a pay-as-you-go model, inference services simplify model deployment and scaling, enabling organizations to leverage AI capabilities without infrastructure complexities.

As this field evolves rapidly, developers and organizations face several challenges:

* Fragmentation: Many inference service providers offer only a limited range of models and features. Different use cases may require multiple vendors, leading to fragmentation.
* Availability: With increasing demand and fast-paced technological advancements, inference service providers struggle to maintain high API availability.
* Lack of observability: Providers often offer limited analytics and logging capabilities, which vary across vendors. Gaining a unified view of AI usage proves challenging.
* Lack of security control: Organizations encounter difficulties in maintaining adequate security measures.
* Lack of cost control: Understanding usage insights can be challenging, and the absence of custom rate limits poses risks in public-facing AI use cases.

Using a forward proxy can mitigate these challenges. Positioned between the service making inference requests and the inference service platform, it serves as a single point for observability and control. By shifting features such as rate limiting, caching, and error handling to the proxy layer, organizations can apply unified configurations across services and inference service providers.

## AI forward proxy setup

The following architecture illustrates the setup of [AI Gateway](https://developers.cloudflare.com/ai-gateway/) as a forward proxy between a service and one or multiple AI inference providers, such as [Workers AI](https://developers.cloudflare.com/workers-ai/).

![Figure 1: Multi-vendor AI architecture](https://developers.cloudflare.com/_astro/ai-multi-vendor-observability-control.DprqSV76_MTyHF.svg "Multi-vendor AI architecture")

Multi-vendor AI architecture

1. **Inference request**: Send POST request to your AI gateway.
2. **Request proxying**: Forward `POST` request to AI Inference provider or serve response from [cache, if enabled and available](https://developers.cloudflare.com/ai-gateway/features/caching). During this process, both [analytics](https://developers.cloudflare.com/ai-gateway/observability/analytics/) and [logs](https://developers.cloudflare.com/ai-gateway/observability/logging/) are collected. Additionally, controls such as Rate Limiting are enforced.
3. **Error handling**: In case of errors, retry request or fallback to other inference provider, depending on configuration.
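From the calling service's perspective, adopting the proxy is mostly a base-URL change. The sketch below routes an OpenAI-style chat completion through an AI Gateway endpoint; the account ID and gateway name are placeholders.

```typescript
// AI Gateway endpoints follow the documented pattern:
// https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway}/{provider}/...
const GATEWAY_BASE = "https://gateway.ai.cloudflare.com/v1";

function gatewayUrl(accountId: string, gateway: string, provider: string, path: string): string {
  return `${GATEWAY_BASE}/${accountId}/${gateway}/${provider}/${path}`;
}

// The payload and API key are passed through unchanged; caching, analytics,
// logging, and rate limiting happen at the proxy layer.
async function chatViaGateway(accountId: string, gateway: string, apiKey: string, payload: unknown) {
  const res = await fetch(gatewayUrl(accountId, gateway, "openai", "chat/completions"), {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

Because the provider segment is just a path component, switching or adding vendors does not require changes to the calling service beyond the URL.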

## Related resources

* [AI Gateway: Get started](https://developers.cloudflare.com/ai-gateway/get-started/)
* [AI Gateway: Supported Providers](https://developers.cloudflare.com/ai-gateway/usage/providers/)


---

---
title: Retrieval Augmented Generation (RAG)
description: RAG combines retrieval with generative models for better text. It uses external knowledge to create factual, relevant responses, improving coherence and accuracy in NLP tasks like chatbots.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Retrieval Augmented Generation (RAG)

**Last reviewed:**  about 2 years ago 

Retrieval-Augmented Generation (RAG) is an innovative approach in natural language processing that integrates retrieval mechanisms with generative models to enhance text generation.

By incorporating external knowledge from pre-existing sources, RAG addresses the challenge of generating contextually relevant and informative text. This integration enables RAG to overcome the limitations of traditional generative models by ensuring that the generated text is grounded in factual information and context. RAG aims to solve the problem of information overload by efficiently retrieving and incorporating only the most relevant information into the generated text, leading to improved coherence and accuracy. Overall, RAG represents a significant advancement in NLP, offering a more robust and contextually aware approach to text generation.

Applications of this technique include, for instance, customer service chatbots that use a knowledge base to answer support requests.

In the context of Retrieval-Augmented Generation (RAG), knowledge seeding involves incorporating external information from pre-existing sources into the generative process, while querying refers to the mechanism of retrieving relevant knowledge from these sources to inform the generation of coherent and contextually accurate text. Both are shown below.

Looking for a managed option?

[AI Search](https://developers.cloudflare.com/ai-search/) offers a fully managed way to build RAG pipelines on Cloudflare, handling ingestion, indexing, and querying out of the box. [Get started with AI Search](https://developers.cloudflare.com/ai-search/get-started/).

## Knowledge Seeding

![Figure 1: Knowledge seeding](https://developers.cloudflare.com/_astro/rag-architecture-seeding.BVBY5k5z_1MIa7Q.svg "Figure 1: Knowledge seeding")

Figure 1: Knowledge seeding

1. **Client upload**: Send POST request with documents to API endpoint.
2. **Input processing**: Process incoming request using [Workers](https://developers.cloudflare.com/workers/) and send messages to [Queues](https://developers.cloudflare.com/queues/) to add them to the processing backlog.
3. **Batch processing**: Use [Queues](https://developers.cloudflare.com/queues/) to trigger a [consumer](https://developers.cloudflare.com/queues/reference/how-queues-works/#consumers) that processes input documents in batches to prevent downstream overload.
4. **Embedding generation**: Generate embedding vectors by calling [Workers AI](https://developers.cloudflare.com/workers-ai/) [text embedding models](https://developers.cloudflare.com/workers-ai/models/) for the documents.
5. **Vector storage**: Insert the embedding vectors to [Vectorize](https://developers.cloudflare.com/vectorize/).
6. **Document storage**: Insert documents to [D1](https://developers.cloudflare.com/d1/) for persistent storage.
7. **Ack/Retry mechanism**: Signal success/error by using the [Queues Runtime API](https://developers.cloudflare.com/queues/configuration/javascript-apis/#message) in the consumer for each document. [Queues](https://developers.cloudflare.com/queues/) will schedule retries, if needed.
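The consumer side of steps 3-7 can be sketched as follows. Here `deps`, `ack`, and `retry` stand in for the Workers AI, Vectorize, D1, and Queues bindings; their exact shapes are assumptions for this sketch.

```typescript
// Queue-consumer sketch: embed, store, and acknowledge each document.
interface SeedDeps {
  embed: (text: string) => Promise<number[]>;                    // step 4
  insertVector: (id: string, values: number[]) => Promise<void>; // step 5
  insertDocument: (id: string, text: string) => Promise<void>;   // step 6
}

async function seedBatch(
  docs: { id: string; text: string }[],
  deps: SeedDeps,
  ack: (id: string) => void,
  retry: (id: string) => void,
) {
  for (const doc of docs) {
    try {
      const vector = await deps.embed(doc.text);
      await deps.insertVector(doc.id, vector);
      await deps.insertDocument(doc.id, doc.text);
      ack(doc.id);   // step 7: signal success so the message is not redelivered
    } catch {
      retry(doc.id); // step 7: let Queues schedule a retry for this message
    }
  }
}
```

Acknowledging per document rather than per batch means one failing document does not force reprocessing of the whole batch.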

## Knowledge Queries

![Figure 2: Knowledge queries](https://developers.cloudflare.com/_astro/rag-architecture-query.CtBKQkxk_1MIa7Q.svg "Figure 2: Knowledge queries")

Figure 2: Knowledge queries

1. **Client query**: Send GET request with query to API endpoint.
2. **Embedding generation**: Generate embedding vectors by calling [Workers AI](https://developers.cloudflare.com/workers-ai/) [text embedding models](https://developers.cloudflare.com/workers-ai/models/) for the incoming query.
3. **Vector search**: Query [Vectorize](https://developers.cloudflare.com/vectorize/) using the vector representation of the query to retrieve related vectors.
4. **Document lookup**: Retrieve related documents from [D1](https://developers.cloudflare.com/d1/) based on search results from [Vectorize](https://developers.cloudflare.com/vectorize/).
5. **Text generation**: Pass both the original query and the retrieved documents as context to [Workers AI](https://developers.cloudflare.com/workers-ai/) [text generation models](https://developers.cloudflare.com/workers-ai/models/) to generate a response.
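The query path above can be sketched as one function. Again `deps` stands in for the Workers AI, Vectorize, and D1 bindings, with shapes assumed for this sketch; the prompt template is illustrative.

```typescript
// RAG query sketch: embed -> vector match -> document lookup -> generate.
interface QueryDeps {
  embed: (text: string) => Promise<number[]>;                   // step 2
  match: (vector: number[], topK: number) => Promise<string[]>; // step 3: document IDs
  lookup: (ids: string[]) => Promise<string[]>;                 // step 4: document text
  generate: (prompt: string) => Promise<string>;                // step 5
}

async function answerQuery(query: string, deps: QueryDeps): Promise<string> {
  const vector = await deps.embed(query);
  const ids = await deps.match(vector, 3);
  const documents = await deps.lookup(ids);
  // Ground the answer in the retrieved context rather than the model's
  // parametric knowledge alone.
  const prompt = `Context:\n${documents.join("\n")}\n\nQuestion: ${query}`;
  return deps.generate(prompt);
}
```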

## Related resources

* [Tutorial: Build a RAG AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/)
* [Get started with AI Search](https://developers.cloudflare.com/ai-search/get-started/)
* [Workers AI: Text embedding models](https://developers.cloudflare.com/workers-ai/models/)
* [Workers AI: Text generation models](https://developers.cloudflare.com/workers-ai/models/)


---

---
title: AI Vibe Coding Platform
description: Build an AI-powered coding platform on Cloudflare, from AI Gateway for model access to secure sandboxes for untrusted code and Workers for Platforms for scalable deployment.
image: https://developers.cloudflare.com/core-services-preview.png
---


# AI Vibe Coding Platform

## Introduction

An AI-powered coding platform (sometimes referred to as a [“vibe coding” ↗](https://www.cloudflare.com/learning/ai/ai-vibe-coding/) platform) enables users to build applications by describing what they want in natural language. These platforms allow anyone to build applications by handling everything from code generation, testing and debugging, to project deployment.

Building the infrastructure for such a platform introduces a unique set of challenges. AI-generated code is inherently untrusted and must be executed in a secure, isolated sandbox to prevent abuse and ensure isolation between users. To support rapid, conversational development, the platform must provide near-instantaneous feedback loops with live previews and real-time debugging. Finally, the platform needs a way to deploy and host the thousands or millions of applications its users will create, without running up the costs of traditional server infrastructure.

Cloudflare has all the components required to build one of these platforms — from middleware that connects to AI models, to secure sandboxes for code execution, and a serverless deployment platform that scales to millions of applications.

![Figure 1: AI Vibe Coding Platform on Cloudflare](https://developers.cloudflare.com/_astro/cf-vibe-plat.hdatWAqi_1eJrFI.svg) 

To get started with a reference implementation of an AI vibe coding platform immediately, deploy this [starter template ↗](https://github.com/cloudflare/vibesdk) to your Cloudflare account:

[![Deploy to Cloudflare Workers](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/vibesdk)

## Core Architecture Components

![Figure 2: Vibe Hosting Overview](https://developers.cloudflare.com/_astro/vibe-hosting-overview.ZFFcirO4_2mDVq2.svg) 

To build an AI-powered coding platform, you will need these key components:

* **AI for Code Generation:** Integrate with AI models to interpret user prompts and automatically generate code.
* **Secure Execution Sandbox:** Provide a secure, isolated environment where users can instantly run and test untrusted, AI-generated code.
* **Scalable Application Deployment:** Deploy and host AI-generated applications at scale.
* **Analytics & Observability:** Collect logs and metrics to monitor AI usage, application performance, and platform costs.

## AI Integration and Code generation

#### Connecting to AI Providers for Code Generation

The first step is processing a user's natural language prompt and securely routing it to an AI model to generate code.

When using various AI providers, you need visibility into costs, the ability to cache responses to reduce expenses, and failover capabilities to ensure reliability. [AI Gateway](https://developers.cloudflare.com/ai-gateway/) acts as a unified control point between your platform and AI providers to deliver these capabilities, enabling:

* A [unified access point](https://developers.cloudflare.com/ai-gateway/usage/chat-completion/) to route requests across LLM providers, allowing you to use [models](https://developers.cloudflare.com/workers-ai/models/) from a range of providers (OpenAI, Anthropic, Google, and others)
* [Caching](https://developers.cloudflare.com/ai-gateway/features/caching/) for popular responses, so when someone asks to "build a todo list app", the gateway can serve a cached response instead of going to the provider (saving inference costs)
* [Observability](https://developers.cloudflare.com/ai-gateway/observability/analytics/) into the requests, tokens used, and response times across all providers in one place
* [Cost tracking](https://developers.cloudflare.com/ai-gateway/observability/costs/) across AI providers

#### Making your AI better at building on Cloudflare

If you’re building an AI code generator and want it to be more knowledgeable about how to best build applications on Cloudflare, there are two tools we recommend using:

* **[Cloudflare Workers Prompt](https://developers.cloudflare.com/workers/get-started/prompting/#build-workers-using-a-prompt):** Structured prompt with examples that teach AI models about Cloudflare's APIs, configuration patterns, and best practices. Include these in your AI system for higher quality code output.
* **[Cloudflare’s Documentation MCP server ↗](https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/docs-vectorize):** If your AI tool supports [Model Context Protocol (MCP)](https://developers.cloudflare.com/agents/model-context-protocol/), connect it to Cloudflare's documentation MCP server to get up-to-date knowledge about Cloudflare’s platform.

## Development environment for executing AI-generated code

Both [Sandboxes](https://developers.cloudflare.com/changelog/2025-06-24-announcing-sandboxes/) and [Containers](https://developers.cloudflare.com/containers/) provide secure, isolated environments for executing untrusted AI-generated code. They offer:

* **Strong isolation and sandboxing controls** to prevent malicious or buggy code from affecting other instances
* **Fast startup times** to enable rapid iteration cycles with real-time feedback
* **Real-time output streaming** of logs and results for live progress updates and debugging
* **Preview URLs** to allow users to test applications during development
* **Global edge deployment** on Cloudflare's network for low-latency execution worldwide

**Sandboxes provide a fully-managed solution** that works out-of-the-box, with [pre-built APIs](https://developers.cloudflare.com/changelog/2025-08-05-sandbox-sdk-major-update/) for code execution, output formatting, and developer tools, making them ideal for most AI code execution use cases.

![Figure 3: Vibe Code Development - Sandbox SDK](https://developers.cloudflare.com/_astro/ai-platform-sandbox.DziHb_r3_ZCHeQ.svg) 

**Containers offer complete runtime control** through custom Docker images, allowing you to run any language or framework with up to 4 GB of RAM and a dedicated vCPU. They are the better fit when you need custom runtimes or resource-intensive workloads.

![Figure 4: Isolated Containers](https://developers.cloudflare.com/_astro/BYO-sandbox.cc63egyA_Zx7iBh.svg) 

## Deploying applications to production

When building an AI-powered coding platform, you need to be able to deploy and host the thousands to millions of applications that the platform will generate.

[Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) provides this infrastructure by enabling you to deploy unlimited applications, with each application running in its own isolated Worker instance, preventing one application from impacting others.

**With Workers for Platforms, you get:**

* **Isolation and multitenancy** — every application runs in its own dedicated Worker, a secure and isolated sandbox environment
* **Egress control and usage limits** — Configure firewall policies for all outgoing requests through an [outbound worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/) and [custom usage limits](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/custom-limits/) to prevent abuse
* **Dedicated resources per project:** Attach a KV store or database to each application, enabling more powerful functionality while ensuring [resources](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/bindings/) are only accessible by the application they’re attached to.
* **Logging & Observability** across the platform to gather insights, monitor performance, and troubleshoot issues across applications

![Figure 5: Complete Vibe Coding Platform](https://developers.cloudflare.com/_astro/vibe-hosting-analytics.udVLDrQc_wI25g.svg) 
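The multitenant routing described above is handled by a "dispatch Worker" in front of the dispatch namespace. Below is a minimal sketch: the binding name `DISPATCHER` and the `*.example-platform.dev` host scheme are hypothetical placeholders, while the `env.<binding>.get(name).fetch(request)` call follows the documented Workers for Platforms pattern.

```javascript
// Sketch of a dispatch Worker for Workers for Platforms.
// DISPATCHER and *.example-platform.dev are hypothetical placeholder names.

// Pure routing helper: "my-app.example-platform.dev" -> "my-app".
function scriptNameFromHost(host) {
  const parts = host.split(".");
  return parts.length >= 3 ? parts[0] : null;
}

// In the dispatch Worker, env.DISPATCHER is a dispatch namespace binding;
// .get(name) returns a stub whose fetch() forwards to that user Worker:
//
// export default {
//   async fetch(request, env) {
//     const name = scriptNameFromHost(new URL(request.url).hostname);
//     if (!name) return new Response("Not found", { status: 404 });
//     return env.DISPATCHER.get(name).fetch(request);
//   },
// };
```

Because each generated application is its own user Worker, a crash or abuse in one app never touches another tenant's isolate.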

## Conclusion

Cloudflare provides the complete set of services needed to build AI-powered platforms that run, test, and deploy untrusted code at scale.

Cloudflare offers a template AI vibe coding platform that you can deploy, so you can start from a complete example covering everything from code generation and sandboxed development with preview environments to deploying and hosting the generated applications at scale on Workers for Platforms.

[![Deploy to Cloudflare Workers](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/vibesdk)


---

---
title: Automatic captioning for video uploads
description: By integrating automatic speech recognition technology into video platforms, content creators, publishers, and distributors can reach a broader audience, including individuals with hearing impairments or those who prefer to consume content in different languages.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Automatic captioning for video uploads

**Last reviewed:**  about 2 years ago 

## Introduction

Automatic Speech Recognition (ASR) models have revolutionized the accessibility of video content by enabling the generation of subtitles and translations. These models utilize advanced algorithms to transcribe spoken words into text with high accuracy. By integrating ASR technology into video platforms, content creators, publishers, and distributors can reach a broader audience, including individuals with hearing impairments or those who prefer to consume content in different languages.

The process begins with capturing the audio from the video source, which is then fed into the ASR model. This model analyzes the audio waveform and converts it into a textual representation, capturing the spoken content in the form of subtitles. Furthermore, you can also use ASR models for language translation, enabling the creation of multilingual subtitles. Once the subtitles are generated, they can be displayed alongside the video, providing a synchronized text representation of the spoken content.

## Automatic captioning on upload

![Figure 1: Automatic captioning on upload](https://developers.cloudflare.com/_astro/ai-auto-caption-architecture-diagram.CyBpgQKS_1MIa7Q.svg "Figure 1:  Automatic captioning on upload")

Figure 1: Automatic captioning on upload

1. **Client upload**: Send a POST request with both video and audio to the API endpoint.
2. **Audio transcription**: Generate timestamped transcriptions by calling a [Workers AI](https://developers.cloudflare.com/workers-ai/) [automatic speech recognition (ASR) model](https://developers.cloudflare.com/workers-ai/models/) with the audio as input. Use [Workers](https://developers.cloudflare.com/workers/) to convert the output to a supported subtitle format.
3. **Store subtitles**: Store the subtitle file(s) on [R2](https://developers.cloudflare.com/r2/).
4. **Store video**: Store the video files on [R2](https://developers.cloudflare.com/r2/).
5. **Client request**: Send GET requests for video and subtitle(s) to origin. Use global [Cache](https://developers.cloudflare.com/cache/) to increase performance.
6. **Origin request**: Fetch file(s) from [R2](https://developers.cloudflare.com/r2/) on cache `MISS` by using [Public Buckets](https://developers.cloudflare.com/r2/buckets/public-buckets/).
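Step 2 above, converting timestamped transcription output into a subtitle format, can be sketched as a small Worker-side helper. This is a hand-written sketch that assumes the ASR model returns segments shaped like `{ text, start, end }` with times in seconds; the actual model output schema may differ.

```javascript
// Convert timestamped transcription segments into WebVTT.
// Assumes segments like { text: "Hello", start: 0, end: 1.5 } (seconds).
function toTimestamp(seconds) {
  const h = String(Math.floor(seconds / 3600)).padStart(2, "0");
  const m = String(Math.floor((seconds % 3600) / 60)).padStart(2, "0");
  const s = (seconds % 60).toFixed(3).padStart(6, "0"); // e.g. "01.500"
  return `${h}:${m}:${s}`;
}

function toWebVTT(segments) {
  const cues = segments.map(
    (seg, i) =>
      `${i + 1}\n${toTimestamp(seg.start)} --> ${toTimestamp(seg.end)}\n${seg.text}`
  );
  return ["WEBVTT", ...cues].join("\n\n") + "\n";
}
```

The resulting `.vtt` file can then be written to R2 next to the video, ready to be referenced from a `<track>` element in the player.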

## Related resources

* [Community project: automatic captioning demo ↗](https://auto-caption.pages.dev/)
* [Workers AI: Automatic speech recognition (ASR) models](https://developers.cloudflare.com/workers-ai/models/)
* [R2: Object storage for all your data](https://developers.cloudflare.com/r2/)


---

---
title: Ingesting BigQuery Data into Workers AI
description: You can connect a Cloudflare Worker to get data from Google BigQuery and pass it to Workers AI, to run AI Models, powered by serverless GPUs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Ingesting BigQuery Data into Workers AI

**Last reviewed:**  over 1 year ago 

You can connect a Cloudflare Worker to Google BigQuery and pass the retrieved data to Workers AI to run AI models on serverless GPUs. This allows you to enrich data with AI-generated responses, such as scoring the sentiment of some text or generating tags for an article. This document describes a simple way to get started if you want to try Workers AI and see how the [different AI models](https://developers.cloudflare.com/workers-ai/models/) perform with your data hosted in BigQuery.

## User-based approach

This version of the integration is aimed at workflows that require interaction with users to fetch data or generate ad-hoc reports.

![Figure 1: Ingesting Google BigQuery Data into Workers AI \(user-based\)](https://developers.cloudflare.com/_astro/user-based-architecture.C4nsq5nK_ZsDllv.svg "Figure 1: Ingesting Google BigQuery Data into Workers AI (user-based)")

Figure 1: Ingesting Google BigQuery Data into Workers AI (user-based)

1. A user makes a request to a [Worker ↗](https://workers.cloudflare.com/) endpoint, which can optionally sit behind [Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) to authenticate users.
2. The Worker fetches [securely stored](https://developers.cloudflare.com/workers/configuration/secrets/) Google Cloud Platform service account information, such as the service key, and generates a JSON Web Token to issue an authenticated API request to BigQuery.
3. The Worker receives the data from BigQuery and [transforms it into a format](https://developers.cloudflare.com/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/#6-format-results-from-the-query) that is easier to work with when calling Workers AI.
4. Using its [native integration](https://developers.cloudflare.com/workers-ai/configuration/bindings/) with Workers AI, the Worker passes the data from BigQuery to one of Cloudflare's hosted AI models.
5. The original data retrieved from BigQuery, alongside the AI-generated information, is returned to the user as a response to the request initiated in step 1.
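Step 3 above reshapes BigQuery's REST response before prompting the model. BigQuery's `jobs.query` API returns each row as `{ f: [{ v: ... }] }` cells alongside a `schema.fields` list of column names; a sketch of flattening that into plain objects (the helper name is ours, not from the tutorial):

```javascript
// Flatten a BigQuery jobs.query response into an array of plain objects.
// BigQuery returns each row as { f: [{ v: value }, ...] } and the column
// names in response.schema.fields, in matching order.
function flattenBigQueryRows(response) {
  const names = response.schema.fields.map((field) => field.name);
  return (response.rows ?? []).map((row) =>
    Object.fromEntries(row.f.map((cell, i) => [names[i], cell.v]))
  );
}
```

Plain `{ column: value }` objects are much easier to iterate over when building prompts for Workers AI than BigQuery's positional cell format.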

## Cron-triggered approach

For periodic or longer workflows, you may opt for a batch approach. This diagram also explores more products where you can use the data ingested from BigQuery. It relies on [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/), which are built into the Developer Platform and available for free when using Workers to schedule initialization of workloads.

![Figure 2: Ingesting Google BigQuery Data into Workers AI \(cron-triggered\)](https://developers.cloudflare.com/_astro/scheduled-based-architecture.DkGnVQUK_RrEDE.svg "Figure 2: Ingesting Google BigQuery Data into Workers AI (cron-triggered)")

Figure 2: Ingesting Google BigQuery Data into Workers AI (cron-triggered)

1. [A Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) invokes the Worker without any user interaction.
2. The Worker fetches [securely stored](https://developers.cloudflare.com/workers/configuration/secrets/) Google Cloud Platform service account information, such as the service key, and generates a JSON Web Token to issue an authenticated API request to BigQuery.
3. The Worker receives the data from BigQuery and [transforms it into a format](https://developers.cloudflare.com/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/#6-format-results-from-the-query) that is easier to work with when calling Workers AI.
4. Using its [native integration](https://developers.cloudflare.com/workers-ai/configuration/bindings/) with Workers AI, the Worker passes the BigQuery data to an AI model to generate content related to it.
5. Optionally, you can store the BigQuery data and the AI-generated data in a variety of different Cloudflare services.  
   * Into [D1](https://developers.cloudflare.com/d1/), a SQL database.  
   * If in step four you used Workers AI to generate embeddings, you can store them in [Vectorize](https://developers.cloudflare.com/vectorize/). To learn more about this type of solution, please consider reviewing the reference architecture diagram on [Retrieval Augmented Generation](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/).  
   * To [Workers KV](https://developers.cloudflare.com/kv/) if the output of your data will be stored and consumed in a key/value fashion.  
   * If you prefer to save the data fetched from BigQuery and Workers AI into objects (such as images, files, JSONs), you can use [R2](https://developers.cloudflare.com/r2/), our egress-free object storage to do so.
6. You can set up an integration so a system or a user gets notified whenever a new result is available or if an error occurs. It's also worth mentioning that Workers by themselves can already provide additional [observability](https://developers.cloudflare.com/workers/observability/).  
   * Sending an email with all the data retrieved and generated in the previous step is possible using [Email Routing](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/).  
   * Since Workers allows you to issue HTTP requests, you can notify a webhook or API endpoint once the process finishes or if there's an error.

## Related resources

* [Tutorial: Using BigQuery with Workers AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/)
* [Workers AI: Get Started](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/)
* [Workers: Secrets](https://developers.cloudflare.com/workers/configuration/secrets/)
* [Workers: Cron Triggers](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/)
* [Email Routing](https://developers.cloudflare.com/email-routing/email-workers/send-email-workers/)
* [Create a GCP service account ↗](https://cloud.google.com/iam/docs/service-accounts-create#iam-service-accounts-create-console)
* [Create a GCP service account key ↗](https://cloud.google.com/iam/docs/keys-create-delete#iam-service-account-keys-create-console)
* [Retrieval Augmented Generation (RAG) Reference Architecture](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/)
* [Vectorize](https://developers.cloudflare.com/vectorize/)
* [Workers KV](https://developers.cloudflare.com/kv/)
* [R2](https://developers.cloudflare.com/r2/)
* [D1](https://developers.cloudflare.com/d1/)


---

---
title: Bot management
description: Cloudflare has bot management capabilities to help identify and mitigate automated traffic to protect domains from bad bots.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Bot management

**Last reviewed:**  over 1 year ago 

## Introduction

Cloudflare has bot management capabilities to help identify and mitigate automated traffic to protect domains from bad bots. [Bot Fight Mode](https://developers.cloudflare.com/bots/get-started/bot-fight-mode/) and [Super Bot Fight Mode](https://developers.cloudflare.com/bots/get-started/super-bot-fight-mode/) are options available on Free and Pro/Business accounts respectively. They offer a subset of features and capabilities available for Enterprise accounts. This reference architecture diagram focuses on [Enterprise Bot Management](https://developers.cloudflare.com/bots/get-started/bot-management/) available for Enterprise customers.

With [Enterprise Bot Management](https://developers.cloudflare.com/bots/get-started/bot-management/), customers get maximum protection, features, and capability. A [bot score](https://developers.cloudflare.com/bots/concepts/bot-score/) is exposed for every request. Cloudflare applies a layered detection approach to Bot Management, with several detection engines that cumulatively influence the bot score. A bot score is a value from 1 to 99 indicating the likelihood that a request came from a bot; scores below 30 are commonly associated with bot traffic. Customers can take action on this score with [WAF custom rules](https://developers.cloudflare.com/waf/custom-rules/) or [Workers](https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties). Additionally, customers can view the score, along with other bot specifics such as bot score source, bot detection IDs, and bot detection tags, in the Bots, Security Analytics, and Events dashboards. The same fields appear in more detailed logs in Log Explorer and, with Logpush, can be exported to third-party SIEM and analytics platforms.

## Definitions

* **Bot Score:** A [bot score](https://developers.cloudflare.com/bots/concepts/bot-score/) is a score from 1 to 99 that indicates how likely it is that a request came from a bot. A score of 1 means Cloudflare is certain the request was automated.
* **Bot Score Source:** Bot Score Source is the detection engine used for the bot score.
* **Bot Detection ID:** [Detection IDs](https://developers.cloudflare.com/bots/additional-configurations/detection-ids/) are static rules used to detect predictable bot behavior with no overlap with human traffic. Detection IDs refer to the precise [detection](https://developers.cloudflare.com/bots/concepts/bot-detection-engines/) used to identify a bot, which could be from heuristics, verified bot detections, or anomaly detections.
* **Bot Tag:** [Bot tags](https://developers.cloudflare.com/bots/concepts/bot-tags/) provide more detail about why Cloudflare assigned a [bot score](https://developers.cloudflare.com/bots/concepts/bot-score/) to a request.
* **Verified Bots:** Cloudflare maintains [a list of "Verified" good bots ↗](https://radar.cloudflare.com/traffic/verified-bots) which can be used in policies to ensure good bots, such as those associated with a search engine, are not blocked.
* **AI Bots:** [If the feature is enabled](https://developers.cloudflare.com/bots/concepts/bot/#ai-bots), Cloudflare will detect and block verified AI bots that respect `robots.txt` and crawl rate, and do not hide their behavior from your website. The rule has also been expanded to include more signatures of AI bots that do not follow the rules.

## Cloudflare Bot Management Detection Engines

* **Heuristics:** Cloudflare conducts a number of heuristic checks to identify automated traffic, and requests are matched against a growing database of malicious fingerprints. The [Heuristics engine](https://developers.cloudflare.com/bots/concepts/bot-score/#heuristics) immediately gives automated requests a score of 1.
* **Machine Learning (ML):** The [ML engine](https://developers.cloudflare.com/bots/concepts/bot-score/#machine-learning) accounts for the majority of all detections, human and bot. The ML model leverages Cloudflare's global network, which proxies billions of requests daily, to identify both automated and human traffic. The ML engine produces scores 2 through 99.
* **Anomaly Detection (AD):** The [AD engine](https://developers.cloudflare.com/bots/concepts/bot-score/#anomaly-detection) is an optional detection engine that uses a form of unsupervised learning. Cloudflare records a baseline of a domain's traffic and uses the baseline to intelligently detect outlier requests. Anomaly Detection is user agent-agnostic and can be turned on or off by your account team. Cloudflare does not recommend AD for domains that use [Cloudflare for SaaS](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/certificate-management/) or expect large amounts of API traffic. The AD engine immediately gives automated requests a score of 1.
* **JavaScript Detections (JSD)**: The [JSD engine](https://developers.cloudflare.com/bots/concepts/bot-score/#javascript-detections) identifies headless browsers and other malicious fingerprints. This engine performs a lightweight, invisible JavaScript injection on the client side of any request. The JSD engine either blocks, challenges, or passes requests to other engines. JSD is enabled by default but is completely optional.

## Bot Dashboards, Analytics, and Logs

Cloudflare bot score and bot traffic analysis is available in several locations.

* **Bots dashboard:** Customers can easily see bot activity up to 30 days back and filter on bot score and other bot, traffic, and request filters. The [bot feedback loop](https://developers.cloudflare.com/bots/concepts/feedback-loop/) allows customers to report back to Cloudflare any false positives or false negatives for further investigation.
* **Security Analytics:** Security Analytics brings together all of Cloudflare's detection capabilities in one dashboard and provides a broad view of all traffic across the site. The Bots Likelihood graph and widget provide visibility and allow customers to easily view and filter based on bot score and the respective categorization of Automated, Likely Automated, Human, and Likely Human.
* **Events:** Events displays all events the WAF took action on. Events and logs can easily be filtered by bot score and other bot, traffic, or request criteria.
* **Log Explorer:** Customers can use Log Explorer to pull additional detailed log data. Logs can easily be filtered by bot score and other bot, traffic, or request criteria.
* **Logpush:** Customers can also export logs to a third-party SIEM or analytics platform. Bot score, bot score source, bot detection IDs, and bot detection tags can all be exported as part of the logs.

## Bot Management Traffic Flow

![Figure 1: How Cloudflare identifies, scores and processes traffic from bots.](https://developers.cloudflare.com/_astro/bot-management-ra-diagram.D8aExrGs_ZGXIKY.svg "Figure 1: How Cloudflare identifies, scores and processes traffic from bots.")

Figure 1: How Cloudflare identifies, scores and processes traffic from bots.

1. Client request is sent to the closest Cloudflare Data Center via anycast ensuring low latency.
2. Cloudflare applies a layered approach to bot detection; each detection mechanism influences the bot score Cloudflare assigns. Every request is assigned a bot score between 1 and 99, inclusive.
3. Once the client request has been processed by all of Cloudflare's detection engines and assigned a bot score, defined security policies will be executed, some of which may also be leveraging bot score. Various actions can be taken based on the assigned bot score including block, allow, rate limit, and one of the challenge actions.
4. Cloudflare provides analytics and insights into traffic and requests traversing the Cloudflare network. Customers can use the Bots, Security Analytics, Events, and Log Explorer dashboards to understand the overall traffic and bots activity across their site. Customers can also export logs to third party SIEM and Analytics Platforms.
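The enforcement in step 3 can be expressed as a WAF custom rule or in a Worker reading `request.cf.botManagement.score`. Below is an illustrative sketch of the decision logic using the threshold mentioned above (scores below 30 are commonly associated with bots) and the verified-bot allowlist; the helper name and action choices are ours, not a Cloudflare API.

```javascript
// Illustrative decision helper. In a Worker, score and verifiedBot would
// come from request.cf.botManagement.score and
// request.cf.botManagement.verifiedBot. The equivalent WAF custom rule
// expression would be roughly:
//   cf.bot_management.score lt 30 and not cf.bot_management.verified_bot
function botAction(score, verifiedBot) {
  if (verifiedBot) return "allow"; // never block verified good bots
  if (score < 30) return "challenge"; // likely automated traffic
  return "allow";
}
```

In production you would typically tune the threshold and action (block, rate limit, challenge) per path, since API endpoints and login pages tolerate different levels of friction.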

## Related resources

* [Cloudflare Bot Management Product Page ↗](https://www.cloudflare.com/application-services/products/bot-management/)
* [Cloudflare Blog - Bot Management ↗](https://blog.cloudflare.com/tag/bot-management/)
* [Bots documentation](https://developers.cloudflare.com/bots/)
* [Video: Cloudflare Bot Management and Turnstile with Demo ↗](https://youtu.be/6EnekTohO7I?si=tk8FUB0xtk1PxsJV)


---

---
title: Designing a distributed web performance architecture
description: A prescriptive pattern for building a Cloudflare-based L7 performance architecture that reduces latency, raises cache efficiency, and improves Core Web Vitals.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Designing a distributed web performance architecture

**Last reviewed:**  24 days ago 

## Introduction

This guide describes a comprehensive layer 7 (L7) application performance strategy for architects and developers. In today's competitive digital landscape, **application performance is a critical business differentiator**. The ultimate objective, however, is finding the right equilibrium between performance and security.

While this guide focuses on maximizing speed and user experience (UX), performance cannot come at the expense of security. Architects must balance latency reduction against the necessary processing overhead of rigorous security controls, such as DDoS protection, WAF, and Bot Management.

In high-risk scenarios, security must take precedence: the "latency budget" gained from these performance optimizations is strategically reinvested to power essential protections, ensuring the application remains both fast enough to convert users and secure enough to protect the business.

Note

Performance optimization is a highly contextual endeavor where the "right" metrics and improvements can be unique to each organization and application.

| Key business metric                     | Why it matters |
| --------------------------------------- | -------------- |
| **User Engagement & Retention**         | **First Impressions & Abandonment:** A fast-loading website is fundamental to a positive user experience. Users today expect instant access to information, and research highlights this, showing that a significant portion of users will abandon a website if it [takes too long to load ↗](https://support.google.com/adsense/answer/7450973?hl=en), directly increasing the bounce rate.                                                                                                                       |
| **Revenue Generation & Conversion**     | **Direct Business Impact:** Web performance directly impacts a website's conversion rate, which is the percentage of visitors who complete a desired action, such as making a purchase or signing up for a newsletter. A faster site leads to higher conversion rates; for example, one [study ↗](https://www.cloudflare.com/en-gb/learning/performance/more/website-performance-conversion-rates/) found that even a 100-millisecond reduction in homepage load time resulted in a 1.11% increase in conversions. |
| **Organic Visibility & Search Ranking** | **Traffic Acquisition & Authority:** Search engines like Google use page speed as a ranking factor, making it central to Search Engine Optimization (SEO). Faster-loading websites tend to rank higher in search results, which leads to more organic traffic. Google's **Core Web Vitals (CWVs)** are a set of metrics that measure a page's loading speed, interactivity, and visual stability, all of which are directly tied to performance and can significantly boost a site's search engine ranking. |
| **High-Speed Delivery & Reliability**   | **User Experience & Trust:** This metric combines a high **Download Success Rate** (Availability/Resiliency) with maximum **Download Throughput** (Speed). For mission-critical assets like software, video, or AI models, it ensures users get the file fast and reliably, directly impacting product usability and customer trust, especially during traffic spikes.                                                                                                                                             |
| **Edge Efficiency & Cost Control**      | **Operational Cost Reduction:** This metric is primarily measured by the **Cache Hit Ratio (CHR)** for large files. Maximizing the CHR offloads traffic from the origin server, which is the key driver for minimizing infrastructure load and achieving significant **Data Egress Cost Reduction** (for example, through the [Bandwidth Alliance ↗](https://www.cloudflare.com/bandwidth-alliance/)), directly translating to lower operational costs and greater profitability for the business.                 |

Measuring the Impact: While marketing dashboards (for example, [Google Analytics](https://developers.cloudflare.com/fundamentals/reference/google-analytics/)) track business outcomes, Cloudflare [Web Analytics](https://developers.cloudflare.com/web-analytics/) and [Observatory](https://developers.cloudflare.com/speed/observatory/) measure the performance drivers. Use them to correlate real-time Core Web Vitals (CWV) and Real User Monitoring (RUM) improvements directly with reduced bounce rates and higher conversions, without compromising privacy or relying on heavy client-side scripts.

By following this architecture, organizations can expect to:

* **Improve Core Web Vitals (CWV)** like LCP and INP, which can help reduce bounce rates and drive sales.
* Maximize cache hit ratio, which offloads traffic from the origin, reducing infrastructure spend and overall **lowering operational costs**.
* Ensure high uptime/availability and **business resiliency**, even during traffic spikes.
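Of these, cache hit ratio is the simplest to quantify: it is the share of requests (or bytes) served from Cloudflare's cache rather than from the origin. A quick sketch of the computation from edge request counts (the function and parameter names are illustrative):

```javascript
// Cache Hit Ratio: fraction of requests served from cache instead of origin.
// This is the request-based CHR; a byte-based CHR substitutes bytes served
// for request counts.
function cacheHitRatio(cacheHits, cacheMisses) {
  const total = cacheHits + cacheMisses;
  return total === 0 ? 0 : cacheHits / total;
}
```

For example, 90 cache hits against 10 misses yields a CHR of 0.9; every point of CHR gained is traffic your origin never has to serve or pay egress for.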

## Performance goals and metrics

[Measuring performance is tricky ↗](https://blog.cloudflare.com/loving-performance-measurements/), and it serves a broader business context where Security and [Compliance ↗](https://www.cloudflare.com/trust-hub/) are often non-negotiable prerequisites. Organizations frequently validate that their architecture meets regulatory standards (such as [data residency ↗](https://www.cloudflare.com/learning/privacy/what-is-data-localization/) or [encryption protocols](https://developers.cloudflare.com/ssl/reference/protocols/), including [Post-Quantum Cryptography (PQC)](https://developers.cloudflare.com/ssl/post-quantum-cryptography/)) before unlocking performance capabilities.

Once these security and compliance baselines are secured, effective optimization starts with measuring the “right” things, which differ slightly for every organization. Nonetheless, most teams agree on focusing on user-centric metrics for website performance: use [TTFB as a diagnostic tool ↗](https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/) for server responsiveness, but prioritize [Core Web Vitals (CWV) ↗](https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/) for measuring user experience.

Successful implementation is measured by these metrics:

| Metric                              | Target (75th percentile) | What it measures                                                                                                                                                                                                                                                                                                                    |
| ----------------------------------- | ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Largest Contentful Paint (LCP)**  | < 2.5 s                  | Loading performance (hero image/text visibility).                                                                                                                                                                                                                                                                                   |
| **Interaction to Next Paint (INP)** | < 200 ms                 | Interactivity and responsiveness to inputs.                                                                                                                                                                                                                                                                                         |
| **Cumulative Layout Shift (CLS)**   | < 0.1                    | Visual stability (unexpected layout shifts).                                                                                                                                                                                                                                                                                        |
| **Time to First Byte (TTFB)**       | < 800 ms                 | Server responsiveness (network + processing time). Gain deep visibility into connection performance by leveraging fields like [_cf.timings.origin\_ttfb\_msec_](https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.timings.origin%5Fttfb%5Fmsec/) to isolate origin latency from network overhead. |

The 75th percentile target is [based on previous analysis ↗](https://web.dev/articles/defining-core-web-vitals-thresholds) and strikes a reasonable balance between typical and worst-case experiences.

**Note**

While [previous analysis ↗](https://web.dev/articles/defining-core-web-vitals-thresholds) recommends looking at the 75th percentile for CWV, server-side latency metrics (like TTFB) should be monitored at the 99th percentile (P99) or higher. Because a single user session often involves dozens of requests, the [probability of a user not experiencing a latency spike ↗](https://blog.cloudflare.com/loving-performance-measurements/) above the median (P50) is near zero. The P99 metric often better represents the "median" user experience for a full session.
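The session-level effect described in this note can be sketched with a little arithmetic. Assuming (as a simplification) independent requests, the chance that a session of `n` requests never exceeds a given per-request latency percentile `p` is `p^n`:

```javascript
// Probability that all n requests in a session stay below the
// per-request latency percentile p (independence is a simplification).
function sessionBelowPercentile(p, n) {
  return Math.pow(p, n);
}

// With 50 requests per session, virtually every user sees at least one
// request slower than the per-request median (P50):
console.log(sessionBelowPercentile(0.5, 50)); // ~8.9e-16
// Even the per-request P99 is exceeded in roughly 39% of 50-request sessions:
console.log(1 - sessionBelowPercentile(0.99, 50)); // ~0.39
```

This is why tail percentiles such as P99 better represent what a typical session actually experiences.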

## Data flow

This diagram illustrates the request lifecycle, highlighting how Cloudflare's layers and [phases](https://developers.cloudflare.com/ruleset-engine/reference/phases-list/) - Network, Optimization, Caching, and Origin connectivity - work together to minimize latency.

![Figure 1: Data flow overview showing the request lifecycle across User, Cloudflare Edge, Tiered Edge, and Origin layers.](https://developers.cloudflare.com/_astro/data-flow-overview.DfUAkD8f_Z10Qic0.webp "Figure 1: Data flow overview")

Figure 1: Data flow overview

For demonstration purposes, the architecture is organized into four logical layers and follows specific [phases](https://developers.cloudflare.com/ruleset-engine/reference/phases-list/). Optimizing every step in this chain is required to achieve the best aggregate performance.

### 1. User (eyeball client)

The performance journey begins at the client's device. Device hardware, [browser ↗](https://caniuse.com/), network quality and topology determine initial responsiveness. The goal here is to establish the fastest possible connection to the Cloudflare network.

* **DNS Resolution:** The client device queries the domain, going through both a public DNS resolver and, ultimately, to an authoritative DNS server. Cloudflare's [global anycast network ↗](https://www.cloudflare.com/network/) routes requests to the nearest Point of Presence (PoP), with [global DNS ↗](https://www.dnsperf.com/) resolution ensuring minimal lookup latency, including the possibility to expand to [mainland China](https://developers.cloudflare.com/china-network/).
* **Connection Establishment:** The client establishes a connection via IPv4/[IPv6](https://developers.cloudflare.com/network/ipv6-compatibility/) using [HTTP/3 (QUIC)](https://developers.cloudflare.com/speed/optimization/protocol/http3/) and [TLS 1.3](https://developers.cloudflare.com/ssl/edge-certificates/additional-options/tls-13/) - this also allows for [Post-Quantum Cryptography (PQC)](https://developers.cloudflare.com/ssl/post-quantum-cryptography/). If the client has visited before, [0-RTT Connection Resumption](https://developers.cloudflare.com/speed/optimization/protocol/0-rtt-connection-resumption/) eliminates round-trips during the handshake. Additionally, [HTTP Strict Transport Security (HSTS)](https://developers.cloudflare.com/ssl/edge-certificates/additional-options/http-strict-transport-security/) enforces browser-side redirects to HTTPS, removing unnecessary server round-trips. It is generally recommended to [enforce HTTPS connections](https://developers.cloudflare.com/ssl/edge-certificates/encrypt-visitor-traffic/). Furthermore, by leveraging relevant [TCP fields](https://developers.cloudflare.com/changelog/2025-10-30-tcp-rtt-and-tcp-fields/), you can implement adaptive performance strategies.
* **Browser Optimization:** Features like [Speed Brain](https://developers.cloudflare.com/speed/optimization/content/speed-brain/) (Speculation Rules API) proactively prefetch resources, while [Early Hints](https://developers.cloudflare.com/cache/advanced-configuration/early-hints/) send link headers to the browser during "server think time", speeding up page rendering.
* **Third-Party Offloading:** [Zaraz](https://developers.cloudflare.com/zaraz/) offloads third-party tools (like Google Analytics 4 or Mixpanel) to the cloud. This reduces main thread blocking on the device, significantly improving INP.
* **Web Analytics (RUM):** Leverage Cloudflare [Web Analytics](https://developers.cloudflare.com/web-analytics/) to collect privacy-first, cookie-less performance data directly from the user's browser. This lightweight JavaScript beacon provides real-world insights into Core Web Vitals (LCP, INP, CLS) without tracking users or storing client-side state.

![Figure 2: Smart Shield Advanced network diagram showing Argo Smart Routing, Tiered Cache, Cache Reserve, Connection Reuse, Dedicated Egress IPs, and Load Balancing across multiple Points of Presence.](https://developers.cloudflare.com/_astro/network-diagram.PeUYDGK__Z2qTCdR.webp "Figure 2: Smart Shield Advanced network diagram")

Figure 2: Smart Shield Advanced network diagram

### 2. Network and optimization (Cloudflare edge)

Once the request reaches the network edge, Cloudflare processes and optimizes the content before it is served or fetched from the cache.

* **Traffic Management:** The request is inspected. [URL Normalization](https://developers.cloudflare.com/rules/normalization/) ensures consistency, while [Redirect Rules](https://developers.cloudflare.com/rules/url-forwarding/) or [Transform Rules](https://developers.cloudflare.com/rules/transform/) handle path modifications efficiently. [Waiting Room](https://developers.cloudflare.com/waiting-room/) protects the backend during [massive traffic surges](https://developers.cloudflare.com/learning-paths/surge-readiness/concepts/), maintaining availability.
* **Programmatic Customization:** For advanced use cases where standard rules are insufficient, [Snippets and Workers](https://developers.cloudflare.com/rules/snippets/when-to-use/) allow for programmatic customization. This enables executing custom code logic to modify headers, rewrite URLs, [image optimizations](https://developers.cloudflare.com/images/transform-images/transform-via-workers/), or implement unique caching logic directly at the edge. Utilize [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to facilitate low-latency, zero-overhead communication between these Workers.
* **Content Optimization:** Text assets are compressed using [Compression Rules](https://developers.cloudflare.com/rules/compression-rules/) (Brotli/Gzip). Images are processed on-the-fly via [Image Transformations](https://developers.cloudflare.com/images/transform-images/) or [Polish](https://developers.cloudflare.com/images/polish/) to ensure they are served in the optimal format (AVIF/WebP) and size for the device, significantly improving LCP and CLS.
* **Font & Tag Optimization:** [Cloudflare Fonts](https://developers.cloudflare.com/speed/optimization/content/fonts/) eliminates DNS lookups and TLS connections to Google Fonts by serving them inline from the domain. [Google Tag Gateway](https://developers.cloudflare.com/google-tag-gateway/) improves ad signal measurement and privacy.
* **Routing, Availability & Protocol Intelligence:** Cloudflare operates one of the most [interconnected networks ↗](https://blog.cloudflare.com/network-performance-update-birthday-week-2025/) in the world, peering with over 13,000 networks, operating a [global backbone ↗](https://blog.cloudflare.com/backbone2024/), and participating in a leading number of [Internet Exchange Points (IXPs) ↗](https://bgp.he.net/report/exchanges#%5Fparticipants) globally. We leverage the [unique intelligence ↗](https://blog.cloudflare.com/how-cloudflare-uses-the-worlds-greatest-collection-of-performance-data/) derived from this massive dataset to dynamically optimize Congestion Control (CC) at the protocol level - automatically selecting the optimal algorithm and tuning adequate parameters for every connection based on real-time network conditions. For dynamic requests that cannot be cached, [Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/) finds the fastest path through the network to the origin. [Custom Errors](https://developers.cloudflare.com/rules/custom-errors/) provide a consistent brand experience during failures.
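As a minimal sketch of the programmatic customization described above, the rewrite-and-tag logic a Worker or Snippet might apply can be expressed as a pure function. The `/promo/` path mapping and the `x-edge-rewritten` header are hypothetical examples, not a Cloudflare convention:

```javascript
// Hypothetical edge logic: rewrite a vanity path and record whether a
// rewrite happened in a custom header.
function customizeRequest(path, headers) {
  const rewritten = path.replace(/^\/promo\//, '/campaigns/'); // illustrative mapping
  return {
    path: rewritten,
    headers: { ...headers, 'x-edge-rewritten': rewritten !== path ? '1' : '0' },
  };
}

// In an actual Worker, the same logic would run inside the fetch handler:
// export default {
//   async fetch(request) {
//     const url = new URL(request.url);
//     url.pathname = customizeRequest(url.pathname, {}).path;
//     return fetch(new Request(url, request));
//   },
// };
```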

![Figure 3: Data flow for network and content optimization showing Traffic Handling, Programmatic Customization, Content Optimization, and Font and Tag Optimization.](https://developers.cloudflare.com/_astro/data-flow-network-content-optimization.BxZ6NPp-_Z1YJT2N.webp "Figure 3: Data flow - network and content optimization")

Figure 3: Data flow - network and content optimization

### 3. Tiered Cache and Storage (Cloudflare edge)

Cloudflare's cache can be organized into a specific topology. This layer handles content retention and retrieval, acting as a shield for the origin and a high-speed store for the client.

* **Cache Logic:** [Origin Cache Control Headers](https://developers.cloudflare.com/cache/concepts/cache-control/), [Cache Rules](https://developers.cloudflare.com/cache/how-to/cache-rules/) and [Caching Levels](https://developers.cloudflare.com/cache/how-to/set-caching-levels/) allow precise control over TTL and query string handling. Implement Cache Normalization strategies to consolidate requests with variable URLs - such as those with distinct marketing or SEO parameters - into a single [Cache Key](https://developers.cloudflare.com/cache/how-to/cache-keys/), significantly improving cache hit ratios. [Prefetch URLs](https://developers.cloudflare.com/speed/optimization/content/prefetch-urls/) can pre-populate the cache with critical assets via manifest files to further reduce latency. Note the [default caching behavior and limits](https://developers.cloudflare.com/cache/concepts/default-cache-behavior/#default-cached-file-extensions).
* **Tiered Caching:** If the content is not on the local PoP, Cloudflare checks an upper-tier cache topology. [Smart Tiered Caching](https://developers.cloudflare.com/cache/how-to/tiered-cache/) and [Regional Tiered Cache](https://developers.cloudflare.com/cache/how-to/tiered-cache/#regional-tiered-cache) centralize connections, increasing cache hit ratios and reducing global origin load. For a more customized approach, Enterprise customers can opt for a [Custom Tiered Cache](https://developers.cloudflare.com/cache/how-to/tiered-cache/#custom-tiered-cache) topology.
* **Dedicated long-term Cache:** [Cache Reserve](https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/) extends the life of large, infrequently accessed assets (for example, images, archived video, software updates, or static AI models) by moving them to a persistent object storage backend (powered by R2). This prevents eviction due to [Least Recently Used (LRU)](https://developers.cloudflare.com/cache/concepts/retention-vs-freshness/) algorithms and avoids latency-inducing origin fetches, while simultaneously supporting storage redundancy and resilience requirements.
* **Instant Purge:** Leverage Cloudflare's [decentralized purging architecture ↗](https://blog.cloudflare.com/instant-purge-for-all/) to invalidate content globally in approximately 150ms. This [Instant Purge](https://developers.cloudflare.com/cache/how-to/purge-cache/) capability supports various granular approaches - including Purge by URL, Tag, Prefix, or Hostname - ensuring users receive fresh content immediately without waiting for TTL expiration.
* **Cloud Connectivity:** [Cloud Connector Rules](https://developers.cloudflare.com/rules/cloud-connector/) simplify routing traffic to public cloud providers (AWS, Azure, GCP) for specific object storage or origin requirements. For private infrastructure, [Workers VPC](https://developers.cloudflare.com/workers-vpc/) enables direct connectivity to private storage endpoints or databases on public clouds (for example, AWS, Azure) without exposing them to the public Internet.
* **Static Asset Hosting:** Entire parts of an application (frontend assets, images, including large media files, software packages) can be stored directly in [R2 Object Storage](https://developers.cloudflare.com/r2/) or [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/), serving them from the edge without ever hitting a traditional origin server. Additional [storage options](https://developers.cloudflare.com/workers/platform/storage-options/) are available.
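The Cache Normalization idea above can be sketched as a small function that drops marketing parameters and sorts the remainder, so variant URLs collapse onto one cache key. The `utm_` prefix is one common example; which parameters to strip depends on your application, and in production Cloudflare applies this kind of normalization through Cache Rules and custom Cache Keys rather than through code like this:

```javascript
// Sketch: derive a normalized cache key by removing utm_* marketing
// parameters and sorting the remaining query string.
function normalizedCacheKey(rawUrl) {
  const url = new URL(rawUrl);
  const kept = [...url.searchParams.entries()]
    .filter(([name]) => !name.toLowerCase().startsWith('utm_'))
    .sort(([a], [b]) => a.localeCompare(b));
  url.search = new URLSearchParams(kept).toString();
  return url.toString();
}

// Marketing variants map to the same key:
// normalizedCacheKey('https://shop.example.com/p?id=7&utm_source=mail')
//   -> 'https://shop.example.com/p?id=7'
```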

![Figure 4: Data flow for caching showing Local Edge, Tiered Cache, and Long-Term Cache or Storage layers with cache miss and fill paths.](https://developers.cloudflare.com/_astro/data-flow-caching.BaLZQbF7_Z1BU3fP.webp "Figure 4: Data flow - caching")

Figure 4: Data flow - caching

### 4. Origin server

For requests that must traverse the full path (that is, dynamic content or cache misses), the origin configuration determines the final latency impact. Architects have two primary paths here: adopting the performant, resilient serverless model (also known as originless), or optimizing connectivity and security for a traditional Origin Server.

**Serverless:** Cloudflare's [Developer Platform](https://developers.cloudflare.com/learning-paths/workers/devplat/intro-to-devplat/) achieves the optimal performance tier by enabling an "originless" model. [Fullstack applications](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/fullstack-application/) are built and deployed directly on the global edge network worldwide, eliminating the full path traversal to a distant origin. Dynamic requests execute at the nearest Cloudflare PoP and provide seamless access to integrated [edge storage solutions](https://developers.cloudflare.com/workers/platform/storage-options/) like R2 Object Storage and D1 Serverless SQLite Database. This drastically reduces TTFB and contributes significantly to aggressive CWV targets. Furthermore, this Originless model, leveraging Workers and R2, is the optimal design for high-performance file distribution, eliminating the need for a traditional backend server to deliver large datasets and media.

**Traditional Origin Optimization:** For applications that cannot be [refactored or modernized ↗](https://www.cloudflare.com/modernize-applications/) to an originless model, the following optimizations are required to minimize the resulting latency impact of traditional infrastructure:

* **Connectivity:** Cloudflare connects using [HTTP/2 to Origin](https://developers.cloudflare.com/speed/optimization/protocol/http2-to-origin/), utilizing [Connection Reuse](https://developers.cloudflare.com/smart-shield/concepts/connection-reuse/) to multiplex requests over a single persistent connection, reducing TCP/TLS overhead. For enhanced reliability and security, [Cloudflare Network Interconnect (CNI)](https://developers.cloudflare.com/network-interconnect/) allows you to connect your network infrastructure directly to Cloudflare - bypassing the public Internet - for a more performant and secure experience. Additionally, leveraging the [Bandwidth Alliance ↗](https://www.cloudflare.com/bandwidth-alliance/) (including partners like [Microsoft Azure Routing Preference ↗](https://www.cloudflare.com/en-gb/partners/technology-partners/microsoft/azure-routing-preference/)) can significantly reduce or waive data egress fees.
* **Private Infrastructure:** [Workers VPC](https://developers.cloudflare.com/workers-vpc/) and [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) enable direct connectivity to private storage endpoints or databases on public clouds without necessarily exposing them to the public Internet.
* **Load Balancing:** Traffic is distributed across healthy servers using [Cloudflare Load Balancing](https://developers.cloudflare.com/load-balancing/understand-basics/proxy-modes/). If an origin fails, traffic is instantly rerouted to healthy server pools. Alternatively, [Round-Robin DNS](https://developers.cloudflare.com/dns/manage-dns-records/how-to/round-robin-dns/) can be used for simpler distribution strategies.
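The failover behavior behind load balancing across origin pools can be illustrated with a tiny sketch. Pool names and health flags here are hypothetical; in practice, health state comes from Cloudflare's health checks:

```javascript
// Sketch: choose the first healthy pool in priority order; if none is
// healthy, the caller must fall back (for example, to a custom error page).
function pickPool(pools) {
  const healthy = pools.find((pool) => pool.healthy);
  return healthy ? healthy.name : null;
}

const pools = [
  { name: 'primary-eu', healthy: false }, // failed its health checks
  { name: 'backup-us', healthy: true },
];
console.log(pickPool(pools)); // 'backup-us'
```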

![Figure 5: Deployment models comparing Serverful \(DNS, CDN, Images, Zaraz, Waiting Room, Load Balancing, Network Interconnect\) and Serverless \(Workers, Workers KV, AI, Queues, R2, D1, Hyperdrive\) architectures.](https://developers.cloudflare.com/_astro/deployment-models.CfqQk9U__Z5jKVh.webp "Figure 5: Deployment models")

Figure 5: Deployment models

## Tools and resources

Continuous monitoring and testing verify each optimization. Measurement and logging confirm real gains, surface regressions early, and reveal edge cases long before they affect clients.

When analyzing this data, it is important to take into account [connection limits](https://developers.cloudflare.com/fundamentals/reference/connection-limits/) and [TCP connection behavior](https://developers.cloudflare.com/fundamentals/reference/tcp-connections/), while also accounting for [Cloudflare crawlers](https://developers.cloudflare.com/fundamentals/reference/cloudflare-site-crawling/) and the [/cdn-cgi/ endpoint](https://developers.cloudflare.com/fundamentals/reference/cdn-cgi-endpoint/), as well as potential [data discrepancies between Cloudflare and Google Analytics](https://developers.cloudflare.com/fundamentals/reference/google-analytics/).

### Cloudflare platform tools

* [Cloudflare Observatory ↗](https://dash.cloudflare.com/?to=/:account/:zone/speed/): The primary dashboard for performance. It combines Synthetic tests (Google Lighthouse) for standardized baselines with Real User Monitoring (RUM) to capture actual user experiences across different devices and regions.
* [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/): Use this for Trends and [Timing Insights ↗](https://blog.cloudflare.com/introducing-timing-insights/). Query specific metrics like `edgeDnsResponseTimeMs` versus `originResponseDurationMs` to pinpoint exactly where latency is introduced.
* [Web Analytics](https://developers.cloudflare.com/web-analytics/): Specific for privacy-first, edge-based RUM analytics.
* [Cache Analytics](https://developers.cloudflare.com/cache/performance-review/cache-analytics/): Critical for analyzing Cache Hit Ratio (CHR) and "Requests by Cache Status" to find uncached content that causes origin load.
* [Ruleset Engine](https://developers.cloudflare.com/ruleset-engine/): Review and leverage the extensive library of [fields](https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/), including network metrics like [TCP RTT and TCP fields](https://developers.cloudflare.com/changelog/2025-10-30-tcp-rtt-and-tcp-fields/), to implement precise custom logic for routing, caching, and security based on real-time connection properties.
* Logging & Forensics:  
   * [Log Explorer](https://developers.cloudflare.com/log-explorer/): For ad-hoc querying of request logs directly in the dashboard. Use [Custom Log Fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/) to log additional request headers, response headers and cookies.  
   * [Logpush](https://developers.cloudflare.com/logs/logpush/): For exporting logs to third-party SIEMs with optional [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/), supporting formats such as CSV or JSON. Essential for analyzing custom fields and long-term trends, as well as calculating the Download Success Rate and analyzing Download Throughput for large files.  
   * [Instant Logs](https://developers.cloudflare.com/logs/instant-logs/): Real-time traffic inspection for immediate debugging.  
   * [Network Error Logging (NEL)](https://developers.cloudflare.com/network-error-logging/): Captures client-side connectivity issues that the server might never see.

### Open source and automation

* [Cloudflare Telescope ↗](https://github.com/cloudflare/telescope): An open-source, cross-browser front-end testing agent capable of running tests in all major browsers. Use this to automate performance regression testing in your CI/CD pipeline.
* [Cloudflare Speed Test ↗](https://blog.cloudflare.com/how-does-cloudflares-speed-test-really-work/): Measures realistic Internet connection quality - including loaded latency, jitter, and packet loss - by simulating real-world usage on Cloudflare's global network using predefined data blocks, rather than simply testing for peak throughput saturation.
* [Cloudflare Prometheus Exporter ↗](https://github.com/cloudflare/cloudflare-prometheus-exporter): Scrapes metrics from the [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/) and exposes them in a Prometheus-compatible format, allowing you to visualize Cloudflare performance data alongside your infrastructure metrics in Grafana or similar tools.

### External validation and benchmarking tools

While Cloudflare provides internal metrics, external (third-party) tools are vital for independent validation and deep-dive analysis of the critical rendering path.

* [WebPageTest ↗](https://www.webpagetest.org/): Detailed waterfall charts and deep analysis of loading behavior.
* [Google PageSpeed Insights ↗](https://pagespeed.web.dev/): The standard for Core Web Vitals assessment (Field & Lab data).
* [DebugBear ↗](https://www.debugbear.com/tools): Excellent for continuous monitoring and tracking speed history.
* [Pingdom ↗](https://tools.pingdom.com/): Useful for simple, geographic-based availability and speed testing.
* [Treo.sh ↗](https://treo.sh/sitespeed): Fast, historical visualization of Chrome User Experience Report (CrUX) data.


---

---
title: Optimizing image delivery with Cloudflare image resizing and R2
description: Learn how to get a scalable, high-performance solution to optimizing image delivery.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Optimizing image delivery with Cloudflare image resizing and R2

**Last reviewed:** almost 2 years ago

## Introduction

Optimizing image delivery for websites is crucial for enhancing user experience. Since images often represent the largest portion of a website's data, they significantly affect page load times, search engine rankings, delivery costs, and overall performance. This reference architecture diagram will guide you through a straightforward, scalable, and high-performance solution. By simply adjusting the URL string to specify image size and quality, you can cache and deliver the optimized image to any user requesting that format. Below are the Cloudflare components involved in this solution:

* [Cloudflare CDN ↗](https://www.cloudflare.com/en-gb/application-services/products/cdn/) - Leverage [Cloudflare’s Global Network ↗](https://www.cloudflare.com/en-gb/network/) to cache your transformed images for fast and reliable delivery to your end users.
* [Cloudflare Images ↗](https://www.cloudflare.com/en-gb/developer-platform/cloudflare-images/) - Leverage Cloudflare Images to resize, optimize, and transform images stored in an object storage solution such as Cloudflare R2. Transformations are driven by a specifically formatted URL, which requires minimal refactoring of your application.
* [Cloudflare R2 Object Storage ↗](https://www.cloudflare.com/en-gb/developer-platform/r2/) - R2 stores large amounts of unstructured data, and in this use case holds the original (best-quality) images used for transformation.
* [Cloudflare Transform Rules](https://developers.cloudflare.com/rules/transform/) - If you are migrating from another solution to Cloudflare, Transform Rules let you rewrite URLs from the other solution's syntax to the Cloudflare-specific syntax, reducing migration complexity.

## Image Delivery with Cloudflare Image Resizing and R2

![Figure 1: Cloudflare Image Resizing and R2](https://developers.cloudflare.com/_astro/optimizing-image-delivery-with-cloudflare-image-resizing-and-r2-diagram.6srQTFoB_10fa9h.svg "Figure 1: Cloudflare Image Resizing and R2")

Figure 1: Cloudflare Image Resizing and R2

1. **User Request**: The user sends an HTTP request for an image (image.jpg), specifying transformations like width and quality directly in the URL as a comma-separated list of options.
2. **Cache Hit**: Cloudflare processes the request at the point of presence closest to the user. It first checks if the requested image transformation is already in Cloudflare’s Cache. If so, the image is immediately returned to the user, eliminating the need for further processing. If not, the request moves to the next step.
3. [Transform Rules](https://developers.cloudflare.com/rules/transform/) (optional): If you are migrating from another image solution, it may be necessary to rewrite the URL path and query string so that you avoid complex refactoring at the application level. Both dynamic and static rewrites are supported, with dynamic rewrites supporting complex expressions to cover a multitude of URL patterns.
4. **Cache MISS - R2**: If the requested image is not available in Cloudflare’s Cache, the request is sent to the origin, which in this scenario is [Cloudflare’s R2 Object Storage](https://developers.cloudflare.com/r2/) platform. Only the original images are stored in R2; no resized variants are kept in the bucket, which makes operating R2 without object lifecycle rules less onerous.
5. **Transform Image**: Based on the URL syntax sent in step 1 (or rewritten in step 3), [Cloudflare Images](https://developers.cloudflare.com/images/) transforms the image and populates the cache before serving the requested image back to the end user.

## Image Resizing URL Syntax Reference

You can easily convert and resize images by requesting them through a specifically-formatted URL. This section explains the URL structure for image transformation, referring back to the diagram and detailing each URL component:

* **Part 1** - Your domain name on Cloudflare: the zone you onboarded and from which your website or images are served, for example [https://www.mywebsite.com/ ↗](https://www.mywebsite.com/)
* **Part 2** - The fixed `/cdn-cgi/image/` prefix that identifies this as a special path handled by Cloudflare’s built-in Worker.
* **Part 3** - A comma-separated list of options for the image, such as `width=80,quality=75`.
* **Part 4** - The absolute path of the image on the origin server, for example `/uploads/image.jpg`.

The final URL used in the request would look like this:

```
https://www.mywebsite.com/cdn-cgi/image/width=80,quality=75/uploads/image.jpg
```
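A small helper makes the four-part structure concrete: it assembles the transformation URL from a zone, an options object, and an origin path. The helper itself is illustrative and not part of any Cloudflare SDK:

```javascript
// Build a /cdn-cgi/image/ transformation URL from its four parts.
function transformUrl(zone, options, originPath) {
  const opts = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(',');
  return `https://${zone}/cdn-cgi/image/${opts}${originPath}`;
}

console.log(transformUrl('www.mywebsite.com', { width: 80, quality: 75 }, '/uploads/image.jpg'));
// -> https://www.mywebsite.com/cdn-cgi/image/width=80,quality=75/uploads/image.jpg
```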

## Related Resources

* [Image Resizing Documentation](https://developers.cloudflare.com/images/transform-images/)
* [Cloudflare R2 Developer Docs](https://developers.cloudflare.com/r2/)
* [URL Rewrite Rules](https://developers.cloudflare.com/rules/transform/url-rewrite/)
* [Serverless image content management platform](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-image-content-management/)


---

---
title: Optimizing and securing connected transportation systems
description: This diagram showcases Cloudflare components optimizing connected transportation systems. It illustrates how their technologies minimize latency, ensure reliability, and strengthen security for critical data flow.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Optimizing and securing connected transportation systems

**Last reviewed:**  over 1 year ago 

A connected transport system is an integrated network of vehicles, infrastructure, and/or services that rely on constant, real-time data exchange to improve safety, efficiency, and mobility. Examples include public transportation (buses, trams, and trains), emergency vehicles (ambulances, fire trucks, and police cars), fleet management systems (logistics and delivery trucks), autonomous vehicles, connected infrastructure (traffic lights, road signs), platooning systems (truck convoys), drone delivery vehicles, and connected cars. They can be broadly categorized into:

* **Fixed location devices**: Systems such as CCTV cameras, traffic signals, and roadside sensors that remain in fixed locations and push data through a central gateway.
* **Roaming devices**: These include trucks, delivery vehicles, emergency vehicles, drones, and autonomous cars that require continuous connectivity for real-time communication and control.

These systems need secure and reliable network connections to operate safely and efficiently. Emergency vehicles rely on stable, secure connections to respond quickly without delays. Public transportation systems, including buses and trains, depend on real-time data to keep schedules on track and passengers safe. Fleet management, autonomous vehicles, and drone delivery systems all require secure connections to protect sensitive data and ensure operational reliability.

These systems are prime targets for cyberattacks, which could disrupt services, put public safety at risk, or compromise sensitive data. Their safety and reliability are vital for modern mobility.

This reference architecture diagram illustrates the key Cloudflare components and technologies involved in effectively minimizing latency, ensuring high reliability, and maintaining strong security for connected transportation system communications. Each component plays a crucial role in processing, routing, optimizing, and securing data flow, ensuring that critical data is delivered efficiently and securely.

Devices connect to Cloudflare's anycast network, which inspects and filters incoming data to protect against threats like DDoS attacks, malicious bots, and unauthorized access. Cloudflare's integrated services (including the content delivery network, load balancing, edge computing, and storage solutions) work together seamlessly to enhance data delivery, scalability, and resilience. This ensures that data is processed, optimized, and delivered efficiently to reduce latency, distribute traffic effectively, and handle requests closer to users. Additionally, the routing of data to origins is optimized by the vast global network and smart routing to identify the fastest, most efficient paths. This combination of security, scalability, performance, and routing results in a safer and faster connection between devices and their destination services.

![Figure 1: Optimizing and securing connected transportation systems](https://developers.cloudflare.com/_astro/figure1.FtS8xCcW_2avVr4.svg) 
1. **Mutual TLS (mTLS)**: To ensure strong authentication, Cloudflare utilizes [mutual TLS](https://developers.cloudflare.com/ssl/client-certificates/enable-mtls/) (mTLS) to verify both client and server identities. This adds an initial layer of trust, ensuring only authorized devices can communicate with the application.
2. **Cloudflare anycast network**: Cloudflare uses [anycast ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/) networking and is one of the world's most connected and geographically distributed networks. Traffic is routed to the nearest Cloudflare data center, which reduces the number of network hops, dynamically adapts to changing network conditions, and ensures data takes the shortest path to its destination, minimizing latency and maximizing reliability.
3. **Security services**:  
   1. **API Shield**: Cloudflare's [API Shield](https://developers.cloudflare.com/api-shield/get-started/) protects critical APIs from unauthorized access and abuse, ensuring secure data exchange between connected systems.  
   2. **Web Application Firewall (WAF)**: Cloudflare's [WAF](https://developers.cloudflare.com/waf/) helps block malicious traffic and prevent application or API vulnerabilities from being exploited, safeguarding your network, devices and applications.  
   3. **DDoS Protection**: Cloudflare's [DDoS protection](https://developers.cloudflare.com/ddos-protection/about/attack-coverage/), covering the network, transport and application layer, prevents volumetric attacks that could compromise the availability of connected systems. By providing multi-layered protection, Cloudflare is able to mitigate a wide variety of DDoS threats. At lower layers, Cloudflare defends against high-volume attacks such as SYN floods, UDP floods, and other types of protocol-based disruptions that can overwhelm network resources. At the application layer, more sophisticated attacks targeting the application itself, such as HTTP floods - which aim to exhaust server resources and disrupt user-facing services - are blocked even in the face of [large-scale DDoS attempts ↗](https://blog.cloudflare.com/tag/ddos-reports/).  
   4. **DNS security**: Cloudflare's [DNS security ↗](https://www.cloudflare.com/en-gb/application-services/products/dns/) helps protect name resolution, ensuring that malicious actors cannot hijack requests.  
   5. **TLS encryption**: [TLS encryption](https://developers.cloudflare.com/ssl/edge-certificates/) ensures that data exchanged across the network is protected from interception, maintaining data integrity and privacy.
4. **Performance and reliability services**:  
   1. **Content Delivery Network (CDN)**: [Distribute content ↗](https://www.cloudflare.com/en-gb/learning/cdn/what-is-a-cdn/) efficiently across the network, reducing latency for end users by caching data closer to them.  
   2. **Load balancing**: [Distribute incoming traffic](https://developers.cloudflare.com/load-balancing/get-started/quickstart/) across multiple servers or data centers, ensuring optimal resource utilization, preventing single points of failure, and improving the performance of connected systems.  
   3. **Cloudflare Workers**: Our serverless compute platform, [Cloudflare Workers](https://developers.cloudflare.com/workers/), allows data processing at the edge, reducing the need for data to travel long distances and significantly reducing latency. Combined with related services like [Workers KV](https://developers.cloudflare.com/kv/get-started/) and [D1 ↗](https://www.cloudflare.com/en-gb/developer-platform/products/d1/), Cloudflare's edge-based storage solutions enable efficient data management close to the user. Workers KV allows for quick, read-heavy data access, perfect for caching configurations and frequently used data, while D1 provides a serverless SQL database for more robust storage needs. Additionally, Cloudflare's [Durable Objects ↗](https://blog.cloudflare.com/sqlite-in-durable-objects/) help manage stateful interactions at the edge, facilitating real-time data consistency. These tools together allow for seamless data processing, storage and lazy updates to core services, minimizing back-and-forth to centralized servers and ensuring faster, more efficient performance.  
   4. **Workers AI**: [Workers AI](https://developers.cloudflare.com/workers-ai/) is a serverless AI inference platform that allows developers to run machine learning models on Cloudflare's global network. It can be used for real-time data analysis, anomaly detection, and predictive maintenance, providing intelligence at the edge and enhancing the reliability of connected systems.  
   5. **Argo Smart Routing**: [Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/) optimizes path selection by analyzing real-time network conditions, ensuring that data packets follow the fastest and most reliable routes.  
   6. **Cloudflare R2 Storage**: [R2](https://developers.cloudflare.com/r2/) provides cost-effective, high-performance storage for data such as telemetry and sensor logs, allowing frequent access without incurring egress fees.
5. **Origin connections:** Cloudflare is origin agnostic, meaning it can securely connect to a wide range of disparate locations regardless of where the origin server is hosted. These origins could include on-premise servers, datacenters, or cloud service providers (CSPs) like AWS, Azure, or Google Cloud. Whether data needs to flow from public cloud environments or proprietary private systems, Cloudflare can establish secure connections to facilitate efficient data exchange.  
Connections to these origins can be made using a variety of methods based on the specific requirements of the setup. These range from simple public DNS configurations to more advanced options like [Cloudflare Network Interconnect (CNI)](https://developers.cloudflare.com/network-interconnect/) and [cloudflared tunnels](https://developers.cloudflare.com/cloudflare-one/faq/cloudflare-tunnels-faq/#how-can-origin-servers-be-secured-when-using-tunnel). CNI allows for private, direct connectivity between origin locations and Cloudflare, creating a secure layer that keeps data protected as it moves across networks. The cloudflared tunnel creates encrypted tunnels directly from the origin to Cloudflare's network, bypassing public exposure entirely and enhancing both security and reliability. By being origin agnostic and supporting multiple secure connection options, Cloudflare allows businesses to continue using their existing proprietary systems and infrastructure, while benefiting from Cloudflare's performance, security, and scalability features.
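
The edge-storage pattern described above (read-heavy lookups served by a KV-style cache, with a fallback to the core service on a miss) can be sketched as follows. All names and values here are hypothetical; this is a minimal model of the caching behavior, not the Workers KV API:

```python
central_calls = 0
kv_cache = {}

def fetch_from_central(key):
    """Stand-in for a round trip to a centralized server (hypothetical)."""
    global central_calls
    central_calls += 1
    return f"value-for-{key}"

def get_config(key):
    if key not in kv_cache:        # cache miss: one trip back to the core service
        kv_cache[key] = fetch_from_central(key)
    return kv_cache[key]           # cache hit: answered at the edge

get_config("fleet-route-limits")
get_config("fleet-route-limits")   # second read is served from the edge cache
print(central_calls)  # 1
```

The point of the sketch: repeated reads of the same configuration cost only one trip to the central service, which is what minimizes the back-and-forth to centralized servers described above.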

These components work together to deliver an optimized, secure, and reliable solution for connected vehicles and other transportation systems, addressing both fixed-location and roaming device needs. For example, imagine a fleet of connected delivery trucks that use digital tablets for navigation, tracking, and real-time customer interactions. These tablets display delivery updates, allow customers to provide signatures and even enable on-the-spot payments. Cloudflare's network ensures that data to and from the device is updated with minimal latency, allowing drivers to navigate efficiently without delays. Cloudflare's API Shield helps secure any interactions between the tablet and backend systems, protecting customer information and ensuring that payment data is transmitted securely. The system also benefits from Workers running at the edge, which can process data in real time, such as verifying customer signatures with AI without having to send everything back to a central server. This seamless integration of Cloudflare's components helps enhance both operational effectiveness and customer satisfaction.

## Related resources

* [Composable AI Architecture](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-composable/)
* [Secure Application Delivery](https://developers.cloudflare.com/reference-architecture/design-guides/secure-application-delivery/)
* [Preventing DDOS Attacks](https://developers.cloudflare.com/learning-paths/prevent-ddos-attacks/concepts/)
* [Video - Quick API Shield Demo ↗](https://www.youtube.com/watch?v=zzw2jIGcv5A)
* [MTLS at Cloudflare](https://developers.cloudflare.com/learning-paths/mtls/concepts/)


---

---
title: Bring your own IP space to Cloudflare
description: Cloudflare allows enterprises to bring their IP space to the Cloudflare network. This allows them to gain the security and performance of the platform while still appearing to the rest of the world via their own public IP space.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Bring your own IP space to Cloudflare

**Last reviewed:**  over 1 year ago 

## Introduction

Cloudflare brings security and performance to our customers' digital estates. However, one of the characteristics of proxying services is that interactions on the web that go to Cloudflare (DNS queries or requests to SaaS providers, for example) will appear to the world as coming from the Cloudflare IP space. This can create challenges for some enterprises.

For example, partners or other B2B relationships may use the public IP space owned by a customer for attestation and attribution in various transactions. They may look at the resolved address for a public hostname (for example, `www.example.com`) and expect that IP to match a specific range or address known to be owned by the customer.

[Bring Your Own IP (BYOIP)](https://developers.cloudflare.com/byoip/) allows enterprises to bring their IP space to Cloudflare, thus gaining the security and performance of the Cloudflare platform while still appearing to the rest of the world via their own public IP space. This reference architecture diagram highlights the different ways customers can bring their IP space to the Cloudflare network and the benefits that are achieved.

## BYOIP scenario one - Cloudflare proxy services

The default behavior when a DNS query is made to a Cloudflare proxied hostname is to return one of Cloudflare's [default anycast IP addresses ↗](https://www.cloudflare.com/ips/). The traffic is then accelerated, protected, and, if not served by Cloudflare cache, sent to the customer's origin server.

In the diagram below, instead of the default behavior, traffic is still proxied through Cloudflare's application services platform, but DNS queries return an IP address that is owned by the customer, while the traffic still benefits from Cloudflare's anycast network.

There are two different network ranges used in this example:

* `152.3.15.0/24` \- Customer owned IP range that will be associated with the Cloudflare network.
* `152.3.14.0/24` \- Customer owned IP range that will continue to be associated with their origin network.

![Figure 1: Cloudflare announces customer IP range and proxies it to the origin server IP.](https://developers.cloudflare.com/_astro/figure1.BXY13mGX_196m46.svg "Figure 1: Cloudflare announces customer IP range and proxies it to the origin server IP.")

Figure 1: Cloudflare announces customer IP range and proxies it to the origin server IP.

1. In order for Cloudflare to respond to DNS queries with addresses from the customer's space, a Letter of Agency (LOA) must be provided by the customer to Cloudflare, so that the addresses can be provisioned and advertised. This address space (in the example, `152.3.15.0/24`) must be dedicated for Cloudflare's configuration and not used anywhere within the customer environment.
2. The Cloudflare DNS configuration for the origin server `www.abc.com` is configured with the IP address `152.3.14.10/32`.
3. A DNS query for `www.abc.com` is made.
4. Cloudflare returns an address from the BYOIP space previously provided by the customer. In this case, the response is `152.3.15.200`, which is part of the `152.3.15.0/24` prefix.
5. The eyeball sends a request to `152.3.15.200`, which is routed to Cloudflare.
6. Cloudflare proxies the connection, using the SNI (`www.abc.com`) to determine the actual origin IP, `152.3.14.10`. The request is then routed through Cloudflare's proxy services, such as DDoS protection, Web Application Firewall, and Bot Management.
7. Successful requests are sent to origin (if not served by cache) to `152.3.14.10` with a source IP of the Cloudflare network.
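
The two lookups at the heart of this flow can be modeled in a few lines, using the example addresses from the diagram (`152.3.15.0/24` advertised by Cloudflare for the customer, origin at `152.3.14.10`). This is a simplified model of the behavior, not Cloudflare internals:

```python
# Step 4: DNS answers come from the customer's BYOIP range on Cloudflare.
BYOIP_DNS = {"www.abc.com": "152.3.15.200"}
# Step 2: the DNS configuration maps the hostname to the real origin IP.
ORIGIN_BY_SNI = {"www.abc.com": "152.3.14.10"}

def resolve(hostname: str) -> str:
    """Steps 3-4: a DNS query is answered from the BYOIP prefix."""
    return BYOIP_DNS[hostname]

def route_to_origin(sni: str) -> str:
    """Step 6: the TLS SNI identifies the actual origin IP to proxy to."""
    return ORIGIN_BY_SNI[sni]

print(resolve("www.abc.com"))          # 152.3.15.200
print(route_to_origin("www.abc.com"))  # 152.3.14.10
```

The key design point: the IP the eyeball connects to (BYOIP) and the IP the origin listens on are decoupled, with the SNI providing the link between them.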

## BYOIP scenario two - network DDoS protection

Cloudflare is well known for its DDoS mitigation services protecting public websites and APIs. The same technologies can also be used to protect entire networks. Cloudflare's [Magic Transit](https://developers.cloudflare.com/magic-transit/) service offers a cloud-based network DDoS mitigation service for our customers' public IP space.

![Figure 2: Protection against DDoS attacks can be placed in front of the BYOIP range in front of your Cloudflare tunneled network.](https://developers.cloudflare.com/_astro/figure2.D70IrQeq_guoPn.svg "Figure 2: Protection against DDoS attacks can be placed in front of the BYOIP range in front of your Cloudflare tunneled network.")

Figure 2: Protection against DDoS attacks can be placed in front of the BYOIP range in front of your Cloudflare tunneled network.

1. In order for Cloudflare to attract traffic destined for customer network prefixes, a Letter of Agency (LOA) must be provided by the customer to Cloudflare, so that the network prefixes can be provisioned and advertised.
2. Once provisioned, Cloudflare will advertise the customer prefixes to the Internet, attracting traffic destined for those networks to the Cloudflare network.
3. All traffic destined for those prefixes is routed to Cloudflare.
4. DDoS traffic is mitigated by Cloudflare and legitimate traffic is directed back to customer networks via [tunnels](https://developers.cloudflare.com/cloudflare-wan/), or via [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/) (CNI) on ramps to the customer environment.

More detailed information about Magic Transit capabilities can be found in the [Magic Transit Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/magic-transit/).

## Related resources

* [Protect hybrid cloud networks with Cloudflare Magic Transit](https://developers.cloudflare.com/reference-architecture/diagrams/network/protect-hybrid-cloud-networks-with-cloudflare-magic-transit/)
* [Protect public networks with Cloudflare](https://developers.cloudflare.com/reference-architecture/diagrams/network/protect-public-networks-with-cloudflare/)


---

---
title: Optimizing device roaming experience with geolocated IPs
description: Cloudflare can use private mobile networks (APNs) to connect devices roaming across multiple countries through regional Internet breakouts.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Optimizing device roaming experience with geolocated IPs

**Last reviewed:**  over 1 year ago 

## Introduction

A private [Access Point Name ↗](https://en.wikipedia.org/wiki/Access%5FPoint%5FName) (APN) enables devices, like connected vehicles, connected containers, healthcare devices or drones, to be connected while roaming across different countries. The device connects with a SIM or eSIM card to a dedicated network, and as the device moves to a new country, it automatically selects the appropriate private APN for the local provider.

APN traffic, typically managed by a third-party provider such as a telecommunications company, is routed through specific regional Internet breakouts to reach the Internet. This architecture can create challenges around the localization of that traffic. For example, a device roaming in France might have its traffic exit to the Internet from a UK-based Internet breakout. Websites and other Internet services will therefore treat the device as if it were in the UK, delivering content in the wrong language or applying regional restrictions.

In this document, we'll discuss how Cloudflare can be used to solve this problem and will use the example of a service provider using private mobile networks (APNs) to connect devices roaming across multiple countries through regional Internet breakouts. This use case is relevant to global enterprises with regional offices, transportation fleets with connected vehicles, or any organization needing to maintain consistent, secure, and region-specific connectivity for roaming devices.

![Figure 1: Showing how Internet breakouts can present an egress IP that doesn't match the country the device is in.](https://developers.cloudflare.com/_astro/figure1.CJM1DAO-_Z1g51kL.svg "Figure 1: Showing how Internet breakouts can present an egress IP that doesn't match the country the device is in.")

Figure 1: Showing how Internet breakouts can present an egress IP that doesn't match the country the device is in.

## Correctly locate and secure devices by connecting them to the Cloudflare global network

Cloudflare addresses these challenges by routing device traffic from the Internet breakout to our global network, where traffic is processed at a Cloudflare data center close to the Internet breakout. This allows for two benefits:

1. Cloudflare can analyze the traffic, determine its original country of origin, and then ensure that traffic egresses onto the Internet from an IP address that is geolocated to that same country.
2. Cloudflare can filter traffic based on [secure web gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) policies, allowing you to protect devices from accessing risky Internet hosts. It also allows you to lock down device access to specific Internet hosts, such as only allowing devices to make requests to the APIs that support their function.

The architecture diagram below provides a visual representation of this solution, showing how traffic from various countries — routed via different mobile network APN — is directed through Internet breakouts. Cloudflare optimizes and secures the Internet connection by leveraging [geolocated public IPs](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/), ensuring that the traffic is secure and regionally localized to the device location.

This diagram is intended for network engineers, IT architects, and decision-makers looking to improve service relevance and performance for end-users. Key use cases include multinational corporations aiming to provide faster, region-specific Internet access and services in users' native languages, ensuring a superior user experience across diverse geographical locations.

![Figure 2: Using Cloudflare you can ensure the egress IP as seen by Internet sites matches the country the device is roaming in.](https://developers.cloudflare.com/_astro/figure2.7C-teMEC_Z2qI5pf.svg "Figure 2: Using Cloudflare you can ensure the egress IP as seen by Internet sites matches the country the device is roaming in.")

Figure 2: Using Cloudflare you can ensure the egress IP as seen by Internet sites matches the country the device is roaming in.

_Note: Labels in this image may reflect a previous product name._

1. **Data collection and regional routing**.  
Traffic from roaming devices is securely collected through the service provider's private APN and routed to third-party regional Internet breakouts. Each country in the network is assigned a specific RFC1918 IP subnet, simplifying traffic segmentation and management.
2. **Traffic sorting**.  
The Internet breakout will categorize the traffic into separate buckets to identify its country of origin - in this example each country's APN is given a dedicated private IP subnet.
3. **Connectivity options**.  
Cloudflare supports multiple connection methods to integrate with the regional breakout architecture:  
   * [**GRE tunnels**](https://developers.cloudflare.com/cloudflare-wan/reference/gre-ipsec-tunnels/) for ease of use.  
   * [**IPsec tunnels**](https://developers.cloudflare.com/cloudflare-wan/reference/gre-ipsec-tunnels/) for encrypted communication.  
   * [**Cloudflare Network Interconnect (CNI)**](https://developers.cloudflare.com/cloudflare-wan/network-interconnect/) for direct, high-performance connections.
4. **Localized Internet breakout using [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/) (formerly Magic WAN) and [Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/)**.  
With Cloudflare WAN and using [dedicated egress](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/) with our [secure web gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), Cloudflare enables Internet traffic to exit with source IPs registered in the desired country. This ensures end-users benefit from geolocalized content and services, such as access to region-specific platforms, tailored to their location.
5. **Advanced security and filtering options**.  
Cloudflare enhances the security of Internet breakouts with advanced features, including:  
   * [**DNS filtering**](https://developers.cloudflare.com/cloudflare-one/traffic-policies/get-started/dns/) to manage and block access to unwanted, high risk domains.  
   * [**Network firewalling**](https://developers.cloudflare.com/cloudflare-one/traffic-policies/network-policies/) for enforcing detailed security policies. For example, you can restrict vehicles to only send data over the Internet to a designated set of cloud telemetry systems while blocking all other traffic.  
   * [**Full SSL inspection**](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/tls-decryption/) to protect against sophisticated threats and provide traffic visibility on encrypted traffic. It enables additional protections such as antivirus scanning, malware prevention, and file sandboxing.
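
Steps 1-4 above can be sketched as a lookup from a device's APN subnet to a country-geolocated egress IP. All subnets and egress addresses here are illustrative (the RFC1918 subnets stand in for the per-country APN ranges in the diagram):

```python
import ipaddress

# Hypothetical mapping: each country's APN is given a dedicated RFC1918
# subnet (step 2), and traffic egresses from a dedicated IP geolocated to
# that country (step 4).
APN_SUBNETS = {
    "FR": ipaddress.ip_network("10.1.0.0/16"),
    "DE": ipaddress.ip_network("10.2.0.0/16"),
}
EGRESS_IP = {"FR": "203.0.113.10", "DE": "198.51.100.20"}

def egress_for(device_ip: str) -> str:
    """Pick the geolocated egress IP from the device's APN subnet."""
    addr = ipaddress.ip_address(device_ip)
    for country, net in APN_SUBNETS.items():
        if addr in net:
            return EGRESS_IP[country]
    raise LookupError(f"{device_ip} matches no known APN subnet")

print(egress_for("10.1.42.7"))  # 203.0.113.10 (French egress IP)
```

The private source subnet, not the breakout's own location, determines which egress IP the Internet sees, which is what keeps content regionally localized to the device.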

## Related resources

* [Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/)
* [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/)
* [Cloudflare servers don't own IPs anymore ↗](https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-anymore/)


---

---
title: Protect data center networks
description: This document focuses on the reference architecture of using Cloudflare WAN, Cloudflare Network Firewall, and Cloudflare Gateway services.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Protect data center networks

**Last reviewed:**  over 1 year ago 

## Introduction

Network security teams have traditionally used various network firewalls or security appliances at the perimeter to protect their data center networks against both external and internal threats, for example, DDoS attacks, malware, ransomware, phishing, leaking of sensitive information, etc. In addition, the same or additional firewall or security appliances are deployed at the [DMZ ↗](https://en.wikipedia.org/wiki/DMZ%5F%28computing%29) or core layer of the data center networks to control and secure internal private network traffic routed between multiple data center sites across their wide-area network (WAN).

But these firewalls and security appliances are often expensive, complex to configure and manage, difficult to scale to handle large attacks, and require upgrades and patches to defend against newly discovered threats and vulnerabilities.

[Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/), [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/) (formerly Magic WAN), [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) and [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) services running natively on [Cloudflare's massive global network ↗](https://www.cloudflare.com/network/) provide solutions to all the shortcomings described above and more. These services offer in-line, scalable and performant global protection for your data center networks, all from a single cloud network platform.

* [Magic Transit ↗](https://www.cloudflare.com/network-services/products/magic-transit/) provides instant detection and mitigation against network-layer DDoS attacks on your public, Internet-facing networks.
* [Cloudflare WAN ↗](https://www.cloudflare.com/network-services/products/magic-wan/) provides any-to-any, hybrid/multi-cloud secure connectivity between your private, enterprise networks.
* [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) is a cloud-native network firewall service that can be used to filter traffic that is routed to and from your networks that are protected by Magic Transit. It also supports functionalities such as [Intrusion Detection](https://developers.cloudflare.com/cloudflare-network-firewall/about/ids/) (IDS) and [packet capture](https://developers.cloudflare.com/cloudflare-network-firewall/packet-captures/).
* [Gateway ↗](https://www.cloudflare.com/zero-trust/products/gateway/) is a secure web gateway (SWG) service that allows you to inspect and control both Internet-bound traffic that is originated from your networks, as well as private network-to-private network traffic (that is, east-west), by proxying such traffic through Cloudflare's global network while applying DNS, network and HTTP based [policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/).

This document focuses specifically on reference architectures that use the Cloudflare Magic Transit, Cloudflare WAN, Cloudflare Network Firewall and Cloudflare Gateway services to protect both external and internal communications to your data center networks. For details on how each of these services works and how it can be architected for various use cases, see the linked resources at the end of the document.

To illustrate the architecture and how it works, the following diagrams visualize an example corporation with a set of data center networks that are either public-facing (connecting to users on the Internet) or private and internal-facing (used for communication within the enterprise). These networks are deployed at two on-premises locations. The prefixes of the public-facing networks are to be protected by Cloudflare Magic Transit.

| Data center 1                       | Data center 2                         |
| ----------------------------------- | ------------------------------------- |
| Public-facing network: 192.0.2.0/24 | Public-facing network: 203.0.113.0/24 |
| Private network: 192.168.1.0/24     | Private network: 172.16.2.0/24        |

The edge routers at each data center connect to the Cloudflare network via two Direct [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/) (CNI) connections: direct, private links between your network and Cloudflare's. One Direct CNI connection carries public-facing network traffic, while the other carries private network traffic. Optionally, you can carry both public and private network traffic over a single CNI connection, but many organizations prefer to transport external and internal network traffic over separate connections as part of their security practice.

* For data center 1, CNI connection 1 is used to transport public-facing network traffic and connection 2 is used to transport private network traffic.
* For data center 2, CNI connection 3 is used to transport public-facing network traffic and connection 4 is used to transport private network traffic.
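The traffic split above amounts to a policy-routing decision at each edge router: look up which class of network the destination belongs to, then pick the matching CNI. A minimal sketch of that lookup using Python's standard `ipaddress` module (the prefixes come from the table above; the CNI labels are illustrative, not a vendor configuration):

```python
from ipaddress import ip_address, ip_network

# Networks from the example tables; CNI labels match the bullets above.
PUBLIC_CNI = {
    ip_network("192.0.2.0/24"): "CNI-1",    # data center 1, public-facing
    ip_network("203.0.113.0/24"): "CNI-3",  # data center 2, public-facing
}
PRIVATE_CNI = {
    ip_network("192.168.1.0/24"): "CNI-2",  # data center 1, private
    ip_network("172.16.2.0/24"): "CNI-4",   # data center 2, private
}

def select_cni(dst: str):
    """Return the CNI that carries traffic toward a destination address."""
    addr = ip_address(dst)
    for table in (PUBLIC_CNI, PRIVATE_CNI):
        for net, cni in table.items():
            if addr in net:
                return cni
    return None  # destination is not one of the example networks
```

For example, `select_cni("192.0.2.10")` maps to `"CNI-1"`, while an address outside every protected prefix returns `None`.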

## Protect inbound traffic to public-facing networks

The reference architecture diagram below illustrates how Cloudflare Magic Transit and Cloudflare Network Firewall can be used to protect the data centers' public-facing networks from inbound traffic originating from the Internet.

![Figure 1. Protect Public-facing Networks from Inbound Traffic.](https://developers.cloudflare.com/_astro/figure1.ByCLqfND_1EfOWx.svg "Figure 1. Protect Public-facing Networks from Inbound Traffic.")

Figure 1\. Protect Public-facing Networks from Inbound Traffic.

_Note: Labels in this image may reflect a previous product name._

1. Using Border Gateway Protocol ([BGP ↗](https://www.cloudflare.com/learning/security/glossary/what-is-bgp/)) and [IP anycast ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/), Cloudflare advertises the customer's protected IP prefixes to the Internet from all of [Cloudflare's global data centers ↗](https://www.cloudflare.com/network/). At the same time, on-premises network(s) would stop advertising the same exact prefixes from their respective on-premises border routers. This ensures that all traffic passes through Cloudflare for Magic Transit DDoS protection and policy enforcement before being delivered to the customer's data center. Internet traffic destined to these protected IP prefixes will always be routed to the Cloudflare data center that is closest to the source of the traffic. Optionally, you could advertise less-specific IP prefixes from the border routers to the Internet. This way, in the unlikely event of a Magic Transit service failure, traffic can be quickly re-routed directly to network locations from the Internet.
2. Traffic originating from the Internet and destined to the protected IP prefixes is ingested into the global Cloudflare network.
3. All DDoS attack traffic is mitigated in-line, close to the sources, at every Cloudflare data center using advanced and automated [DDoS mitigation](https://developers.cloudflare.com/ddos-protection/) technologies.
4. Traffic that passes DDoS mitigation is subjected to additional network firewall filtering using [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/).
5. Clean, filtered traffic is routed to the protected networks through Direct [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/) (CNI) connections.
6. There are two ways to route server return traffic back to the clients. One is to route it natively out of the data center and onto the Internet, bypassing the Cloudflare network. This method is called Direct Server Return (DSR). It results in asymmetric routing for the bi-directional traffic, which may cause problems with network security and traffic filtering when there are stateful firewalls or NAT devices in the network path, either through other parts of the data center or between the data center and the Internet. Take care to ensure no such issue exists in your network. The other way is to symmetrically route server return traffic back through the Cloudflare network, over the same connection that carries the client-to-server traffic, using [Magic Transit Egress](https://developers.cloudflare.com/magic-transit/reference/egress/). This method is depicted in the diagram above, where the server return traffic is routed to the Cloudflare network via the same CNIs that transport public-facing network traffic from Cloudflare to the data center, using routing techniques such as policy-based routing (PBR) at your sites.
7. Magic Transit Egress traffic is subject to Cloudflare Network Firewall filtering before being routed out to the Internet towards the users.
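The less-specific fallback described in step 1 works because Internet routing always prefers the most specific matching prefix: while Cloudflare advertises the /24, that route wins; if the advertisement is withdrawn, the broader prefix from the origin takes over. A toy longest-prefix-match sketch (hypothetical route table and next-hop labels; this models only the selection rule, not BGP itself):

```python
from ipaddress import ip_address, ip_network

def best_route(dst, routes):
    """Longest-prefix match: the most specific matching route wins."""
    addr = ip_address(dst)
    matches = [(net, nh) for net, nh in routes if addr in net]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = [
    (ip_network("192.0.2.0/24"), "cloudflare-anycast"),  # Magic Transit /24
    (ip_network("192.0.0.0/16"), "origin-direct"),       # less-specific fallback
]
```

With both routes present, `best_route("192.0.2.10", routes)` selects the Cloudflare path; remove the /24 and the same lookup falls back to `origin-direct`.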

## Protect Internet access from public-facing networks

The reference architecture diagram below illustrates how Cloudflare services - Magic Transit (Egress), Cloudflare Network Firewall, and Cloudflare Gateway can be used to protect outbound Internet traffic originating from the data centers' public-facing networks (that is, servers with public IP addresses).

![Figure 2. Protect outbound traffic from public-facing networks.](https://developers.cloudflare.com/_astro/figure2.CWqDwBZ8_Zz0dof.svg "Figure 2. Protect outbound traffic from public-facing networks.")

Figure 2\. Protect outbound traffic from public-facing networks.

_Note: Labels in this image may reflect a previous product name._

1. Each site network routes outbound Internet traffic originating from the public-facing networks to Cloudflare, via the same CNIs that inbound traffic traverses. This can be done at your site through routing techniques of your choice, such as policy based routing (PBR).
2. Upon entering the Cloudflare network, outbound Internet traffic is first routed through Cloudflare Network Firewall where it is subject to any configured network firewall policies.
3. Outbound Internet traffic is subsequently sent to [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), our secure web gateway service where various [policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) enforce a comprehensive set of security and control measures on the outbound traffic, ensuring the utmost protection for your networks. For example, Gateway DNS and HTTP policies can both be configured to prevent your servers from connecting to questionable Internet sites and from downloading malware or other malicious content.
4. Once traffic clears inspection, Gateway proxies the outbound traffic to its destinations on the Internet. The source IP addresses of the outbound traffic are Cloudflare-owned IP addresses associated with the Gateway service; if you prefer, you can instead purchase and configure your [own egress IP](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/).
5. Return traffic from the Internet, destined to Cloudflare's IP addresses linked to the Gateway service, is routed into Cloudflare's global network.
6. Traffic is inspected against Gateway policies.
7. Return traffic that passes Gateway inspection is routed to Cloudflare Network Firewall for further packet filtering.
8. Return traffic that passes Cloudflare Network Firewall filtering is routed from Cloudflare to your network locations via CNIs that transport public-facing network traffic.

## Protect site-to-site, inter-data center, private network traffic

The reference architecture diagrams below illustrate how Cloudflare services — Cloudflare WAN, Cloudflare Network Firewall and Cloudflare Gateway — can be used to protect site-to-site, inter-data center traffic between your private networks.

**Site to Site Private Network Traffic Connectivity**

First, let us examine the use case where you do not intend to subject site-to-site private network traffic to the Cloudflare Gateway proxy service and simply route it using the Cloudflare WAN service.

![Figure 3.1. Protect inter-data center non-gateway-proxied traffic between private networks.](https://developers.cloudflare.com/_astro/figure3.1.Bcrim4pP_2v3a3D.svg "Figure 3.1. Protect inter-data center non-gateway-proxied traffic between private networks.")

Figure 3.1\. Protect inter-data center non-gateway-proxied traffic between private networks.

_Note: Labels in this image may reflect a previous product name._

1. Each site routes site-to-site private network traffic, destined to the other data center location, to Cloudflare WAN via the corresponding CNI connections. This can be done at your site through routing techniques of your choice, such as policy based routing (PBR).
2. Upon entering the Cloudflare network, traffic is routed through Cloudflare Network Firewall.
3. Cloudflare Network Firewall subjects traffic to any configured network firewall policies.
4. Traffic that clears the Cloudflare Network Firewall rules and is not intended to be further proxied by Cloudflare Gateway service, is routed back to the destination network via the corresponding CNI.
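The steering decision in step 1 is again a simple classification at each site's edge: if the destination falls inside the other site's private prefix, the packet goes onto the private-traffic CNI toward Cloudflare WAN. A rough sketch (prefixes from the earlier table; site labels hypothetical):

```python
from ipaddress import ip_address, ip_network

# Each site steers traffic bound for the *other* site's private network
# onto its private-traffic CNI (see the earlier table of example prefixes).
REMOTE_PRIVATE = {
    "dc1": ip_network("172.16.2.0/24"),   # from DC1, DC2's private network
    "dc2": ip_network("192.168.1.0/24"),  # from DC2, DC1's private network
}

def via_cloudflare_wan(site: str, dst: str) -> bool:
    """True when this site's PBR should hand the packet to Cloudflare WAN."""
    return ip_address(dst) in REMOTE_PRIVATE[site]
```

Local traffic stays local: `via_cloudflare_wan("dc1", "192.168.1.5")` is `False`, because that destination is inside DC1's own private network.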

**Site to Site Private Network Traffic with Application Level Security Controls**

For the use case where you do want to apply application-level policy for fine-grained control and security on certain private network traffic, you can route and proxy such traffic through the Cloudflare WAN and Cloudflare Gateway services. The following diagram illustrates the architecture and packet flow of this use case.

![Figure 3.2. Protect inter-data center gateway-proxied traffic between private networks.](https://developers.cloudflare.com/_astro/figure3.2.D9WLCVnf_Z2oL35A.svg "Figure 3.2. Protect inter-data center gateway-proxied traffic between private networks.")

Figure 3.2\. Protect inter-data center gateway-proxied traffic between private networks.

_Note: Labels in this image may reflect a previous product name._

1. Each site routes private network traffic destined to the other data center location to Cloudflare WAN via the corresponding CNI connections. This can be done at your site through routing techniques of your choice, such as policy based routing (PBR).
2. Upon entering the Cloudflare network, traffic is routed through Cloudflare Network Firewall where it is subject to any configured network firewall policies.
3. After clearing Cloudflare Network Firewall, traffic is subsequently routed to [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), our secure web gateway service.
4. Cloudflare Gateway subjects traffic to any configured L3-7 [policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) that enforce a comprehensive set of security and control measures, ensuring the utmost protection for your networks. Once traffic clears inspection, Gateway proxies the traffic to its destination private network. The source IP addresses of the proxied traffic are the Cloudflare owned IP addresses associated with the Gateway service.
5. The proxied traffic, en-route to its destination private network, is routed through Cloudflare Network Firewall once again for further packet filtering.
6. Traffic that passes Cloudflare Network Firewall filtering is routed from Cloudflare to your network locations via the corresponding CNIs that transport private network traffic.

## Protect outbound Internet traffic from private networks

The reference architecture diagram below illustrates how Cloudflare services — Cloudflare WAN, Cloudflare Network Firewall and Cloudflare Gateway — can be used to protect outbound Internet traffic originating from the data centers' private networks. The use cases and the protection provided to the servers on the private networks are very similar to those described in the previous section about protecting Internet access from public-facing networks. The differences are that here the servers have private IP addresses and the Cloudflare WAN service is used, whereas in the previous section the servers are assigned public IP addresses and the Magic Transit service is used.

![Figure 4. Protect outbound traffic from private networks.](https://developers.cloudflare.com/_astro/figure4.Chl4DAXi_1k88c5.svg "Figure 4. Protect outbound traffic from private networks.")

Figure 4\. Protect outbound traffic from private networks.

_Note: Labels in this image may reflect a previous product name._

1. Each site routes outbound Internet traffic originating from its private networks to Cloudflare WAN via the corresponding CNI connections. This can be done at your site through routing techniques of your choice, such as policy based routing (PBR).
2. Upon entering the Cloudflare network, outbound Internet traffic is first routed through Cloudflare Network Firewall where it is subject to any configured network firewall policies.
3. Traffic that clears Cloudflare Network Firewall is subsequently sent to [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), our secure web gateway service where any configured L3-7 [policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) enforce a comprehensive set of security and control measures on the outbound traffic, ensuring the utmost protection for your networks.
4. Once traffic clears inspection, Gateway proxies the outbound traffic to its destinations on the Internet. The source IP addresses of the outbound traffic are the Cloudflare-owned IP addresses associated with the Gateway service.
5. Return traffic from the Internet, destined to Cloudflare's IP addresses linked to the Gateway service, is routed into Cloudflare's global network.
6. Traffic is inspected against Gateway policies.
7. Return traffic that passes Gateway inspection is routed to Cloudflare Network Firewall for further packet filtering.
8. Return traffic that passes Cloudflare Network Firewall filtering is routed from Cloudflare to your network locations via CNIs that transport private network traffic.

## Related Resources

* [Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/)
* [Cloudflare DDoS Protection](https://developers.cloudflare.com/ddos-protection/)
* [Magic Transit Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/magic-transit/)
* [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/)
* [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/)
* [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/)
* [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/)
* [Integration of Cloudflare Magic services and Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-wan/zero-trust/cloudflare-gateway/)


---

---
title: Protect hybrid cloud networks with Cloudflare Magic Transit
description: Cloudflare Magic Transit provides cloud-native, in-line DDoS protection, and traffic acceleration for all Internet-facing networks.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Protect hybrid cloud networks with Cloudflare Magic Transit

**Last reviewed:**  over 1 year ago 

## Introduction

Protecting network infrastructure from DDoS attacks demands a unique combination of strength and speed. Volumetric attacks can easily overwhelm on-premise hardware-based DDoS protection appliances and their bandwidth-constrained Internet links.

A cloud-based DDoS protection solution is more agile, efficient and scalable but most solutions on the market lack the global network footprint and scrubbing center density required to maintain good network performance. DDoS protection solutions with such shortcomings require redirecting customer traffic through sparsely located scrubbing centers that are often thousands of miles away from where the traffic was originally ingested into the network, adding significant latency that inevitably impacts the end-to-end network performance and throughput.

Cloudflare Magic Transit provides cloud-native, in-line DDoS protection and traffic acceleration for all your Internet-facing networks that serve incoming user traffic from the Internet, regardless of where they are deployed, whether on-premise, in the cloud, or a combination of the two (that is, hybrid architecture). With data centers spanning hundreds of cities and with hundreds of Tbps in DDoS mitigation capacity, Magic Transit can detect and mitigate attacks close to their source of origin in under 3 seconds globally.

The details of how Magic Transit works and how it can be architected for various use cases are documented in the related resources at the end of this document - [Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/) and [Magic Transit Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/magic-transit/).

This document focuses specifically on reference architectures for a few common scenarios of using Magic Transit to protect hybrid cloud network infrastructure.

## Scenario 1 - Customer BYOIP for both on-premise and cloud network deployments

In this scenario, there are multiple /24 or larger network prefixes that need to be protected by Magic Transit. These networks are deployed at on-premise locations as well as across multiple cloud providers’ regions.

For illustration purposes, below is an example list of the locations of Internet-facing networks and their respective IP prefixes.

```
AWS VPC: 192.0.2.0/24
GCP VPC: 198.51.100.0/24
Azure vNet: 203.0.113.0/26
On-premise data center 1: 203.0.113.64/26
On-premise data center 2: 203.0.113.128/25
```
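Note that the three 203.0.113.x allocations above are contiguous pieces of a single /24. Since Magic Transit BYOIP prefixes are /24 or larger, the natural reading is that the aggregate /24 is advertised, with the /26s and /25 as internal subdivisions. Python's standard `ipaddress` module can confirm the aggregation:

```python
from ipaddress import collapse_addresses, ip_network

pieces = [
    ip_network("203.0.113.0/26"),    # Azure vNet
    ip_network("203.0.113.64/26"),   # on-premise data center 1
    ip_network("203.0.113.128/25"),  # on-premise data center 2
]

# Adjacent subnets merge back into the single /24 aggregate.
aggregated = list(collapse_addresses(pieces))
print(aggregated)  # [IPv4Network('203.0.113.0/24')]
```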

![Figure 1: Customer BYOIP for all Cloud and on-premises networks.](https://developers.cloudflare.com/_astro/figure-1.TMcvFAT6_ZcHntR.svg "Figure 1: Customer BYOIP for all Cloud and on-premises networks.")

Figure 1: Customer BYOIP for all Cloud and on-premises networks.

_Note: Labels in this image may reflect a previous product name._

1. Using Border Gateway Protocol ([BGP ↗](https://www.cloudflare.com/learning/security/glossary/what-is-bgp/)), Cloudflare advertises customer’s protected IP prefixes to the Internet from all of Cloudflare’s global data centers, enabling [IP Anycast ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/), so that Internet traffic destined to these protected IP prefixes will always be routed to the Cloudflare data center that is closest to the source of the traffic.

At the same time, on-premise network(s) and cloud provider network(s) would stop advertising the same exact prefixes from their respective on-premises border routers and cloud border routers. This ensures all Internet traffic destined to the Magic Transit protected IP prefixes will be routed through the Cloudflare network.

You can instead advertise less-specific IP prefixes from your border routers to the Internet. This way, in the unlikely event of a Magic Transit service failure, traffic can be quickly re-routed directly to your network locations from the Internet.

2. Traffic originating from the Internet and destined to the protected IP prefixes is ingested into the Cloudflare network globally.
3. All traffic is scrubbed, that is, DDoS attack traffic is removed and mitigated in-line at every Cloudflare data center using advanced and automated [DDoS mitigation](https://developers.cloudflare.com/ddos-protection/) technologies.
4. Traffic that passes DDoS mitigation is subjected to additional network firewall filtering using the included [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) service.
5. Clean, filtered traffic is routed to the protected networks either through private connections called [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/) (CNI), or through the public Internet using standard IP tunnels such as GRE or IPsec tunnels. More specific details on Magic Transit IP tunnels can be found in the [Magic Transit Tunnels and Encapsulation documentation](https://developers.cloudflare.com/magic-transit/reference/gre-ipsec-tunnels/).
6. The server return traffic from protected IP prefixes to Internet users is routed directly over the Internet from the hybrid cloud locations, bypassing the Cloudflare network. This is called direct server return (DSR). Note that you must have BYOIP with your cloud service provider to use DSR.

With Magic Transit service being the single, consolidated cloud-native network protection solution running globally on the Cloudflare network, your global, hybrid cloud based Internet-facing networks are well protected from DDoS and other malicious attacks, regardless where and what environments they are deployed in.

Another benefit of such a consolidated, cloud-native network protection solution is that you can easily migrate or relocate Internet-facing networks between the various hybrid cloud environments without ever losing protection for these networks, simply by changing routes in the Magic Transit configuration to route traffic to the new location.

## Scenario 2 - Customer leases IP addresses from Cloudflare for both on-premise and cloud network deployments

In the case where you do not own any network prefixes that are equal to or larger than /24, but would still like to use Magic Transit to protect your networks, you can [lease IPs](https://developers.cloudflare.com/magic-transit/cloudflare-ips/) from Cloudflare to assign to these smaller networks. The following diagram illustrates the architecture of such a deployment. Similar to the previous scenario, these networks are deployed at on-premise locations as well as across multiple cloud providers' regions.

For illustration purposes, below is an example list of the locations of Internet-facing networks and their respective IP prefixes.

```
AWS VPC: 192.0.2.0/28
GCP VPC: 192.0.2.16/28
Azure vNet: 192.0.2.32/28
On-premise data center 1: 192.0.2.48/28
On-premise data center 2: 192.0.2.64/28
```
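The dividing line between this scenario and the previous one is prefix size: Magic Transit BYOIP requires a /24 or larger, so smaller allocations like the /28s above use leased Cloudflare IPs instead. A one-line eligibility check (the /24 threshold comes from the text above):

```python
from ipaddress import ip_network

def byoip_eligible(prefix: str) -> bool:
    """/24 or larger (prefix length <= 24) can be brought to Magic Transit."""
    return ip_network(prefix).prefixlen <= 24

byoip_eligible("198.51.100.0/24")  # True  -> bring your own IPs
byoip_eligible("192.0.2.16/28")    # False -> lease from Cloudflare
```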

![Figure 2: Customer lease IPs from Cloudflare for both on-premise and cloud network deployments.](https://developers.cloudflare.com/_astro/figure-2.DGu8Lrrt_Z2lxTsf.svg "Figure 2: Customer lease IPs from Cloudflare for both on-premise and cloud network deployments.")

Figure 2: Customer lease IPs from Cloudflare for both on-premise and cloud network deployments.

_Note: Labels in this image may reflect a previous product name._

1. Using Border Gateway Protocol (BGP), Cloudflare advertises its owned IP prefixes to the Internet, which includes the IP addresses that you lease.

Steps 2 through 5 are the same as those of scenario 1 above.

6. The server return traffic, with leased Cloudflare IP addresses as its source IP addresses, cannot be routed to the Internet directly via the various sites' border routers. It has to be routed back through the Cloudflare network to reach the Internet, using [Magic Transit Egress](https://developers.cloudflare.com/magic-transit/reference/egress/) functionality. It can be sent to the Cloudflare network via the same CNIs or IP tunnels that the ingress traffic traversed, using routing techniques such as policy-based routing (PBR) at your sites.
7. Magic Transit Egress traffic is subject to Network Firewall filtering before being routed out to the Internet towards the users.

## Scenario 3 - Customer BYOIP for on-premise networks and leased Cloudflare IP addresses for cloud network deployments

In this scenario, you deploy larger on-premise networks and smaller cloud-based networks. You assign your own /24 IP prefixes to the on-premise networks while leasing IPs from Cloudflare for your cloud-based networks.

For illustration purposes, below is an example list of the locations of Internet-facing networks and their respective IP prefixes.

```
AWS VPC: 192.0.2.0/28
GCP VPC: 192.0.2.16/28
Azure vNet: 192.0.2.32/28
On-premise data center 1: 198.51.100.0/24
On-premise data center 2: 203.0.113.0/24
```

![Figure 3: Customer BYOIP for on-premise networks and lease IP from Cloudflare for cloud network deployments.](https://developers.cloudflare.com/_astro/figure-3.h9hJOj7g_Z2d2T9O.svg "Figure 3: Customer BYOIP for on-premise networks and lease IP from Cloudflare for cloud network deployments.")

Figure 3: Customer BYOIP for on-premise networks and lease IP from Cloudflare for cloud network deployments.

_Note: Labels in this image may reflect a previous product name._

1. Using Border Gateway Protocol (BGP), Cloudflare advertises both customer-owned and Cloudflare-owned IP prefixes to the Internet.

Steps 2 through 5 are the same as those of scenario 1 above.

6. The server return traffic from your cloud-based networks is routed back through the Cloudflare network to reach the Internet, using Magic Transit Egress functionality. It can be sent to the Cloudflare network via the same CNIs or IP tunnels that the ingress traffic traversed, using routing techniques such as policy-based routing (PBR) at your physical sites.
7. This Magic Transit Egress traffic is subject to Network Firewall filtering before being routed out to the Internet towards the users.
8. The server return traffic from on-premise networks is routed directly to the Internet users (direct server return, DSR), bypassing the Cloudflare network.

_Note_: Alternatively, you can choose to also route the on-premise networks' server return traffic through Cloudflare via policy-based routing and Magic Transit Egress functionality. This adds an additional layer of security and control for the egress traffic with Network Firewall filtering. For example, it can block traffic destined to questionable IP addresses and sites, prohibited destinations, or countries.
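Putting the return-path rules of this scenario together: traffic sourced from leased Cloudflare addresses must hairpin back through Magic Transit Egress, while BYOIP-sourced traffic may use DSR, or optionally egress through Cloudflare as the note above describes. A sketch of that per-prefix decision (prefixes from the example list; path labels illustrative):

```python
from ipaddress import ip_address, ip_network

LEASED = [ip_network("192.0.2.0/28"), ip_network("192.0.2.16/28"),
          ip_network("192.0.2.32/28")]                                 # cloud
BYOIP = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]  # on-premise

def return_path(src: str, prefer_egress: bool = False) -> str:
    """Decide how server return traffic leaves the network."""
    addr = ip_address(src)
    if any(addr in net for net in LEASED):
        return "magic-transit-egress"  # leased Cloudflare IPs cannot DSR
    if any(addr in net for net in BYOIP):
        # BYOIP may DSR, or optionally hairpin through Cloudflare anyway.
        return "magic-transit-egress" if prefer_egress else "dsr"
    raise ValueError("source is not a protected prefix")
```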

## Related resources

* [Magic Transit Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/magic-transit/)
* [Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/)
* [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/)
* [Cloudflare DDoS Protection](https://developers.cloudflare.com/ddos-protection/)
* [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/)
* [Cloudflare IPsec Device Compatibility Matrix](https://developers.cloudflare.com/cloudflare-wan/reference/device-compatibility/)
* [Cloudflare Magic Transit Leased IP](https://developers.cloudflare.com/magic-transit/cloudflare-ips/)


---

---
title: Protect public networks with Cloudflare
description: This document explains how Cloudflare Magic Transit, Cloudflare Network Firewall, and Gateway work. The products offer in-line, automatic, scalable network protection for all Internet-facing networks. The architecture is designed to protect public networks across multiple clouds and on-premises.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Protect public networks with Cloudflare

**Last reviewed:**  over 1 year ago 

## Introduction

Network security teams have traditionally used various network firewalls or security appliances at the perimeter of their network to protect their public-facing networks against both external and internal threats like DDoS attacks, malware, ransomware, phishing, and leaking of sensitive information. However, these firewalls and security appliances are often expensive, complex to configure and manage, difficult to scale to handle large attacks, and lack the flexibility to quickly incorporate upgrades and patches to defend against newly discovered threats and vulnerabilities.

[Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/), [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/), and [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) services running natively on [Cloudflare's massive global network ↗](https://www.cloudflare.com/network/) provide solutions to all the shortcomings described above and more. These services offer in-line, automatic, scalable network protection for all your Internet-facing networks, without slowing down performance, regardless of where they are deployed, whether on-premises, in the cloud, or a combination of the two (that is, a hybrid architecture).

* [Magic Transit ↗](https://www.cloudflare.com/network-services/products/magic-transit/) provides instant detection and mitigation against network-layer DDoS attacks on your public, Internet-facing networks.
* [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) is a cloud-native network firewall service that can be used to filter traffic that is routed to and from your networks that are protected by Magic Transit. It also supports functionalities such as [Intrusion Detection](https://developers.cloudflare.com/cloudflare-network-firewall/about/ids/) (IDS) and [packet capture](https://developers.cloudflare.com/cloudflare-network-firewall/packet-captures/).
* [Gateway ↗](https://www.cloudflare.com/zero-trust/products/gateway/) is a secure web gateway (SWG) service that allows you to inspect and control Internet-bound traffic originating from your network by proxying it through Cloudflare's global network while applying DNS, network, and HTTP-based [policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/).

The details of how Magic Transit, Cloudflare Network Firewall, and Gateway work, and how these products can be architected for various use cases, can be found in the linked resources at the end of this document. This document focuses specifically on reference architectures that use Cloudflare Magic Transit, Cloudflare Network Firewall, and Cloudflare Gateway to protect public, Internet-facing network infrastructure.

To illustrate the architecture and how it works, the following diagrams visualize an example corporation with a set of public-facing networks. These networks are deployed at five distinct locations, both on-premises and across multiple public clouds.

```
AWS VPC: 192.0.2.0/24
GCP VPC: 198.51.100.0/24
Azure vNet: 203.0.113.0/26
On-premises data center 1: 203.0.113.64/26
On-premises data center 2: 203.0.113.128/25
```

## Protect inbound network traffic

The reference architecture diagram below illustrates how Cloudflare Magic Transit and Cloudflare Network Firewall can be used to protect the public networks from inbound traffic originating from the Internet.

![Figure 1: Protect the public networks from inbound traffic originating from the Internet.](https://developers.cloudflare.com/_astro/figure-1.ChNCzrbx_Z2lBXDV.svg "Figure 1: Protect the public networks from inbound traffic originating from the Internet.")

Figure 1: Protect the public networks from inbound traffic originating from the Internet.

_Note: Labels in this image may reflect a previous product name._

1. Using Border Gateway Protocol ([BGP ↗](https://www.cloudflare.com/learning/security/glossary/what-is-bgp/)) and [IP anycast ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/), Cloudflare advertises the customer's protected IP prefixes to the Internet from all of Cloudflare's global data centers. Internet traffic destined for these protected IP prefixes is always routed to the Cloudflare data center closest to the source of the traffic.  
At the same time, the on-premises network(s) and cloud provider network(s) stop advertising those exact prefixes from their respective on-premises and cloud border routers. This ensures all Internet traffic destined for the Magic Transit protected IP prefixes is routed through the Cloudflare network.  
You can instead advertise less-specific IP prefixes from the border routers to the Internet. This way, in the unlikely event of a Magic Transit service failure, traffic can be quickly re-routed from the Internet directly to your network locations.
2. Traffic originating from the Internet and destined to the protected IP prefixes is ingested into the global Cloudflare network.
3. All DDoS attack traffic is mitigated in-line at every Cloudflare data center using advanced and automated [DDoS mitigation](https://developers.cloudflare.com/ddos-protection/) technologies.
4. Traffic that passes DDoS mitigation is subjected to additional network firewall filtering using the included [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) service.
5. Clean, filtered traffic is routed to the protected networks either through private [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/) (CNI) connections, or the public Internet using GRE or IPsec tunnels. More specific details on Magic Transit IP tunnels can be found in the [Magic Transit Tunnels and Encapsulation documentation](https://developers.cloudflare.com/magic-transit/reference/gre-ipsec-tunnels/).
6. The server return traffic is routed back through the Cloudflare network to reach the Internet, using [Magic Transit Egress](https://developers.cloudflare.com/magic-transit/reference/egress/). It can be routed to the Cloudflare network via the same CNIs or GRE, IPsec tunnels that the ingress traffic traversed, using routing techniques such as policy-based routing (PBR) at your sites.
7. Magic Transit Egress traffic is subject to Cloudflare Network Firewall filtering before being routed out to the Internet towards the users.

## Protect outbound network traffic

The reference architecture diagram below illustrates how Cloudflare services (Magic Transit Egress, Cloudflare Network Firewall, and Cloudflare Gateway) can be used to protect outbound Internet traffic originating from the public networks.

![Figure 2: Protect outbound Internet traffic originating from the public networks.](https://developers.cloudflare.com/_astro/figure-2.wsXd5oJY_Z2lxTsf.svg "Figure 2: Protect outbound Internet traffic originating from the public networks.")

Figure 2: Protect outbound Internet traffic originating from the public networks.

_Note: Labels in this image may reflect a previous product name._

1. Each site network routes outbound Internet traffic originating from the public networks to Cloudflare, via the same CNIs and IP tunnels that inbound traffic traverses. This can be done at your site through routing techniques of your choice, such as policy-based routing (PBR).
2. Upon entering the Cloudflare network, outbound Internet traffic is first routed through Cloudflare Network Firewall where it is subject to any configured network firewall policies.
3. Outbound Internet traffic is subsequently sent to [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), our secure web gateway service, where various [policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) enforce a comprehensive set of security and control measures on the outbound traffic.
4. Once traffic clears inspection, Gateway proxies it to its destination on the Internet. The source IP addresses of the outbound traffic are Cloudflare-owned IP addresses associated with the Gateway service.
5. Return traffic from the Internet, destined to Cloudflare's IP addresses linked to the Gateway service, is routed into Cloudflare's global network.
6. Traffic is inspected against Gateway policies.
7. Return traffic that passes Gateway inspection is routed to Cloudflare Network Firewall for further packet filtering, if any.
8. Return traffic that passes Cloudflare Network Firewall filtering is routed from Cloudflare to your network locations via CNIs or IP Tunnels over the Internet.

## Related Resources

* [Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/)
* [Cloudflare DDoS Protection](https://developers.cloudflare.com/ddos-protection/)
* [Magic Transit Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/magic-transit/)
* [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/)
* [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/)
* [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/)
* [Integration of Cloudflare Magic services and Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-wan/zero-trust/cloudflare-gateway/)


---

---
title: Protect ISP and telecommunications networks from DDoS attacks
description: Learn how Internet service providers (ISPs) and telecommunications companies (such as T-Mobile or British Telecom) can protect themselves from DDoS attacks.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Protect ISP and telecommunications networks from DDoS attacks

**Last reviewed:**  over 1 year ago 

## Introduction

Internet service providers (ISPs) and telecommunications companies (such as T-Mobile or British Telecom) are vulnerable to network DDoS attacks, which are often focused on the end customer - for example, attacks targeting a company or remote worker connected to the Internet via a broadband service provider. Historically, to protect these customers, service providers have relied on hosting their own on-premises mitigation systems. This approach requires significant investment to effectively combat constantly evolving attacks, and its capacity is finite in the face of escalating attack sizes.

Cloudflare is well known for its DDoS mitigation services protecting public websites and APIs, and the same technologies can also be used to protect entire networks. At Cloudflare, we have [witnessed a surge in hyper-volumetric ↗](https://blog.cloudflare.com/cloudflare-mitigates-record-breaking-71-million-request-per-second-ddos-attack) and highly sophisticated attacks, as highlighted in our quarterly [DDoS attack reports ↗](https://radar.cloudflare.com/reports/ddos/). These attacks, due to their sheer volume, can overwhelm and outmanoeuvre on-premises DDoS mitigation systems. As a result, these on-premises mitigation systems require constant maintenance and upgrades to keep up with larger attacks, leading to ongoing investment and, given unpredictable attack sizes, open-ended costs.

[Cloudflare Magic Transit](https://developers.cloudflare.com/magic-transit/) offers cloud-based network DDoS mitigation as a service. Service providers are using [Cloudflare Magic Transit on-demand](https://developers.cloudflare.com/magic-transit/on-demand/), either as a supplementary solution or as a replacement for their existing setup, to safeguard their network infrastructure against this evolving threat.

## Protecting service provider networks from attack

There are two main steps to deploying this solution. First, set up Cloudflare to [monitor ↗](https://blog.cloudflare.com/flow-based-monitoring-for-magic-transit) and detect DDoS attacks on the network. Then, when a DDoS event is observed, reroute traffic through Cloudflare, where DDoS mitigation takes place.

![Figure 1: Cloudflare monitors service provider network traffic using flow data to detect DDoS attacks.](https://developers.cloudflare.com/_astro/protecting-sp-networks-from-ddos-fig1.BXZ5xvR3_Z13kPfb.svg) 

_Note: Labels in this image may reflect a previous product name._

The first step is to gain visibility into the attacks taking place against the service provider network. The above diagram shows:

1. Cloudflare is made aware of the networks to be protected. Service providers identify the prefixes (for example, 203.0.113.0/24) they wish to protect and initiate a one-off task to onboard these prefixes onto the Cloudflare Magic Transit service; this step is a prerequisite and does not affect actual network traffic flow. Cloudflare recommends onboarding prefixes that are more specific than those the service provider advertises to the Internet. In this example, if 203.0.113.0/24 is the protected prefix onboarded to Cloudflare, then the less specific 203.0.112.0/23, which encompasses both 203.0.112.0/24 and 203.0.113.0/24, can be advertised to your upstream ISP.
2. Service provider network devices send all traffic flow data (Netflow, IPFIX or sFlow) to the [Network Flow](https://developers.cloudflare.com/network-flow/) (formerly Magic Network Monitoring) service. Cloudflare analyses this flow data to detect DDoS attacks.
3. Cloudflare recommends, when possible, connecting to the Cloudflare network through redundant [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/) (CNI) links at our [Interconnection facilities ↗](https://www.peeringdb.com/net/4224). This allows adherence to the 1500-byte maximum transmission unit (MTU) for routed user traffic. Alternatively, you can connect to the Cloudflare network using [Generic Routing Encapsulation (GRE) tunnels](https://developers.cloudflare.com/magic-transit/reference/gre-ipsec-tunnels/) over the Internet.
4. In peacetime, traffic flows as usual between the ISP network and their upstream transit and peer networks, bypassing the Cloudflare network.
![Figure 2: Upon detecting an attack, traffic destined for the protected prefix is rerouted through the Cloudflare network for mitigation.](https://developers.cloudflare.com/_astro/protecting-sp-networks-from-ddos-fig2.mhCca2XR_ZUkAfy.svg) 

_Note: Labels in this image may reflect a previous product name._

The above diagram shows how Cloudflare monitors service provider traffic and, upon detecting a possible volumetric DDoS attack, automatically advertises the most specific protected prefix from the Cloudflare global network to the Internet. This ensures that all traffic to this protected prefix is rerouted through the Cloudflare network, where malicious traffic is mitigated.

1. Upon detecting a possible volumetric DDoS attack, Cloudflare automatically generates an alert. Service providers can receive the alert notifications via email and/or webhook. Additionally, the alert can trigger [automatic prefix announcement](https://developers.cloudflare.com/network-flow/magic-transit-integration/#activate-ip-auto-advertisement) from the Cloudflare network to the Internet, as per the Magic Transit configuration by the service provider.
2. Cloudflare advertises the protected prefix from all Cloudflare points-of-presence. Since Cloudflare advertises a more specific prefix, only the traffic destined for the attacked prefix is rerouted through the Cloudflare network.
3. Cloudflare's network mitigates the attack traffic while letting legitimate traffic through to the service provider network. Service providers receive the original packets with an MTU of 1500 bytes when using [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/) (CNI).
4. Outbound traffic of the protected prefix, as well as the traffic of other prefixes, remains unaffected and continues to be routed to the Internet via the service provider's upstream links.
5. Private peering with trusted networks is unaffected and traffic from these content providers (such as Facebook, Netflix, YouTube) will not be rerouted via Cloudflare.
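The routing behavior above hinges on BGP preferring the longest (most specific) matching prefix. A minimal containment check illustrates why the onboarded 203.0.113.0/24 sits inside the 203.0.112.0/23 that the provider keeps advertising upstream. This is an illustrative sketch only; the function names are hypothetical and not part of any Cloudflare API.

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit value.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

// True if `outer` (for example 203.0.112.0/23) fully contains `inner`
// (for example 203.0.113.0/24). Routers prefer the longest matching
// prefix, so Cloudflare's /24 announcement attracts the attack traffic
// even while the covering /23 is still advertised upstream by the ISP.
function contains(outer: string, inner: string): boolean {
  const [outerNet, outerLen] = outer.split("/");
  const [innerNet, innerLen] = inner.split("/");
  if (Number(innerLen) < Number(outerLen)) return false; // inner must be at least as specific
  const blockSize = 2 ** (32 - Number(outerLen));
  // Compare the network bits of both prefixes at the outer prefix length.
  return Math.floor(ipToInt(outerNet) / blockSize) === Math.floor(ipToInt(innerNet) / blockSize);
}
```

With the example prefixes, `contains("203.0.112.0/23", "203.0.113.0/24")` is true while `contains("203.0.112.0/23", "203.0.114.0/24")` is false, matching the intent that only the attacked /24 is rerouted through Cloudflare.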

## Related resources

* [Magic Transit Reference Architecture](https://developers.cloudflare.com/reference-architecture/architectures/magic-transit/)
* [Cloudflare Network Interconnect](https://developers.cloudflare.com/network-interconnect/)
* [Flow-based monitoring for Magic Transit ↗](https://blog.cloudflare.com/flow-based-monitoring-for-magic-transit)


---

---
title: Extend ZTNA with external authorization and serverless computing
description: Cloudflare's ZTNA enhances access policies using external API calls and Workers for robust security. It verifies user authentication and authorization, ensuring only legitimate access to protected resources.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Extend ZTNA with external authorization and serverless computing

**Last reviewed:**  over 1 year ago 

## Introduction

Companies using Zero Trust Network Access (ZTNA) services build policies to determine whether a user can access a protected resource, such as a privately hosted wiki server or source code repository. Policies typically use group membership, authentication methods, and device security posture to determine which users can access which resources.

Secure access requires a range of attributes to be available to the policy engine for evaluation. With Cloudflare's ZTNA service, [Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/), a policy can include an external request to another API that provides part of the data required for the access decision.

For example, you might have a policy stating that all members of the group "Engineers" who authenticated with credentials requiring a hard token can access the self-hosted source code repository. But you may also want to allow only engineers who have completed security training. That data might live in another system, so Cloudflare lets you, as part of the policy check, make a call using [Workers ↗](https://workers.cloudflare.com/) to the training system to determine whether the user has passed security training.

Additionally, once authentication and the policy checks succeed, Cloudflare passes traffic to the protected origin. Importantly, the origin should also verify that incoming requests were authenticated by Cloudflare, to avoid any illegitimate access. Cloudflare inserts a JWT into the traffic destined for the origin to prove cryptographically that the request was successfully authenticated, and the origin can use this data as part of its own authorization logic.

To help integrate these types of use cases, Cloudflare has an [entire development platform](https://developers.cloudflare.com/workers/) on which you can design and run your own business logic. This means you spend less time trying to piece a solution together and more time getting the integration done.

This document outlines how to combine both solutions to enhance Cloudflare Access capabilities in terms of [authorization and authentication ↗](https://www.cloudflare.com/learning/access-management/what-is-access-control/).

## Showcased products

**[Workers](https://developers.cloudflare.com/workers/)** 

Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.

**[Access](https://developers.cloudflare.com/cloudflare-one/)** 

Cloudflare Zero Trust replaces legacy security perimeters with Cloudflare's global network, making the Internet faster and safer for teams around the world

## Use-cases

* **Custom authorization logic**: Access external evaluation using Workers as a backend (for example, using your own implementation of [Open Policy Agent (OPA) ↗](https://www.openpolicyagent.org/integrations/cloudflare-worker/))
* **Augmented [JSON Web Token (JWT)](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/authorization-cookie/validating-json/)**: Using Cloudflare's own authentication JWT material, for example, adding posture details as part of an incoming request.
* **Serverless augmented apps protected with Zero Trust**: Allowing anyone building serverless applications to benefit from native ZTNA features

![Figure 1: Showing a request to a private resource and where Access can be customized for AuthZ and AuthN](https://developers.cloudflare.com/_astro/diagram1.D2YkG0lA_Z23n9kq.svg "Figure 1: Showing a request to a private resource and where Access can be customized for AuthZ and AuthN")

Figure 1: Showing a request to a private resource and where Access can be customized for AuthZ and AuthN

## Getting started

The following outlines how organizations can run their own custom business logic, allowing them to tailor authentication and authorization processes to meet almost any requirement. Each use case below refers to a step in the above diagram.

### 1\. Custom authorization process using your own rules

During policy evaluation, the [external evaluation](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/external-evaluation/) rule allows for executing your own code during access policy evaluation. In this example an API exposed by Cloudflare Workers receives data about the user making the request, the important part being their username.

The code typically makes calls to either a [database](https://developers.cloudflare.com/d1/) or another API to evaluate if the passed username has access to the application. The external evaluation rule requires that the call returns either a True or False, and this is combined with the policy to determine access.
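To make this concrete, the decision logic behind such an endpoint can be sketched as follows. This is a deliberately simplified illustration: the production external evaluation protocol additionally requires verifying the signed request from Access and returning a signed JWT (see the external evaluation documentation), and the training-records set here is a hypothetical stand-in for a real data source.

```typescript
// Hypothetical stand-in for a real training-records system or database.
const completedSecurityTraining = new Set<string>([
  "jane.engineer@example.com",
  "sam.builder@example.com",
]);

// Access supplies the authenticated user's identity; the endpoint must
// reduce its answer to a single boolean, which Access then combines with
// the rest of the policy (group membership, hard-token authentication, ...).
function evaluateAccess(identity: { email: string }): { success: boolean } {
  return { success: completedSecurityTraining.has(identity.email) };
}
```

Only users present in the external data source evaluate to `success: true`; everyone else is denied by this rule regardless of how the rest of the policy evaluates.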

[ Learn more ](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/external-evaluation/) External authorization with Cloudflare's external evaluation functionality 

### 2\. Analyze and validate the authentication material (JWT)

When a user successfully authenticates and is authorized to access a protected application, Cloudflare inserts a [JSON Web Token (JWT)](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/authorization-cookie/validating-json/) into the HTTP traffic sent to the origin. This token serves as a valuable asset for extending custom business logic through secure processing. The format of the JWT is deterministic and rather lightweight, to avoid unnecessarily inflating requests to the origin.

Here is an example of a JWT sent to an origin (use [JWT.io ↗](http://jwt.io) to read the contents of a JWT):

JWT content

```json
{
  "aud": ["264063895705477af73bfbaed1bf401981f4812eefcdb9fea33f5e10e666e282"],
  "email": "john.doe@cloudflare.com",
  "exp": 1728551137,
  "iat": 1728464737,
  "nbf": 1728464737,
  "iss": "https://myorg.cloudflareaccess.com",
  "type": "app",
  "identity_nonce": "IA0hPRvwILtbUXSQ",
  "sub": "ce40d564-c72f-475f-a9b8-f395f19ad986",
  "device_id": "8469d7c4-83a9-11ee-b559-76e6e80876db",
  "country": "FR"
}
```

Cloudflare exposes a specific [endpoint](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/authorization-cookie/validating-json/#%5Ftop) to allow anyone to validate and expand a Cloudflare signed JWT.

Cloudflare's Workers are a great candidate for interacting with incoming JSON Web Tokens (JWTs), enabling additional processing directly within the serverless platform without introducing any added latency.

[ Learn more ](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/authorization-cookie/application-token/#user-identity) How to validate and visualize Cloudflare Access JWTs 

### 3\. Augment the authentication material (JWT) with extra authentication details

In some situations, it is beneficial to augment this JWT so that additional processing can occur on the protected destination application (for example, adding device [posture details](https://developers.cloudflare.com/cloudflare-one/reusable-components/posture-checks/) to an incoming request).

In the following example, we want to make sure the exposed application is aware of the status of the device's firewall and disk encryption (Note that the Cloudflare One Client needs to be installed on the client machine for these signals to be collected).

![Figure 2: Modified origin request including posture details](https://developers.cloudflare.com/_astro/diagram2.DPpYfIXE_Fj0YS.svg "Figure 2: Modified origin request including posture details")

Figure 2: Modified origin request including posture details

When a JSON Web Token (JWT) is expanded, the details of the attached authentication event become visible. This expansion reveals much more information than the JWT provides by default, as in the example below.

Expanded JWT

```json
{
  "id": "P51Tuu01fWHMBjIBvrCK1lK-eUDWs2aQMv03WDqT5oY",
  "name": "John Doe",
  "email": "john.doe@cloudflare.com",
  "amr": ["pwd"],
  "oidc_fields": {
    "principalName": "john.doe_cloudflare.com#EXT#@XXXXXXcloudflare.onmicrosoft.com"
  },
  "groups": [
    {
      "id": "fdaedb59-e9be-4ab7-8001-3e069da54185",
      "name": "Security Team"
    }
  ],
  "idp": {
    "id": "b9f4d68e-dac1-48b0-b728-ae05a5f0d4b2",
    "type": "azureAD"
  },
  "geo": {
    "country": "FR"
  },
  "user_uuid": "ce40d564-c72f-475f-a9b8-f395f19ad986",
  "account_id": "121287a0c6e6260ec930655e6b39a3a8",
  "iat": 1724056537,
  "devicePosture": {
    "f6f9391e-6776-4878-9c60-0cc807dc7dc8": {
      "id": "f6f9391e-6776-4878-9c60-0cc807dc7dc8",
      "schedule": "5m",
      "timestamp": "2024-08-19T08:31:59.274Z",
      "description": "",
      "type": "disk_encryption",
      "check": {
        "drives": {
          "C": {
            "encrypted": true
          }
        }
      },
      "success": false,
      "rule_name": "Disk Encryption - Windows",
      "input": {
        "requireAll": true,
        "checkDisks": []
      }
    },
    "a0a8e83d-be75-4aa6-bfa0-5791da6e9186": {
      "id": "a0a8e83d-be75-4aa6-bfa0-5791da6e9186",
      "schedule": "5m",
      "timestamp": "2024-08-19T08:31:59.274Z",
      "description": "",
      "type": "firewall",
      "check": {
        "firewall": false
      },
      "success": false,
      "rule_name": "Local Firewall Check - Windows",
      "input": {
        "enabled": true
      }
    }
    ...
  }
}
```

Using the details in the JWT, you can use a Worker to extract the details of the device posture and then reinsert them into HTTP headers which the application uses for its own authorization logic. Below is a guided tutorial explaining how this request modification can be performed with Cloudflare Developer platform.
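The reinsertion step described above can be sketched as a small transformation from the expanded identity's `devicePosture` map to HTTP headers. The `X-Device-Posture-*` header names here are hypothetical examples; use whatever your origin application expects.

```typescript
// Minimal shape of one entry in the expanded identity's devicePosture map.
interface PostureCheck {
  type: string;     // for example "firewall" or "disk_encryption"
  success: boolean; // whether the device passed the check
}

// Map posture results onto headers the origin can use in its own
// authorization logic. Header names are hypothetical examples.
function postureHeaders(
  devicePosture: Record<string, PostureCheck>,
): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const check of Object.values(devicePosture)) {
    headers[`X-Device-Posture-${check.type}`] = String(check.success);
  }
  return headers;
}
```

A Worker would run this over the expanded identity and copy the result onto the outgoing request (for example, with `request.headers.set(...)`) before forwarding it to the origin.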

[ Tutorial ](https://developers.cloudflare.com/cloudflare-one/tutorials/extend-sso-with-workers) How to augment Cloudflare Access JWT with Cloudflare's Workers 

## Related Resources

* [External Evaluation rules](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/external-evaluation/)
* [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/)
* [External Evaluation blog post ↗](https://blog.cloudflare.com/access-external-validation-rules/)


---

---
title: Cloudflare One Appliance deployment options
description: Learn how to deploy Cloudflare One Appliance and evaluate your various deployment options.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Cloudflare One Appliance deployment options

**Last reviewed:**  over 1 year ago 

## Introduction

Cloudflare helps organizations transform their networks by providing secure, high-performance connectivity for on-premises networks, virtual cloud networks, and access to SaaS applications. As applications migrate to the cloud, [Cloudflare's SASE ↗](https://www.cloudflare.com/zero-trust/) platform enables businesses to replace traditional on-premises solutions, ensuring secure access, low latency, and automated scalability across distributed environments. This approach reduces reliance on legacy hardware, simplifies IT management, and improves the user experience for cloud-based services.

Cloudflare One Appliance (formerly Magic WAN Connector) is a physical or virtual device (deployed as a VM on a hypervisor) which, using [Zero Touch Provisioning ↗](https://en.wikipedia.org/wiki/Zero-touch%5Fprovisioning), automatically on-ramps a local network's traffic to Cloudflare and replaces existing, difficult-to-manage edge hardware.

Every organization and network is different, and as such there is no one-size-fits-all when it comes to how a Cloudflare One Appliance can be deployed. Therefore, the purpose of this document is to provide a high-level explanation of the deployment options that would make sense to most environments, while also describing the support of a few advanced use cases.

## Deployment locations

The first decision for a Cloudflare One Appliance deployment is its location in the network, and this relates to whether the organization wants to keep the existing Customer Premises Equipment (CPE, edge router or firewall at a site), and if so, for what reason. Experience shows that this decision usually leads to three different topologies:

* **Connector replacing the CPE** (Figure 1a): When the link is an Internet connection and the organization has no real use for the existing equipment, since the Connector supports all the required networking features such as DHCP, DNS, NAT, trunking (802.1Q), IP access lists, breakout traffic, and so on. Examples could be:  
   * The transition from MPLS to Internet-based connectivity, where the MPLS router probably does not add any value in the deployment.  
   * An Internet-facing CPE reaching, or already having exceeded, its end of life.  
   * An Internet-facing CPE that is redundant with Cloudflare One Appliance and can be removed for simplicity's sake.
* **Connector north of the CPE** (Figure 1b): This option might be preferred when the existing CPE is a firewall, and the organization wants to keep it for:  
   * Additional LAN protection as a result of a defense-in-depth approach.  
   * Advanced segmentation requirements, for example allowing/blocking traffic between segments based on various Layer 3 to Layer 7 rules, since Cloudflare One Appliance supports segmentation only on layers 3 and 4 of the OSI model.
* **Connector south of the CPE** (Figure 1c): Reasons for installing Cloudflare One Appliance south of an existing Internet-facing CPE might be:  
   * CPE cannot be replaced because it connects to a broadband service with a presentation (for example RJ-11) or protocol (for example PPPoE) that Cloudflare One Appliance does not support.  
   * CPE cannot be replaced because it is part of a fiber service that only works with that specific hardware, such as an ISP-provided ONT (Optical Network Terminal).  
   * CPE cannot be replaced (yet) because it is part of an active managed service.  
   * CPE cannot be replaced because it is a firewall that the organization wants to keep in place for other reasons (technical or contractual).

![Figure 1: Connector location options: \(a\) replacing CPE, \(b\) north of CPE, \(c\) south of CPE.](https://developers.cloudflare.com/_astro/figure01.Dcrrl27C_Z1tATI8.svg "Figure 1. Connector location options: (a) replacing CPE, (b) north of CPE, (c) south of CPE")

Figure 1\. Connector location options: (a) replacing CPE, (b) north of CPE, (c) south of CPE

_Note: Labels in this image may reflect a previous product name._

## High availability

In Wide Area Network (WAN) environments, where remote offices, data centers, and cloud services are interconnected, any downtime can lead to loss of access to critical applications, communication disruptions, and productivity losses. To avoid such downtimes, WAN networks are mostly designed with High Availability (HA) principles in mind. Deploying redundant hardware and uplink circuits, as well as failover mechanisms, ensures that if one component fails, another can immediately take over. This resilience is key to maintaining seamless connectivity, reliability, and service continuity in distributed networks.

### Uplink HA

Cloudflare One Appliance can use two or more WAN ports for uplinks, and therefore it can connect to multiple different ISPs for circuit resiliency. One option for a basic level of HA is to use a single Cloudflare One Appliance with two uplinks, while traffic can be load-balanced between them (Figure 2 below). This approach could be used for non-critical branches, small offices, and other similar types of locations, or as an intermediate step towards a full HA deployment.

![Figure 2. Uplink high-availability deployment.](https://developers.cloudflare.com/_astro/figure02.BGru8RdY_1Itufx.svg "Figure 2. Uplink high-availability deployment.")

Figure 2\. Uplink high-availability deployment.

### Full HA

In this type of setup, a redundant device is configured to take over in case of a failure in the primary device, allowing seamless traffic failover and ensuring uninterrupted access to applications, data, and services. This approach enhances network resilience, improves service reliability, and helps maintain productivity by reducing the risk of single points of failure.

Figure 3 below illustrates the deployment topology where Cloudflare One Appliance supports full HA. Using an election process, one device becomes active and the other becomes passive. To achieve this, the two Connectors must connect to a LAN switch on the same Layer 2 domain (like a VLAN) for heartbeat messages to be sent between them. Active/passive means that the active Connector is the only device that propagates traffic at any point in time.

![Figure 3. Full HA with dual Connectors and dual uplinks.](https://developers.cloudflare.com/_astro/figure03.CgaueUuZ_1Cmgj9.svg "Figure 3. Full HA with dual Connectors and dual uplinks.")

Figure 3\. Full HA with dual Connectors and dual uplinks.

Each Cloudflare One Appliance connects to the same two ISPs using dual uplinks, and automatically creates one IPsec tunnel per WAN port. This requires each ISP to support multiple ports on their on-site Network Termination Unit (or their CPE, if there is one present). In this HA deployment there are four tunnels in total, two per Connector, while traffic can be load-balanced between the two tunnels on the active device. When either the active Connector or its IPsec tunnels go down, the other Connector takes over and propagates traffic, holding the active role until it fails (preemption is not used to avoid unnecessary failover delays).
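The failover behavior above can be sketched as a small election routine. This is an illustrative model only, not Connector internals: the connector names and the per-tunnel health representation are assumptions, and the single property it captures is active/passive takeover without preemption.

```python
# Illustrative model of active/passive failover without preemption.
# Connector names and the tunnel-state representation are assumptions:
# the standby takes over only when the active device loses all of its
# IPsec tunnels, and then keeps the active role even if the original
# device recovers.

def elect_active(current_active, connectors):
    """connectors maps a connector name to a list of per-tunnel up/down flags."""
    def healthy(name):
        return any(connectors[name])  # at least one IPsec tunnel is up

    if current_active is not None and healthy(current_active):
        return current_active  # no preemption: the active device keeps the role
    for name in connectors:    # fail over to the first healthy peer
        if healthy(name):
            return name
    return None                # total outage: no device can forward traffic
```

For example, if `cf1` is active and loses both tunnels, `elect_active("cf1", {"cf1": [False, False], "cf2": [True, True]})` returns `"cf2"`, and `cf2` keeps the role even after `cf1` recovers.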

## Advanced use cases

This section describes how the Cloudflare One Appliance can be deployed to support a few advanced use cases, that is, use cases beyond the typical scenarios where the Connector acts as a simple CPE that on-ramps traffic to Cloudflare for site-to-site, or site-to-Internet, connectivity and protection.

### Protecting local Internet breakout (LIBO)

This deployment addresses a common situation: many organizations require local Internet breakout to improve the performance of cloud and SaaS applications, while continuing to use their private MPLS connectivity for self-hosted applications or site-to-site connectivity until they modernize their architecture further. Reasons behind such a decision might be:

* MPLS service is still in contract, but it is planned to be replaced by Internet connectivity everywhere when the term ends.
* Self-hosted applications might require low latency with agreed SLAs, so a hybrid MPLS/Internet architecture might be required.

![Figure 4. Hybrid MPLS/Internet use case.](https://developers.cloudflare.com/_astro/figure04.B7yWVURB_eNn7d.svg "Figure 4. Hybrid MPLS/Internet use case.")

Figure 4\. Hybrid MPLS/Internet use case.

This type of hybrid architecture requires the MPLS Customer Edge router (CE) or some other L3 device in the LAN to route traffic via different interfaces depending on the destination. Traffic flows in this scenario as follows:

1. Devices on the local network use the MPLS CE (or some other local L3 device) as their default gateway.
2. Private traffic is sent towards the MPLS network. For example, the MPLS CE knows how to route this traffic because it receives RFC1918 ranges via BGP from the MPLS network.
3. Internet traffic from the LAN network is forwarded towards the Cloudflare One Appliance (the MPLS CE/L3 gateway points a static default route towards the Connector).

All traffic towards internal locations and self-hosted applications follows the MPLS path, while traffic to cloud-based and SaaS applications follows the local Internet breakout path, protected by Cloudflare security services.
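The routing split described above can be sketched as a simple lookup from the L3 gateway's point of view: RFC1918 destinations follow the MPLS path, everything else follows the default route to the Connector. This is a minimal sketch; the next-hop names are assumptions for illustration.

```python
import ipaddress

# Illustrative sketch of the L3 gateway's decision in the hybrid topology:
# RFC1918 destinations follow the MPLS path (routes learned via BGP from
# the MPLS network), while everything else follows the static default
# route to the Connector for local Internet breakout. Next-hop names
# ("mpls-ce", "connector") are assumptions for illustration.

RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def next_hop(destination: str) -> str:
    ip = ipaddress.ip_address(destination)
    if any(ip in net for net in RFC1918):
        return "mpls-ce"    # private traffic stays on the MPLS path
    return "connector"      # Internet traffic breaks out locally via Cloudflare
```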

### Split tunneling

In some deployments, customers might want to protect only specific protocols using Cloudflare security services such as our [secure web gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), while the rest of the traffic routes through the existing edge device (router or firewall). Figure 5 illustrates such a use case.

![Figure 5. 'Split Tunneling' use case.](https://developers.cloudflare.com/_astro/figure05.BDoVf7qZ_h0DqV.svg "Figure 5. 'Split Tunneling' use case.")

Figure 5\. 'Split Tunneling' use case.

In this example, the organization wants Cloudflare to protect all Internet web traffic (HTTP/HTTPS), while the rest of the traffic flows out via the existing firewall. The latter could be traffic towards existing VPNs, or non-web traffic exiting the site under the protection of the on-premises firewall. This method can take advantage of local device policy-based routing (PBR) capabilities, for example:

1. Local devices use the on-premises firewall as their default gateway.
2. The firewall uses PBR to direct traffic to the appropriate next hop.
3. Web traffic (TCP 80/443) is sent towards Cloudflare via the Cloudflare One Appliance.
4. All other traffic exits via the on-premises firewall.

As long as PBR capability exists locally, and the ISP provides at least two public IP addresses to the organization, the possibilities of splitting traffic towards the Cloudflare One Appliance are endless, and really depend on each organization's unique environment and use cases.
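As a sketch, the PBR decision in this example reduces to a match on protocol and destination port. The next-hop names are assumptions, and a real firewall would match on far richer criteria (source address, ingress interface, and so on).

```python
# Illustrative sketch of the PBR decision in the split-tunneling example:
# TCP 80/443 is steered to the Cloudflare One Appliance, everything else
# exits via the on-premises firewall. Next-hop names are assumptions.

WEB_PORTS = {80, 443}

def pbr_next_hop(protocol: str, dst_port: int) -> str:
    if protocol == "tcp" and dst_port in WEB_PORTS:
        return "cloudflare-one-appliance"  # web traffic on-ramps to Cloudflare
    return "on-prem-firewall"              # all other traffic exits locally
```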

### Protecting segments / segmentation

Another advanced group of use cases that Cloudflare One Appliance can support is local segmentation, and protection of specific local networks. To achieve that, and depending on an organization's current architecture, line of business, security policies, and compliance requirements, Cloudflare One Appliance can be installed in any location south of the site edge device to provide more granular network security, as illustrated in figure 6 and described in the following paragraphs.

![Figure 6. Segmentation-related use cases.](https://developers.cloudflare.com/_astro/figure06.NzTDAI8s_17vwyb.svg "Figure 6. Segmentation-related use cases.")

Figure 6\. Segmentation-related use cases.

In this example, the Cloudflare One Appliance creates an IPsec tunnel to Cloudflare through the on-premises firewall and the local Internet connection. Subnets A and B are both connected to the Cloudflare One Appliance, but have no direct connection with each other. This enables a couple of use cases:

* **Internet security**: Segment 1 adheres to Cloudflare security policies, bypassing the local firewall policy.
* **Site-to-site connectivity**: Segment 1 can connect to local segments in other locations (or entire sites, for example Site 2), depending on the organization's policy.

The example also shows how Cloudflare One Appliance can be used to provide two types of local network segmentation:

* **Intra-segment**: Traffic between LAN ports on the same Connector is blocked by default, hence Subnet A and Subnet B in Segment 1 cannot talk to each other. The administrator must explicitly allow this traffic flow using configuration logic similar to IP access lists. The ability to hairpin local traffic via the Connector's LAN ports avoids traffic tromboning via the Cloudflare platform (that is, traveling out and back in via the IPsec/GRE tunnel), which could result in those segments losing connectivity to each other during an Internet circuit outage. Local nodes that do not necessarily require Internet access to function, for example printers, file servers, network attached storage (NAS) nodes, and various Internet of Things (IoT) devices, can therefore remain reachable by local hosts in different segments during Internet outages.
* **Inter-segment**: Cloudflare One Appliance does not allow any inbound traffic on its WAN ports. Therefore, Segments 1 and 2 cannot talk to each other.

To summarize, Cloudflare One Appliance is a Zero-Touch Provisioning (ZTP) device that organizations can use to connect to Cloudflare and consume advanced security and connectivity services, while keeping operational costs low.

## Related Resources

* [Cloudflare WAN - Cloud-delivered enterprise networking ↗](https://www.cloudflare.com/en-gb/network-services/products/magic-wan/)
* [Announcing the Cloudflare One Appliance: the easiest on-ramp to your next generation network ↗](https://blog.cloudflare.com/magic-wan-connector/)
* [Configuring Cloudflare One Appliance](https://developers.cloudflare.com/cloudflare-wan/configuration/appliance/)


---

---
title: Deploy self-hosted VoIP services for hybrid users
description: Learn how Cloudflare improves over traditional VPN solutions by leveraging its global network.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Deploy self-hosted VoIP services for hybrid users

**Last reviewed:**  4 months ago 

## Introduction

Traditional VPN solutions create several problems for VoIP deployments, primarily due to their inefficiencies in handling real-time traffic protocols such as [SIP ↗](https://en.wikipedia.org/wiki/Session%5FInitiation%5FProtocol) and [RTP ↗](https://en.wikipedia.org/wiki/Real-time%5FTransport%5FProtocol). Legacy VPN deployments introduce high latency and jitter, which negatively impact voice call quality. Additionally, they often struggle with [NAT ↗](https://en.wikipedia.org/wiki/Network%5Faddress%5Ftranslation) traversal, leading to connection issues for VoIP calls.

Cloudflare improves over traditional VPN solutions by leveraging its [global network ↗](https://www.cloudflare.com/network/) of data centers in over 330 cities to significantly reduce latency for remote users. When using our device agent, remote users are automatically connected to the nearest Cloudflare data center, thus reducing latency.

This document explains how to architect access to a self-hosted VoIP service using Cloudflare. Note that the solution below uses our [WARP Connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/), a small piece of software deployed on a server in the same subnet as the VoIP servers, which creates bi-directional traffic flow through Cloudflare to users.

## Bi-directional VoIP traffic flow

![Figure 1: Cloudflare facilitates secure connectivity from user devices to the network where the SIP server is running.](https://developers.cloudflare.com/_astro/figure1.lv12Z4R7_Z1pX4PS.svg "Figure 1: Cloudflare facilitates secure connectivity from user devices to the network where the SIP server is running.")

Figure 1: Cloudflare facilitates secure connectivity from user devices to the network where the SIP server is running.

The diagram above shows the WARP Connector and our device agent deployed to establish highly performant, reliable connectivity for private VoIP services. Note that Cloudflare will assign remote users an address from the CGNAT range, which is used for the private network created between device agents. The WARP Connector ensures secure, bidirectional communication between remote users and the on-premise SIP server, without exposing the server to the public Internet. This shields the VoIP infrastructure from potential attacks while maintaining a seamless, encrypted connection for real-time communications.

1. VoIP server resides on a private network with no public IP.
2. WARP Connector creates a secure tunnel to Cloudflare and is configured as a virtual router in the private network.
3. Traffic from Cloudflare is allowed to reach the VoIP server, and traffic initiated from the private network, such as an outbound VoIP call from the server, is routed over the Cloudflare tunnel. In the above diagram, we add a static route on the default gateway for `100.96.0.0/12` (the WARP CGNAT range) via `10.0.50.10` (the WARP Connector virtual router).
4. Traffic passes through our [Secure Web Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) (SWG), which applies network level firewall rules to both inbound and outbound traffic.
5. A device agent is installed on remote user devices. The agent establishes a secure tunnel to Cloudflare, which allows VoIP software to both receive and make calls.

## Call flow examples

VoIP software running on the remote user's device registers with the VoIP server using SIP. The Cloudflare device agent will be assigned an address from the CGNAT IP range, `100.96.0.0/12`. As routing has been established to Cloudflare for `100.96.0.0/12` and to the on-premise network of `10.0.50.0/24`, call flows will work as normal – both direct and indirect media are supported.
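The routing that makes these call flows work can be sketched as a next-hop lookup from the SIP server's point of view, using the addresses from the diagrams. The function itself is illustrative, not Connector code, and the LAN default gateway address is assumed to be `10.0.50.1` on the `10.0.50.0/24` subnet.

```python
import ipaddress

# Illustrative next-hop lookup from the SIP server's point of view.
# The static route sends WARP CGNAT destinations (100.96.0.0/12) to the
# WARP Connector at 10.0.50.10; 10.0.50.0/24 destinations are delivered
# on the local LAN; everything else goes to the default gateway
# (assumed to be 10.0.50.1).

ROUTES = [
    (ipaddress.ip_network("100.96.0.0/12"), "10.0.50.10"),  # via WARP Connector
    (ipaddress.ip_network("10.0.50.0/24"), "direct"),       # local LAN delivery
]

def sip_server_next_hop(destination: str) -> str:
    ip = ipaddress.ip_address(destination)
    for network, hop in ROUTES:
        if ip in network:
            return hop
    return "10.0.50.1"  # default gateway for all other destinations
```

For example, a SIP INVITE towards a remote user at `100.96.0.13` is routed to the WARP Connector, while one towards an on-premise user at `10.0.50.101` stays on the local network.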

### Remote user calling another remote user

When calls are made from user to user, some traffic flows from user devices through Cloudflare to the on-premise server, while other traffic flows through Cloudflare directly to the other user. Note that the device agent is creating a secure tunnel through which the CGNAT addresses are routed. Both users in this flow have registered their SIP clients with the server.

![Figure 2: For remote user to remote user, not all traffic flows over the WARP Connector to the SIP server.](https://developers.cloudflare.com/_astro/figure2.DATzV5BV_1qJ6ea.svg "Figure 2: For remote user to remote user, not all traffic flows over the WARP Connector to the SIP server.")

Figure 2: For remote user to remote user, not all traffic flows over the WARP Connector to the SIP server.

The above diagram shows the high level signaling and media paths.

1. Alice registers directly with the SIP server (`10.0.50.60`) with a Cloudflare assigned CGNAT IP of `100.96.0.12`.
2. Bob also registers directly with the SIP server (`10.0.50.60`) with their CGNAT IP of `100.96.0.13`.
3. When Alice calls Bob, the SIP server will send a SIP INVITE message to Bob at `100.96.0.13`.
4. The default gateway for the SIP server is `10.0.50.1`, but we have defined a static route such that for destination `100.96.0.0/12`, the next hop is the WARP Connector interface (`10.0.50.10`).
5. The SIP INVITE message will be routed across the WARP Connector to the Cloudflare network and then received by Bob.
6. Bob accepts and the SIP server will send SIP/SDP messages to both Alice and Bob specifying which parameters to use for the RTP (audio) data.
7. For Direct Media paths where the SIP server is not in the audio path and the RTP streams are directly between Alice and Bob, ensure that [**Allow all Cloudflare One traffic to reach enrolled devices**](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/peer-to-peer/#enable-peer-to-peer) has been enabled in Cloudflare. Audio streams in the Direct Media use case will not need to route over the WARP Connector.

### Remote user to on-premise user

Calls between remote and on-premise users are very similar, but RTP audio will be sent over the WARP Connector in addition to the SIP signaling.

![Figure 3: Remote user to on-premise user has all traffic routed via Cloudflare to SIP server and client.](https://developers.cloudflare.com/_astro/figure3.Bnu64MY9_1t8Fbh.svg "Figure 3: Remote user to on-premise user has all traffic routed via Cloudflare to SIP server and client.")

Figure 3: Remote user to on-premise user has all traffic routed via Cloudflare to SIP server and client.

The high-level signaling and media paths are shown below:

![Figure 4: Both signaling and media \(audio, video etc\) travel via secured tunnels from remote devices to on-premise clients.](https://developers.cloudflare.com/_astro/figure4.pvAsOncQ_Z2vCGY.svg "Figure 4: Both signaling and media (audio, video etc) travel via secured tunnels from remote devices to on-premise clients.")

Figure 4: Both signaling and media (audio, video etc) travel via secured tunnels from remote devices to on-premise clients.

1. Alice registers directly with the SIP server (`10.0.50.60`) with her CGNAT IP of `100.96.0.12`.
2. Bob also registers directly with the SIP server (`10.0.50.60`) with their LAN IP of `10.0.50.101`.
3. When Alice calls Bob, the SIP server will send a SIP INVITE message to Bob at `10.0.50.101`.
4. The default gateway for the SIP server is `10.0.50.1`, but we have defined a static route such that for destination `100.96.0.0/12`, the next hop is the WARP Connector interface (`10.0.50.10`).
5. The SIP INVITE message will be sent on the local network to Bob.
6. Bob accepts and the SIP server will send SIP/SDP messages to both Alice and Bob specifying which parameters to use for the RTP (audio) data.
7. Bob will send audio to Alice at `100.96.0.12`, which will be routed across the WARP Connector to Cloudflare, and Alice will send audio to Bob at `10.0.50.101`, which will be sent from Cloudflare across the WARP Connector to the on-premise local network.

## Summary

With Cloudflare's WARP Connector, remote users communicating with other remote users or with on-premise users via on-premise SIP servers get a seamless and secure experience at both ends. Key benefits include:

1. **Bidirectional connectivity**: WARP Connector supports bidirectional traffic, which is crucial for remote users communicating with on-premise users. Both signaling and media traffic (SIP/RTP) flow securely between the two, regardless of where the user is physically located. This is done via Cloudflare's global network, using an encrypted tunnel, ensuring data integrity and encryption​.
2. **Private communication over CGNAT**: The WARP Connector assigns Carrier-Grade NAT (CGNAT) IPs to devices, which allows remote users to securely communicate with on-premise users over private networks. This ensures that communication remains isolated from the public Internet, enhancing security. The CGNAT functionality means that remote and on-premise users can communicate as though they are on the same network​.
3. **No NAT traversal issues**: NAT traversal often poses a challenge in VoIP scenarios, but because WARP Connector preserves source IP addresses and handles bidirectional traffic without additional NAT boundaries, remote and on-premise users can communicate without issues typically caused by firewalls or NAT devices, improving the overall call setup and quality​.

## Related resources

* [Set up WARP Connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/)
* [Enable Peer-to-peer connectivity](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/peer-to-peer/#enable-peer-to-peer)
* [About the Cloudflare One Client](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/)


---

---
title: DNS filtering solution for Internet service providers
description: Learn how to use Cloudflare Gateway as a DNS filtering solution for Internet service providers.
image: https://developers.cloudflare.com/core-services-preview.png
---


# DNS filtering solution for Internet service providers

**Last reviewed:**  over 1 year ago 

## Introduction

Internet service providers are constantly exploring new revenue opportunities to expand their business, and many are now turning to security as a value-added service alongside their connectivity offerings. Traditionally, integrating security with connectivity posed significant challenges due to the reliance on legacy solutions that required costly on-premises hardware, which made them difficult to deploy and manage and introduced post-deployment struggles with scalability and availability.

Today these limitations can be addressed through cloud-based solutions like [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), our Secure Web Gateway service. Cloudflare Gateway's DNS filtering capabilities allow service providers to offer enhanced security as a value-added service for residential and mobile subscribers or B2B clients. With easy-to-create policies backed by Cloudflare's [extensive threat intelligence ↗](https://www.cloudflare.com/en-gb/security/), service providers can effectively safeguard their customers from accessing potentially [harmful domains](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/#security-categories).

Moreover, Cloudflare Gateway eliminates concerns around availability, performance, and scalability, as it is built on [Cloudflare's 1.1.1.1 public DNS resolver](https://developers.cloudflare.com/1.1.1.1/), one of the [fastest ↗](https://www.dnsperf.com/#!dns-providers) and most widely-used DNS resolvers in the world.

Furthermore, this solution opens up opportunities for developing additional services beyond security, such as parental controls or tailored filtering profiles for B2B clients.

## Solution

Providing DNS security to the service providers' end customers with Cloudflare is straightforward. Service providers simply forward their public DNS requests to their Cloudflare tenant, and Cloudflare will filter DNS queries in accordance with the configured DNS filtering policies.

![Figure 1: The service provider subscribers send DNS queries to the service provider DNS server, which will forward them to Cloudflare Gateway to apply DNS filtering policies.](https://developers.cloudflare.com/_astro/gateway-dns-for-isp-image-01.CA9DVOGS_jcv6x.svg) 

Cloudflare Gateway, like all Cloudflare services, utilizes [anycast technology ↗](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/), ensuring that all service provider DNS queries are directed to the nearest Cloudflare point of presence.

To distinguish queries originating from the service provider from those coming from other customers, admins configure a [location](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/) in their Cloudflare tenant dashboard. When a DNS location is created, Gateway assigns IPv4/IPv6 addresses and DoT/DoH hostnames for that location. These assigned IP addresses and hostnames are then used by the service provider to send DNS queries for resolution. In turn, the service provider configures the location object with the public IP addresses of their on-premises DNS servers, allowing Cloudflare to accurately associate queries with the corresponding location.

**On Locations**

If stable and defined source IPv4 addresses cannot be assigned to the on-premises DNS servers, service providers can instead use unique destination location endpoints. Each location is assigned a distinct [DoT](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#dns-over-tls-dot) and [DoH](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#dns-over-https-doh) hostname, as well as a unique [destination IPv6 address](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#ipv4ipv6-address). Additionally, Cloudflare can provide unique [destination IPv4 addresses upon request](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#dns-resolver-ip).

DNS filtering is then enforced through DNS policies set up by the service provider to detect domains linked to [security risks](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/#security-categories). Cloudflare continuously updates the list of risky domains using [its extensive threat intelligence ↗](https://www.cloudflare.com/en-gb/security/). When a DNS query matches a flagged domain, the corresponding action specified in the DNS policy is executed. This action can be a '[Block](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/#block),' where Gateway responds with `0.0.0.0` for IPv4 queries or `::` for IPv6 queries, or displays a [custom block page hosted by Cloudflare](https://developers.cloudflare.com/cloudflare-one/reusable-components/custom-pages/gateway-block-page/). Alternatively, an [Override](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/#override) action or [block page URL redirect](https://developers.cloudflare.com/cloudflare-one/reusable-components/custom-pages/gateway-block-page/#redirect-to-a-block-page) can redirect the DNS query to a block page hosted by the service provider.

![Figure 2: A DNS policy to prevent users from navigating to malicious domains. The action is to override and redirect the DNS query to a block page hosted by the service provider.](https://developers.cloudflare.com/_astro/gateway-dns-for-isp-image-02.BLGXVL4a_Z1Mnjow.svg) 

To achieve more precise control over which domains are allowed or blocked, the service provider can configure additional Allowed Domains and Blocked Domains policies. By setting these policies with [lower precedence](https://developers.cloudflare.com/cloudflare-one/traffic-policies/order-of-enforcement/#order-of-precedence) than the Security Risks policy, the service provider can override the Security Risks policy for specific domains.

To streamline the management of allowed and blocked domains, use [lists](https://developers.cloudflare.com/cloudflare-one/reusable-components/lists/). Lists are easily updated through the dashboard or via [APIs](https://developers.cloudflare.com/api/resources/zero%5Ftrust/subresources/gateway/subresources/lists/methods/update/), making policy adjustments more efficient.
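The order of precedence can be sketched as a first-match evaluation over an ordered policy list: a narrow allow or block list placed before the broad security-risks policy overrides it for specific domains. The policy names and domains below are hypothetical.

```python
# Illustrative first-match evaluation of ordered DNS policies: an
# allow/block list placed before the broad security-risks policy
# overrides it for specific domains. Policy contents are hypothetical.

def evaluate(domain: str, policies: list) -> str:
    """policies is an ordered list of (domain_set, action) tuples."""
    for domains, action in policies:
        if domain in domains:
            return action   # first matching policy wins
    return "allow"          # default when no policy matches

policies = [
    ({"intranet.example.com"}, "allow"),                     # Allow List Policy
    ({"gambling.example"}, "block"),                         # Block List Policy
    ({"malware.example", "intranet.example.com"}, "block"),  # Security Risks
]
```

Here `evaluate("intranet.example.com", policies)` returns `"allow"` even though the domain also appears in the Security Risks set, because the allow list is evaluated first.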

![Figure 3: DNS policies are applied according to their order of precedence. In this example, the 'Allow List Policy' and 'Block List Policy' will be considered before the 'Security List' policy.](https://developers.cloudflare.com/_astro/gateway-dns-for-isp-image-03.Dy2ZZQ-9_Z7o2FY.svg) 

Additionally, all DNS queries forwarded to Cloudflare Gateway are logged and can be exported to external systems using [Logpush](https://developers.cloudflare.com/cloudflare-one/insights/logs/logpush/).

**Miscategorization of domains**

In cases of a miscategorization of domains, raise a [categorization change request](https://developers.cloudflare.com/security-center/investigate/change-categorization/#via-the-cloudflare-dashboard) directly from the Cloudflare dashboard.

## Additional offerings based on DNS filtering capabilities

Service providers can enhance their offerings by using Cloudflare Gateway DNS policies to deliver additional value-added services alongside the base DNS security service. By using the same solution, service providers can develop customized content category filtering services. These services can be easily constructed using Cloudflare's built-in [content categories](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/#content-categories) and [application types](https://developers.cloudflare.com/cloudflare-one/traffic-policies/application-app-types/), as well as the service provider's own custom allow and block lists.

Some potential applications include:

* **Parental Control Services**: This service can block categories such as adult themes, child abuse, violence, and questionable content to ensure a safer online environment for children.
* **Educational Services**: Designed for schools and educational organizations, this service can extend beyond parental controls by blocking additional categories like CIPA, gambling, and entertainment, thereby promoting a focused learning atmosphere.
* **Enterprise Services**: This offering allows businesses to easily restrict access to non-work-related domains, including categories such as entertainment, social networking, gambling, shopping & auctions, society & lifestyle, and sports.

To differentiate these additional services from the core DNS security offering, the service provider would create additional DNS locations, one for each service. Cloudflare would be able to distinguish DNS queries for these services if the service provider sends them to one of the unique identifiers of a location. Each location has a unique [DoH](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#dns-over-https-doh) and [DoT](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#dns-over-tls-dot) hostname and a unique [destination IPv6 address](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#ipv4ipv6-address). Cloudflare can also provision [dedicated destination IPv4 addresses](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/dns-resolver-ips/#dns-resolver-ip) per location.

## Related resources

* [Cloudflare Gateway DNS policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/)
* [Cloudflare Blog: Using the power of Cloudflare's global network to detect malicious domains using machine learning ↗](https://blog.cloudflare.com/threat-detection-machine-learning-models/)
* [Protect ISP and telecommunications networks from DDoS attacks](https://developers.cloudflare.com/reference-architecture/diagrams/network/protecting-sp-networks-from-ddos/)


---

---
title: Protective DNS for governments
description: Learn how to use Cloudflare Gateway as a Protective DNS service for governments.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Protective DNS for governments

**Last reviewed:**  over 1 year ago 

## Introduction

Protective DNS services are security services that analyze DNS queries and block access to malicious websites and other harmful online content. As technology becomes increasingly vital for public sector operations, government departments are looking to adopt these cybersecurity services to bolster incident detection and response, and to build more resilient enterprise networks. Traditionally, deploying this type of solution posed significant challenges due to the reliance on legacy systems that required costly on-premises hardware, which made the solution difficult to deploy and manage and introduced post-deployment struggles with scalability and availability.

Today, these limitations can be addressed through cloud-based solutions like [Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/), our Secure Web Gateway service. Cloudflare Gateway's DNS filtering capabilities allow administrators to offer enhanced security. With easy-to-create policies backed by Cloudflare's [extensive threat intelligence ↗](https://www.cloudflare.com/en-gb/security/), government agencies can effectively safeguard their end users from accessing potentially [harmful domains](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/#security-categories). Additionally, agencies can further strengthen these defenses by [integrating their own threat intelligence data ↗](https://developers.cloudflare.com/security-center/indicator-feeds/) into the policies.

Finally, Cloudflare Gateway eliminates concerns around availability, performance, and scalability, as it is built on [Cloudflare's 1.1.1.1 public DNS resolver](https://developers.cloudflare.com/1.1.1.1/), one of the [fastest ↗](https://www.dnsperf.com/#!dns-providers) and most widely used DNS resolvers in the world.

## Solution

Cloudflare provides flexible DNS deployment models, delivering robust protection for every user, regardless of location. The service supports both office-based and remote users, offering the adaptability needed to address diverse operational requirements.

### Office or site based users

IT administrators forward public DNS requests to Cloudflare where they are filtered and logged in accordance with the configured DNS filtering policies. DNS forwarders can either be the agency's private DNS infrastructure or networking appliances, such as routers deployed at remote sites and configured as local DNS servers.
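For example, a site router or server running dnsmasq could act as the local forwarder. This is a minimal sketch; 172.64.36.1 and 172.64.36.2 are Gateway's shared resolver IPv4 addresses, but confirm the addresses shown for your DNS location in the dashboard.

```txt
# /etc/dnsmasq.conf (sketch): forward all public DNS queries to Cloudflare Gateway
no-resolv
server=172.64.36.1
server=172.64.36.2
```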

![Figure 1: DNS requests can be forwarded to Cloudflare via a variety of different methods.](https://developers.cloudflare.com/_astro/gateway-for-protective-dns-image-01.CM-gqunL_1k1veI.svg "Figure 1: DNS requests can be forwarded to Cloudflare via a variety of different methods.")

Figure 1: DNS requests can be forwarded to Cloudflare via a variety of different methods.

To distinguish queries originating from the government departments and agencies they are responsible for, admins configure a location in the Cloudflare dashboard. When a DNS location is created, Gateway assigns IPv4/IPv6 addresses and DNS over TLS/HTTPS (DoT/DoH) hostnames for that location, and admins direct DNS queries to those addresses and hostnames for resolution. The administrator also configures the location object with the public IP addresses of their on-premises DNS servers, allowing Cloudflare to accurately associate queries with the corresponding location.
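A location like this can also be created programmatically. The sketch below only builds the request; the endpoint path and field names follow the Gateway locations API, while the account ID, location name, and source IPs are placeholders to replace with your own values.

```python
import json

ACCOUNT_ID = "your_account_id"  # placeholder
url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/locations"

payload = {
    "name": "agency-hq",      # hypothetical location name
    "client_default": False,  # keep an existing location as the default
    # Public IPv4 addresses of the on-premises DNS forwarders for this location
    "networks": [
        {"network": "203.0.113.10/32"},
        {"network": "203.0.113.11/32"},
    ],
}

body = json.dumps(payload)
# POST `body` to `url` with headers:
#   Authorization: Bearer <API_TOKEN>
#   Content-Type: application/json
```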

DNS filtering is then enforced through policies set up by the administrator to detect domains linked to [security risks](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/#security-categories). Cloudflare continuously updates the list of high risk domains using [its extensive threat intelligence ↗](https://www.cloudflare.com/security/). When a DNS query matches a flagged domain, the corresponding action specified in the DNS policy is executed. This action can be a '[Block](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/#block),' where Gateway responds with `0.0.0.0` for IPv4 queries or `::` for IPv6 queries, or displays a [custom block page hosted by Cloudflare](https://developers.cloudflare.com/cloudflare-one/reusable-components/custom-pages/gateway-block-page/). Alternatively, an [Override](https://developers.cloudflare.com/cloudflare-one/traffic-policies/dns-policies/#override) action or [block page URL redirect](https://developers.cloudflare.com/cloudflare-one/reusable-components/custom-pages/gateway-block-page/#redirect-to-a-block-page) can redirect the DNS query to a block page hosted by the government agency.
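Because the Block action answers with those fixed addresses, client-side tooling can detect a Gateway block deterministically. A small sketch:

```python
import ipaddress

def is_gateway_blocked(answer: str) -> bool:
    """Return True if a resolved answer matches Gateway's Block responses:
    0.0.0.0 for IPv4 (A) queries or :: for IPv6 (AAAA) queries."""
    try:
        addr = ipaddress.ip_address(answer)
    except ValueError:
        # Not an IP literal (for example, a CNAME target) - not a block response
        return False
    return addr in (ipaddress.ip_address("0.0.0.0"), ipaddress.ip_address("::"))
```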

Cloudflare's own threat intelligence can be seamlessly integrated with threat intelligence data provided by the agency or third-party sources. In this setup, the agency or the third-party entity acts as a [threat feed provider](https://developers.cloudflare.com/security-center/indicator-feeds/) to Cloudflare. This enables IT admins to create DNS policies that combine Cloudflare's security risk categories with the data sourced by the agency, for a unified and enhanced security posture (see diagram below). Additionally, [publicly available custom indicator feeds](https://developers.cloudflare.com/security-center/indicator-feeds/#publicly-available-feeds) can be accessed by eligible public and private sector organizations without the need to establish a provider relationship, further expanding security capabilities.

![Figure 2: Example DNS policy showing the use of a custom threat intel feed.](https://developers.cloudflare.com/_astro/gateway-for-protective-dns-image-02.CWdOzGbA_ZuK8CM.svg "Figure 2: Example DNS policy showing the use of a custom threat intel feed.")

Figure 2: Example DNS policy showing the use of a custom threat intel feed.

### Remote users

For users not connected to an agency network, you can redirect DNS requests to Cloudflare by using the DNS over HTTPS ([DoH](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/dns-over-https/)) hostname provided by a [location](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/locations/). This requires [configuration on each device](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/dns-over-https/#filter-doh-requests-by-location), which can be done using existing management solutions. This approach can be enhanced by incorporating [a user-specific authentication token](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/dns-over-https/#filter-doh-requests-by-user). These tokens enable Cloudflare to attribute DNS queries to individual users, providing granular visibility and facilitating the application of user-specific policies.

For more advanced identity-based DNS policies, Cloudflare's device agent can be deployed. In this setup, users authenticate to the device agent via [an identity provider integrated with Cloudflare](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/). The agent is then configured in [DNS only mode](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/modes/#dns-only-mode), ensuring that all public DNS queries from the device are forwarded to Cloudflare. These queries include the user identity from the device, enabling identity-based policy enforcement.

![Figure 3: Showing how remote users can also redirect DNS requests for protection via Cloudflare.](https://developers.cloudflare.com/_astro/gateway-for-protective-dns-image-03.CNrab47I_27vHhA.svg "Figure 3: Showing how remote users can also redirect DNS requests for protection via Cloudflare.")

Figure 3: Showing how remote users can also redirect DNS requests for protection via Cloudflare.

The following policy shows how group information from the Identity provider can be used to apply specific protective DNS policies.

![Figure 4: An example of a DNS policy for users with the device agent. The policy uses group information from the identity provider so that it applies to a specific audience of users.](https://developers.cloudflare.com/_astro/gateway-for-protective-dns-image-04.Dz-unZHM_ZR4bn7.svg "Figure 4: An example of a DNS policy for users with the device agent. The policy uses group information from the identity provider so that it applies to a specific audience of users.")

Figure 4: An example of a DNS policy for users with the device agent. The policy uses group information from the identity provider so that it applies to a specific audience of users.

The device agent is compatible with the [leading desktop and mobile operating systems](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/download/), making it a solution for both managed and unmanaged devices. This versatility enables DNS security services to be extended, for example, to the personal devices of high-risk individuals, ensuring a consistent level of protection regardless of location or device. For managed IT devices, our agent supports [managed deployment tools](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/) for ease of deployment and upgrades.

### Additional controls

To achieve more precise control over which domains are allowed or blocked, the administrator can configure additional Allowed Domain and Blocked Domain policies. By setting these policies with [lower precedence](https://developers.cloudflare.com/cloudflare-one/traffic-policies/order-of-enforcement/#order-of-precedence) than the Security Risks policy, the agency can override the Security Risks policy for specific domains.

To streamline the management of allowed and blocked domains, use [lists](https://developers.cloudflare.com/cloudflare-one/reusable-components/lists/). Lists are easily updated through the dashboard or via [APIs](https://developers.cloudflare.com/api/operations/zero-trust-lists-update-zero-trust-list), making policy adjustments more efficient.
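For instance, a plain-text file of domains can be turned into the item format used when updating a list through the API. In this sketch, the `DOMAIN` list type and the `items`/`value` field names follow the lists API linked above, while the function and file names are illustrative.

```python
import json

def build_list_payload(name: str, domains: list[str]) -> str:
    """Build the JSON body for updating a Zero Trust list of domains,
    skipping blank lines and normalizing case."""
    items = [{"value": d.strip().lower()} for d in domains if d.strip()]
    return json.dumps({"name": name, "type": "DOMAIN", "items": items})

# Example: read domains from a local file, one per line
# with open("blocked-domains.txt") as f:
#     body = build_list_payload("agency-blocklist", f.readlines())
```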

![Figure 5: Show how lists can be used to provide custom hostname lists in the policy.](https://developers.cloudflare.com/_astro/gateway-for-protective-dns-image-05.DhzPgkVx_Z4ALxB.svg "Figure 5: Show how lists can be used to provide custom hostname lists in the policy.")

Figure 5: Show how lists can be used to provide custom hostname lists in the policy.

### Visibility

One of the key advantages of adopting Cloudflare Gateway as a protective DNS service is the enhanced visibility it provides IT administrators into existing and emerging threats impacting governmental departments and agencies. All DNS queries sent to Cloudflare Gateway are logged, and when an identity is associated with a query, it is mapped to the corresponding user in the logs.

Note

The ability to view personally identifiable information (PII) in Cloudflare Gateway logs is a [role-based permission](https://developers.cloudflare.com/cloudflare-one/roles-permissions/#cloudflare-zero-trust-pii) that can be selectively assigned to IT administrators.

These logs are accessible directly through [Cloudflare's dashboard](https://developers.cloudflare.com/cloudflare-one/insights/logs/dashboard-logs/gateway-logs/) or can be exported to external systems for further analysis via [Logpush](https://developers.cloudflare.com/cloudflare-one/insights/logs/logpush/). Cloudflare also offers robust analytics capabilities, empowering IT administrators to detect trends and identify indicators of compromise. A built-in analytics dashboard is available in [Cloudflare's dashboard](https://developers.cloudflare.com/cloudflare-one/insights/analytics/gateway/), and custom dashboards can be created using any GraphQL-compatible tool using [Cloudflare's GraphQL API](https://developers.cloudflare.com/analytics/graphql-api/).
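As a sketch of the kind of query a custom dashboard might run, the GraphQL document below groups Gateway DNS queries by resolver decision over a time window. Treat the dataset and dimension names (`gatewayResolverQueriesAdaptiveGroups`, `resolverDecision`) as assumptions to verify against the published GraphQL schema.

```python
# GraphQL query kept as a Python string so it can be posted with any HTTP client.
GATEWAY_DNS_QUERY = """
query GatewayDnsByDecision($accountTag: string!, $start: Time!, $end: Time!) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      gatewayResolverQueriesAdaptiveGroups(
        limit: 100
        filter: { datetime_geq: $start, datetime_lt: $end }
      ) {
        count
        dimensions {
          resolverDecision
        }
      }
    }
  }
}
"""
```

POST this document, with the `accountTag`, `start`, and `end` variables and an API token, to `https://api.cloudflare.com/client/v4/graphql`.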

## Additional capabilities

Cloudflare Gateway offers a comprehensive suite of services that go beyond protective DNS, functioning as a full-featured [Secure Web Gateway ↗](https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/). It supports HTTP inspection, providing deeper visibility into user traffic, and expands the scope of threat protection and data security capabilities available to users.

When inspecting HTTP traffic, Cloudflare decrypts, inspects, and re-encrypts HTTPS requests within its data centers. Cloudflare Gateway only stores eligible cache content at rest, and all cache disks are encrypted. Furthermore, the geographical region of the servers where TLS decryption takes place can be configured with [Regional Services](https://developers.cloudflare.com/data-localization/regional-services/) in the Cloudflare [Data Localization Suite](https://developers.cloudflare.com/data-localization/) (DLS), and organizations can choose between adding a Cloudflare certificate on devices or [using their own certificate](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/user-side-certificates/custom-certificate/) (BYOPKI) for decryption and inspection of user traffic.

### Threat protection

When Cloudflare Gateway is performing HTTP inspection, it extends protection beyond DNS security by enabling additional capabilities to safeguard users as they browse the Internet:

* **Anti-virus scanning (AV):** Users are protected when downloading or uploading files to or from the Internet. [Files are scanned](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/antivirus-scanning/) in real time to detect malicious content.
* **Sandboxing:** For files not previously seen, Cloudflare Gateway can [quarantine them in a secure sandbox environment for analysis](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/file-sandboxing/). In this sandbox, Cloudflare monitors the file's actions and compares them against known malware patterns. Files are only released to users if no malicious content is detected.
* **Remote Browser Isolation (RBI):** [Isolation policies](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/) can be configured to safeguard users when accessing potentially risky websites. For example, [if a user attempts to visit a newly seen domain that triggers an isolation policy](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/isolation-policies/), the website's active content is executed in a secure, isolated browser hosted in the nearest Cloudflare data center. This ensures that zero-day attacks and malware are mitigated before they can impact the user. This remote browsing experience is seamless and transparent, allowing users to continue using their preferred browsers and workflows. Every browser tab and window is automatically isolated, and sessions are deleted when closed.

### Data protection

In addition to threat protection, Cloudflare Gateway enables the implementation of robust data protection policies during HTTP inspection, including:

* **File upload controls:** Administrators can enforce policies that monitor and [restrict file uploads](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/#download-and-upload-file-types) to the Internet, preventing the inadvertent sharing of sensitive data.
* **Data Loss Prevention (DLP):** [DLP policies](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/) can be deployed to identify and block unauthorized sharing of confidential or classified information. For more details, see [securing data in transit](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-transit/).
* **Remote Browser Isolation (RBI):** Beyond threat protection, [isolation policies](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/) can enforce [user action restrictions](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/isolation-policies/#policy-settings), such as disabling copy/paste functionality or keyboard inputs, to safeguard sensitive information. For additional information, refer to [securing data in use](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-use/).

## Adopting Cloudflare Gateway as Secure Web Gateway

Expanding Cloudflare Gateway from a protective DNS service to a full-featured Secure Web Gateway is a straightforward process. Using Cloudflare's dashboard, IT administrators would configure [HTTP policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/) in addition to existing DNS policies. These HTTP policies would enable the additional protections, namely, Antivirus Scanning, Sandboxing, Remote Browser Isolation (RBI), and Data Loss Prevention (DLP).

From the user's perspective, remote workers would continue using the same device agent. To leverage these enhanced protections, they simply need to switch the device agent to [Traffic and DNS mode](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/modes/#traffic-and-dns-mode-default). This mode can also be enforced when using device management to deploy the agent.

For office and site-based users, a network appliance can be configured to establish an [IPsec or GRE tunnel to Cloudflare](https://developers.cloudflare.com/cloudflare-wan/). This setup routes all Internet-bound traffic through Cloudflare Gateway, ensuring that security policies are applied before traffic exits to the Internet. Alternatively, [Proxy Auto-Configuration (PAC) files](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/proxy-endpoints/) can be used to forward DNS and HTTP/S traffic to Cloudflare.

![Figure 6: The different options available to use Cloudflare Gateway as a full-featured Secure Web Gateway.](https://developers.cloudflare.com/_astro/gateway-for-protective-dns-image-06.C-pVIjaU_Uz0sQ.svg "Figure 6: The different options available to use Cloudflare Gateway as a full-featured Secure Web Gateway.")

Figure 6: The different options available to use Cloudflare Gateway as a full-featured Secure Web Gateway.

## Related resources

* [Evolving to a SASE architecture with Cloudflare](https://developers.cloudflare.com/reference-architecture/architectures/sase/)
* [Using a zero trust framework to secure SaaS applications](https://developers.cloudflare.com/reference-architecture/design-guides/zero-trust-for-saas/)
* [Learning path: Secure your Internet traffic and SaaS apps](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/concepts/)


---

---
title: Access to private apps without having to deploy client agents
description: Learn how to provide access to private apps without having to deploy client agents.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Access to private apps without having to deploy client agents

**Last reviewed:**  about 2 years ago 

## Introduction

Using Cloudflare to access private resources - such as applications, servers, and networks that are not exposed directly to the Internet - usually involves deploying an [agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/) to user devices, and a server-side connector ([cloudflared](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/cloudflared/) or [WARP Connector](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/warp-connector/)) to connect the private network or application to Cloudflare. This document describes an alternative approach that removes the need to deploy software to the user's device, making it easier to allow third-party access for contractors and partners.

Typically, to provide access to internal resources, you use Cloudflare Zero Trust Network Access ([ZTNA ↗](https://www.cloudflare.com/learning/access-management/what-is-ztna/)), which supports two methods for a user device to access a private resource:

* A CNAME in public DNS that resolves to a hostname representing the Cloudflare Tunnel, which proxies requests to the internal application.
* An IP address exposed by Cloudflare Tunnel, which, again, proxies traffic sent to that IP address.

## Accessing private applications

Some organizations object to public DNS records that reference internal services; even though the ZTNA service provides strong access security, the mere existence of a service name in public DNS may be undesirable. Exposing IP addresses directly to users is also problematic: they are hard to remember, and they can change. Unlike accessing a web application via a public DNS record through our proxy, applications exposed via private IP addresses also require the user to install an agent on their device to capture and route the traffic to Cloudflare, which in turn routes it to the application. Installing this agent can be a challenge with third parties such as partners or contractors.

So how do you allow access to private resources without creating public DNS records and without requiring users to install software on their devices? Cloudflare solves this challenge with [Resolver Policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/resolver-policies/), which let Cloudflare resolve names using your internal DNS servers. Combined with agentless [Remote Browser Isolation](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/), this makes it possible to provide Zero Trust access to private web applications with only a modern web browser. Policies to control access to apps are then written in our Secure Web Gateway (SWG) service as [network firewall](https://developers.cloudflare.com/cloudflare-one/traffic-policies/network-policies/) policies. This method supports HTTP-based applications, although Cloudflare also provides a browser rendering service for SSH and VNC services.

Follow this [tutorial](https://developers.cloudflare.com/cloudflare-one/tutorials/clientless-access-private-dns/) for information on how to configure secure access to private web-based resources without having to deploy client agents.

![Figure 1: Remote Access Internal Hostname](https://developers.cloudflare.com/_astro/diagram1.CgnmLabJ_1lNR1W.svg "Figure 1: Remote browser connected to private web service using internal hostname")

Figure 1: Remote browser connected to private web service using internal hostname

1. Users start their access by authenticating to the [Cloudflare Browser Isolation ↗](https://your%5Fteam%5Fdomain.cloudflareaccess.com/browser) service. Note this is a browser running on Cloudflare’s edge network, therefore all requests will by default be handled by Cloudflare. The contents are rendered back to the users’ browser via secure encrypted vector streams that use HTTPS and WebRTC channels.
2. Once the user has authenticated to the remote browser, they make a request to an internal hostname which is a record in the internal DNS service. e.g. [https://app.company.internal ↗](https://app.company.internal)
3. Cloudflare looks up the internal hostname using [resolver policies](https://developers.cloudflare.com/cloudflare-one/traffic-policies/resolver-policies/), and gets the private IP address from the internal DNS server. This DNS resolution takes place within the Cloudflare network and requires no DNS client changes on the user's device.
4. Cloudflare evaluates the network firewall policies and verifies if the user has permission to reach the destination addresses.
5. If the request passes policy evaluation, it is sent via secure [QUIC ↗](https://blog.cloudflare.com/getting-cloudflare-tunnels-to-connect-to-the-cloudflare-network-with-quic) tunnels to the cloudflared connectors, which then reverse proxy it to the application servers. All data is transmitted securely through Cloudflare back to the user's browser via encrypted vector streams.

## Related resources

* [Tutorial: Access a web application via its private hostname without the Cloudflare One Client](https://developers.cloudflare.com/cloudflare-one/tutorials/clientless-access-private-dns/)


---

---
title: Secure access to SaaS applications with SASE
description: Cloudflare's SASE platform offers the ability to bring a more Zero Trust-oriented approach to securing SaaS applications. Centralized policies, based on device posture, identity attributes, and granular network location, can be applied across one or many SaaS applications.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Secure access to SaaS applications with SASE

**Last reviewed:**  over 1 year ago 

## Introduction

SaaS applications have become essential tools in today's business operations. While SaaS applications reduce IT and infrastructure burden, they also introduce new security challenges that traditional architectures struggle to address. Many companies today are on the path to implementing a [Zero Trust architecture ↗](https://zerotrustroadmap.org/), which heavily combines identity, device and network information to better secure applications.

However, SaaS applications tend to focus their security on their own platform, such as storing data at rest securely and ensuring the application does not expose customer data through vulnerabilities. This document covers how to address some of the limitations of SaaS applications by using Cloudflare's Secure Access Service Edge (SASE) platform, specifically our Zero Trust Network Access (ZTNA) and Secure Web Gateway (SWG) services, combined with integrations with your existing identity and device security vendors.

## Is SaaS not already secure?

Before discussing the specifics of implementing SASE for SaaS applications, we should consider asking: is SaaS not already secure? Major providers like Salesforce, ServiceNow, Microsoft and more have implemented robust security capabilities, including integrations with identity providers for Single Sign On (SSO), SSL/TLS for all application communication, encryption of data at rest and comprehensive audit logs. Unfortunately, SaaS vendors are not attempting to rebuild entire security platforms in their applications, so they are not able to provide many features required for a modern Zero Trust architecture.

SaaS applications are unable to evaluate the security posture of connecting devices: a compromised laptop with valid credentials appears identical to a securely managed corporate device. When data is downloaded from the SaaS application, the application has no visibility into where it goes or whether the destination device is secure. Authentication for SaaS applications is typically externalized by redirecting users to an identity service, so the SaaS application has no insight into how the user authenticated, and all trust is placed in the identity provider.

These security challenges are compounded by poor network access controls. Most SaaS applications accept connections from any Internet source; at best, access can be limited to a specific set of IP addresses associated with one or more physical offices. Such rudimentary network controls are hard to extend to remote users working from home, or to partners and contractors who need access.

Cloudflare's SASE platform offers the ability to bring a more Zero Trust-oriented approach to securing SaaS applications. Centralized policies, based on device posture, identity attributes, and granular network location, can be applied across one or many SaaS applications. Cloudflare becomes the new corporate network, and access to Internet-based SaaS applications can be gated to only those users and devices that are connected to Cloudflare: essentially, a new corporate network in the cloud.

## Securing access with Cloudflare

The diagram below shows how Cloudflare sits between your users, devices, and networks that require access to any SaaS application. The two main services providing security capabilities are:

* [Zero Trust Network Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/). Allows Cloudflare to become an identity proxy, so that you can easily enable authentication to a single SaaS application with a wide variety of identity providers. This service also incorporates the ability to evaluate access based on device posture and network location.
* [Secure Web Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/). Once all traffic to the SaaS application flows through our gateway, HTTPS connections are terminated at Cloudflare and you can inspect the data flowing to and from the application. This allows you to block sensitive data from being exported to insecure locations.

![Figure 1: Only traffic that has passed the Cloudflare network and relevant policies is authorized to access the SaaS application.](https://developers.cloudflare.com/_astro/figure1.CyQmr5MZ_Z1rhMkf.svg "Figure 1: Only traffic that has passed the Cloudflare network and relevant policies is authorized to access the SaaS application.")

Figure 1: Only traffic that has passed the Cloudflare network and relevant policies is authorized to access the SaaS application.

The above diagram shows the variety of ways in which traffic can on-ramp to Cloudflare, where the ZTNA service ensures authentication and the Secure Web Gateway filters both inbound and outbound traffic to and from the SaaS application.

1. Initial requests to the SaaS application are redirected to Cloudflare as part of the SSO flow. The ZTNA service authenticates users against existing identity providers.
2. A user, authenticated or not, is denied access to the SaaS application if their traffic is not sourced from Cloudflare.
3. A user on a non-managed device can use browser isolation, where the browser accessing the SaaS application runs on a Cloudflare server, and the results of the rendered page are securely delivered to a user's local browser.
4. A managed device is connected to Cloudflare using a secure tunnel and therefore all communication from device to SaaS application is filtered and secured.  
   1. Cloudflare agent device posture can also be incorporated into authorizing traffic from these devices.
5. A device connected to a local network whose Internet traffic is routed to Cloudflare via a secure IPsec tunnel is likewise filtered and secured on its way to the SaaS application.
6. Traffic then passes through our secure web gateway, where DNS and HTTP policies can be applied to traffic.  
   1. HTTP policies allow the examination of the data being both uploaded and downloaded from the SaaS application using DLP profiles.
7. Traffic egresses Cloudflare with a dedicated IP address. The SaaS application is configured to allow only traffic coming from that address.
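The flow above amounts to a single authorization decision. A minimal sketch in Python, using hypothetical field names rather than Cloudflare's actual policy API:

```python
from dataclasses import dataclass

# Hypothetical request context; field names are illustrative, not Cloudflare's API.
@dataclass
class Request:
    user_authenticated: bool
    identity_group: str
    device_posture_ok: bool   # e.g. agent-reported posture or an XDR score
    on_cloudflare: bool       # traffic arrived via device agent, IPsec, or isolated browser

def authorize(req: Request, allowed_groups: set[str]) -> bool:
    """Mirror the flow: must be on Cloudflare, authenticated, in-policy, healthy device."""
    if not req.on_cloudflare:        # step 2: off-network traffic is denied outright
        return False
    if not req.user_authenticated:   # step 1: ZTNA/SSO authentication
        return False
    if req.identity_group not in allowed_groups:
        return False
    return req.device_posture_ok     # step 4a: device posture gates managed devices

print(authorize(Request(True, "Sales", True, True), {"Sales"}))   # True
print(authorize(Request(True, "Sales", True, False), {"Sales"}))  # False: not via Cloudflare
```

Each condition maps to one of the numbered steps; in practice these checks are expressed as Cloudflare Access and Gateway policies rather than code.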

**XDR platform integrations**

When integrating with an XDR platform such as CrowdStrike, SentinelOne, or Microsoft Intune, device posture is also available for any authenticated user: Cloudflare matches the identity against the user in the XDR system and evaluates that system's device posture information.

## Example policy

The following is an example set of policies which demonstrate how you can use Cloudflare to secure access to Salesforce.

The first step is using an [egress IP policy under Cloudflare Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/). This allows you to purchase and assign specific IPs to users whose traffic is filtered via Gateway. Then in Salesforce, you enforce that access is only permitted for traffic with a source IP that matches the one in your egress policy. This combination ensures that the only way to get access to Salesforce is via Cloudflare.

| Egress Policy                       |               |
| ----------------------------------- | ------------- |
| **Identity**                        |               |
| User Group Names                    | All Employees |
| **Select Egress IP**                |               |
| Use dedicated Cloudflare Egress IPs | 203.0.113.88  |
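On the SaaS side, the enforcement is conceptually a source-IP allowlist (in Salesforce, configured as login IP restrictions on user profiles). A sketch of that check, using the example egress IP from the table above:

```python
import ipaddress

# Illustrative allowlist: only the dedicated Cloudflare egress IP is admitted.
# 203.0.113.88 is the documentation-range example address from the policy table.
DEDICATED_EGRESS = {"203.0.113.88"}

def saas_admits(source_ip: str) -> bool:
    """True only when the request's source IP is the dedicated Cloudflare egress IP."""
    return str(ipaddress.ip_address(source_ip)) in DEDICATED_EGRESS

print(saas_admits("203.0.113.88"))  # True: traffic egressing Cloudflare
print(saas_admits("198.51.100.7"))  # False: direct Internet access is refused
```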

This is important not only for securing access to Salesforce, but also for adequately protecting its contents while in use. Now let us examine the access policy that limits access to members of the Sales or Executives groups. It also uses our CrowdStrike integration to ensure that users are on company-managed devices.

| Policy name                    | Account executives on trusted devices |
| ------------------------------ | ------------------------------------- |
| Action                         | Allow                                 |
| **Include**                    |                                       |
| Member of group                | Sales, Executives                     |
| **Require**                    |                                       |
| Authentication method          | MFA - multi-factor authentication     |
| Gateway                        | On                                    |
| Crowdstrike Service to Service | Overall Score above 80                |

This second policy applies to all employees, but we are going to apply a few more steps before access is granted.

| Policy name                    | Employees on trusted devices      |
| ------------------------------ | --------------------------------- |
| Action                         | Allow                             |
| **Include**                    |                                   |
| Member of group                | All Employees                     |
| **Require**                    |                                   |
| Authentication method          | MFA - multi-factor authentication |
| Gateway                        | On                                |
| Crowdstrike Service to Service | Overall Score above 80            |
| **Additional Settings**        |                                   |
| Purpose justification          | On                                |
| Temporary authentication       | On                                |
| Email addresses of approvers   | salesforce-admin@company.com      |

We are going to add temporary authentication to this second policy. That means if Cloudflare determines that an incoming request is from someone outside of the Sales or Executives groups, an administrator will need to explicitly grant them temporary access. In context, this policy could be used to secure Salesforce access for employees outside the Sales department, since the customer information could be sensitive and confidential.

This approach is important for several reasons:

* It allows for human oversight on potentially risky access attempts, reducing the chance of unauthorized access through compromised or insecure devices.
* It provides flexibility for legitimate users to access the application even when their device fails to meet the highest security standards. This encourages users to maintain good security practices on their devices.
* In addition, since all user traffic is routed through Cloudflare, we can enforce additional security measures (such as preventing the download of sensitive data) via web traffic policies.

Cloudflare's SASE platform allows organizations to centralize security policy for accessing SaaS applications, and enhances security by letting you leverage device posture and network attributes. You can configure it so that your SaaS application is only accessible via your new corporate network built on Cloudflare.

## Related Resources

* [Evolving to a SASE architecture with Cloudflare](https://developers.cloudflare.com/reference-architecture/architectures/sase/)
* [Designing ZTNA access policies for Cloudflare Access](https://developers.cloudflare.com/reference-architecture/design-guides/designing-ztna-access-policies/)
* [Access to private apps without having to deploy client agents](https://developers.cloudflare.com/reference-architecture/diagrams/sase/sase-clientless-access-private-dns/)


---

---
title: Zero Trust and Virtual Desktop Infrastructure
description: This document provides a reference and guidance for using Cloudflare's Zero Trust services. It offers a vast improvement over remote access to web applications with greater security.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Zero Trust and Virtual Desktop Infrastructure

**Last reviewed:**  over 1 year ago 

## Introduction

Virtual Desktop Infrastructure (VDI) is old, costly, and clunky for a number of reasons, including poor user experience, high upfront investment, and ongoing operational costs, which you can read about in detail [here ↗](https://blog.cloudflare.com/decommissioning-virtual-desktop/). We recognize and empathize with the challenges that lead many organizations to continue relying on this approach. This reference architecture describes how Cloudflare's Zero Trust solution can help organizations secure their virtual desktop infrastructure (VDI) and, in many cases, offload it entirely. Many organizations use expensive, poorly performing VDI only to provide a secure web browser to their remote users. In these cases, Cloudflare can offload the use of VDI entirely for web-based applications or SaaS apps.

In other cases, a full virtualized desktop may be necessary for legacy apps, yet organizations still need help securing remote access to their VDI or securing the virtualized desktops themselves once users are interacting with them. This document provides a reference and guidance for using Cloudflare's Zero Trust services and is split into two main sections.

* Replacing your VDI for secure remote access to web-based applications. Accessing a full-blown desktop environment just to use a web browser is not the best experience for users. Cloudflare offers a vastly improved, and more secure, alternative for remote access to web applications.
* Securing your VDI desktops...  
   * From unauthorized access.  
   * From risky public Internet destinations.

### Who is this document for and what will you learn?

This reference architecture is designed for IT or security professionals who are looking at using Cloudflare to replace or secure their Virtual Desktop Infrastructure. To build a stronger baseline understanding of Cloudflare, we recommend the following resources:

* [Decommissioning your VDI Blog Post ↗](https://blog.cloudflare.com/decommissioning-virtual-desktop/)
* [Leveraging Cloudflare's Secure Web Gateway with PAC files for VDI](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/configure-device-agent/pac-files/#use-cases)

## Replacing Your VDI

In today's IT landscape, most applications and services that companies rely on are accessible through a web browser and often delivered by a SaaS provider. In these cases, VDI is overkill: an incredibly expensive and burdensome way to provide a secure browser to a remote user. Instead, many organizations are turning to alternatives such as a [Remote Browser Isolation ↗](https://www.cloudflare.com/zero-trust/products/browser-isolation/) (RBI) service. These services lower costs and overhead, provide a better user experience, and, most importantly, offer robust security and logging features.

![Figure 1: Remote browser isolation can provide a secure, controlled browser environment for accessing sensitive company applications.](https://developers.cloudflare.com/_astro/figure1.DA3CfHpk_Z1Gtz2p.svg "Figure 1: Remote browser isolation can provide a secure, controlled browser environment for accessing sensitive company applications.")

Figure 1: Remote browser isolation can provide a secure, controlled browser environment for accessing sensitive company applications.

The diagram above shows the general flow of how user traffic goes from their local browser to Cloudflare's remote browser and then to applications hosted on their infrastructure over a secure tunnel. Figure 2 below shows how users can access applications using remote browser isolation either directly in a browser or, if you require greater privacy and security for the traffic, using our device agent to create a tunnel from the device to Cloudflare. Both methods provide secure access to internal and external resources.

![Figure 2: Two different traffic flow options: clientless RBI & RBI using the device agent.](https://developers.cloudflare.com/_astro/figure2.BTMnNCIU_WSlB9.svg "Figure 2: Two different traffic flow options: clientless RBI & RBI using the device agent.")

Figure 2: Two different traffic flow options: clientless RBI & RBI using the device agent.

**Option 1: Clientless RBI**

* Device agent not required
* RBI URL can be protected by an [Access policy](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) with authentication
* A simpler way to begin rolling out Cloudflare Zero Trust while transitioning away from VDI
* A great option for third-party contractors who cannot install software on their devices

**Option 2: RBI via the device agent**

* Provides full security capabilities, including device posture checks, split tunneling, and the ability to use the Secure Web Gateway service to filter Internet-bound traffic
* A more robust end state to transition to once workflows and confidence are built with users and internal teams
* Gathers end-user metrics around user experience, reliability, and performance

## Securing Your VDI

### Securing access to your VDI using Zero Trust policies

When replacing your VDI is not an option and a fully virtualized desktop is required for legacy applications, Cloudflare's [SASE platform ↗](https://www.cloudflare.com/zero-trust/) can still help secure these environments by authorizing the access to them using identity based Zero Trust policies, as well as securing the Internet bound traffic from the devices themselves.

![Figure 3: Using Cloudflare Access ZTNA to secure VDI.](https://developers.cloudflare.com/_astro/figure3.CQN_cSLv_2rS1rC.svg "Figure 3: Using Cloudflare Access ZTNA to secure VDI.")

Figure 3: Using Cloudflare Access ZTNA to secure VDI.

The diagram above displays a general Zero Trust deployment using best practices for authenticating your remote users to the VDI infrastructure:

1. The user device sends traffic to Cloudflare's network over a secure tunnel using the device agent.
2. Traffic destined to the VDI resources reaches ZTNA policies where it is evaluated for any combination of conditional access criteria, including device posture, identity and traffic context or type.
3. Traffic that passes the ZTNA policies is allowed to reach the VDI resources where the user can interact with the VDI normally.

This model can also be combined with the options below, which demonstrate how to filter traffic sourced from the VDI hosts as well.

### Securing traffic from your VDI using secure web gateway policies

Cloudflare's SASE platform is capable of much more than replacing VPNs and enforcing policies for internal services. It is just as important to protect users from accessing high-risk sites on the Internet. Policies in Cloudflare's Secure Web Gateway can be tuned to filter DNS requests or act as a sophisticated full forward proxy, inspecting both network and HTTP traffic as it heads toward the open Internet.

![Figure 4: Using Cloudflare's Secure Web Gateway to filter and protect traffic coming from VDI.](https://developers.cloudflare.com/_astro/figure4.DPa0cH6R_Z2r3Lh6.svg "Figure 4: Using Cloudflare's Secure Web Gateway to filter and protect traffic coming from VDI.")

Figure 4: Using Cloudflare's Secure Web Gateway to filter and protect traffic coming from VDI.

1. **DNS configurations** (resolver IPs, DoH, DoT) or **PAC files** for non-persistent VDI environments can be configured within the infrastructure or directly on the VDI hosts.  
   a. DNS configurations allow DNS policies to be enforced, while PAC files allow all Gateway policy types (DNS, network, and HTTP).
2. Traffic is sent from the VDI to the Secure Web Gateway, where it is filtered by DNS, network, or HTTP policies.
3. Traffic is sent to the Internet if it is allowed past Gateway policies.

## Summary

We have seen several ways to incorporate Cloudflare's Zero Trust services with your existing VDI: replacing it completely in favor of Remote Browser Isolation technology, or further securing it with our [Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) or [Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) services.

For more thorough background, explanation, and action steps toward a smooth migration, be sure to read the following resources:

* [Decommissioning your VDI Blog Post ↗](https://blog.cloudflare.com/decommissioning-virtual-desktop/)
* [Leveraging Cloudflare's Secure Web Gateway with PAC files for VDI](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/configure-device-agent/pac-files/#use-cases)
* [Connect to private network services with Browser Isolation ↗](https://blog.cloudflare.com/browser-isolation-private-network/)
* [Clientless Web Isolation](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/setup/clientless-browser-isolation)
* [Determine When to use PAC Files](https://developers.cloudflare.com/learning-paths/secure-internet-traffic/configure-device-agent/pac-files/#use-cases)
* [Agentless DNS Configurations](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/dns/)
* [PAC Files for Agentless HTTP Filtering](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/proxy-endpoints/)

As always, if you have any questions on these services, be sure to reach out to your Cloudflare team or contact us to [talk to an expert ↗](https://www.cloudflare.com/products/zero-trust/plans/enterprise/).


---

---
title: FIPS 140 level 3 compliance with Cloudflare Application Services
description: This document outlines a reference architecture for achieving Federal Information Processing Standard (FIPS) 140 Level 3 compliance using Cloudflare's Application Services.
image: https://developers.cloudflare.com/core-services-preview.png
---


# FIPS 140 level 3 compliance with Cloudflare Application Services

**Last reviewed:**  about 1 year ago 

## Introduction

This document outlines a reference architecture for achieving Federal Information Processing Standard (FIPS) 140 Level 3 compliance using Cloudflare's Application Services. FIPS 140 is a U.S. government standard that specifies security requirements for cryptographic modules protecting sensitive information in computer and telecommunication systems.

FIPS 140 defines four security levels, with Level 3 being the most stringent for non-military applications. It mandates physical tamper-resistance to prevent unauthorized access to cryptographic keys and critical security parameters. This includes measures like robust enclosures, tamper-evident seals, and identity-based authentication.

Achieving FIPS 140 compliance, particularly Level 3, is crucial for organizations handling sensitive data, especially those in regulated industries like:

* **Government**: Federal agencies and contractors processing sensitive government information.
* **Healthcare**: Organizations handling protected health information (PHI) under HIPAA.
* **Financial Services**: Institutions dealing with financial transactions and customer data.
* **Defense**: Contractors working on defense projects requiring stringent security measures.

FIPS 140 compliance demonstrates a strong commitment to data security, builds trust with customers and partners, and ensures adherence to regulatory requirements. This reference architecture provides a comprehensive guide to leveraging Cloudflare's robust security features to meet these stringent standards.

## FIPS 140-3 levels

Organizations use the FIPS 140-3 standard to ensure that the hardware they select meets specific security requirements. The FIPS certification standard defines four increasing, qualitative levels of security.

* **Level 1**: Requires production-grade equipment and externally tested algorithms.
* **Level 2**: Adds requirements for physical tamper-evidence and role-based authentication.
* **Level 3**: Adds requirements for physical tamper-resistance and identity-based authentication. There must also be physical or logical separation between the interfaces by which critical security parameters enter and leave the module. Private keys can only enter or leave in encrypted form. Level 3 also requires the module to detect and react to out-of-range voltage or temperature (environmental failure protection, or EFP), or alternatively undergo environmental failure testing (EFT).
* **Level 4**: Makes the physical security requirements more stringent, requiring the module to be tamper-active, erasing its contents if it detects various forms of environmental attack. EFP, protection against fault injection, and multi-factor authentication are also required.

## Key components

* **Cloudflare Keyless SSL**: A service that allows organizations to use Cloudflare's SSL/TLS protection while keeping their private keys stored in their own infrastructure. Private keys remain under their control and never leave their premises, while still benefiting from Cloudflare's DDoS protection and performance optimization features.
* **Cloudflare Tunnel**: Provides a secure, encrypted connection between Cloudflare's global network and the private infrastructure hosting CloudHSM, protecting data in transit.
* **Hardware Security Module (HSM)**: A FIPS 140-2 Level 3 compliant HSM that securely manages cryptographic keys. Cloudflare supports a range of HSMs, including AWS CloudHSM, Azure Dedicated HSM, and Google Cloud HSM.

## Architecture overview

The architecture diagram below illustrates the key components and data flow for achieving FIPS 140 Level 3 compliance with Cloudflare Application Services and all its required components.

```mermaid
flowchart TB
  User((User/Client)) --> |1.SNI = keyless.example.com| CF[Cloudflare Edge Network]

  subgraph CF [Cloudflare Edge]
      KeylessSSL[Keyless SSL Service]
  end

  subgraph Private[Private Infrastructure]
      Tunnel[Cloudflare Tunnel]
      HSM[Hardware Security Module]
      KeylessModule[Keyless Module]
  end

  Tunnel -->|2.Establish tunnel| KeylessSSL
  KeylessSSL -->|3.Keyless operation required| Tunnel
  Tunnel -->|4.Forward to HSM| KeylessModule
  KeylessModule -->|5.Key Operations via PKCS11| HSM

  classDef cloudflare fill:#F6821F,stroke:#fff,stroke-width:2px,color:#fff
  classDef aws fill:#232F3E,stroke:#fff,stroke-width:2px,color:#fff
  classDef default fill:#fff,stroke:#000,stroke-width:2px,color:#000

  class CF,KeylessSSL,Tunnel,KeylessModule cloudflare
  class HSM aws
  class User default
```

1. **User/Client**: Initiates an HTTPS request to a domain protected by Cloudflare. The Server Name Indication (SNI) extension in the request specifies the domain name, for example `keyless.example.com`. That domain is mapped to a certificate declared as [keyless](https://developers.cloudflare.com/ssl/keyless-ssl/), which means only the public key is imported to Cloudflare; a keyless listener is also declared for subsequent key operations.
2. **Cloudflare secure tunnel establishment**: The Cloudflare [Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/) component establishes a secure, reliable connection with Cloudflare's global network. Only outgoing traffic leaves the perimeter, and it can be narrowed down by a firewall. This connection serves as a secure overlay for the key operations.
3. **Key Operations**: Cloudflare [SSL](https://developers.cloudflare.com/ssl/) detects that a keyless operation is necessary and sends all key operations to the keyless module installed on the private infrastructure. All of this traffic flows through the previously established secure tunnel.
4. **Keyless Module**: The [keyless module](https://developers.cloudflare.com/ssl/keyless-ssl/configuration/cloudflare-tunnel/#install) is responsible for forwarding the key operations to the Hardware Security Module (HSM) for cryptographic operations. The keyless module is a software component that acts as a proxy between Cloudflare and the HSM, ensuring that the private key never leaves the HSM.
5. **Key operations via PKCS11**: The HSM performs cryptographic operations using the private key stored securely within it. The HSM is a tamper-resistant device that securely manages cryptographic keys and performs cryptographic operations, ensuring the highest level of security for sensitive data.
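The key property of this flow is that Cloudflare's edge never holds the private key: it forwards a signing request and receives only the result. A stand-in sketch of that split using only the standard library, where an HMAC stands in for the RSA/ECDSA operation a real HSM would perform via PKCS#11:

```python
import hashlib
import hmac

class KeyServer:
    """Stands in for the keyless module + HSM: the secret never leaves it."""
    def __init__(self, secret: bytes):
        self._secret = secret  # in a real deployment, held inside the HSM

    def sign(self, message: bytes) -> bytes:
        # HMAC is a stand-in for the private-key signing an HSM does via PKCS#11.
        return hmac.new(self._secret, message, hashlib.sha256).digest()

class Edge:
    """Stands in for Cloudflare's edge: sees messages and signatures, never the key."""
    def __init__(self, key_server: KeyServer):
        self._key_server = key_server  # in reality, reached over the Cloudflare Tunnel

    def handshake(self, client_random: bytes) -> bytes:
        # The edge delegates the signing step of the TLS handshake.
        return self._key_server.sign(b"handshake:" + client_random)

hsm = KeyServer(secret=b"never-leaves-the-hsm")
edge = Edge(hsm)
sig = edge.handshake(b"abc")
print(len(sig))  # 32: a SHA-256 HMAC, returned without exposing the secret
```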

## Further reading

* [Cloudflare Keyless SSL](https://developers.cloudflare.com/ssl/keyless-ssl)
* [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/)
* [Keyless SSL with secured Tunnel](https://developers.cloudflare.com/ssl/keyless-ssl/configuration/cloudflare-tunnel/)
* Supported HSMs:  
   * [ AWS cloud HSM ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/aws-cloud-hsm/) :  Learn how to use Keyless SSL with AWS CloudHSM.  
   * [ Azure Dedicated HSM ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/azure-dedicated-hsm/) :  Learn how to use Keyless SSL with Azure Dedicated HSM.  
   * [ Azure Managed HSM ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/azure-managed-hsm/) :  This tutorial uses Microsoft Azure’s Managed HSM to deploy a VM with the Keyless SSL daemon. Follow these instructions to deploy your keyless server.  
   * [ Configuration ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/configuration/)  
   * [ Entrust nShield Connect ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/entrust-nshield-connect/) :  Learn how to use Keyless SSL with Entrust nShield Connect.  
   * [ Fortanix Data Security Manager ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/fortanix-dsm/)  
   * [ Google Cloud HSM ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/google-cloud-hsm/) :  Learn how to use Keyless SSL with Google Cloud HSM.  
   * [ IBM Cloud HSM ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/ibm-cloud-hsm/) :  Learn how to use Keyless SSL with IBM Cloud HSM.  
   * [ SoftHSMv2 ](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/softhsmv2/) :  Learn how to use Keyless SSL with SoftHSMv2.



---

---
title: Securing data at rest
description: Learn how Cloudflare's API-driven Cloud Access Security Broker (CASB) works and secures data at rest.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Securing data at rest

**Last reviewed:**  almost 2 years ago 

## Introduction

Data at rest refers to data stored in a fixed location, such as a local hard drive, an on-premises server, or cloud storage. Many businesses today use SaaS platforms that store a lot of business data in structured forms (like databases) and unstructured forms (files such as documents, images, and spreadsheets). The security of the underlying storage, such as encryption and reliable backups, is usually abstracted from your control, but the SaaS applications do allow you to manage user accounts, define what data users have access to, and share access to data.

While Cloudflare mostly secures data in transit as it travels over our network, we also have the ability to connect to your SaaS applications and use our DLP profiles to examine data at rest that might not be adequately secured and then provide recommendations for you to take action.

## Protecting data with Cloudflare CASB

Cloudflare's API-driven [Cloud Access Security Broker](https://developers.cloudflare.com/cloudflare-one/integrations/cloud-and-saas/) (CASB) works by integrating with SaaS APIs and discovering both unstructured data at rest (documents, spreadsheets, and so on) and also examining general configuration of the application and user accounts to ensure data access controls are correctly configured.

[DLP profiles](https://developers.cloudflare.com/cloudflare-one/cloud-and-saas-findings/casb-dlp/) are used to discover whether files stored in your SaaS application contain sensitive data. Matches are then compared with access controls and findings are generated, such as an alert that a spreadsheet containing credit card information is accessible by anyone on the Internet.
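A DLP match of this kind is typically a pattern match plus a validity check, combined with the file's sharing metadata. A simplified sketch of how a credit-card finding could be produced (illustrative only, not Cloudflare's actual detection logic):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: filters out random digit runs that are not card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

CARD_RE = re.compile(r"\b\d{13,16}\b")

def findings(file_name: str, content: str, shared_publicly: bool) -> list[str]:
    """Emit a finding when DLP-matched content coincides with risky sharing settings."""
    matches = [m for m in CARD_RE.findall(content) if luhn_ok(m)]
    if matches and shared_publicly:
        return [f"{file_name}: contains {len(matches)} card number(s) and is public"]
    return []

# 4111111111111111 is a well-known Luhn-valid test number.
print(findings("q3.xlsx", "card 4111111111111111", shared_publicly=True))
```

The point of the combination is that neither signal alone is a problem: sensitive data in a locked-down file, or a public file with no sensitive data, generates no finding.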

When Cloudflare CASB is combined with Cloudflare's [Secure Web Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) service, which inspects all the traffic going to and from a SaaS application, customers can achieve comprehensive visibility into both data in transit and data at rest for SaaS applications.

![Figure 1: Overall solution of user access controls to, and the discovery of, sensitive data.](https://developers.cloudflare.com/_astro/securing-data-at-rest-fig1.BdIkDfSv_ZG4jIx.svg "Figure 1: Overall solution of user access controls to, and the discovery of, sensitive data.")

Figure 1: Overall solution of user access controls to, and the discovery of, sensitive data.

## Securing user access to data at rest

1. Cloudflare authenticates users attempting to access SaaS applications, whether they are initiating the request from managed or unmanaged endpoints.  
   1. For managed endpoints, we recommend deploying our [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/) to maximize visibility and control of all traffic between the end user’s device and the resources being requested.  
   2. For unmanaged endpoints, we have [client-less solutions](https://developers.cloudflare.com/reference-architecture/diagrams/sase/sase-clientless-access-private-dns/) which allow you to retain visibility into, and inspection of, the data accessed by users.
2. Cloudflare's [Zero Trust Network Access](https://developers.cloudflare.com/cloudflare-one/access-controls/policies/) (ZTNA) service can integrate directly with your [SaaS applications](https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/saas-apps/) using standard protocols (e.g. SAML or OIDC) to become the initial enforcement point for user access. Access calls your [identity provider](https://developers.cloudflare.com/cloudflare-one/integrations/identity-providers/) (IdP) of choice and uses additional security signals about your users and devices to make policy decisions.
3. As an extension of what was covered in Securing data in use, Cloudflare [Remote Browser Isolation](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/) (RBI) can also be used with [dedicated egress IPs](https://developers.cloudflare.com/cloudflare-one/traffic-policies/egress-policies/dedicated-egress-ips/), so that even remote clientless users' traffic arrives at the requested SaaS application from predictable, consistent IP addresses.
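The access decision in step 2 above can be sketched as a simple policy check over identity and device signals. This is an illustrative toy, not Cloudflare Access's actual policy engine; the field names (`groups`, `managed`, `mfaVerified`, `allowedGroups`) are hypothetical.

```javascript
// Toy ZTNA-style access decision: combine IdP identity signals with device
// posture signals against a policy. All names here are invented for
// illustration, not the Cloudflare Access API.
function evaluateAccess(user, device, policy) {
  const identityOk = policy.allowedGroups.some((g) => user.groups.includes(g));
  const deviceOk = !policy.requireManagedDevice || device.managed;
  const mfaOk = !policy.requireMfa || user.mfaVerified;
  return identityOk && deviceOk && mfaOk ? "allow" : "deny";
}

const policy = { allowedGroups: ["finance"], requireManagedDevice: true, requireMfa: true };
const user = { groups: ["finance"], mfaVerified: true };

console.log(evaluateAccess(user, { managed: true }, policy));  // allow
console.log(evaluateAccess(user, { managed: false }, policy)); // deny
```

The key idea is that the decision is made per request, from multiple signals, rather than once at network entry.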

## Discovering and protecting the data at rest

1. In addition to what we covered in Securing data in transit, Cloudflare Data Loss Prevention (DLP) can be used to discover files that reside in your SaaS applications that contain sensitive data. CASB will scan every shared and/or publicly accessible file in the SaaS app for sensitive text that matches the DLP profile and alert you with recommended actions to take.
2. To complement the dedicated egress IP option mentioned above, many SaaS providers can restrict access to your organization's resources by only permitting traffic sourced from specific IP addresses.
3. When you integrate a third-party SaaS application with Cloudflare CASB, CASB makes routine, out-of-band API calls that analyze the associated metadata of your configurations, users, files, and other SaaS ‘objects’. Security issues, or ‘Findings’, are then detected based on whether the metadata indicates any insecure or potentially hazardous configurations exist within the integrated SaaS applications. This can include application misconfigurations, exposed and/or sensitive data, and user accounts with poor security.
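The metadata-to-findings step above can be sketched as follows. The object shape (`name`, `sharedPublicly`, `dlpProfileMatches`) is hypothetical and not the actual Cloudflare CASB API; the point is only to show how access metadata and DLP matches combine into a finding.

```javascript
// Illustrative sketch: derive CASB-style findings from SaaS file metadata.
// A file that is both publicly shared and matches a DLP profile produces
// the highest-severity finding.
function generateFindings(file) {
  const findings = [];
  if (file.sharedPublicly && file.dlpProfileMatches.length > 0) {
    findings.push(
      `Sensitive file "${file.name}" is accessible to anyone on the Internet ` +
      `(matched: ${file.dlpProfileMatches.join(", ")})`
    );
  } else if (file.sharedPublicly) {
    findings.push(`File "${file.name}" is publicly shared`);
  }
  return findings;
}

const findings = generateFindings({
  name: "Q3-customers.xlsx",
  sharedPublicly: true,
  dlpProfileMatches: ["Credit card numbers"],
});
console.log(findings);
```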

## Related resources

* [Securing data in transit](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-transit/)
* [Securing data in use](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-use/)


---

---
title: Securing data in transit
description: Data in transit is often considered vulnerable to interception or tampering during transmission. Data Loss Prevention (DLP) technologies can be used to inspect the contents of network traffic and block sensitive data from going to a risky destination.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Securing data in transit

**Last reviewed:**  almost 2 years ago 

## Introduction

Data in transit is data actively traveling over a network. Because the Internet is made up of many thousands of networks, it is important to ensure your data is secure as it moves from device to server and back. These days, the most common activities that generate data in transit are:

* Browsing online and uploading/downloading data to/from cloud applications
* Sending texts, pictures and emails
* Applications exposing and consuming data through APIs

Data in transit is often considered vulnerable to interception or tampering during transmission, so it is important to secure it through encryption techniques such as [QUIC ↗](https://cloudflare-quic.com/), Transport Layer Security (TLS) or Secure Sockets Layer (SSL). This helps to ensure that the data remains confidential and protected from unauthorized access during its journey. There are also methods of inspecting data as it crosses network boundaries to decide whether that data should be allowed to continue: Data Loss Prevention (DLP) technologies can be used to inspect the contents of network traffic and block sensitive data from going to a risky destination. This document outlines the methods Cloudflare has available to protect data in transit.

## Securing network connectivity

Cloudflare is one of the leading providers of cloud network security services. There are two main use cases for which Cloudflare is used to secure network traffic:

* Providing secure connectivity to public websites and APIs using SSL/TLS
* Creating secure tunnels to private networks and applications which are hosted either in the cloud or on-premises

Cloudflare's [SSL services](https://developers.cloudflare.com/ssl/) are used by millions of websites and are easily implemented by making changes to DNS entries, so that all connections to public websites and APIs are terminated on Cloudflare's edge network. Connectivity from Cloudflare to the destination website or API can also be secured using the same SSL technologies. To ensure the strongest security, Cloudflare uses [post-quantum cryptography ↗](https://blog.cloudflare.com/post-quantum-to-origins).

![Figure 1: Securing data from the user device, all the way to the website/API](https://developers.cloudflare.com/_astro/securing-data-in-transit-fig1.BeOrOaHa_1LEx8w.svg "Figure 1: Securing data from the user device, all the way to the website/API")

Figure 1: Securing data from the user device, all the way to the website/API

1. Connection between user browser and Cloudflare secured by TLS/SSL
2. Connection from Cloudflare to destination server secured by TLS/SSL

Private resources, usually self-hosted applications on private networks with no direct public Internet connection, require a different method of securing data in transit. There are a variety of methods by which tunnels can be created from private networks to Cloudflare; more details can be found in the [SASE reference architecture](https://developers.cloudflare.com/reference-architecture/architectures/sase/), and the following diagram summarizes them.

![Figure 2: Various methods of connecting and routing traffic to Cloudflare to secure private traffic.](https://developers.cloudflare.com/_astro/cf1-ref-arch-14.BMsYJBWD_1UbvIi.svg "Figure 2: Various methods of connecting and routing traffic to Cloudflare to secure private traffic.")

Figure 2: Various methods of connecting and routing traffic to Cloudflare to secure private traffic.

_Note: Labels in this image may reflect a previous product name._

Once private applications and networks have been connected to Cloudflare, devices can then be connected securely via our device agent, so that data is secured from the user device all the way across the network to the application.

When all traffic from the device to the hosted application flows via Cloudflare, we can inspect that traffic and apply further security based on the content of the data.

## Inspecting traffic with Cloudflare DLP

A common challenge is determining what data is sensitive and requires policy intervention. Data Loss Prevention services inspect the contents of traffic and then provide metadata to the policy to inform enforcement.

For example, when a user attempts to upload a file to a SaaS application and the traffic route has been configured to always go via the Cloudflare network, [Cloudflare DLP](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/) inspects the file by using DLP profiles assigned to a Gateway policy. After a DLP profile matches, the Gateway policy will allow or block the traffic, and the activity will be written to the logs. A DLP profile is a collection of regular expressions (also known as detection entries) that define the data patterns you want to detect. Cloudflare DLP provides [predefined profiles](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/dlp-profiles/#configure-a-predefined-profile) for common detections, or you can build [custom profiles](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/dlp-profiles/#build-a-custom-profile) specific to your data, or even leverage [Exact Data Match](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/detection-entries/#exact-data-match) (EDM).
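To make "a collection of regular expressions" concrete, the sketch below models a profile as a named list of detection entries. The pattern is a deliberately simplified illustration, not one of Cloudflare's actual predefined profiles.

```javascript
// Minimal model of a DLP profile: a name plus detection entries (regexes).
// The SSN pattern here is simplified for illustration only.
const ssnProfile = {
  name: "U.S. Social Security Numbers",
  entries: [/\b\d{3}-\d{2}-\d{4}\b/],
};

// A profile matches if any of its detection entries match the scanned text.
function profileMatches(profile, text) {
  return profile.entries.some((re) => re.test(text));
}

console.log(profileMatches(ssnProfile, "Employee SSN: 123-45-6789")); // true
console.log(profileMatches(ssnProfile, "Order #123456789"));          // false
```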

DLP profiles are then used in combination with other policy attributes to specifically identify the traffic, such as only enforcing the policy when sensitive data is being uploaded to approved cloud-based storage services.

![Figure 3: Example of a Cloudflare policy blocking confidential data uploaded to approved cloud storage.](https://developers.cloudflare.com/_astro/cf1-ref-arch-29.BGL4hCeF_2nRDyn.svg "Figure 3: Example of a Cloudflare policy blocking confidential data uploaded to approved cloud storage.")

Figure 3: Example of a Cloudflare policy blocking confidential data uploaded to approved cloud storage.

The following diagram shows a common flow for how Cloudflare inspects a request and enforces access based on a DLP based policy.

![Figure 4: Upload of file containing sensitive data blocked by Cloudflare DLP](https://developers.cloudflare.com/_astro/securing-data-in-transit-fig4.D-8KKTj8_1KHBJz.svg "Figure 4: Upload of file containing sensitive data blocked by Cloudflare DLP")

Figure 4: Upload of file containing sensitive data blocked by Cloudflare DLP

1. User attempts to upload a file to a SaaS application (via a secure tunnel to Cloudflare created by our [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/download/)). [Clientless](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/) options are supported as well.
2. Cloudflare's [Secure Web Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) (SWG) will first verify that the user is permitted to use the requested SaaS application, and then scrutinize the file's payload for [malicious code](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/antivirus-scanning/) and [sensitive data](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/).
3. The DLP profile determines that the file contains national identifiers like US Social Security Numbers (SSN).
4. The Gateway policy is configured with a [Block action](https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/#block), so the attempt is [logged](https://developers.cloudflare.com/cloudflare-one/data-loss-prevention/dlp-policies/logging-options/#log-the-payload-of-matched-rules) and a [block page](https://developers.cloudflare.com/cloudflare-one/reusable-components/custom-pages/gateway-block-page/) returned to the end user's web browser.
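The four steps above can be sketched as a single decision function. All field names, the allow list, and the pattern below are hypothetical, intended only to illustrate the flow, not the actual Secure Web Gateway implementation.

```javascript
// Toy version of the Gateway flow: verify the app is permitted, scan the
// payload against a DLP pattern, then block (and, in the real product, log
// the match and return the block page) or allow.
function inspectUpload({ app, payload }, { allowedApps, dlpPattern }) {
  if (!allowedApps.includes(app)) {
    return { action: "block", reason: "app not permitted" };
  }
  if (dlpPattern.test(payload)) {
    return { action: "block", reason: "DLP profile matched" };
  }
  return { action: "allow" };
}

const config = {
  allowedApps: ["example-saas"],                // hypothetical allow list
  dlpPattern: /\b\d{3}-\d{2}-\d{4}\b/,          // simplified SSN pattern
};
console.log(inspectUpload({ app: "example-saas", payload: "SSN 123-45-6789" }, config)); // blocked by DLP
console.log(inspectUpload({ app: "example-saas", payload: "quarterly notes" }, config)); // allowed
```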

## Related resources

* [Securing data in use](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-use/)
* [Securing data at rest](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-at-rest/)


---

---
title: Securing data in use
description: Learn how Cloudflare's Remote Browser Isolation (RBI) works and secures data in use.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Securing data in use

**Last reviewed:**  almost 2 years ago 

## Introduction

Data in use refers to data that is being actively interacted with, processed, or manipulated by applications, systems, or users. For organizations, protecting data in use can be a challenge as it must remain accessible and usable by applications and users who need it to fulfill their duties, yet still protected against unauthorized access or tampering.

Today, the vast majority of a user’s interactions with operationally critical data take place inside a modern Internet browser, which can serve entire client applications, such as email clients, word processors, and spreadsheets, to an end user. This means no software needs to be installed on the device, and it also makes user interactions, such as copy and paste, and downloading sensitive data, relatively easy. Such interactions can pose a persistent risk to organizations whose employees and contractors work with critical and/or sensitive data every day.

One method to secure data in use is to leverage greater control over the browsers themselves, and how employees use them to access applications and data. Cloudflare has approached this by building a headless browser solution on top of our massive global edge network, called [Remote Browser Isolation](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/) (RBI). When a user attempts to access, for example, a privately hosted resource, or a resource on the public Internet, instead of directly serving it to the user’s browser without any other safeguards, Cloudflare first renders the resource in a sandboxed environment hosted on the Cloudflare global network. Then, without any perceptible difference to the end user, a small JavaScript client runs within their local browser to safely and securely retrieve and render the remotely loaded web content using a novel, patented technology unique to Cloudflare, called Network Vector Rendering (NVR).

## Protecting data in use with Cloudflare RBI

Cloudflare RBI effectively creates an invisible “gap” between the web content a user is accessing and their device, in effect protecting the device and the network it is connected to from exploits and attacks. Unlike secure web gateways, antivirus software, or firewalls, which rely on known threat patterns or signatures, RBI is a genuine zero trust mechanism. Because all requests made within a remotely loaded RBI instance go through the Cloudflare Secure Web Gateway, it's possible to enforce access policies to data and also inspect the traffic itself to enforce any data in transit policies.

Moreover, organizations can enforce specific data-in-use access controls, like blocking the ability to download/upload, copy and paste, and print data.

Common policies used with RBI:

* Content category - [Social Networks](https://developers.cloudflare.com/cloudflare-one/traffic-policies/domain-categories/) (e.g. Facebook): Given the large volumes of data that popular social media platforms collect, these apps are an attractive target and yet another attack vector for malicious entities. RBI can prevent data, especially data matching a DLP profile, from being pasted into these applications.
* Application - [Artificial Intelligence](https://developers.cloudflare.com/cloudflare-one/traffic-policies/application-app-types/) (e.g. ChatGPT): Generative AI tools can boost employee productivity, but understanding who is using them and for what is imperative at this stage of the generative AI evolution. Again, DLP profiles here can be applied to prevent the copy and pasting of sensitive data into public AI tools.
* Application - [SaaS](https://developers.cloudflare.com/cloudflare-one/traffic-policies/application-app-types/) (e.g. Salesforce, Zendesk, etc.): These applications can often contain highly confidential data. RBI can be used to tightly restrict access for risky users that require some access, such as contractors or partners. Controls such as preventing printing, or even preventing any keyboard input at all, can give third-party users a read-only view of the application, as if RBI were a pane of glass between the user and the data.

The following diagram visualizes a typical interaction between a user, RBI and a website such as ChatGPT.

![Figure 1: Text copy/paste blocked by Cloudflare RBI.](https://developers.cloudflare.com/_astro/securing-data-in-use-fig1.DERWxOEQ_Z1zMakq.svg "Figure 1: Text copy/paste blocked by Cloudflare RBI.")

Figure 1: Text copy/paste blocked by Cloudflare RBI.

1. User attempts to log in to ChatGPT, and the request goes via Cloudflare since the user is running our [device agent](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/cloudflare-one-client/download/) to maximize visibility and control of all traffic between the end user’s device and the resources being requested. [Clientless](https://developers.cloudflare.com/cloudflare-one/networks/resolvers-and-proxies/) options are supported as well.
2. Cloudflare’s [Secure Web Gateway](https://developers.cloudflare.com/cloudflare-one/traffic-policies/) (SWG) will first verify that the user is permitted to access ChatGPT.
3. Cloudflare’s patented Network Vector Rendering (NVR) process begins as a headless browser on our edge network starts and rasterizes the web app, which involves writing Skia draw commands.
4. NVR intercepts those draw commands, tokenizes them, compresses them, encrypts them, and sends them to the local web browser.
5. Because this request is running isolated, the policy also enforces preventing the user from [copying and pasting](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/isolation-policies/#copy-from-remote-to-client) sensitive content to ChatGPT from their local machine. Additional [policy settings](https://developers.cloudflare.com/cloudflare-one/remote-browser-isolation/isolation-policies/#policy-settings), such as ‘Disable printing’, ‘Disable upload / download’, and more are available as well.

## Related resources

* [Securing data in transit](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-in-transit/)
* [Securing data at rest](https://developers.cloudflare.com/reference-architecture/diagrams/security/securing-data-at-rest/)


---

---
title: A/B-testing using Workers
description: Cloudflare's low-latency, fully serverless compute platform, Workers offers powerful capabilities to enable A/B testing using a server-side implementation.
image: https://developers.cloudflare.com/core-services-preview.png
---


# A/B-testing using Workers

**Last reviewed:**  almost 2 years ago 

## Introduction

A/B testing, also known as split testing, is a fundamental technique in the realm of web development, allowing teams to iteratively refine and optimize their digital experiences. A/B testing involves comparing two versions of a web page or app feature to determine which one performs better in achieving a predefined goal, such as increasing conversions, engagement, or user satisfaction.

The process typically begins with the creation of two variants: the control (A) and the variant (B). These variants are identical except for the specific element being tested, whether it's a headline, button color, layout, or any other component of the user interface or user experience. For example, a team might test two different call-to-action button colors to see which one generates more clicks.

Once the variants are ready, they are exposed to users in a randomized manner. This randomization ensures that any differences in performance between the variants can be attributed to the changes being tested rather than external factors like user demographics or behavior.

As users interact with the different variants, their actions and behaviors are tracked and analyzed to measure the performance of each variant against the predefined goal. Key metrics such as click-through rates, conversion rates, bounce rates, and engagement metrics are monitored to determine which variant is more effective in achieving the desired outcome.

A/B testing is a powerful tool for continuously optimizing and improving digital experiences, enabling teams to make data-driven decisions based on real user feedback rather than subjective opinions or assumptions. By systematically testing and refining different elements of their websites or applications, organizations can enhance user satisfaction, increase conversions, and ultimately achieve their business objectives in a competitive online landscape.

Cloudflare's low-latency, fully serverless compute platform, [Workers](https://developers.cloudflare.com/workers/), offers powerful capabilities to enable A/B testing using a server-side implementation. With the help of [Workers KV](https://developers.cloudflare.com/kv/), this solution can be made highly configurable with ease.

## A/B testing using Workers

![Figure 1: A/B testing using Workers](https://developers.cloudflare.com/_astro/a-b-testing-workers.2TNh_6Un_2d88FE.svg "Figure 1: A/B testing using Workers")

Figure 1: A/B testing using Workers

This architecture shows a same-URL A/B testing endpoint. The A/B testing logic and configuration are deployed on the server side, so that clients do not have to implement any changes to make use of A/B testing.

1. **Client**: Sends requests to server. This could be through a desktop or mobile browser, or native or mobile app.
2. **Configuration**: Process incoming requests using Workers. Read the current configuration from [KV](https://developers.cloudflare.com/kv/) using the [get()](https://developers.cloudflare.com/kv/api/read-key-value-pairs/) method. This allows flexible updates to the A/B configuration, fully decoupled from code deployment.
3. **Origin requests**: Check for an existing group-assignment cookie in the request headers. If no cookie is set, randomly assign a group; if one is set, extract the assigned group from the cookie header. Send the request to either the control endpoint (A) or the variant endpoint (B), depending on the configuration and the assigned group.
4. **Response**: Return the response from the origin. Additionally, if no cookie was previously set, set a cookie with the respective assigned group for session affinity.
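The assignment logic in steps 3 and 4 can be sketched with a few pure functions. The cookie name, group names, and origin URLs below are placeholders, not a prescribed configuration.

```javascript
// Hypothetical cookie name used for session affinity.
const COOKIE = "ab-group";

// Extract an existing group assignment from the Cookie request header.
function getAssignedGroup(cookieHeader) {
  const match = (cookieHeader || "").match(new RegExp(`${COOKIE}=([^;]+)`));
  return match ? match[1] : null;
}

// Reuse an existing assignment for session affinity, else assign randomly.
// `random` is injectable so the split is testable.
function chooseGroup(cookieHeader, random = Math.random) {
  return getAssignedGroup(cookieHeader) || (random() < 0.5 ? "control" : "variant");
}

// Map the assigned group to the configured origin (e.g. loaded from KV).
function originFor(group, config) {
  return group === "control" ? config.controlOrigin : config.variantOrigin;
}

const config = { controlOrigin: "https://a.example.com", variantOrigin: "https://b.example.com" };
console.log(chooseGroup("ab-group=control"));                 // "control"
console.log(originFor(chooseGroup(null, () => 0.9), config)); // "https://b.example.com"
```

In a Worker, `chooseGroup` would run on the incoming request, the origin would be fetched, and a `Set-Cookie` header added to the response when no cookie was previously present.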

For an example with code snippets on how to use Workers and Workers KV to route requests to different origin web servers, refer to Workers KV's example on [routing across web servers](https://developers.cloudflare.com/kv/examples/routing-with-workers-kv/).

## Related resources

* [Workers: Get started](https://developers.cloudflare.com/workers/get-started/guide/)
* [Workers KV: Get started](https://developers.cloudflare.com/kv/get-started/)
* [Workers KV: Route requests to web servers with Workers and Workers KV](https://developers.cloudflare.com/kv/examples/routing-with-workers-kv/)
* [Code Example: A/B testing with same-URL direct access](https://developers.cloudflare.com/workers/examples/ab-testing/)


---

---
title: Fullstack applications
description: A practical example of how these services come together in a real fullstack application architecture.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Fullstack applications

**Last reviewed:**  6 months ago 

Fullstack web applications combine frontend and backend technologies to deliver complete, dynamic user experiences. These applications rely on a broad technology stack covering user interfaces, backend services, databases, integrations, and increasingly, AI-driven features to function seamlessly and scale reliably.

On the frontend, developers typically use HTML, CSS, and JavaScript, often alongside frameworks like React, Next.js, or Angular. These tools provide the structure and interactivity needed for modern user interfaces, helping manage state, render dynamic content, personalize experiences, and optimize performance across devices.

On the backend, server-side code handles tasks like processing requests, running business logic, authenticating users, integrating AI models, and interacting with databases. Developers build these services using languages like JavaScript, Python, or Java, supported by frameworks that simplify routing, middleware, and API creation.

Databases are critical in the stack, storing and retrieving application data. Relational databases like MySQL, PostgreSQL, and SQLite manage structured data and enforce data integrity, while NoSQL options like MongoDB or Cassandra offer flexibility for handling unstructured or large-scale datasets.

Modern fullstack development increasingly incorporates external services, APIs, pre-built components, and AI capabilities. This approach reduces the need to create complex features from scratch, such as content moderation, personalized recommendations, and semantic search. As a result, development teams can build applications more quickly and efficiently.

Cloudflare’s Developer Platform combines all these capabilities into a unified, globally distributed environment, offering developers everything they need to build, deploy, and scale modern fullstack applications with minimal operational overhead.

![Figure 1: Cloudflare Developer Platform](https://developers.cloudflare.com/_astro/developer-platform.g69XQgmR_2k3mC0.svg "Figure 1: Cloudflare Developer Platform")

Figure 1: Cloudflare Developer Platform

Cloudflare’s platform doesn’t just offer individual services. Rather, it offers a **composable ecosystem**, enabling teams to build powerful applications quickly, scale seamlessly, and innovate faster without the overhead of managing infrastructure.

## Fullstack application diagram

In this section, we’ll present a practical example of how these services come together in a real fullstack application architecture.

![Figure 2: Fullstack application](https://developers.cloudflare.com/_astro/fullstack-app-base.CZswu8qh_2IOYQ.svg "Figure 2: Fullstack application")

Figure 2: Fullstack application

### 1\. Client

Sends requests to the server. This could be through a desktop or mobile browser, or native or mobile app.

### 2\. Security

Process incoming requests to ensure the security of an application. This includes encryption of traffic using [SSL/TLS](https://developers.cloudflare.com/ssl/), offering [DDoS protection](https://developers.cloudflare.com/ddos-protection/), filtering malicious traffic through a [web application firewall (WAF)](https://developers.cloudflare.com/waf/), [mitigations against automated bots](https://developers.cloudflare.com/bots/), and [API Shield](https://developers.cloudflare.com/api-shield/) to identify and address your API vulnerabilities. Depending on the configuration, requests can be blocked, logged, or allowed based on a diverse set of parameters. Sensible, fully managed default configurations can be used to reduce attack surfaces with little to no overhead.

### 3\. Performance

Serve static requests from [global cache (CDN)](https://developers.cloudflare.com/cache/). This reduces latency and lowers resource utilization, as the requests are being served from cache instead of requiring a request to storage & media services or compute services. Take advantage of [Argo Smart Routing](https://developers.cloudflare.com/argo-smart-routing/) to route requests across the most efficient network path, avoiding congestion.

### 4\. Compute

Process dynamic requests using serverless compute with [Workers](https://developers.cloudflare.com/workers/). This could include authentication, routing, middleware, database interactions, and serving APIs. Moreover, [Workers Assets](https://developers.cloudflare.com/workers/static-assets/) can be used to serve client-side or server-side rendering web frameworks such as React, Next.js, or Angular. Utilize [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) to allow users to deploy custom code on your platform or enable them to deploy their own code directly. For stateful workloads, [Durable Objects](https://developers.cloudflare.com/durable-objects/) provide low-latency, stateful compute by running logic close to where the object's data is stored, enabling coordination, persistence, and real-time communication at the edge.

For workloads that require the flexibility of traditional containerization, [Containers](https://developers.cloudflare.com/containers/) allows you to run existing Docker-compatible applications on Cloudflare’s global network. Containers is designed for applications needing more resources than a standard Worker.
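
As a minimal sketch of the compute layer, a Worker's `fetch` handler can route dynamic requests and fall through to static assets. The route paths and the `ASSETS` binding name here are illustrative assumptions, not a prescribed design:

```javascript
// Hypothetical path-based routing inside a Worker. route() is a pure
// function so the dispatch logic can be exercised outside the runtime.
function route(pathname) {
  if (pathname.startsWith("/api/")) return "api";
  if (pathname.startsWith("/admin/")) return "admin";
  return "assets"; // everything else falls through to static assets
}

const worker = {
  async fetch(request, env) {
    const { pathname } = new URL(request.url);
    switch (route(pathname)) {
      case "api":
        return Response.json({ ok: true, path: pathname });
      case "admin":
        return new Response("forbidden", { status: 403 });
      default:
        // Assumption: a Workers Assets binding named ASSETS is configured.
        return env.ASSETS.fetch(request);
    }
  },
};
// In a deployed Worker, this object would be the module's default export.
```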

### 5\. Data & Storage

Introduce state to applications by persisting and retrieving data. This includes [R2](https://developers.cloudflare.com/r2/) for object storage, [D1](https://developers.cloudflare.com/d1/) for relational data, [KV](https://developers.cloudflare.com/kv/) for data with high read requirements, and [Durable Objects](https://developers.cloudflare.com/durable-objects/) for strongly consistent data storage. The [storage options guide](https://developers.cloudflare.com/workers/platform/storage-options/) can help assess which storage option is the most suitable for a given use case.
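
As an illustrative sketch, KV's read-optimized API can front a slower source of truth such as a relational database. The `USERS_KV` binding name and the cache-aside pattern are assumptions; a tiny in-memory stand-in lets the logic run outside the Workers runtime:

```javascript
// In-memory stand-in mirroring the KV binding's get/put API shape.
function memoryKV() {
  const store = new Map();
  return {
    async get(key) { return store.has(key) ? store.get(key) : null; },
    async put(key, value) { store.set(key, value); },
  };
}

// Cache-aside lookup: serve reads from KV, fall back to the database
// (loadFromDb stands in for e.g. a D1 query) and backfill the cache.
async function getProfile(env, userId, loadFromDb) {
  const cached = await env.USERS_KV.get(`profile:${userId}`);
  if (cached !== null) return JSON.parse(cached);
  const fresh = await loadFromDb(userId);
  await env.USERS_KV.put(`profile:${userId}`, JSON.stringify(fresh));
  return fresh;
}
```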

### 6\. Realtime content & Media

Build real-time serverless video, audio, and data applications with [Realtime](https://developers.cloudflare.com/realtime/). Serve optimized images from [Images](https://developers.cloudflare.com/images/) and on-demand videos as well as live streams from [Stream](https://developers.cloudflare.com/stream/).

### 7\. AI

With [Workers AI](https://developers.cloudflare.com/workers-ai/), developers can run popular open-source models for tasks like text generation, image analysis, and content moderation powered by serverless GPUs. [Vectorize](https://developers.cloudflare.com/vectorize/) is a globally distributed vector database for similarity search, personalization, and recommendation features. [Agents](https://developers.cloudflare.com/agents/) further extend these AI capabilities: Cloudflare provides the Agents SDK, which lets you build and deploy AI-powered agents that can perform tasks, interact in real time, call models, manage state, run workflows, query data, and integrate human-in-the-loop actions.

### 8\. Orchestration & Abstraction

[Queues](https://developers.cloudflare.com/queues/) enable durable, asynchronous messaging to decouple services and handle traffic spikes. [Workflows](https://developers.cloudflare.com/workflows/) orchestrate complex processes across APIs, services, and human approvals, abstracting away infrastructure and state management. [Pipelines](https://developers.cloudflare.com/pipelines/) let you ingest high volumes of real time data, without managing any infrastructure.

### 9\. Cloudflare Observability

Send logs from all services with [Logpush](https://developers.cloudflare.com/logs/logpush/), gather insights with [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/) directly in the Cloudflare dashboard, collect custom metrics from Workers using [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/), or observe and control AI applications with [AI Gateway](https://developers.cloudflare.com/ai-gateway/).

### 10\. External Logs & Analytics

Integrate Cloudflare's observability solutions with your existing third-party solutions. Logpush supports many [destinations](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/) to push logs to for storage and further analysis. Also, Cloudflare analytics can be [integrated with analytics solutions](https://developers.cloudflare.com/analytics/analytics-integrations/). The [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/) allows for flexible queries and integrations.

### 11\. Tooling & Provisioning

Define and manage resources and configuration using third-party tools and frameworks such as [Terraform](https://developers.cloudflare.com/terraform/) and [Pulumi](https://developers.cloudflare.com/pulumi/), Cloudflare's Developer Platform command-line interface (CLI) [Wrangler](https://developers.cloudflare.com/workers/wrangler/), or the [Cloudflare API](https://developers.cloudflare.com/api/). All of these tools can be used either for manual provisioning, or automated as part of CI/CD pipelines.

### 12\. External Service Integrations

Cloudflare’s Developer Platform is built for seamless [integration with external services](https://developers.cloudflare.com/workers/configuration/integrations/). Whether connecting to third-party APIs, databases, SaaS platforms, or cloud providers, developers can easily make outbound requests from Workers, trigger workflows based on external events, and securely exchange data across systems.


---

---
title: Programmable Platforms
description: Workers for Platforms provide secure, scalable, cost-effective infrastructure for programmable platforms with global reach.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Programmable Platforms

**Last reviewed:**  about 1 year ago 

## Introduction

A programmable platform allows customers to customize a product by writing code. Unlike traditional SaaS with fixed features, it enables users to extend functionality, deploy backend logic, and build full-stack experiences—all within the platform’s infrastructure.

Hosting the infrastructure for these platforms presents several challenges, including security, scalability, cost efficiency, and performance isolation. Allowing customers to run custom code introduces risks such as untrusted execution, potential abuse, and resource contention, all of which must be managed without compromising platform reliability. Running millions of single-tenant applications is inherently costly, making efficient resource utilization critical. The ability to scale workloads to zero when idle is key to ensuring economic viability while maintaining rapid startup times when demand spikes. Additionally, ensuring seamless global execution with low-latency performance requires a resilient, distributed architecture. Robust monitoring, debugging, and governance capabilities are also essential to provide visibility and control over customer-deployed code without restricting innovation.

[Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) provides the ideal infrastructure for building programmable platforms by offering secure, isolated environments where customers can safely execute custom code at scale, with automatic scaling to zero and a globally distributed runtime that optimizes performance and cost.

## Core Architecture Components

The Workers for Platforms architecture consists of several key components that work together to provide a secure, scalable, and efficient solution for multi-tenant applications. The core concepts are outlined below.

1. **Main Request Flow**: An overview of the request flow in a programmable platform.
2. **Invocation & Metadata Flow**: Commonly, incoming requests are enriched with metadata to provide the function invocation with relevant context or to perform routing logic.
3. **Egress Control**: Controlling outbound connections to ensure compliant behavior.
4. **Utilizing Storage & Data Resources**: Leveraging databases and storage to build even richer end-user experiences at scale.
5. **Observability Tools**: Logging and metrics collection services to monitor platform performance and troubleshoot issues.

## Main Request Flow

![Figure 1: Workers for Platforms: Main Flow](https://developers.cloudflare.com/_astro/programmable-platforms-1.BCCEhzLr_2d88FE.svg "Figure 1: Workers for Platforms: Main Flow")

Figure 1: Workers for Platforms: Main Flow

1. **Client Request**: Send request from a client application to the platform's [Dynamic Dispatch Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dynamic-dispatch-worker).
2. **Routing**: Identify the correct workload to execute and route the request to the respective [User Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#user-workers) in the [Dispatch Namespace](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#dispatch-namespace). Each customer's workload runs in an isolated User Worker with its own resources and security boundaries.
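
The two steps above can be sketched as a Dynamic Dispatch Worker. The `DISPATCH` binding name and the subdomain-to-script naming convention are assumptions for illustration:

```javascript
// Hypothetical convention: "acme.example-platform.com" -> script "acme".
function scriptNameFor(hostname) {
  return hostname.split(".")[0];
}

const dispatchWorker = {
  async fetch(request, env) {
    const name = scriptNameFor(new URL(request.url).hostname);
    try {
      // Resolve the User Worker from the Dispatch Namespace binding
      // and run it in its own isolated environment.
      const userWorker = env.DISPATCH.get(name);
      return await userWorker.fetch(request);
    } catch (e) {
      return new Response("customer worker not found", { status: 404 });
    }
  },
};
```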

## Invocation & Metadata Flow

![Figure 2: Workers for Platforms: Invocation & Metadata Flow](https://developers.cloudflare.com/_astro/programmable-platforms-2.DGAT6ZDR_Z19nioR.svg "Figure 2: Workers for Platforms: Invocation & Metadata Flow")

Figure 2: Workers for Platforms: Invocation & Metadata Flow

For many use cases, it makes sense to retrieve additional metadata, user data, or configuration to process incoming requests and provide the User Worker invocation with additional context.

1. **Incoming Request**: Send requests to custom hostnames or a Worker using a Workers wildcard route.
2. **Metadata Lookup**: Retrieve customer-specific configuration data from [KV](https://developers.cloudflare.com/kv/) storage. These lookups are typically based on the hostname of the incoming request or custom metadata in the case of custom hostnames.
3. **Worker Invocation**: Route requests to the appropriate User Worker in the Dispatch Namespace based on metadata. Optionally, provide additional context during function invocation.
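
A sketch of the lookup-then-dispatch flow, assuming a KV binding named `CONFIG` keyed by hostname and context passed to the User Worker via a custom header (both are illustrative choices, not the only option):

```javascript
// Look up customer routing metadata for a hostname in KV.
async function resolveCustomer(env, hostname) {
  const raw = await env.CONFIG.get(`customer:${hostname}`);
  if (raw === null) return null;
  return JSON.parse(raw); // e.g. { script: "acme-prod", plan: "free" }
}

const metadataDispatch = {
  async fetch(request, env) {
    const { hostname } = new URL(request.url);
    const customer = await resolveCustomer(env, hostname);
    if (!customer) return new Response("unknown customer", { status: 404 });
    // Provide additional context to the User Worker invocation.
    const enriched = new Request(request);
    enriched.headers.set("x-customer-plan", customer.plan);
    return env.DISPATCH.get(customer.script).fetch(enriched);
  },
};
```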

## Egress Control Pattern

![Figure 3: Workers for Platforms: Egress Control](https://developers.cloudflare.com/_astro/programmable-platforms-3.C-LkeZtS_Z19nioR.svg "Figure 3: Workers for Platforms: Egress Control")

Figure 3: Workers for Platforms: Egress Control

Observability and control over outbound data are crucial for security. [Outbound Workers](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/) allow for interception of all outgoing requests in User Worker scripts.

1. **Worker Invocation**: Route requests to the appropriate User Worker in the Dispatch Namespace. Optionally pass additional parameters to the Outbound Worker during User Worker invocation.
2. **External requests**: Send requests via `fetch()` calls to external services through a controlled Outbound Worker.
3. **Request interception**: Evaluate outgoing requests and perform core functions like centralized policy enforcement and audit logging.
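
An Outbound Worker enforcing an egress allowlist might look like the sketch below. The hard-coded allowlist is a hypothetical policy; a real platform might consult KV or a policy service instead:

```javascript
// Hypothetical egress policy: only these upstream hosts are permitted.
const ALLOWED_HOSTS = new Set(["api.payments.example", "api.email.example"]);

function isAllowed(url) {
  return ALLOWED_HOSTS.has(new URL(url).hostname);
}

const outboundWorker = {
  async fetch(request, env) {
    if (!isAllowed(request.url)) {
      // Central policy enforcement: block (and optionally audit-log).
      return new Response("egress blocked by platform policy", { status: 403 });
    }
    return fetch(request); // forward the permitted request upstream
  },
};
```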

## Metrics & Logging Architecture

![Figure 4: Workers for Platforms: Metrics & Logging](https://developers.cloudflare.com/_astro/programmable-platforms-4.BoFSkvXQ_2iLi3x.svg "Figure 4: Workers for Platforms: Metrics & Logging")

Figure 4: Workers for Platforms: Metrics & Logging

1. **Logging**: Collect logs throughout all Workers in the request flow via [Tail Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/observability/#tail-workers) and [Workers Trace Events Logpush](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/observability/#workers-trace-events-logpush) services.
2. **Metrics**: Collect custom metrics via [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) and out-of-the-box [Analytics](https://developers.cloudflare.com/analytics/graphql-api/) that can readily be queried via GraphQL API.
3. **Third-party Integration**: Export logs and metrics to various external monitoring and analytics platforms like Datadog, Splunk, Grafana, and others via [Analytics integrations](https://developers.cloudflare.com/analytics/analytics-integrations/).

## Resource Isolation Model

![Figure 5: Workers for Platforms: Resources](https://developers.cloudflare.com/_astro/programmable-platforms-5.B2yd7IjV_Z1IMWex.svg "Figure 5: Workers for Platforms: Resources")

Figure 5: Workers for Platforms: Resources

1. **Incoming Request**: Send requests to custom hostnames or a Worker using a Workers wildcard route.
2. **Worker Invocation**: Route requests to the appropriate User Worker in the Dispatch Namespace.
3. **Resource Access**: Interact with per-script-specific resources:  
   * D1 for relational database storage  
   * Durable Objects for strongly consistent data  
   * KV for high-read, eventually consistent key-value storage  
   * R2 for object storage

## Deployment & Management Flow

![Figure 6: Workers for Platforms: Deployment & Management Flow](https://developers.cloudflare.com/_astro/programmable-platforms-6.BfYznbr5_2d88FE.svg "Figure 6: Workers for Platforms: Deployment & Management Flow")

Figure 6: Workers for Platforms: Deployment & Management Flow

1. **Management Interface**: Interact with the platform through GUI, API, or CLI interfaces.
2. **Platform Processing**: Process these interactions to:  
   * Transform and bundle code  
   * Perform security checks  
   * Apply configuration
3. **Change Management**: Deploy changes to Cloudflare using the Cloudflare REST API.
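
Step 3 targets the Workers for Platforms script upload endpoint of the Cloudflare REST API. A sketch of building the request URL (the account ID, namespace, and script name are placeholders):

```javascript
// Build the URL for uploading a User Worker script into a
// dispatch namespace via the Cloudflare REST API.
function uploadUrl(accountId, namespace, scriptName) {
  return (
    `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
    `/workers/dispatch/namespaces/${namespace}/scripts/${scriptName}`
  );
}
// A deployment service would PUT the bundled module to this URL,
// authenticated with an API token.
```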

## Conclusion

Cloudflare Workers for Platforms provides a robust foundation for building multi-tenant SaaS applications with strong isolation, global distribution, and scalable performance. By leveraging this architecture, platform providers can focus on delivering value to their customers while Cloudflare handles the underlying infrastructure complexity.

## Related resources

* [Workers for Platforms: Get started](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/get-started/)
* [Workers for Platforms: Outbound Workers](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/)
* [Workers for Platforms: Observability](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/observability/)


---

---
title: Serverless ETL pipelines
description: Cloudflare enables fully serverless ETL pipelines, significantly reducing complexity, accelerating time to production, and lowering overall costs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Serverless ETL pipelines

**Last reviewed:**  almost 2 years ago 

## Introduction

Extract, Transform, Load (ETL) pipelines are a cornerstone in the realm of data engineering, facilitating the seamless flow of data from its raw state to a structured, usable format. ETL pipelines are instrumental in the data processing journey, particularly in scenarios where data needs to be collected, cleansed, and transformed before being loaded into a target destination.

The process begins with extraction, where data is gathered from various sources such as databases, files, or streams. This raw data is often disparate and unstructured, necessitating the next step: transformation. During transformation, the data undergoes a series of operations to standardize formats, clean inconsistencies, and enrich with additional context or calculations. This phase is critical for ensuring data quality and consistency, as well as aligning it with the requirements of downstream applications and analytics.

Finally, the transformed data is loaded into a destination, which could be a data warehouse, database, or any other storage solution. The loading phase involves efficiently moving the processed data to its intended destination, where it can be readily accessed and utilized for various purposes such as reporting, analysis, or feeding into machine learning models.

ETL pipelines play a pivotal role in data-driven decision-making processes across industries, enabling organizations to derive insights and value from their data assets. By automating and streamlining the journey from raw data to actionable insights, ETL pipelines empower businesses to make informed decisions, optimize processes, and gain competitive advantages in today's data-driven landscape.

Examples of ETL pipelines in action include scenarios like extracting sales data from multiple retail stores, transforming it to a standardized format, and loading it into a centralized data warehouse for analysis and reporting purposes. Similarly, ETL pipelines are utilized in data migration projects, where legacy data needs to be migrated to modern systems while ensuring data integrity and consistency throughout the process.

Cloudflare allows for the deployment of fully serverless ETL pipelines, which can reduce complexity, time to production and overall cost. The following diagrams demonstrate different methods of how Cloudflare can be used in common ETL pipeline deployments.

## ETL pipeline with HTTP-based ingest

![Figure 1: Serverless: HTTP-based ingest](https://developers.cloudflare.com/_astro/serverless-etl-http-based.DtreS_ZH_MTyHF.svg "Figure 1: ETL pipeline with HTTP-based ingest")

Figure 1: ETL pipeline with HTTP-based ingest

This architecture shows a fully serverless ETL pipeline with an API endpoint as ingest. Clients send data via HTTP request to be processed. Common examples include click-stream data or analytics.

1. **Client request**: Send a POST request with data to be ingested, such as click-stream data or analytics events.
2. **Input processing**: Process incoming request using [Workers](https://developers.cloudflare.com/workers/) and send messages to [Queues](https://developers.cloudflare.com/queues/) to add to processing backlog.
3. **Data processing**: Use [Queues](https://developers.cloudflare.com/queues/) to trigger a [consumer](https://developers.cloudflare.com/queues/reference/how-queues-works/#consumers) that processes input data in batches to prevent downstream overload and increase efficiency. The consumer performs all data cleaning, transformation, and standardization operations.
4. **Object storage**: Upload processed data to [R2](https://developers.cloudflare.com/r2/) for persistent storage.
5. **Ack/Retry mechanism**: Signal success/error by using the [Queues Runtime API](https://developers.cloudflare.com/queues/configuration/javascript-apis/#message) in the consumer for each document. [Queues](https://developers.cloudflare.com/queues/) will schedule retries, if needed.
6. **Data querying**: Access processed data from external services for further data usage.
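
The steps above can be sketched as a single Worker with a `fetch` producer and a `queue` consumer. The binding names (`INGEST`, `BUCKET`) and the event schema in `transform()` are assumptions standing in for real pipeline logic:

```javascript
// Standardize one raw event (hypothetical click-stream schema).
function transform(event) {
  return {
    type: String(event.type || "unknown").toLowerCase(),
    ts: Number(event.ts) || 0,
  };
}

const pipeline = {
  // Producer: accept POSTed events and enqueue them for processing.
  async fetch(request, env) {
    if (request.method !== "POST") return new Response(null, { status: 405 });
    await env.INGEST.send(await request.json());
    return new Response(null, { status: 202 });
  },
  // Consumer: transform a batch, persist to R2, then ack or retry per message.
  async queue(batch, env) {
    for (const msg of batch.messages) {
      try {
        const doc = transform(msg.body);
        await env.BUCKET.put(`events/${msg.id}.json`, JSON.stringify(doc));
        msg.ack();
      } catch (e) {
        msg.retry(); // Queues reschedules this message
      }
    }
  },
};
```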

## ETL pipeline with object storage ingest

![Figure 2: Serverless: Object storage ingest](https://developers.cloudflare.com/_astro/serverless-etl-object-storage.B0XqHlLa_MTyHF.svg "Figure 2: ETL pipeline with object storage ingest")

Figure 2: ETL pipeline with object storage ingest

This architecture shows a fully serverless ETL pipeline with object storage as ingest. Common examples include log and unstructured document processing.

1. **Client request**: Upload raw data to R2 via S3-compatible API. Common examples include log and analytics data.
2. **Input processing**: Send messages to [Queues](https://developers.cloudflare.com/queues/) using [R2 event notifications](https://developers.cloudflare.com/r2/buckets/event-notifications/) upon object upload.
3. **Data processing**: Use [Queues](https://developers.cloudflare.com/queues/) to trigger a [consumer](https://developers.cloudflare.com/queues/reference/how-queues-works/#consumers) that processes input data in batches to prevent downstream overload and increase efficiency. The consumer performs all data cleaning, transformation, and standardization operations.
4. **Object storage**: Upload processed data to [R2](https://developers.cloudflare.com/r2/) for persistent storage.
5. **Ack/Retry mechanism**: Signal success/error by using the [Queues Runtime API](https://developers.cloudflare.com/queues/configuration/javascript-apis/#message) in the consumer for each document. [Queues](https://developers.cloudflare.com/queues/) will schedule retries, if needed.
6. **Data querying**: Access processed data from external services for further data usage.
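
A sketch of the consumer side for this variant, assuming R2 bindings `RAW` (ingest bucket) and `OUT` (processed output) and a message body following R2's event notification payload; the uppercase transform is a stand-in for real cleaning logic:

```javascript
// Handle one R2 event notification message: extract the uploaded
// object, transform it, and load the result into the output bucket.
async function handleNotification(msg, env) {
  const { action, object } = msg.body;
  if (action !== "PutObject") return;        // only react to uploads
  const raw = await env.RAW.get(object.key); // extract
  if (raw === null) return;                  // object was already deleted
  const text = await raw.text();
  const cleaned = text.trim().toUpperCase(); // transform (stand-in step)
  await env.OUT.put(`processed/${object.key}`, cleaned); // load
}
```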

## Related resources

* [Workers: Get started](https://developers.cloudflare.com/workers/get-started/guide/)
* [Queues: Get started](https://developers.cloudflare.com/queues/get-started/)
* [R2: Get started](https://developers.cloudflare.com/r2/get-started/)


---

---
title: Serverless global APIs
description: An example architecture of a serverless API on Cloudflare and aims to illustrate how different compute and data products could interact with each other.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Serverless global APIs

**Last reviewed:**  almost 2 years ago 

## Introduction

Serverless APIs represent a modern approach to building and deploying scalable and reliable application programming interfaces (APIs) without the need to manage traditional server infrastructure. These APIs are designed to handle incoming requests from users or other systems, execute the necessary logic or operations, and return a response, all without the need for developers to provision or manage underlying servers.

At the heart of serverless APIs is the concept of serverless computing, where developers focus solely on writing code to implement business logic, without concerning themselves with server provisioning, scaling, or maintenance. This allows for greater agility and faster time-to-market for API-based applications.

Developers define the API endpoints and the corresponding logic or functionality using functions or microservices, which are then deployed to the serverless platform. The platform handles the execution of these functions in response to incoming requests.

Additionally, serverless APIs often integrate seamlessly with other cloud services, such as authentication and authorization services, databases, and event-driven architectures, enabling developers to build complex, scalable, and resilient applications with minimal operational overhead.

Most cloud serverless implementations have a single region where your code is executed. This means any request, from anywhere in the world, must traverse the Internet to get to this single location. All responses to the API request must also be sent back over the same Internet route to the user.

![Figure 1: Traditional single-region architecture](https://developers.cloudflare.com/_astro/single-region.DcjMitxL_Z1D2c5c.webp "Figure 1:  Traditional single-region architecture")

Figure 1: Traditional single-region architecture

Cloudflare follows a different, global-first approach. Globally-deployed architectures enable lower latency and high availability for users accessing the API from different parts of the world. To realize these performance gains, not only the compute but ideally also the data needs to be distributed. Solutions such as caching and global replication can enable this.

![Figure 2: Region Earth](https://developers.cloudflare.com/_astro/region-earth.DPRpgTD0_Z1dzB4T.webp "Figure 2:  Region Earth")

Figure 2: Region Earth

Overall, serverless globally-deployed APIs offer a cost-effective, scalable, and agile approach to building modern applications and services, allowing organizations to focus on delivering value to their users without being encumbered by the complexities of managing infrastructure.

## Serverless global APIs

![Figure 3: Serverless global APIs](https://developers.cloudflare.com/_astro/serverless-global-apis.BnHHhP-u_2d88FE.svg "Figure 3: Serverless global APIs")

Figure 3: Serverless global APIs

This example architecture of a serverless API on Cloudflare illustrates how different compute and data products can interact with each other.

1. **Client request**: Send request to API endpoint.
2. **API Shield/Router**: Process incoming request using [Workers](https://developers.cloudflare.com/workers/), check for validity, and perform authentication logic, if needed. Then, forward the (potentially transformed and/or enriched) API call to individual [Workers](https://developers.cloudflare.com/workers) using [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/). This allows for a separation of concerns.
3. **Read-heavy data**: Read from [KV](https://developers.cloudflare.com/kv/) to serve read-heavy, non-dynamic data. This could include configuration data or product information. Perform writes as needed, keeping [limits](https://developers.cloudflare.com/kv/platform/limits/) in mind.
4. **Relational data**: Query [D1](https://developers.cloudflare.com/d1/) to handle relational data, such as user, product, or other application data.
5. **External data**: Query external databases using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/). Leverage caching to improve performance where applicable. This can be especially helpful when a data migration is out of scope of the implementation.
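
The router in step 2 can be sketched as below. The Service Binding names (`USERS`, `ORDERS`), the KV binding `CONFIG`, and the route paths are illustrative assumptions:

```javascript
// Map an API path to the name of the Service Binding that handles it.
function backendFor(pathname) {
  if (pathname.startsWith("/users")) return "USERS";
  if (pathname.startsWith("/orders")) return "ORDERS";
  return null;
}

const apiRouter = {
  async fetch(request, env) {
    const { pathname } = new URL(request.url);
    // Read-heavy configuration from KV, e.g. a hypothetical kill switch.
    const maintenance = await env.CONFIG.get("maintenance-mode");
    if (maintenance === "on") return new Response("down", { status: 503 });
    const backend = backendFor(pathname);
    if (!backend) return new Response("not found", { status: 404 });
    // Service Bindings invoke the target Worker without a public network hop.
    return env[backend].fetch(request);
  },
};
```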

## Related resources

* [Workers: Get started](https://developers.cloudflare.com/workers/get-started/guide/)
* [D1: Get started](https://developers.cloudflare.com/d1/get-started/)
* [Hyperdrive: Get started](https://developers.cloudflare.com/hyperdrive/get-started/)


---

---
title: Serverless image content management
description: Leverage various components of Cloudflare's ecosystem to construct a scalable image management solution
image: https://developers.cloudflare.com/core-services-preview.png
---


# Serverless image content management

**Last reviewed:**  about 2 years ago 

## Introduction

In this reference architecture diagram, we show how to leverage various components of Cloudflare’s ecosystem to construct a scalable image management solution. The solution integrates moderation principles via Cloudflare's Workers AI platform and performs image classification through inference at the edge. Images are stored in Cloudflare's R2 product, an object storage system with an S3-compatible API, while metadata is stored in a key/value store to enable content augmentation.

Images are served to requesting clients through signed links, resized based on device type or requested transformations, and protected by Cloudflare’s native security and performance features.

![Figure 1: Serverless image content management](https://developers.cloudflare.com/_astro/diagram.DEMTm7TJ_2sama5.svg "Figure 1: Serverless image content management reference architecture diagram")

Figure 1: Serverless image content management reference architecture diagram

### Products included in the recipe

| Product                                                                                          | Function                                                              |
| ------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------- |
| [DDoS ↗](https://www.cloudflare.com/ddos/)                                                       | Volumetric attack protection                                          |
| [Bot Management ↗](https://www.cloudflare.com/application-services/products/bot-management/)     | Protection against scraping and other sophisticated automated abuse   |
| [Web Application Firewall ↗](https://www.cloudflare.com/application-services/products/waf/)      | Protection against web threats                                        |
| [CDN ↗](https://www.cloudflare.com/application-services/products/cdn/)                           | Global caching of images                                              |
| [Optimization ↗](https://www.cloudflare.com/application-services/products/website-optimization/) | Compression and acceleration of image delivery                        |
| [Workers ↗](https://workers.cloudflare.com/)                                                     | Compute for the serverless microservices                              |
| [AI ↗](https://ai.cloudflare.com/)                                                               | Image classification                                                  |
| [R2 ↗](https://www.cloudflare.com/developer-platform/r2/)                                        | S3-compatible object storage                                          |
| [KV](https://developers.cloudflare.com/kv/)                                                      | Image metadata storage                                                |

## Getting started

This reference architecture diagram shows how to use the Cloudflare platform to build a fully serverless image and content management system. The implementation leverages several components of the Cloudflare stack, including edge compute with Cloudflare Workers, KV, and R2 object storage; application performance optimization and caching; application security features such as rate limiting and DDoS mitigation; and artificial intelligence with Workers AI.

The ultimate goal is to create a scalable and accessible platform for storing and serving images globally. This reference architecture will walk you through the key features and mechanisms that you can use with Cloudflare’s native capabilities as well as those that can be built with Cloudflare’s robust computing capabilities.

### 1\. Image servicing

Clients request images with [HMAC signatures](https://developers.cloudflare.com/workers/examples/signing-requests/) and any necessary transformations. Transformation parameters can be included in the [src-set](https://developers.cloudflare.com/images/transform-images/make-responsive-images/#srcset-for-high-dpi-displays) for HTML content or directly sent alongside [HTTP requests](https://developers.cloudflare.com/images/transform-images/transform-via-url/).

### 2\. Volumetric protection

Cloudflare's Application Security stack takes a comprehensive approach to shielding the image service from malicious activity. By implementing volumetric protections such as [rate limiting controls](https://developers.cloudflare.com/waf/rate-limiting-rules/), we effectively mitigate the risk of abuse and [DDoS](https://developers.cloudflare.com/ddos-protection/) attacks, ensuring uninterrupted service delivery.

### 3\. Signature validation

A [Cloudflare Worker](https://developers.cloudflare.com/workers/) function validates [incoming signatures](https://developers.cloudflare.com/workers/examples/signing-requests/) to ensure the authenticity and integrity of requests. This security measure helps prevent unauthorized access and abuse of the service by verifying that the signature accompanying each request is legitimate. The application responsible for generating content and associated signatures can also set expiration dates for links, further guarding against tampering or man-in-the-middle attacks. HMAC (Hash-based Message Authentication Code) is commonly used as the signature mechanism for this purpose.
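A minimal sketch of such HMAC link signing with an expiry is shown below. The shared secret, the `exp`/`sig` query parameter names, and the path layout are all illustrative assumptions, not a fixed Cloudflare API; in a Worker, the secret would come from an environment binding.

```javascript
// Sketch of HMAC URL signing/verification with an expiry timestamp.
// SECRET and the exp/sig parameter names are illustrative assumptions.
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "replace-with-a-secret-from-an-environment-binding";

// Sign a path together with a Unix-time expiry.
function signUrl(path, expiresAtSec) {
  const sig = createHmac("sha256", SECRET)
    .update(`${path}:${expiresAtSec}`)
    .digest("hex");
  return `${path}?exp=${expiresAtSec}&sig=${sig}`;
}

// Verify the signature and reject expired links.
function verifyUrl(signedUrl, nowSec) {
  const url = new URL(signedUrl, "https://example.com");
  const exp = Number(url.searchParams.get("exp"));
  const sig = url.searchParams.get("sig") ?? "";
  if (!exp || nowSec > exp) return false; // missing or expired
  const expected = createHmac("sha256", SECRET)
    .update(`${url.pathname}:${exp}`)
    .digest("hex");
  // Constant-time comparison; lengths must match for timingSafeEqual.
  return (
    sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
  );
}
```

A tampered or expired link fails verification before any image is fetched, so invalid requests never reach storage.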

### 4\. Image optimization and caching

Images are served from [cache](https://developers.cloudflare.com/cache/) when available; on a cache miss, they are retrieved from storage and cached for subsequent requests. We optimize image delivery by serving the most suitable format for each device, such as [WebP or AVIF](https://developers.cloudflare.com/images/polish/), while also applying compression to reduce file size. This ensures a smooth and seamless visual experience for users.

### 5\. Image transformations

Cloudflare's [image resizing](https://developers.cloudflare.com/images/) feature will resize the original images requested for transformation, completing the process entirely at the edge from any of our global locations. This fast and efficient process offers a wide range of transformation options.
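A transform-via-URL request can be assembled with a small helper like the sketch below. The `/cdn-cgi/image/` prefix and option names (`width`, `quality`, `format`) follow Cloudflare's documented transform-via-URL format; the helper itself is just an illustrative convenience.

```javascript
// Build a Cloudflare transform-via-URL path: /cdn-cgi/image/<options>/<source>.
// The option names follow Cloudflare's transform-via-URL documentation.
function transformUrl(source, options) {
  const opts = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `/cdn-cgi/image/${opts}/${source}`;
}
```

For example, `transformUrl("uploads/hero.jpg", { width: 400, format: "auto" })` returns `/cdn-cgi/image/width=400,format=auto/uploads/hero.jpg`.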

### 6\. Content moderation and storage

A [Cloudflare Worker](https://developers.cloudflare.com/workers/) script analyzes incoming images, leveraging their [classification metadata](https://developers.cloudflare.com/workers-ai/models/) to ensure compliance with the established use policy. [Cloudflare R2](https://developers.cloudflare.com/r2/) serves as an S3-compatible object storage solution, storing images and their associated metadata (such as image classification) in a globally accessible and scalable manner. With fast delivery capabilities and the ability to scale from zero, Cloudflare R2 is an ideal solution for storing and managing large collections of images.

### 7\. Image classification

With [Cloudflare AI ↗](https://ai.cloudflare.com/) at its core, our [image classification](https://developers.cloudflare.com/workers-ai/models/) inference model will rapidly inspect each incoming image, classifying them in real-time. This cutting-edge technology allows us to streamline the process of moderating content, significantly reducing the need for a dedicated team to sift through and review every submission.


---

---
title: Control and data plane architectural pattern for Durable Objects
description: Separate the control plane from the data plane of your application to achieve great performance and reliability without compromising on functionality.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Control and data plane architectural pattern for Durable Objects

**Last reviewed:**  over 1 year ago 

## Introduction

[Durable Objects](https://developers.cloudflare.com/durable-objects/) are built on top of [Cloudflare Workers](https://developers.cloudflare.com/workers/) and span several locations across our global infrastructure network. Each Durable Object instance has its own durable storage persisted across requests, in-memory state, and single-threaded execution, and can be placed in a specific region.

A single Durable Object instance has certain [performance and storage capabilities](https://developers.cloudflare.com/durable-objects/platform/limits/). Therefore, to scale an application without being restricted by the limits of a single instance, we need to shard our application data as much as possible, and take advantage of the [Cloudflare infrastructure ↗](https://www.cloudflare.com/en-gb/network/) by spreading our Durable Object instances across the world, moving both the data and compute as close to the users as possible.

This document describes a useful architectural pattern to separate the control plane from the data plane of your application to achieve great performance and reliability without compromising on functionality.

* The **control plane** provides the administrative APIs used to manage resource metadata. For example, a user creating and deleting a wiki, or listing all wikis of a user.
* The **data plane** provides the primary function of the application and handles the operations on the resources data directly. For example, fetching and updating the content of a wiki, or updating the content of a collaborative document. Data planes are intentionally less complicated and usually handle a much larger volume of requests.
* The **management plane** is an optional component of a system providing a higher level of interaction than the control plane to simplify configuration and operations. In this document, we will not focus on this as the same principles apply as to the control plane.

## Control and data plane separation pattern

In this pattern, our application consists of at least one Durable Object instance per resource type handling all its control plane operations, and as many Durable Object instances as we need for the data plane operations, one for each resource instance created in the application.

You can scale to millions of Durable Object instances, one for each of your resources.

The main advantage of this architectural pattern is that data plane operations, which usually have a larger volume of requests than control plane operations, are handled directly by the Durable Object instances holding the resource data, without going through the control plane Durable Object instance. Therefore, the application's performance and availability is not limited by a single Durable Object instance, but is shared across thousands or millions of Durable Objects.

Consider an example for a generic resource type `XYZ`, where `XYZ` could in practice be a wiki, a collaborative document, a database for each user, or any other resource type in your application.

![Figure 1: Control and data plane architectural pattern for Durable Objects](https://developers.cloudflare.com/_astro/diagram.BjLddBSp_8cBFg.svg "Figure 1: Control and data plane architectural pattern for Durable Objects")

Figure 1: Control and data plane architectural pattern for Durable Objects

1. A user in London (LHR) initiates a resource `XYZ` creation request. The request is routed to the nearest Cloudflare datacenter and received by the Workers fleet which serves the application API.
2. The Worker code will route the request to the appropriate control plane Durable Object instance managing the resources of type `XYZ`. We will use the `idFromName` approach to reference the Durable Object instance by name (`control-plane-xyz`). This allows immediate access to the control plane Durable Object instances without needing to maintain a mapping.  
   * The location of the control plane Durable Object will be close to the first request accessing it, or to the explicit region we provide using [Location Hints](https://developers.cloudflare.com/durable-objects/reference/data-location/#provide-a-location-hint).
3. The control plane Durable Object instance (`control-plane-xyz`) receives the request, and immediately creates another Durable Object instance (`data-plane-xyz-03`) near the user request's location (using Location Hints), so that the actual Durable Object instance holding the resource's content is near the user that created it.  
   * We call a custom `init(...)` function on the created Durable Object instance (`data-plane-xyz-03`), passing any metadata needed to start handling user requests. The Durable Object instance stores this information in its local storage and performs any necessary initialization. This step can be skipped if each subsequent request to the created resource contains all the information needed to handle it, for example as path and query parameters in the request URL.  
   * We use the [idFromName](https://developers.cloudflare.com/durable-objects/api/namespace/#idfromname) approach to reference the Durable Object (`data-plane-xyz-03`), which allows the use of name-based resource identifiers.  
   * Alternatively, we can use the [newUniqueId](https://developers.cloudflare.com/durable-objects/api/namespace/#newuniqueid) approach, which returns a random resource identifier instead of a name-based one. This identifier must be communicated back to the user so that they can provide it in subsequent requests to the resource.
4. The control plane Durable Object instance (`control-plane-xyz`) stores the generated identifier (`data-plane-xyz-03`) to its local storage, in order to be able to list/delete all created resources, and then returns it to the Worker.
5. The user receives a successful response for the creation of the resource and the corresponding identifier, and (optionally) gets redirected to the resource itself.
6. The user sends a write request to the API for the resource identifier returned in the previous step, in order to update the content of the resource.
7. The Worker code uses the resource identifier provided to directly reference the data plane Durable Object instance for that resource (`data-plane-xyz-03`). The Durable Object instance will handle the request appropriately by writing the content to its local durable persistent storage and return a response accordingly.
8. Another user from Portland (PDX) is sending a read request to a previously created resource (`data-plane-xyz-01`).
9. The Worker code directly references the Durable Object instance holding the data for the given resource identifier (`data-plane-xyz-01`), and the Durable Object instance will return its content by reading its local storage.

As long as the application data model allows sharding at the resource level, you can scale out as much as you want, while taking advantage of data locality near the user that accesses that resource.

The same pattern can be applied as many times as necessary to achieve the performance required.

For example, depending on our load, we could further shard our control plane Durable Object into several Durable Objects. Instead of having a single Durable Object instance for all resources of type `XYZ`, we could have one for each region. The name-based approach to reference a Durable Object instance simplifies targeting the appropriate instance accordingly.

In conclusion, as long as you find a way to shard your application's data model in fine-grained resources that are self-contained, you are able to dedicate at least one Durable Object instance to each resource and scale out.

## Related resources

* [Durable Objects Namespace documentation](https://developers.cloudflare.com/durable-objects/api/namespace/)
* [Durable Objects: Easy, Fast, Correct — Choose three ↗](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/)
* [Zero-latency SQLite storage in every Durable Object ↗](https://blog.cloudflare.com/sqlite-in-durable-objects/)
* [Data, Control, Management: Three Planes, Different Altitudes ↗](https://thenewstack.io/data-control-management-three-planes-different-altitudes/)
* Examples of this architectural pattern in real-world applications:  
   * [Durable Objects aren't just durable, they're fast: a 10x speedup for Cloudflare Queues ↗](https://blog.cloudflare.com/how-we-built-cloudflare-queues/)  
   * [Building a global TiddlyWiki hosting platform with Cloudflare Durable Objects and Workers — Tiddlyflare ↗](https://www.lambrospetrou.com/articles/tiddlyflare/)


---

---
title: Egress-free object storage in multi-cloud setups
description: Learn how to use R2 to get egress-free object storage in multi-cloud setups.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Egress-free object storage in multi-cloud setups

**Last reviewed:**  almost 2 years ago 

## Introduction

Object storage is a modern data storage approach that stores data as objects rather than in a hierarchical structure like traditional file systems. This makes object storage highly scalable and flexible for managing vast amounts of data across diverse applications and environments.

Oftentimes organizations leverage multiple cloud providers to distribute their workloads across different platforms, mitigating risks associated with vendor lock-in, enhancing resilience, and optimizing performance and cost. However, managing data across multiple clouds introduces challenges related to data mobility and interoperability, particularly when it comes to transferring data between cloud providers or on-premises environments.

Egress fees are charges incurred when data is transferred out of a cloud provider's network, either to another cloud provider, on-premises infrastructure, or external services. These fees can vary depending on factors such as the volume of data transferred, the destination of the data, and the network bandwidth utilized.

[R2](https://developers.cloudflare.com/r2/) offers an enticing value proposition by not charging the costly egress bandwidth fees associated with typical cloud storage services. This can be very advantageous in the context of multi-cloud environments, especially when you want to run compute-intensive workloads such as AI model training, query engines, and other data science tools.

## R2 multi-cloud setup

![Figure 1: R2 multi-cloud setup](https://developers.cloudflare.com/_astro/r2-multi-cloud.jB-KW29c_Z1XXhic.svg "Figure 1: R2-multi-cloud setup")

Figure 1: R2-multi-cloud setup

1. **Worker and R2 interaction**: Use R2's [Workers API](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/) to interact with R2 from a Worker. Alternatively, for improved portability, use R2's [S3 API](https://developers.cloudflare.com/r2/api/s3/) from a Worker. No R2 egress fees apply.
2. **External service and R2 interaction**: Use R2's [S3 API](https://developers.cloudflare.com/r2/api/s3/) to interact with R2 from external services. No R2 egress fees apply.
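For the S3 API path, a minimal client configuration pointed at R2 might look like the following sketch. The endpoint format follows R2's S3 API documentation; `accountId` and the credentials are placeholders you supply, and the resulting object is the shape accepted by common S3-compatible clients such as `@aws-sdk/client-s3`.

```javascript
// Minimal S3-compatible client configuration for R2. The endpoint format
// follows R2's S3 API docs; accountId and credentials are placeholders.
function r2S3Config(accountId, accessKeyId, secretAccessKey) {
  return {
    region: "auto", // R2 uses "auto" as its region
    endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
    credentials: { accessKeyId, secretAccessKey },
  };
}
```

The same configuration works from a Worker or from an external service; in both cases, reads from R2 incur no egress fees.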

## Related resources

* [R2: Get started](https://developers.cloudflare.com/r2/get-started)
* [R2: S3 API](https://developers.cloudflare.com/r2/api/s3/)
* [R2: Workers API](https://developers.cloudflare.com/r2/api/workers/)
* [R2: Configure aws4fetch for R2](https://developers.cloudflare.com/r2/examples/aws/aws4fetch/)


---

---
title: Event notifications for storage
description: Use Cloudflare Workers or an external service to monitor for notifications about data changes and then handle them appropriately.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Event notifications for storage

**Last reviewed:**  over 1 year ago 

## Introduction

Cloudflare [R2](https://developers.cloudflare.com/r2/) Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. The lifecycle of data in object storage often extends beyond uploading, modifying, or deleting the data. There may be a requirement to transform, analyze, or perform post-processing on the data. R2 provides [event notifications](https://developers.cloudflare.com/r2/buckets/event-notifications/) to manage these event-driven workflows.

This document walks through how to use serverless [Cloudflare Workers](https://developers.cloudflare.com/workers/) or an external service to monitor for notifications about data changes and then handle them appropriately.

## Push-based consumer Worker

Event notifications function by sending messages to a [queue](https://developers.cloudflare.com/queues/) whenever there is a change to your data. These messages are then handled by a [consumer Worker](https://developers.cloudflare.com/queues/reference/how-queues-works/#consumers), that is, a client that subscribes to or consumes messages from a queue. The consumer Worker automatically receives these messages, allowing you to define any subsequent actions that need to be taken.

For instance, you can configure a notification to trigger when new images are uploaded to your R2 bucket. This notification can then automatically start an AI workload that performs an action on the image, such as converting the image to text.

Consider the example below of push-based post-processing: when a user uploads a new object into R2, we want to log and store that event into a separate R2 bucket. You can create this scenario yourself by following this tutorial: [Log and store upload events in R2 with event notifications](https://developers.cloudflare.com/r2/tutorials/upload-logs-event-notifications/).

![Figure 1: Push-Based R2 Event Notifications](https://developers.cloudflare.com/_astro/pushed-based-event-notification.NdMYExDK_ZD7HLg.svg "Figure 1: Push-Based R2 Event Notifications")

Figure 1: Push-Based R2 Event Notifications

1. A user uploads a new object directly to R2.
2. An event notification is sent to the queue.
3. The consumer Worker is pushed the new work from the queue.
4. The Worker inserts a log event into R2.
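The consumer side of the steps above can be sketched as a Worker `queue()` handler. The binding name `LOG_BUCKET`, the log-key layout, and the notification fields used (`eventTime`, `object.key`) are assumptions for illustration.

```javascript
// Derive a log object key from an event notification message.
// The "logs/<eventTime>-<object key>.json" layout is a hypothetical choice.
function formatLogKey(msg) {
  return `logs/${msg.eventTime}-${msg.object.key}.json`;
}

// Sketch of the consumer Worker from Figure 1: one log object per event.
// In a real Worker this object would be the module's default export.
const worker = {
  async queue(batch, env) {
    for (const message of batch.messages) {
      const body = message.body; // the event notification payload
      await env.LOG_BUCKET.put(formatLogKey(body), JSON.stringify(body));
      message.ack(); // acknowledge successful handling
    }
  },
};
```

Each message is acknowledged individually, so a failure while writing one log object leaves the remaining messages in the queue for redelivery.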

## Pull-based HTTP consumer

Alternatively, you can establish a [pull-based consumer](https://developers.cloudflare.com/queues/configuration/pull-consumers/), where you pull from a queue over HTTP from any environment. Use a pull-based consumer if you need to consume messages from existing infrastructure outside of Cloudflare where you need to carefully control how fast messages are consumed.

A pull-based consumer must explicitly make a call to pull (and then acknowledge) messages from the queue, only when it is ready to do so.

Consider the scenario below: a user initiates a delete from R2. An external service needs to be informed of the deletion, so a pull-based queue has been established for the external service to retrieve notifications.

![Figure 2: Pull-Based R2 Event Notifications](https://developers.cloudflare.com/_astro/pull-based-event-notification.KnQPn3ra_1TzX3M.svg "Figure 2: Pull-Based R2 Event Notifications")

Figure 2: Pull-Based R2 Event Notifications

1. A user initiates a delete from R2.
2. An event notification is sent to the queue.
3. The external service, when ready to process the request, makes an HTTP POST request to the queue to pull the message.
4. The queue sends the message in response to the POST request from step 3.
5. The external service must acknowledge that the message has been received.

You can follow the steps here to [configure a pull-based consumer](https://developers.cloudflare.com/queues/configuration/pull-consumers/#1-enable-http-pull).
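The pull and acknowledge requests in steps 3 to 5 can be sketched as below. The endpoint paths follow the Queues HTTP pull API, but confirm the exact body field names (`visibility_timeout_ms`, `batch_size`, `lease_id`) against the current pull-consumers documentation before relying on them.

```javascript
// Build the HTTP pull request for a queue (step 3). Field names in the body
// are assumptions to verify against the Queues pull-consumers docs.
function pullRequest(accountId, queueId) {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/queues/${queueId}/messages/pull`,
    method: "POST",
    body: JSON.stringify({ visibility_timeout_ms: 6000, batch_size: 50 }),
  };
}

// Build the acknowledgement request (step 5) from the lease IDs returned
// with each pulled message.
function ackRequest(accountId, queueId, leaseIds) {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/queues/${queueId}/messages/ack`,
    method: "POST",
    body: JSON.stringify({ acks: leaseIds.map((id) => ({ lease_id: id })) }),
  };
}
```

Both requests also require an `Authorization` header carrying an API token with Queues permissions.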

## Additional example use cases

* Send an email to an administrator any time objects are deleted from R2.
* When a video or podcast is uploaded to R2, automatically process the content using one of Cloudflare's Automatic Speech Recognition (ASR) AI models to generate subtitles or even translate the content.
* Remove related database entries if an object in R2 is deleted.

## Related resources

* [Tutorial: Log and store upload events in R2 with event notifications](https://developers.cloudflare.com/r2/tutorials/upload-logs-event-notifications/)
* [Event Notifications documentation](https://developers.cloudflare.com/r2/buckets/event-notifications/)
* [Cloudflare R2 overview](https://developers.cloudflare.com/r2/)
* [Cloudflare Queues overview](https://developers.cloudflare.com/queues/)
* [Cloudflare Queues Pull Consumers](https://developers.cloudflare.com/queues/configuration/pull-consumers/)


---

---
title: On-demand Object Storage Data Migration
description: Use Cloudflare migration tools to migrate data between cloud object storage providers.
image: https://developers.cloudflare.com/core-services-preview.png
---


# On-demand Object Storage Data Migration

**Last reviewed:**  over 1 year ago 

## Introduction

Migrating data between cloud object storage providers can be challenging and expensive. You need to ensure no objects are missed, especially when new data is coming in during your migration. Additionally, there may be a significant one-time data transfer fee to consider.

In order to address these challenges, Cloudflare has created two migration tools: [Sippy](https://developers.cloudflare.com/r2/data-migration/sippy/) and [Super Slurper](https://developers.cloudflare.com/r2/data-migration/super-slurper/). Sippy is an on-demand data migration service, and it is the primary focus of this reference architecture diagram. Super Slurper, on the other hand, is designed for large-scale, one-time migrations to Cloudflare's global object storage service, [R2](https://developers.cloudflare.com/r2/). When moving all your data at once does not suit your scenario, Sippy can help.

Sippy enables you to transfer data from other cloud providers to Cloudflare R2 as the data is requested. This workflow is ideal for situations where you want to avoid large upfront data transfer bills and selectively migrate data as it's accessed.

Because objects are copied to R2 during requests your application already makes, where you would already be paying egress fees, there is no separate migration-specific egress cost for copying those objects out of the source provider's storage.

Use Sippy to migrate your commonly accessed data objects and immediately start saving on egress fees. Then, use Super Slurper to migrate any remaining data.

Here's how Sippy works: it will first attempt to retrieve an object from R2 storage. If the object is not in R2, it will retrieve the object from your source cloud object storage. At the same time, it will add the object to R2 for future access, ensuring a seamless and efficient data migration process.

## On-demand Object Storage Data Migration with Sippy

![Figure 1: R2 On-demand Object Storage Data Migration with Sippy](https://developers.cloudflare.com/_astro/sippy-migration-diagram.CTGKS9AD_Z206LEl.svg "Figure 1: On-demand Object Storage Data Migration with Sippy")

Figure 1: On-demand Object Storage Data Migration with Sippy

1. The client requests an object from R2 using [Workers ↗](https://developers.cloudflare.com/r2/api/workers/), the [S3 API ↗](https://developers.cloudflare.com/r2/api/s3/), or a [public bucket ↗](https://developers.cloudflare.com/r2/buckets/public-buckets/).
2. If the object is found in your R2 bucket, it is served to the client.
3. If the object is not found in R2, it is simultaneously returned from your source storage bucket and copied to R2. Note: some large objects may take multiple requests to copy to R2 because they are transferred as multipart uploads. From the client’s perspective, they still receive the file they requested.

After objects are copied, subsequent requests will be served from R2 and you’ll begin saving on egress fees immediately.
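The read-through behavior described above can be simulated in a few lines. This is a toy model, with `Map` objects standing in for the two object stores, not Sippy's implementation.

```javascript
// Simulate Sippy's read-through migration: serve from R2 when present,
// otherwise fetch from the source provider and copy into R2 on the way
// through. Maps stand in for the two object stores.
function makeSippy(r2, source) {
  return {
    get(key) {
      if (r2.has(key)) {
        return { body: r2.get(key), servedFrom: "r2" };
      }
      const body = source.get(key);
      if (body === undefined) return null; // in neither store
      r2.set(key, body); // copied so future requests are served from R2
      return { body, servedFrom: "source" };
    },
  };
}
```

The first request for an object pays source egress once; every subsequent request is served from R2 with no egress fee.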

## Related resources

* [Sippy Documentation](https://developers.cloudflare.com/r2/data-migration/sippy/)
* [Super Slurper Documentation](https://developers.cloudflare.com/r2/data-migration/super-slurper/)


---

---
title: Storing user generated content
description: Store user-generated content in R2 for fast, secure, and cost-effective architecture.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Storing user generated content

**Last reviewed:**  about 1 year ago 

## Introduction

User generated content (UGC) is an essential aspect of modern applications. This includes users uploading profile photos, documents, and videos, as well as AI models generating images, summaries, or structured data. Therefore, applications require a reliable, scalable, and cost-effective solution for storing and accessing this content.

Cloudflare [R2](https://developers.cloudflare.com/r2/) is an S3-compatible object storage with zero egress fees, making it ideal for handling content uploads and delivery at scale. Combined with Cloudflare [Workers](https://developers.cloudflare.com/workers/) and Cloudflare's global network, it enables fast, secure, and cost-effective workflows for ingesting and managing UGC.

This reference architecture explores two common UGC workflows, both optimized for performance, security, and cost efficiency:

1. **Secure User Uploads to R2 via Signed URLs:** Allowing users to upload files (profile images, documents, etc.) efficiently and securely without overloading backend systems.
2. **AI-Generated Content Stored in R2:** Storing content generated by Workers AI or external AI services, ensuring inference results are persistently available for future use.

## Use Cases

### Use Case 1: Secure User Uploads to R2 via Signed URLs

User-generated content typically starts with file uploads, including profile pictures, resumes, rich media, and documents. Applications must securely validate and store these uploads while avoiding latency, high costs, and unnecessary complexity in the backend.

In this architecture, we use **R2** as the primary storage layer and a **Worker** to control upload access. Files are uploaded directly from the user's browser or device to R2 using signed URLs, which are generated by the Worker after validating the user's permissions and upload intent.

This approach avoids routing large files through the application backend or the Worker, reducing latency and operational cost while keeping tight control over access and security.

Because R2 is natively integrated with Cloudflare's global network, files stored in R2 are accessible with low latency from anywhere in the world, and **without any egress fees**, even as your application scales.

![Use Case 1: Secure User Uploads to R2 via Signed URLs](https://developers.cloudflare.com/_astro/uploads-to-r2-via-signed-urls.ko_gZGAm_ZveeVx.svg "Use Case 1: Secure User Uploads to R2 via Signed URLs")

Use Case 1: Secure User Uploads to R2 via Signed URLs

**How it Works**

1. **User initiates upload from the frontend:** The app collects file details (for example, size and name) and calls a backend API (a Cloudflare Worker) to begin the upload process.
2. **Worker authenticates the user and validates the request:** The Worker confirms that the user is logged in, has upload permissions, and that the file is within acceptable limits (for example, a 10 MB maximum and allowed MIME types).
3. **Worker returns a signed PUT URL to R2:** A signed URL allows the frontend to upload directly to R2 for a limited time, under a specific key or namespace. There is no need for the Worker to handle large files directly.
4. **Frontend uploads the file directly to R2:** The file is streamed directly from the client to R2.
5. **(Optional) Trigger post-upload workflows:** R2 offers [event notifications](https://developers.cloudflare.com/r2/buckets/event-notifications/) to send messages to a queue when data in your R2 bucket changes, like a new upload. Example post-processing:  
   * Scan, moderate, or transform the file.  
   * Write metadata (for example, `user_id`, `file_path`, `timestamp`) to [D1](https://developers.cloudflare.com/d1/), Cloudflare's serverless SQL database.  
   * Notify the user or update a dashboard/UI.
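The validation in step 2 can be sketched as a small pure function. The size limit, MIME types, and key layout below are illustrative assumptions, not fixed requirements; the signed URL itself would then be produced with an S3-compatible signing client as described in the presigned URLs documentation.

```typescript
// Illustrative shape of the upload request the frontend sends in step 1.
interface UploadRequest {
  fileName: string;
  contentType: string;
  size: number; // bytes, as declared by the client
}

// Example limits from step 2; adjust to your application's needs.
const MAX_SIZE_BYTES = 10 * 1024 * 1024; // 10 MB
const ALLOWED_TYPES = new Set(["image/png", "image/jpeg", "application/pdf"]);

// Returns null when the request is acceptable, otherwise a rejection reason.
function validateUpload(req: UploadRequest): string | null {
  if (req.size <= 0 || req.size > MAX_SIZE_BYTES) return "file empty or too large";
  if (!ALLOWED_TYPES.has(req.contentType)) return "disallowed MIME type";
  if (req.fileName.includes("/") || req.fileName.includes("..")) {
    return "invalid file name"; // avoid object-key path injection
  }
  return null;
}

// Scope each object to the authenticated user (the "specific key or
// namespace" in step 3). The random suffix avoids key collisions.
function objectKeyFor(userId: string, fileName: string): string {
  const suffix = Math.random().toString(36).slice(2, 10);
  return `uploads/${userId}/${suffix}-${fileName}`;
}
```

Only after `validateUpload` returns `null` does the Worker sign a PUT URL for the key produced by `objectKeyFor`, so the client can never choose its own object key.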

For more information on uploading data directly from the client to R2, refer to the documentation on [presigned URLs](https://developers.cloudflare.com/r2/api/s3/presigned-urls/).
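With the signed URL in hand, step 4 is a single direct PUT from the client. A minimal browser-side sketch, where `uploadUrl` is the signed URL returned by the Worker and `file` is a `File` or `Blob` from an input element:

```typescript
// Upload a file straight from the client to R2 using the signed PUT URL
// returned by the Worker (step 4). The file never touches the backend.
async function uploadToR2(uploadUrl: string, file: Blob): Promise<void> {
  const res = await fetch(uploadUrl, {
    method: "PUT",
    body: file,
    headers: { "Content-Type": file.type || "application/octet-stream" },
  });
  if (!res.ok) {
    throw new Error(`upload failed with status ${res.status}`);
  }
}
```

If the signed URL has expired or the key does not match what was signed, R2 rejects the request, so the client should surface the error and re-request a URL from the Worker.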

### Use Case 2: AI-Generated Content Stored in R2

Many modern applications use AI-generated content, which can include product descriptions, profile pictures, audio clips, and more. When this content is created in response to user actions or scheduled events, it must be stored immediately, reliably, and at scale.

This architecture employs [Workers AI](https://developers.cloudflare.com/workers-ai/) to perform inference at the edge and then stores the generated output directly in Cloudflare R2, all within a single Worker.

![Use Case 2: AI-Generated Content Stored in R2](https://developers.cloudflare.com/_astro/ai-generated-content-in-r2.KciiXeXA_ZveeVx.svg "Use Case 2: AI-Generated Content Stored in R2")

Use Case 2: AI-Generated Content Stored in R2

**How it Works**

1. **User initiates content generation:** The frontend sends a request to a Cloudflare Worker to create content using an AI model (for example, "Create a thumbnail image for this product").
2. **Worker invokes Workers AI:** The Worker passes the user input to a model deployed on Workers AI.
3. **Generated output is returned to the Worker:** The response can be plain text, a Base64-encoded image, a binary buffer, or other structured data, depending on the model type.
4. **Worker uploads the output to R2 directly:** No signed URL or client upload is needed. The Worker performs a secure, authenticated `PUT` request to store the output in a designated bucket.
5. **Worker returns success and metadata to the frontend:** The client receives a reference to the stored file (such as a path, object key, or signed download URL if needed).

Refer to [Use R2 from Workers](https://developers.cloudflare.com/r2/api/workers/workers-api-usage/) for more information on accessing R2 buckets via Cloudflare Workers.
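Assuming the Worker has a Workers AI binding and an R2 bucket binding configured in its Wrangler configuration (the binding names `AI` and `BUCKET` and the model below are illustrative), steps 1 through 5 can be sketched in a single handler:

```typescript
// Assumed bindings from the Wrangler configuration: `AI` (Workers AI) and
// `BUCKET` (an R2 bucket). Both names are illustrative.
interface Env {
  AI: { run(model: string, input: unknown): Promise<ReadableStream | string> };
  BUCKET: { put(key: string, value: ReadableStream | ArrayBuffer | string): Promise<unknown> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Step 1: the frontend posts a prompt describing the content to generate.
    const { prompt } = (await request.json()) as { prompt: string };

    // Steps 2-3: run inference; an image model returns the raw image bytes.
    const output = await env.AI.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", { prompt });

    // Step 4: persist the output straight to R2 through the bucket binding.
    // No signed URL or client upload is involved.
    const key = `generated/${Date.now()}.png`;
    await env.BUCKET.put(key, output);

    // Step 5: return a reference to the stored object to the frontend.
    return Response.json({ key });
  },
};

export default worker;
```

Because the bucket binding is only reachable from inside the Worker, the stored objects are written with the Worker's own credentials; how the client later reads them (public bucket, signed download URL, or another Worker route) is a separate access decision.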

## Summary

By storing **user-generated content in Cloudflare R2**, applications gain:

* A highly scalable storage backend
* Fast access through Cloudflare's global edge network
* Predictable costs with zero egress fees
* Seamless AI + UGC workflows that maximize efficiency

This architecture ensures that content is stored, processed, and delivered **quickly, securely, and cost-effectively**.

## Related Links

* [Cloudflare R2 Product Page](https://developers.cloudflare.com/r2/)
* [R2 Presigned URLs](https://developers.cloudflare.com/r2/api/s3/presigned-urls/)
* [Use R2 from Workers](https://developers.cloudflare.com/r2/api/workers/workers-api-usage/)
* [Migrating Data to R2](https://developers.cloudflare.com/r2/data-migration/)
* [Event notifications for storage reference architecture](https://developers.cloudflare.com/reference-architecture/diagrams/storage/event-notifications-for-storage/)
* [Why choose Cloudflare R2 vs Amazon S3 ↗](https://www.cloudflare.com/pg-cloudflare-r2-vs-aws-s3/)

