---
title: Analytics
description: Cloudflare visualizes the metadata collected by our products in the Cloudflare dashboard. Refer to Types of analytics for more information about the various types of analytics and where they exist in the dashboard.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Analytics

Cloudflare visualizes the metadata collected by our products in the Cloudflare dashboard. Refer to [Types of analytics](https://developers.cloudflare.com/analytics/types-of-analytics/) for more information about the various types of analytics and where they exist in the dashboard.

---

## Features

### Workers Analytics Engine

Send unlimited-cardinality data from your Worker to a time-series database. Query it with SQL.

[ Use Workers Analytics Engine ](https://developers.cloudflare.com/analytics/analytics-engine/) 

### Account and zone analytics

Provides details about the requests and traffic related to your Cloudflare accounts and zones.

[ Use Account and zone analytics ](https://developers.cloudflare.com/analytics/account-and-zone-analytics/) 

### Cloudflare Network Analytics

Provides near real-time visibility into network and transport-layer traffic patterns and DDoS attacks.

[ Use Cloudflare Network Analytics ](https://developers.cloudflare.com/analytics/network-analytics/) 

### GraphQL Analytics API

Provides all of your performance, security, and reliability data from one endpoint. Select exactly what you need, from one metric for a domain to multiple metrics aggregated for your account.

[ Use GraphQL Analytics API ](https://developers.cloudflare.com/analytics/graphql-api/) 

---

## Related products

**[Workers](https://developers.cloudflare.com/workers/)** 

Cloudflare Workers allows developers to build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.

**[Logs](https://developers.cloudflare.com/logs/)** 

Detailed logs containing metadata generated by Cloudflare products, useful for debugging, identifying configuration adjustments, and creating analytics.


---

---
title: Types of analytics
description: Cloudflare Analytics is a comprehensive product that encompasses all metadata generated by the Cloudflare network. You can access these insights through the Cloudflare dashboard. Depending on where in the dashboard you are, it will show you different aspects from the collected metadata.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Types of analytics

Cloudflare Analytics is a comprehensive product that encompasses all metadata generated by the Cloudflare network. You can access these insights through the Cloudflare dashboard; different areas of the dashboard surface different aspects of the collected metadata.

## Account-level analytics

### Account Analytics (beta)

Account Analytics (beta) shows an [overview of traffic for all domains](https://developers.cloudflare.com/analytics/account-and-zone-analytics/account-analytics/) under your Cloudflare account, including requests, bandwidth by country, and information related to security, cache, and errors. To access Account Analytics, [log in to the Cloudflare dashboard ↗](https://dash.cloudflare.com/login), select the appropriate account, and go to **Analytics & Logs** > **Account Analytics**.

### Network Analytics

Network Analytics provides [visibility into network and transport-layer traffic patterns, and DDoS attacks](https://developers.cloudflare.com/analytics/network-analytics/).

The Network Analytics dashboard is only available for customers with an Enterprise domain plan who use [Spectrum](https://developers.cloudflare.com/spectrum/), [Magic Transit](https://developers.cloudflare.com/magic-transit/), or [Bring Your Own IP (BYOIP)](https://developers.cloudflare.com/byoip/).

### Web Analytics

Web Analytics (formerly known as Browser Insights) [provides free, privacy-first analytics for your website](https://developers.cloudflare.com/web-analytics/). Web Analytics does not collect your visitors' personal data, and it gives you a detailed view of web page performance as experienced by your visitors.

### Carbon Impact Report

Carbon Impact Report gives you a [report on carbon savings ↗](https://blog.cloudflare.com/understand-and-reduce-your-carbon-impact-with-cloudflare/) from using Cloudflare services versus Internet averages for your usage volume.

Cloudflare is committed to using 100% renewable energy sources, and also to [removing all greenhouse gases emitted ↗](https://blog.cloudflare.com/cloudflare-committed-to-building-a-greener-internet/) as a result of powering our network since 2010.

## Analytics related to specific properties

Access aggregated traffic, security, and performance metrics for each domain proxied through Cloudflare. To access these analytics, [log in to the Cloudflare dashboard ↗](https://dash.cloudflare.com/login), select your account and domain, and go to the **Analytics & Logs** section.

Data available under the **Analytics & Logs** section includes:

* **HTTP Traffic** \- Requests, Data transfer, Page views, Visits, and API requests.
* **Security** \- Total Threats, Top Crawlers/Bots, Rate Limiting, Total Threats Stopped.
* **Performance** \- Origin Performance, Bandwidth Saved.
* **Edge Reachability** \- [Last mile insights](https://developers.cloudflare.com/network-error-logging/) for Enterprise customers.
* **Workers** \- [Detailed information](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/) related to your Workers per zone, and Workers KV per account.
* **Logs** \- [Detailed logs](https://developers.cloudflare.com/logs/) of the metadata generated by Cloudflare products for Enterprise customers.
* **Instant logs** \- [Live stream of traffic](https://developers.cloudflare.com/logs/instant-logs/) for your domain. Enterprise customers can access this live stream from the Cloudflare dashboard or from a command-line interface (CLI).

## Product analytics

Beyond the analytics provided for your properties, you can also access analytics related to specific products:

* [Bot Analytics](https://developers.cloudflare.com/bots/bot-analytics/) \- Shows which requests are associated with known bots, likely automated traffic, likely human traffic, and more.
* [Cache Analytics](https://developers.cloudflare.com/cache/performance-review/cache-analytics/) \- Insights that help determine if resources are missing from cache, expired, or ineligible for caching.
* [DNS Analytics](https://developers.cloudflare.com/dns/additional-options/analytics/) \- Provides insights about DNS queries to your zone.
* [Load Balancing Analytics](https://developers.cloudflare.com/load-balancing/reference/load-balancing-analytics/) \- Provides metrics that offer insight into load balancer traffic steering decisions.
* [Security Events](https://developers.cloudflare.com/waf/analytics/security-events/) \- Highlights attack and mitigation metrics detected by the Cloudflare WAF and HTTP DDoS protection systems.
* [Security Analytics](https://developers.cloudflare.com/waf/analytics/security-analytics/) \- Displays information about all incoming HTTP requests, including those not affected by security measures (for example, from the WAF and DDoS protection systems).

## GraphQL APIs

If you would like more control over how you visualize the analytics and log information available in the Cloudflare dashboard, use the [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/) to build customized views. This API replaces and expands on the previous Zone Analytics API.
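As a minimal sketch of how a customized view might start, the snippet below builds a request body for the GraphQL Analytics API. The endpoint is the standard Cloudflare API GraphQL endpoint; the dataset and field names (`httpRequests1dGroups`, `sum { requests bytes }`) and the example zone tag are illustrative assumptions, so check the GraphQL Analytics API schema for the exact fields available to your account.

```python
import json

# Standard Cloudflare API GraphQL endpoint; you would POST `body` here with an
# Authorization header. Dataset and field names below are illustrative.
API_URL = "https://api.cloudflare.com/client/v4/graphql"

QUERY = """
query RequestsLastDay($zoneTag: String!, $date: String!) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      httpRequests1dGroups(limit: 1, filter: { date: $date }) {
        sum { requests bytes }
      }
    }
  }
}
"""

def build_request(zone_tag: str, date: str) -> dict:
    """Return the JSON body to POST to the GraphQL Analytics endpoint."""
    return {"query": QUERY, "variables": {"zoneTag": zone_tag, "date": date}}

# Hypothetical zone tag, for illustration only.
body = build_request("023e105f4ecef8ad9ca31a8372d0c353", "2024-01-01")
print(json.dumps(body["variables"]))
```

From here you would send `body` with an HTTP client of your choice and render the response however your dashboard or reporting tool requires.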


---

---
title: Understanding sampling in Cloudflare Analytics
description: Sampling is a technique used in analytics to analyze a subset of data rather than processing every individual data point. In Cloudflare Analytics, sampling ensures efficient performance and scalability while maintaining high accuracy and reliability. This document provides a comprehensive overview of how sampling works, why it is used, and its impact on analytics across different Cloudflare tools.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Understanding sampling in Cloudflare Analytics

[Sampling ↗](https://en.wikipedia.org/wiki/Sampling%5F%28statistics%29) is a technique used in analytics to analyze a subset of data rather than processing every individual data point. In Cloudflare Analytics, sampling ensures efficient performance and scalability while maintaining high accuracy and reliability. This document provides a comprehensive overview of how sampling works, why it is used, and its impact on analytics across different Cloudflare tools.

## How sampling works

We use a sampling method called [Adaptive Bit Rate (ABR) ↗](https://blog.cloudflare.com/explaining-cloudflares-abr-analytics/) to ensure that queries complete quickly, even when working with large datasets. ABR dynamically adjusts the level of detail in the data retrieved based on query complexity and duration. This approach ensures fairness by preventing large or complex queries from consuming a disproportionate amount of computing resources, which could otherwise slow down or block smaller queries. By distributing resources more equitably, ABR allows the system to maintain consistent performance for all users, regardless of the dataset size.

To make this possible, data is stored at multiple resolutions (100%, 10%, 1%), each representing different sampling percentages. When a query is run, ABR selects the best resolution based on the query's complexity and number of rows to retrieve. By dynamically adjusting the data resolution, ABR optimizes performance and prevents delays. This sets it apart from systems that struggle with timeouts, errors, or high costs when dealing with large datasets.
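The resolution-selection idea can be illustrated with a small sketch. This is not Cloudflare's implementation; the row budget and the rule for choosing among the 100%/10%/1% tables are assumptions made for illustration.

```python
# Illustrative sketch (not Cloudflare's implementation): pick the highest
# stored resolution whose estimated scan size stays within a per-query budget.
RESOLUTIONS = [1.0, 0.10, 0.01]  # fractions of events stored: 100%, 10%, 1%

def pick_resolution(estimated_rows: int, row_budget: int = 1_000_000) -> float:
    """Return the largest sampling fraction that keeps the scan affordable."""
    for fraction in RESOLUTIONS:
        if estimated_rows * fraction <= row_budget:
            return fraction
    return RESOLUTIONS[-1]  # fall back to the coarsest table

print(pick_resolution(500_000))      # small query: full-resolution table
print(pick_resolution(50_000_000))   # large query: the 1% table
```

The effect is the one described above: small queries read full-resolution data, while very large queries transparently fall back to a coarser sample so they still complete quickly.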

## Why sampling is applied

Cloudflare's data pipeline handles [over 700 million events per second ↗](https://blog.cloudflare.com/how-we-make-sense-of-too-much-data) (and growing) across its global network. Processing and storing all of this data in real time would be prohibitively expensive and time-consuming. By leveraging carefully designed sampling methods, Cloudflare Analytics delivers accurate and actionable data, balancing precision with performance.

Sampling enables:

* **Scalability**: Reduces the volume of data processed without compromising insights.
* **Performance**: Speeds up query execution for analytics.
* **Cost-Efficiency**: Minimizes resource usage and storage needs.

## Can I trust sampled data?

Sampled data is highly reliable and can provide insights as dependable as those derived from full datasets. Cloudflare designs its sampling techniques to capture the essential characteristics of the entire dataset, delivering results you can trust.

Sampling is an approach similarly used in other domains, for instance:

* Google Maps: Just as online maps display lower-resolution images when zoomed out and higher-resolution images when zoomed in — keeping the total number of pixels relatively constant — Cloudflare Analytics dynamically adjusts sampling rates to efficiently provide insights, ensuring queries return consistent and accurate results regardless of dataset size.
* Opinion Polls: Similar to how pollsters sample a subset of the population to predict election outcomes, Cloudflare samples a portion of your data to provide accurate, system-wide insights.
* Movie Frames: Watching a movie at 30 frames per second (fps) instead of 60 fps does not change the overall experience, much like how analyzing fewer data points still reveals the same patterns and trends in your analytics dataset.

We acknowledge that it can be challenging to verify the exact resolution of ABR query results at this time. As a general rule, however, you can check the number of rows read: a higher number of rows read means higher-resolution results. For example, results based on thousands of rows are highly likely to be representative, while those based on just a few rows may be less reliable.

In the near future, we plan to expose confidence intervals along with query results, so you can see precisely how accurate your results are.

## Additional considerations

**When sampling occurs**

* Sampling is typically applied to very high-traffic datasets where full data analysis would be impractical.
* For smaller datasets, full data analysis is often performed without sampling.

**Sampling rates**

* Sampling rates vary depending on the dataset and product.
* Cloudflare ensures that sampling rates are consistent within a single dataset to maintain accuracy across queries.

**Impact on metrics**

* While sampling reduces the volume of processed data, aggregated metrics like totals, averages, and percentiles are extrapolated based on the sample size. This ensures the reported metrics represent the entire dataset accurately.
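The extrapolation described above can be sketched in a few lines. The record layout is hypothetical: each sampled record is assumed to carry the sample interval it was stored at (for example, `10` meaning 1-in-10), which is how weighting back to population totals typically works.

```python
# Minimal sketch of extrapolating totals from sampled events. Each sampled
# record carries its sample interval, so a population total is estimated by
# weighting each record by that interval. Field names are illustrative.
events = [
    {"bytes": 1_200, "sample_interval": 10},
    {"bytes": 800, "sample_interval": 10},
    {"bytes": 2_000, "sample_interval": 100},
]

# Each 1-in-N record stands in for N real events.
estimated_requests = sum(e["sample_interval"] for e in events)
estimated_bytes = sum(e["bytes"] * e["sample_interval"] for e in events)

print(estimated_requests)  # 3 sampled records represent an estimated 120 requests
print(estimated_bytes)     # estimated total bytes across the represented events
```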

**Limitations**

* Sampling may not capture extremely rare events with very low occurrence rates.

**Sampling in analytics interfaces**

* GraphQL API: Sampling metadata is included in the query response. For more information, refer to the sampling [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/sampling/) documentation.
* Workers Analytics Engine: For more information, refer to the [Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/sampling/) documentation.
* Dashboard Analytics: Displays an icon with the sampled percentage of data, if sampled data was used for the visualization.


---

---
title: Network analytics
description: Cloudflare Network Analytics (version 2) provides near real-time visibility into network and transport-layer traffic patterns and DDoS attacks. Network Analytics visualizes packet and bit-level data, the same data available via the Network Analytics dataset of the GraphQL Analytics API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Network analytics

Cloudflare Network Analytics (version 2) provides near real-time visibility into network and transport-layer traffic patterns and DDoS attacks. Network Analytics visualizes packet and bit-level data, the same data available via the Network Analytics dataset of the GraphQL Analytics API.

**Requirements**

Network Analytics requires the following:

* A Cloudflare Enterprise plan.
* Cloudflare Magic Transit, Spectrum, or Cloudflare WAN (formerly Magic WAN).

For a technical deep-dive into Network Analytics, refer to our [blog post ↗](https://blog.cloudflare.com/building-network-analytics-v2/).

## Remarks

* The Network Analytics logs refer to IP traffic of Magic Transit customer prefixes/leased IP addresses or Spectrum applications. These logs are not directly associated with the [zones](https://developers.cloudflare.com/fundamentals/concepts/accounts-and-zones/#zones) in your Cloudflare account.
* The data retention for Network Analytics is 16 weeks. Additionally, data older than eight weeks might have lower resolution when using narrow time frames.

## Related resources

* [Cloudflare GraphQL API](https://developers.cloudflare.com/analytics/graphql-api/)
* [Cloudflare Logpush](https://developers.cloudflare.com/logs/logpush/)
* [Migrating from Network Analytics v1 to Network Analytics v2](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/)


---

---
title: Adjust the displayed data
description: To perform a broad analysis of layer 3/4 traffic and DDoS attacks, use the All traffic tab.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Adjust the displayed data

## Select the appropriate tab

To perform a broad analysis of layer 3/4 traffic and DDoS attacks, use the **All traffic** tab.

To focus on a specific mitigation system, select one of the [other available tabs](https://developers.cloudflare.com/analytics/network-analytics/understand/main-dashboard/#available-tabs). The tabs displayed in the dashboard depend on your Cloudflare services.

## Select high-level metric

To toggle your view of the data, select the **Total packets** or **Total bytes** side panels.

![Network Analytics side panels allowing you to use packets or bits/bytes as the base unit for the dashboard.](https://developers.cloudflare.com/_astro/high-level-metrics.DFUDKbKH_1CcwDD.webp) 

_Note: Labels in this image may reflect a previous product name._

The selected metric determines the base units (packets or bits/bytes) used across the dashboard's analytics panels.

## Select a dimension

Under **Packets summary** or **Bits summary**, select one of the [available dimensions](https://developers.cloudflare.com/analytics/network-analytics/understand/main-dashboard/#available-dimensions) to view the data along that dimension. The default dimension is **Action**.

## Apply filters

You can apply multiple filters and exclusions to adjust the scope of the data displayed in Network Analytics. Filters affect all the data displayed in the dashboard.

There are two ways to filter Network Analytics data: select **Add filter** or select one of the stat filters.

### Select Add filter

Select **Add filter** to open the **New filter** popover. Specify a field, an operator, and a value to complete your filter expression. Select **Apply** to update the view.

**Notes about filtering**

When applying filters, observe these guidelines:

* Wildcards are not supported.
* You do not need to wrap values in quotes.
* When specifying an ASN, leave out the `AS` prefix. For example, enter `1423` instead of `AS1423`.

### Select a stat filter

To filter based on the type of data associated with one of the Network Analytics stats, use the **Filter** and **Exclude** buttons that display when you hover over the stat.

## Create a Network Firewall rule from the applied filters

**Note**

This feature is only available to Magic Transit and Cloudflare WAN (formerly Magic WAN) users.

Select **Create Network Firewall rule** to create a [Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) rule that will block all traffic matching the selected filters in Network Analytics.

Note that some filters will not be added to the new Network Firewall rule definition. However, you can further configure the rule in Network Firewall.

## Show IP prefix events

Enable the **Show annotations** toggle to show or hide annotations for advertised/withdrawn IP prefix events in the **Network Analytics** view. Select each annotation to get more details.

![Network Analytics chart displaying IP prefix-related annotations.](https://developers.cloudflare.com/_astro/view-annotations.D18njKAr_Z4472P.webp) 

## View logged or monitored traffic

[Network DDoS managed rules](https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/) and [Advanced DDoS Protection systems](https://developers.cloudflare.com/ddos-protection/advanced-ddos-systems/overview/) provide a `log` or `monitoring` mode that does not drop traffic. You can identify these `log` and `monitoring` mode events using the **Verdict** and **Outcome**/**Action** fields.

To filter for these traffic events:

1. In the Cloudflare dashboard, go to the **Network Analytics** page.  
[ Go to **Network analytics** ](https://dash.cloudflare.com/?to=/:account/networking-insights/analytics/network-analytics/transport-analytics)
2. Go to the **DDoS managed rules** tab.
3. Select **Add filter**.  
   * Set `Verdict equals drop`.  
   * Set `Action equals pass`.
4. Select **Apply**.

By setting `verdict` to `drop` and `outcome` to `pass`, we filter for traffic that was flagged as a detection (verdict `drop`) but was not actually dropped (outcome `pass`).
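The same verdict/outcome logic can be sketched against exported sample-log records. The field names and values here are illustrative assumptions; the exported log schema may differ.

```python
# Sketch: find traffic that a rule in log/monitoring mode flagged (verdict
# "drop") but did not mitigate (outcome "pass"). Field names are illustrative.
records = [
    {"verdict": "drop", "outcome": "pass"},  # detected, but only logged
    {"verdict": "drop", "outcome": "drop"},  # detected and mitigated
    {"verdict": "pass", "outcome": "pass"},  # clean traffic
]

logged_only = [
    r for r in records
    if r["verdict"] == "drop" and r["outcome"] == "pass"
]
print(len(logged_only))  # number of detections that were logged but not dropped
```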


---

---
title: Share and export data
description: When you add filters and specify a time range in Network Analytics, the URL changes to reflect those parameters.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Share and export data

## Share Network Analytics filters

When you add filters and specify a time range in Network Analytics, the URL changes to reflect those parameters.

To share your view of the data, copy the URL and send it to other users so that they can work with the same view.

## Export sample log data

You can export up to 100 raw events from the **Packet sample log** at a time. This option is useful when you need to combine and analyze Cloudflare data with data stored in a separate system or database, such as a SIEM system.

To export log data:

1. Select **Export**.
2. Choose CSV or JSON as the format for the exported data. The downloaded file name reflects the selected time range, using this pattern:

```
network-analytics-attacks-<START_TIME>-<END_TIME>.json
```
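If you automate handling of these downloads, a helper that reproduces the pattern can be handy. This is an illustrative sketch: the exact timestamp format the dashboard uses in `<START_TIME>` and `<END_TIME>` is an assumption here.

```python
from datetime import datetime, timezone

def export_filename(start: datetime, end: datetime, fmt: str = "json") -> str:
    """Build a filename matching the documented pattern.

    The timestamp rendering (ISO-like, with '-' in place of ':') is an
    assumption for illustration; the dashboard may format times differently.
    """
    stamp = lambda t: t.strftime("%Y-%m-%dT%H-%M-%S")
    return f"network-analytics-attacks-{stamp(start)}-{stamp(end)}.{fmt}"

name = export_filename(
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 1, 2, tzinfo=timezone.utc),
)
print(name)
```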

## Export a Network Analytics report

To print or download a snapshot report from Network Analytics, select **Print report**. Your web browser's print interface displays options for printing or saving as a PDF.


---

---
title: Adjust the time range
description: Use the timeframe drop-down list to change the time range over which Network Analytics displays data. When you select a timeframe, the entire view is updated to reflect your choice.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Adjust the time range

## Using the timeframe drop-down list

Use the timeframe drop-down list to change the time range over which Network Analytics displays data. When you select a timeframe, the entire view is updated to reflect your choice.

In the Network Analytics dashboard, the range of historical data you can query is 112 days.

When you select _Previous 30 minutes_, the **Network Analytics** card will show the data from the last 30 minutes, refreshing every 20 seconds. A _Live_ notification appears next to the statistic drop-down list to let you know that the view keeps updating automatically:

![Timeframe drop-down with Previous 30 minutes selected.](https://developers.cloudflare.com/_astro/timeframe-selector.CKN2F0gt_1pRaib.webp) 

## Zooming in the chart

To zoom in on a specific period, select and drag to define a region in the **Packets summary** (or **Bits summary**) chart. To zoom out, select **X** in the time range selector.

![User zooming in a given period in the Network Analytics traffic chart.](https://developers.cloudflare.com/images/analytics/network-analytics/chart-zoom-in.gif) 

The effective resolution increases when you zoom in and decreases when you zoom out, due to [Adaptive Bit Rate](https://developers.cloudflare.com/analytics/network-analytics/understand/concepts/#adaptive-bit-rate-sampling) sampling. This means that a large packet burst lasting a few seconds may look less impactful when you analyze a chart displaying data for 24 hours or more.


---

---
title: Get started
description: Learn how to view and use data from Network Analytics.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Get started

**Requirements**

Network Analytics requires the following:

* A Cloudflare Enterprise plan.
* Cloudflare Magic Transit, Spectrum, or Cloudflare WAN (formerly Magic WAN).

## View the Network Analytics dashboard

1. In the Cloudflare dashboard, go to the **Network Analytics** page.  
[ Go to **Network analytics** ](https://dash.cloudflare.com/?to=/:account/networking-insights/analytics/network-analytics/transport-analytics)
2. Select an account that has access to Magic Transit or Spectrum.
3. Configure the displayed data. You can [adjust the time range](https://developers.cloudflare.com/analytics/network-analytics/configure/time-range/), [select the main metric](https://developers.cloudflare.com/analytics/network-analytics/configure/displayed-data/#select-high-level-metric) (total packets or total bytes), [apply filters](https://developers.cloudflare.com/analytics/network-analytics/configure/displayed-data/#apply-filters), and more.

## Get Network Analytics data via API

Use the [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/) to query data using the available [Network Analytics nodes](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/node-reference/).

## Send Network Analytics logs to a third-party service

[Create a Logpush job](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/) that sends Network Analytics logs to your storage service, SIEM solution, or log management provider.

## Limitations

Users with the `Analytics` role can see IDs but will not see the following on the Network Analytics dashboard:

* Tunnel names
* Prefix names
* [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) rules
* [DDoS managed rulesets](https://developers.cloudflare.com/ddos-protection/managed-rulesets/)
* Override names


---

---
title: Data collection
description: For the purposes of mitigating DDoS attacks and providing traffic visibility through the Network Analytics dashboard, Cloudflare collects data from different protocols such as IP, IPv6, TCP, UDP, ICMP, GRE, and DNS.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Data collection

For the purposes of mitigating DDoS attacks and providing traffic visibility through the Network Analytics dashboard, Cloudflare collects data from different protocols such as IP, IPv6, TCP, UDP, ICMP, GRE, and DNS.


---

---
title: Concepts
description: With Adaptive Bit Rate (ABR) sampling, every analytics query that supports ABR will be calculated at a resolution matching the query. Depending on the size of your query, the ABR mechanism will choose the best sampling rate and fetch a response from one of the sample tables encapsulated behind each Network Analytics node. The cardinality and accuracy are preserved even for historical data.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Concepts

## Adaptive Bit Rate sampling

With Adaptive Bit Rate (ABR) sampling, every analytics query that supports ABR will be calculated at a resolution matching the query. Depending on the size of your query, the ABR mechanism will choose the best sampling rate and fetch a response from one of the sample tables encapsulated behind each [Network Analytics node](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/node-reference/). The cardinality and accuracy are preserved even for historical data.

For more background information on Adaptive Bit Rate sampling, refer to the [Explaining Cloudflare's ABR Analytics ↗](https://blog.cloudflare.com/explaining-cloudflares-abr-analytics/) blog post.

## Edge Sample Enrichment

Network Analytics provides accurate data thanks to its sampling rates and to Edge Sample Enrichment.

Sample rates vary depending on the mitigation service. For example:

* The sample rate for `dosd` changes dynamically from 1/100 to 1/10,000 packets based on the volume of packets.
* The sample rate for Network Firewall events changes dynamically from 1/100 to 1/1,000,000 packets based on the number of packets.
* The sample rate for `flowtrackd` is 1/10,000 packets.
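As a worked example of what these rates imply, a sampled count can be extrapolated back to an estimated raw total by multiplying by the sample interval. The helper below is illustrative only, not part of any Cloudflare tooling:

```python
# Illustrative: a 1/N sample rate means each sampled packet represents
# roughly N real packets, so a raw total can be estimated by multiplying.
def estimate_total_packets(sampled_count: int, sample_interval: int) -> int:
    """Extrapolate an estimated packet total from a 1/sample_interval sample."""
    return sampled_count * sample_interval

# Example: 42 samples observed at flowtrackd's fixed 1/10,000 rate.
print(estimate_total_packets(42, 10_000))  # 420000
```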

Network Analytics uses a data logging pipeline that relies on Edge Sample Enrichment. By delegating packet sample enrichment and cross-referencing to the global data centers, the data pipeline's resilience and tolerance against congestion are improved. With this method, enriched packet samples are stored in Cloudflare's core data centers as soon as they arrive.


---

---
title: Main dashboard
description: The following sections are a guide on the different sections of the main Network Analytics dashboard.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Main dashboard

The following sections describe the different areas of the main Network Analytics dashboard.

## Available tabs

The **All traffic** tab displays global information about layer 3/4 traffic, DNS traffic, and DDoS attacks. The dashboard has additional tabs with specific information (and specific filters) for different mitigation systems.

The following table contains a summary of what is shown in each tab:

| Tab name                        | For Magic Transit users                                                                                                                                                                                                                                                | For Spectrum users                                                                                                       |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| **All traffic**                 | Traffic dropped by DDoS managed rules, Advanced TCP Protection, Advanced DNS Protection, and Cloudflare Network Firewall, and traffic passed to the origin server.                                                                                                     | Traffic dropped and passed by DDoS managed rules.                                                                        |
| **DDoS managed rules**          | Traffic dropped and passed by [DDoS managed rules](https://developers.cloudflare.com/ddos-protection/managed-rulesets/).                                                                                                                                               | Traffic dropped and passed by [DDoS managed rules](https://developers.cloudflare.com/ddos-protection/managed-rulesets/). |
| **TCP Protection**              | Traffic dropped and passed by the [Advanced TCP Protection](https://developers.cloudflare.com/ddos-protection/advanced-ddos-systems/overview/advanced-tcp-protection/) system. Does not include traffic dropped by DDoS managed rules.                                 | N/A                                                                                                                      |
| **DNS Protection**              | Traffic dropped and passed by the [Advanced DNS Protection](https://developers.cloudflare.com/ddos-protection/advanced-ddos-systems/overview/advanced-dns-protection/) system. Does not include traffic dropped by DDoS managed rules.                                 | N/A                                                                                                                      |
| **Cloudflare Network Firewall** | Traffic dropped by [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) and traffic passed to the origin server. Does not include traffic dropped by DDoS managed rules, Advanced TCP Protection, or Advanced DNS Protection. | N/A                                                                                                                      |

Use these tabs to better understand the decisions made by each mitigation system, and which rules are being applied to mitigate attacks.

Note

Network Analytics will not show other traffic, such as:

* Traffic dropped by Spectrum
* Traffic dropped by the WAF/CDN service
* Traffic served from cache or from Workers

## High-level metrics

The side panels in the Network Analytics page provide a summary of activity over the period selected in the time frame drop-down list.

![Available high-level metrics in the Network Analytics dashboard](https://developers.cloudflare.com/_astro/high-level-metrics.DFUDKbKH_1CcwDD.webp) 

_Note: Labels in this image may reflect a previous product name._

Selecting one of the metrics in the sidebar will define the base unit (packets or bits/bytes) for the data displayed in the dashboard.

## Executive summary

![Executive summary card in the Network Analytics dashboard.](https://developers.cloudflare.com/_astro/executive-summary-card.Bueo7FPl_Xhlas.webp) 

The executive summary provides top insights and trends about DDoS attacks targeting your network, including the number of attacks, the percentage of attack traffic mitigated relative to your total traffic, the largest attack rates, the total mitigated attack bytes, the top attack source, and the estimated duration of the attacks.

These insights are adaptive based on the selected time frame and the **Packets** or **Bytes** [metrics](#high-level-metrics) selector. The insights are also accompanied by the trends relative to the selected time period, visualized as period-over-period change in percentage and indicator arrows.

The executive summary also features a one-liner summary at the top, informing you about recent and ongoing attacks.

### Total attacks

The total number of attacks is based on unique attack IDs of mitigations issued by the [Network-layer DDoS Attack Protection managed ruleset](https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/).

Since the mitigation system may generate several mitigation rules (and therefore several attack IDs) for a single attack, the reported number of attacks may be higher than the actual number of attacks in some cases.

To obtain the metadata of recently mitigated DDoS attacks, query the [dosdAttackAnalyticsGroups](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/node-reference/#dosdattackanalyticsgroups) GraphQL node.

Note about attack rates

Attack rates in the executive summary may seem lower than the ones displayed in the time series graph because they are calculated from the maximum rate of unique attack events, and only from events mitigated by the Network-layer DDoS Attack Protection managed ruleset. In practice, multiple mitigation rules and mitigation systems can contribute to blocking a single attack, resulting in a higher rate than the one displayed.

Additionally, attack rates may change based on the sampling and adaptive bit rate (ABR) as you zoom in and out in the time series graph. Refer to [Concepts](https://developers.cloudflare.com/analytics/network-analytics/understand/concepts/) for more information.

## Filters

In the main dashboard card you can apply filters to the displayed data.

You can filter by the following parameters:

* Mitigation action taken by Cloudflare
* Mitigation system that performed the action
* Source IP, port, ASN, tunnel
* [Direction](#traffic-direction)
* Destination IP, port, IP range (description or CIDR of provisioned prefixes), tunnel
* Source Cloudflare data center and data center country of where the traffic was observed
* Packet size
* TCP flag
* TTL

Note that the IP range filter currently only supports /24 IPv4 ranges and /64 IPv6 ranges.

Dashboard tabs for [specific mitigation systems](https://developers.cloudflare.com/analytics/network-analytics/understand/main-dashboard/#available-tabs) (DDoS managed rules, Advanced TCP Protection, or Cloudflare Network Firewall) may have additional filter parameters.

### Traffic direction

The available values in the **Direction** filter have the following meaning, from the point of view of a specific customer's network:

* **Ingress**: Incoming traffic from the public Internet (ingress) to the customer's network via Cloudflare's network (for example, through [Magic Transit](https://developers.cloudflare.com/magic-transit/));
* **Egress**: Outgoing traffic leaving the customer's network through Cloudflare's network to the public Internet (for example, through [Magic Transit deployed with the egress option](https://developers.cloudflare.com/magic-transit/reference/egress/));
* **Lateral**: Traffic that stayed within the customer's network, routed through Cloudflare's network (for example, traffic between customer office branches or data centers routed through [Cloudflare WAN](https://developers.cloudflare.com/cloudflare-wan/)).

## Packets summary or Bits summary

Displays a plot of the traffic (in terms of bits or packets) in the selected time range according to the values of a given dimension. By default, Network Analytics displays data broken down by **Action**.

### Available dimensions

You can choose one of the following dimensions:

* Action
* Destination IP
* Destination IP range
* Destination port
* Destination tunnels
* Mitigation system
* Source ASN
* Data center country
* Source data center
* Source IP
* Source port
* Source tunnels
* Packet size
* Protocol
* TCP flag

Dashboard tabs for [specific mitigation systems](https://developers.cloudflare.com/analytics/network-analytics/understand/main-dashboard/#available-tabs) (DDoS managed rules, Advanced TCP Protection, or Cloudflare Network Firewall) may have additional dimensions.

## Mitigation system distribution

The **Mitigation System Distribution** card displays the amount of traffic (in terms of packets or bits) that was mitigated by each mitigation system.

## Packet sample log

The Network Analytics **Packet sample log** shows up to 100 log events — including both allowed and dropped packets — in the currently selected time range, paginated with 10 results per page per time range view (the [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/) does not have this limitation).

Expand each row to display event details, including the full packet headers and metadata.

Dashboard tabs for [specific mitigation systems](https://developers.cloudflare.com/analytics/network-analytics/understand/main-dashboard/#available-tabs) (DDoS managed rules, Advanced TCP Protection, or Cloudflare Network Firewall) may have additional fields in the expanded event details.

## Data center country/Source data center

Displays the top source [Cloudflare data centers ↗](https://www.cloudflare.com/en-gb/network/) where the displayed traffic was ingested. The same card can also display the country associated with these top source data centers.

To switch between **Data center country** and **Source data center** information, use the dropdown in the card.

## Top insights

The different panels in **Top insights** display the top items in each dimension. To filter by a given value or exclude a value from displayed data, hover the value stats and select **Filter** or **Exclude**.

To set the number of items to display for each dimension, open the drop-down list associated with the view and select the desired number of items.

## TCP flag

The **TCP Flag** panel displays the TCP flags set for all the traffic currently displayed in the dashboard, including both allowed and mitigated traffic.


---

---
title: GraphQL Analytics API
description: The GraphQL Analytics API provides data regarding HTTP requests passing through Cloudflare's network, as well as data from specific products, such as Firewall or Load Balancing. Network Analytics users also have access to packet-level data. Use the GraphQL Analytics API to select specific datasets and metrics of interest, filter and aggregate the data along various dimensions, and integrate the results with other applications.
image: https://developers.cloudflare.com/core-services-preview.png
---


# GraphQL Analytics API

The GraphQL Analytics API provides data regarding HTTP requests passing through Cloudflare's network, as well as data from specific products, such as Firewall or Load Balancing. Network Analytics users also have access to packet-level data. Use the GraphQL Analytics API to select specific datasets and metrics of interest, filter and aggregate the data along various dimensions, and integrate the results with other applications.

The basis of the API is the [GraphQL framework ↗](https://graphql.org/), created and open-sourced by Facebook. There is an active developer community for GraphQL and powerful clients for running queries, which makes it easy to get started. GraphQL is especially useful for building visualizations and powers the analytics in the Cloudflare dashboard.

GraphQL models a business domain as a graph using a schema. In the schema, there are logical definitions for different types of nodes and their connections (edges). These nodes are the datasets you use for your analytics. You write queries in GraphQL much like in SQL: you specify the dataset (table), the metrics to retrieve (such as requests and bytes), and filter or group by dimensions (for example, a time period).

GraphQL differs from a traditional REST API in that it exposes a single endpoint:

```
https://api.cloudflare.com/client/v4/graphql
```

You pass the query parameters as a JSON object in the payload of a `POST` request to this endpoint.

You can use `curl` to make requests to the GraphQL Analytics API. Alternatively, you can use a GraphQL client to construct queries and pass requests to the GraphQL Analytics API.
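As an illustration of the `POST` payload shape, the sketch below builds such a request with Python's standard library. The `build_graphql_request` helper and the placeholder token are assumptions for this example, not part of an official client:

```python
import json
import urllib.request

API_ENDPOINT = "https://api.cloudflare.com/client/v4/graphql"

def build_graphql_request(query: str, variables: dict, api_token: str) -> urllib.request.Request:
    """Build (but do not send) a POST request carrying a GraphQL query.

    The query and its variables travel as a JSON object in the request body;
    the API token goes in the Authorization header.
    """
    payload = json.dumps({"query": query, "variables": variables}).encode()
    return urllib.request.Request(
        API_ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_graphql_request("{ viewer { zones { zoneTag } } }", {}, "YOUR_API_TOKEN")
# To send it: urllib.request.urlopen(req) returns the JSON response body.
```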

## Clients

We are using [GraphiQL ↗](https://github.com/skevy/graphiql-app) for our example GraphQL queries. There are many other popular open-source clients that you can find online, such as [Altair ↗](https://altairgraphql.dev) and [Insomnia ↗](https://insomnia.rest).

## Limitations

The purpose of the GraphQL API is to provide aggregated analytics about various Cloudflare products. These datasets should not be used to measure the usage that Cloudflare bills for. Billable traffic [excludes things like DDoS traffic ↗](https://blog.cloudflare.com/unmetered-mitigation), while GraphQL measures overall consumption and therefore includes all measurable traffic.


---

---
title: Error responses
description: The GraphQL Analytics API is a RESTful API based on HTTPS requests and JSON responses, and will return familiar HTTP status codes (for example, 404, 500, 504). However, in contrast to the common REST approach, a 200 response can contain an error, conforming to the GraphQL specification.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Error responses

The GraphQL Analytics API is a RESTful API based on HTTPS requests and JSON responses, and will return familiar HTTP status codes (for example, `404`, `500`, `504`). However, in contrast to the common REST approach, a `200` response can contain an error, conforming to the [GraphQL specification ↗](https://graphql.github.io/graphql-spec/June2018/#sec-Errors).

All responses contain an `errors` array, which will be `null` if there are no errors, and include at least one error object if there was an error. Non-null error objects will contain the following fields:

* `message`: a string describing the error.
* `path`: the nodes associated with the error, starting from the root. Note that the number included in the path array, for example, `0` or `1`, specifies to which zone the error applies; `0` indicates the first zone in the list (or only zone, if only one is being queried).
* `timestamp`: UTC datetime when the error occurred.

## Example

```json
{
  "data": null,
  "errors": [
    {
      "message": "cannot request data older than 2678400s",
      "path": ["viewer", "zones", "0", "firewallEventsAdaptiveGroups"],
      "extensions": {
        "timestamp": "2019-12-09T21:27:19.195060142Z"
      }
    }
  ]
}
```
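A response like the example above can be checked programmatically, since GraphQL errors arrive with HTTP `200`. A minimal sketch (the `collect_errors` helper is hypothetical, not part of any Cloudflare library):

```python
import json

def collect_errors(response_body: str) -> list[str]:
    """Return a readable message for each error object, noting which zone
    (by its position in the queried list) the error applies to."""
    body = json.loads(response_body)
    messages = []
    for err in body.get("errors") or []:  # "errors" is null when the query succeeds
        path = err.get("path", [])
        # Numeric path segments index into the list of queried zones.
        zone_idx = next((p for p in path if str(p).isdigit()), None)
        prefix = f"zone {zone_idx}: " if zone_idx is not None else ""
        messages.append(prefix + err["message"])
    return messages

sample = (
    '{"data": null, "errors": [{"message": "cannot request data older than 2678400s",'
    ' "path": ["viewer", "zones", "0", "firewallEventsAdaptiveGroups"]}]}'
)
print(collect_errors(sample))  # ['zone 0: cannot request data older than 2678400s']
```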

## Common error types

### Service unavailability

Sample error messages:

* `unable to execute query, please try again later` (HTTP `503`)
* `too many queries in progress, please try again later` (HTTP `503`)

These messages indicate a temporary server-side issue. The first message typically means the upstream database is unreachable or returned an error. The second message means the server has reached its maximum number of concurrent queries.

Retry the request after a short delay. If the error persists, check the [Cloudflare status page ↗](https://www.cloudflarestatus.com/) for ongoing incidents.

### Dataset accessibility limits exceeded

Sample error messages:

* `cannot request data older than...` (HTTP `400`)
* `number of fields can't be more than...` (HTTP `400`)
* `limit must be positive number and not greater than...` (HTTP `400`)
* `query time range is too large...` (HTTP `400`)

These messages indicate that the query exceeds what is allowed for the particular dataset under the current [plan ↗](https://www.cloudflare.com/plans/), and an upgrade should be considered. Refer to [Node limits](https://developers.cloudflare.com/analytics/graphql-api/limits/#node-limits-and-availability) for details.

### Parsing issues

Sample error messages:

* `error parsing args...` (HTTP `400`)
* `scalar fields must have no selections` (HTTP `400`)
* `object field must have selections` (HTTP `400`)
* `unknown field...` (HTTP `400`)
* `query contains error, please review it and retry` (HTTP `400`)

These messages indicate that the query cannot be processed because it is malformed. Check the query syntax against the [GraphQL schema](https://developers.cloudflare.com/analytics/graphql-api/getting-started/explore-graphql-schema/) and correct the invalid fields or structure.

### Rate limits exceeded

Sample error messages:

* `rate limiter budget depleted, try again after 5 minutes` (HTTP `429`)
* `in combination, your request queries too many nodes, zones and accounts` (HTTP `429`)
* `query consumed excessive resources, please try running smaller queries which consume fewer resources` (HTTP `429`)

These messages indicate the query exceeded rate or resource limits. Reduce the query complexity, the number of zones or accounts per request, or wait before retrying. Refer to the [Limits](https://developers.cloudflare.com/analytics/graphql-api/limits/) section for more details about rate limits.
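Since `429` and `503` responses are transient, clients commonly retry with exponential backoff. A minimal sketch, assuming a `send_query` callable that returns an HTTP status and body (both names are hypothetical):

```python
import time

# Status codes treated as retryable, per the error types described above.
RETRYABLE = {429, 503}

def query_with_retry(send_query, max_attempts: int = 4, base_delay: float = 2.0):
    """Retry transient failures with exponential backoff; return the last result."""
    for attempt in range(max_attempts):
        status, body = send_query()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** attempt)
    return status, body
```

In a real client, the delay should also respect any `Retry-After` guidance and the five-minute window mentioned in the rate limiter message.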

### Authentication and authorization errors

Sample error messages:

* `Unauthorized` (HTTP `401`)
* `not authorized for that account` (HTTP `403`)
* `zones [...] are not authorized` (HTTP `403`)
* `does not have access to the path...` (HTTP `403`)

An `Unauthorized` response means the API token or bearer token is missing, expired, or invalid. Verify that you are passing a valid token in the `Authorization` header.

A `403` response means the token does not have the required permissions for the requested account or zone. Verify the token has the **Analytics: Read** permission for the relevant resources. Refer to the [Tokens](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) section for more details.

### Internal server errors

Sample error message:

* `Internal server error` (HTTP `500`)

This is a generic error indicating an unexpected failure. If it persists, contact [Cloudflare Support ↗](https://support.cloudflare.com/) with the full request and response, including the `Ray-ID` header from the HTTP response.


---

---
title: Confidence Intervals
description: Confidence intervals help assess accuracy and quantify uncertainty in results from sampled datasets. When querying sum or count fields on adaptive datasets, you can request a confidence interval to understand the possible range around an estimate. For example, specifying a confidence level of 0.95 returns the estimate, along with the range of values that likely contains the true value 95% of the time.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Confidence Intervals

Confidence intervals help assess accuracy and quantify uncertainty in results from sampled datasets. When querying sum or count fields on adaptive datasets, you can request a confidence interval to understand the possible range around an estimate. For example, specifying a confidence level of `0.95` returns the estimate, along with the range of values that likely contains the true value 95% of the time.

## Availability

* **Supported datasets**: Adaptive (sampled) datasets only.
* **Supported fields**: All `sum` and `count` fields.
* **Usage**: The confidence `level` must be provided as a decimal between 0 and 1 (for example, `0.90`, `0.95`, `0.99`).
* **Default**: If no confidence level is specified, intervals are not returned.

## Usage example

The following example shows how to query a confidence interval and interpret the response.

### Request

To request a confidence interval, use the `confidence(level: X)` argument in your query.

A GraphQL query

```graphql
query SingleDatasetWithConfidence($zoneTag: string, $start: Time, $end: Time) {
  viewer {
    zones(filter: {zoneTag: $zoneTag}) {
      firewallEventsAdaptiveGroups(
        filter: {datetime_gt: $start, datetime_lt: $end}
        limit: 1000
      ) {
        count
        avg {
          sampleInterval
        }
        confidence(level: 0.95) {
          count {
            estimate
            lower
            upper
            sampleSize
          }
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAygSwHYHMA2YAiBDALtgZzFwHUFcALAYQHskAzBAEzCQGMwAKAEgC86wAFWwoAXDAK4IyFABoY3Sdgi5xghAFsw87qyZrNYAJQwA3gCgYMAG4IwAd0hnLVmPyRgCnRmlyRxpu5CIuJ8AsIoAL4mFq6ujBAO2GhoAKLWrLgEAIJM2AAOuAgZAOIQNCD5Xi5xVj5+EAF5fkVaAPooqgpKKvLNxIZtvqF6kTW1aJrk4gCMAAwL41YxS65sFUi4q1bY1ijOtbUE2Br5GACSm5DWydswY4dW6wzMrBycGBlo4nMAdACcAFYVo81htcAdQa5PK08GA7nE0DRHBAEa5KvlIGirMdThhELx4VD7ncHrUyVYyWNIkA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41PwwUQMKABNm7LrwFthbThKkyQAXyA)

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAygSwHYHMA2YAiBDALtgZzFwHUFcALAYQHskAzBAEzCQGMwAKAEgC86wAFWwoAXDAK4IyFABoY3Sdgi5xghAFsw87qyZrNYAJQwA3gCgYMAG4IwAd0hnLVmPyRgCnRmlyRxpu5CIuJ8AsIoAL4mFq6ujBAO2GhoAKLWrLgEAIJM2AAOuAgZAOIQNCD5Xi5xVj5+EAF5fkVaAPooqgpKKvLNxIZtvqF6kTW1aJrk4gCMAAwL41YxS65sFUi4q1bY1ijOtbUE2Br5GACSm5DWydswY4dW6wzMrBycGBlo4nMAdACcAFYVo81htcAdQa5PK08GA7nE0DRHBAEa5KvlIGirMdThhELx4VD7ncHrUyVYyWNIkA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41PwwUQMKABNm7LrwFthbThKkyQAXyA)

### Response

The response includes the following values:

* `estimate`: The estimated value, based on sampled data.
* `lower`: The lower bound of the confidence interval.
* `sampleSize`: The number of sampled data points used to calculate the estimate.
* `upper`: The upper bound of the confidence interval.

In this example, the interpretation of the response is that, based on a sample of 40,054 data points, the estimated number of events is 42,939, with 95% confidence that the true value lies between 42,673 and 43,204.

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "firewallEventsAdaptiveGroups": [
            {
              "avg": {
                "sampleInterval": 1.0720277625205972
              },
              "confidence": {
                "count": {
                  "estimate": 42939,
                  "lower": 42673.44115335711,
                  "sampleSize": 40054,
                  "upper": 43204.55884664289
                }
              },
              "count": 42939
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```
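As a quick sanity check on these numbers, the half-width of the returned interval relative to the estimate gives the margin of error at the requested confidence level:

```python
# Values taken from the example response above.
lower, upper, estimate = 42673.44115335711, 43204.55884664289, 42939

# Half the interval width is the absolute margin of error around the estimate.
half_width = (upper - lower) / 2
margin_pct = 100 * half_width / estimate
print(round(half_width, 2), f"{margin_pct:.2f}%")  # 265.56 0.62%
```

A margin under 1% at 95% confidence suggests the sampled count is a close proxy for the true event count here.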


---

---
title: Datasets (tables)
description: Cloudflare Analytics offers a range of datasets, including both general and product-specific ones. Datasets use a consistent naming scheme that explicitly identifies the type of data they return.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Datasets (tables)

Cloudflare Analytics offers a range of datasets, including both general and product-specific ones. Datasets use a consistent naming scheme that explicitly identifies the type of data they return:

* **Domain** \- Each dataset is named after the field it describes and is associated with a set of nodes. Product-specific data nodes incorporate the name of the relevant product, for instance `loadBalancingRequests*` nodes.
* **Adaptive sampling** \- Nodes that represent data acquired using adaptive sampling incorporate the `Adaptive` suffix. For more details, refer to [Sampling](https://developers.cloudflare.com/analytics/graphql-api/sampling/).
* **Aggregated data** \- Nodes that represent aggregated data include the `Groups` suffix. For example, the `loadBalancingRequestsAdaptiveGroups` node represents aggregated data for Load Balancing requests. Aggregated data is returned in an array of `...Group` objects. Note that one node, `workersInvocationsAdaptive` (beta), is currently excluded from this naming convention.
* **Raw data** \- Raw data nodes, such as `loadBalancingRequestsAdaptive`, are not aggregated and so do not incorporate the `Groups` suffix. Raw data is returned in arrays containing objects of the relevant data type. For example, a query to `loadBalancingRequestsAdaptive` returns a list of `LoadBalancingRequest` objects.

For more information about datasets, their availability, and their beta or deprecation status, refer to the GraphQL [discovery](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/) features.
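The naming scheme above can be expressed as a small classifier. This is illustrative only; as noted, `workersInvocationsAdaptive` is a documented exception and would be misclassified by it:

```python
def classify_node(name: str) -> dict:
    """Infer what a dataset node returns from its name, per the naming scheme:
    an `Adaptive` infix marks adaptively sampled data, and a `Groups` suffix
    marks aggregated data (vs. raw per-event records)."""
    return {
        "adaptive_sampling": "Adaptive" in name,
        "aggregated": name.endswith("Groups"),
    }

print(classify_node("loadBalancingRequestsAdaptiveGroups"))
# {'adaptive_sampling': True, 'aggregated': True}
print(classify_node("loadBalancingRequestsAdaptive"))
# {'adaptive_sampling': True, 'aggregated': False}
```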

## Working with datasets

### Aggregated fields

This example illustrates the structure for Groups:

```graphql
type WhateverGroup {
    count # No subfields, it is just the group size. Not available for roll-up tables.
    sum {
        # fields that support summing (numbers, maps of numbers)
    }
    avg {
        # fields that support averaging (numbers)
    }
    uniq {
        # fields that support uniqueing (numbers, strings, enums, IPs, dates, etc.)
    }
}
```

Unique values are not available as a dimension but can be queried as demonstrated in this example:

```graphql
{
  # Get the number of bytes and unique IPs in each minute.
  httpRequests1mGroups {
    sum {
      bytes
    }
    uniq {
      uniques # unique IPs
    }
    dimensions {
      datetimeMinute
    }
  }

  # Count the number of events in each hour.
  firewallEventsAdaptiveGroups {
    count
    dimensions {
      datetimeHour
    }
  }
}
```
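For intuition, these group aggregates can be mimicked over plain data. The following Python sketch is illustrative only (the field names `datetimeMinute`, `bytes`, and `clientIP` mirror the examples above, but this is not API code); it groups sample events by minute and computes `count`, `sum`, and `uniq`:

```python
from collections import defaultdict

def aggregate_by_minute(events):
    """Group events on their minute and compute count/sum/uniq aggregates."""
    groups = defaultdict(lambda: {"count": 0, "bytes": 0, "ips": set()})
    for e in events:
        g = groups[e["datetimeMinute"]]
        g["count"] += 1                 # group size
        g["bytes"] += e["bytes"]        # summable field
        g["ips"].add(e["clientIP"])     # field that supports uniq
    return {
        minute: {
            "count": g["count"],
            "sum": {"bytes": g["bytes"]},
            "uniq": {"uniques": len(g["ips"])},
        }
        for minute, g in groups.items()
    }

events = [
    {"datetimeMinute": "2023-01-01T00:00", "bytes": 100, "clientIP": "1.1.1.1"},
    {"datetimeMinute": "2023-01-01T00:00", "bytes": 200, "clientIP": "1.1.1.1"},
    {"datetimeMinute": "2023-01-01T00:01", "bytes": 50, "clientIP": "2.2.2.2"},
]
groups = aggregate_by_minute(events)
```

The `dimensions` of a group correspond to the dictionary key here (`datetimeMinute`), and the aggregate functions apply within each key.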

### Schema type definitions

Every exposed table has a GraphQL type definition. Type definitions observe the following rules:

* Regular fields represent themselves.
* Every field, including nested fields, has a type and represents a list of that type.
* The `enum` type represents an enumerated field.

Here is an example type definition for `ContentTypeMapElem`:

```graphql
type ContentTypeMapElem {
    edgeResponseContentType: UInt32!
    requests: UInt64!
    bytes: UInt64!
}

# An array of httpRequestsGroup is the result of an httpRequests1hGroups or
# httpRequests1mGroups query.
type httpRequestsGroup {
    date: Date!
    timeslot: DateTime!
    requests: UInt64!
    contentTypeMap: [ContentTypeMapElem!]!
    # ... other fields
}

enum TrustedClientCategory {
    UNKNOWN
    REAL_BROWSER
    HONEST_BOT
}

# An array of Request is the result of an httpRequests query.
type Request {
    trustedClientCategory: TrustedClientCategory!
    # ... other fields
}
```


---

---
title: Discovery
description: GraphQL API supports introspection to explore nodes and provides a way to retrieve the user's limits for every node.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Discovery

GraphQL API supports [introspection](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/introspection/) to explore nodes and provides a way to retrieve the user's [limits](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/settings/) for every node.


---

---
title: Introspection
description: Cloudflare GraphQL API has a dynamic schema and exposes more than 70 datasets across zone and account scopes. We constantly expand the list and replace existing ones with more capable alternatives.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Introspection

Cloudflare GraphQL API has a dynamic schema and exposes more than 70 datasets across zone and account scopes. We constantly expand the list and replace existing ones with more capable alternatives.

To tackle the schema question, GraphQL provides an [introspection](https://graphql.org/learn/introspection/) mechanism. It is part of the GraphQL specification and allows you to explore the graph of datasets and fields.

The introspection results provide an overview of all available nodes and fields, along with their descriptions and deprecation status.

Although GraphQL defines `query`, `subscription`, and `mutation` operations, the Cloudflare GraphQL API only supports the `query` operation.

## Description and Beta mode

In addition to describing the data exposed by a given node or field, descriptions indicate whether it is in beta mode. Beta nodes (and fields) are for testing and exploration and are usually available to customers on more extensive plans. Do not rely on beta data nodes, since they are subject to change or removal without notice.

## Deprecation

Introspection also provides information about deprecation status, which Cloudflare uses to notify you of replacement plans. If a sunset date is provided, migrate to the replacement node(s) before that date to avoid disruption.

## Availability

Some nodes might only be available to certain users. Refer to the [settings](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/settings/) node for more details about availability and personal limits on a given node.

## Explore documentation

The most convenient way to introspect the schema is to use a documentation [explorer](https://developers.cloudflare.com/analytics/graphql-api/getting-started/explore-graphql-schema/), which is usually part of a GraphQL client (such as GraphiQL or Altair).

Alternatively, you can introspect manually by querying the `__schema` node with the needed directives.

A typical introspection query:

```graphql
{
  __schema {
    queryType {
      name
    }
    mutationType {
      name
    }
    subscriptionType {
      name
    }
    types {
      ...FullType
    }
    directives {
      name
      description
      locations
      args {
        ...InputValue
      }
    }
  }
}

fragment TypeRef on __Type {
  kind
  name
  ofType {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
        ofType {
          kind
          name
          ofType {
            kind
            name
            ofType {
              kind
              name
              ofType {
                kind
                name
              }
            }
          }
        }
      }
    }
  }
}

fragment InputValue on __InputValue {
  name
  description
  type {
    ...TypeRef
  }
  defaultValue
}

fragment FullType on __Type {
  kind
  name
  description
  fields(includeDeprecated: true) {
    name
    description
    args {
      ...InputValue
    }
    type {
      ...TypeRef
    }
    isDeprecated
    deprecationReason
  }
  inputFields {
    ...InputValue
  }
  interfaces {
    ...TypeRef
  }
  enumValues(includeDeprecated: true) {
    name
    description
    isDeprecated
    deprecationReason
  }
  possibleTypes {
    ...TypeRef
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=N4KABGD6kM4MYAsCmBbAhmUEIEcCuSATgJ4AqxADkpuNhAHZopK0QC+rYKeALmjwEsA9vXJUadBkxZ0OdGHgBG8QgIqCRY6lkmNmnOdh6UkMCZIB0VgGJ4ANna0HOAEwGEkcQQDdT5unoyki6mcKrqwvScEHZCcPyRMNFgaIQA5mY6kmBWFgCS9BS8AGpodgTJhuy0HBwAZoRoacz0PGBaAEpIdWAiUJBa5gDWAvQutIG0QnWDWWAjY5yTdNOzyQvj2cuSqyb+khvJUvrZELvic9mHp9jbN+faRwejmze30k8rM3uXb9dvAQ+AK+a2Bz0WYMBJ0hVTesOy8OwiNhhlqIAaTRabQKRR4pXK1D60BxJTKBHM2xCKjUGiiEGMF04uU63RqtBCdTQ9jxZJY9UazSQrTAtgcgyJAx+tH+lNC4VptDqAiQdhcMAAFKM4OUQgARJAUDzxHhIFwALjAPEIBAAlPs7mAqWEaZFOKkMvsILkSTyCc46AzHtlmSYunV-dgBDB9YbPPxTa4DUaEiIumgYCI2RBRrjrMrVZkmVYffiKtVs60iJy4H45iGqGGs2AhXgUKXTJr6Nq8Hqk3GTebLdakHa5g6nfLXXQozHkwPE7HjZE0xm6WBDBQhDAYAJFHYkFpC3R60hG9U2EA&variables=N4XyA)

For more details on how to send a GraphQL request with curl, please refer to [Execute a GraphQL query with curl](https://developers.cloudflare.com/analytics/graphql-api/getting-started/execute-graphql-query/).
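As a sketch of what such a request looks like from code, the helper below builds (but does not send) the POST payload for the API endpoint `https://api.cloudflare.com/client/v4/graphql`, assuming the usual bearer-token authentication scheme; sending it with an HTTP client is left out:

```python
import json

# The GraphQL API accepts a POST body with "query" and optional "variables"
# keys; this helper only assembles the endpoint, headers, and JSON body.
GRAPHQL_ENDPOINT = "https://api.cloudflare.com/client/v4/graphql"

def build_introspection_request(api_token: str):
    query = "{ __schema { queryType { name } } }"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "variables": {}})
    return GRAPHQL_ENDPOINT, headers, body
```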


---

---
title: Settings node
description: Cloudflare GraphQL API exposes more than 70 datasets to its customers. These datasets represent different Cloudflare products with very different data shapes; thus, each has its configuration of limits.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Settings node

Cloudflare GraphQL API exposes more than 70 datasets to its customers. These datasets represent different Cloudflare products with very different data shapes; thus, each has its configuration of [limits](https://developers.cloudflare.com/analytics/graphql-api/limits/).

Although the essential datasets (such as `httpRequestsAdaptiveGroups` and `firewallEventsAdaptive`) are available on all plans, users on larger plans benefit from an extended set of datasets and wider query limits.

In addition to [introspection](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/introspection/), users can query the `settings` node, which is available in both the zone and account scopes.

## Format

The `settings` node exposes all datasets from `zones` and `accounts` as fields.

Using the `settings` node on the `accounts` and `zones` nodes:

```graphql
{
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      settings {
        # any dataset(s) from accounts
      }
    }
    zones(filter: { zoneTag: $zoneTag }) {
      settings {
        # any dataset(s) from zones
      }
    }
  }
}
```

Every subnode of the `settings` node can include these fields:

* `enabled` \- shows whether the node is available to the requester;
* `availableFields` \- lists the fields available to the requester. For a nested field, the path is returned, such as `sum_requests`;
* `maxPageSize` \- the maximum number of records a query can return;
* `maxNumberOfFields` \- the maximum number of fields that can be used in a single query on that node;
* `notOlderThan` \- how far back in time a query can read, in seconds;
* `maxDuration` \- the maximum width of the requested time range, in seconds.
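These limits can be checked client-side before a query is issued. A minimal Python sketch, assuming a settings response shaped like the sample on this page (the helper name `check_query_bounds` is hypothetical, not part of the API):

```python
from datetime import datetime, timedelta, timezone

def check_query_bounds(settings, start, end, now=None):
    """Return a list of limit violations for a [start, end) query window."""
    now = now or datetime.now(timezone.utc)
    problems = []
    if not settings["enabled"]:
        problems.append("dataset not enabled for this user")
    if (end - start).total_seconds() > settings["maxDuration"]:
        problems.append("time range wider than maxDuration")
    if (now - start).total_seconds() > settings["notOlderThan"]:
        problems.append("start is older than notOlderThan allows")
    return problems

# Sample limits: 3-day max range, 31-day retention (as in the response below).
settings = {"enabled": True, "maxDuration": 259200, "notOlderThan": 2678400}
now = datetime(2023, 6, 30, tzinfo=timezone.utc)
ok = check_query_bounds(settings, now - timedelta(days=2), now, now=now)
too_wide = check_query_bounds(settings, now - timedelta(days=5), now, now=now)
```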

## A sample query

Get the boundaries of the `firewallEventsAdaptive` node:

```graphql
query SampleQuery($zoneTag: string) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      settings {
        firewallEventsAdaptive {
          enabled
          maxDuration
          maxNumberOfFields
          maxPageSize
          notOlderThan
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAygQwLYAcA2YCK5oAoAkAXgPYB2YAKggOYBcMAzgC4QCWp1AlDAN4BQMGADdWYAO6ReAwTBLkGuAGas0TSPR6yylGvSLaq1GAF9u-GTIZgmTdtQZSLF5RHEI0aAKJCwpJgwBBABMEFFsfRycLXwQAIwwg6SjBJAQADwAREAgEWzIk5NS0gDkQJFjIAHlFADFRNCCGAqiigAUaMDhWQjBmp1JiJkqGyAoACwRSPsFjPtmLeZM+YyA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAXyA)

`firewallEventsAdaptive` limits for a given user:

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "settings": {
            "firewallEventsAdaptive": {
              "enabled": true,
              "maxDuration": 259200,
              "maxNumberOfFields": 30,
              "maxPageSize": 10000,
              "notOlderThan": 2678400
            }
          }
        }
      ]
    }
  },
  "errors": null
}
```

For more details on how to execute queries, refer to the getting started [guides](https://developers.cloudflare.com/analytics/graphql-api/getting-started/).


---

---
title: Filtering
description: Filters constrain queries to a particular account or set of zones, requests by date, or those from a specific user agent, for example. Without filters, queries can suffer performance degradation, results can exceed supported bounds, and the data returned can be noisy.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Filtering

Filters constrain queries to a particular account or set of zones, requests by date, or those from a specific user agent, for example. Without filters, queries can suffer performance degradation, results can exceed supported bounds, and the data returned can be noisy.

## Filter Structure

The GraphQL filter is represented by the [GraphQL Input Object ↗](https://graphql.github.io/graphql-spec/June2018/#sec-Input-Objects), which exposes Boolean algebra on nodes.

You can use filters as an argument on the following resources:

* zones
* accounts
* tables (datasets)

### Zone filter

Allows querying zone-related data by zone ID (`zoneTag`).

```graphql
zones(filter: { zoneTag: "your Zone ID" }) {
    ...
}
```

The zone filter must conform to the following grammar:

```
filter
    { zoneTag: t }
    { zoneTag_gt: t }
    { zoneTag_in: [t, ...] }
```

Compound filters (comma-separated, `AND`, `OR`) are not supported.

Use the `zoneTag: t` and `zoneTag_in: [t, ...]` forms when you know the zone IDs. Use the `zoneTag_gt: t` form with limits to traverse all zones if the zone IDs are not known. Zones always sort alphanumerically.

Omit the filter to get results for all zones (up to the supported limit).
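The `zoneTag_gt` traversal can be pictured with plain data. This Python sketch (illustrative only, with made-up tags) pages through a sorted list the same way, passing the last tag of each page as the next `zoneTag_gt` value:

```python
def next_zone_page(all_zone_tags, last_tag=None, limit=2):
    """Cursorless paging: return up to `limit` tags strictly greater than last_tag."""
    tags = sorted(all_zone_tags)  # zones always sort alphanumerically
    if last_tag is not None:
        tags = [t for t in tags if t > last_tag]
    return tags[:limit]

zones = ["a1", "c3", "b2", "d4"]
page1 = next_zone_page(zones)             # first page
page2 = next_zone_page(zones, page1[-1])  # zoneTag_gt: last tag of page1
```

Iterating until an empty page is returned visits every zone exactly once.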

### Account filter

The account filter uses the same structure and rules as the zone filter, except that it uses the Account ID (`accountTag`) instead of the Zone ID (`zoneTag`).

You must specify an account filter when making an account-scoped query, and you cannot query multiple accounts simultaneously.

Note

Network Analytics queries require an Account ID (`accountTag`) filter.

### Table (dataset) filter

Table filters require that you query at least one node. Use the `AND` operator to create and combine multi-node filters. Table filters also support the `OR` operator, which you must specify explicitly.

The following grammar describes the table filter, where `k` is the GraphQL node on which to filter and `op` is one of the supported operators for that node:

```
filter
  { kvs }
kvs
  kv
  kv, kvs
kv
  k: v
  k_op: v
  AND: [filters]
  OR: [filters]
filters
  filter
  filter, filters
```
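As a sanity check on this grammar, here is the shape of a compound filter built as a JSON object in Python. The field names are taken from the examples later on this page; this only constructs the variables object, it does not query anything:

```python
import json

# A filter matching: datetime > start AND (country = "US" OR country = "GB").
flt = {
    "AND": [
        {"datetime_gt": "2018-01-01T10:00:00Z"},
        {"OR": [
            {"clientCountryName": "US"},
            {"clientCountryName": "GB"},
        ]},
    ]
}
# The filter is normally passed through the "variables" key of the request.
payload = json.dumps({"variables": {"filter": flt}})
```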

### Operators

Operator support varies, depending on the node type and node name.

#### Array operators

The following operators are supported for all array types:

| Operator | Comparison                                      |
| -------- | ----------------------------------------------- |
| has      | array contains a value                          |
| hasall   | array contains all of a list of values          |
| hasany   | array contains at least one of a list of values |

#### Scalar operators

The following operators are supported for all scalar types:

| Operator | Comparison          |
| -------- | ------------------- |
| gt       | greater than        |
| lt       | less than           |
| geq      | greater or equal to |
| leq      | less or equal to    |
| neq      | not equal           |
| in       | in                  |

#### String operators

The `like` operator is available for string comparisons and supports the `%` character as a wildcard.
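The `%` wildcard behaves like `.*` in a regular expression. A small Python sketch of that translation (illustrative only; this is not how the API implements it):

```python
import re

def like_to_regex(pattern):
    """Translate a `like` pattern (`%` wildcard) into an anchored regex."""
    return "^" + ".*".join(re.escape(part) for part in pattern.split("%")) + "$"

# "%.example.com" matches any host under example.com.
host_matches = re.fullmatch(like_to_regex("%.example.com"), "www.example.com")
```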

## Examples

Note

Time filters apply to event start timestamps, which means requests that end after the filtered range may still be included, as long as they start within it.

### General example

```graphql
query GeneralExample($zoneTag: string, $start: Time) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      httpRequestsAdaptiveGroups(
        filter: { datetime_gt: $start, clientCountryName: "GB" }
        limit: 1
      ) {
        count
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBA4mAdpAhgGwKIA8UFsAOaYAFACQBeA9sgCooDmAXDAM4AuEAlovQDQyl2KCG2Y1OuMAEoYAbwBQMGADdOYAO6Q5ipTCrIWxAGac0bSM1l7qYOkwH7bDGAF8ZC3boAWbNvgBKYKBg7CwAggAmKPhsnMpgcBCUIPiGOp5KJmYWcjBR5rGSAPr0ogJCIvwAxmhqiGwAwsn10AByeGDMAERwAEJdrukZtbicZQCMQ0ruU7pVzWyzLkPLSssuQA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41G3EYQAXyA)

### Filter on a specific node

The following GraphQL example shows how to filter a specific node. The SQL equivalent follows.

#### GraphQL

```graphql
httpRequestsAdaptiveGroups(filter: {datetime: "2018-01-01T10:00:00Z"}) {
    ...
}
```

#### SQL

```sql
WHERE datetime="2018-01-01T10:00:00Z"
```

### Filter on multiple fields

The following GraphQL example shows how to apply a filter to multiple fields, in this case two datetime fields. The SQL equivalent follows.

#### GraphQL

```graphql
httpRequests1hGroups(filter: {datetime_gt: "2018-01-01T10:00:00Z", datetime_lt: "2018-01-01T11:00:00Z"}) {
    ...
}
```

#### SQL

```sql
WHERE (datetime > "2018-01-01T10:00:00Z") AND (datetime < "2018-01-01T11:00:00Z")
```

### Filter using the `OR` operator

The following GraphQL example demonstrates using the `OR` operator in a filter. This `OR` operator filters for the value `US` or `GB` in the `clientCountryName` field.

#### GraphQL

```graphql
httpRequestsAdaptiveGroups(
    filter: {
        datetime: "2018-01-01T10:00:00Z",
        OR: [{clientCountryName: "US"}, {clientCountryName: "GB"}]
    }) {
    ...
}
```

#### SQL

```sql
WHERE datetime="2018-01-01T10:00:00Z"
  AND ((clientCountryName = "US") OR (clientCountryName = "GB"))
```

### Filter an array by one value

The following GraphQL examples show how to filter an array field to only return data that includes a specific value. The SQL equivalent follows.

#### GraphQL

```graphql
mnmFlowDataAdaptiveGroups(filter: {ruleIDs_has: "rule-id"}) {
    ...
}
```

#### SQL

```sql
WHERE has(ruleIDs, 'rule-id')
```

### Filter an array by multiple values

The following GraphQL examples show how to filter an array field to only return data that includes several values. The SQL equivalent follows.

#### GraphQL

```graphql
mnmFlowDataAdaptiveGroups(filter: {ruleIDs_hasall: ["rule-id-1", "rule-id-2"]}) {
    ...
}
```

#### SQL

```sql
WHERE has(ruleIDs, 'rule-id-1') AND has(ruleIDs, 'rule-id-2')
```

### Filter end users

Add the `requestSource` filter for `eyeball` to return request, data transfer, and visit data about only the end users of your website. This will exclude actions taken by Cloudflare products (for example, cache purge, healthchecks, Workers subrequests) on your zone.

## Subqueries (advanced filters)

Subqueries are not currently supported. As a workaround, you can run two GraphQL queries, using the results of the first to filter the second.
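The two-query workaround typically collects keys with the first query and feeds them into the second query's filter. A local Python sketch of that flow, where `run_query` is a stand-in for a real API call and the rows are canned sample data:

```python
def run_query(dataset, flt=None):
    """Stand-in for a GraphQL API call; returns canned sample rows."""
    rows = [
        {"ruleId": "r1", "clientIP": "1.1.1.1"},
        {"ruleId": "r2", "clientIP": "2.2.2.2"},
        {"ruleId": "r1", "clientIP": "3.3.3.3"},
    ]
    if flt and "ruleId_in" in flt:
        rows = [r for r in rows if r["ruleId"] in flt["ruleId_in"]]
    return rows

# Query 1: collect the rule IDs of interest.
rule_ids = sorted({r["ruleId"] for r in run_query("firewallEventsAdaptive")})
# Query 2: feed a subset of those IDs back in as an `in` filter.
hits = run_query("firewallEventsAdaptive", flt={"ruleId_in": ["r1"]})
```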


---

---
title: Nested Structures
description: Two kinds of nested structures are supported: arrays and maps. Fields of either of these types are arrays; when they are part of a query result, which is already an array of objects, they become nested arrays.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Nested Structures

Two kinds of nested structures are supported: **arrays** and **maps**. Fields of either type are arrays; when they are part of a query result, which is already an array of objects, they become nested arrays.

## Arrays

The GraphQL API supports two different sorts of arrays:

* Some arrays contain scalar types (for example, `[String]`) and function like ordinary fields that [can be filtered](https://developers.cloudflare.com/analytics/graphql-api/features/filtering/).
* Some arrays contain more complex types (for example, `[SubRequest]`). The following section describes their behavior.

Arrays of non-scalar types behave as a single value: there is no way to paginate through, filter, filter by, group, or group by the array.

You can, however, choose which fields of the underlying type to fetch.

For example, given arrays like this:

```graphql
type SubRequest {
    url: String!
    status: Int
}

type Request {
    date: Date!
    datetime: DateTime!
    subRequests: [SubRequest!]!
}
```

You can run a query to get the status by subrequest:

```graphql
{
    requests {
        date
        subRequests {
            # discard the url, only need the status
            status
        }
    }
}
```

The results would be:

```json
{
    "requests": [
        {
            "date": "2018-01-01",
            "subRequests": [{"status": 404}, {"status": 200}, {"status": 404}]
        },
        {
            "date": "2018-01-01",
            "subRequests": [{"status": 200}]
        }
    ]
}
```

## Maps

Maps behave like arrays, but can be aggregated using the `sum` function. They are used in aggregated datasets, such as `httpRequests1dGroups`.

Example maps:

```graphql
type URLStatsMapElem {
    url: String!
    requests: Int
    bytes: Int
}

type Request {
    date: Date!
    datetime: DateTime!
    urlStatsMap: [URLStatsMapElem!]!
}
```

Query:

```graphql
{
    requests {
        sum {
            urlStatsMap {
                url
                requests
                bytes
            }
        }
        dimensions {
            date
        }
    }
}
```

Response:

```json
{
    "requests": [
        {
            "sum": {
                "urlStatsMap": [
                    {
                        "url": "hello-world.org/1",
                        "requests": 123,
                        "bytes": 1024
                    },
                    {
                        "url": "hello-world.org/10",
                        "requests": 1230,
                        "bytes": 10240
                    }
                ]
            },
            "dimensions": {
                "date": "2018-10-19"
            }
        },
        ...
    ]
}
```
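Conceptually, `sum` over a map adds up each numeric field per map key. A local Python sketch of that merge (illustrative only, not API code):

```python
from collections import defaultdict

def sum_url_stats(maps):
    """Merge several urlStatsMap arrays, summing requests/bytes per url."""
    totals = defaultdict(lambda: {"requests": 0, "bytes": 0})
    for stats_map in maps:
        for elem in stats_map:
            t = totals[elem["url"]]
            t["requests"] += elem["requests"]
            t["bytes"] += elem["bytes"]
    return {url: dict(v) for url, v in totals.items()}

merged = sum_url_stats([
    [{"url": "hello-world.org/1", "requests": 100, "bytes": 1000}],
    [{"url": "hello-world.org/1", "requests": 23, "bytes": 24}],
])
```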

## Examples

Query array fields in raw datasets:

```graphql
query NestedFields($zoneTag: string, $start: Time, $end: Time) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      events(limit: 2, filter: { datetime_geq: $start, datetime_leq: $end }) {
        matches {
          ruleId
          action
          source
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAcmAzgFzAEwGIEswBs2IAUAJAF4D2AdmACoCGA5gFwwoRaUMA0MxKdEZCxpYAtmB7EwlNMLFgAlDADeAKBgwAbjgDukFeo0wK1IgDMsuVBBbLjVWoxZkH9BjAC+StUaNhN0shEuGJYQjAATDwWVpC2MGh0qMjyAPoMYMDO-II8iclpuJnO0mie3oa+GqJJAMYAFkgGVVUQIEUAkmiVLTB0tSlUPS2I5CAQtWDDRh7Ts77znqoeQA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41G3EYKIGFAAmzdl14C2wtpwlSZcgL5A)

Example response:

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "events": [
            {
              "matches": [
                {
                  "action": "allow",
                  "ruleId": "rule-id-one",
                  "source": "asn"
                },
                {
                  "action": "block",
                  "ruleId": "rule-id-two",
                  "source": "asn"
                }
              ]
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```

Query maps fields in aggregated datasets:

```graphql
query MapCapacity(
  $zoneTag: string
  $dateStart: Date
  $dateEnd: Date
  $start: Time
  $end: Time
) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      httpRequests1mGroups(
        limit: 10
        filter: {
          date_geq: $dateStart
          date_leq: $dateEnd
          datetime_geq: $start
          datetime_lt: $end
        }
      ) {
        sum {
          countryMap {
            clientCountryName
            requests
            bytes
            threats
          }
        }
        dimensions {
          datetimeHour
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAsgQwA4GFkIMYEsAuUAUAUDDACQBeA9gHZgAqCA5gFwwDOOEW1jxZAJghxgAyjgQQcrACJCwfUoOEBRavxlyFHCVJh0sAW3klSYNa31HCAShgBvPgDcsYAO6R7fElVpt8AMywAG2EIVjsYH3omVgoaaMYYAF9bBxJ0mAALHBwkACUwUDAONgBGAwBxCEoQJD8vDJggw1xWUoAGBozAkMhwrsalMAB9RkLYobEdAYyh4aDxgTlVfhn0oZxDEbHgWO1JNZINrfndUzU1pIHUtbYQA09Gxowa6k4oRCRHp+fmsxwUK93gA5BBWH6NCCFcAlQ7pABGUGEbDhJBwmShQhREJIVwheJ+-C21DYWBobG+EOORgAEjUIJcBgTcXwrkkgA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhABMEAXGAZVoQCda0QAmABk4DYAtNwAsQzhWp0YeKFQ49+Q0dwDMEgM7M283oJFCVcbt1QBWU6m6mMEmLJ2L9qo3zMWrNgL5A)

Example response:

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "httpRequests1mGroups": [
            {
              "dimensions": {
                "datetime": "2019-03-08T17:00:00Z"
              },
              "sum": {
                "countryMap": [
                  {
                    "bytes": 51911317,
                    "clientCountryName": "XK",
                    "requests": 4492,
                    "threats": 0
                  },
                  {
                    "bytes": 1816103586,
                    "clientCountryName": "T1",
                    "requests": 132423,
                    "threats": 0
                  },
                  ...
                ]
              }
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```


---

---
title: Pagination
description: Pagination – breaking up your query results into smaller parts – can be done using limit, orderBy, and filtering parameters. The GraphQL Analytics API does not support cursors for pagination.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Pagination

Pagination – breaking up your query results into smaller parts – can be done using `limit`, `orderBy`, and filtering parameters. The GraphQL Analytics API does not support cursors for pagination.

* `limit` (integer) defines how many records to return.
* `orderBy` (string) defines the sort order for the data.

## Query pages without cursors

Our examples assume that the combination of `datetime` and `clientCountryName` is unique.

### Get the first _n_ results of a query

To limit results, add the `limit` parameter as an integer. For example, query the first two records:

```graphql
firewallEventsAdaptive (limit: 2, orderBy: [datetime_ASC, clientCountryName_ASC]) {
    datetime
    clientCountryName
}
```

Note

Specifying a sort order by date returns less specific results than specifying a sort order by date and country.

**Response**

```json
{
  "firewallEventsAdaptive": [
    {
      "datetime": "2018-11-12T00:00:00Z",
      "clientCountryName": "UM"
    },
    {
      "datetime": "2018-11-12T00:00:00Z",
      "clientCountryName": "US"
    }
  ]
}
```

### Query for the next page using filters

To get the next _n_ results, specify a filter to exclude the last result from the previous query. Taking the previous example, you can do this by appending the greater-than operator (`_gt`) to the `clientCountryName` field and the greater-or-equal operator (`_geq`) to the `datetime` field. This is where being specific about sort order comes into play. You are less likely to miss results using a more granular sort order.

```graphql
firewallEventsAdaptive (limit: 2, orderBy: [datetime_ASC, clientCountryName_ASC], filter: {datetime_geq: "2018-11-12T00:00:00Z", clientCountryName_gt: "US"}) {
    datetime
    clientCountryName
}
```

**Response**

```json
{
  "firewallEventsAdaptive": [
    {
      "datetime": "2018-11-12T00:00:00Z",
      "clientCountryName": "UY"
    },
    {
      "datetime": "2018-11-12T00:00:00Z",
      "clientCountryName": "UZ"
    }
  ]
}
```

### Query the previous page

To get the previous _n_ results, reverse the filters and sort order.

```graphql
firewallEventsAdaptive (limit: 2, orderBy: [datetime_DESC, clientCountryName_DESC], filter: {datetime_leq: "2018-11-12T00:00:00Z", clientCountryName_lt: "UY"}) {
  datetime
  clientCountryName
}
```

**Response**

JSON

```
{
  "firewallEventsAdaptive" : [
    {
      "datetime": "2018-11-12T00:00:00Z",
      "clientCountryName": "US"
    },
    {
      "datetime": "2018-11-12T00:00:00Z",
      "clientCountryName": "UM"
    }
  ]
}
```
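The pattern above generalizes: to fetch any next page, carry the last row's sort-key values forward into the filter. A sketch, where `$lastDatetime` and `$lastCountry` are hypothetical variables holding the sort-key values of the final row returned by the previous page:

```
# $lastDatetime and $lastCountry are hypothetical placeholders for the
# sort-key values of the last row in the previous page of results.
firewallEventsAdaptive (
    limit: 2,
    orderBy: [datetime_ASC, clientCountryName_ASC],
    filter: {datetime_geq: $lastDatetime, clientCountryName_gt: $lastCountry}
) {
    datetime
    clientCountryName
}
```

Repeating this until the response contains fewer rows than `limit` walks the full result set without a cursor.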


---

---
title: Sorting
description: You can specify the order of the query result elements using the orderBy argument. By default, the results are sorted by the primary key of a dataset (table). If you specify another field to sort on, the primary key is also used in the sorting key, allowing results to remain consistent for pagination.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Sorting

You can specify the order of the query result elements using the `orderBy` argument. By default, the results are sorted by the primary key of a dataset (table). If you specify another field to sort on, the primary key is also used in the sorting key, allowing results to remain consistent for pagination.

The default order for an aggregated dataset is by the fields on which the aggregated data is grouped. If you specify a different order, the aggregation group is appended to your specified ordering.

Note

Ordering within nested structures is not supported.

## Examples

### Raw data sorting

```
firewallEventsAdaptive (orderBy: [clientCountryName_ASC]) {
    clientCountryName
}
```

### Raw data sorting using multiple fields

```
firewallEventsAdaptive (orderBy: [clientCountryName_ASC, datetime_DESC]) {
    clientCountryName
    datetime
}
```

### Group sorting by aggregation function

```
httpRequests1hGroups (orderBy: [sum_bytes_DESC]){
    sum {
        bytes
        requests
    }
    dimensions {
        datetime
    }
}
```


---

---
title: Get started
description: Use these articles to get started with the Cloudflare GraphQL API:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Get started

Use these articles to get started with the Cloudflare GraphQL API:

* [Authentication](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/) \- walks you through the options and steps required to set up access to the Cloudflare API.
* [Querying basics](https://developers.cloudflare.com/analytics/graphql-api/getting-started/querying-basics/) \- provides simple query examples to start exploring the GraphQL API.
* [Introspect the GraphQL schema](https://developers.cloudflare.com/analytics/graphql-api/getting-started/explore-graphql-schema/) \- explains how to browse the schema with a GraphQL client.
* [Create a query in a GraphQL client](https://developers.cloudflare.com/analytics/graphql-api/getting-started/compose-graphql-query/) \- describes how to build and run a query against the Cloudflare GraphQL API in a GraphQL client.
* [Use curl to query the GraphQL API](https://developers.cloudflare.com/analytics/graphql-api/getting-started/execute-graphql-query/) \- walks you through running a query against the Cloudflare GraphQL API from the command line.

For examples of how to build your own GraphQL Analytics dashboard and query specific information, such as Firewall and Workers events, refer to [Tutorials](https://developers.cloudflare.com/analytics/graphql-api/tutorials/).

Data unavailability: Customer Metadata Boundary configuration

If you encounter a message on the dashboard indicating that your data is unavailable due to your account's Metadata Boundary configuration, you are trying to access data that is not stored in your region (for example, you are in the US and trying to access data that is stored only in the EU, or vice versa). If you receive this message while in the region where your data is stored, there are two potential causes:

* Your account has Customer Metadata Boundary (CMB) enabled, and your request is being directed to an incorrect region. For example, you are in the EU but CMB is configured to store your data in the US.
* If you are accessing your data from the correct region, such as being in the EU with CMB configured to store your data in the EU, the issue may be caused by network congestion. Typically, this problem resolves within a few minutes.


---

---
title: Authentication
description: Cloudflare separates service configuration by zone. When there are multiple accounts, each with many zones, it is important to restrict GraphQL Analytics API access to only those account and zone resources that are relevant for the task at hand.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Authentication

Cloudflare separates service configuration by zone. When there are multiple accounts, each with many zones, it is important to restrict GraphQL Analytics API access to only those account and zone resources that are relevant for the task at hand.

To secure access to your GraphQL Analytics data, use a Cloudflare API key or token to authenticate an API request.

This table outlines the differences between Cloudflare API keys and tokens:

| Authentication Method                                                                      | Description                                                                                                                                                                                                                                               |
| ------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [API Tokens](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) | Cloudflare recommends API Tokens as the preferred way to interact with Cloudflare APIs. You can configure the scope of tokens to limit access to account and zone resources, and you can define the Cloudflare APIs to which the token authorizes access. |
| [API Keys](https://developers.cloudflare.com/fundamentals/api/get-started/keys/)           | Unique to each Cloudflare user and used only for authentication. API keys do not authorize access to accounts or zones. Use the Global API Key for authentication.                                                                                        |

To create and configure GraphQL Analytics API tokens, refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/).

To find and retrieve API keys, as well as edit HTTP headers for authentication in GraphiQL, refer to [Authenticate with a Cloudflare API key](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-key-auth/).
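As a quick reference, the two methods differ only in the HTTP headers sent with the request. A sketch of both (the bracketed values are placeholders for your own credentials, and the query body is elided):

```
# API token (recommended): a single Authorization header
curl https://api.cloudflare.com/client/v4/graphql \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{"query": "..."}'

# Global API key: email and key headers
curl https://api.cloudflare.com/client/v4/graphql \
  --header "X-Auth-Email: <EMAIL>" \
  --header "X-Auth-Key: <GLOBAL_API_KEY>" \
  --header "Content-Type: application/json" \
  --data '{"query": "..."}'
```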


---

---
title: Authenticate with a Cloudflare API key
description: API keys are unique to each Cloudflare user and used only for authentication. An API key does not authorize access to accounts or zones. To ensure that the GraphQL Analytics API authenticates your queries, retrieve your Cloudflare Global API Key.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Authenticate with a Cloudflare API key

API keys are unique to each Cloudflare user and used only for authentication. An API key does not authorize access to accounts or zones. To ensure that the GraphQL Analytics API authenticates your queries, retrieve your Cloudflare Global API Key.

Learn how to [retrieve your API Key in the Cloudflare dashboard](https://developers.cloudflare.com/fundamentals/api/get-started/keys/).


---

---
title: Configure an Analytics API token
description: Cloudflare recommends API tokens as the preferred authentication method with Cloudflare APIs. This article walks through creating API tokens for authentication to the GraphQL Analytics API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Configure an Analytics API token

Cloudflare recommends API tokens as the preferred authentication method with Cloudflare APIs. This article walks through creating API tokens for authentication to the GraphQL Analytics API.

For more details on API tokens and the full range of supported options, refer to [Creating API tokens](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/).

To create an API token for authentication to the GraphQL Analytics API, use this workflow:

* [Access the Create API Token page](#access-the-create-api-token-page)
* [Configure a custom API token](#configure-a-custom-api-token)
* [Review and create your API token](#review-and-create-your-api-token)
* [Copy and test your API token](#copy-and-test-your-api-token)

## Access the Create API Token page

1. In the Cloudflare dashboard, go to the **Account API tokens** page.  
[ Go to **Account API tokens** ](https://dash.cloudflare.com/?to=/:account/api-tokens)
2. Select **Create Token**.
![API Tokens tab](https://developers.cloudflare.com/_astro/user-profile-api-tokens-tab.Cfjm5UAa_Z27c0cL.webp) 

The **Create API Token** page displays.

![Clicking Get started in the Create API Token page](https://developers.cloudflare.com/_astro/create-api-token-page-display.DTQbXvJf_1PY59q.webp) 

The next section of this walkthrough shows you how to configure a custom token for access to the GraphQL Analytics API.

## Configure a custom API token

To configure a custom token, follow these steps:

1. Select **Get started** in the **Custom token** section of the **Create API Token** page:
![Clicking Get started in the Create API Token page](https://developers.cloudflare.com/_astro/create-api-token-get-started.BaVcSeWC_ZdfidW.webp) 

The **Create Custom Token** page displays:

![Create Custom Token page](https://developers.cloudflare.com/_astro/create-custom-api-token.CFX0TYIj_Z1Saoga.webp) 
2. Enter a descriptive name for your token in the **Token name** text input field.
3. To configure access to the GraphQL Analytics API, use the **Permissions** drop-down lists.
4. To set permissions for the GraphQL Analytics API, select _Account_ in the first drop-down list, _Account Analytics_ in the second, and _Read_ in the third.

This example scopes account-level permissions for read access to the Analytics API:

![Permissions configuration page](https://developers.cloudflare.com/_astro/create-custom-token-permissions.C95JIEHR_Z2t4MXb.webp) 
5. To configure the specific zones to which the token grants access, use the **Zone Resources** drop-down lists. In this example, the token is set to grant access to all zones:
![Resources configuration page](https://developers.cloudflare.com/_astro/create-custom-token-zone-resources.CfSpKkcP_2a7KPx.webp) 
6. To restrict the API token to specific IP addresses, use the **Client IP Address Filtering** controls.
![IP Address Filtering configuration page](https://developers.cloudflare.com/_astro/create-custom-token-ip-address-filtering.X4iaKSyi_Z2steW8.webp) 
7. To define how long the token is valid, select the **TTL** (time-to-live) start/end date.
![TTL configuration page](https://developers.cloudflare.com/_astro/create-custom-token-ttl.Bo81ViQe_11z701.webp) 
8. Select **Continue to summary**.

The next section of this walkthrough covers how to review and test your API token.

## Review and create your API token

Once you select **Continue to summary**, the **API Token Summary** page displays.

Use the **API Token Summary** to confirm that you have scoped the API Token to the desired permissions and resources before creating it.

![API Token Summary page](https://developers.cloudflare.com/_astro/api-token-summary.BcCShVRo_Z1LNqny.webp) 

Once you have validated your API token configuration, select **Create Token**.

## Copy and test your API token

When you create a new token, a confirmation page displays that includes your token and a custom `curl` command.

![Page displaying your API token and the curl command to test your token](https://developers.cloudflare.com/_astro/token-complete.T8mB8qZ5_2mc4EV.webp) 

To copy the token to your device's clipboard, select the **Copy** button.

Warning

The token displays only on this confirmation page, so copy it and store it securely; anyone who has the token can use it to access your data.

If you lose the token, you can [regenerate it from the API Tokens page](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/), so that you do not have to configure all the permissions again.

To test your token, copy the `curl` command and paste it into a terminal.
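The command shown on the confirmation page is typically of this shape, calling the token verification endpoint (`<API_TOKEN>` is a placeholder for the token you copied):

```
curl "https://api.cloudflare.com/client/v4/user/tokens/verify" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json"
```

A valid, active token returns a JSON response with `"success": true`.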

When you have finished, select **View all API tokens**.


---

---
title: Configure GraphQL client endpoint and HTTP headers
description: Now that you have configured authentication, you are ready to run queries using GraphiQL.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Configure GraphQL client endpoint and HTTP headers

1. Launch [GraphiQL ↗](https://www.gatsbyjs.com/docs/how-to/querying-data/running-queries-with-graphiql/).
2. Select **Edit HTTP Headers**.![Clicking Edit HTTP Headers](https://developers.cloudflare.com/_astro/GraphiQL-edit-http-headers.Cc0SaBrH_17rcJm.webp)The **Edit HTTP Headers** window appears.![Editing HTTP Headers Window](https://developers.cloudflare.com/_astro/GraphiQL-edit-http-headers-window.D6rNIUCL_Z1C89jf.webp)
3. Select **Add Header** to configure authentication. You can use Cloudflare Analytics API token authentication (recommended) or Cloudflare API key authentication.  
   * **Token authentication**:  
   Enter **Authorization** in the **Header Name** field, and enter `Bearer {your-analytics-token}` in the **Header value** field, then select **Save**.  
   ![Editing HTTP Headers](https://developers.cloudflare.com/_astro/GraphiQL-edit-http-headers-token.BRr3JTFE_2tTM7L.webp)  
   * **Key authentication**:  
   Enter `X-AUTH-EMAIL` in the **Header name** field and your email address registered with Cloudflare in the **Header value** field, and select **Save**.  
   Select **Add Header** to add a second header. Enter `X-AUTH-KEY` in the **Header Name** field, and paste your Global API Key in the **Header value** field, then select **Save**.
4. Select anywhere outside the **Edit HTTP Headers** window in GraphiQL to close it and return to the main GraphiQL display.
5. Enter `https://api.cloudflare.com/client/v4/graphql` in the **GraphQL Endpoint** field.![Editing GraphQL Endpoint](https://developers.cloudflare.com/_astro/GraphiQL-response-pane.jm8FGlXL_1dPBsE.webp)
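Whichever method you choose, the headers you entered amount to one of these two JSON objects (the values are placeholders for your own credentials):

```
{
  "Authorization": "Bearer <your-analytics-token>"
}
```

or, for key authentication:

```
{
  "X-AUTH-EMAIL": "<your-email>",
  "X-AUTH-KEY": "<your-global-api-key>"
}
```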

Note

The right-side response pane is empty when you enter your information correctly. An error displays when there are problems with your header credentials.

Now that you have configured authentication, you are ready to run queries using GraphiQL.


---

---
title: Compose a query in GraphiQL
description: Learn how to use a GraphiQL client to compose and execute a GraphQL query. This guide covers setting up a query, selecting the dataset, and configuring parameters and fields.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Compose a query in GraphiQL

Many users might need help with [the semantics](https://developers.cloudflare.com/analytics/graphql-api/getting-started/querying-basics/) of GraphQL and with exploring the possibilities of the Cloudflare GraphQL API.

This page details how to use a [GraphiQL client ↗](https://github.com/graphql/graphiql/tree/main/packages/graphiql#readme) to compose and execute a GraphQL query.

## Prerequisites

Refer to [Configure GraphQL client endpoint and HTTP headers](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/graphql-client-headers/) for details on how to configure a client.

## Set up a query and choose a dataset

Click on the editing pane of GraphiQL and add this base query, replacing `zone-id` with your Cloudflare zone ID:

![Adding a base query in the GraphiQL pane](https://developers.cloudflare.com/_astro/graphiql-base-query.fKm6YnqW_szGao.webp) 

Note

To find the zone's tag, log in to your Cloudflare account and select the site for which you want to obtain the tag. In the Cloudflare dashboard **Overview** page, scroll to the **API** section in the right sidebar, which displays your zone and account tags.

To assist query building, the GraphiQL client has word completion. Insert your cursor in the query, in this case on the line below `zones`, and start entering a value to engage the feature. For example, when you type `firewall`, a popup menu displays the datasets that return firewall information:

![GraphiQL word completion assistant to query building](https://developers.cloudflare.com/_astro/graphiql-word-completion.iSRM-VK6_1RMEOc.webp) 

The text at the bottom of the list displays a short description of the data that the node returns.

Select the dataset you want to query and insert it. Either select the item in the list, or scroll using arrow keys and press the `Return` key.

## Supply required parameters

Hover your mouse over a field to display a tooltip that describes the dataset. In this example, hovering over the `firewallEventsAdaptive` node displays this description:

![Hovering the mouse over a field to display its description](https://developers.cloudflare.com/_astro/graphiql-set-up-base-query.1fPWncy2_1umdqT.webp) 

To display information about the dataset, including required parameters, select the dataset name (blue text). The **Documentation Explorer** opens and displays details about the dataset:

![Documentation Explorer window displaying dataset details](https://developers.cloudflare.com/_astro/graphiql-parameters.CM7npJ7C_hXm0h.webp) 

Note that the `filter` and `limit` arguments are required, as indicated by the exclamation mark (`!`) after their type definitions (gold text). In this example, the `orderBy` argument is not required, though when used it requires a value of type `ZoneFirewallEventsAdaptiveOrderBy`.

To browse a list of supported filter fields, select the filter type definition (gold text) in the Documentation Explorer. In this example, the type is `ZoneFirewallEventsAdaptiveFilter_InputObject`:

![Browsing GraphiQL filter fields](https://developers.cloudflare.com/_astro/graphiql-filter-fields.DeLcvFBV_1VYBuR.webp) 

This example query shows the required `filter` and `limit` arguments for `firewallEventsAdaptive` (as for the rest of the GraphQL nodes):

![Example of GraphiQL query arguments](https://developers.cloudflare.com/_astro/graphiql-filter-values.vYQN7N4B_ZbHnhq.webp) 
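As a textual sketch of what the screenshot shows, a query supplying the required arguments might look like this (the date range and limit are illustrative values, not taken from the screenshot):

```
firewallEventsAdaptive(
    filter: {datetime_geq: "2022-07-24T11:00:00Z", datetime_leq: "2022-07-24T12:00:00Z"}
    limit: 10
) {
    datetime
}
```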

## Define the fields used by your query

To browse the fields you can use with your query, hover your cursor over the dataset name in your query, and in the tooltip that displays, select the data type definition (gold text):

![Hovering the mouse over a dataset to display available fields](https://developers.cloudflare.com/_astro/graphiql-set-up-base-query.1fPWncy2_1umdqT.webp) 

The **Documentation Explorer** opens and displays a list of fields:

![Documentation Explorer window displaying list of fields](https://developers.cloudflare.com/_astro/graphiql-return-fields.DaJ56iiT_4Cp7G.webp) 

To add the data fields that you want to read, type an opening brace (`{`) after the closing parenthesis for the parameters, then start typing the name of a field that you want to fetch. Use word completion to choose a field.

This example query returns the `action`, `datetime`, `clientRequestHTTPHost`, and `userAgent` fields:

![Example query with return fields](https://developers.cloudflare.com/_astro/graphiql-query-return-field-values.D6RsP235_1xgidr.webp) 
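In text form, a query selecting those fields might look like this sketch (the filter values are illustrative assumptions):

```
firewallEventsAdaptive(
    filter: {datetime_geq: "2022-07-24T11:00:00Z", datetime_leq: "2022-07-24T12:00:00Z"}
    limit: 10
) {
    action
    datetime
    clientRequestHTTPHost
    userAgent
}
```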

Once you have entered all the fields you want to query, select the **Play** button to submit the query. The response pane will contain the data fetched from the configured GraphQL API endpoint:

![GraphiQL response pane](https://developers.cloudflare.com/_astro/create-query-fw-data-set-play.dQ7w2sGu_uUF6.webp) 

## Variable substitution

The GraphiQL client allows you to use placeholders for values and supply them via the `variables` part of the payload.

Placeholder names must start with a `$` character, and you do not need to wrap placeholders in quotes when you use them in the query.

Provide values for placeholders in JSON format, where placeholders are addressed without the `$` character. For example, for a placeholder `$zoneTag`, the GraphQL API reads the value from the `zoneTag` field of the supplied variables object.

To supply a value for a placeholder, select the **Query Variables** pane and edit a JSON object that defines your variables.

This example query uses the `zoneTag` query variable to represent the zone ID:

![Example of GraphiQL query variables](https://developers.cloudflare.com/_astro/graphiql-query-variables.D9uAtvLs_1bnPs.webp) 
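A sketch of the pattern: the query references `$zoneTag`, and the **Query Variables** pane supplies its value (the zone tag shown is a placeholder):

```
# Query pane (uses the $zoneTag placeholder, unquoted):
{
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      zoneTag
    }
  }
}

# Query Variables pane (JSON, field name without the $ prefix):
{ "zoneTag": "<your-zone-tag>" }
```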


---

---
title: Execute a GraphQL query with curl
description: Using a plain curl to send a query provides the ability to slice-n-dice with the
results and apply post-processing if needed. For example, converting
results received from GraphQL API into a CSV format.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Execute a GraphQL query with curl

Using plain curl to send a query lets you slice and dice the results and apply post-processing if needed, for example, converting results received from the GraphQL API into CSV format.

For more functionality, such as auto-completion and schema exploration, consider a GraphQL [client](https://developers.cloudflare.com/analytics/graphql-api/getting-started/compose-graphql-query/).

The GraphQL API expects JSON with two essential fields: `query` and `variables`.

The query should be stripped of newline characters and sent as a single-line string, while `variables` is an object that supplies values for all placeholders used in the query:

A payload structure for GraphQL API

```
{
  "query": "{viewer { ... }}",
  "variables": {}
}
```

It is still possible to write a human-friendly, multi-line query. The example below pipes `echo` through `tr` to produce a proper single-line payload for `curl`:

Example bash script that uses curl to query Analytics API

```
echo '{ "query":
  "{
    viewer {
      zones(filter: { zoneTag: $zoneTag }) {
        firewallEventsAdaptive(
          filter: $filter
          limit: 10
          orderBy: [datetime_DESC]
        ) {
          action
          clientAsn
          clientCountryName
          clientIP
          clientRequestPath
          clientRequestQuery
          datetime
          source
          userAgent
        }
      }
    }
  }",
  "variables": {
    "zoneTag": "<zone-tag>",
    "filter": {
      "datetime_geq": "2022-07-24T11:00:00Z",
      "datetime_leq": "2022-07-24T12:00:00Z"
    }
  }
}' | tr -d '\n' | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Content-Type: application/json" \
--data @-
```
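As a sketch of the post-processing mentioned above, `jq` can flatten the JSON response into CSV. The sample response below is a hypothetical stand-in for real API output; in practice you would pipe the `curl` output from the script above into `jq` instead of `echo`:

```sh
# Hypothetical sample shaped like a GraphQL API response; a real run
# would pipe the curl output into jq instead of this echo.
echo '{"data":{"viewer":{"zones":[{"firewallEventsAdaptive":[
  {"datetime":"2022-07-24T11:05:00Z","action":"block","clientIP":"203.0.113.1"},
  {"datetime":"2022-07-24T11:04:00Z","action":"log","clientIP":"198.51.100.7"}]}]}}}' \
  | jq -r '.data.viewer.zones[0].firewallEventsAdaptive[] | [.datetime, .action, .clientIP] | @csv'
# "2022-07-24T11:05:00Z","block","203.0.113.1"
# "2022-07-24T11:04:00Z","log","198.51.100.7"
```

The `@csv` filter quotes string fields, and `-r` emits raw lines rather than JSON-encoded strings.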


---

---
title: Explore the GraphQL schema
description: Many GraphQL clients support browsing the GraphQL schema by taking care of
introspection. In this page, we will cover GraphiQL and Altair clients.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Explore the GraphQL schema

Many GraphQL clients support browsing the GraphQL schema by taking care of [introspection](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/introspection/). This page covers the GraphiQL and Altair clients.

[GraphiQL ↗](https://github.com/graphql/graphiql/tree/main/packages/graphiql#readme) and [Altair ↗](https://altairgraphql.dev/#download) are open-source GraphQL clients that let you compose a query, execute it, and inspect the results. As a bonus, they also allow you to browse the GraphQL schema.

## Prerequisites

Before you begin, do not forget to [configure](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/graphql-client-headers/) the API endpoint and HTTP headers.

The screenshots below are taken from GraphiQL. However, Altair provides the same functionality, and you can follow the same instructions to explore the schema.

## Open the Documentation Explorer

To open the GraphiQL Documentation Explorer, select the **Docs** link in the header of the response pane:

![Clicking GraphiQL Docs link to open Documentation Explorer](https://developers.cloudflare.com/_astro/graphiql-docs-link.EkyLJzjS_Z1Sek3o.webp) 

The **Documentation Explorer** opens and displays a list of available objects:

![GraphiQL Doc Explorer pane](https://developers.cloudflare.com/_astro/graphiql-doc-explorer.Bd9kpJrN_2n3xdk.webp) 

Objects in the **Documentation Explorer** use this syntax:

```
object-name: object-type-definition
```

## Find the type definition of an object

When you first open the **Documentation Explorer** pane, the `mutation` and `query` root types display:

![Documentation Explorer displaying mutation and query nodes](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-query-mutations.BbRcxejs_Z25FbTt.webp) 

In this example, `query` is the name of a root, and `Query` is the type definition.

## Find the fields available for a type definition

Click on the **type definition** of a node to view the fields that it provides. The **Documentation Explorer** also displays descriptions of the nodes.

For example, select the **Query** type definition. The **Documentation Explorer** displays the fields that `Query` provides. In this example, the fields are `cost` and `viewer`:

![Documentation Explorer displaying cost and viewer fields](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-view-cost.CT9nC44o_1eB1dc.webp) 

To explore the schema, select the names of objects and definitions. You can also use the search input (magnifying glass icon) and breadcrumb links in the header.

## Find the arguments associated with a field

Click the type definition of the `viewer` field (gold text) to list its sub-fields. The `viewer` field provides sub-fields that allow you to query `accounts` or `zones` data:

![Displaying viewer fields](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-viewer-fields.BKFriIIB_1z6Vyc.webp) 

The `accounts` and `zones` nodes take arguments to specify which dataset to query.

For example, `zones` can take a filter of `ZoneFilter_InputObject` type as an argument. To view the fields available to filter, select **ZoneFilter\_InputObject**.

## Find the datasets available for a zone

To view a list of the datasets available to query, select the **zone** type definition (gold text):

![Clicking zone type definition](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-zones.DMRVzjxA_Z8PoXc.webp) 

A list of datasets displays in the **Fields** section, each with a list of valid arguments and a brief description. Arguments that end with an exclamation mark (`!`) are required.

![Fields section displaying datasets available](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-zone-fields.OMeSzfCd_Zz7H9C.webp) 

Use the search input (magnifying glass icon) to find specific datasets:

![Searching a dataset in the Documentation Explorer](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-find-firewall.CkSNHI_E_Z1tDCNv.webp) 

To view a dataset, select its name.

The definition for the dataset displays. This example shows the `firewallEventsAdaptive` dataset:

![Example of a dataset definition](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-firewallevents-definition.CsFujHwT_1aT5lQ.webp) 

## Find the fields available for a dataset

To view the fields available for a particular dataset, select its type definition (gold text).

For example, select the **ZoneFirewallEventsAdaptive** type definition to view the fields available for the `firewallEventsAdaptive` dataset:

![Clicking type definition to visualize fields available for a dataset](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-firewall-type-definition.CKad-SDm_RyQzy.webp) 

The list of fields displays:

![Displaying available fields for a dataset](https://developers.cloudflare.com/_astro/graphiql-doc-explorer-firewall-fields.K45OyD1Z_Zj4g9g.webp) 

For more information on using GraphiQL, refer to this [guide](https://developers.cloudflare.com/analytics/graphql-api/getting-started/compose-graphql-query/).


---

---
title: Querying basics
description: Learn the basics of querying with Cloudflare's GraphQL API. Understand query structure, schema, and how to fetch data using GraphQL queries.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying basics

## Structure of a GraphQL query

GraphQL structures data as a graph. GraphQL uses a schema to define the objects and their hierarchy in your data graph. You can explore the edges of the graph by using queries to get the needed data. These queries must respect the structure of the schema.

A **node**, followed by its **fields**, is at the core of a GraphQL query. A node is an object of a specific **type**; the type specifies the fields that make up the object.

A field can be another node, in which case the query would contain nested elements. Some nodes behave like functions: they take arguments that limit the scope of what they act on. You can apply filters at each node.

## Cloudflare GraphQL schema

A typical query against the Cloudflare GraphQL schema is made up of four main components:

* `viewer` \- the root node.
* `zones` or `accounts` \- the scope of the query, that is, the zone(s) or account you want to query. The `viewer` can contain `zones` nodes, `accounts` nodes, or both.
* **data node** or **dataset** \- the data you want to query. `zones` or `accounts` may contain one or more datasets. To find out more about discovering nodes, refer to [introspection](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/introspection/).
* **fieldset** \- a set of fields or nested fields of the **dataset**.

Queries to the Cloudflare GraphQL API must be sent as HTTP POST requests with a JSON payload that consists of these fields:

```json
{
  "query": "",
  "variables": {}
}
```

In this structure, the `query` field must contain the GraphQL query formatted as a **single-line** string (all newline characters stripped or escaped), while `variables` is an object that provides values for the placeholders used in the query.
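A quick way to satisfy the single-line requirement is to let a JSON serializer do the escaping. For example, in Python (a minimal sketch; the query text is illustrative), `json.dumps` escapes the newlines in a multi-line query string automatically:

```python
import json

# A multi-line GraphQL query, exactly as you would write it in an editor.
query = """query {
  viewer {
    zones(filter: { zoneTag: "<zone-tag>" }) {
      firewallEventsAdaptive(limit: 2) { action datetime }
    }
  }
}"""

# json.dumps escapes the newlines inside the string, so the payload is valid
# JSON with the whole query held in a single logical string value.
payload = json.dumps({"query": query, "variables": {"zoneTag": "<zone-tag>"}})

# The serialized payload contains no raw newline characters.
assert "\n" not in payload
```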

## A single dataset example

In the following example, the GraphQL query fetches the `datetime`, `action`, and client request HTTP host (aliased as `host`) of two WAF events from the zone-scoped `firewallEventsAdaptive` dataset.

A GraphQL query

```graphql
query ASingleDatasetExample($zoneTag: string, $start: Time, $end: Time) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      firewallEventsAdaptive(
        filter: { datetime_gt: $start, datetime_lt: $end }
        limit: 2
        orderBy: [datetime_DESC]
      ) {
        action
        datetime
        host: clientRequestHTTPHost
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAggZQJYDsDmAbMARAhgF1wGcx8BRAD1wFsAHLACgBIAvAexTABVc0AuGEXwRUaADQwmQ3BHwCuSamAlMwKACbzFYAJQwA3gCgYMAG5IwAd0gHjJmO05EGAMyQZ8kAfocduvAVY-HjQYAF89I3t7NwgrXAwMMlM1fCI4dVxafCQUhjtokzcPLwMYTM8cpQB9NDlJaVkJCtJtao9AtXVwgsKMRSR6gCZe6LYIdUgAISgBAG0WqrBq7DIEAGEAXVGYSJ2TXABjHI598oJWpTOACzYhAUP+1IAlMFAwIQAJLi4ABU+7vgdmFeiCTCCwkA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41G3EYKIGFAAmzdl14C2wtpwlSZcgL5A)

The query above uses the variable placeholders `$zoneTag`, `$start`, and `$end`. Provide values for these placeholders alongside the query by placing them in the `variables` field of the payload. Note that the examples below use the UTC timezone, indicated by the letter `Z`.

A set of variables

```json
{
  "zoneTag": "<zone-tag>",
  "start": "2020-08-03T02:07:05Z",
  "end": "2020-08-03T17:07:05Z"
}
```

There are multiple ways to send your query to the Cloudflare GraphQL API: use your favorite GraphQL client, or send a request from the command line with curl. We have a [how-to guide](https://developers.cloudflare.com/analytics/graphql-api/getting-started/compose-graphql-query/) about using the GraphiQL client; also check the guide on [how to execute a query with curl](https://developers.cloudflare.com/analytics/graphql-api/getting-started/execute-graphql-query/).
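As another option, here is a short Python sketch using only the standard library that builds the same POST request programmatically (`<API_TOKEN>` and the query are placeholders; sending the request is left commented out):

```python
import json
import urllib.request

API_URL = "https://api.cloudflare.com/client/v4/graphql"

def build_request(token: str, query: str, variables: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request for the GraphQL Analytics API."""
    payload = json.dumps({"query": query, "variables": variables}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("<API_TOKEN>", "query { viewer { __typename } }", {})
# To execute: urllib.request.urlopen(req), then json.load() the response body.
```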

A sample of a response for a query above

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "firewallEventsAdaptive": [
            {
              "action": "log",
              "host": "cloudflare.guru",
              "datetime": "2020-08-03T17:07:03Z"
            },
            {
              "action": "log",
              "host": "cloudflare.guru",
              "datetime": "2020-08-03T17:07:01Z"
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```

## Query multiple datasets in a single GraphQL API request

As previously mentioned, a query may contain one or more nodes (datasets). The API extracts data for all datasets simultaneously, but the response is delayed until every dataset query has returned its results. If any of them fails during execution, the entire query is terminated and an error is returned.

A sample query for two datasets in one request

```graphql
query MultipleDatasetsExample(
  $zoneTag: string
  $start: Time
  $end: Time
  $ts: Date
) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      last10Events: firewallEventsAdaptive(
        filter: { datetime_gt: $start, datetime_lt: $end }
        limit: 10
        orderBy: [datetime_DESC]
      ) {
        action
        datetime
        host: clientRequestHTTPHost
      }
      top3DeviceTypes: httpRequestsAdaptiveGroups(
        filter: { date: $ts }
        limit: 10
        orderBy: [count_DESC]
      ) {
        count
        dimensions {
          device: clientDeviceType
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAsiANgFwJYAdFgCIENm4DOYyhAogB64C2mYAFAFAwwAkAXgPYB2YAKrgDmALhiFkEVN0HM243BGSi+qamFmsw3ACbLV6lq1Ki8ydQEoYAb1kA3VGADuka7JZdehegDNUKSKJWMB78QqIcPKGCMAC+ljYsiTCIRMgAjAAMZLZaxjC+EE64iIjZuYQAgtq46Gg5TElJvv4QgTDVZmhqAPqCSnIEigA07fgk+t0o4VrasW6NiKqo-ZnzSZwQ2pAAQlCiANod4z3YZADKAMIAumsw8bcsuADGaDwPo5367wAWnOKiT0WuQASmBQGBxAAJPh8AAKkL+yFuMVuyE46AAzNgwPYnvwoOgIaJvshkOhQeDxJVqrVUDkAOIQTggdBed7NMytawfMDhUhzRqJRbUZaiVaClgbLYQXYHJ7M7jIbqnS43QX3CUweUgRXvbT6biEVA8QiuTUsLa43laoGK7FWvgEgwSlGC12Jd0omJAA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41G3EYKIGFAAmzdl14C2wtpwlSZcyjSqqOPfkJABfIA)

A set of variables for the query above

```json
{
  "zoneTag": "<zone-tag>",
  "start": "2022-10-02T00:26:49Z",
  "end": "2022-10-04T14:26:49Z",
  "ts": "2022-10-04"
}
```

A sample response for the query with variables above

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "last10Events": [
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-04T08:41:09Z"
            },
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-04T08:41:09Z"
            },
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-04T01:09:36Z"
            },
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-03T14:26:49Z"
            },
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-03T14:26:46Z"
            },
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-02T23:51:26Z"
            },
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-02T23:39:41Z"
            },
            {
              "action": "block",
              "host": "cloudflare.guru",
              "datetime": "2022-10-02T23:39:41Z"
            }
          ],
          "top3DeviceTypes": [
            {
              "count": 4580,
              "dimensions": {
                "device": "desktop"
              }
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```
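Because a single failing dataset terminates the whole query, it is worth checking the `errors` field before reading `data`. A minimal Python sketch of that check (the function name is illustrative):

```python
import json

def unwrap(response_text: str) -> dict:
    """Return the `data` portion of a GraphQL response, or raise on errors."""
    body = json.loads(response_text)
    if body.get("errors"):  # null or missing means success
        raise RuntimeError(f"GraphQL query failed: {body['errors']}")
    return body["data"]

# Abbreviated version of a successful response.
sample = '{"data": {"viewer": {"zones": []}}, "errors": null}'
assert unwrap(sample) == {"viewer": {"zones": []}}
```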

## Helpful Resources

Here are some helpful articles about working with the Cloudflare Analytics API and GraphQL.

### Cloudflare specific

* [How to find your zoneTag using the API](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/)

### General info on the GraphQL framework

* [How to use GraphQL (tutorials) ↗](https://www.howtographql.com/)
* [Thinking in Graphs ↗](https://graphql.org/learn/thinking-in-graphs/)
* [What data you can query in the GraphQL type system (schemas) ↗](https://graphql.org/learn/schema/)
* [How to pass variables in GraphiQL (Medium article with quick tips) ↗](https://medium.com/graphql-mastery/graphql-quick-tip-how-to-pass-variables-into-a-mutation-in-graphiql-23ecff4add57)


---

---
title: Limits
description: Cloudflare GraphQL API exposes more than 70 datasets representing products with different configurations and data availability for different zone and account plans.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Limits

The Cloudflare GraphQL API exposes more than 70 datasets, representing products with different configurations and data availability for different zone and account plans.

To support this variety of products, Cloudflare GraphQL API has three layers of limits:

* global limits
* user limits
* node (dataset) limits

## Global limits

These limits are applied to every query for every plan:

* A zone-scoped query can include up to **10 zones**
* An account-scoped query can include only **1 account**

Additionally, there is a limit on the number of queries you can make per request. The total number of queries in a request is equal to the number of zone/account scopes multiplied by the number of nodes to which they are applied.
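For example, the multiplication works out as follows (a hypothetical zone-scoped request at the 10-zone maximum, querying two datasets):

```python
# Total queries in a request = number of zone/account scopes x number of nodes.
zones = 10    # maximum zones allowed in a zone-scoped query
datasets = 2  # e.g. firewallEventsAdaptive and httpRequestsAdaptiveGroups
total_queries = zones * datasets
assert total_queries == 20
```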

## User limits

Cloudflare GraphQL API limits the number of GraphQL requests each user can send. The default quota is **300 GraphQL queries per 5-minute window**. This allows a user to run an average of **1 query per second**, or to send a burst of 300 queries and then wait 5 minutes before issuing more.

That rate limit is applied in addition to the [general rate limits enforced by the Cloudflare API](https://developers.cloudflare.com/fundamentals/api/reference/limits/).
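If your client issues many queries, you may want to throttle on your side to stay under the quota. Below is a sliding-window limiter sketch in Python (illustrative only, not an official client; the class name is made up):

```python
import time
from collections import deque
from typing import Optional

class GraphQLRateLimiter:
    """Client-side sliding-window limiter for the default 300 queries / 5 min quota."""

    def __init__(self, max_queries: int = 300, window_seconds: float = 300.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.sent = deque()  # timestamps of recent queries

    def acquire(self, now: Optional[float] = None) -> float:
        """Record one query; return seconds to wait first (0 if under quota)."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have left the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        wait = 0.0
        if len(self.sent) >= self.max_queries:
            # Wait until the oldest query in the window expires.
            wait = self.window - (now - self.sent[0])
        self.sent.append(now + wait)
        return wait

limiter = GraphQLRateLimiter()
# The first 300 queries in a window proceed immediately; the 301st must wait.
```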

## Node limits and availability

Each data node has its own limits, such as:

* how far back in time data can be requested,
* the maximum time period (in seconds) that can be requested in one query,
* the maximum number of fields that can be requested in one query,
* the maximum number of records that can be returned in one query.

Node limits are tied to requested `zoneTag` or `accountTag`. Higher plans have access to a greater selection of datasets or fields, and can query over broader historical intervals.

To get exact boundaries and availability for your zone(s) or account, please refer to [settings](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/settings/).


---

---
title: MCP server
image: https://developers.cloudflare.com/core-services-preview.png
---


# MCP server


---

---
title: Migration guides
description: If you are currently using the deprecated httpRequests1mByColoGroups or httpRequests1dByColoGroups GraphQL API nodes, the HTTP Requests by Colo Groups to HTTP Requests by Adaptive Groups guide will help you migrate your queries to use the httpRequestsAdaptiveGroups node.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Migration guides

## GraphQL migrations

If you are currently using the deprecated `httpRequests1mByColoGroups` or `httpRequests1dByColoGroups` GraphQL API nodes, the [HTTP Requests by Colo Groups to HTTP Requests by Adaptive Groups](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/graphql-api-analytics/) guide will help you migrate your queries to use the `httpRequestsAdaptiveGroups` node.

## Zone Analytics migrations

If you are currently using the Zone Analytics API, the following guide will help you migrate your queries to the new GraphQL Analytics API:

* [Zone Analytics to GraphQL Analytics](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/zone-analytics/)
* [Zone Analytics Colos Endpoint to GraphQL Analytics](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/zone-analytics-colos/)

## Network Analytics migrations

If you are currently using the Network Analytics v1 (NAv1) GraphQL nodes, the [Network Analytics v1 to Network Analytics v2](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/) guide will help you migrate your queries to the new Network Analytics v2.


---

---
title: HTTP Requests by Colo Groups to HTTP Requests by Adaptive Groups
description: This guide shares considerations when migrating from the deprecated httpRequests1mByColoGroups and httpRequests1dByColoGroups GraphQL API nodes to the httpRequestsAdaptiveGroups GraphQL API node.
image: https://developers.cloudflare.com/core-services-preview.png
---


# HTTP Requests by Colo Groups to HTTP Requests by Adaptive Groups

This guide shares considerations when migrating from the deprecated `httpRequests1mByColoGroups` and `httpRequests1dByColoGroups` GraphQL API nodes to the `httpRequestsAdaptiveGroups` GraphQL API node.

For example, if you wanted to see which five data centers had the highest number of requests, the total number of those requests, and the total amount of data transfer, in the past you used the `httpRequests1mByColoGroups` GraphQL API node as in the following example:

```graphql
{
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      series: httpRequests1mByColoGroups(
        limit: 5
        orderBy: [sum_requests_DESC]
        filter: { datetime_geq: $start, datetime_lt: $end }
      ) {
        sum {
          requests
          bytes
        }
        dimensions {
          coloCode
        }
      }
    }
  }
}
```

Response

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "series": [
            {
              "dimensions": {
                "coloCode": "LHR"
              },
              "sum": {
                "bytes": 18260055,
                "requests": 4404
              }
            },
            {
              "dimensions": {
                "coloCode": "AMS"
              },
              "sum": {
                "bytes": 17563009,
                "requests": 4302
              }
            },
            {
              "dimensions": {
                "coloCode": "CDG"
              },
              "sum": {
                "bytes": 17200434,
                "requests": 4032
              }
            },
            {
              "dimensions": {
                "coloCode": "PTY"
              },
              "sum": {
                "bytes": 10400209,
                "requests": 2707
              }
            },
            {
              "dimensions": {
                "coloCode": "JIB"
              },
              "sum": {
                "bytes": 9040105,
                "requests": 2601
              }
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```

## `httpRequestsAdaptiveGroups` GraphQL API node

With the deprecation of the `httpRequests1mByColoGroups` and `httpRequests1dByColoGroups` GraphQL API nodes, use the `httpRequestsAdaptiveGroups` GraphQL API node to access the same data (`count`, `sum(edgeResponseBytes)`, and `visits`).

**Request**

```graphql
query MigrationSample($zoneTag: string, $start: Time, $end: Time) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      series: httpRequestsAdaptiveGroups(
        limit: 5
        orderBy: [count_DESC]
        filter: {
          datetime_geq: $start
          datetime_lt: $end
          requestSource: "eyeball"
        }
      ) {
        count
        avg {
          sampleInterval
        }
        sum {
          visits
          edgeResponseBytes
        }
        dimensions {
          coloCode
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAsgSwOYQIYBcEHsB2BlVAWwAcAbMACgBIAvXMAFVSQC4YBndCBHJAGhhVOqCOjYMEhMAKpgcAE3GSwAShgBvAFAwYANwRgA7pA3adMOjjDsKAMwSl0kNuov0mrQZcbMYAXzUtc3N2SAN2NgALdHRiACUwUGt0dgBBeVRiTF0wAHEILBBiGzNgnVJJBDEYAFZSsqwIeUgAISg2AG0AY0KcdAB9ABEAUTwAYQBdeuD7R2dTMrKMp0wpfqREtiF0EXRppYwwVbB+xy25eX3giETwTjxCiC6wNgAiMCgwACNUUlJXq7+faBQE9EB9QGoXRIBaLEJEMhgACSfUgul+gL8gPYIEIsLhegQ7Cq7EBOjA8g2CXYxFwoTaTlJBKxBPkyhwxLp+LhPVIWDGWGamP2LPMoqxfiAA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41G3EYKIGFAAmzdl14C2wtpwlSZcgL5A)

Response

```json
{
  "data": {
    "viewer": {
      "zones": [
        {
          "series": [
            {
              "avg": {
                "sampleInterval": 10
              },
              "count": 4350,
              "dimensions": {
                "coloCode": "LHR"
              },
              "sum": {
                "edgeResponseBytes": 17860000,
                "visits": 4120
              }
            },
            {
              "avg": {
                "sampleInterval": 10
              },
              "count": 4210,
              "dimensions": {
                "coloCode": "AMS"
              },
              "sum": {
                "edgeResponseBytes": 17110000,
                "visits": 3910
              }
            },
            {
              "avg": {
                "sampleInterval": 10
              },
              "count": 3890,
              "dimensions": {
                "coloCode": "CDG"
              },
              "sum": {
                "edgeResponseBytes": 17050000,
                "visits": 3700
              }
            },
            {
              "avg": {
                "sampleInterval": 10
              },
              "count": 2550,
              "dimensions": {
                "coloCode": "PTY"
              },
              "sum": {
                "edgeResponseBytes": 10286000,
                "visits": 2130
              }
            },
            {
              "avg": {
                "sampleInterval": 10
              },
              "count": 2410,
              "dimensions": {
                "coloCode": "JIB"
              },
              "sum": {
                "edgeResponseBytes": 9029000,
                "visits": 2080
              }
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```

This query says:

* Given the indicated `zones`, `limit`, and `time range`.
* Fetch the total number of requests (as `count`), the total amount of data transfer (as `edgeResponseBytes` of `sum` object), and the total number of `visits` per data center.

A few points to note:

* Adding the `requestSource` filter for `eyeball` returns request, data transfer, and visit data about only the end users of your website.
* Instead of `requests`, the `httpRequestsAdaptiveGroups` node reports `count`, which indicates the number of requests per data center.
* To measure data transfer, use `sum(edgeResponseBytes)`. Note that in the old API this was called `bandwidth` even though it actually measured data transfer.
* Unique visitors per data center is not supported in `httpRequestsAdaptiveGroups`, but the `httpRequestsAdaptiveGroups` API does support `visits`. A visit is defined as a page view that originated from a different website or a direct link; Cloudflare counts a visit when the HTTP referer does not match the hostname. One visit can consist of multiple page views.
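Because adaptive datasets are sampled, grouped results are estimates. One way to approximate an unsampled total, assuming the sampling can be reversed by the reported interval (treat this as an approximation, not an official formula), is to multiply `count` by the average `sampleInterval`:

```python
# Estimate of the unsampled total from an adaptive (sampled) dataset row.
# Assumption: multiplying `count` by avg(sampleInterval) reverses the
# sampling, which yields an estimate -- adaptive data is sampled by design.
row = {"count": 4350, "avg": {"sampleInterval": 10}}  # LHR row from the sample response
estimated_requests = row["count"] * row["avg"]["sampleInterval"]
assert estimated_requests == 43500
```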


---

---
title: Network Analytics v1 to Network Analytics v2
description: In early 2020, Cloudflare released the first version of the Network Analytics dashboard and its corresponding API. The second version (Network Analytics v2) was made available on 2021-09-13.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Network Analytics v1 to Network Analytics v2

In early 2020, Cloudflare released the first version of the Network Analytics dashboard and its corresponding API. The second version (Network Analytics v2) was made available on 2021-09-13.

Warning

**Network Analytics v1 (NAv1) is now deprecated.** For more information on Network Analytics v2 (NAv2), refer to [Cloudflare Network Analytics](https://developers.cloudflare.com/analytics/network-analytics/).

## Before you start

Learn more about the [concepts introduced in Network Analytics v2](https://developers.cloudflare.com/analytics/network-analytics/understand/concepts/).

## Feature comparison

The following table compares the features of NAv1 and NAv2:

| Feature                          | NAv1                                                                                          | NAv2                                                                               |
| -------------------------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| Sampling rate                    | 1/8,192 packets                                                                               | Varies between 1/100 and 1/1,000,000 packets, depending on the mitigation service. |
| Sampling method                  | Core Sample Enrichment                                                                        | Edge Sample Enrichment                                                             |
| Historical data retention method | Aggregated roll-ups                                                                           | Adaptive Bit Rate                                                                  |
| Retention period                 | 1-min roll-ups: 30 days; 1-hour roll-ups: 6 months; 1-day roll-ups: 1 year; attack roll-ups: 1 year | All nodes: 16 weeks                                                                |
| Attack mitigation systems        | dosd                                                                                          | dosd, flowtrackd\*, and Cloudflare Network Firewall\*                              |
| Examples of new fields           | n/a                                                                                           | Rule ID, GRE tunnel ID, Packet size                                                |

\* _Applicable only for Magic Transit customers._

For more information on the differences in terms of sampling method and historical data retention, refer to [Main differences between Network Analytics v1 and v2](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/differences/).

**Note**

The `attackId` field value may be different between NAv1 and NAv2 for the same attack.

## Node comparison

NAv2 uses the same API endpoint but introduces new nodes. While NAv1 has three nodes with aggregated roll-ups for all traffic and attacks, plus one node for attacks, NAv2 has one node for all traffic and attacks, and four separate attack nodes that vary based on the mitigation system.

| Node type      | NAv1                                          | NAv2 for Magic Transit                                                                                                                            | NAv2 for Spectrum                                            |
| -------------- | --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------ |
| Main node(s)   | ipFlows1mGroups, ipFlows1hGroups, ipFlows1dGroups | magicTransitNetworkAnalyticsAdaptiveGroups                                                                                                        | spectrumNetworkAnalyticsAdaptiveGroups                       |
| Attack node(s) | ipFlows1mAttacksGroups                        | dosdNetworkAnalyticsAdaptiveGroups, dosdAttackAnalyticsGroups, flowtrackdNetworkAnalyticsAdaptiveGroups, magicFirewallNetworkAnalyticsAdaptiveGroups | dosdNetworkAnalyticsAdaptiveGroups, dosdAttackAnalyticsGroups |

Each row represents one packet sample. The data is sampled at Cloudflare’s edge at [various rates](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/node-reference/). You can also query the sample rate from the nodes using the `sample_interval` field.
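Because each row is a packet sample, raw row counts understate real traffic; multiplying each row by its sample interval recovers an estimate of the actual volume. A minimal sketch in Python (the `estimate_totals` helper and the row dictionaries are illustrative, not part of the API; `sample_interval` and `ipTotalLength` are fields discussed in this guide):

```python
def estimate_totals(rows):
    """Estimate actual packet and byte counts from sampled rows.

    Each row is one packet sample; multiplying by its sample
    interval (e.g. 10000 for a 1/10,000 rate) approximates the
    real volume the sample represents.
    """
    packets = sum(r["sample_interval"] for r in rows)
    total_bytes = sum(r["sample_interval"] * r["ipTotalLength"] for r in rows)
    return packets, total_bytes

# Two samples taken at a 1/10,000 rate, 100 bytes each:
rows = [
    {"sample_interval": 10000, "ipTotalLength": 100},
    {"sample_interval": 10000, "ipTotalLength": 100},
]
print(estimate_totals(rows))  # → (20000, 2000000)
```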

For reference information on NAv2 nodes, refer to the [NAv2 node reference](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/node-reference/).

**Obtaining data for ingress traffic only**

All the NAv2 `*AnalyticsAdaptiveGroups` nodes include data for ingress and egress traffic. To obtain data about ingress traffic only, include `direction: "ingress"` in your [GraphQL query filter](https://developers.cloudflare.com/analytics/graphql-api/features/filtering/).
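In a client script, the filter object passed in your query variables might be assembled like this (the `ingress_filter` helper is a hypothetical convenience; `direction`, `datetime_geq`, and `datetime_lt` are documented filter fields):

```python
import json

def ingress_filter(start: str, end: str) -> dict:
    """Build a GraphQL filter object limited to ingress traffic.

    `direction: "ingress"` excludes egress rows from the
    *AnalyticsAdaptiveGroups nodes.
    """
    return {"direction": "ingress", "datetime_geq": start, "datetime_lt": end}

# Variables payload for a GraphQL request (shape is illustrative):
payload = json.dumps({"variables": {"filter": ingress_filter(
    "2021-10-01T00:00:00Z", "2021-10-05T00:00:00Z")}})
```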

## Schema comparison

Refer to [NAv1 to NAv2 schema map](https://developers.cloudflare.com/analytics/graphql-api/migration-guides/network-analytics-v2/schema-map/) for a mapping of schema fields from NAv1 nodes to NAv2 nodes. Follow this recommended mapping when migrating to NAv2.

## Example

The following example queries the top 20 logs of traffic dropped by mitigation systems other than Cloudflare Network Firewall within a given time range, ordered by destination IP address.

```graphql
{
  viewer {
    accounts(filter: { accountTag: "<REDACTED>" }) {
      magicTransitNetworkAnalyticsAdaptiveGroups(
        filter: {
          datetime_gt: "2021-10-01T00:00:00Z"
          datetime_lt: "2021-10-05T00:00:00Z"
          outcome_like: "drop"
          mitigationSystem_neq: "magic-firewall"
        }
        limit: 20
        orderBy: [ipDestinationAddress_ASC]
      ) {
        dimensions {
          outcome
          mitigationSystem
          ipSourceAddress
          ipDestinationAddress
          ipProtocol
          destinationPort
        }
      }
    }
  }
}
```

## Final remarks

The `mitigationSystem` field can take one of the following values:

* `dosd` for [DDoS managed rulesets](https://developers.cloudflare.com/ddos-protection/managed-rulesets/) (Network-layer DDoS Attack Protection or HTTP DDoS Attack Protection).
* `flowtrackd` for [Advanced TCP Protection](https://developers.cloudflare.com/ddos-protection/advanced-ddos-systems/overview/advanced-tcp-protection/).
* `magic-firewall` for [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/).
* Empty string for unmitigated traffic.
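When post-processing results client-side, these raw values can be mapped to readable labels. A small illustrative sketch (the label strings and the `describe_mitigation` helper are our own, not part of the API; the keys come from the list above):

```python
# Raw `mitigationSystem` values and human-readable labels.
MITIGATION_SYSTEMS = {
    "dosd": "DDoS managed rulesets",
    "flowtrackd": "Advanced TCP Protection",
    "magic-firewall": "Cloudflare Network Firewall",
    "": "Unmitigated traffic",
}

def describe_mitigation(value: str) -> str:
    # Fall back to the raw value for anything unrecognized.
    return MITIGATION_SYSTEMS.get(value, value)

print(describe_mitigation("dosd"))  # → DDoS managed rulesets
```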


---

---
title: Main differences
description: In Network Analytics v1 (NAv1), the data is rolled up into one minute roll-up tables, then one hour roll-ups, and finally one day roll-ups. Users can then query either the ipFlows1mGroups node for high-resolution data on traffic and attacks in the past 30 days, or query the ipFlows1hGroups or ipFlows1dGroups nodes for historical data. However, the data available through these nodes is aggregate data, and that means that the accuracy and cardinality of the results are limited. For example, short traffic spikes will not be visible in the data obtained from these nodes due to the aggregation of samples in the roll-ups.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Main differences

## Aggregated roll-ups versus Adaptive Bit Rate

In Network Analytics v1 (NAv1), data is rolled up into one-minute roll-up tables, then one-hour roll-ups, and finally one-day roll-ups. Users can query the `ipFlows1mGroups` node for high-resolution data on traffic and attacks in the past 30 days, or the `ipFlows1hGroups` or `ipFlows1dGroups` nodes for historical data. However, the data available through these nodes is aggregated, which limits the accuracy and cardinality of the results. For example, short traffic spikes are not visible in the data obtained from these nodes because samples are aggregated in the roll-ups.

On the other hand, Network Analytics v2 (NAv2) uses [Adaptive Bit Rate (ABR)](https://developers.cloudflare.com/analytics/network-analytics/understand/concepts/#adaptive-bit-rate-sampling) sampling, so users do not need to choose a node based on their query timeframe. Furthermore, cardinality and accuracy are preserved even for historical data. Depending on the size of the query, the ABR mechanism chooses the best sampling rate and fetches the response from one of the sample tables encapsulated behind each node.
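Cloudflare's actual table selection is internal, but the idea behind ABR can be sketched as picking the densest sample table whose expected row count fits a response budget. Everything below (table names, retained fractions, budget, and traffic rate) is purely illustrative:

```python
from datetime import timedelta

# Hypothetical sample tables, densest first, with the approximate
# fraction of rows each retains. Purely illustrative values.
TABLES = [("1/100", 1 / 100), ("1/10000", 1 / 10_000), ("1/1000000", 1 / 1_000_000)]

def choose_table(timeframe: timedelta, rows_per_second: float, budget: int = 1_000_000) -> str:
    """Return the densest table whose expected row count fits the budget."""
    seconds = timeframe.total_seconds()
    for name, fraction in TABLES:
        if seconds * rows_per_second * fraction <= budget:
            return name
    return TABLES[-1][0]  # sparsest table as a last resort

# A one-hour query over heavy traffic falls back to a sparser table:
print(choose_table(timedelta(hours=1), rows_per_second=50_000))  # → 1/10000
```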

## Sampling improvements

Network Analytics v2 provides more accurate data thanks to higher sample rates and [Edge Sample Enrichment](https://developers.cloudflare.com/analytics/network-analytics/understand/concepts/#edge-sample-enrichment). NAv1 samples 1/8,192 packets (that is, one in every 8,192 packets), while NAv2 sample rates vary depending on the mitigation service. For example:

* The sample rate for `dosd` changes dynamically from 1/100 to 1/10,000 packets based on the volume of packets.
* The sample rate for Cloudflare Network Firewall events changes dynamically from 1/100 to 1/1,000,000 packets based on the number of packets.
* The sample rate for `flowtrackd` is 1/10,000 packets.

The NAv2 data pipeline is also more resilient compared to NAv1. NAv1 uses Core Sample Enrichment, where raw packet samples are sent from all of Cloudflare's edge data centers to the Core data centers. In the Core data centers, the packet samples are cross-referenced with additional databases and infused with the associated customer account ID, attack ID, attack type, and other metadata. Then, the packet samples are inserted into storage. One of the main shortcomings of this method is the potential congestion of samples when cross-referencing information, which could, in rare cases, cause temporary data lag.

To eliminate this potential data lag, NAv2 uses a new data logging pipeline which relies on Edge Sample Enrichment. By delegating the packet sample enrichment and cross-referencing to the edge data centers, we improve the data pipeline's resilience and tolerance against congestion. Using this method, enriched packet samples are immediately stored in Cloudflare's core data centers as soon as they arrive.


---

---
title: NAv2 node reference
description: Main nodes provide deep packet-level information about traffic and attacks for Spectrum customers and Magic Transit customers.
image: https://developers.cloudflare.com/core-services-preview.png
---


# NAv2 node reference

## Main nodes

Main nodes provide deep packet-level information about traffic and attacks for Spectrum customers and Magic Transit customers.

Use the main node to query traffic and attacks at a high level, as seen at the Cloudflare edge:

| Product       | Main node                                  |
| ------------- | ------------------------------------------ |
| Spectrum      | spectrumNetworkAnalyticsAdaptiveGroups     |
| Magic Transit | magicTransitNetworkAnalyticsAdaptiveGroups |

To query more specific details about attacks, use the [attack nodes](#attack-nodes).

Each row represents a packet sample. The sample rate of main nodes is 1/10,000 packets.

If you are using both Magic Transit and Spectrum for IP addresses that overlap, you can use only the Magic Transit node.

## Attack nodes

### `dosdAttackAnalyticsGroups`

This node provides information about DDoS attacks detected and mitigated by Cloudflare's main DDoS protection system, the denial of service daemon (`dosd`). This node includes attack metadata such as:

* `startDatetime`
* `endDatetime`
* `attackType`
* `sourceIp`

Each row represents an attack event. Each attack has a unique ID.

The sample rate is dynamic and based on the volume of packets, ranging from 1/100 to 1/10,000 packets.

**Adjusting attack mitigation**

To adjust mitigation sensitivities and actions, or to define expression filters that exclude or include traffic from mitigation actions, customize the [Network-layer DDoS Attack Protection managed ruleset](https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/).

### `dosdNetworkAnalyticsAdaptiveGroups`

This node complements the information in the `dosdAttackAnalyticsGroups` node. It provides deep packet-level information about DDoS attack packets mitigated by `dosd`, including fields such as:

* `ipProtocol`
* `ipv4Checksum`
* `ipv4Options`
* `tcpSequenceNumber`
* `tcpChecksum`
* `icmpCode`
* `ruleId`
* `ruleName`
* `attackVector`

Each row represents a packet sample. The sample rate is 1/10,000 packets.

### `advancedTcpProtectionNetworkAnalyticsAdaptiveGroups`

This node is only available to Magic Transit customers. It provides metadata about out-of-state TCP DDoS attacks mitigated by Cloudflare's [Advanced TCP Protection](https://developers.cloudflare.com/ddos-protection/advanced-ddos-systems/overview/advanced-tcp-protection/) system.

Advanced TCP Protection does not use the following ID fields: attack ID, rule ID, and ruleset ID.

The sample rate is 1/1,000 packets.

### `advancedDnsProtectionNetworkAnalyticsAdaptiveGroups`

This node is only available to Magic Transit customers. It provides metadata about DNS-based DDoS attacks mitigated by Cloudflare's [Advanced DNS Protection](https://developers.cloudflare.com/ddos-protection/advanced-ddos-systems/overview/advanced-dns-protection/) system.

Samples include information about the following DNS header fields:

* `dnsQueryName`
* `dnsQueryType`

Advanced DNS Protection does not use the following ID fields: attack ID, rule ID, and ruleset ID.

The sample rate is 1/1,000 packets.

### `magicFirewallNetworkAnalyticsAdaptiveGroups`

This node is only available to Magic Transit customers. It provides information about packets that were matched against customer-configured [Cloudflare Network Firewall](https://developers.cloudflare.com/cloudflare-network-firewall/) rules.

Each row represents a packet sample that matches a Cloudflare Network Firewall rule.

Cloudflare Network Firewall does not use attack IDs, only rule IDs and ruleset IDs.

The sample rate is dynamic and based on the volume of packets, ranging from 1/100 to 1/1,000,000 packets.


---

---
title: NAv1 to NAv2 schema map
description: The following table lists direct mappings between NAv1 and NAv2 fields, when available, and provides related fields when there is no direct mapping available.
image: https://developers.cloudflare.com/core-services-preview.png
---


# NAv1 to NAv2 schema map

The following table lists direct mappings between NAv1 and NAv2 fields, when available, and provides related fields when there is no direct mapping available.

| ipFlows1mGroups        | magicTransitNetworkAnalyticsAdaptiveGroups / spectrumNetworkAnalyticsAdaptiveGroups | dosdNetworkAnalyticsAdaptiveGroups          | dosdAttackAnalyticsGroups                   | flowtrackdNetworkAnalyticsAdaptiveGroups    | magicFirewallNetworkAnalyticsAdaptiveGroups  |
| ---------------------- | ------------------------------------------------------------------------------------ | ------------------------------------------- | ------------------------------------------- | ------------------------------------------- | -------------------------------------------- |
| date                   | _Related fields:_ datetime, datetimeTenSeconds                                       | _Related fields:_ datetime, datetimeTenSeconds | _Related fields:_ datetime, datetimeTenSeconds | _Related fields:_ datetime, datetimeTenSeconds |                                              |
| datetimeMinute         | datetimeMinute                                                                       | datetimeMinute                              | datetimeMinute                              | datetimeMinute                              |                                              |
| datetimeFiveMinutes    | datetimeFiveMinutes                                                                  | datetimeFiveMinutes                         | datetimeFiveMinutes                         | datetimeFiveMinutes                         |                                              |
| datetimeFifteenMinutes | datetimeFifteenMinutes                                                               | datetimeFifteenMinutes                      | datetimeFifteenMinutes                      | datetimeFifteenMinutes                      |                                              |
| datetimeHour           | datetimeHour                                                                         | datetimeHour                                | datetimeHour                                | datetimeHour                                |                                              |
| attackId\*             | attackId\*                                                                           | attackId\*                                  |                                             |                                             |                                              |
| attackType             | attackType                                                                           |                                             |                                             |                                             |                                              |
| attackMitigationType   | mitigationType                                                                       |                                             |                                             |                                             |                                              |
| sourceIPCountry        | sourceCountry                                                                        | sourceCountry                               | sourceCountry                               | sourceCountry                               |                                              |
| sourceIPAsn            | sourceAsn                                                                            | sourceAsn                                   | sourceAsn                                   | sourceAsn                                   |                                              |
| sourceIPASNDescription | _Related field:_ sourceGeohash                                                       | _Related field:_ sourceGeohash              | _Related field:_ sourceGeohash              | _Related field:_ sourceGeohash              |                                              |
| coloCode               | coloCode                                                                             | coloCode                                    | coloCode                                    | coloCode                                    |                                              |
| coloCity               | coloCity                                                                             | coloCity                                    | coloCity                                    | coloCity                                    |                                              |
| coloCountry            | coloCountry                                                                          | coloCountry                                 | coloCountry                                 | coloCountry                                 |                                              |
| coloRegion             | _Related field:_ coloGeohash                                                         | _Related field:_ coloGeohash                | _Related field:_ coloGeohash                | _Related field:_ coloGeohash                |                                              |
| ipVersion              | ethertype                                                                            | ethertype                                   | ethertype                                   | ethertype                                   |                                              |
| bits                   | ipTotalLength (bits divided by 8)                                                    | ipTotalLength (bits divided by 8)           | bits                                        | ipTotalLength (bits divided by 8)           | ipTotalLength (bits divided by 8)            |
| packets                | _n/a_                                                                                | _n/a_                                       | packets                                     | _n/a_                                       | _n/a_                                        |
| ipProtocol             | ipProtocol                                                                           | ipProtocol                                  | ipProtocol                                  | ipProtocol                                  | ipProtocol                                   |
| sourceIP               | ipSourceAddress                                                                      | ipSourceAddress                             | sourceIp                                    | ipSourceAddress                             | ipSourceAddress                              |
| destinationIP          | ipDestinationAddress                                                                 | ipDestinationAddress                        | destinationIp                               | ipDestinationAddress                        | ipDestinationAddress                         |
| destinationIPv4Range24 | ipDestinationSubnet                                                                  | ipDestinationSubnet                         | ipDestinationSubnet                         | ipDestinationSubnet                         |                                              |
| destinationIPv4Range23 | _n/a_                                                                                | _n/a_                                       | _n/a_                                       | _n/a_                                       |                                              |
| sourcePort             | sourcePort                                                                           | sourcePort                                  | sourcePort                                  | sourcePort                                  | sourcePort                                   |
| destinationPort        | destinationPort                                                                      | destinationPort                             | destinationPort                             | destinationPort                             | destinationPort                              |
| tcpFlags               | tcpFlags                                                                             | tcpFlags                                    | tcpFlags                                    | tcpFlags                                    | tcpFlags                                     |

\* The `attackId` field value may be different between NAv1 and NAv2 for the same attack.
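Much of a migration is a mechanical rename of the directly-mapped fields. A sketch using a subset of the direct mappings from the table above (fields with no direct mapping, marked _n/a_ or related-only, still need manual handling; the `rename_fields` helper is illustrative):

```python
# Direct NAv1 -> NAv2 field renames for the adaptive nodes,
# taken from the table above (subset; indirect mappings omitted).
NAV1_TO_NAV2 = {
    "attackMitigationType": "mitigationType",
    "sourceIPCountry": "sourceCountry",
    "sourceIPAsn": "sourceAsn",
    "ipVersion": "ethertype",
    "sourceIP": "ipSourceAddress",
    "destinationIP": "ipDestinationAddress",
    "destinationIPv4Range24": "ipDestinationSubnet",
}

def rename_fields(fields):
    """Translate NAv1 field names, passing through unchanged names."""
    return [NAV1_TO_NAV2.get(f, f) for f in fields]

print(rename_fields(["sourceIP", "destinationPort"]))
# → ['ipSourceAddress', 'destinationPort']
```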


---

---
title: Zone Analytics to GraphQL Analytics
description: The Zone Analytics API allows you to get request data by zone. It offers optional since and until parameters to specify the request time period and a continuous parameter to indicate whether the time period should be moved backward to find a period with completely aggregated data.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Zone Analytics to GraphQL Analytics

The Zone Analytics API allows you to get request data by zone. It offers optional `since` and `until` parameters to specify the request time period and a `continuous` parameter to indicate whether the time period should be moved backward to find a period with completely aggregated data.

For example, here is a sample curl call to get data for a two-minute period:


```sh
curl "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/analytics/dashboard?since=2019-09-08T20:00:00Z&until=2019-09-08T20:02:00Z&continuous=false" \
--header "Authorization: Bearer <API_TOKEN>" --silent | jq .
```

**Response**

```json
{
  "success": true,
  "query": {
    "since": "2019-09-08T20:00:00Z",
    "until": "2019-09-08T20:02:00Z",
    "time_delta": 1
  },
  "errors": [],
  "messages": [],
  "result": {
    "timeseries": [
      {
        "since": "2019-09-08T20:00:00Z",
        "until": "2019-09-08T20:01:00Z",
        "requests": {
          "all": 15,
          "cached": 12,
          "uncached": 3,
          "ssl": {
            "encrypted": 13,
            "unencrypted": 2
          },
          "http_status": {
            "200": 4,
            "403": 11
          },
          "content_type": {
            "html": 12,
            "png": 3
          },
          "country": {
            "CN": 6,
            "IE": 1,
            "US": 3,
            "VN": 5
          },
          "ip_class": {
            "monitoringService": 4,
            "noRecord": 11
          },
          "ssl_protocol": {
            "TLSv1.2": 13,
            "none": 2
          }
        },
        "bandwidth": {
          "all": 312740,
          "cached": 309930,
          "uncached": 2810,
          "ssl": {
            "encrypted": 309276,
            "unencrypted": 3464
          },
          "ssl_protocol": {
            "TLSv1.2": 13,
            "none": 2
          },
          "content_type": {
            "html": 32150,
            "png": 280590
          },
          "country": {
            "CN": 10797,
            "IE": 98224,
            "US": 185176,
            "VN": 18543
          }
        },
        "threats": {
          "all": 6,
          "type": {
            "user.ban.ctry": 6
          },
          "country": {
            "CN": 6
          }
        },
        "pageviews": {
          "all": 1,
          "search_engine": {
            "pingdom": 1
          }
        },
        "uniques": {
          "all": 11
        }
      },
      {
        "since": "2019-09-08T20:01:00Z",
        "until": "2019-09-08T20:02:00Z",
        "requests": {
          "all": 4,
          "cached": 1,
          "uncached": 3,
          "ssl": {
            "encrypted": 4,
            "unencrypted": 0
          },
          "http_status": {
            "200": 4
          },
          "content_type": {
            "html": 1,
            "png": 3
          },
          "country": {
            "CA": 2,
            "US": 2
          },
          "ip_class": {
            "monitoringService": 4
          },
          "ssl_protocol": {
            "TLSv1.2": 4
          }
        },
        "bandwidth": {
          "all": 283399,
          "cached": 280590,
          "uncached": 2809,
          "ssl": {
            "encrypted": 283399,
            "unencrypted": 0
          },
          "ssl_protocol": {
            "TLSv1.2": 4
          },
          "content_type": {
            "html": 2809,
            "png": 280590
          },
          "country": {
            "CA": 101033,
            "US": 182366
          }
        },
        "threats": {
          "all": 0,
          "type": {},
          "country": {}
        },
        "pageviews": {
          "all": 1,
          "search_engine": {
            "pingdom": 1
          }
        },
        "uniques": {
          "all": 4
        }
      }
    ],
    "totals": {
      "since": "2019-09-08T20:00:00Z",
      "until": "2019-09-08T20:02:00Z",
      "requests": {
        "all": 19,
        "cached": 13,
        "uncached": 6,
        "ssl": {
          "encrypted": 17,
          "unencrypted": 2
        },
        "http_status": {
          "200": 8,
          "403": 11
        },
        "content_type": {
          "html": 13,
          "png": 6
        },
        "country": {
          "CA": 2,
          "CN": 6,
          "IE": 1,
          "US": 5,
          "VN": 5
        },
        "ip_class": {
          "monitoringService": 8,
          "noRecord": 11
        },
        "ssl_protocol": {
          "TLSv1.2": 17,
          "none": 2
        }
      },
      "bandwidth": {
        "all": 596139,
        "cached": 590520,
        "uncached": 5619,
        "ssl": {
          "encrypted": 592675,
          "unencrypted": 3464
        },
        "ssl_protocol": {
          "TLSv1.2": 17,
          "none": 2
        },
        "content_type": {
          "html": 34959,
          "png": 561180
        },
        "country": {
          "CA": 101033,
          "CN": 10797,
          "IE": 98224,
          "US": 367542,
          "VN": 18543
        }
      },
      "threats": {
        "all": 6,
        "type": {
          "user.ban.ctry": 6
        },
        "country": {
          "CN": 6
        }
      },
      "pageviews": {
        "all": 2,
        "search_engine": {
          "pingdom": 2
        }
      },
      "uniques": {
        "all": 15
      }
    }
  }
}
```

As the response shows, Zone Analytics returns metrics along many dimensions and does not give you the option to control what you receive. With GraphQL Analytics, you can ask for only the data you need. However, if you wanted exactly the same metrics and dimensions as Zone Analytics returns, here is the query you would make:

```graphql
query ZoneAnalyticsMigrationSample($zoneTag: string, $start: Time, $end: Time) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      httpRequests1mGroups(
        orderBy: [datetimeFiveMinutes_ASC]
        limit: 100
        filter: { datetime_geq: $start, datetime_lt: $end }
      ) {
        dimensions {
          datetimeFiveMinutes
        }
        sum {
          browserMap {
            pageViews
            uaBrowserFamily
          }
          bytes
          cachedBytes
          cachedRequests
          contentTypeMap {
            bytes
            requests
            edgeResponseContentTypeName
          }
          clientSSLMap {
            requests
            clientSSLProtocol
          }
          countryMap {
            bytes
            requests
            threats
            clientCountryName
          }
          encryptedBytes
          encryptedRequests
          ipClassMap {
            requests
            ipType
          }
          pageViews
          requests
          responseStatusMap {
            requests
            edgeResponseStatus
          }
          threats
          threatPathingMap {
            requests
            threatPathingName
          }
        }
        uniq {
          uniques
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAWgewHZgIJIIYBsoBcCWAxgM4Cy+A5hBgcgMoYC2ADlmABQAkAXsmACoYKALhjFcEfEgoAaGJ3EYIuUf3yMwczmCQATVerABKGAG8AUDBgA3fGADukM5asxeKYuwBm+LLkiipm58giLy7gJCMAC+JhaurgAWuLjMAEpgoGDixACMjADiEAggzJ4uCVYIELqQAEJQogDaujRgBBoAYvjWYORIIP7EAPqodADCALoVlVjq+CowuQAMyzMJPn4BZjCt-h1gwxSZogq4Srhye+2Gw36nOrox6zBxL1a6hkjE+MjEzpVKtcDt1ev1Btl3s9AVZiCBGACYVYAEbFezESCkDDMRFIqzMIRgABqdnRUISIAwdTRGIgnSYvig5OheOReEheJghAwhESYF0DSGzO5vP5GSyOWFyH8SFw-CgzD62NxrPZxGZVggmXAks5Vn5xwyxGYfzA42lOjlCrAADkmGBmdFhXNLXQ6AAZLE4+J6rUS3DqvVcl2yt3ugAKxVwCEICCwjqlIFl0C9KqRbKFQb9OoDGpguESWpogb1hBDuHNSYkUDtGgTnJ0hGgzH8ArVzMbzdb4pzJaR+GY4ywGGIZGVPs52eyuaDA-livreIJxxJDj7MKnurxWuNpro51wIDH3rzm5neoNYCNJu+YH3NCPi6RBaL5+fhbANHDNESUgoqYnbdtWndd31fb8Cz-WsHU5J0kTgmEk3wYA00qJCJShBCEiwlknWiIA&variables=N4IgXg9gdgpgKgQwOYgFwgFoHkByBRAfQEkAREAGhAGcAXBAJxrRACYAGFgNgFo2AWXgGY4bNqgCs41G3EYKIGFAAmzdl14C2wtpwlSZcgL5A)

Response

```

{

  "data": {

    "viewer": {

      "zones": [

        {

          "httpRequests1mGroups": [

            {

              "dimensions": {

                "datetimeFiveMinutes": "2019-09-08T20:00:00Z"

              },

              "sum": {

                "browserMap": [

                  {

                    "pageViews": 1,

                    "uaBrowserFamily": "PingdomBot"

                  }

                ],

                "bytes": 312740,

                "cachedBytes": 309930,

                "cachedRequests": 12,

                "clientSSLMap": [

                  {

                    "clientSSLProtocol": "none",

                    "requests": 2

                  },

                  {

                    "clientSSLProtocol": "TLSv1.2",

                    "requests": 13

                  }

                ],

                "contentTypeMap": [

                  {

                    "bytes": 280590,

                    "edgeResponseContentTypeName": "png",

                    "requests": 3

                  },

                  {

                    "bytes": 32150,

                    "edgeResponseContentTypeName": "html",

                    "requests": 12

                  }

                ],

                "countryMap": [

                  {

                    "bytes": 10797,

                    "clientCountryName": "CN",

                    "requests": 6,

                    "threats": 6

                  },

                  {

                    "bytes": 98224,

                    "clientCountryName": "IE",

                    "requests": 1,

                    "threats": 0

                  },

                  {

                    "bytes": 185176,

                    "clientCountryName": "US",

                    "requests": 3,

                    "threats": 0

                  },

                  {

                    "bytes": 18543,

                    "clientCountryName": "VN",

                    "requests": 5,

                    "threats": 0

                  }

                ],

                "encryptedBytes": 309276,

                "encryptedRequests": 13,

                "ipClassMap": [

                  {

                    "ipType": "monitoringService",

                    "requests": 4

                  },

                  {

                    "ipType": "noRecord",

                    "requests": 11

                  }

                ],

                "pageViews": 1,

                "requests": 15,

                "responseStatusMap": [

                  {

                    "edgeResponseStatus": 200,

                    "requests": 4

                  },

                  {

                    "edgeResponseStatus": 403,

                    "requests": 11

                  }

                ],

                "threatPathingMap": [

                  {

                    "requests": 6,

                    "threatPathingName": "user.ban.ctry"

                  }

                ],

                "threats": 6

              },

              "uniq": {

                "uniques": 11

              }

            },

            {

              "dimensions": {

                "datetimeFiveMinutes": "2019-09-08T20:01:00Z"

              },

              "sum": {

                "browserMap": [

                  {

                    "pageViews": 1,

                    "uaBrowserFamily": "PingdomBot"

                  }

                ],

                "bytes": 283399,

                "cachedBytes": 280590,

                "cachedRequests": 1,

                "clientSSLMap": [

                  {

                    "clientSSLProtocol": "TLSv1.2",

                    "requests": 4

                  }

                ],

                "contentTypeMap": [

                  {

                    "bytes": 280590,

                    "edgeResponseContentTypeName": "png",

                    "requests": 3

                  },

                  {

                    "bytes": 2809,

                    "edgeResponseContentTypeName": "html",

                    "requests": 1

                  }

                ],

                "countryMap": [

                  {

                    "bytes": 101033,

                    "clientCountryName": "CA",

                    "requests": 2,

                    "threats": 0

                  },

                  {

                    "bytes": 182366,

                    "clientCountryName": "US",

                    "requests": 2,

                    "threats": 0

                  }

                ],

                "encryptedBytes": 283399,

                "encryptedRequests": 4,

                "ipClassMap": [

                  {

                    "ipType": "monitoringService",

                    "requests": 4

                  }

                ],

                "pageViews": 1,

                "requests": 4,

                "responseStatusMap": [

                  {

                    "edgeResponseStatus": 200,

                    "requests": 4

                  }

                ],

                "threatPathingMap": [],

                "threats": 0

              },

              "uniq": {

                "uniques": 4

              }

            }

          ]

        }

      ]

    }

  },

  "errors": null

}


```

Notice that you can specify the request time period using a dataset filter (refer to [Filtering](https://developers.cloudflare.com/analytics/graphql-api/features/filtering/)). The `continuous` parameter is no longer needed because GraphQL Analytics is designed to provide data as soon as it is available.

Also, if you want to get the totals for a particular period, rather than a breakdown by time period, simply remove the `datetimeFiveMinutes` field under `dimensions`.
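For instance, a totals-only request can be sent as a JSON body with the `dimensions` block dropped entirely (once `datetimeFiveMinutes` is removed, nothing remains in it). The following Python sketch is illustrative, not an official client; `<CLOUDFLARE_ZONE_TAG>` is a placeholder:

```python
import json

# Build the POST body curl would send to https://api.cloudflare.com/client/v4/graphql.
# The dimensions block (and the orderBy that depended on it) is omitted, so the
# API returns a single totals row for the whole time range.
totals_query = """
query ZoneAnalyticsTotals($zoneTag: string, $start: Time, $end: Time) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      httpRequests1mGroups(
        limit: 100
        filter: { datetime_geq: $start, datetime_lt: $end }
      ) {
        sum { requests bytes cachedRequests cachedBytes }
        uniq { uniques }
      }
    }
  }
}
"""

def build_body(zone_tag: str, start: str, end: str) -> str:
    """Serialize the query and its variables into a JSON request body."""
    return json.dumps({
        "query": totals_query,
        "variables": {"zoneTag": zone_tag, "start": start, "end": end},
    })

body = build_body("<CLOUDFLARE_ZONE_TAG>", "2019-09-08T20:00:00Z", "2019-09-08T21:00:00Z")
```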


---

---
title: Zone Analytics Colos Endpoint to GraphQL Analytics
description: This guide shows how you might migrate from the deprecated (and soon to be sunset) zone analytics API to the GraphQL API. It provides an example for a plausible use-case of the colos endpoint, then shows how that use-case is translated to the GraphQL API. It also explores features of the GraphQL API that make it more powerful than the API it replaces.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Zone Analytics Colos Endpoint to GraphQL Analytics

This guide shows how you might migrate from the deprecated (and soon to be sunset) zone analytics API to the GraphQL API. It provides an example for a plausible use-case of the colos endpoint, then shows how that use-case is translated to the GraphQL API. It also explores features of the GraphQL API that make it more powerful than the API it replaces.

In this example, we want to calculate the number of requests for a particular colo, broken down by the hour in which the requests occurred. Referring to the zone analytics colos endpoint, we can construct a curl command that retrieves the data from the API.

Terminal window

```

curl -H "Authorization: Bearer $API_TOKEN" "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/analytics/colos?since=2020-12-10T00:00:00Z"  > colos_endpoint_output.json


```

This query says:

* Given an `API_TOKEN` which has Analytics Read access to `ZONE_ID`.
* Fetch colos analytics for `ZONE_ID` with a time range that starts at `2020-12-10T00:00:00Z` (the `since` parameter) and runs to now.

The question that we want to answer is: "What is the number of requests for ZRH per hour?" Using the colos endpoint response data and some wrangling with jq, we can answer that question with this command:

Terminal window

```

cat colos_endpoint_output.json | jq  -c '.result[] | {colo_id: .colo_id, timeseries: .timeseries[]} | {colo_id: .colo_id, timeslot: .timeseries.since, requests: .timeseries.requests.all, bandwidth: .timeseries.bandwidth.all} | select(.requests > 0) | select(.colo_id == "ZRH") '


```

This jq command is complex, so we can break it down:

Terminal window

```

.result[]


```

This splits the `result` array into individual JSON lines.

Terminal window

```

{colo_id: .colo_id, timeseries: .timeseries[]}


```

This breaks each JSON line into multiple JSON lines. Each resulting line contains a `colo_id` and one element of the `timeseries` array.

Terminal window

```

{colo_id: .colo_id, timeslot: .timeseries.since, requests: .timeseries.requests.all, bandwidth: .timeseries.bandwidth.all}


```

This flattens out the data we are interested in from inside the `timeseries` object of each line.

Terminal window

```

select(.requests > 0) | select(.colo_id == "ZRH")


```

This selects only the lines where the request count is greater than 0 and the `colo_id` is ZRH.
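The whole jq pipeline can be mirrored in a few lines of Python. This sketch runs against a tiny stand-in payload whose field names follow the colos endpoint response:

```python
import json

# Stand-in for colos_endpoint_output.json: two colos, one of which (ZRH)
# has a timeslot with zero requests that should be filtered out.
payload = json.loads("""
{"result": [
  {"colo_id": "ZRH",
   "timeseries": [
     {"since": "2020-12-10T00:00:00Z",
      "requests": {"all": 601}, "bandwidth": {"all": 683581}},
     {"since": "2020-12-10T01:00:00Z",
      "requests": {"all": 0}, "bandwidth": {"all": 0}}]},
  {"colo_id": "AMS",
   "timeseries": [
     {"since": "2020-12-10T00:00:00Z",
      "requests": {"all": 42}, "bandwidth": {"all": 1000}}]}
]}
""")

# Same shape as the jq pipeline: unwrap result[], unwrap timeseries[],
# flatten the fields we care about, then keep only busy ZRH timeslots.
rows = [
    {"colo_id": colo["colo_id"],
     "timeslot": slot["since"],
     "requests": slot["requests"]["all"],
     "bandwidth": slot["bandwidth"]["all"]}
    for colo in payload["result"]
    for slot in colo["timeseries"]
    if colo["colo_id"] == "ZRH" and slot["requests"]["all"] > 0
]
```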

The final data we get looks like the following response:

Response

```

{"colo_id":"ZRH","timeslot":"2020-12-10T00:00:00Z","requests":601,"bandwidth":683581}

{"colo_id":"ZRH","timeslot":"2020-12-10T01:00:00Z","requests":484,"bandwidth":550936}

{"colo_id":"ZRH","timeslot":"2020-12-10T02:00:00Z","requests":326,"bandwidth":370627}

{"colo_id":"ZRH","timeslot":"2020-12-10T03:00:00Z","requests":354,"bandwidth":402527}

{"colo_id":"ZRH","timeslot":"2020-12-10T04:00:00Z","requests":446,"bandwidth":507234}

{"colo_id":"ZRH","timeslot":"2020-12-10T05:00:00Z","requests":692,"bandwidth":787688}

{"colo_id":"ZRH","timeslot":"2020-12-10T06:00:00Z","requests":1474,"bandwidth":1676166}

{"colo_id":"ZRH","timeslot":"2020-12-10T07:00:00Z","requests":2839,"bandwidth":3226871}

{"colo_id":"ZRH","timeslot":"2020-12-10T08:00:00Z","requests":2953,"bandwidth":3358487}

{"colo_id":"ZRH","timeslot":"2020-12-10T09:00:00Z","requests":2550,"bandwidth":2901823}

{"colo_id":"ZRH","timeslot":"2020-12-10T10:00:00Z","requests":2203,"bandwidth":2504615}

...


```

How do we get the same result using the GraphQL API?

The GraphQL API allows us to be much more specific about the data that we want to retrieve. While the colos endpoint forces us to retrieve all the information about the breakdown of requests and bandwidth per colo, using the GraphQL API allows us to fetch only the information we are interested in.

The data we want is about HTTP requests. Hence, we use the canonical source for HTTP request data, `httpRequestsAdaptiveGroups`. This node in the GraphQL API allows you to filter and group by almost any dimension of an HTTP request. It is [Adaptive](https://developers.cloudflare.com/analytics/network-analytics/understand/concepts/#adaptive-bit-rate-sampling), so responses are fast because it is driven by our [ABR technology ↗](https://blog.cloudflare.com/explaining-cloudflares-abr-analytics/).
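Adaptive sampling is also why the query below asks for `avg { sampleInterval }` alongside `count`: each stored event represents roughly `sampleInterval` real events, so an estimate of the true total is the sampled row count scaled by the average interval. A hedged sketch with made-up interval values:

```python
# Illustrative only: the sampleInterval values are made up. An interval of 1
# means the event was not sampled; 10 means the row stands for ~10 events.
sampled_intervals = [10, 10, 10, 1, 1]

count = len(sampled_intervals)                 # what `count` returns
avg_interval = sum(sampled_intervals) / count  # avg { sampleInterval }
estimated_requests = count * avg_interval      # scaled-up estimate

# Scaling by the average interval is the same as summing the intervals.
assert estimated_requests == sum(sampled_intervals)
```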

The following is a GraphQL API query to retrieve the data we need to answer the question: "What is the number of requests for ZRH per hour?"

```

{

  viewer {

    zones(filter: {zoneTag:"$ZONE_TAG"}) {

      httpRequestsAdaptiveGroups(filter: {datetime_gt: "2020-12-10T00:00:00Z", coloCode:"ZRH"}, limit:10000, orderBy: [datetimeHour_ASC]) {

        count

        sum {

          edgeResponseBytes

        }

        avg {

          sampleInterval

        }

        dimensions {

          datetimeHour

          coloCode

        }

      }

    }

  }

}


```

Save the query to a file named `coloGroups.json`, wrapped in a JSON body under a `query` key, then run it with curl:

Terminal window

```

curl -X POST -H "Authorization: Bearer $API_TOKEN"  https://api.cloudflare.com/client/v4/graphql -d "@./coloGroups.json" > graphqlColoGroupsResponse.json


```

We can answer our question in the same way as before using jq:

Terminal window

```

cat graphqlColoGroupsResponse.json | jq -c '.data.viewer.zones[] | .httpRequestsAdaptiveGroups[] | {colo_id: .dimensions.coloCode, timeslot: .dimensions.datetimeHour, requests: .count, bandwidth: .sum.edgeResponseBytes}'


```

This command is much simpler than what we had before, because the data returned by the GraphQL API is more specific than what is returned by the colos endpoint.

Still, it is worth explaining the command, since doing so helps illustrate some of the concepts underlying the GraphQL API.

Terminal window

```

.data.viewer.zones[]


```

The format of a GraphQL response mirrors the query. A successful response always contains a `data` object that wraps the returned data, and every response has a `viewer` object representing your user. We then unwrap the `zones` objects, one per line. Our query returns a single zone, since that is how we filtered it, but a query could cover multiple zones as well.

Terminal window

```

.httpRequestsAdaptiveGroups[]


```

The `httpRequestsAdaptiveGroups` field is a list, where each datapoint in the list represents a combination of the dimensions that were selected, along with the aggregation that was selected for that combination of the dimensions. Here, we unwrap each of the datapoints, one per row.

Terminal window

```

{colo_id: .dimensions.coloCode, timeslot: .dimensions.datetimeHour, requests: .count, bandwidth: .sum.edgeResponseBytes}


```

This is straightforward: it selects the attributes of each datapoint that we are interested in, in the same format we used previously with the colos endpoint.
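For reference, the same flattening in Python, run against a minimal stand-in response whose nesting mirrors `data.viewer.zones[].httpRequestsAdaptiveGroups[]`:

```python
import json

# Stand-in for graphqlColoGroupsResponse.json with a single hourly group.
response = json.loads("""
{"data": {"viewer": {"zones": [
  {"httpRequestsAdaptiveGroups": [
    {"count": 601,
     "sum": {"edgeResponseBytes": 683581},
     "dimensions": {"coloCode": "ZRH",
                    "datetimeHour": "2020-12-10T00:00:00Z"}}]}]}}}
""")

# Unwrap zones, unwrap the groups, and pick out the same four attributes
# the jq command selects.
rows = [
    {"colo_id": g["dimensions"]["coloCode"],
     "timeslot": g["dimensions"]["datetimeHour"],
     "requests": g["count"],
     "bandwidth": g["sum"]["edgeResponseBytes"]}
    for zone in response["data"]["viewer"]["zones"]
    for g in zone["httpRequestsAdaptiveGroups"]
]
```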

The GraphQL API is a powerful tool: you can filter and group the data by many dimensions, a capability entirely absent from the colos endpoint in the Zone Analytics API.


---

---
title: Sampling
description: For a deep-dive on how sampling at Cloudflare works, see Understanding sampling in Cloudflare Analytics.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Sampling

For a deep-dive on how sampling at Cloudflare works, see [Understanding sampling in Cloudflare Analytics](https://developers.cloudflare.com/analytics/sampling/).

## Overview

In a small number of cases, the analytics provided on the Cloudflare dashboard and GraphQL Analytics API are based on a **sample** — a subset of the dataset. In these cases, Cloudflare Analytics returns an estimate derived from the sampled value. For example, suppose that during an attack the sampling rate is 10% and 5,000 events are sampled. Cloudflare will estimate 50,000 total events (5,000 × 10) and report this value in Analytics.
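The arithmetic behind that estimate is simply scaling the sampled count by the inverse of the sampling rate; a small sketch:

```python
def estimate_total(sampled_events: int, sample_rate: float) -> int:
    """Estimate the true event count from a sample taken at sample_rate."""
    return round(sampled_events / sample_rate)

# The example from the text: a 10% sampling rate with 5,000 sampled events
# yields an estimate of 50,000 total events.
estimate = estimate_total(5_000, 0.10)
```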

## Sampled datasets

The Cloudflare GraphQL API exposes datasets that are powered by adaptive sampling. These nodes have **Adaptive** in the name and can be discovered through [introspection](https://developers.cloudflare.com/analytics/graphql-api/features/discovery/introspection/).

The presence of sampled data is also called out in the Cloudflare dashboard and in the description of the dataset in the API.

## Why sampling is applied

Analytics is designed to provide requested data, at the appropriate level of detail, as quickly as possible. Sampling allows Cloudflare to deliver analytics within seconds, even when datasets scale quickly and unpredictably, such as a burst of Firewall events generated during an attack. And because the volume of underlying data is large, the value estimated from the sample should still be statistically significant – meaning you can rely on sampled data with a high degree of confidence. Without sampling, it might take several minutes or longer to answer a query — a long time to wait when validating mitigation efforts.

## Types of sampling

### Adaptive sampling

Cloudflare almost always uses **adaptive sampling**, which means the sample rate fluctuates depending on the volume of data ingested or queried. If the number of records is relatively small, sampling is not used. However, as the volume of records grows larger, progressively lower sample rates are applied. Security Events (also known as Firewall Events) and the Security Event Log follow this model. Data nodes that use adaptive sampling are easy to identify by the `Adaptive` suffix in the node name, as in `firewallEventsAdaptive`.

### Fixed sampling

The following data nodes are based on fixed sampling, where the sample rate does not vary:

| Data set | Rate | Notes |
| --- | --- | --- |
| Firewall Rules Preview<br/>**Nodes:** `firewallRulePreviewGroups` | 1% | Use with caution. A 1% sample rate does not provide accurate estimates for datasets smaller than a certain threshold, a scenario the Cloudflare dashboard calls out explicitly but the API does not. |
| Network Analytics<br/>**Nodes:** `ipFlows1mGroups`, `ipFlows1hGroups`, `ipFlows1dGroups`, `ipFlows1mAttacksGroups` | 0.012% | Sampling rate is in terms of packet count (1 of every 8,192 packets). |

## Access to raw data

Because sampling is primarily adaptive and automatically adjusts to provide an accurate estimate, the sampling rate cannot be directly controlled. Enterprise customers have access to raw data via Cloudflare Logs.


---

---
title: Capture GraphQL queries with Chrome DevTools
description: Using Chrome DevTools, you can capture the queries running behind the Cloudflare Dashboard analytics. In this example, we will focus on the Network Analytics dataset, but the same process can be applied to any other analytics available in your dashboard.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Capture GraphQL queries with Chrome DevTools

Using [Chrome DevTools ↗](https://developer.chrome.com/docs/devtools), you can capture the queries running behind the Cloudflare Dashboard analytics. In this example, we will focus on the Network Analytics dataset, but the same process can be applied to any other analytics available in your dashboard.

1. In the Cloudflare dashboard, go to the **Network Analytics** page or any other analytics dashboard whose GraphQL queries you want to inspect.  
[ Go to **Network analytics** ](https://dash.cloudflare.com/?to=/:account/networking-insights/analytics/network-analytics/transport-analytics)
![Analytics tab](https://developers.cloudflare.com/_astro/analytics-tab.sJIMwybT_2gjMTY.webp) 
2. Open [Chrome DevTools ↗](https://developer.chrome.com/docs/devtools) by right-clicking the page and selecting **Inspect**.
![Chrome developer tools](https://developers.cloudflare.com/_astro/chrome-developer-tools.D4a36rnA_1DYD77.webp) 
3. Select the **Network** tab in the Developer Tools panel.
4. In the filter bar, type `graphql` to filter the GraphQL requests. If no requests appear, try reloading the page. As the page reloads, several network requests will populate the **Network** tab. Look for requests that contain `graphql` in the name.
![Type graphql in the search field](https://developers.cloudflare.com/_astro/search-field.BxHnt1F0_Z2pAlRo.webp) 
5. Select one of the GraphQL requests to open its details and go to the **Payload** tab. There you will find the GraphQL query. Select the query line and then **Copy value** to capture the query.
![Copy query value](https://developers.cloudflare.com/_astro/copy-value.BZMZMU5__2lVcIH.webp) 
6. If you want to capture a new query, adjust the filters in the **Network analytics** dashboard and a new query will appear among the GraphQL requests.
![Create a new query](https://developers.cloudflare.com/_astro/new-query.TN7tG2lX_Z7bye2.webp) 

You can now use this query as the basis for your API call. Refer to the [Get started](https://developers.cloudflare.com/analytics/graphql-api/getting-started/) section for more information.


---

---
title: Querying HTTP events by hostname with GraphQL
description: In this example, we are going to use the GraphQL Analytics API to query aggregated metrics about HTTP events by hostname over a specific period of time.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying HTTP events by hostname with GraphQL

## Aggregated HTTP metrics by hostname over time

In this example, we are going to use the GraphQL Analytics API to query aggregated metrics about HTTP events by hostname over a specific period of time.

The following API call will request the number of visits and edge response bytes for the custom hostname `hostname.example.com` over a four-day period. Be sure to replace `CLOUDFLARE_ZONE_TAG` and `API_TOKEN`[1](#user-content-fn-1) with your zone ID and API credentials, and adjust the `datetime_geq` and `datetime_lt` values as needed.

### API Call

Terminal window

```

echo '{ "query":

  "query RequestsAndDataTransferByHostname($zoneTag: string, $filter:filter) {

    viewer {

      zones(filter: {zoneTag: $zoneTag}) {

        httpRequestsAdaptiveGroups(limit: 10, filter: $filter) {

          sum {

            visits

            edgeResponseBytes

          }

          dimensions {

            datetimeHour

          }

        }

      }

    }

  }",

  "variables": {

    "zoneTag": "<CLOUDFLARE_ZONE_TAG>",

    "filter": {

      "datetime_geq": "2022-07-20T11:00:00Z",

      "datetime_lt": "2022-07-24T12:00:00Z",

      "clientRequestHTTPHost": "hostname.example.com",

      "requestSource": "eyeball"

    }

  }

}' | tr -d '\n' | curl --silent \

https://api.cloudflare.com/client/v4/graphql \

--header "Authorization: Bearer <API_TOKEN>" \

--header "Accept: application/json" \

--header "Content-Type: application/json" \

--data @- | jq .


```

The returned results will be in JSON format (as requested), so piping the output to `jq` will make them easier to read, like in the following example:

```

{

  "data": {

    "viewer": {

      "zones": [

        {

          "httpRequestsAdaptiveGroups": [

            {

              "dimensions": {

                "datetimeHour": "2022-07-21T10:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 19849385,

                "visits": 4383

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-21T06:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 20607204,

                "visits": 4375

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-26T05:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 20170839,

                "visits": 4519

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-23T08:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 20141860,

                "visits": 4448

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-25T15:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 21070367,

                "visits": 4469

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-28T08:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 19200774,

                "visits": 4345

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-26T02:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 20758067,

                "visits": 4502

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-20T19:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 22127811,

                "visits": 4443

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-27T15:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 20480644,

                "visits": 4268

              }

            },

            {

              "dimensions": {

                "datetimeHour": "2022-07-27T17:00:00Z"

              },

              "sum": {

                "edgeResponseBytes": 19885704,

                "visits": 4287

              }

            }

          ]

        }

      ]

    }

  },

  "errors": null

}


```
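To roll a response like this up into period totals on the client side, sum across the hourly groups. A minimal sketch over a two-bucket stand-in response (the values are taken from the first two groups in the example above):

```python
import json

# Stand-in for the API response, trimmed to two hourly buckets.
response = json.loads("""
{"data": {"viewer": {"zones": [
  {"httpRequestsAdaptiveGroups": [
    {"dimensions": {"datetimeHour": "2022-07-21T10:00:00Z"},
     "sum": {"edgeResponseBytes": 19849385, "visits": 4383}},
    {"dimensions": {"datetimeHour": "2022-07-21T06:00:00Z"},
     "sum": {"edgeResponseBytes": 20607204, "visits": 4375}}]}]}}}
""")

# Sum visits and bytes across every datetimeHour bucket in the first zone.
groups = response["data"]["viewer"]["zones"][0]["httpRequestsAdaptiveGroups"]
total_visits = sum(g["sum"]["visits"] for g in groups)
total_bytes = sum(g["sum"]["edgeResponseBytes"] for g in groups)
```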

## Top 10 consuming URLs in a zone

We are going to use the GraphQL Analytics API to query the top 10 consuming URLs from a zone, helping you identify the URLs with the highest resource usage. Here are some configuration instructions:

* To filter on a specific hostname, add the line `"clientRequestHTTPHost": "'$2'"` below `"requestSource"`.
* Replace `API_TOKEN` with your generated API token using the `Read all resources` permissions. The script will only access zones available to the token's creator.
* Pass the zone ID (`zoneTag`) as a parameter `ARG=$1`.
* To calculate the current date and the date 30 days ago, use `gdate` (GNU coreutils) on macOS, or `date` on Linux:  
   * `CURRENTDATE=$(gdate -u +'%FT%TZ')`  
   * `OLDDATE=$(gdate -d '-30 days' -u +'%FT%TZ')`
* For specific dates within the last 30 days, set `CURRENTDATE` and `OLDDATE` variables in the format `"YYYY-MM-DDTHH:MM:SSZ"`.
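The same 30-day window can be computed without `gdate`; a Python sketch producing strings in the expected `"YYYY-MM-DDTHH:MM:SSZ"` format:

```python
from datetime import datetime, timedelta, timezone

# ISO-8601 UTC timestamps for now and for 30 days ago, matching the format
# the datetime_geq / datetime_leq filter fields expect.
now = datetime.now(timezone.utc)
CURRENTDATE = now.strftime("%Y-%m-%dT%H:%M:%SZ")
OLDDATE = (now - timedelta(days=30)).strftime("%Y-%m-%dT%H:%M:%SZ")
```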

### API call

Terminal window

```

curl --silent \

https://api.cloudflare.com/client/v4/graphql \

--header "Authorization: Bearer <API_TOKEN>" \

--header "Content-Type: application/json" \

--data '{

  "query": "{viewer {zones(filter: {zoneTag: $zoneTag}) {topPaths: httpRequestsAdaptiveGroups(filter: $filter, limit: 10, orderBy: [sum_edgeResponseBytes_DESC]) {count sum {edgeResponseBytes} dimensions {metric: clientRequestPath}}}}}",

  "variables": {

    "zoneTag": "'$ARG'",

    "filter": {

      "AND": [

        {

          "datetime_geq": "'$OLDDATE'",

          "datetime_leq": "'$CURRENTDATE'"

        },

        {

          "requestSource": "eyeball"

        }

      ]

    }

  }

}' | jq -r 'try .data.viewer.zones[].topPaths[] | "\"\(.dimensions.metric)\": \(.sum.edgeResponseBytes)"' | sort


```

## Footnotes

1. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. [↩](#user-content-fnref-1)


---

---
title: Querying Access login events with GraphQL
description: In this example, we are going to use the GraphQL Analytics API to retrieve logs for an Access login event. These logs are particularly useful for determining why a user received a 403 Forbidden error, since they surface additional data beyond what is shown in the dashboard Access logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying Access login events with GraphQL

In this example, we are going to use the GraphQL Analytics API to retrieve logs for an Access login event. These logs are particularly useful for determining why a user received a `403` Forbidden error, since they surface additional data beyond what is shown in the dashboard Access logs.

The following API call will request logs for a single Access login event and output the requested fields. The authentication request is identified by its **Ray ID**, which you can obtain from the `403` Forbidden page shown to the user.

You will need to insert your `<CLOUDFLARE_ACCOUNT_TAG>`, your API credentials in `<API_TOKEN>`[1](#user-content-fn-1), and substitute your own values for the following variables:

* `rayID`: A unique identifier assigned to the authentication request.
* `datetimeStart`: The earliest event time to query (no earlier than September 16, 2022).
* `datetimeEnd`: The latest event time to query. Be sure to specify a time range that includes the login event you are querying.

## API Call

Terminal window

```

echo '{ "query":

  "query accessLoginRequestsAdaptiveGroups($accountTag: string, $rayId: string, $datetimeStart: string, $datetimeEnd: string) {

    viewer {

      accounts(filter: {accountTag: $accountTag}) {

        accessLoginRequestsAdaptiveGroups(limit: 100, filter: {datetime_geq: $datetimeStart, datetime_leq: $datetimeEnd, cfRayId: $rayId}, orderBy: [datetime_ASC]) {

          dimensions {

            datetime

            isSuccessfulLogin

            hasWarpEnabled

            hasGatewayEnabled

            hasExistingJWT

            approvingPolicyId

            cfRayId

            ipAddress

            userUuid

            identityProvider

            country

            deviceId

            mtlsStatus

            mtlsCertSerialId

            mtlsCommonName

            serviceTokenId

          }

        }

      }

    }

  }",

  "variables": {

    "accountTag": "<CLOUDFLARE_ACCOUNT_TAG>",

    "rayId": "74e4ac510dfdc44f",

    "datetimeStart": "2022-09-20T14:36:38Z",

    "datetimeEnd": "2022-09-22T14:36:38Z"

}

}' | tr -d '\n' | curl --silent \

https://api.cloudflare.com/client/v4/graphql \

--header "Authorization: Bearer <API_TOKEN>" \

--header "Accept: application/json" \

--header "Content-Type: application/json" \

--data @- | jq .


```

Note

Rather than filter by `cfRayId`, you may also [filter](https://developers.cloudflare.com/analytics/graphql-api/features/filtering/) by any other field in the query such as `userUuid` or `deviceId`.
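For instance, to look up all login events for a single user rather than one Ray ID, you could declare an additional `$userUuid: string` variable and change the filter in the query body to:

```
filter: {datetime_geq: $datetimeStart, datetime_leq: $datetimeEnd, userUuid: $userUuid}
```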

## Response

```

{

  "data": {

    "viewer": {

      "accounts": [

        {

          "accessLoginRequestsAdaptiveGroups": [

            {

              "dimensions": {

                "approvingPolicyId": "",

                "cfRayId": "744927037ce06d68",

                "country": "US",

                "datetime": "2022-09-02T20:56:27Z",

                "deviceId": "",

                "hasExistingJWT": 0,

                "hasGatewayEnabled": 0,

                "hasWarpEnabled": 0,

                "identityProvider": "nonidentity",

                "ipAddress": "2a09:bac0:15::814:7b37",

                "isSuccessfulLogin": 0,

                "mtlsCertSerialId": "",

                "mtlsCommonName": "",

                "mtlsStatus": "NONE",

                "serviceTokenId": "",

                "userUuid": ""

              }

            }

          ]

        }

      ]

    }

  },

  "errors": null

}


```

You can compare the query results to your Access policies to understand why a user was blocked. For example, if your application requires a valid mTLS certificate, Access blocked the request shown above because `mtlsStatus`, `mtlsCommonName`, and `mtlsCertSerialId` are empty.
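To spot-check those fields quickly, you can pull just the mTLS-related dimensions out of the response with `jq`. The inline JSON below is a trimmed stand-in for the curl output above; in practice, pipe the curl output in instead:

```shell
# Extract only the mTLS-related dimensions from each returned group.
echo '{"data":{"viewer":{"accounts":[{"accessLoginRequestsAdaptiveGroups":[{"dimensions":{"mtlsStatus":"NONE","mtlsCommonName":"","mtlsCertSerialId":""}}]}]}}}' \
  | jq '.data.viewer.accounts[].accessLoginRequestsAdaptiveGroups[].dimensions
      | {mtlsStatus, mtlsCommonName, mtlsCertSerialId}'
```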

## Footnotes

1. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. [↩](#user-content-fnref-1)


---

---
title: Querying Email Routing events with GraphQL
description: This example uses the GraphQL Analytics API to query for Email Routing events over a specified time period.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying Email Routing events with GraphQL

This example uses the GraphQL Analytics API to query for Email Routing events over a specified time period.

## Activity Logs API Call

The following API call will request Email Routing activity logs over a one-day period, and output the requested fields. Be sure to replace `<CLOUDFLARE_ZONE_TAG>` and `<API_TOKEN>`[1](#user-content-fn-1) with your zone tag and API credentials, and adjust the `datetime_geq` and `datetime_leq` values as required.

Terminal window

```

echo '{ "query":

  "query EmailRoutingActivity($zoneTag: string, $filter: EmailRoutingAdaptiveFilter_InputObject) {

    viewer {

      zones(filter: { zoneTag: $zoneTag }) {

        emailRoutingAdaptive(

          filter: $filter

          limit: 3

          orderBy: [datetime_DESC]

        ) {

          datetime

          id: sessionId

          messageId

          from

          to

          subject

          status

          action

          spf

          dkim

          dmarc

          arc

          errorDetail

          isNDR

          isSpam

          spamThreshold

          spamScore

        }

      }

    }

  }",

  "variables": {

    "zoneTag": "<CLOUDFLARE_ZONE_TAG>",

    "filter": {

      "datetime_geq": "2026-01-18T11:00:00Z",

      "datetime_leq": "2026-01-19T11:00:00Z"

    }

  }

}' | tr -d '\n' | curl --silent \

https://api.cloudflare.com/client/v4/graphql \

--header "Authorization: Bearer <API_TOKEN>" \

--header "Accept: application/json" \

--header "Content-Type: application/json" \

--data @- | jq .


```

The results returned will be in JSON (as requested):

```

{

  "data": {

    "viewer": {

      "zones": [

        {

          "emailRoutingAdaptive": [

            {

              "action": "forward",

              "arc": "none",

              "datetime": "2026-01-19T10:51:25Z",

              "dkim": "pass",

              "dmarc": "pass",

              "errorDetail": "",

              "from": "John <john@email.example.com>",

              "id": "AfWyaZ7V1TAH",

              "isNDR": 0,

              "isSpam": 0,

              "messageId": "<9e6574f1-97f8-4060-ad62-c54b6408ac3f@local>",

              "spamScore": 0,

              "spamThreshold": 5,

              "spf": "pass",

              "status": "delivered",

              "subject": "How are you doing?",

              "to": "me@example.com"

            },

            {

              "action": "forward",

              "arc": "none",

              "datetime": "2026-01-19T10:30:00Z",

              "dkim": "pass",

              "dmarc": "pass",

              "errorDetail": "",

              "from": "eBay <ebay@ebay.co.uk>",

              "id": "aYPegrIfLWia",

              "isNDR": 0,

              "isSpam": 0,

              "messageId": "<1A513C40-F2CD808A928-029BBE999993-0000000000FA8855@starship>",

              "spamScore": 0,

              "spamThreshold": 5,

              "spf": "pass",

              "status": "delivered",

              "subject": "New offers",

              "to": "me@example.com"

            },

            {

              "action": "forward",

              "arc": "none",

              "datetime": "2026-01-19T10:29:59Z",

              "dkim": "pass",

              "dmarc": "pass",

              "errorDetail": "",

              "from": "Notification <notifications@example.com>",

              "id": "nWIl9gs95mY3",

              "isNDR": 0,

              "isSpam": 0,

              "messageId": "<0AB8F1C3-3015EDF2980-019BBE9B58F2-0000000000FA7C4D@local>",

              "spamScore": 0,

              "spamThreshold": 5,

              "spf": "pass",

              "status": "delivered",

              "subject": "You're over quota",

              "to": "me@example.com"

            }

          ]

        }

      ]

    }

  },

  "errors": null

}


```

## Analytics API Call

The following API call will count the number of events grouped by hour.

Terminal window

```

echo '{ "query":

  "query EmailRoutingActivity($zoneTag: string, $filter: EmailRoutingAdaptiveFilter_InputObject) {

     viewer {

       zones(filter: { zoneTag: $zoneTag }) {

         emailRoutingAdaptiveGroups(

           limit: 10000

           filter: $filter

           orderBy: [datetimeHour_ASC]

         ) { count

               dimensions {

                 datetimeHour

               }

             }

           }

     }

  }",

  "variables": {

    "zoneTag": "<CLOUDFLARE_ZONE_TAG>",

    "filter": {

      "datetimeHour_geq": "2026-01-18T11:00:00Z",

      "datetimeHour_leq": "2026-01-19T11:00:00Z"

    }

  }

}' | tr -d '\n' | curl --silent \

https://api.cloudflare.com/client/v4/graphql \

--header "Authorization: Bearer <API_TOKEN>" \

--header "Accept: application/json" \

--header "Content-Type: application/json" \

--data @- | jq .


```

The results returned will be in JSON (as requested):

```

{

  "data": {

    "viewer": {

      "zones": [

        {

          "emailRoutingAdaptiveGroups": [

            {

              "count": 2,

              "dimensions": {

                "datetimeHour": "2026-01-18T11:00:00Z"

              }

            },

            {

              "count": 1,

              "dimensions": {

                "datetimeHour": "2026-01-18T12:00:00Z"

              }

            },

            {

              "count": 1,

              "dimensions": {

                "datetimeHour": "2026-01-18T13:00:00Z"

              }

            },

            {

              "count": 2,

              "dimensions": {

                "datetimeHour": "2026-01-18T14:00:00Z"

              }

            },

            {

              "count": 1,

              "dimensions": {

                "datetimeHour": "2026-01-18T15:00:00Z"

              }

            },

            {

              "count": 1,

              "dimensions": {

                "datetimeHour": "2026-01-18T16:00:00Z"

              }

            },

            {

              "count": 2,

              "dimensions": {

                "datetimeHour": "2026-01-18T17:00:00Z"

              }

            },

            {

              "count": 3,

              "dimensions": {

                "datetimeHour": "2026-01-18T18:00:00Z"

              }

            },

            {

              "count": 1,

              "dimensions": {

                "datetimeHour": "2026-01-18T22:00:00Z"

              }

            },

            {

              "count": 2,

              "dimensions": {

                "datetimeHour": "2026-01-19T01:00:00Z"

              }

            },

            {

              "count": 1,

              "dimensions": {

                "datetimeHour": "2026-01-19T02:00:00Z"

              }

            },

            {

              "count": 4,

              "dimensions": {

                "datetimeHour": "2026-01-19T05:00:00Z"

              }

            },

            {

              "count": 1,

              "dimensions": {

                "datetimeHour": "2026-01-19T08:00:00Z"

              }

            },

            {

              "count": 5,

              "dimensions": {

                "datetimeHour": "2026-01-19T09:00:00Z"

              }

            },

            {

              "count": 6,

              "dimensions": {

                "datetimeHour": "2026-01-19T10:00:00Z"

              }

            },

            {

              "count": 2,

              "dimensions": {

                "datetimeHour": "2026-01-19T11:00:00Z"

              }

            }

          ]

        }

      ]

    }

  },

  "errors": null

}


```

## Footnotes

1. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. [↩](#user-content-fnref-1)


---

---
title: Querying Firewall Events with GraphQL
description: In this example, we are going to use the GraphQL Analytics API to query for Firewall Events over a specified time period.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying Firewall Events with GraphQL

In this example, we are going to use the GraphQL Analytics API to query for Firewall Events over a specified time period.

The following API call will request Firewall Events over a one-hour period, and output the requested fields. Be sure to replace `<CLOUDFLARE_ZONE_TAG>` and `<API_TOKEN>`[1](#user-content-fn-1) with your zone tag and API credentials, and adjust the `datetime_geq` and `datetime_leq` values as needed.

## API Call

Terminal window

```

echo '{ "query":

  "query ListFirewallEvents($zoneTag: string, $filter: FirewallEventsAdaptiveFilter_InputObject) {

    viewer {

      zones(filter: { zoneTag: $zoneTag }) {

        firewallEventsAdaptive(

          filter: $filter

          limit: 10

          orderBy: [datetime_DESC]

        ) {

          action

          clientAsn

          clientCountryName

          clientIP

          clientRequestPath

          clientRequestQuery

          datetime

          source

          userAgent

        }

      }

    }

  }",

  "variables": {

    "zoneTag": "<CLOUDFLARE_ZONE_TAG>",

    "filter": {

      "datetime_geq": "2022-07-24T11:00:00Z",

      "datetime_leq": "2022-07-24T12:00:00Z"

    }

  }

}' | tr -d '\n' | curl --silent \

https://api.cloudflare.com/client/v4/graphql \

--header "Authorization: Bearer <API_TOKEN>" \

--header "Accept: application/json" \

--header "Content-Type: application/json" \

--data @-


```

The results returned will be in JSON (as requested), so piping the output to `jq` will make them easier to read, for example:

Terminal window

```

... | curl --silent \

https://api.cloudflare.com/client/v4/graphql \

--header "Authorization: Bearer <API_TOKEN>" \

--header "Accept: application/json" \

--header "Content-Type: application/json" \

--data @- | jq .


#=> {

#=>   "data": {

#=>     "viewer": {

#=>       "zones": [

#=>         {

#=>           "firewallEventsAdaptive": [

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "block",

#=>               "clientAsn": "5089",

#=>               "clientCountryName": "GB",

#=>               "clientIP": "203.0.113.69",

#=>               "clientRequestPath": "/%3Cscript%3Ealert()%3C/script%3E",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:11:24Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "58224",

#=>               "clientCountryName": "IR",

#=>               "clientIP": "2.183.175.37",

#=>               "clientRequestPath": "/api/v2",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:00:54Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"

#=>             },

#=>             {

#=>               "action": "log",

#=>               "clientAsn": "58224",

#=>               "clientCountryName": "IR",

#=>               "clientIP": "2.183.175.37",

#=>               "clientRequestPath": "/api/v2",

#=>               "clientRequestQuery": "",

#=>               "datetime": "2020-04-24T10:00:54Z",

#=>               "source": "waf",

#=>               "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"

#=>             }

#=>           ]

#=>         }

#=>       ]

#=>     }

#=>   },

#=>   "errors": null

#=> }


```

## Footnotes

1. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. [↩](#user-content-fnref-1)


---

---
title: Querying Magic Transit endpoint health check results with GraphQL
description: Use the GraphQL Analytics API to query endpoint health check results for your account. The magicEndpointHealthCheckAdaptiveGroups dataset returns probe results aggregated by the dimensions and time interval you specify.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying Magic Transit endpoint health check results with GraphQL

Use the [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api/) to query endpoint health check results for your account. The `magicEndpointHealthCheckAdaptiveGroups` dataset returns probe results aggregated by the dimensions and time interval you specify.

Send all GraphQL queries as HTTP `POST` requests to `https://api.cloudflare.com/client/v4/graphql`.

### Prerequisites

You need the following to query endpoint health check data:

* Your [account ID](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/).
* An [API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/) with `Account > Account Analytics > Read` permissions. For details, refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/).

### Query parameters

The following parameters are some of the most common ones in the `filter` object:

| Parameter     | Description                                                                                                                                                                                                                                       |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| date\_geq     | Start date for the query in YYYY-MM-DD format (for example, 2026-01-01). When used with a date-based truncation dimension, returns results from this date onward. You can also use a full ISO 8601 timestamp (for example, 2026-01-01T00:00:00Z). |
| date\_leq     | _(Optional)_ End date for the query. Uses the same format as date\_geq.                                                                                                                                                                           |
| datetime\_geq | _(Optional)_ Start timestamp in ISO 8601 format (for example, 2026-01-01T00:00:00Z). Use instead of date\_geq for time-based truncation dimensions.                                                                                               |
| datetime\_leq | _(Optional)_ End timestamp in ISO 8601 format.                                                                                                                                                                                                    |
| limit         | Maximum number of result groups to return.                                                                                                                                                                                                        |

You can also filter on any dimension listed in the [Available dimensions](#available-dimensions) table. Append an operator suffix to the dimension name to create a filter — for example, `endpoint_in` to filter by a list of endpoints, or `checkType_neq` to exclude a specific check type. Using a dimension name without a suffix filters for equality. For the full list of supported operators, refer to [Filtering](https://developers.cloudflare.com/analytics/graphql-api/features/filtering/).
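For example, a `filter` object combining a time range with dimension filters (the endpoint addresses below are placeholders) could look like:

```
"filter": {
  "datetime_geq": "2026-01-21T00:00:00Z",
  "endpoint_in": ["103.21.244.100", "103.21.244.101"],
  "checkType": "icmp"
}
```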

### Available dimensions

You can query the following dimensions in the `dimensions` field:

| Dimension              | Description                                                                      |
| ---------------------- | -------------------------------------------------------------------------------- |
| checkId                | The unique ID of the configured health check.                                    |
| checkType              | The type of health check (for example, icmp).                                    |
| endpoint               | The IP address of the endpoint being checked.                                    |
| name                   | The name assigned to the health check when configured (may be empty if not set). |
| date                   | Event timestamp truncated to the day.                                            |
| datetime               | Full event timestamp.                                                            |
| datetimeMinute         | Event timestamp truncated to the minute.                                         |
| datetimeFiveMinutes    | Event timestamp truncated to five-minute intervals.                              |
| datetimeFifteenMinutes | Event timestamp truncated to 15-minute intervals.                                |
| datetimeHalfOfHour     | Event timestamp truncated to 30-minute intervals.                                |
| datetimeHour           | Event timestamp truncated to the hour.                                           |

### Available metrics

| Metric             | Description                                       |
| ------------------ | ------------------------------------------------- |
| count              | Total number of health check events in the group. |
| sum.total          | Total number of health check probes sent.         |
| sum.failures       | Number of failed health check probes.             |
| avg.lossPercentage | Average calculated loss percentage (0-100).       |
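As a sanity check on these metrics, `avg.lossPercentage` for a group corresponds to `sum.failures / sum.total * 100`. For example, with the hypothetical values of 2 failures out of 288 probes:

```shell
# Loss percentage = failed probes / total probes * 100.
awk -v failures=2 -v total=288 'BEGIN { printf "loss: %.2f%%\n", failures / total * 100 }'
# prints "loss: 0.69%"
```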

### API call

The following example queries endpoint health check results for a specific account, returning probe counts aggregated in five-minute intervals. Replace `<ACCOUNT_ID>` with your [account ID](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/) and `<API_TOKEN>` with your [API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/).

Terminal window

```

echo '{ "query":
  "query GetEndpointHealthCheckResults($accountTag: string, $datetimeStart: string) {
    viewer {
      accounts(filter: {accountTag: $accountTag}) {
        magicEndpointHealthCheckAdaptiveGroups(
          filter: {
            datetime_geq: $datetimeStart
          }
          limit: 10
        ) {
          count
          dimensions {
            checkId
            checkType
            endpoint
            datetimeFiveMinutes
          }
          sum {
            failures
            total
          }
        }
      }
    }
  }",
  "variables": {
    "accountTag": "<ACCOUNT_ID>",
    "datetimeStart": "2026-01-21T00:00:00Z"
  }
}' | tr -d '\n' | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @-
```

Pipe the output to `jq` to format the JSON response for easier reading:

Terminal window

```

... | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @- | jq .
```

### Example response

```

{
  "data": {
    "viewer": {
      "accounts": [
        {
          "magicEndpointHealthCheckAdaptiveGroups": [
            {
              "count": 288,
              "dimensions": {
                "checkId": "90b478c7-bb51-4640-b94b-2c3050e9fa00",
                "checkType": "icmp",
                "datetimeFiveMinutes": "2026-01-21T12:00:00Z",
                "endpoint": "103.21.244.100"
              },
              "sum": {
                "failures": 0,
                "total": 288
              }
            },
            {
              "count": 288,
              "dimensions": {
                "checkId": "90b478c7-bb51-4640-b94b-2c3050e9fa00",
                "checkType": "icmp",
                "datetimeFiveMinutes": "2026-01-21T12:05:00Z",
                "endpoint": "103.21.244.100"
              },
              "sum": {
                "failures": 2,
                "total": 288
              }
            }
          ]
        }
      ]
    }
  },
  "errors": null
}
```

In this response, `sum.total` is the number of probes sent during the interval and `sum.failures` is the number that did not receive a reply. A `failures` value of `0` indicates the endpoint was fully reachable during that period.
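Given the response above, the observed loss percentage for a group can be recomputed from `sum.failures` and `sum.total`. This is a minimal sketch; the `loss_percentage` helper is illustrative, not part of the API:

```python
# Compute the observed loss percentage for one adaptive group.
# The dict shape mirrors the example response above.
def loss_percentage(group: dict) -> float:
    total = group["sum"]["total"]
    failures = group["sum"]["failures"]
    # Guard against an empty interval to avoid division by zero.
    return 0.0 if total == 0 else 100.0 * failures / total

group = {
    "dimensions": {"endpoint": "103.21.244.100"},
    "sum": {"failures": 2, "total": 288},
}
print(round(loss_percentage(group), 2))  # 2 failed of 288 probes -> 0.69
```

This matches the `avg.lossPercentage` metric's 0-100 scale, so you can cross-check queried averages against raw sums.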

---

---
title: Querying Magic Transit and Cloudflare WAN IPsec/GRE tunnel bandwidth analytics with GraphQL
description: This example uses the GraphQL Analytics API to query Magic Transit or Cloudflare WAN ingress tunnel traffic over a specified time period.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Querying Magic Transit and Cloudflare WAN IPsec/GRE tunnel bandwidth analytics with GraphQL

This example uses the GraphQL Analytics API to query Magic Transit or Cloudflare WAN ingress tunnel traffic over a specified time period.

The following API call requests Magic Transit or Cloudflare WAN ingress tunnel traffic over a one-hour period and outputs the requested fields. Replace `<CLOUDFLARE_ACCOUNT_TAG>` with your account ID, replace `<EMAIL>` and `<API_KEY>`[1](#user-content-fn-1) (legacy) or `<API_TOKEN>`[2](#user-content-fn-2) (preferred) with your API credentials, and adjust the `datetimeStart` and `datetimeEnd` variables as needed.

The example queries for ingress traffic. To query for egress traffic, change the value in the `direction` filter.

## API Call

Terminal window

```

PAYLOAD='{ "query":
  "query GetTunnelBandwidth($accountTag: string, $datetimeStart: string, $datetimeEnd: string, $direction: string) {
    viewer {
      accounts(filter: {accountTag: $accountTag}) {
        magicTransitTunnelTrafficAdaptiveGroups(
          limit: 100,
          filter: {
            datetime_geq: $datetimeStart,
            datetime_lt:  $datetimeEnd,
            direction: $direction
          }
        ) {
          avg {
            bitRateFiveMinutes
          }
          dimensions {
            tunnelName
            datetimeFiveMinutes
          }
        }
      }
    }
  }",
  "variables": {
    "accountTag": "<CLOUDFLARE_ACCOUNT_TAG>",
    "direction": "ingress",
    "datetimeStart": "2022-05-04T11:00:00.000Z",
    "datetimeEnd": "2022-05-04T12:00:00.000Z"
  }
}'

# "$(echo $PAYLOAD)" flattens the multi-line payload into a single line of valid JSON.

# curl with legacy API key
curl https://api.cloudflare.com/client/v4/graphql \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data "$(echo $PAYLOAD)"

# curl with API token
curl https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data "$(echo $PAYLOAD)"
```

The returned values represent the average bit rate, in bits per second, during each five-minute interval for a particular tunnel. To use aggregations other than five minutes, use the same time window for both your metric and datetime dimension. For example, to analyze hourly groups, use `bitRateHour` and `datetimeHour`.
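Because `bitRateFiveMinutes` is an average rate over the interval, the approximate data volume moved in that interval follows by multiplying the rate by the interval length. A small sketch with the numbers from the example response (the helper name is mine, not Cloudflare's):

```python
# Convert an average bit rate over a five-minute group into bytes moved.
INTERVAL_SECONDS = 300  # five minutes

def bytes_in_interval(bit_rate: float, seconds: int = INTERVAL_SECONDS) -> float:
    # rate (bits/s) * duration (s) = bits; divide by 8 for bytes
    return bit_rate * seconds / 8

print(bytes_in_interval(327_680))  # 327680 b/s over 300 s -> 12288000.0 bytes
```

The same arithmetic applies to `bitRateHour` with a 3600-second interval.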

The result is in JSON (as requested), so piping the output to `jq` formats it for easier parsing, as in the following example:

Terminal window

```

curl https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data "$(echo $PAYLOAD)" | jq .

## Example response:
#=> {
#=>   "data": {
#=>     "viewer": {
#=>       "accounts": [
#=>         {
#=>           "magicTransitTunnelTrafficAdaptiveGroups": [
#=>             {
#=>               "avg": { "bitRateFiveMinutes": 327680 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinutes": "2021-05-12T22:00-00:00",
#=>                 "tunnelName": "tunnel_name"
#=>               }
#=>             },
#=>             {
#=>               "avg": { "bitRateFiveMinutes": 627213680 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinutes": "2021-05-12T22:05-00:00",
#=>                 "tunnelName": "another_tunnel"
#=>               }
#=>             }
#=>           ]
#=>         }
#=>       ]
#=>     }
#=>   },
#=>   "errors": null
#=> }
```

## Footnotes

1. For details, refer to [Authenticate with a Cloudflare API key](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-key-auth/). [↩](#user-content-fnref-1)
2. For details, refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/). [↩](#user-content-fnref-2)

---

---
title: Querying Magic Transit and Cloudflare WAN IPsec/GRE tunnel health check results with GraphQL
description: This example uses the GraphQL Analytics API to query Magic Transit or Cloudflare WAN tunnel health check results. These results are aggregated from individual health checks that Cloudflare servers perform against the tunnels you configured in your account. You can query up to one week of data for dates up to three months in the past.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Querying Magic Transit and Cloudflare WAN IPsec/GRE tunnel health check results with GraphQL

This example uses the GraphQL Analytics API to query Magic Transit or Cloudflare WAN tunnel health check results. These results are aggregated from individual health checks that Cloudflare servers perform against the tunnels you configured in your account. You can query up to one week of data for dates up to three months in the past.

The following API call requests tunnel health checks for a specific account over a one-hour period and outputs the requested fields. Replace `<CLOUDFLARE_ACCOUNT_TAG>` with your account ID and `<API_TOKEN>`[1](#user-content-fn-1) with your API token, and adjust the `datetimeStart` and `datetimeEnd` variables as needed.

The API call returns tunnel health check results by Cloudflare data center. Cloudflare aggregates each data center's result from health checks conducted on individual servers. The `tunnelState` field represents the state of the tunnel. Magic Transit or Cloudflare WAN uses these states for routing. A `tunnelState` value of `0` represents a down tunnel, `0.5` represents a degraded tunnel, and `1` represents a healthy tunnel.
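The state values above can be mapped to labels when rendering results; an average strictly between the defined values indicates the tunnel changed state during the queried window. A minimal sketch (names are illustrative, not part of the API):

```python
# Map tunnelState values to the labels described above.
STATE_LABELS = {0.0: "down", 0.5: "degraded", 1.0: "healthy"}

def label(tunnel_state: float) -> str:
    # Averages between defined values mean the state changed mid-window.
    return STATE_LABELS.get(tunnel_state, f"mixed ({tunnel_state})")

print(label(1.0))   # healthy
print(label(0.75))  # mixed (0.75)
```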

## API Call

Terminal window

```

echo '{ "query":
  "query GetTunnelHealthCheckResults($accountTag: string, $datetimeStart: string, $datetimeEnd: string) {
    viewer {
      accounts(filter: {accountTag: $accountTag}) {
        magicTransitTunnelHealthChecksAdaptiveGroups(
          limit: 100,
          filter: {
            datetime_geq: $datetimeStart,
            datetime_lt:  $datetimeEnd
          }
        ) {
          avg {
            tunnelState
          }
          dimensions {
            tunnelName
            edgeColoName
          }
        }
      }
    }
  }",
  "variables": {
    "accountTag": "<CLOUDFLARE_ACCOUNT_TAG>",
    "datetimeStart": "2022-08-04T00:00:00.000Z",
    "datetimeEnd": "2022-08-04T01:00:00.000Z"
  }
}' | tr -d '\n' | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @-
```

The results are returned in JSON (as requested), so piping the output to `jq` formats them for easier reading, as in the following example:

Terminal window

```

... | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @- | jq .

## Example response:
#=> {
#=>   "data": {
#=>     "viewer": {
#=>       "accounts": [
#=>         {
#=>           "magicTransitTunnelHealthChecksAdaptiveGroups": [
#=>             {
#=>               "avg": {
#=>                 "tunnelState": 1
#=>               },
#=>               "dimensions": {
#=>                 "edgeColoName": "mel01",
#=>                 "tunnelName": "tunnel_01"
#=>               }
#=>             },
#=>             {
#=>               "avg": {
#=>                 "tunnelState": 0.5
#=>               },
#=>               "dimensions": {
#=>                 "edgeColoName": "mel01",
#=>                 "tunnelName": "tunnel_02"
#=>               }
#=>             }
#=>           ]
#=>         }
#=>       ]
#=>     }
#=>   },
#=>   "errors": null
#=> }
```

## Footnotes

1. For details, refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/). [↩](#user-content-fnref-1)

---

---
title: Querying Cloudflare Network Firewall Intrusion Detection System (IDS) samples with GraphQL
description: In this example, we are going to use the GraphQL Analytics API to query for IDS samples over a specified time period.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Querying Cloudflare Network Firewall Intrusion Detection System (IDS) samples with GraphQL

In this example, we are going to use the GraphQL Analytics API to query for IDS samples over a specified time period.

The following API call requests IDS samples over a one-hour period and outputs the requested fields. Replace `<CLOUDFLARE_ACCOUNT_TAG>` and `<API_TOKEN>`[1](#user-content-fn-1) with your account tag and API token, and adjust the `datetime_geq` and `datetime_leq` values as needed.

## API Call

Terminal window

```

echo '{ "query":
  "query IDSActivity($accountTag: string, $filter: AccountMagicIDPSNetworkAnalyticsAdaptiveGroupsFilter_InputObject) {
    viewer {
      accounts(filter: { accountTag: $accountTag }) {
        magicIDPSNetworkAnalyticsAdaptiveGroups(
          filter: $filter
          limit: 10
        ) {
          sum {
            bits
            packets
          }
          dimensions {
            datetimeFiveMinutes
          }
        }
      }
    }
  }",
  "variables": {
    "accountTag": "<CLOUDFLARE_ACCOUNT_TAG>",
    "filter": {
      "datetime_geq": "2023-06-20T11:00:00.000Z",
      "datetime_leq": "2023-06-20T12:00:00.000Z",
      "verdict": "drop",
      "outcome": "pass"
    }
  }
}' | tr -d '\n' | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @-
```

The returned values represent the total number of packets and bits that matched IDS rules during each five-minute interval. The result is in JSON (as requested), so piping the output to `jq` formats it for easier reading, as in the following example:
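From the sampled `bits` and `packets` sums in each group, the average matched-packet size follows directly. A quick sketch using the numbers from the example response below (the helper name is illustrative; sampled sums can yield sizes that look small):

```python
# Average packet size, in bytes, for one five-minute group of IDS samples.
def avg_packet_bytes(bits: int, packets: int) -> float:
    # bits / 8 = bytes; divide by packet count, guarding empty groups.
    return 0.0 if packets == 0 else bits / 8 / packets

print(avg_packet_bytes(327_680, 16_384))  # -> 2.5
```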

Terminal window

```

... | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @- | jq .

#=> {
#=>   "data": {
#=>     "viewer": {
#=>       "accounts": [
#=>         {
#=>           "magicIDPSNetworkAnalyticsAdaptiveGroups": [
#=>             {
#=>               "sum": { "bits": 327680, "packets": 16384 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinutes": "2021-05-12T22:00-00:00"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 360448, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinutes": "2021-05-12T22:05-00:00"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 327680, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinutes": "2021-05-12T22:10-00:00"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 360448, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinutes": "2021-05-12T22:15-00:00"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 327680, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinutes": "2021-05-12T22:20-00:00"
#=>               }
#=>             }
#=>           ]
#=>         }
#=>       ]
#=>     }
#=>   },
#=>   "errors": null
#=> }
```

## Footnotes

1. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. [↩](#user-content-fnref-1)

---

---
title: Querying Cloudflare Network Firewall Samples with GraphQL
description: In this example, we are going to use the GraphQL Analytics API to query for Cloudflare Network Firewall Samples over a specified time period.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Querying Cloudflare Network Firewall Samples with GraphQL

In this example, we are going to use the GraphQL Analytics API to query for Cloudflare Network Firewall Samples over a specified time period.

The following API call requests Cloudflare Network Firewall Samples over a one-hour period and outputs the requested fields. Replace `<CLOUDFLARE_ACCOUNT_TAG>` and `<API_TOKEN>`[1](#user-content-fn-1) with your account tag and API token, and adjust the `datetime_geq` and `datetime_leq` values as needed.

## API Call

Terminal window

```

echo '{ "query":
  "query MFWActivity($accountTag: string, $filter: AccountMagicFirewallSamplesAdaptiveGroupsFilter_InputObject) {
    viewer {
      accounts(filter: { accountTag: $accountTag }) {
        magicFirewallSamplesAdaptiveGroups(
          filter: $filter
          limit: 10
          orderBy: [datetimeFiveMinute_DESC]
        ) {
          sum {
            bits
            packets
          }
          dimensions {
            datetimeFiveMinute
            ruleId
          }
        }
      }
    }
  }",
  "variables": {
    "accountTag": "<CLOUDFLARE_ACCOUNT_TAG>",
    "filter": {
      "datetime_geq": "2022-07-24T11:00:00Z",
      "datetime_leq": "2022-07-24T11:10:00Z"
    }
  }
}' | tr -d '\n' | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @-
```

The returned values represent the total number of packets and bits received during each five-minute interval for a particular rule. The result is in JSON (as requested), so piping the output to `jq` formats it for easier reading, as in the following example:

Terminal window

```

... | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @- | jq .

#=> {
#=>   "data": {
#=>     "viewer": {
#=>       "accounts": [
#=>         {
#=>           "magicFirewallSamplesAdaptiveGroups": [
#=>             {
#=>               "sum": { "bits": 327680, "packets": 16384 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinute": "2021-05-12T22:20-00:00",
#=>                 "ruleId": "bdfa8f8f0ae142b4a70ef15f6160e532"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 360448, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinute": "2021-05-12T22:15-00:00",
#=>                 "ruleId": "bdfa8f8f0ae142b4a70ef15f6160e532"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 327680, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinute": "2021-05-12T22:10-00:00",
#=>                 "ruleId": "bdfa8f8f0ae142b4a70ef15f6160e532"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 360448, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinute": "2021-05-12T22:05-00:00",
#=>                 "ruleId": "bdfa8f8f0ae142b4a70ef15f6160e532"
#=>               }
#=>             },
#=>             {
#=>               "sum": { "bits": 327680, "packets": 8192 },
#=>               "dimensions": {
#=>                 "datetimeFiveMinute": "2021-05-12T22:00-00:00",
#=>                 "ruleId": "bdfa8f8f0ae142b4a70ef15f6160e532"
#=>               }
#=>             }
#=>           ]
#=>         }
#=>       ]
#=>     }
#=>   },
#=>   "errors": null
#=> }
```

## Footnotes

1. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. [↩](#user-content-fnref-1)

---

---
title: Querying Workers Metrics with GraphQL
description: In this example, we are going to use the GraphQL Analytics API to query for Workers Metrics over a specified time period. We can query up to one month of data for dates up to three months ago.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Querying Workers Metrics with GraphQL

In this example, we are going to use the GraphQL Analytics API to query for Workers Metrics over a specified time period. We can query up to one month of data for dates up to three months ago.

The following API call requests a Worker script's metrics over a one-hour period and outputs the requested fields. Replace `<CLOUDFLARE_ACCOUNT_TAG>` and `<API_TOKEN>`[1](#user-content-fn-1) with your account ID and API token, and adjust the `datetimeStart`, `datetimeEnd`, and `scriptName` variables as needed.

## API Call

Terminal window

```

echo '{ "query":
  "query GetWorkersAnalytics($accountTag: string, $datetimeStart: string, $datetimeEnd: string, $scriptName: string) {
    viewer {
      accounts(filter: {accountTag: $accountTag}) {
        workersInvocationsAdaptive(limit: 100, filter: {
          scriptName: $scriptName,
          datetime_geq: $datetimeStart,
          datetime_leq: $datetimeEnd
        }) {
          sum {
            subrequests
            requests
            errors
          }
          quantiles {
            cpuTimeP50
            cpuTimeP99
          }
          dimensions {
            datetime
            scriptName
            status
          }
        }
      }
    }
  }",
  "variables": {
    "accountTag": "<CLOUDFLARE_ACCOUNT_TAG>",
    "datetimeStart": "2022-08-04T00:00:00.000Z",
    "datetimeEnd": "2022-08-04T01:00:00.000Z",
    "scriptName": "worker-subrequest-test-client"
  }
}' | tr -d '\n' | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @-
```
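If you prefer not to shell out to curl, the same POST can be built with the Python standard library. This is a sketch, not an official client: the endpoint and headers mirror the curl call above, the `build_request` helper is mine, and a real API token must be supplied before sending:

```python
# Build the same GraphQL POST request that the curl example sends.
import json
import urllib.request

GRAPHQL_ENDPOINT = "https://api.cloudflare.com/client/v4/graphql"

def build_request(query: str, variables: dict, api_token: str) -> urllib.request.Request:
    # Serialize {"query": ..., "variables": ...} exactly as the shell payload does.
    payload = json.dumps({"query": query, "variables": variables}).encode()
    return urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
    )

req = build_request("query { viewer { __typename } }", {}, "<API_TOKEN>")
print(req.get_full_url())  # https://api.cloudflare.com/client/v4/graphql
# To send: urllib.request.urlopen(req) -- requires a valid token.
```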

The results are returned in JSON (as requested), so piping the output to `jq` formats them for easier reading, as in the following example:

Terminal window

```

... | curl --silent \
https://api.cloudflare.com/client/v4/graphql \
--header "Authorization: Bearer <API_TOKEN>" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data @- | jq .

#=> {
#=>   "data": {
#=>     "viewer": {
#=>       "accounts": [
#=>         {
#=>           "workersInvocationsAdaptive": [
#=>             {
#=>               "dimensions": {
#=>                 "datetime": "2020-05-04T18:10:35Z",
#=>                 "scriptName": "worker-subrequest-test-client",
#=>                 "status": "success"
#=>               },
#=>               "quantiles": {
#=>                 "cpuTimeP50": 206,
#=>                 "cpuTimeP99": 206
#=>               },
#=>               "sum": {
#=>                 "errors": 0,
#=>                 "requests": 1,
#=>                 "subrequests": 0
#=>               }
#=>             },
#=>             {
#=>               "dimensions": {
#=>                 "datetime": "2020-05-04T18:10:34Z",
#=>                 "scriptName": "worker-subrequest-test-client",
#=>                 "status": "success"
#=>               },
#=>               "quantiles": {
#=>                 "cpuTimeP50": 291,
#=>                 "cpuTimeP99": 291
#=>               },
#=>               "sum": {
#=>                 "errors": 0,
#=>                 "requests": 1,
#=>                 "subrequests": 0
#=>               }
#=>             },
#=>             {
#=>               "dimensions": {
#=>                 "datetime": "2020-05-04T18:10:49Z",
#=>                 "scriptName": "worker-subrequest-test-client",
#=>                 "status": "success"
#=>               },
#=>               "quantiles": {
#=>                 "cpuTimeP50": 212.5,
#=>                 "cpuTimeP99": 261.19
#=>               },
#=>               "sum": {
#=>                 "errors": 0,
#=>                 "requests": 4,
#=>                 "subrequests": 0
#=>               }
#=>             }
#=>           ]
#=>         }
#=>       ]
#=>     }
#=>   },
#=>   "errors": null
#=> }
```

## Footnotes

1. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. [↩](#user-content-fnref-1)

---

---
title: Use GraphQL to create widgets
description: This article presents examples of queries you can use to populate your own dashboard.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Use GraphQL to create widgets

This article presents examples of queries you can use to populate your own dashboard.

* [Parameters and filters](#parameters-and-filters)
* [Timeseries graph](#timeseries-graph)
* [Activity log](#activity-log)
* [Top N cards - source](#top-n-cards---source)
* [Top N cards - destination](#top-n-cards---destination)
* [TCP Flags](#tcp-flags)
* [Executive summary](#executive-summary)

Use this workflow to build and test queries:

* Install and configure the [GraphiQL ↗](https://www.gatsbyjs.com/docs/how-to/querying-data/running-queries-with-graphiql/) app to authenticate to the Cloudflare Analytics GraphQL API. Cloudflare recommends token authentication. Refer to [Configure an Analytics API token](https://developers.cloudflare.com/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information.
* Construct your queries in GraphiQL. You can use the introspective documentation in the GraphQL client to explore the available nodes. For more information about queries, refer to [Querying basics](https://developers.cloudflare.com/analytics/graphql-api/getting-started/querying-basics/).
* Test your queries by running them from GraphiQL or by passing them as the payload in a cURL request to the GraphQL API endpoint.
* Use the queries in your application to provide data for your dashboard widgets.

## Parameters and filters

These examples use the account ID for the Cloudflare account that you are querying. You can define this as a variable (`accountTag`) and reference it in your queries.

The queries also use a filter to specify the time interval that you want to query. The filter uses a start time and end time to define the time interval. You use different attributes to specify the start and end times, depending on the time period that you want to query. Refer to [Filtering](https://developers.cloudflare.com/analytics/graphql-api/features/filtering/) for further information about filters.

The following example queries for data with dates greater than or equal to `date_geq` and less than or equal to `date_leq`:

Account and query time interval settings

```

{
  "accountTag": "{account-id}",
  "filter": {
    "AND": [{ "date_geq": "2020-01-19" }, { "date_leq": "2020-01-20" }]
  }
}
```

This table lists Network Analytics datasets (nodes) and the `datetimeDimension` that you should use when querying data for a given time selection.

When you want an aggregated view of data, use the `Groups` query nodes. For example, the `ipFlows1mAttacksGroups` dataset represents minute-wise rollup reports of attack activity. For more detail, refer to [Datasets](https://developers.cloudflare.com/analytics/graphql-api/features/data-sets/).

| **Time Selection** | **Query node**              | **datetimeDimension**       |
| ------------------ | --------------------------- | --------------------------- |
| Last week          | ipFlows1dGroups             | date                        |
| Last month         | ipFlows1dGroups             | date                        |
| 24 hours           | ipFlows1mGroups             | datetimeFifteenMinutes      |
| 12 hours           | ipFlows1mGroups             | datetimeFifteenMinutes      |
| 6 hours            | ipFlows1mGroups             | datetimeFiveMinutes         |
| 30 mins            | ipFlows1mGroups             | datetimeMinute              |
| Custom range       | Dependent on range selected | Dependent on range selected |

The table below lists the start and end time attributes that are valid for query nodes representing different time ranges.

| **Query node**         | **Start day / time filter** | **End day / time filter** |
| ---------------------- | --------------------------- | ------------------------- |
| ipFlows1mGroups        | datetimeMinute\_geq         | datetimeMinute\_leq       |
| ipFlows1mAttacksGroups | date\_geq                   | date\_leq                 |
| ipFlows1hGroups        | datetimeHour\_geq           | datetimeHour\_leq         |
| ipFlows1dGroups        | date\_geq                   | date\_leq                 |
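The mapping in the table above can be captured in a small helper that builds a filter in the same shape as the JSON example earlier. This is an illustrative sketch, not part of the API:

```javascript
// Illustrative lookup: map a query node to its start/end filter attributes,
// mirroring the table above.
const NODE_TIME_FILTERS = {
  ipFlows1mGroups: { start: "datetimeMinute_geq", end: "datetimeMinute_leq" },
  ipFlows1mAttacksGroups: { start: "date_geq", end: "date_leq" },
  ipFlows1hGroups: { start: "datetimeHour_geq", end: "datetimeHour_leq" },
  ipFlows1dGroups: { start: "date_geq", end: "date_leq" },
};

function buildTimeFilter(node, startValue, endValue) {
  // Returns a filter object usable as the `filter` variable of a query.
  const { start, end } = NODE_TIME_FILTERS[node];
  return { AND: [{ [start]: startValue }, { [end]: endValue }] };
}
```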

## Timeseries graph

Use the following query to build the timeseries graph in network analytics:

Timeseries graph

```graphql
query ipFlowTimeseries(
  $accountTag: string
  $filter: AccountIpFlows1mGroupsFilter_InputObject
) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      ipFlows1mGroups(
        limit: 1000
        filter: $filter
        orderBy: datetimeMinute_ASC
      ) {
        dimensions {
          timestamp: datetimeMinute
          attackMitigationType
          attackId
        }
        sum {
          bits
          packets
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAlgBwGIBsD2B3AKnAtmAZ0jkIAoAoGGAEgEMBjetEAOwBctaBzALhgLYQ4LLpRoAzOCjaQ+AQUbN2ASWToMBAIy4A4hGYICSKTIgB9ZSwQg2AeQBGAKzD025AJQwA3mIBuJDEhvMSoGJlY2AlJJaVlvGDClDm4+OkUIzi4YAF9PHyoC+DVMLV19EEMKQsKUPDg2Pk0ABhaQ6pjTVI7INsK0CAATSAAhKD4B2hk2PDAAWWEbMDM5AGUAYV6YPM2qAZmWAjg0A+Dq6un8AVpcBHHJsAu5hZkdgsm2BgBreemuSaOWFgoAgwK9QmwPvRPsoBq9sq8CCBcKczgV7PUCGCYAgvg9Maicpt4YViYTskA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)

## Activity log

This query returns an activity log summarizing minute-wise rollups of attack traffic in IP flows. The query groups the data by the fields listed in the `dimensions` object.

Activity log query

```graphql
query ipFlowEventLog(
  $accountTag: string
  $filter: AccountIpFlows1mAttacksGroupsFilter_InputObject
) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      ipFlows1mAttacksGroups(
        limit: 10
        filter: $filter
        orderBy: [min_datetimeMinute_ASC]
      ) {
        dimensions {
          attackId
          attackDestinationIP
          attackDestinationPort
          attackMitigationType
          attackSourcePort
          attackType
        }
        avg {
          bitsPerSecond
          packetsPerSecond
        }
        min {
          datetimeMinute
          bitsPerSecond
          packetsPerSecond
        }
        max {
          datetimeMinute
          bitsPerSecond
          packetsPerSecond
        }
        sum {
          bits
          packets
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBAlgBwGIBsD2B3AogNzAOwBcAZNAcwAoAoGGAEgEMBjJtEIgFQbIC4YBnQhDj4yNegDM4KQpD4BBFmyIBJZOgz8AjAFt5hQswDW-AOIQ2CfkmmyIAfRX4EIQgHkARgCswTQlQBKGABvcRw4MAxIEPFaZlZ2Qn4KKRk5EJh45UIuXnosxNyYAF8g0NoK+HVMbT0DYzMLECtqSsqUOB04Qj4tAAZYttS7PjphyEHKtAgAE0gAISg+AG0u-HsZhllCTrAAWRFXMHt5AGUAYQBdSZgym9oZ3fx+ODRnmLa2rcMmIxUZ+4Vb7GAAiYEEIi2r3wKgACoC4vVfmCIfgoW9YdN-J9KsDfgcdmR0fgOFAEGAEZkkUZTmwIEwwJiINicYifkZSeTAcVAQwcGQPqyYB5uvxYZBTr43gChTAEMYwElxRBJax8DKcTzWWtBazNttdgd8EdKSKlRKperKfLfoqxRa1RrPlqcToGAAPXU4-WKw2HWSm0XK1XS60K80qy1Otouz78EA6L2fM38MO2pLcm6xkriHnFIA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)

## Top N cards - source

This query returns data about the top source IPs. The `limit` argument controls the number of records returned for each node; each aliased node in the query below sets its own `limit`.

Top N Cards query

```graphql
query GetTopNBySource(
  $accountTag: string
  $filter: AccountIpFlows1mGroupsFilter_InputObject
  $portFilter: AccountIpFlows1mGroupsFilter_InputObject
) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      topNPorts: ipFlows1mGroups(
        limit: 5
        filter: $portFilter
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: sourcePort
          ipProtocol
        }
      }
      topNASN: ipFlows1mGroups(
        limit: 5
        filter: $filter
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: sourceIPAsn
          description: sourceIPASNDescription
        }
      }
      topNIPs: ipFlows1mGroups(
        limit: 5
        filter: $filter
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: sourceIP
        }
      }
      topNColos: ipFlows1mGroups(
        limit: 10
        filter: $filter
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: coloCity
          coloCode
        }
      }
      topNCountries: ipFlows1mGroups(
        limit: 10
        filter: $filter
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: coloCountry
        }
      }
      topNIPVersions: ipFlows1mGroups(
        limit: 2
        filter: $filter
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: ipVersion
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBA4mALgFQPYAcByAhKBlVECAYzAAoAoGamAEgENjjCA7FegcwC4YBnRCAEsWHKjVoAzQQBtEkHgEEmrRAEl0AMWmoA7rwCMAWzgRC6XhplyIAfVUt0IRAHkARgCswxRGOq10qBCIlrLyMErMIGzqWroGxqYg5iHWdg5Obp7eYgCUMADevjAAboJgOpAFRdSMkWy8ZFKhEDz5MLUqyJw8DMpR7BwwAL55hTTjMIgYmAAKgYi8PIKa2npGJmYN1TTSgoaCiDwArNvUTdY9AUEpkKcwgQAmkLg8ANq8IIY2ZK4HvAD06EYAGskLwcjYACIAUTwAGEALrbUZ3D6GKoTCZ1Q4wH5-QEgsE5O5DO4PPZgFi8QSoKkYzE0QxIITEHi8QgkMBzIJ3ajLGamKbMaQk7akzFTLAKPCYJYrOLrRLmSgMmC7fY4k6q85hSRWW6qx7PKBvNHfX4LAnEUELCEw+FIzEo1Vo+mq7E8PGWoHWomi1XkplUml0saqmBMgSCVl8DmkVQzBS8Fi8mBPXjEIToRAhtlxsAJ6WYSFgDNZnO0-0TcUMyWYBOLGDLWJrBKbFUM9UHY53HUtOh9u5GiAvGDvT7m-E+m3gqGwxHIt0TV1h90qT0WgHTv2qmuYwOU6m03hLzGRll5ojxmZV8Z7iZ1uGobSN5ureIbJJbcNqvbdmD6AADKmfY9IOP7DqO45fF6W6Erac4OnczrhiuqbUB6uKblaM7Ej+977hSwbHqeDLntGPDCqgcIHFA6EwFRT5PKmBE0Kx1CPioQilnKLYfkq37hl2OJASB+r9nqzSppBJpjmasE4USiELqqKEup8pGYphCnbraLGpgexGhvR5ExoxXF0fht4PtMCYAGqQEeVK8e+irtqmwk8AATGJzRgeJ0kQE8I6ydBk7evBs72ipDJqQyaE-hh65YVOkV4eG7E0IZTknqu4amXKDkQE5+m7mKRQ1kMQA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)

## Top N cards - destination

This query returns data about the top destination IPs. The `limit` argument controls the number of records returned; `limit: 5` on each node below returns the five highest results.

Top N Cards - Destination

```graphql
query GetTopNByDestination(
  $accountTag: string
  $filter: AccountIpFlows1mGroupsFilter_InputObject
  $portFilter: AccountIpFlows1mGroupsFilter_InputObject
) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      topNIPs: ipFlows1mGroups(
        filter: $filter
        limit: 5
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: destinationIP
        }
      }
      topNPorts: ipFlows1mGroups(
        filter: $portFilter
        limit: 5
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          metric: destinationPort
          ipProtocol
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBA4mALgFQPYAcByAhKARMAZ0QEsA7AQ1NTIAoAoGJmAEgoGN3UQyUKBzAFwxiEcv0bMWAMxIAbRJGEBBTt14BJdADE5qAO6EAjAFs4EbukLb5iiAH0NZdCEQB5AEYArMO0SSmFnRUCEQbBSUYVS4eRC1dA2MzCxArcLtHZ1dPHz9JAEoYAG8AmAA3EjB9SGLSpg4Y3kJaWQiIYSKYBvU+IVZu2OQBGABfQpLmSZhEDEwNAAVCYRIdPUNTc0tmuqnWu2EZW0gdybkSExJEYQBWE+YQgBNIXGEAbUIQE3taD0vCAHp0BwANZIQj5ex4ACiAGUAMIAXTu4zuTA+JlqUyxMEaVxgPz+gJBYPyqNGZIe5zAZEIJBohEx2MmJiQYnYwiexHIVDpZAWZJGd0FTJmWHmIUQSxgKwS62SWwYTKYe0iQQl6WOSpgZwueNuWsezygb3R31+kqJ7FBkoh0PhSKZKK16MZWtxwgJFqBVpJAopVJpvIZEy1MBZiDZHKIpEo1DI4tCZKYK3mFhmXDkfqZwqmOaYOZGQA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)

## TCP Flags

This query extracts the number of TCP packets from the minute-wise rollups of IP flows, and groups the results by TCP flag value. It uses `limit: 8` to display the top eight results, and presents them in descending order.

Add the following line to the filter to indicate that you want to view TCP data:

```json
{ "ipProtocol": "TCP" }
```

TCP Flags query

```graphql
query GetTCPFlags(
  $accountTag: string
  $filter: AccountIpFlows1mGroupsFilter_InputObject
) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      tcpFlags: ipFlows1mGroups(
        filter: $filter
        limit: 8
        orderBy: [sum_(bits/packets)_DESC]
      ) {
        sum {
          count: (bits/packets)
        }
        dimensions {
          tcpFlags
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBA4mALgFQMIAUBiAbAhgcwGcAKAKBgpgBJcBjWgexADsUCAuGQxCAS2fzlKVAGa9siSJwCC9JqwCSABxwMA7oQCMAWzgQmSwpnGSIAfQXMlIRAHkARgCswtREICUMAN5CKAN14wNUhvX0o6RhZEEjEJKW8YCPk2fE4aOSjkAhgAX08fSkKYRFoVPCJOXjL1LV19EEMyIqLY0zTWyDDm7F5tXkROAA4uooYIABNIACEoTgBtQhBtM2J7fsIAeiU6AGskQnczABEAUQBlVABdEYp8m8pF7VDm5sjWTlX1rd3993uKHL-GDjXpgZiEXgMcHPF6FEplAiEIGA2EoopogFCHJAA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)

## Executive summary

The executive summary query summarizes overall activity, so it filters only by the selected time interval and ignores any other filters applied to the analytics. Use different queries depending on the time interval you want to examine and the kind of traffic the account is seeing.

If the time interval is absolute, for example March 25th 09:00 to March 25th 17:00, then execute a query for attacks within those times. [Use the appropriate query node](#parameters-and-filters), for example `ipFlows1dGroups`, for the time interval.

GetPreviousAttacks query - fetch previous attacks

```graphql
query GetPreviousAttacks($accountTag: string, $filter: filter) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      ${queryNode}(limit: 1000, filter: $filter) {
        dimensions {
          attackId
        }
        sum {
          packets
          bits
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBA4mALgBQmAbgSwPYgM4CCiiAhgMYDWeAFACTlm4B2iAKiQOYBcMeiEmJhwA0MWgDNMAG0SQekmZACUMAN4AoGDCxgA7pDWatMBs0Q0FsiD1WmQLdtzF2HnAL4qNx47VWhIUABy2AAmYG7UUpgAtpiIPACMAAwpopZyYukQnkbeWiExYEx4OMWGeXkkxOQUAJIhuXlujd54INHlFcYADjVIeC15AEZxA10wzRWTxtPNbkA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)

If the time interval is relative to the current time, for example the last 24 hours or the last 30 minutes, then make a query to the `ipFlows1mGroups` node to check whether there were attacks in the past five minutes. Attacks within the past five minutes are classed as ongoing: the Activity Log displays `Present`. The query response lists the `attackId` values of ongoing attacks.

GetOngoingAttackIds query - check for ongoing attacks

```graphql
query GetOngoingAttackIds($accountTag: string, $filter: filter) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      ipFlows1mGroups(limit: 1000, filter: $filter) {
        dimensions {
          attackId
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBA4mALgeQHYHMD2BLDBBRRAQwGMBrASQBMBnACgBJSTMRVEAVI9ALhhsQRc6ADQwGAM2wAbRJD5TZkAJQwA3gCgYMAG7YwAd0jqt2mM1bt6iuRD5rzJFm07c+TJ5dfoYAX1WaZmbYAA4AYtKYBjQAjAC2cBCsIfTS2HHYiHwxAAx5Yjby4oUQAaZB2lTpYKg02Ji1JhUVRISklFTlFb5d2j1B-X4avkA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)
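The five-minute lookback used by this check can be computed when building the query's `filter` variable. This helper is illustrative, not part of the API:

```javascript
// Illustrative: build a datetimeMinute filter covering the past five minutes.
function lastFiveMinutesFilter(now = new Date()) {
  const start = new Date(now.getTime() - 5 * 60 * 1000);
  start.setUTCSeconds(0, 0); // datetimeMinute values are minute-aligned
  return { datetimeMinute_geq: start.toISOString().replace(".000Z", "Z") };
}
```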

If there are ongoing attacks, query the `ipFlows1mAttacksGroups` node, filtering with the `attackId` values from the previous query. The query below returns the maximum bit and packet rates.

GetOngoingAttacks query - fetch data for ongoing attacks

```graphql
query GetOngoingAttacks($accountTag: string, $filter: filter) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      ipFlows1mAttacksGroups(limit: 1000, filter: $filter) {
        dimensions {
          attackId
        }
        max {
          bitsPerSecond
          packetsPerSecond
        }
      }
    }
  }
}
```

[Run in GraphQL API Explorer](https://graphql.cloudflare.com/explorer?query=I4VwpgTgngBA4mALgeQHYHMD2BLDBBRRAQwGMBrAZwAoASUkzEVRAFSPQC4YLEJd0ANDBoAzbABtEkLmMmQAlDADeAKBgwAbtjAB3SMrXqY9Rs2qypELkuMkGTVuy507px+hgBfRaqNHsAA4AYuKYOhQAjAC2BMTkFHAQjAHU4thR2IhcEQAMeUIW0sKFED6GfuoAJulgqBTYmHUGFRVEhKRkAJKV5RWevX5RRAAezS1GAEaZFAAKkADKYAyoPeNGAR1IswtLjavj-S2HRsf9nkA&variables=N4IghgxhD2CuB2AXAKmA5iAXCAggYTwHkBVAOWQH0BJAERAF8g)

If there are no ongoing attacks, use the `GetPreviousAttacks` query to display data for attacks within an absolute time interval.


---

---
title: Workers Analytics Engine
description: Workers Analytics Engine provides unlimited-cardinality analytics at scale, via a built-in API to write data points from Workers, and a SQL API to query that data.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Workers Analytics Engine

Workers Analytics Engine provides unlimited-cardinality analytics at scale, via [a built-in API](https://developers.cloudflare.com/analytics/analytics-engine/get-started/) to write data points from Workers, and a [SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/) to query that data.

You can use Workers Analytics Engine to:

* Expose custom analytics to your own customers
* Build usage-based billing systems
* Understand the health of your service on a per-customer or per-user basis
* Add instrumentation to frequently called code paths, without impacting performance or overwhelming external analytics systems with events
[ Get started ](https://developers.cloudflare.com/analytics/analytics-engine/get-started/) 


---

---
title: Get started
description: Add the following to your Wrangler configuration file to create a binding to a Workers Analytics Engine dataset. A dataset is like a table in SQL: the rows and columns should have consistent meaning.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Get started

## 1. Name your dataset and add it to your Worker

Add the following to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) to create a [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to a Workers Analytics Engine dataset. A dataset is like a table in SQL: the rows and columns should have consistent meaning.

Note

You do not need to manually create a dataset in the Cloudflare dashboard. Workers Analytics Engine datasets are created automatically the first time you write to them after defining the binding in your Wrangler configuration.

wrangler.jsonc

```jsonc
{
  "analytics_engine_datasets": [
    {
      "binding": "<BINDING_NAME>",
      "dataset": "<DATASET_NAME>"
    }
  ]
}
```

wrangler.toml

```toml
[[analytics_engine_datasets]]
binding = "<BINDING_NAME>"
dataset = "<DATASET_NAME>"
```

## 2. Write data points from your Worker

You can write data points from your Worker by calling the `writeDataPoint()` method that is exposed on the binding that you just created.

JavaScript

```js
async fetch(request, env) {
  env.WEATHER.writeDataPoint({
    'blobs': ["Seattle", "USA", "pro_sensor_9000"], // City, country, sensor model
    'doubles': [25, 0.5], // Temperature, air pressure
    'indexes': ["a3cd45"] // Customer ID
  });
  return new Response("OK!");
}
```

Note

You do not need to await `writeDataPoint()` — it will return immediately, and the Workers runtime handles writing your data in the background.

A data point is a structured event that consists of:

* **Blobs** (strings) — The dimensions used for grouping and filtering. Sometimes called labels in other metrics systems.
* **Doubles** (numbers) — The numeric values that you want to record in your data point.
* **Indexes** (strings) — Used as a [sampling](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#sampling) key.

In the example above, suppose you are collecting air quality samples. Each data point written represents a reading from your weather sensor. The blobs define the city, country, and sensor model — the dimensions you want to be able to filter queries on later. The doubles define the numeric temperature and air pressure readings. And the index is the ID of your customer. You may want to include [context about the incoming request](https://developers.cloudflare.com/workers/runtime-apis/request/), such as geolocation, to add additional data to your data point.

Currently, the `writeDataPoint()` API accepts ordered arrays of values, so you must provide fields in a consistent order. Although the `indexes` field accepts an array, you must currently provide exactly one index. If you provide multiple indexes, your data point will not be recorded.
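Because fields are positional, a thin wrapper can keep the ordering consistent across call sites. This is a sketch; `writeWeatherReading` and its field names are illustrative, not part of the API:

```javascript
// Illustrative wrapper: keep blob/double positions consistent, and pass
// exactly one index so data points are never silently dropped.
function writeWeatherReading(dataset, reading) {
  const { city, country, sensorModel, temperature, pressure, customerId } = reading;
  return dataset.writeDataPoint({
    blobs: [city, country, sensorModel], // always in this order
    doubles: [temperature, pressure],    // always in this order
    indexes: [customerId],               // exactly one index
  });
}
```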

## 3. Query data using the SQL API

You can query the data you have written in two ways:

* [**SQL API**](https://developers.cloudflare.com/analytics/analytics-engine/sql-api) — Best for writing your own queries and integrating with external tools like Grafana.
* [**GraphQL API**](https://developers.cloudflare.com/analytics/graphql-api/) — This is the same API that powers the Cloudflare dashboard.

For the purpose of this example, we will use the SQL API.

### Create an API token

Create an [API Token ↗](https://dash.cloudflare.com/profile/api-tokens) that has the `Account Analytics Read` permission.

### Write your first query

The following query returns the top 10 cities that had the highest average humidity readings when the temperature was above zero:

```sql
SELECT
  blob1 AS city,
  SUM(_sample_interval * double2) / SUM(_sample_interval) AS avg_humidity
FROM WEATHER
WHERE double1 > 0
GROUP BY city
ORDER BY avg_humidity DESC
LIMIT 10
```

Note

We are using a custom averaging function to take [sampling](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#sampling) into account.

You can run this query by making an HTTP request to the SQL API:

Terminal window

```sh
curl "https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql" \
--header "Authorization: Bearer <API_TOKEN>" \
--data "SELECT blob1 AS city, SUM(_sample_interval * double2) / SUM(_sample_interval) AS avg_humidity FROM WEATHER WHERE double1 > 0 GROUP BY city ORDER BY avg_humidity DESC LIMIT 10"
```

Refer to the [Workers Analytics Engine SQL Reference](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/) for a full list of supported SQL functionality.
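In JavaScript terms, the sampling-aware average used in the query above works like this. The row shape is illustrative; each stored row stands in for `sampleInterval` original events, mirroring `SUM(_sample_interval * double2) / SUM(_sample_interval)`:

```javascript
// Weight each stored value by the number of original events it represents.
function sampledAverage(rows) {
  let weighted = 0;
  let weight = 0;
  for (const { sampleInterval, value } of rows) {
    weighted += sampleInterval * value;
    weight += sampleInterval;
  }
  return weighted / weight;
}
```

A plain average of the stored rows would over-weight lightly sampled rows; weighting by the sample interval recovers an unbiased estimate.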

### Working with time series data

Workers Analytics Engine is optimized for powering time series analytics that can be visualized using tools like Grafana. Every event written from the runtime is automatically populated with a `timestamp` field. Most time series queries round the `timestamp` and then `GROUP BY` it. For example:

```sql
SELECT
  intDiv(toUInt32(timestamp), 300) * 300 AS t,
  blob1 AS city,
  SUM(_sample_interval * double2) / SUM(_sample_interval) AS avg_humidity
FROM WEATHER
WHERE
  timestamp >= NOW() - INTERVAL '1' DAY
  AND double1 > 0
GROUP BY t, city
ORDER BY t, avg_humidity DESC
```

This query first rounds the `timestamp` field down to the start of its five-minute interval. Then, it groups by that rounded timestamp and city, and calculates the average humidity in each city over each five-minute period.
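The `intDiv(toUInt32(timestamp), 300) * 300` expression is integer division by 300 seconds; an equivalent in JavaScript:

```javascript
// Round a Date down to the start of its five-minute bucket (epoch seconds),
// equivalent to intDiv(toUInt32(timestamp), 300) * 300 in the SQL above.
function fiveMinuteBucket(date) {
  const epochSeconds = Math.floor(date.getTime() / 1000);
  return epochSeconds - (epochSeconds % 300);
}
```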

Refer to [Querying Workers Analytics Engine from Grafana](https://developers.cloudflare.com/analytics/analytics-engine/grafana/) for more details on how to create efficient Grafana queries against Workers Analytics Engine.

## Further reading

* [ Get started ](https://developers.cloudflare.com/analytics/analytics-engine/get-started/)
* [ Examples ](https://developers.cloudflare.com/analytics/analytics-engine/recipes/)
* [ SQL API ](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/)
* [ SQL Reference ](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/)
* [ Querying from Grafana ](https://developers.cloudflare.com/analytics/analytics-engine/grafana/)
* [ Querying from a Worker ](https://developers.cloudflare.com/analytics/analytics-engine/worker-querying/)
* [ Sampling with WAE ](https://developers.cloudflare.com/analytics/analytics-engine/sampling/)
* [ Pricing ](https://developers.cloudflare.com/analytics/analytics-engine/pricing/)
* [ Limits ](https://developers.cloudflare.com/analytics/analytics-engine/limits/)


---

---
title: Querying from Grafana
description: Workers Analytics Engine is optimized for powering time series analytics that can be visualized using tools like Grafana. Every event written from the runtime is automatically populated with a timestamp field.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying from Grafana

Workers Analytics Engine is optimized for powering time series analytics that can be visualized using tools like Grafana. Every event written from the runtime is automatically populated with a `timestamp` field.

## Grafana plugin setup

We recommend the [Altinity plugin for ClickHouse ↗](https://grafana.com/grafana/plugins/vertamedia-clickhouse-datasource/) for querying Workers Analytics Engine from Grafana.

Configure the plugin as follows:

* URL: `https://api.cloudflare.com/client/v4/accounts/<account_id>/analytics_engine/sql`. Replace `<account_id>` with your 32-character account ID (available in the Cloudflare dashboard).
* Leave all auth settings off.
* Add a custom header with the name `Authorization` and the value `Bearer <token>`. Replace `<token>` with a suitable API token (refer to the [SQL API docs](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#authentication) for details).
* No other options need to be set.

## Querying timeseries data

For use in a dashboard, you usually want to aggregate some metric per time interval. This can be achieved by rounding and then grouping by the `timestamp` field. The following query rounds and groups in this way, and then computes an average across each time interval whilst taking into account [sampling](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#sampling).

```sql
SELECT
    intDiv(toUInt32(timestamp), 60) * 60 AS t,
    blob1 AS label,
    SUM(_sample_interval * double1) / SUM(_sample_interval) AS average_metric
FROM dataset_name
WHERE
    timestamp <= NOW()
    AND timestamp > NOW() - INTERVAL '1' DAY
GROUP BY blob1, t
ORDER BY t
```

The Altinity plugin provides some useful macros that can simplify writing queries of this type. The macros require setting `Column:DateTime` to `timestamp` in the query builder, then they can be used like this:

```sql
SELECT
    $timeSeries AS t,
    blob1 AS label,
    SUM(_sample_interval * double1) / SUM(_sample_interval) AS average_metric
FROM dataset_name
WHERE $timeFilter
GROUP BY blob1, t
ORDER BY t
```

This query automatically adjusts the rounding interval to the dashboard's zoom level and filters to the time range currently being displayed.


---

---
title: Limits
description: The following limits apply to Workers Analytics Engine:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Limits

The following limits apply to Workers Analytics Engine:

* Analytics Engine will accept up to twenty blobs, twenty doubles, and one index per call to `writeDataPoint`.
* The total size of all blobs in a data point must not exceed **16 KB**. This limit applies to **each individual data point**, regardless of how many data points a Worker invocation writes.
* Each index must not be more than 96 bytes.
* You can write a maximum of 250 data points per Worker invocation (client HTTP request). Each call to `writeDataPoint` counts towards this limit.
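A sketch of guarding the per-invocation write limit in application code. The class is illustrative, not part of the API; it simply stops calling `writeDataPoint` once the budget is spent:

```javascript
// Illustrative guard: stop writing once the 250-data-point budget for this
// invocation is spent, rather than letting later writes fail.
class DataPointBudget {
  constructor(limit = 250) {
    this.remaining = limit;
  }
  write(dataset, point) {
    if (this.remaining <= 0) return false; // budget exhausted; drop the point
    this.remaining -= 1;
    dataset.writeDataPoint(point);
    return true;
  }
}
```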

## Data retention

Data written to Workers Analytics Engine is stored for three months.

Interested in longer retention periods? Join the `#analytics-engine` channel in the [Cloudflare Developers Discord ↗](https://discord.cloudflare.com/) and tell us more about what you are building.


---

---
title: Pricing
description: Workers Analytics Engine is priced based on two metrics — data points written, and read queries.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Pricing

Workers Analytics Engine is priced based on two metrics — data points written, and read queries.

| Plan             | Data points written                                            | Read queries                                                 |
| ---------------- | -------------------------------------------------------------- | ------------------------------------------------------------ |
| **Workers Paid** | 10 million included per month  (+$0.25 per additional million) | 1 million included per month (+$1.00 per additional million) |
| **Workers Free** | 100,000 included per day                                       | 10,000 included per day                                      |

Pricing availability

Currently, you will not be billed for your use of Workers Analytics Engine. Pricing information here is shared in advance, so that you can estimate what your costs will be once Cloudflare starts billing for usage in the coming months.

If you are an Enterprise customer, contact your account team for information about Workers Analytics Engine pricing and billing.

### Data points written

Every time you call [writeDataPoint()](https://developers.cloudflare.com/analytics/analytics-engine/get-started/#2-write-data-points-from-your-worker) in a Worker, this counts as one data point written.

Each data point written costs the same amount. There is no extra cost to add dimensions or cardinality, and no additional cost for writing more data in a single data point.

### Read queries

Every time you post to Workers Analytics Engine's [SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/), this counts as one read query.

Each read query costs the same amount. There is no extra cost for more or less complex queries, and no extra cost for reading only a few rows of data versus many rows of data.
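As a worked example of the table above (a sketch using the quoted Workers Paid quotas and rates; check current pricing before relying on it), 50 million writes and 3 million read queries in a month would cost (50 − 10) × $0.25 + (3 − 1) × $1.00 = $12:

```javascript
// Illustrative monthly-cost calculation for the Workers Paid plan, using the
// included quotas and overage rates from the table above.
function workersPaidMonthlyCostUSD(dataPointsWritten, readQueries) {
  const writeOverageMillions = Math.max(0, dataPointsWritten - 10_000_000) / 1_000_000;
  const readOverageMillions = Math.max(0, readQueries - 1_000_000) / 1_000_000;
  return writeOverageMillions * 0.25 + readOverageMillions * 1.0;
}
```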


---

---
title: Usage-based billing
description: How to use Workers Analytics Engine to build usage-based billing into your SaaS product
image: https://developers.cloudflare.com/core-services-preview.png
---


# Usage-based billing

Many Cloudflare customers run software-as-a-service products with multiple customers. A major concern for such companies is understanding the cost of serving each customer, and understanding customer behaviour more widely.

Storing data on every web request made by a customer can be expensive, as can attributing page views to customers. At Cloudflare, we have solved this problem with the same in-house technology that is now available to you through Analytics Engine.

## Recording data on usage

Analytics Engine is designed for use with Cloudflare Workers. If you already use Cloudflare Workers to serve requests, you can start sending data into Analytics Engine in just a few lines of code:

JavaScript

```
  [...]

  // This example assumes you give a unique ID to each of your SaaS customers, and the Worker has
  // assigned it to the variable named `customer_id`
  const { pathname } = new URL(request.url);
  env.USAGE_INDEXED_BY_CUSTOMER_ID.writeDataPoint({
    "indexes": [customer_id],
    "blobs": [pathname]
  });
```

SaaS customer activity often follows an exponential pattern: one customer may make 100 million requests per second, while another makes 100 requests a day. If all data is sampled together, the usage of larger customers can cause smaller customers' data to be sampled down to zero. Analytics Engine allows you to prevent that: in the example code above, we supply the customer's unique ID as the index, so Analytics Engine samples each customer's activity independently.

## Viewing usage

You can start viewing customer data either using Grafana (for visualisations) or as JSON (for your own tools). Other areas of the Analytics Engine documentation explain this in-depth.

Look at customer usage over all endpoints:

```
SELECT
  index1 AS customer_id,
  sum(_sample_interval) AS count
FROM usage_indexed_by_customer_id
GROUP BY customer_id
```

If run in Grafana, this query returns a graph summarising the usage of each customer. The `sum(_sample_interval)` aggregation accounts for sampling; refer to [Sampling with WAE](https://developers.cloudflare.com/analytics/analytics-engine/sampling/) for details. This query answers the question: which customers are most active?

The example `writeDataPoint` call above writes an endpoint name. If you do that, you can break down customer activity by endpoint:

```
SELECT
  index1 AS customer_id,
  blob1 AS request_endpoint,
  sum(_sample_interval) AS count
FROM usage_indexed_by_customer_id
GROUP BY customer_id, request_endpoint
```

This can give you insight into which endpoints different customers are using. This can be useful for business purposes (for example, understanding customer needs) as well as for your engineers to observe activity and behaviour (observability).

## Billing customers

Analytics Engine can be used to bill customers based on a reliable approximation of usage. To get the best approximation, we suggest executing one query per customer when generating bills. This can result in less sampling than querying multiple customers at once.

```
SELECT
  index1 AS customer_id,
  blob1 AS request_endpoint,
  sum(_sample_interval) AS usage_count
FROM usage_indexed_by_customer_id
WHERE
  customer_id = 'substitute_customer_id_here'
  AND timestamp >= toDateTime('2023-03-01 00:00:00')
  AND timestamp < toDateTime('2023-04-01 00:00:00')
GROUP BY customer_id, request_endpoint
```

Running this query once for each customer at the end of each month gives you the data to produce a bill. This is just an example: you will most likely want to adapt it to how you bill.

When producing a bill, you will most likely also want to show daily usage. The following query breaks down usage by day:

```
SELECT
  index1 AS customer_id,
  toStartOfInterval(timestamp, INTERVAL '1' DAY) AS date,
  blob1 AS request_endpoint,
  sum(_sample_interval) AS request_count
FROM usage_indexed_by_customer_id
WHERE
  customer_id = 'x'
  AND timestamp >= toDateTime('2023-03-01 00:00:00')
  AND timestamp < toDateTime('2023-04-01 00:00:00')
GROUP BY customer_id, date, request_endpoint
```

Take the usage queries above, adapt them to how you charge customers, and have a backend system run the queries and calculate each customer's charges from the data returned.
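As a rough sketch of such a backend job (assuming a flat per-request rate; `RATE_PER_REQUEST`, `accountId`, `apiToken`, and the billing window are placeholders), a script could post the per-customer query to the SQL API with `FORMAT JSON` and price the returned usage:

```javascript
// Sketch of a monthly billing job. The endpoint follows the SQL API docs;
// the per-request rate and credentials below are made-up placeholders.
const RATE_PER_REQUEST = 0.0001; // dollars per request (example rate)

// Pure helper: turn rows of { usage_count } into a dollar amount.
function computeCharge(rows, ratePerRequest) {
  const totalUsage = rows.reduce((sum, row) => sum + Number(row.usage_count), 0);
  return totalUsage * ratePerRequest;
}

async function billCustomer(accountId, apiToken, customerId) {
  // Note: don't interpolate untrusted input into SQL like this in production.
  const query = `
    SELECT sum(_sample_interval) AS usage_count
    FROM usage_indexed_by_customer_id
    WHERE index1 = '${customerId}'
      AND timestamp >= toDateTime('2023-03-01 00:00:00')
      AND timestamp <  toDateTime('2023-04-01 00:00:00')
    FORMAT JSON`;
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/analytics_engine/sql`,
    { method: "POST", headers: { Authorization: `Bearer ${apiToken}` }, body: query },
  );
  const { data } = await response.json(); // FORMAT JSON returns rows in `data`
  return computeCharge(data, RATE_PER_REQUEST);
}
```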


---

---
title: Sampling with WAE
description: How data written to Workers Analytics Engine is automatically sampled at scale
image: https://developers.cloudflare.com/core-services-preview.png
---


# Sampling with WAE

Workers Analytics Engine offers the ability to write an extensive amount of data and retrieve it quickly, at minimal or no cost. To facilitate writing large amounts of data at a reasonable cost, Workers Analytics Engine employs weighted adaptive [sampling ↗](https://en.wikipedia.org/wiki/Sampling%5F%28statistics%29).

With sampling, you do not need every single data point to answer questions about a dataset. For a sufficiently large dataset, the [necessary sample size ↗](https://select-statistics.co.uk/blog/importance-effect-sample-size/) does not depend on the size of the original population. It depends instead on the variance of your measure, the size of the subgroups you analyze, and how accurate your estimate must be.

The implication for Analytics Engine is that we can compress very large datasets into many fewer observations, yet still answer most queries with very high accuracy. This enables us to offer an analytics service that can measure very high rates of usage, with unbounded cardinality, at a low and predictable price.

At a high level, the way sampling works is:

1. At write time, we sample if data points are written too quickly into one index.
2. We sample again at query time if the query is too complex.

In the following sections, you will learn:

* [How sampling works](https://developers.cloudflare.com/analytics/analytics-engine/sampling/#how-sampling-works).
* [How to read sampled data](https://developers.cloudflare.com/analytics/analytics-engine/sampling/#how-to-read-sampled-data).
* [How is data sampled](https://developers.cloudflare.com/analytics/analytics-engine/sampling/#how-is-data-sampled).
* [How Adaptive Bit Rate Sampling works](https://developers.cloudflare.com/analytics/analytics-engine/sampling/#adaptive-bit-rate-sampling-at-read-time).
* [How to pick your index such that your data is sampled in a usable way](https://developers.cloudflare.com/analytics/analytics-engine/sampling/#how-to-select-an-index).

## How sampling works

Cloudflare's data sampling is similar to how online mapping services like Google Maps render maps at different zoom levels. When viewing satellite imagery of a whole continent, the mapping service provides appropriately sized images based on the user's screen and Internet speed.

![The image on the left shows a satellite view from OpenStreetMap. On the right, the same image is zoomed in. In these two images, each pixel represents the same area; however the image on the right has many fewer pixels.](https://developers.cloudflare.com/_astro/zoom-less-pixels.CTBizcEW_1fmnak.webp) 

Each pixel on the map represents a large area, such as several square kilometers. If a user tries to zoom in using a screenshot, the resulting image would be blurry. Instead, the mapping service selects higher-resolution images when a user zooms in on a specific city. The total number of pixels remains relatively constant, but each pixel now represents a smaller area, like a few square meters.

![Now the image on the right is of a much higher resolution. Each pixel represents a much smaller area; however, the total number of pixels in both images is roughly the same.](https://developers.cloudflare.com/_astro/zoom-more-pixels.CFR4ChGF_ZSBF09.webp) 

The key point is that the map's quality does not solely depend on the resolution or the area represented by each pixel. It is determined by the total number of pixels used to render the final view.

There are similarities between how a mapping service handles resolution and how Cloudflare Analytics delivers analytics using adaptive sampling:

* **How data is stored**:  
   * **Mapping service**: Imagery stored at different resolutions.  
   * **Cloudflare Analytics**: Events stored at different sample rates.
* **How data is displayed to user**:  
   * **Mapping service**: The total number of pixels is \~constant for a given screen size, regardless of the area selected.  
   * **Cloudflare Analytics**: A similar number of events are read for each query, regardless of the size of the dataset or length of time selected.
* **How a resolution is selected**:  
   * **Mapping service**: The area represented by each pixel will depend on the size of the map being rendered. In a more zoomed out map, each pixel will represent a larger area.  
   * **Cloudflare Analytics**: The sample interval of each event in the result depends on the size of the underlying dataset and length of time selected. For a query over a large dataset or long length of time, each sampled event may stand in for many similar events.

## How to read sampled data

To effectively write queries and analyze the data, it is helpful to first learn how sampled data is read in Workers Analytics Engine.

In Workers Analytics Engine, every event is recorded with the `_sample_interval` field. The sample interval is the inverse of the sample rate. For example, if a one percent (1%) sample rate is applied, the `_sample_interval` will be set to `100`.

In terms of the mapping example, the sample interval is the number of unsampled data points that a given sampled data point stands in for, just as a single pixel stands in for some number of square kilometers or meters.

The sample interval is a property associated with each individual row stored in Workers Analytics Engine. Due to the implementation of equitable sampling, the sample interval can vary for each row. As a result, when querying the data, you need to consider the sample interval field. Simply multiplying the query result by a constant sampling factor is not sufficient.

Here are some examples of how to express some common queries over sampled data.

| Use case                           | Example without sampling | Example with sampling                                      |
| ---------------------------------- | ------------------------ | ---------------------------------------------------------- |
| Count events in a dataset          | count()                  | sum(\_sample\_interval)                                    |
| Sum a quantity, for example, bytes | sum(bytes)               | sum(bytes \* \_sample\_interval)                           |
| Average a quantity                 | avg(bytes)               | sum(bytes \* \_sample\_interval) / sum(\_sample\_interval) |
| Compute quantiles                  | quantile(0.50)(bytes)    | quantileExactWeighted(0.50)(bytes, \_sample\_interval)     |

Note that the accuracy of results is not determined by the sample interval, similar to the mapping analogy mentioned earlier. A high sample interval can still provide precise results. Instead, accuracy depends on the total number of data points queried and their distribution.
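The table's rules can be applied in plain JavaScript terms: each stored row carries its own `_sample_interval`, so estimates weight rows individually rather than multiplying by one constant factor. The rows below are hypothetical.

```javascript
// Hypothetical stored rows, each with its own sample interval.
const rows = [
  { bytes: 500, _sample_interval: 1 },   // stored unsampled
  { bytes: 700, _sample_interval: 10 },  // stands in for ~10 events
  { bytes: 300, _sample_interval: 100 }, // stands in for ~100 events
];

// count() over the unsampled data ≈ sum(_sample_interval)
const estimatedCount = rows.reduce((sum, r) => sum + r._sample_interval, 0);

// sum(bytes) ≈ sum(bytes * _sample_interval)
const estimatedBytes = rows.reduce((sum, r) => sum + r.bytes * r._sample_interval, 0);

// avg(bytes) ≈ sum(bytes * _sample_interval) / sum(_sample_interval)
const estimatedAvg = estimatedBytes / estimatedCount;

console.log(estimatedCount, estimatedBytes); // 111 37500
```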

## How is data sampled

To determine the sample interval for each event, note that most analytics have some important type of subgroup that must be analyzed with accurate results. For example, you may want to analyze user usage or traffic to specific hostnames. Analytics Engine users can define these groups by populating the `index` field when writing an event. This allows for more targeted and precise analysis within the specified groups.

The next observation is that these index values likely have a very different number of events written to them. In fact, the usage of most web services follows a [Pareto distribution ↗](https://en.wikipedia.org/wiki/Pareto%5Fdistribution), meaning that the top few users will account for the vast majority of the usage. Pareto distributions are common and look like this:

![In this graphic, each bar represents a user; the height of the bar is their total usage.](https://developers.cloudflare.com/_astro/total-usage.DT9rN3Uq_Z1FdlV8.webp) 

If we took a [simple random sample ↗](https://en.wikipedia.org/wiki/Simple%5Frandom%5Fsample) of one percent (1%) of this data, and we applied that to the whole population, you may be able to track your largest customers accurately — but you would lose visibility into what your smaller customers are doing:

![The same graphic as above, but now based on a 1% sample of the data.](https://developers.cloudflare.com/_astro/sample-data.Db8bZbVI_1Wqq0E.webp) 

Notice that the larger bars look more or less unchanged, and yet they are still quite accurate. But as you analyze smaller customers, results get [quantized ↗](https://en.wikipedia.org/wiki/Quantization%5F%28signal%5Fprocessing%29) and may even be rounded to 0 entirely.

This shows that while a one percent (1%) or even smaller sample of a large population may be sufficient, we may need to store a larger proportion of events for a small population to get accurate results.

We do this through a technique called equitable sampling. This means that we will equalize the number of events we store for each unique index value. For relatively uncommon index values, we may write all of the data points that we get via `writeDataPoint()`. But if you write lots of data points to a single index value, we will start to sample.

Here is the same distribution, but now with (a simulation of) equitable sampling applied:

![This graphic shows the same population, but with equitable sampling.](https://developers.cloudflare.com/_astro/equitable-sampling.CzViMd9X_283FKf.webp) 

You may notice that this graphic is very similar to the first graph. However, it only requires `<10%` of the data to be stored overall. The sample rate is actually much lower than `10%` for the larger series (that is, we store larger sample intervals), but the sample rate is higher for the smaller series.

Refer back to the mapping analogy above. Regardless of the map area shown, the total number of pixels in the map stays constant. Similarly, we always want to store a similar number of data points for each index value. However, the resolution of the map — how much area is represented by each pixel — will change based on the area being shown. Similarly here, the amount of data represented by each stored data point will vary, based on the total number of data points in the index.
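The budgeting idea above can be sketched deterministically (the real implementation differs, and the per-index budget and event counts below are made up): each index value gets a similar storage budget, so busy indexes receive larger sample intervals while quiet ones are stored in full.

```javascript
// Simplified, deterministic sketch of per-index equitable sampling.
const STORAGE_BUDGET_PER_INDEX = 1000; // target rows stored per index value

function sampleIntervalFor(eventCount) {
  // Quiet indexes (under budget) keep every event (interval 1);
  // busier indexes keep roughly one stored row per `interval` events written.
  return Math.max(1, Math.ceil(eventCount / STORAGE_BUDGET_PER_INDEX));
}

console.log(sampleIntervalFor(200));       // 1    - all events stored
console.log(sampleIntervalFor(50_000));    // 50   - 1-in-50 sampling
console.log(sampleIntervalFor(5_000_000)); // 5000 - 1-in-5000 sampling
```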

## Adaptive Bit Rate Sampling at Read Time

Equitable sampling ensures that an equal amount of data is maintained for each index within a specific time frame. However, queries can vary significantly in the duration of time they target. Some queries may only require a 10-minute data snapshot, while others might need to analyze data spanning 10 weeks — a period which is 10,000 times longer.

To address this issue, we employ a method called [adaptive bit rate ↗](https://blog.cloudflare.com/explaining-cloudflares-abr-analytics/) (ABR). With ABR, queries that cover longer time ranges will retrieve data from a higher sample interval, allowing them to be completed within a fixed time limit. In simpler terms, just as screen size or bandwidth is a fixed resource in our mapping analogy, the time required to complete a query is also fixed. Therefore, irrespective of the volume of data involved, we need to limit the total number of rows scanned to provide an answer to the query. This helps to ensure fairness: regardless of the size of the underlying dataset being queried, we ensure that all queries receive an equivalent share of the available computing time.

To achieve this, we store the data in multiple resolutions (that is, with different levels of detail, for instance, 100%, 10%, 1%) derived from the equitably sampled data. At query time, we select the most suitable data resolution to read based on the query's complexity. The query's complexity is determined by the number of rows to be retrieved and the probability of the query completing within a specified time limit of N seconds. By dynamically selecting the appropriate resolution, we optimize the query performance and ensure it stays within the allotted time budget.
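The resolution-selection step can be sketched as follows (this is an illustration, not Cloudflare's actual algorithm; the row budget and per-resolution row counts are hypothetical): pick the finest stored resolution whose row count fits within a fixed query budget.

```javascript
// Pick the finest resolution (fraction of data stored) that fits the budget.
const ROW_BUDGET = 10_000;

function pickResolution(resolutions, budget) {
  for (const r of resolutions) {
    if (r.rows <= budget) return r; // resolutions are ordered finest-first
  }
  return resolutions[resolutions.length - 1]; // fall back to coarsest
}

// A query over a short time range: the full-resolution data fits the budget.
const shortRange = [
  { fraction: 1.0, rows: 6_000 },
  { fraction: 0.1, rows: 600 },
  { fraction: 0.01, rows: 60 },
];

// A query over a long time range: only a coarse resolution fits.
const longRange = [
  { fraction: 1.0, rows: 4_000_000 },
  { fraction: 0.1, rows: 400_000 },
  { fraction: 0.01, rows: 40_000 },
  { fraction: 0.001, rows: 4_000 },
];

console.log(pickResolution(shortRange, ROW_BUDGET).fraction); // 1
console.log(pickResolution(longRange, ROW_BUDGET).fraction);  // 0.001
```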

ABR offers a significant advantage by enabling us to consistently provide query results within a fixed query budget, regardless of the data size or time span involved. This sets it apart from systems that struggle with timeouts, errors, or high costs when dealing with extensive datasets.

## How to select an index

In order to get accurate results with sampled data, select an appropriate value to use as your index. The index should match how users will query and view data. For example, if users frequently view data based on a specific device or hostname, it is recommended to incorporate those attributes into your index.

The index has the following properties, which are important to consider when choosing an index:

* Get accurate summary statistics about your entire dataset, across all index values.
* Get an accurate count of the number of unique values of your index.
* Get accurate summary statistics (for example, count, sum) within a particular index value.
* See the `Top N` values of specific fields that are not in your index.
* Filter on most fields.
* Run other aggregations like quantiles.

Some limitations and trade-offs to consider are:

* You may not be able to get accurate unique counts of fields that are not in your index.  
   * For example, if you index on `hostname`, you may not be able to count the number of unique URLs.
* You may not be able to observe very rare values of fields not in the index.  
   * For example, a particular URL for a hostname, if you index on host and have millions of unique URLs.
* You may not be able to run accurate queries across multiple indices at once.  
   * For example, you may only be able to query for one host at a time (or all of them) and expect accurate results.
* There is no guarantee you can retrieve any one individual record.
* You cannot necessarily reconstruct exact sequences of events.

It is not recommended to write a unique index value on every row (like a UUID) for most use cases. While this will make it possible to retrieve individual data points very quickly, it will slow down most queries for aggregations and time series.

Refer to the Workers Analytics Engine FAQs for common questions about [Sampling](https://developers.cloudflare.com/analytics/faq/wae-faqs/#sampling).


---

---
title: SQL API
description: The SQL API for Workers Analytics Engine
image: https://developers.cloudflare.com/core-services-preview.png
---


# SQL API

The Workers Analytics Engine SQL API is an HTTP API that allows executing SQL queries against your Workers Analytics Engine datasets.

The API is hosted at `https://api.cloudflare.com/client/v4/accounts/<account_id>/analytics_engine/sql`.

## Authentication

Authentication is done via bearer token. An `Authorization: Bearer <token>` header must be supplied with every request to the API.

Use the dashboard to create a token with permission to read analytics data on your account:

1. Visit the [API tokens ↗](https://dash.cloudflare.com/profile/api-tokens) page in the Cloudflare dashboard.
2. Select **Create Token**.
3. Select **Create Custom Token**.
4. Complete the **Create Custom Token** form as follows:  
   * Give your token a descriptive name.  
   * For **Permissions**, select _Account_ | _Account Analytics_ | _Read_.  
   * Optionally configure account and IP restrictions and TTL.  
   * Submit and confirm the form to create the token.
5. Make a note of the token string.

## Querying the API

Submit the query text in the body of a `POST` request to the API address. The format of the data returned can be selected using the [FORMAT](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/statements/#format-clause) option in your query.

You can use cURL to test the API as follows, replacing `{account_id}` with your 32-character account ID (available in the dashboard) and `<API_TOKEN>` with the token string you generated above.

Terminal window

```
curl "https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql" \
--header "Authorization: Bearer <API_TOKEN>" \
--data "SELECT 'Hello Workers Analytics Engine' AS message"
```

If you have already written some data, you can execute the following to confirm that the dataset has been created in the database.

Terminal window

```
curl "https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql" \
--header "Authorization: Bearer <API_TOKEN>" \
--data "SHOW TABLES"
```

Refer to the Workers Analytics Engine [SQL reference](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/), for the full supported query syntax.
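The same call can also be made from code. The sketch below issues the query with `fetch()`; `accountId` and `apiToken` are placeholders you must supply, and appending `FORMAT JSON` to the query makes the response straightforward to parse.

```javascript
// Build the request separately so it can be inspected or tested.
function buildSqlRequest(accountId, apiToken, sql) {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/analytics_engine/sql`,
    init: {
      method: "POST",
      headers: { Authorization: `Bearer ${apiToken}` },
      body: sql,
    },
  };
}

async function querySql(accountId, apiToken, sql) {
  const { url, init } = buildSqlRequest(accountId, apiToken, sql);
  const response = await fetch(url, init);
  if (!response.ok) throw new Error(`SQL API error: ${response.status}`);
  return response.json(); // with FORMAT JSON, rows are returned in `data`
}

// Usage (requires a real account ID and token):
// const result = await querySql(accountId, apiToken,
//   "SELECT 'Hello Workers Analytics Engine' AS message FORMAT JSON");
```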

## Table structure

A new table is automatically created for each dataset once you start writing events to it from your Worker.

The table will have the following columns:

| Name               | Type     | Description                                                                                                                                                                                                                                          |
| ------------------ | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| dataset            | string   | This column will contain the dataset name in every row.                                                                                                                                                                                              |
| timestamp          | DateTime | The timestamp at which the event was logged in your worker.                                                                                                                                                                                          |
| \_sample\_interval | integer  | If the data has been sampled, this column indicates the sample interval for this row (that is, how many rows of the original data are represented by this row). Refer to the [sampling](#sampling) section below for more information. |
| index1             | string   | The index value that was logged with the event. The value in this column is used as the key for sampling.                                                                                                                                            |
| blob1...blob20     | string   | The blob values that were logged with the event.                                                                                                                                                                                                     |
| double1...double20 | double   | The double values that were logged with the event.                                                                                                                                                                                                   |

## Sampling

At very high volumes of data, Analytics Engine will downsample data in order to maintain performance. Sampling can occur on write and on read. Sampling is based on the index of your dataset, so only indexes that receive large numbers of events will be sampled. For example, if your Worker serves multiple customers, you might consider making the customer ID the index field. This would mean that if one customer starts making a high rate of requests, events from that customer could be sampled while other customers' data remains unsampled.

We have tested this system of sampling over a number of years at Cloudflare and it has enabled us to scale our web analytics systems to very high throughput, while still providing statistically meaningful results irrespective of the amount of traffic a website receives.

The rate at which the data is sampled is exposed via the `_sample_interval` column. This means that if you are doing statistical analysis of your data, you may need to take this column into account. For example:

| Original query               | Query taking into account sampling                                           |
| ---------------------------- | ---------------------------------------------------------------------------- |
| SELECT COUNT() FROM ...      | SELECT SUM(\_sample\_interval) FROM ...                                      |
| SELECT SUM(double1) FROM ... | SELECT SUM(\_sample\_interval \* double1) FROM ...                           |
| SELECT AVG(double1) FROM ... | SELECT SUM(\_sample\_interval \* double1) / SUM(\_sample\_interval) FROM ... |

Additionally, the [QUANTILEEXACTWEIGHTED](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/aggregate-functions/#quantileexactweighted) function is designed to take the sample interval as its weight argument.

## Example queries

### Select data with column aliases

Column aliases can be used in queries to give names to the blobs and doubles in your dataset:

```
SELECT
    timestamp,
    blob1 AS location_id,
    double1 AS inside_temp,
    double2 AS outside_temp
FROM temperatures
WHERE timestamp > NOW() - INTERVAL '1' DAY
```

### Aggregation taking into account sample interval

Calculate the number of readings taken at each location in the last 7 days. In this case, we are grouping by the index field, so an accurate count can be calculated even if the data has been sampled:

```
SELECT
    index1 AS location_id,
    SUM(_sample_interval) AS n_readings
FROM temperatures
WHERE timestamp > NOW() - INTERVAL '7' DAY
GROUP BY index1
```

Calculate the average temperature over the last 7 days at each location. Sample interval is taken into account:

```
SELECT
    index1 AS location_id,
    SUM(_sample_interval * double1) / SUM(_sample_interval) AS average_temp
FROM temperatures
WHERE timestamp > NOW() - INTERVAL '7' DAY
GROUP BY index1
```


---

---
title: Aggregate functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Aggregate functions

## count

Usage:

```
count()
count(DISTINCT column_name)
```

`count` is an aggregation function that returns the number of rows in each group or results set.

`count` can also be used to count the number of distinct (unique) values in each column:

Example:

```
-- return the total number of rows
count()

-- return the number of different values in the column
count(DISTINCT column_name)
```

## sum

Usage:

```
sum([DISTINCT] column_name)
```

`sum` is an aggregation function that returns the sum of column values across all rows in each group or results set. Sum also supports `DISTINCT`, but in this case it will only sum the unique values in the column.

Example:

```
-- return the total cost of all items
sum(item_cost)

-- return the total of all unique item costs
sum(DISTINCT item_cost)
```

## avg

Usage:

```
avg([DISTINCT] column_name)
```

`avg` is an aggregation function that returns the mean of column values across all rows in each group or results set. Avg also supports `DISTINCT`, but in this case it will only average the unique values in the column.

Example:

```
-- return the mean item cost
avg(item_cost)

-- return the mean of unique item costs
avg(DISTINCT item_cost)
```

## min

Usage:

```
min(column_name)
```

`min` is an aggregation function that returns the minimum value of a column across all rows.

Example:

```
-- return the minimum item cost
min(item_cost)
```

## max

Usage:

```
max(column_name)
```

`max` is an aggregation function that returns the maximum value of a column across all rows.

Example:

```
-- return the maximum item cost
max(item_cost)
```

## quantileExactWeighted

Usage:

```
quantileExactWeighted(q)(column_name, weight_column_name)
```

`quantileExactWeighted` is an aggregation function that returns the value at the qth quantile in the named column across all rows in each group or results set. Each row will be weighted by the value in `weight_column_name`. Typically this would be `_sample_interval` (refer to [Sampling](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#sampling) for more information).

Example:

```
-- estimate the median value of <double1>
quantileExactWeighted(0.5)(double1, _sample_interval)

-- in a table of query times, estimate the 95th centile query time
quantileExactWeighted(0.95)(query_time, _sample_interval)
```

For backwards compatibility, this is also available as `quantileWeighted(q, column_name, weight_column_name)`.
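To make the semantics concrete, a weighted quantile can be sketched in JavaScript: each value counts `weight` times (here, its sample interval), and the result is the value at the q-th point of the cumulative weight. The data below is made up, and this sketch only approximates the exact function's tie-breaking behavior.

```javascript
// pairs: array of [value, weight]; returns the weighted q-th quantile.
function quantileWeighted(q, pairs) {
  const sorted = [...pairs].sort((a, b) => a[0] - b[0]);
  const total = sorted.reduce((sum, [, w]) => sum + w, 0);
  let cumulative = 0;
  for (const [value, weight] of sorted) {
    cumulative += weight;
    if (cumulative >= q * total) return value; // crossed the q-th point
  }
  return sorted[sorted.length - 1][0];
}

// Three stored rows standing in for 1, 100, and 10 original events:
console.log(quantileWeighted(0.5, [[250, 1], [120, 100], [900, 10]])); // 120
```

Note how the row with sample interval `100` dominates the result, exactly as its 100 underlying events would have.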

## argMax

Usage:

```
argMax(arg, val)
```

`argMax` is an aggregation function that returns the `arg` value that corresponds to the maximum value of `val`.

If multiple `arg` values have the maximum value of `val`, any one will be returned.

Example:

```

-- find the <blob1> value for the row with the highest <double1>

argMax(blob1, double1)


-- find the <blob1> value from the most heavily sampled row

argMax(blob1, _sample_interval)


```

## argMin New

Usage:

```

argMin(arg, val)


```

`argMin` is an aggregation function that returns the `arg` value that corresponds to the minimum value of `val`.

If multiple `arg` values have the minimum value of `val`, any one will be returned.

Example:

```

-- find the <blob1> value for the row with the lowest <double1>

argMin(blob1, double1)


-- find the <blob1> value from the least heavily sampled row

argMin(blob1, _sample_interval)


```
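In Python terms, `argMax` and `argMin` behave like `max` and `min` with a key function over (arg, val) pairs (hypothetical sample rows):

```python
# Rows of (blob1, double1) pairs.
rows = [("checkout", 3.2), ("search", 9.7), ("home", 1.1)]

# argMax(blob1, double1): blob1 from the row with the largest double1.
arg_max = max(rows, key=lambda row: row[1])[0]   # "search"

# argMin(blob1, double1): blob1 from the row with the smallest double1.
arg_min = min(rows, key=lambda row: row[1])[0]   # "home"
```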

## first\_value New

Usage:

```

first_value(column_name)


```

`first_value` is an aggregation function which returns the first value of the provided column.

Example:

```

-- find the oldest value of <blob1>

SELECT first_value(blob1) FROM my_dataset ORDER BY timestamp ASC


```

## last\_value New

Usage:

```

last_value(column_name)


```

`last_value` is an aggregation function which returns the last value of the provided column.

Example:

```

-- find the oldest value of <blob1>

SELECT last_value(blob1) FROM my_dataset ORDER BY timestamp DESC


```

## topK New

Usage:

```

topK(N)(column)


```

`topK` is an aggregation function which returns the most common `N` values of a column.

`N` is optional and defaults to `10`.

Example:

```

-- find the 10 most common values of <double1>

SELECT topK(double1) FROM my_dataset


-- find the 15 most common values of <blob1>

SELECT topK(15)(blob1) FROM my_dataset


```

## topKWeighted New

Usage:

```

topKWeighted(N)(column, weight_column)


```

`topKWeighted` is an aggregation function which returns the most common `N` values of a column, weighted by a second column.

`N` is optional and defaults to `10`.

Example:

```

-- find the 10 most common values of <double1>, weighted by `_sample_interval`

SELECT topKWeighted(double1, _sample_interval) FROM my_dataset


-- find the 15 most common values of <blob1>, weighted by `_sample_interval`

SELECT topKWeighted(15)(blob1, _sample_interval) FROM my_dataset


```
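`topK` counts one occurrence per row, while `topKWeighted` counts `weight` occurrences per row. `collections.Counter` captures both behaviours (hypothetical sample data):

```python
from collections import Counter

values = ["GET", "GET", "POST", "GET", "PUT"]   # column values
weights = [10, 1, 5, 1, 2]                      # e.g. _sample_interval per row

# topK(2)(column): the 2 most common values by row count.
top2 = [v for v, _ in Counter(values).most_common(2)]

# topKWeighted(2)(column, weight_column): each row counts `weight` times.
weighted = Counter()
for v, w in zip(values, weights):
    weighted[v] += w
top2_weighted = [v for v, _ in weighted.most_common(2)]
```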

## countIf New

Usage:

```

countIf(<expr>)


```

`countIf` is an aggregation function that returns the number of rows in the result set for which the provided expression evaluates to true.

Example:

```

-- return the number of rows where `double1` is greater than 5

countIf(double1 > 5)


```

## sumIf New

Usage:

```

sumIf(<expr>, <expr>)


```

`sumIf` is an aggregation function that returns the sum of the first expression across all rows in the result set, including only rows where the second expression evaluates to true.

Example:

```

-- return the sum of column `item_cost` of all items where another column `in_stock` is not zero

sumIf(item_cost, in_stock > 0)


```

## avgIf New

Usage:

```

avgIf(<expr>, <expr>)


```

`avgIf` is an aggregation function that returns the mean of the first expression across all rows in the result set, including only rows where the second expression evaluates to true.

Example:

```

-- return the mean of column `item_cost` where another column `in_stock` is not zero

avgIf(item_cost, in_stock > 0)


```
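The `*If` family applies an aggregate only to the rows where the condition holds, which a Python comprehension makes concrete (hypothetical rows):

```python
rows = [
    {"item_cost": 4.0, "in_stock": 2},
    {"item_cost": 9.0, "in_stock": 0},
    {"item_cost": 6.0, "in_stock": 5},
]

# countIf(in_stock > 0)
count_if = sum(1 for r in rows if r["in_stock"] > 0)             # 2

# sumIf(item_cost, in_stock > 0)
sum_if = sum(r["item_cost"] for r in rows if r["in_stock"] > 0)  # 10.0

# avgIf(item_cost, in_stock > 0)
avg_if = sum_if / count_if                                       # 5.0
```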


---

---
title: Bit functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Bit functions

## bitAnd New

Usage:

```

bitAnd(a, b)


```

`bitAnd` returns the bitwise AND of expressions `a` and `b`.

Examples:

```

-- perform 0b1 & 0b11

bitAnd(1, 3)

-- extract the least significant bit of the integer value of double1

bitAnd(toUInt8(double1), 1)


```

## bitCount New

Usage:

```

bitCount(a)


```

`bitCount` returns the number of bits set to one in the binary representation of `a`.

Examples:

```

-- get the number of 1 bits in the binary representation of the float `double1`

bitCount(double1)

-- get the number of 1 bits in the binary representation of `double1` as an integer

bitCount(toUInt32(double1))

-- select rows where at least 5 bits are 1

SELECT * WHERE bitCount(double1) > 5


```

## bitHammingDistance New

Usage:

```

bitHammingDistance(x, y)


```

`bitHammingDistance` returns the number of bits that differ between `x` and `y`.

Examples:

```

-- returns zero

bitHammingDistance(1, 1)

-- returns 2

bitHammingDistance(3, 0)


```
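The Hamming distance is the population count of the XOR of the two values, which is easy to verify in Python:

```python
def bit_hamming_distance(x, y):
    # Bits that differ are exactly the set bits of x XOR y.
    return bin(x ^ y).count("1")

print(bit_hamming_distance(1, 1))  # 0
print(bit_hamming_distance(3, 0))  # 2
```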

## bitNot New

Usage:

```

bitNot(a)


```

`bitNot` returns `a` with all bits flipped.

Examples:

```

bitNot(1)


```

## bitOr New

Usage:

```

bitOr(a, b)


```

`bitOr` returns the bitwise inclusive OR of `a` and `b`.

Examples:

```

-- returns 3

bitOr(1, 2)


```

## bitRotateLeft New

Usage:

```

bitRotateLeft(a, n)


```

`bitRotateLeft` rotates all bits in `a` left by `n` positions.

Examples:

```

-- returns 2

bitRotateLeft(1, 1)

-- returns 1 (the 8-bit value 128 wraps its high bit around to bit 0)

bitRotateLeft(128, 1)


```

## bitRotateRight New

Usage:

```

bitRotateRight(a, n)


```

`bitRotateRight` rotates all bits in `a` right by `n` positions.

Examples:

```

-- returns 128 (the low bit of the 8-bit value 1 wraps to the high bit)

bitRotateRight(1, 1)

-- returns 3

bitRotateRight(12, 2)


```
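The wrap-around in these examples assumes an 8-bit operand (128 fits in an 8-bit unsigned integer). A Python sketch of rotation under that width assumption:

```python
def rotate_left_8(a, n):
    """Rotate an 8-bit value left by n positions; high bits wrap to the low end."""
    n %= 8
    return ((a << n) | (a >> (8 - n))) & 0xFF

def rotate_right_8(a, n):
    """Rotate an 8-bit value right by n positions; low bits wrap to the high end."""
    n %= 8
    return ((a >> n) | (a << (8 - n))) & 0xFF

print(rotate_left_8(1, 1))    # 2
print(rotate_left_8(128, 1))  # 1, the high bit wraps around
print(rotate_right_8(1, 1))   # 128
print(rotate_right_8(12, 2))  # 3
```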

## bitShiftLeft New

Usage:

```

bitShiftLeft(a, n)


```

`bitShiftLeft` shifts all bits in `a` left by `n` positions.

Examples:

```

-- returns 2

bitShiftLeft(1, 1)

-- returns 0 (the high bit of the 8-bit value 128 is shifted out)

bitShiftLeft(128, 1)


```

## bitShiftRight New

Usage:

```

bitShiftRight(a, n)


```

`bitShiftRight` shifts all bits in `a` right by `n` positions.

Examples:

```

-- returns 0

bitShiftRight(1, 1)

-- returns 3

bitShiftRight(12, 2)


```

## bitTest New

Usage:

```

bitTest(a, n)


```

`bitTest` returns the value of bit `n` in number `a`, with bits numbered from zero starting at the least significant bit.

Examples:

```

-- returns 1

bitTest(3, 1)

-- returns 0

bitTest(2, 0)

-- select rows where a particular bit is 1

SELECT * WHERE bitTest(double1, 2)


```

## bitXor New

Usage:

```

bitXor(a, b)


```

`bitXor` returns the bitwise exclusive-or of `a` and `b`.

Examples:

```

-- returns 3

bitXor(1, 2)

-- returns 0

bitXor(3, 3)


```


---

---
title: Conditional functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Conditional functions

## if

Usage:

```

if(<condition>, <true_expression>, <false_expression>)


```

Returns `<true_expression>` if `<condition>` evaluates to true, else returns `<false_expression>`.

Example:

```

if(temp > 20, 'It is warm', 'Bring a jumper')


```


---

---
title: Date and Time functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Date and Time functions

## formatDateTime

Usage:

```

formatDateTime(<datetime expression>, <format string>[, <timezone string>])


```

`formatDateTime` prints a datetime as a string according to a provided format string. Refer to [ClickHouse's documentation ↗](https://clickhouse.com/docs/en/sql-reference/functions/date-time-functions/#formatdatetime) for a list of supported formatting options.

Examples:

```

-- prints the current YYYY-MM-DD in UTC

formatDateTime(now(), '%Y-%m-%d')


-- prints YYYY-MM-DD in the datetime's timezone

formatDateTime(<a datetime with a timezone>, '%Y-%m-%d')

formatDateTime(toDateTime('2022-12-01 16:17:00', 'America/New_York'), '%Y-%m-%d')


-- prints YYYY-MM-DD in UTC

formatDateTime(<a datetime with a timezone>, '%Y-%m-%d', 'Etc/UTC')

formatDateTime(toDateTime('2022-12-01 16:17:00', 'America/New_York'), '%Y-%m-%d', 'Etc/UTC')


```

## now

Usage:

```

now()


```

Returns the current time as a `DateTime`.

## today New

Usage:

```

today()


```

Returns the current date as a `Date`.

## toDateTime

Usage:

```

toDateTime(<expression>[, 'timezone string'])


```

`toDateTime` converts an expression to a datetime. This function does not support ISO 8601-style timezones; if your time is not in UTC then you must provide the timezone using the second optional argument.

Examples:

```

-- double1 contains a unix timestamp in seconds

toDateTime(double1)


-- blob1 contains a datetime in the format 'YYYY-MM-DD hh:mm:ss'

toDateTime(blob1)


-- literal values:

toDateTime(355924804) -- unix timestamp

toDateTime('355924804') -- string containing unix timestamp

toDateTime('1981-04-12 12:00:04') -- string with datetime in 'YYYY-MM-DD hh:mm:ss' format


-- interpret a date relative to New York time

toDateTime('2022-12-01 16:17:00', 'America/New_York')


```
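The timezone argument changes how a wall-clock string is interpreted, which Python's standard library can illustrate. A fixed -05:00 offset is used here as a stand-in for America/New_York in December:

```python
from datetime import datetime, timedelta, timezone

# Fixed -05:00 offset stands in for America/New_York in December (EST).
new_york_winter = timezone(timedelta(hours=-5))

# toDateTime('2022-12-01 16:17:00', 'America/New_York') interprets the
# wall-clock string in that zone...
local = datetime(2022, 12, 1, 16, 17, 0, tzinfo=new_york_winter)

# ...so the same instant reads five hours later on a UTC clock.
print(local.astimezone(timezone.utc))  # 2022-12-01 21:17:00+00:00
```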

## toYear New

Usage:

```

toYear(<datetime>)


```

`toYear` returns the year of a datetime.

Examples:

```

-- returns the number 2025

toYear(toDateTime('2025-10-27 00:00:00'))


```

## toMonth New

Usage:

```

toMonth(<datetime>)


```

`toMonth` returns the month of a datetime as a number from 1 to 12.

Examples:

```

-- returns the number 10

toMonth(toDateTime('2025-10-27 00:00:00'))


```

## toDayOfWeek New

Usage:

```

toDayOfWeek(<datetime>)


```

`toDayOfWeek` takes a datetime and returns its numerical day of the week.

Returns `1` to indicate Monday, `2` to indicate Tuesday, and so on.

Examples:

```

-- returns the number 1 for Monday 27th October 2025

toDayOfWeek(toDateTime('2025-10-27 00:00:00'))


-- returns the number 2 for Tuesday 28th October 2025

toDayOfWeek(toDateTime('2025-10-28 00:00:00'))


-- returns the number 7 for Sunday 2nd November 2025

toDayOfWeek(toDateTime('2025-11-02 00:00:00'))


```
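This Monday-is-1 numbering matches Python's `date.isoweekday`, which makes the examples easy to check:

```python
from datetime import date

print(date(2025, 10, 27).isoweekday())  # 1, a Monday
print(date(2025, 10, 28).isoweekday())  # 2, a Tuesday
print(date(2025, 11, 2).isoweekday())   # 7, a Sunday
```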

## toDayOfMonth New

Usage:

```

toDayOfMonth(<datetime>)


```

`toDayOfMonth` returns the day of the month from a datetime.

Examples:

```

-- returns the number 27

toDayOfMonth(toDateTime('2025-10-27 00:00:00'))


```

## toHour New

Usage:

```

toHour(<datetime>)


```

`toHour` returns the hour of the day from a datetime.

Examples:

```

-- returns the number 9

toHour(toDateTime('2025-10-27 09:11:13'))


```

## toMinute New

Usage:

```

toMinute(<datetime>)


```

`toMinute` returns the minute of the hour from a datetime.

Examples:

```

-- returns the number 11

toMinute(toDateTime('2025-10-27 09:11:13'))


```

## toSecond New

Usage:

```

toSecond(<datetime>)


```

`toSecond` returns the second of the minute from a datetime.

Examples:

```

-- returns the number 13

toSecond(toDateTime('2025-10-27 09:11:13'))


```

## toUnixTimestamp

Usage:

```

toUnixTimestamp(<datetime>)


```

`toUnixTimestamp` converts a datetime into an integer unix timestamp.

Examples:

```

-- get the current unix timestamp

toUnixTimestamp(now())


```

## toStartOfInterval

Usage:

```

toStartOfInterval(<datetime>, INTERVAL '<n>' <unit>[, <timezone string>])


```

`toStartOfInterval` rounds down a datetime to the nearest offset of a provided interval. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round the current time down to the nearest 15 minutes

toStartOfInterval(now(), INTERVAL '15' MINUTE)


-- round a timestamp down to the day

toStartOfInterval(timestamp, INTERVAL '1' DAY)


-- count the number of data points recorded in each hourly window

SELECT

  toStartOfInterval(timestamp, INTERVAL '1' HOUR) AS hour,

  sum(_sample_interval) AS count

FROM your_dataset

GROUP BY hour

ORDER BY hour ASC


```
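The rounding behaviour can be sketched in Python. For simplicity this version counts intervals from midnight, which gives the same result for the sub-day intervals shown above (illustrative only):

```python
from datetime import datetime, timedelta

def to_start_of_interval(dt, interval):
    """Round dt down to a whole multiple of interval, counted from midnight."""
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    step = interval.total_seconds()
    elapsed = (dt - midnight).total_seconds()
    return midnight + timedelta(seconds=(elapsed // step) * step)

print(to_start_of_interval(datetime(2025, 10, 27, 16, 55, 25),
                           timedelta(minutes=15)))   # 2025-10-27 16:45:00
```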

## toStartOfYear New

Usage:

```

toStartOfYear(<datetime>)


```

`toStartOfYear` rounds down a datetime to the nearest start of year. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-01-01 00:00:00

toStartOfYear(toDateTime('2025-10-27 00:00:00'))


```

## toStartOfMonth New

Usage:

```

toStartOfMonth(<datetime>)


```

`toStartOfMonth` rounds down a datetime to the nearest start of month. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-10-01 00:00:00

toStartOfMonth(toDateTime('2025-10-27 00:00:00'))


```

## toStartOfWeek New

Usage:

```

toStartOfWeek(<datetime>)


```

`toStartOfWeek` rounds down a datetime to the start of the week. This can be useful for grouping data into equal-sized time ranges.

Treats Monday as the first day of the week.

Examples:

```

-- round a time on a Monday down to Monday 2025-10-27 00:00:00

toStartOfWeek(toDateTime('2025-10-27 00:00:00'))


-- round a time on a Wednesday down to Monday 2025-10-27 00:00:00

toStartOfWeek(toDateTime('2025-10-29 00:00:00'))


```

## toStartOfDay New

Usage:

```

toStartOfDay(<datetime>)


```

`toStartOfDay` rounds down a datetime to the nearest start of day. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-10-27 00:00:00

toStartOfDay(toDateTime('2025-10-27 00:00:00'))


```

## toStartOfHour New

Usage:

```

toStartOfHour(<datetime>)


```

`toStartOfHour` rounds down a datetime to the nearest start of hour. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-10-27 16:00:00

toStartOfHour(toDateTime('2025-10-27 16:55:25'))


```

## toStartOfFifteenMinutes New

Usage:

```

toStartOfFifteenMinutes(<datetime>)


```

`toStartOfFifteenMinutes` rounds down a datetime to the nearest fifteen minutes. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-10-27 16:45:00

toStartOfFifteenMinutes(toDateTime('2025-10-27 16:55:25'))


```

## toStartOfTenMinutes New

Usage:

```

toStartOfTenMinutes(<datetime>)


```

`toStartOfTenMinutes` rounds down a datetime to the nearest ten minutes. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-10-27 16:50:00

toStartOfTenMinutes(toDateTime('2025-10-27 16:55:25'))


```

## toStartOfFiveMinutes New

Usage:

```

toStartOfFiveMinutes(<datetime>)


```

`toStartOfFiveMinutes` rounds down a datetime to the nearest five minutes. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-10-27 16:55:00

toStartOfFiveMinutes(toDateTime('2025-10-27 16:55:25'))


```

## toStartOfMinute New

Usage:

```

toStartOfMinute(<datetime>)


```

`toStartOfMinute` rounds down a datetime to the nearest minute. This can be useful for grouping data into equal-sized time ranges.

Examples:

```

-- round a timestamp down to 2025-10-27 16:55:00

toStartOfMinute(toDateTime('2025-10-27 16:55:25'))


```

## toYYYYMM New

Usage:

```

toYYYYMM(<datetime>)


```

`toYYYYMM` returns a number representing the year and month of a datetime. For instance, a datetime on `2025-05-03` would return the number `202505`.

Examples:

```

-- returns the number 202510

toYYYYMM(toDateTime('2025-10-27 16:55:25'))


```


---

---
title: Encoding functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Encoding functions

## bin New

Usage:

```

bin(<expression>)


```

`bin` returns a string containing the binary representation of its argument.

Examples:

```

-- get the binary representation of 1

bin(1)

-- get the binary representation of a string

bin('abc')


```

## hex New

Usage:

```

hex(<expression>)


```

`hex` returns a string containing the hexadecimal representation of its argument.

Examples:

```

-- get the hexadecimal representation of 1

hex(1)

-- get the hexadecimal representation of a string

hex('abc')


```


---

---
title: Literals
description: The following literals are supported:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Literals

The following literals are supported:

| Type          | Syntax                                                                                |
| ------------- | ------------------------------------------------------------------------------------- |
| integer       | 42, \-42                                                                              |
| double        | 4.2, \-4.2                                                                            |
| string        | 'so long and thanks for all the fish'                                                 |
| boolean       | true or false                                                                         |
| time interval | INTERVAL '42' DAY (intervals of YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND are supported) |


---

---
title: Mathematical functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Mathematical functions

## intDiv

Usage:

```

intDiv(a, b)


```

Divide `a` by `b`, rounding the answer down to the nearest whole number.

## log New

Usage:

```

log(<expression>)


```

`log` returns the natural logarithm of a provided number. `ln` is also available as an alias.

Examples:

```

-- get the natural logarithm of the double1 column

log(double1)


```

## pow New

Usage:

```

pow(<expression>, <expression>)


```

`pow` returns the first argument raised to the power of the second argument.

Examples:

```

-- get the square of the double1 column

pow(double1, 2)


```

## round New

Usage:

```

round(<expression>[, n])


```

`round` returns a number rounded to the nearest whole number, or to the number of decimal places given by the second argument.

Examples:

```

-- round 5.5 to 6

round(5.5)

-- round 3.14 to 3.1

round(3.14, 1)


```

## floor New

Usage:

```

floor(<expression>[, n])


```

`floor` returns a number rounded down to a whole number, or down to the number of decimal places given by the second argument.

Examples:

```

-- round down 5.5 to 5

floor(5.5)

-- round down 3.14 to 3.1

floor(3.14, 1)


```

## ceil New

Usage:

```

ceil(<expression>[, n])


```

`ceil` returns a number rounded up to a whole number, or up to the number of decimal places given by the second argument.

Examples:

```

-- round up 5.5 to 6

ceil(5.5)

-- round up 3.14 to 3.2

ceil(3.14, 1)


```
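Rounding to `n` decimal places amounts to scaling, rounding to an integer, and scaling back; a Python sketch (subject to the usual floating-point caveats):

```python
import math

def floor_n(x, n=0):
    """Round x down to n decimal places."""
    scale = 10 ** n
    return math.floor(x * scale) / scale

def ceil_n(x, n=0):
    """Round x up to n decimal places."""
    scale = 10 ** n
    return math.ceil(x * scale) / scale

print(floor_n(5.5))      # 5.0
print(floor_n(3.14, 1))  # 3.1
print(ceil_n(5.5))       # 6.0
print(ceil_n(3.14, 1))   # 3.2
```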


---

---
title: Operators
description: The following operators are supported:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Operators

The following operators are supported:

## Arithmetic operators

| Operator | Description    |
| -------- | -------------- |
| +        | addition       |
| \-       | subtraction    |
| \*       | multiplication |
| /        | division       |
| %        | modulus        |

## Comparison operators

| Operator | Description                                                                                            |
| -------- | ------------------------------------------------------------------------------------------------------ |
| \=       | equals                                                                                                 |
| <        | less than                                                                                              |
| \>       | greater than                                                                                           |
| <=       | less than or equal to                                                                                  |
| \>=      | greater than or equal to                                                                               |
| <> or != | not equal                                                                                              |
| IN       | true if the preceding expression's value is in the list: `column IN ('a', 'list', 'of', 'values')`         |
| NOT IN   | true if the preceding expression's value is not in the list: `column NOT IN ('a', 'list', 'of', 'values')` |

We also support the `BETWEEN` operator for checking whether a value falls within an inclusive range: `a [NOT] BETWEEN b AND c`.

### Pattern matching operators New

| Operator  | Description                                                                                 |
| --------- | ------------------------------------------------------------------------------------------- |
| LIKE      | true if the string matches the pattern (case-sensitive): `column LIKE 'pattern%'`               |
| NOT LIKE  | true if the string does not match the pattern (case-sensitive): `column NOT LIKE 'pattern%'`    |
| ILIKE     | true if the string matches the pattern (case-insensitive): `column ILIKE 'pattern%'`            |
| NOT ILIKE | true if the string does not match the pattern (case-insensitive): `column NOT ILIKE 'pattern%'` |

Pattern matching supports two wildcard characters:

* `%` matches any sequence of zero or more characters
* `_` matches any single character

Examples:

```

-- Match strings starting with "error"

WHERE blob1 LIKE 'error%'


-- Match strings ending with ".jpg" (case-insensitive)

WHERE blob2 ILIKE '%.jpg'


-- Match strings containing "test" anywhere

WHERE blob3 LIKE '%test%'


-- Match exactly 5 characters starting with "log"

WHERE blob4 LIKE 'log__'


-- Exclude strings containing "debug" (case-insensitive)

WHERE blob5 NOT ILIKE '%debug%'


```
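LIKE patterns translate mechanically to regular expressions (`%` becomes `.*`, `_` becomes `.`); a Python sketch of the matching rules:

```python
import re

def sql_like(value, pattern, case_sensitive=True):
    """Match value against a SQL LIKE pattern: % = any run, _ = one character."""
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pattern
    )
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.fullmatch(regex, value, flags) is not None

print(sql_like("error: timeout", "error%"))   # True
print(sql_like("photo.JPG", "%.jpg", False))  # True, ILIKE behaviour
print(sql_like("log42", "log__"))             # True, exactly 5 characters
print(sql_like("log4", "log__"))              # False
```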

## Boolean operators

| Operator | Description                                                          |
| -------- | -------------------------------------------------------------------- |
| AND      | boolean "AND" (true if both sides are true)                          |
| OR       | boolean "OR" (true if either side or both sides are true)            |
| NOT      | boolean "NOT" (true if the following expression is false and vice versa) |

## Unary operators

| Operator | Description                           |
| -------- | ------------------------------------- |
| \-       | negation operator (for example, \-42) |


---

---
title: Statements
description: SHOW TABLES can be used to list the tables on your account. The table name is the name you specified as dataset when configuring the workers binding (refer to Get started with Workers Analytics Engine, for more information). The table is automatically created when you write event data in your worker.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Statements

## SHOW TABLES statement

`SHOW TABLES` can be used to list the tables on your account. The table name is the name you specified as `dataset` when configuring the Workers binding (refer to [Get started with Workers Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/get-started/) for more information). The table is created automatically when you write event data from your Worker.

```

SHOW TABLES

[FORMAT <format>]


```

Refer to [FORMAT clause](#format-clause) for the available `FORMAT` options.

## SHOW TIMEZONES statement

`SHOW TIMEZONES` can be used to list all of the timezones supported by the SQL API. Most common timezones are supported.

```

SHOW TIMEZONES

[FORMAT <format>]


```

## SHOW TIMEZONE statement

`SHOW TIMEZONE` responds with the current default timezone in use by SQL API. This should always be `Etc/UTC`.

```

SHOW TIMEZONE

[FORMAT <format>]


```

## SELECT statement

`SELECT` is used to query tables.

Usage:

```

SELECT <expression_list>

[FROM <table>|(<subquery>)]

[WHERE <expression>]

[GROUP BY <expression>, ...]

[HAVING <expression>]

[ORDER BY <expression_list>]

[LIMIT <n>|ALL]

[FORMAT <format>]


```

Below you can find the syntax of each clause. Refer to the [SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/) documentation for some example queries.

### SELECT clause

The `SELECT` clause specifies the list of columns to be included in the result. Columns can be aliased using the `AS` keyword.

Usage:

```

SELECT <expression> [AS <alias>], ...


```

Examples:

```

-- return the named columns

SELECT blob2, double3


-- return all columns

SELECT *


-- alias columns to more descriptive names

SELECT

    blob2 AS probe_name,

    double3 AS temperature


```

Additionally, expressions using supported functions and [operators](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/operators/) can be used in place of column names:

```

SELECT

    blob2 AS probe_name,

    double3 AS temp_c,

    double3*1.8+32 AS temp_f -- compute a value


SELECT

    blob2 AS probe_name,

    if(double3 <= 0, 'FREEZING', 'NOT FREEZING') AS description -- use of functions


SELECT

    blob2 AS probe_name,

    avg(double3) AS avg_temp -- aggregation function


```

### FROM clause

`FROM` is used to specify the source of the data for the query.

Usage:

```

FROM <table_name>|(subquery)


```

Examples:

```

-- query data written to a workers dataset called "temperatures"

FROM temperatures


-- use a subquery to manipulate the table

FROM (

    SELECT

        blob1 AS probe_name,

        count() as num_readings

    FROM

        temperatures

    GROUP BY

        probe_name

)


```

Note that queries can only operate on a single table. `UNION`, `JOIN` etc. are not currently supported.

### WHERE clause

`WHERE` is used to filter the rows returned by a query before grouping and aggregation.

Usage:

```

WHERE <condition>


```

`<condition>` can be any expression that evaluates to a boolean.

[Comparison operators](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/operators/#comparison-operators) can be used to compare values and [boolean operators](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/operators/#boolean-operators) can be used to combine conditions.

Expressions containing functions and [operators](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/operators/) are supported.

To filter results after grouping and aggregation, use the [HAVING clause](#having-clause) instead.

Examples:

```

-- simple comparisons

WHERE blob1 = 'test'

WHERE double1 = 4


-- inequalities

WHERE double1 > 4


-- use of operators (see below for supported operator list)

WHERE double1 + double2 > 4

WHERE blob1 = 'test1' OR blob2 = 'test2'


-- expression using inequalities, functions and operators

WHERE if(unit = 'f', (temp-32)/1.8, temp) <= 0


```

### GROUP BY clause

When using aggregate functions, `GROUP BY` specifies the groups over which the aggregation is run.

Usage:

```

GROUP BY <expression>, ...


```

For example, if you had a table of temperature readings:

```

-- return the average temperature for each probe

SELECT

    blob1 AS probe_name,

    avg(double1) AS average_temp

FROM temperature_readings

GROUP BY probe_name


```

Usually `<expression>` is simply a column name, but it is also possible to supply a more complex expression. Multiple expressions or column names can be supplied, separated by commas.
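For instance, a computed expression can define the groups directly. A sketch, assuming a hypothetical `temperature_readings` dataset with temperatures in `double1`:

```

-- group readings into freezing and non-freezing buckets

SELECT
    if(double1 <= 0, 'freezing', 'not freezing') AS bucket,
    count() AS num_readings
FROM temperature_readings
GROUP BY bucket


```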

### HAVING clause

`HAVING` is used to filter the results after grouping and aggregation.

Usage:

```

HAVING <condition>


```

`<condition>` can be any expression that evaluates to a boolean, and can reference aggregate functions or grouped columns.

Unlike `WHERE`, which filters rows before grouping, `HAVING` filters groups after aggregation. This allows you to filter based on aggregate values.

[Comparison operators](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/operators/#comparison-operators) can be used to compare values and [boolean operators](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/operators/#boolean-operators) can be used to combine conditions.

Examples:

```

-- filter groups where the average is greater than 10

SELECT

    blob1 AS probe_name,

    avg(double1) AS average_temp

FROM temperature_readings

GROUP BY probe_name

HAVING average_temp > 10


-- filter groups with more than 100 readings

SELECT

    blob1 AS probe_name,

    count() AS num_readings

FROM temperature_readings

GROUP BY probe_name

HAVING num_readings > 100


-- combine multiple conditions

SELECT

    blob1 AS city,

    avg(double1) AS avg_temp,

    count() AS readings

FROM weather_data

GROUP BY city

HAVING avg_temp > 20 AND readings >= 50


```

### ORDER BY clause

`ORDER BY` can be used to control the order in which rows are returned.

Usage:

```

ORDER BY <expression> [ASC|DESC], ...


```

`<expression>` can just be a column name.

`ASC` or `DESC` determines if the ordering is ascending or descending. `ASC` is the default, and can be omitted.

Examples:

```

-- order by double2 then double3, both in ascending order

ORDER BY double2, double3


-- order by double2 in ascending order then double3 in descending order

ORDER BY double2, double3 DESC


```

### LIMIT clause

`LIMIT` specifies a maximum number of rows to return.

Usage:

```

LIMIT <n>|ALL


```

Supply the maximum number of rows to return or `ALL` for no restriction.

For example:

```

LIMIT 10 -- return at most 10 rows


```

### OFFSET clause

`OFFSET` specifies a number of rows to skip in the query result.

Usage:

```

OFFSET <n>


```

For example:

```

OFFSET 10 -- skip the first 10 result rows


```
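Together with `ORDER BY`, `LIMIT` and `OFFSET` can page through results in fixed-size chunks. A sketch, assuming a hypothetical `temperature_readings` dataset:

```

-- return the second page of 10 rows, ordered by probe name

SELECT
    blob1 AS probe_name,
    double1 AS temp
FROM temperature_readings
ORDER BY probe_name
LIMIT 10
OFFSET 10


```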

### FORMAT clause

`FORMAT` controls how the returned data is encoded.

Usage:

```

FORMAT [JSON|JSONEachRow|TabSeparated]


```

If no format clause is included then the default format of `JSON` will be used.

Override the default by setting a format. For example:

```

FORMAT JSONEachRow


```

The following formats are supported:

#### JSON

Data is returned as a single JSON object with schema data included:

```

{

    "meta": [

        {

            "name": "<column 1 name>",

            "type": "<column 1 type>"

        },

        {

            "name": "<column 2 name>",

            "type": "<column 2 type>"

        },

        ...

    ],

    "data": [

        {

            "<column 1 name>": "<column 1 value>",

            "<column 2 name>": "<column 2 value>",

            ...

        },

        {

            "<column 1 name>": "<column 1 value>",

            "<column 2 name>": "<column 2 value>",

            ...

        },

        ...

    ],

    "rows": 10

}


```

#### JSONEachRow

Data is returned with a separate JSON object per row. Rows are newline separated and there is no header line or schema data:

```

{"<column 1 name>": "<column 1 value>", "<column 2 name>": "<column 2 value>"}

{"<column 1 name>": "<column 1 value>", "<column 2 name>": "<column 2 value>"}

...


```

#### TabSeparated

Data is returned with newline separated rows. Columns are separated with tabs. There is no header.

```

column 1 value  column 2 value

column 1 value  column 2 value

...


```


---

---
title: String functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# String functions

## length

Usage:

```

length({string})


```

Returns the length of a string. This function is UTF-8 compatible.

Examples:

```

SELECT length('a string') AS s;

SELECT length(blob1) AS s FROM your_dataset;


```

For backwards-compatibility, this function is the equivalent of ClickHouse's `lengthUTF8` function, rather than ClickHouse's `length` function.
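The distinction matters for multi-byte characters: this function counts Unicode code points rather than bytes. An illustrative example:

```

-- returns 5, even though 'héllo' occupies 6 bytes in UTF-8

SELECT length('héllo') AS s;


```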

## empty

Usage:

```

empty({string})


```

Returns a boolean saying whether the string was empty. This computation can also be done as a binary operation: `{string} = ''`.

Examples:

```

SELECT empty('a string') AS b;

SELECT empty(blob1) AS b FROM your_dataset;


```

For backwards compatibility, this function can also be called using `empty(<string>)`.

## lower

Usage:

```

lower({string})


```

Returns the string converted to lowercase. This function is NOT Unicode compatible - refer to `lowerUTF8` for that.

Examples:

```

SELECT lower('STRING TO DOWNCASE') AS s;

SELECT lower(blob1) AS s FROM your_dataset;


```

## lowerUTF8

Usage:

```

lowerUTF8({string})


```

Returns the string converted to lowercase. This function is Unicode compatible. The results may not be perfect for all languages; users with stringent needs should do the operation in their own code.

Examples:

```

SELECT lowerUTF8('STRING TO DOWNCASE') AS s;

SELECT lowerUTF8(blob1) AS s FROM your_dataset;


```

For backwards compatibility, this function can also be called using `toLower({string})`.

## upper

Usage:

```

upper({string})


```

Returns the string converted to uppercase. This function is NOT Unicode compatible - refer to `upperUTF8` for that.

Examples:

```

SELECT upper('string to uppercase') AS s;

SELECT upper(blob1) AS s FROM your_dataset;


```

## upperUTF8

Usage:

```

upperUTF8({string})


```

Returns the string converted to uppercase. This function is Unicode compatible. The results may not be perfect for all languages; users with strict needs should do the operation in their own code.

Examples:

```

SELECT upperUTF8('string to uppercase') AS s;

SELECT upperUTF8(blob1) AS s FROM your_dataset;


```

For backwards compatibility, this function can also be called using `toUpper({string})`.

## startsWith

Usage:

```

startsWith({string}, {string})


```

Returns a boolean of whether the first string has the second string at its start.

Examples:

```

SELECT startsWith('prefix ...', 'prefix') AS b;

SELECT startsWith(blob1, 'prefix') AS b FROM your_dataset;


```

## endsWith

Usage:

```

endsWith({string}, {string})


```

Returns a boolean of whether the first string contains the second string at its end.

Examples:

```

SELECT endsWith('prefix suffix', 'suffix') AS b;

SELECT endsWith(blob1, 'suffix') AS b FROM your_dataset;


```

## position

Usage:

```

position({needle:string} IN {haystack:string})


```

Returns the position of one string, `needle`, in another, `haystack`. In SQL, indexes are usually 1-based. That means that position returns `1` if your needle is at the start of the haystack. It only returns `0` if your string is not found.

Examples:

```

SELECT position(':' IN 'hello: world') AS p;

SELECT position(':' IN blob1) AS p FROM your_dataset;


```

## substring

Usage:

```

substring({string}, {offset:integer}[, {length:integer}])


```

Extracts part of a string, starting at the Unicode code point indicated by the offset and returning the number of code points requested by the length. As previously mentioned, in SQL, indexes are usually 1-based. That means that the offset provided to substring should be at least `1`.

Examples:

```

SELECT substring('hello world', 6) AS s;

SELECT substring('hello: world', 1, position(':' IN 'hello: world')-1) AS s;


```

## format

Usage:

```

format({string}[, ...])


```

This function supports formatting strings, integers, floats, datetimes, intervals, etc., except `NULL`. The function does not support literal `{` and `}` characters in the format string.

Examples:

```

SELECT format('blob1: {}', blob1) AS s FROM dataset;


```

The [formatDateTime](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/date-time-functions/#formatdatetime) function might also be useful.

## extract

Usage:

```

extract(<time unit> from <datetime>)


```

`extract` returns an integer number of time units from a datetime. It supports `YEAR`, `MONTH`, `DAY`, `HOUR`, `MINUTE` and `SECOND`.

Examples:

```

-- extract the number of seconds from a timestamp (returns 15 in this example)

extract(SECOND from toDateTime('2022-06-06 11:30:15'))


```


---

---
title: Type conversion functions
description: Usage:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Type conversion functions

## toUInt8

Usage:

```

toUInt8(<expression>)


```

Converts any numeric expression, or expression resulting in a string representation of a decimal, into an unsigned 8 bit integer.

Behaviour for negative numbers is undefined.
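Following the pattern of the other function examples, a usage sketch (the dataset and column names are placeholders):

```

-- parse a string representation into an unsigned 8-bit integer

SELECT toUInt8('42') AS n;

SELECT toUInt8(double1) AS n FROM your_dataset;


```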

## toUInt32

Usage:

```

toUInt32(<expression>)


```

Converts any numeric expression, or expression resulting in a string representation of a decimal, into an unsigned 32 bit integer.

Behaviour for negative numbers is undefined.
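As with `toUInt8`, a usage sketch (the dataset and column names are placeholders):

```

-- parse a string representation into an unsigned 32-bit integer

SELECT toUInt32('1650000000') AS n;

SELECT toUInt32(blob1) AS n FROM your_dataset;


```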


---

---
title: Querying from a Worker
description: If you want to access Analytics Engine data from within a Worker you can use fetch to access the SQL API. The API can return JSON data that is easy to interact with in JavaScript.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Querying from a Worker

If you want to access Analytics Engine data from within a Worker you can use `fetch` to access the SQL API. The API can return JSON data that is easy to interact with in JavaScript.

## Authentication

For your Worker to authenticate with the API, you will need your account ID and an API token.

* Your 32-character account ID can be obtained from the Cloudflare dashboard.
* An API token can also be generated in the dashboard. Refer to the [SQL API docs](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#authentication) for more information on this.

We recommend storing the account ID as an environment variable and the API token as a secret in your Worker. This can be done through the dashboard or through Wrangler. Refer to the [Workers documentation](https://developers.cloudflare.com/workers/configuration/environment-variables/) for more details.

## Querying

Use the JavaScript `fetch` API as follows to execute a query:

JavaScript

```

const query = "SELECT * FROM my_dataset";

const API = `https://api.cloudflare.com/client/v4/accounts/${env.ACCOUNT_ID}/analytics_engine/sql`;

const response = await fetch(API, {

  method: "POST",

  headers: {

    Authorization: `Bearer ${env.API_TOKEN}`,

  },

  body: query,

});

const responseJSON = await response.json();


```

The data will be returned in the format described in the [FORMAT](https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/statements/#json) section of the documentation, allowing you to extract meta information about the names and types of returned columns in addition to the data itself and a row count.
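As a sketch of working with that shape, the `meta` and `data` fields can be consumed like this (the response values here are illustrative, not real API output):

```javascript
// A response in the default JSON format (illustrative values).
const responseJSON = {
  meta: [
    { name: "city", type: "String" },
    { name: "max_temp", type: "Float64" },
  ],
  data: [
    { city: "Lisbon", max_temp: 28 },
    { city: "Oslo", max_temp: 19 },
  ],
  rows: 2,
};

// Column names come from the "meta" array.
const columns = responseJSON.meta.map((m) => m.name);

// Each entry in "data" is an object keyed by column name.
const hottest = responseJSON.data.reduce((a, b) =>
  a.max_temp > b.max_temp ? a : b,
);

console.log(columns, hottest.city, responseJSON.rows);
```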

## Example Worker

The following is a sample Worker which executes a query against a dataset of weather readings and displays minimum and maximum values for each city.

### Environment variable setup

First the environment variables are set up with the account ID and API token.

The account ID is set in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/):

wrangler.jsonc

```

{

  "vars": {

    "ACCOUNT_ID": "<account_id>"

  }

}


```

wrangler.toml

```

[vars]

ACCOUNT_ID = "<account_id>"


```

The `API_TOKEN` can be set as a secret using the `wrangler` command-line tool, by running the following and entering your token string:

Terminal window

```

npx wrangler secret put API_TOKEN


```

### Worker script

The worker script itself executes a query and formats the result:

JavaScript

```

export default {

  async fetch(request, env) {

    // This worker only responds to requests at the root.

    if (new URL(request.url).pathname != "/") {

      return new Response("Not found", { status: 404 });

    }


    // SQL string to be executed.

    const query = `

            SELECT

                blob1 AS city,

                max(double1) as max_temp,

                min(double1) as min_temp

            FROM weather

            WHERE timestamp > NOW() - INTERVAL '1' DAY

            GROUP BY city

            ORDER BY city`;


    // Build the API endpoint URL and make a POST request with the query string

    const API = `https://api.cloudflare.com/client/v4/accounts/${env.ACCOUNT_ID}/analytics_engine/sql`;

    const queryResponse = await fetch(API, {

      method: "POST",

      headers: {

        Authorization: `Bearer ${env.API_TOKEN}`,

      },

      body: query,

    });


    // The API will return a 200 status code if the query succeeded.

    // In case of failure we log the error message and return a failure message.

    if (queryResponse.status != 200) {

      console.error("Error querying:", await queryResponse.text());

      return new Response("An error occurred!", { status: 500 });

    }


    // Read the JSON data from the query response and render the data as HTML.

    const queryJSON = await queryResponse.json();

    return new Response(renderResponse(queryJSON.data), {

      headers: { "content-type": "text/html" },

    });

  },

};


// renderCity renders a table row as HTML from a data row.

function renderCity(row) {

  return `<tr><td>${row.city}</td><td>${row.min_temp}</td><td>${row.max_temp}</td></tr>`;

}


// renderResponse renders a simple HTML table of results.

function renderResponse(data) {

  return `<!DOCTYPE html>

<html>

    <body>

        <table>

            <tr><th>City</th><th>Min Temp</th><th>Max Temp</th></tr>

            ${data.map(renderCity).join("\n")}

        </table>

    </body>

</html>`;

}


```


---

---
title: Analytics integrations
description: Cloudflare Enterprise customers can use Cloudflare integrations with their preferred analytics provider and configure ready-to-use Cloudflare Dashboards. Most analytics integrations are built on Cloudflare Logs by using Logpush with either Amazon S3 bucket or GCP Storage bucket.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Analytics integrations

Cloudflare Enterprise customers can use Cloudflare integrations with their preferred analytics provider and configure ready-to-use Cloudflare Dashboards. Most analytics integrations are built on Cloudflare Logs by using Logpush with either Amazon S3 bucket or GCP Storage bucket.

Analyze [Cloudflare Logs](https://developers.cloudflare.com/logs/) data with the following analytics platforms:

* [ Datadog ](https://developers.cloudflare.com/analytics/analytics-integrations/datadog/)
* [ Graylog ](https://developers.cloudflare.com/analytics/analytics-integrations/graylog/)
* [ New Relic ](https://developers.cloudflare.com/analytics/analytics-integrations/new-relic/)
* [ Splunk ](https://developers.cloudflare.com/analytics/analytics-integrations/splunk/)
* [ Sentinel ](https://developers.cloudflare.com/analytics/analytics-integrations/sentinel/)


---

---
title: Datadog
description: This tutorial explains how to analyze Cloudflare metrics using the Cloudflare Integration tile for Datadog
image: https://developers.cloudflare.com/core-services-preview.png
---


# Datadog

This tutorial explains how to analyze Cloudflare metrics using the [Cloudflare Integration tile for Datadog ↗](https://docs.datadoghq.com/integrations/cloudflare/).

## Overview

Before viewing the Cloudflare dashboard in Datadog, note that this integration:

* Is available to all Cloudflare customer plans (Free, Pro, Business and Enterprise)
* Is based on the Cloudflare Analytics API
* Provides Cloudflare web traffic and DNS metrics only
* Does not feature data coming from request logs stored in Cloudflare Logs

## Task 1 - Install the Cloudflare App

To install the Cloudflare App for Datadog:

1. Log in to **Datadog**.
2. Click the **Integrations** tab.
3. In the **search box**, start typing _Cloudflare_. The app tile should appear below the search box.![Searching for Cloudflare App in the Datadog Integrations tab](https://developers.cloudflare.com/_astro/datadog-integrations.BJs60jr6_ZMH8eb.webp)
4. Click the **Cloudflare** tile to begin the installation.
5. Next, click **Configuration** and then complete the following:  
   * **Account name**: (Optional) This can be any value. It has no impact on the site data pulled from Cloudflare.  
   * **Email**: This value helps keep your account safe. We recommend creating a dedicated Cloudflare user for analytics with the [_Analytics_ role](https://developers.cloudflare.com/fundamentals/manage-members/roles/) (read-only). Note that the _Analytics_ role is available to Enterprise customers only.  
   * **API Key**: Enter your Cloudflare Global API key. For details refer to [API Keys](https://developers.cloudflare.com/fundamentals/api/get-started/keys/).
6. Click **Install Integration**.![Configuring and installing the Datadog integration](https://developers.cloudflare.com/_astro/cloudflare-tile-datadog-fill-details.Bd14uPIs_Z1Rb82I.webp)

The Cloudflare App for Datadog should be installed now and you can view the dashboard.

## Task 2 - View the dashboard

By default, the dashboard displays metrics for all sites in your Cloudflare account. Use the dashboard filters to see metrics for a specific domain.

The dashboard displays the following metrics:

* **Threats** (threats by type, threats by country)
* **Requests** (total requests, cached requests, uncached requests, top countries by request, requests by IP class, top content types)
* **Bandwidth** (total bandwidth, encrypted and unencrypted traffic, cached bandwidth, uncached bandwidth)
* **Caching** (Cache hit rate, request caching rate over time)
* **HTTP response status errors**
* **Page views**
* **Search Engine Bot Traffic**
* **DNS** (DNS queries, response time, top hostnames, queries by type, stale vs. uncached queries)
![Dashboard displaying metrics for a site on a Cloudflare account](https://developers.cloudflare.com/_astro/cloudflare-dashboard-datadog.BETjd10H_1ROw9T.webp) 


---

---
title: Graylog
description: This tutorial explains how to analyze Cloudflare Logs using Graylog. The Graylog integration is available on GitHub.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Graylog

This tutorial explains how to analyze [Cloudflare Logs ↗](https://www.cloudflare.com/products/cloudflare-logs/) using [Graylog ↗](https://github.com/Graylog2/graylog-s3-lambda/blob/master/content-packs/cloudflare/cloudflare-logpush-content-pack.json).

## Overview

If you haven't used Cloudflare Logs before, visit our [Logs documentation](https://developers.cloudflare.com/logs/) for more details. Contact your Cloudflare Customer Account Team to enable logs for your account.

### Prerequisites

Before sending your Cloudflare log data to Graylog, make sure that you:

* Have an existing Graylog installation. Both single-node and cluster configurations are supported
* Have a Cloudflare Enterprise account with Cloudflare Logs enabled
* Configure [Logpush](https://developers.cloudflare.com/logs/logpush/)

Note

Cloudflare logs are HTTP/HTTPS request logs in JSON format and are gathered from our 200+ data centers globally. By default, timestamps are returned as UNIX nanosecond integers. All timestamp formats are supported by Graylog.
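For reference, a UNIX nanosecond timestamp can be turned into a readable date in JavaScript by dividing down to milliseconds. This is an illustrative sketch, not part of the Graylog setup:

```javascript
// Cloudflare log timestamps default to UNIX nanoseconds.
const ns = 1654515015000000000n; // BigInt nanosecond timestamp

// Divide by 1e6 to get milliseconds, then build a Date.
const ms = Number(ns / 1000000n);
const when = new Date(ms);

console.log(when.toISOString()); // 2022-06-06T11:30:15.000Z
```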

## Task 1 - Preparation

Before getting Cloudflare logs into Graylog:

1. Configure Cloudflare [Logpush](https://developers.cloudflare.com/logs/logpush/) to push logs with all desired fields to an AWS S3 bucket of your choice.
2. Download the latest [Graylog Integration for Cloudflare ↗](https://github.com/Graylog2/graylog-s3-lambda/blob/master/content-packs/cloudflare/cloudflare-logpush-content-pack.json).
3. Decompress the zip file.

Once decompressed, the integration package includes:

* _graylog-s3-lambda.jar_
* _content-packs/cloudflare/cloudflare-logpush-content-pack.json_
* _content-packs/cloudflare/threat-lookup.csv_

## Task 2 - Create and configure the AWS Lambda Function

1. Navigate to the Lambda service page in the AWS web console.
2. Create a new Lambda function and specify a _function name_ of your choice and the _Java-8 runtime_.
3. Create or specify an execution role with the following permissions. You can also further restrict the resource permissions as desired for your specific set-up.

```

{

  "Version": "2012-10-17",

  "Statement": [

    {

      "Sid": "Policy",

      "Effect": "Allow",

      "Action": [

        "logs:CreateLogGroup",

        "s3:GetObject",

        "logs:CreateLogStream",

        "logs:PutLogEvents"

      ],

      "Resource": [

        "arn:aws:logs:your-region:your-account-number:*",

        "arn:aws:s3:your-region::cloudflare-bucket-name/*"

      ]

    }

  ]

}


```

**Note:** If your Graylog cluster is running in a VPC, you may need to add the _AWSLambdaVPCAccessExecutionRole_ managed role to allow the Lambda function to route traffic to the VPC.

4. Once you've created the Lambda function, upload the function code _**graylog-s3-lambda.jar**_ downloaded in [Task 1](#task-1---preparation). Specify the following method for the Handler: _org.graylog.integrations.s3.GraylogS3Function::handleRequest_.
5. Specify at least the following required environment variables to configure the Lambda function for your Graylog cluster:  
   * **CONTENT\_TYPE** _(required)_ \- _application/x.cloudflare.log_ value to indicate that the Lambda function will process Cloudflare logs.  
   * **COMPRESSION\_TYPE** _(required)_ \- _gzip_ since Cloudflare logs are gzip compressed.  
   * **GRAYLOG\_HOST** _(required)_ \- hostname or IP address of the Graylog host or cluster load balancer.  
   * **GRAYLOG\_PORT** _(optional - defaults to 12201)_ \- The Graylog service port.  
   * **CONNECT\_TIMEOUT** _(optional - defaults to 10000)_ \- The number of milliseconds to wait for the connection to be established.  
   * **LOG\_LEVEL** _(optional - defaults to INFO)_ \- The level of detail to include in the CloudWatch logs generated from the Lambda function. Supported values are _OFF_, _ERROR_, _WARN_, _INFO_, _DEBUG_, _TRACE_, and _ALL_. Increase the logging level to help with troubleshooting. See [Defining Custom Log Levels in Code ↗](https://logging.apache.org/log4j/2.0/manual/customloglevels.html) for more information.  
   * **CLOUDFLARE\_LOGPUSH\_MESSAGE\_FIELDS** _(optional - defaults to all)_ \- The fields to parse from the message. Specify as a comma-separated list of field names.  
   * **CLOUDFLARE\_LOGPUSH\_MESSAGE\_SUMMARY\_FIELDS** _(optional - defaults to ClientRequestHost, ClientRequestPath, OriginIP, ClientSrcPort, EdgeServerIP, EdgeResponseBytes)_ \- The fields to include in the message summary that appears above the parsed fields at the top of each message in Graylog. Specify as a comma-separated list of field names.![List of required Graylog environment variables](https://developers.cloudflare.com/_astro/graylog-environment-variables.Db3fSAfE_1M5TP.webp)  
   **Note:** More configuration variables are available to fine-tune the function configuration in the Graylog Lambda S3 [README ↗](https://github.com/Graylog2/graylog-s3-lambda/blob/master/README.md#step-2-specify-configuration) file.
6. Create an AWS S3 Trigger for the Lambda function so that the function can process each Cloudflare log file that is written. Specify the same S3 bucket from [Task 1](#task-1---preparation) and choose the _All object create events_ option. Any other desired file filters can be applied here.![Add trigger dialog with an example AWS S3 Trigger](https://developers.cloudflare.com/_astro/aws-s3-add-trigger.CKwYBqmZ_Z1dJOUN.webp)
7. If your Graylog cluster is located within a VPC, you will need to [configure your Lambda function to access resources in a VPC ↗](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html). You may also need to create a [VPC endpoint for the AWS S3 service ↗](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html#create-vpc-endpoint). This allows the Lambda function to access S3 directly when running in a VPC.

Note

By default, all log messages are sent over TCP. TLS encryption between the Lambda function and Graylog is not currently supported. We recommend taking appropriate measures to secure the log messages in transit, such as placing the Lambda function within a secure VPC subnet where the Graylog node or cluster is running.

## Task 3 - Import the content pack in Graylog

Importing the Cloudflare Logpush content pack into Graylog loads the necessary configuration to receive Cloudflare logs and installs the Cloudflare dashboards.

The following components install with the content pack:

* Cloudflare dashboards ([Task 4](#task-4---view-the-cloudflare-dashboards)).
* A Cloudflare GELF (TCP) input that allows Graylog to receive Cloudflare logs.
* A Cloudflare message [stream ↗](https://docs.graylog.org/en/3.1/pages/streams.html).
* [Pipeline ↗](https://docs.graylog.org/en/3.1/pages/pipelines/pipelines.html) rules that help to process and parse Cloudflare log fields.

To import the content pack:

1. Locate the _cloudflare-logpush-content-pack.json_ file that you downloaded and extracted in [Task 1](#task-1---preparation).
2. In Graylog, go to **System** \> **Content Packs** and click **Upload** in the top right. Once uploaded, the Cloudflare Logpush content pack will appear in the list of uploaded content packs.![Uploading Graylog content packs](https://developers.cloudflare.com/_astro/graylog-content-packs.D1kZ2lWL_Z1NwPJk.webp)
3. Click **Install**.![Installing Graylog content packs](https://developers.cloudflare.com/_astro/graylog-content-packs-uploaded.DEaypq4Q_21xo6P.webp)
4. In the **Install** dialog, enter an optional install comment, and verify that the correct values are entered for all configuration parameters.  
   * A path is required for the MaxMind™️ database, available at [https://dev.maxmind.com/geoip/ ↗](https://dev.maxmind.com/geoip/).  
   * A path is also required for the _Threat Lookup_ CSV file, extracted in [Task 1](#task-1---preparation).  
![Adding an install comment and configuring parameters in Install Dialog screen](https://developers.cloudflare.com/_astro/graylog-content-pack-install.B5_Hmivu_Z1VzJ0P.webp)
5. Once installed, your Graylog cluster will be ready to receive Cloudflare logs from the Lambda function.

Refer to the Graylog Lambda S3 [README ↗](https://github.com/Graylog2/graylog-s3-lambda/blob/master/README.md) for additional information and troubleshooting tips.
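To confirm that the GELF (TCP) input from the content pack is listening before wiring up the Lambda function, you can hand-deliver a single message. This is a sketch: the hostname and port are placeholders (12201 is a common GELF default, not a value mandated by the content pack), and GELF over TCP expects one null-terminated JSON document per message.

```shell
# Placeholders: replace with your Graylog node and the GELF TCP input port.
GRAYLOG_HOST="graylog.internal"
GELF_PORT=12201

# A minimal GELF 1.1 message.
MSG='{"version":"1.1","host":"smoke-test","short_message":"hello from the S3 pipeline"}'

# Uncomment to send once the input is running (requires netcat):
# printf '%s\0' "$MSG" | nc -w 3 "$GRAYLOG_HOST" "$GELF_PORT"
printf '%s\n' "$MSG"
```

If the message appears in the Graylog search view, the input is reachable and the Lambda function should be able to deliver logs the same way.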

## Task 4 - View the Cloudflare Dashboards

You can view your dashboard in the [Graylog Cloudflare integration page ↗](https://go.graylog.com/cloudflare). The dashboards include:

### Cloudflare - Snapshot

This is an at-a-glance overview of the most important metrics from your websites and applications on the Cloudflare network. You can use dashboard filters to further slice and dice the information for granular analysis of events and trends.

Use this dashboard to:

* Monitor the most important web traffic metrics of your websites and applications on the Cloudflare network
* View which countries and IPs your traffic is coming from, and analyze the breakdown between mobile and desktop traffic, protocol, methods, and content types
![Visualizing Cloudflare log metrics in the Graylog dashboard](https://developers.cloudflare.com/_astro/snapshot-cloudflare-dashboard-graylog.CRVPLE-B_Z2wU6qH.webp) 

### Cloudflare - Security

This overview provides insights into threats to your websites and applications, including the number of threats stopped, threats over time, top threat countries, and more.

Use this dashboard to:

* Monitor the most important security and threat metrics for your websites and applications
* Fine-tune and configure your IP firewall
![Visualizing an analysis of Cloudflare threat traffic in the Graylog dashboard](https://developers.cloudflare.com/_astro/security-cloudflare-dashboard-graylog.Bm8-7dyC_ZvCVKj.webp) 

### Cloudflare - Performance

This dashboard helps to identify and address performance issues and caching misconfigurations. Metrics include total vs. cached bandwidth, saved bandwidth, total requests, cache ratio, top uncached requests, and more.

Use this dashboard to:

* Monitor caching behavior and identify misconfigurations
* Improve configuration and caching ratio
![Visualizing Cloudflare Performance metrics in the Graylog dashboard](https://developers.cloudflare.com/_astro/performance-cloudflare-dashboard-graylog.BJk_tceI_ZUnpsP.webp) 

### Cloudflare - Reliability

This dashboard provides insights on the availability of your websites and applications. Metrics include origin response error ratio, origin response status over time, percentage of 3xx/4xx/5xx errors over time, and more.

Use this dashboard to:

* Investigate errors on your websites and applications by viewing edge and origin response status codes
* Further analyze errors based on status codes by countries, client IPs, hostnames, and other metrics
![Graylog dashboard Cloudflare Reliability](https://developers.cloudflare.com/_astro/reliability-cloudflare-dashboard-graylog.9KgmAZJm_c5YOr.webp) 

### Cloudflare - Bots

Use this dashboard to detect and mitigate bad bots so that you can prevent credential stuffing, spam registration, content scraping, click fraud, inventory hoarding, and other malicious activities.

Note

To get bot requests identified correctly, use only one WAF custom rule (or firewall rule), configured with the action _Interactive Challenge_. To learn more about custom rules, refer to the [WAF documentation](https://developers.cloudflare.com/waf/custom-rules/).

Use this dashboard to:

* Investigate bot activity on your website and prevent content scraping, checkout fraud, spam registration, and other malicious activities.
* Use these insights to tune Cloudflare and prevent bots from excessive usage and abuse across websites, applications, and API endpoints.
![Graylog dashboard Cloudflare Bot Management](https://developers.cloudflare.com/_astro/bot-management-cloudflare-dashboard-graylog.DUQmn7po_Z2nT7Vm.webp) 


---

---
title: New Relic
description: This tutorial explains how to analyze Cloudflare metrics using the New Relic One Cloudflare Quickstart.
image: https://developers.cloudflare.com/core-services-preview.png
---


# New Relic

This tutorial explains how to analyze Cloudflare metrics using the [New Relic One Cloudflare Quickstart ↗](https://newrelic.com/instant-observability/cloudflare/fc2bb0ac-6622-43c6-8c1f-6a4c26ab5434).

## Prerequisites

Before sending your Cloudflare log data to New Relic, make sure that you:

* Have a Cloudflare Enterprise account with Cloudflare Logs enabled.
* Have a New Relic account.
* Configure [Logpush to New Relic](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/new-relic/).
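The Logpush job from the last prerequisite can be created via the Cloudflare API. The sketch below uses placeholder values for the zone ID, API token, and New Relic license key; the `destination_conf` format follows the linked Logpush-to-New-Relic documentation, so treat it as a starting point rather than a definitive reference.

```shell
# Placeholders: substitute your own zone ID, API token, and license key.
ZONE_ID="YOUR_ZONE_ID"
NR_LICENSE_KEY="YOUR_NEW_RELIC_LICENSE_KEY"

# New Relic Logs API endpoint as the Logpush destination.
DEST="https://log-api.newrelic.com/log/v1?Api-Key=${NR_LICENSE_KEY}&format=cloudflare"

# Uncomment to create the job (the API token needs Logs Edit permission):
# curl -s "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logpush/jobs" \
#   -H "Authorization: Bearer ${API_TOKEN}" \
#   -H "Content-Type: application/json" \
#   --data '{"name":"new-relic-http","dataset":"http_requests","destination_conf":"'"${DEST}"'"}'
printf '%s\n' "$DEST"
```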

## Task 1 - Install the Cloudflare Network Logs quickstart

1. Log in to New Relic.
2. Click the Instant Observability button (top right).
3. Search for **Cloudflare Network Logs**.
![Cloudflare Network Logs install screen](https://developers.cloudflare.com/_astro/cloudflare-network-logs.CYJYSb1Z_1A3d0x.webp) 
4. Click **Install this quickstart**.
5. Follow the steps to deploy.

## Task 2 - View the Cloudflare Dashboards

You can view your dashboards on the New Relic dashboard page. The dashboards include the following information:

### Overview

Get a quick overview of the most important metrics from your websites and applications on the Cloudflare network.

![Cloudflare Network Logs install screen](https://developers.cloudflare.com/_astro/dash-1.CTd2mveX_ZpWmkd.webp) 

### Security

Get insights on threats to your websites and applications, including the number of threats the Web Application Firewall (WAF) took action on, threats over time, top threat countries, and more.

![Cloudflare Network security metrics screen](https://developers.cloudflare.com/_astro/dash-2.DpiyWwxC_Z1KLMnK.webp) 

### Performance

Identify and address performance issues and caching misconfigurations. Metrics include total requests, total versus cached requests, and total versus origin requests.

![Cloudflare Network Logs performance metrics screen](https://developers.cloudflare.com/_astro/dash-3.DMdRroU0_ZLKqKd.webp) 

### Reliability

Get insights on the availability of your websites and applications. Metrics include edge response status over time, percentage of `3xx`/`4xx`/`5xx` errors over time, and more.

![Cloudflare Network Logs reliability metrics screen](https://developers.cloudflare.com/_astro/dash-4.BIqk6bUl_wxIpq.webp) 


---

---
title: Sentinel
description: Cloudflare integrates with Microsoft Sentinel to make analyzing your Cloudflare data easier in a centralized space. Cloudflare has two versions of this connector available. We recommend using the latest Codeless Connector integration, as it provides easier setup and cost management and integrates with Sentinel Data Lake.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Sentinel

Cloudflare integrates with Microsoft Sentinel to make analyzing your Cloudflare data easier in a centralized space. Cloudflare has two versions of this connector available. We recommend using the latest Codeless Connector integration, as it provides easier setup and cost management and integrates with [Sentinel Data Lake ↗](https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-lake-overview).

**[Sentinel CCF Solution ↗](https://marketplace.microsoft.com/en-us/product/azure-application/cloudflare.azure-sentinel-solution-cloudflare-ccf?tab=Overview)** (recommended): The Codeless Connector Framework (CCF) provides partners, advanced users, and developers the ability to create custom connectors for ingesting data to Microsoft Sentinel.

**[Sentinel Function Based Connector ↗](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cloudflare.cloudflare%5Fsentinel?tab=Overview)**: The Cloudflare connector for Microsoft Sentinel uses [Azure Functions ↗](https://azure.microsoft.com/en-us/products/functions) to process security logs from Cloudflare's Logpush service and ingest them directly into the SIEM platform.

This guide provides clear, step-by-step instructions for integrating Cloudflare logs with the new CCF connector for Microsoft Sentinel using Azure Blob Storage. By following these steps, you will be able to securely collect, store, and analyze your Cloudflare logs within Microsoft Sentinel, enhancing your organization's security monitoring and incident response capabilities.

## Step 1: Prerequisites

* Azure Subscription with permission to create and manage resources (Contributor/Owner role recommended).
* Microsoft Sentinel Workspace already set up in your Azure environment.
* Azure Storage Account with a Blob container for storing Cloudflare logs.
* Cloudflare Account with access to the domain whose logs you wish to export, and permission to configure Logpush jobs.

## Step 2: Set up a Logpush job

1. Log in to the [Cloudflare dashboard ↗](https://dash.cloudflare.com/), and select your account and domain.
2. Go to **Analytics** \> **Logs** and select **Logpush**.
3. Select **Create Logpush Job**. Choose the log type you want to export (for example, **HTTP requests**).
4. For the destination, select **Azure Blob Storage**.
5. Enter your Azure Blob Storage details:  
   * SAS Token (Shared Access Signature)  
To generate a SAS token from the Azure portal, first navigate to your storage account. Under the **Data Storage** section, select **Containers** and choose the relevant container. Within the settings, locate and select **Shared access signature**. Configure the required permissions, such as `write` and `create`, and specify the start and expiration dates for the token. Once configured, generate the SAS token accordingly.
6. Save and activate the Logpush job.

For complete details, refer to the [Cloudflare Logpush to Azure documentation](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/).
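The SAS-token step above can also be scripted with the Azure CLI. This is a sketch under stated assumptions: the storage account and container names are hypothetical placeholders, and `cw` grants the `create` and `write` permissions the Logpush destination needs.

```shell
# Placeholders: replace with your storage account, container, and expiry date.
ACCOUNT="mylogsstorage"
CONTAINER="cloudflare-logs"
EXPIRY="2026-01-01T00:00:00Z"

# Uncomment to generate a token with write/create permissions:
# az storage container generate-sas \
#   --account-name "$ACCOUNT" \
#   --name "$CONTAINER" \
#   --permissions cw \
#   --expiry "$EXPIRY" \
#   --output tsv
printf 'SAS request for %s/%s, valid until %s\n' "$ACCOUNT" "$CONTAINER" "$EXPIRY"
```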

## Step 3: Configure Azure and deploy the Data Connector in Microsoft Sentinel

1. Log in to the Azure Portal and go to your **Microsoft Sentinel** workspace.
2. Select **Content Hub** in the navigation bar and search for **Cloudflare**.
3. Select the **Cloudflare** solution from the results.
4. Select **Install** in the right pane.
5. In your **Sentinel workspace**, go to **Data connectors**.
6. Search for the **Cloudflare connector** (it may appear as **Cloudflare (using Azure Blob Storage)**).
7. Select the connector to configure it.
![Azure portal](https://developers.cloudflare.com/_astro/azure-portal.DumVF0xP_1Jxd4n.webp) 

## Step 4: Fill out required fields

When configuring the Cloudflare data connector, you will need to provide the following information:

* Blob container URL

To obtain the container URL within your Azure storage account, access the Azure Portal and navigate to your storage account. Under **Data Storage**, select **Containers**, then choose the relevant container receiving logs from Cloudflare. The container properties section will display the URL link.

* Resource group name for the storage account
* Storage account location
* Subscription ID
* Event grid topic name (only if reconfiguring; not needed for initial setup)

After entering all information, select **Connect**.

Ensure all fields are correctly filled to enable seamless log ingestion.

![Configuration fields](https://developers.cloudflare.com/_astro/configuration.ypRscF1K_pXKb5.webp) 

## Step 5: Complete deployment

1. Select **Apply changes** or **Connect** to finalize the connector setup.
2. Monitor the Data connectors page in Sentinel to confirm that the Cloudflare connector status is **Connected**.
3. Verify that Cloudflare logs are appearing in your Sentinel workspace under **Log Analytics** \> **Logs**.
4. If logs are not appearing, review your Blob Storage permissions, Cloudflare Logpush configuration, and Sentinel connector settings.
![Data connectors](https://developers.cloudflare.com/_astro/data-connectors.By58rEfp_2e4kQf.webp) 

By following these steps, you have successfully integrated Cloudflare logs with Microsoft Sentinel using Azure Blob Storage. This integration enables advanced security analytics and incident response capabilities for your Cloudflare-protected environments. If you encounter issues, review each configuration step, check permissions, and consult Microsoft's official documentation.

![Cloudflare traffic overview](https://developers.cloudflare.com/_astro/traffic-overview.C9qSRy0T_iH49l.webp) 

## Supported Logs

The following fields are supported by the Sentinel connectors (CCF and function-based). You can push all log fields to Azure using Logpush, as described in the [Enable Microsoft Azure](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/) documentation.

Parser fields

ClientDeviceType  
Source  
ClientSSLCipher  
ClientTlsCipher  
ClientSSLProtocol  
ClientTlsProtocol  
FirewallMatchesActions  
Event  
FirewallMatchesRuleIDs  
RuleID  
ClientRequestBytes  
ClientBytes  
ClientSrcPort  
ClientPort  
EdgeResponseBytes  
OriginBytes  
BotScore  
BotScoreSrc  
CacheCacheStatus  
CacheResponseBytes  
CacheResponseStatus  
CacheTieredFill  
ClientASN  
ClientCountry  
ClientIP  
ClientIPClass  
ClientRequestHost  
ClientRequestMethod  
ClientRequestPath  
ClientRequestProtocol  
ClientRequestReferer  
ClientRequestURI  
ClientRequestUserAgent  
ClientXRequestedWith  
EdgeColoCode  
EdgeColoID  
EdgeEndTimestamp  
EdgePathingOp  
EdgePathingSrc  
EdgePathingStatus  
EdgeRateLimitAction  
EdgeRateLimitID  
EdgeRequestHost  
EdgeResponseCompressionRatio  
EdgeResponseContentType  
EdgeResponseStatus  
EdgeServerIP  
EdgeStartTimestamp  
FirewallMatchesSources  
OriginIP  
OriginResponseBytes  
OriginResponseHTTPExpires  
OriginResponseHTTPLastModified  
OriginResponseStatus  
OriginResponseTime  
OriginSSLProtocol  
ParentRayID  
RayID  
SecurityLevel  
WAFAction  
WAFFlags  
WAFMatchedVar  
WAFProfile  
WAFRuleID  
WAFRuleMessage  
WorkerCPUTime  
WorkerStatus  
WorkerSubrequest  
WorkerSubrequestCount  
ZoneID  
Application  
ClientMatchedIpFirewall  
ClientProto  
ClientTcpRtt  
ClientTlsClientHelloServerName  
ClientTlsStatus  
ColoCode  
ConnectTimestamp  
DisconnectTimestamp  
IpFirewall  
OriginPort  
OriginProto  
OriginTcpRtt  
OriginTlsCipher  
OriginTlsFingerprint  
OriginTlsMode  
OriginTlsProtocol  
OriginTlsStatus  
ProxyProtocol  
Status  
Timestamp  
ClientASNDescription  
ClientRefererHost  
ClientRefererPath  
ClientRefererQuery  
ClientRefererScheme  
ClientRequestQuery  
ClientRequestScheme  
Datetime  
Kind  
MatchIndex  
OriginatorRayID  
TimeGenerated  

WorkBook fields

ClientCountry\_s  
ClientDeviceType\_s  
ClientIP\_s  
ClientIPClass\_s  
ClientRequestMethod\_s  
ClientRequestProtocol\_s  
ClientRequestReferer\_s  
ClientRequestURI\_s  
ClientRequestUserAgent\_s  
EdgePathingOp\_s  
EdgePathingSrc\_s  
EdgePathingStatus\_s  
EdgeResponseContentType\_s  
threat  
TimeGenerated  
EdgePathingSrc\_s  
EdgePathingOp\_s  
EdgePathingStatus\_s  
EdgeResponseStatus\_d  
OriginResponseStatus\_d  
TimeGenerated  

Analytic rules

ClientIPClass  
SrcIpAddr  
ClientRequestURI  
HttpUserAgentOriginal  
HttpRequestMethod  
TimeGenerated  
SrcGeoCountry  
ClientRequestURI  
HttpRequestMethod  
HttpStatusCode  
DstBytes  
SrcBytes  
WAFRuleID  
WAFRuleMessage  
WAFAction  

Hunting queries

TimeGenerated  
HttpStatusCode  
SrcIpAddr  
ClientRequestURI  
ClientTlsStatus  
HttpUserAgentOriginal  
OriginTlsStatus  
NetworkRuleName  
EdgeRequestHost  
SrcGeoCountry  
EdgeResponseStatus  
ClientCountry  
ClientDeviceType  
status  
OriginResponseStatus  
WorkerSubrequest  
http\_method  
dest\_ip  
dest\_host  
uri\_path  
http\_user\_agent  
status  
src\_ip  
OriginResponseStatus  
RayID  
WorkerSubrequest  
http\_method  
bytes\_out  
bytes\_cached\_requests  
threat  
ClientRequestProtocol  
http\_referrer  
ClientIPClass  
cf\_http\_status\_codes  
http\_content\_type  
cf\_http\_status\_codes  
cached\_requests  
CacheCacheStatus  
ClientASN  
EdgePathingSrc  
EdgePathingOp  
EdgePathingStatus  
ClientRequestUserAgent  
SecurityAction  
SecurityRuleID  
SecurityRuleDescription  

## Resources

[Download Cloudflare's CCF Sentinel Solution ↗](https://marketplace.microsoft.com/en-us/product/azure-application/cloudflare.azure-sentinel-solution-cloudflare-ccf?tab=Overview)  
[Microsoft Data Lake Overview ↗](https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-lake-overview)  
[About the CCF Platform ↗](https://learn.microsoft.com/en-us/azure/sentinel/create-codeless-connector)


---

---
title: Splunk
description: This tutorial explains how to analyze Cloudflare Logs using the Cloudflare App for Splunk.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Splunk

This tutorial explains how to analyze [Cloudflare Logs ↗](https://www.cloudflare.com/products/cloudflare-logs/) using the [Cloudflare App for Splunk ↗](https://splunkbase.splunk.com/app/4501/).

## Prerequisites

Before sending your Cloudflare log data to Splunk, ensure that you:

* Have an existing Splunk Enterprise or Cloud account
* Have a Cloudflare Enterprise account
* Consult the [Splunk documentation ↗](https://splunkbase.splunk.com/app/4501/) for the Cloudflare App

## Task 1 - Install and Configure the Cloudflare App for Splunk

To install the [Cloudflare App for Splunk ↗](https://splunkbase.splunk.com/app/4501/):

1. Log in to your Splunk instance.
2. Under **Apps** \> **Find More Apps**, search for _Cloudflare App for Splunk._
3. Click **Install**.
![Splunk website with Apps menu expanded and Search & Reporting menu item along with Cloudflare App for Splunk](https://developers.cloudflare.com/_astro/splunk-cloudflare-app-for-splunk.CSImDJTK_Z1O8qyE.webp) 
4. Restart and reopen your Splunk instance.
5. Edit the `cloudflare:json` source type in the Cloudflare App for Splunk. To edit the source type:  
   1. Click the **Settings** dropdown and select **Source types**.  
   2. Uncheck **Show only popular** and search for _cloudflare_.  
   3. Click **Edit** and change the Regex expression to `([\r\n]+)`.  
   4. Save your edits.
6. Create an index on Splunk to store the HTTP Event logs. To create an index:  
   1. Open the setup screen by clicking the **Settings** dropdown, then click **Indexes**.  
   2. Select **New Index**. Note that the **Indexes** page also gives you the status of all your existing indexes so that you can see whether you are about to use up your licensed amount of space.  
   3. Name the index **cloudflare**, which is the default index that the Cloudflare App will use.
7. Set up the HTTP Event Collector (HEC) on Splunk. To create an HEC:  
   1. Click the **Settings** dropdown and select **Data inputs**.  
   2. Click **+Add new** and follow the wizard. When prompted, submit the following responses:  
         * Name: Cloudflare  
         * Source Type: Select > "cloudflare:json"  
         * App Context: Cloudflare App for Splunk (cloudflare)  
         * Index: cloudflare  
   3. At the end of the wizard you will see a **Token Value**. This token authorizes the Cloudflare Logpush job to send data to your Splunk instance. If you forget to copy it now, Splunk allows you to retrieve the value at any time.
8. Verify whether Splunk is using a self-signed certificate. You will need this information when creating the Logpush job.
9. Determine the endpoint to which to send the data. The endpoint should be:

```
"<protocol>://input-<host>:<port>/<endpoint>" or "<protocol>://http-inputs-<host>:<port>/<endpoint>"
```

Where:

* `protocol`: HTTP or HTTPS
* `input`: `input` or `http-inputs` based on whether you have a self-service or managed cloud plan
* `host`: The hostname of your Splunk instance. The easiest way to determine the hostname is to look at the URL you went to when you logged in to Splunk.
* `port`: 443 or 8088
* `endpoint`: services/collector/raw

For example: `https://prd-p-0qk3h.splunkcloud.com:8088/services/collector/raw`. Refer to the [Splunk Documentation ↗](https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector) for more details and examples.
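Before creating the Logpush job, you can smoke-test the endpoint and token directly. This is a sketch using the example hostname above; the HEC token is a placeholder, and `Authorization: Splunk <token>` is the standard HEC authentication header.

```shell
# Placeholders: substitute your own HEC endpoint and token value.
HEC_URL="https://prd-p-0qk3h.splunkcloud.com:8088/services/collector/raw"
HEC_TOKEN="YOUR_HEC_TOKEN"

# Uncomment to send a test event (add --insecure if Splunk uses a
# self-signed certificate):
# curl -s "$HEC_URL" \
#   -H "Authorization: Splunk ${HEC_TOKEN}" \
#   -d '{"event": "hec smoke test"}'
printf '%s\n' "$HEC_URL"
```

A successful response includes `"text":"Success"`; searching `index=cloudflare` (once the index exists) should then show the test event.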

**Post Installation Notes**

You can change the **Index Name** after the initial configuration by clicking on the **Settings** dropdown and navigating to **Advanced search**. There you can select **Search macros** and look for the Cloudflare App for Splunk.

![Splunk interface highlighting Apps menu and Manage Apps option along with Enable Acceleration checkbox](https://developers.cloudflare.com/_astro/splunk-settings-advanced-search-search-macros.Bt1szjjM_WDiER.webp) 

The Cloudflare App for Splunk comes with a custom Cloudflare Data Model that has an acceleration time frame of 1 day but is not accelerated by default. If you enable [Data Model acceleration ↗](https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Acceleratedatamodels), we recommend that the Data Model is only accelerated for 1 or 7 days to ensure there are no adverse effects within your Splunk environment.

Enable or disable acceleration after the initial configuration by accessing the app Set up page by clicking the **Apps** dropdown, then **Manage Apps** \> **Cloudflare Set Up**.

![Splunk Advanced Search page highlighted Search macros and Advanced search](https://developers.cloudflare.com/_astro/splunk-apps-manage-apps-cloudflare-set-up-enable-data-model-acceleration.KQW0iwYr_4acu7.webp) 

You can also manually configure Data Models by going to **Settings** \> **Data models**. Learn more about data model acceleration in the [Splunk documentation ↗](https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Acceleratedatamodels).

## Task 2 - Make the API call to create the Logpush job

Create the Logpush job by following the instructions on [Enable Logpush to Splunk](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/splunk/). The API call creates a Logpush job but does not enable it.
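As a rough illustration of that API call, the sketch below assembles a Splunk `destination_conf` from the endpoint pieces described in Task 1. All values are placeholders, and the exact parameter set comes from the linked Enable Logpush to Splunk page, which remains the authoritative reference.

```shell
# Placeholders throughout: host, port, channel ID, and HEC token are examples only.
HOST="prd-p-0qk3h.splunkcloud.com:8088"
ENDPOINT="services/collector/raw"
CHANNEL="8e95bc7d-0000-0000-0000-000000000000"
TOKEN="YOUR_HEC_TOKEN"

DEST="splunk://${HOST}/${ENDPOINT}?channel=${CHANNEL}&insecure-skip-verify=false&sourcetype=cloudflare:json&header_Authorization=Splunk%20${TOKEN}"

# Uncomment to create the job (the API token needs Logs Edit permission):
# curl -s "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logpush/jobs" \
#   -H "Authorization: Bearer ${API_TOKEN}" \
#   -H "Content-Type: application/json" \
#   --data '{"name":"splunk-http","dataset":"http_requests","destination_conf":"'"${DEST}"'"}'
printf '%s\n' "$DEST"
```

Set `insecure-skip-verify=true` only if Splunk is using a self-signed certificate, as verified in Task 1.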

Enable the Logpush job through the Cloudflare dashboard or through the API by following the instructions on [Enable Logpush to Splunk](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/splunk/). To enable through the dashboard:

1. Navigate to the Cloudflare dashboard and select **Analytics & Logs** \> **Logs**.
2. Select **Edit** and select the fields referenced in the Dashboard section below to fully populate all tables and graphs.
3. Enable the Logpush job by toggling on the switch next to the Edit link. Data takes a few minutes to populate.

To validate that you are receiving data, search `index=cloudflare` in Splunk.

## Task 3 - View the Dashboards

You can analyze Cloudflare logs with the 13 dashboards listed below.

You can use filters within these dashboards to help narrow the analysis by date and time, device type, country, user agent, client IP, hostname, and more to further help with debugging and tracing.

### About the Dashboards

The following dashboards are available as part of the Cloudflare App for Splunk.

#### Cloudflare - Snapshot

![Splunk dashboard with Web Traffic Overview metrics](https://developers.cloudflare.com/_astro/splunk-cloudflare-snapshot-dashboard.Du4lsJw__hYMt8.webp) 

#### Cloudflare - Reliability

![Splunk dashboard with a high level summary of Reliability metrics](https://developers.cloudflare.com/_astro/splunk-cloudflare-reliability-summary-dashboard.C1py_8XX_Zupzyv.webp) ![Splunk dashboard with a detailed summary of Reliability metrics](https://developers.cloudflare.com/_astro/splunk-cloudflare-reliability-detailed-dashboard.jeSlAQnq_1qkyMx.webp) 

#### Cloudflare - Security

![Splunk dashboard with an overview of Security metrics](https://developers.cloudflare.com/_astro/splunk-cloudflare-security-overview.D-c4Punh_Z1C8EgV.webp) ![Splunk dashboard with an overview of Security metrics for WAF](https://developers.cloudflare.com/_astro/splunk-cloudflare-security-waf-dashboard.DTZrF-bl_lB5WH.webp) ![Splunk dashboard with an overview of Security metrics for Rate Limiting](https://developers.cloudflare.com/_astro/splunk-cloudflare-security-rate-limiting-dashboard.CRoUKWVc_ZVMcdn.webp) ![Splunk dashboard with a high level summary of Security metrics for Bots](https://developers.cloudflare.com/_astro/splunk-cloudflare-security-bot-summary-dashboard.S5k4rphZ_19QyUS.webp) ![Splunk dashboard with a detailed summary of Security metrics for Bots](https://developers.cloudflare.com/_astro/splunk-cloudflare-security-bots-detailed-dashboard.x_RSBUYB_T6P0y.webp) 

#### Cloudflare - Performance

![Splunk dashboard with Performance metrics for Requests and Cache](https://developers.cloudflare.com/_astro/splunk-cloudflare-performance-requests-and-cache-dashboard.CzCMXwsS_Z2rsU7q.webp) ![Splunk dashboard with Performance metrics for Bandwidth](https://developers.cloudflare.com/_astro/splunk-cloudflare-performance-bandwidth-dashboard.B0Io0qTc_257Rz.webp) 

_Hostname, Content Type, Request Methods, Connection Type_: Get insights into your most popular hostnames, most requested content types, breakdown of request methods, and connection type.

![Splunk dashboard with Cloudflare Performance metrics including for Hostname, Content Type, Request Methods, Connection Type](https://developers.cloudflare.com/_astro/splunk-cloudflare-performance-hostname-dashboard.BNc0Yvsw_ZRXqjX.webp) ![Splunk dashboard with Cloudflare Performance metrics for Static vs. Dynamic Content](https://developers.cloudflare.com/_astro/splunk-cloudflare-performance-static-vs-dynamic-dashboard.Dx9F5klY_ZXDTlD.webp) 

### Filters

All dashboards have a set of filters that you can apply across the entire dashboard, as shown in the following example.

![Available dashboard filters from the Splunk dashboard](https://developers.cloudflare.com/_astro/splunk-filters.D7I8q-lv_ZQe0Nh.webp) 

You can use filters to drill down and examine the data at a granular level. Filters include client country, client device type, client IP, client request host, client request URI, client request user agent, edge response status, origin IP, and origin response status.

The default time interval is set to 24 hours. Note that for correct calculations, the filters need to exclude Worker subrequests (**WorkerSubrequest** \= _false_) and purge requests (**ClientRequestMethod** is not _PURGE_).

Available Filters:

* Time Range (EdgeStartTimestamp)
* Client Country
* Client Device type
* Client IP
* Client Request Host
* Client Request URI
* Client Request User Agent
* Edge response status
* Origin IP
* Origin Response Status
* RayID
* Worker Subrequest
* Client Request Method

## Debugging tips

### Incomplete dashboards

The Splunk Cloudflare App relies on data from the Cloudflare Enterprise Logs fields outlined below. Depending on which fields you have enabled, certain dashboards might not populate fully.

If that is the case, verify and test the Cloudflare App filters below each dashboard (these filters are the same across all dashboards). You can delete any filters that you do not need, even if such filters include data fields already contained in your logs.

You can also compare the list of fields you are getting in Cloudflare Logs with the fields listed in **Splunk** \> **Settings** \> **Data Model** \> **Cloudflare**.

The available fields are:

* CacheCacheStatus
* CacheResponseBytes
* CacheResponseStatus (deprecated)
* ClientASN
* ClientCountry
* ClientDeviceType
* ClientIP
* ClientIPClass
* ClientRequestBytes
* ClientRequestHost
* ClientRequestMethod
* ClientRequestPath
* ClientRequestProtocol
* ClientRequestReferer
* ClientRequestURI
* ClientRequestUserAgent
* ClientSSLCipher
* ClientSSLProtocol
* ClientSrcPort
* EdgeColoCode
* EdgeColoID
* EdgeEndTimestamp
* EdgePathingOp
* EdgePathingSrc
* EdgePathingStatus
* EdgeRequestHost
* EdgeResponseBytes
* EdgeResponseContentType
* EdgeResponseStatus
* EdgeServerIP
* EdgeStartTimestamp
* OriginIP
* OriginResponseStatus
* OriginResponseTime
* OriginSSLProtocol
* RayID
* SecurityAction
* SecurityActions
* SecurityRuleDescription
* SecurityRuleID
* SecurityRuleIDs
* SecuritySources
* WAFFlags
* WAFMatchedVar
* WorkerSubrequest
* ZoneID


---

---
title: Account analytics (beta)
description: Cloudflare account analytics lets you access a wide range of aggregated metrics from all the sites under a specific Cloudflare account.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Account analytics (beta)

Cloudflare account analytics lets you access a wide range of aggregated metrics from all the sites under a specific Cloudflare account.

Note

For general information about all of Cloudflare's analytics offerings, refer to [About Cloudflare Analytics](https://developers.cloudflare.com/analytics/faq/about-analytics/).

---

## View your account analytics

To view metrics for your site, in the Cloudflare dashboard, go to the **Account Analytics** page.

[ Go to **Account analytics** ](https://dash.cloudflare.com/?to=/:account/analytics) 

Once it loads, the Account Analytics app displays a collection of categorized charts with aggregated metrics for your account. To understand the various metrics available, refer to _Review your account metrics_ below.

---

## Review your account metrics

This section outlines the aggregated metrics under each category. Before reviewing your metrics, let's define a couple of concepts used in some panels:

* _Rate_ \- Reflects the ratio between the amount for a specific data category and the total.
* _Bandwidth_ \- Refers to the number of bytes sent from the Cloudflare edge network to the requesting client.

Also, note that:

* To filter metrics for a specific time period, use the dropdown in the top right.
* Most metrics are grouped into panels representing different aspects of the underlying data.

### Summary of metrics

Below is a brief description of the major elements comprising the metrics available.

#### HTTP Traffic

These charts aggregate data for HTTP traffic, and include:

![Chart showing last week's data for HTTP traffic](https://developers.cloudflare.com/_astro/hc-dash-account-analytics-map.CcPRTQU-_2gUQhL.webp) 
* Spark lines for _Requests_, _Bandwidth_, _Page views_, and _Visitors_ (_Unique IPs_)
* An interactive map that breaks down the number of requests by country
* A table combining numerical and spark line data, sorted by total number of requests per country

#### Security

![Panel displaying lines highlighting encryption metrics: requests, requests rate, bandwidth, and bandwidth rate](https://developers.cloudflare.com/_astro/hc-dash-account-analytics_security_panel.5rFJ7hHV_Z27QO1S.webp) 

This panel features spark lines highlighting various encryption metrics, including: _requests_, _requests rate_, _bandwidth_, and _bandwidth rate_. These also include a comparative percentage change based on the previous period.

#### Cache

![Panel displaying lines for caching metrics: requests, requests rate, bandwidth, and bandwidth rate](https://developers.cloudflare.com/_astro/hc-dash-account-analytics_cache_card.BOCedSTx_Z26wddi.webp) 

This panel features spark lines for various caching metrics, including: _requests_, _requests rate_, _bandwidth_, and _bandwidth rate_. These also include a comparative percentage change based on the previous equivalent period. For example, if you selected _Last week_ as your time period, the previous period refers to the _week_ before.
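
The comparative change shown in these panels can be sketched as current-period total versus the previous equivalent period (the numbers below are made up):

```javascript
// Percentage change of the current period relative to the previous
// equivalent period (e.g. this week vs. the week before).
function percentChange(current, previous) {
  if (previous === 0) return null; // no baseline, change is undefined
  return ((current - previous) / previous) * 100;
}

// E.g. 130,000 cached requests this week vs. 100,000 the week before:
console.log(percentChange(130000, 100000)); // 30
```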

#### Errors

![Panel displaying lines for 4xx and 5xx error rates](https://developers.cloudflare.com/_astro/hc-account-analytics_errors_card.D2i2BrS9_dU6xT.webp) 

This panel displays spark lines for 4xx and 5xx error rates, respectively. Learn more about [HTTP Status Codes](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/). 

#### Network

![Statistics showing the percentage of requests that use a specific version of HTTP](https://developers.cloudflare.com/_astro/hc-dash-account-analytics_network_card.Fso_4DUE_Z2trpY.webp) 

#### Client HTTP Version Used

These statistics show the percentage of requests that use a specific version of HTTP.

#### Traffic Served Over SSL

These statistics show the percentage of traffic that is encrypted using a specific version of SSL or TLS.

#### Content Type Breakdown

These statistics show the number of requests based on the resource content type.


---

---
title: Cloudflare analytics with Workers
description: Learn how Cloudflare analytics tracks requests made by Cloudflare Workers.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Cloudflare analytics with Workers

Learn how Cloudflare analytics tracks requests made by [Cloudflare Workers](https://developers.cloudflare.com/workers/).

## What is a subrequest

With a no-op Worker running on a route (a Worker that simply proxies traffic by passing the original client request on to the origin), the request to the origin is counted as a subrequest, separate from the initial client-to-edge request. Unless the Worker responds with a static response and never contacts an origin, the eyeball → edge request and the edge → origin request are therefore each counted separately toward the request and bandwidth counts. However, subrequests are not included in the **Requests** or **Bandwidth** graphs of the Cloudflare **Analytics** app.
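
A minimal sketch of the pass-through flow described above. The stubbed `originFetch` and record shapes are illustrative, not the Workers runtime API; the point is that one eyeball request produces one separately counted origin subrequest:

```javascript
// Stub standing in for the origin fetch; each call is one "subrequest".
let subrequests = 0;
const originFetch = (request) => {
  subrequests += 1; // edge -> origin leg, counted separately
  return Promise.resolve({ status: 200, url: request.url });
};

// A no-op Worker handler: pass the client request through unchanged.
function handleRequest(request) {
  return originFetch(request);
}

handleRequest({ url: "https://example.com/" }).then((res) => {
  console.log(res.status, subrequests); // 200 1
});
```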

---

## Zone analytics

In the dashboard, the numbers in zone analytics reflect visitor traffic. That is, the number of requests shown in zone analytics (under the Analytics tabs in the dashboard) is the number of requests that were served to the client.

Similarly, the bandwidth is counted based on the bandwidth that is sent to the client, and status codes reflect the status codes that were served back to the client (so if a subrequest received a 500, but you respond with a 200, a 200 will be shown in the status codes breakdown).

---

## Worker analytics

For a breakdown of subrequest traffic (origin-facing traffic), go to the Cloudflare **Analytics** app and select the **Workers** tab. Under the **Workers** tab, below the Service Workers panel, are breakdowns of **Subrequests** by count, **Bandwidth**, and **Status Codes**. These help you spot and debug errors at your origin (such as spikes in 500s) and identify your cache-hit ratio, so you can understand the traffic going to your origin.

---

## FAQ

**Why do I not have any analytics for Workers?**

* If you are not currently using Workers (do not have Workers deployed on any routes or filters), we will not have any information to show you.
* If your Worker sends a static response back to the client without ever calling `fetch()` to an origin, you are not making any subrequests; thus, all traffic will be shown in zone Analytics.

**Will this impact billing?** 

No, [billing for Workers](https://developers.cloudflare.com/workers/platform/pricing/) is based on requests that go through a Worker. 

**Why am I seeing such a high cache hit ratio?**

Requests served by a Worker always show as cached. For an accurate cache hit ratio on subrequests, refer to the **Subrequests** graph in the **Analytics** app under the **Workers** analytics tab.
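
Because Worker-served requests always report as cached, an honest ratio should be computed over origin-facing subrequests only. A sketch, with illustrative record shapes:

```javascript
// Ratio of cache hits among a set of log records.
function cacheHitRatio(records) {
  if (records.length === 0) return 0;
  const hits = records.filter((r) => r.cacheStatus === "hit").length;
  return hits / records.length;
}

// Origin-facing subrequests only (Worker-served responses excluded,
// since those always show as "hit" and would inflate the ratio).
const subrequestRecords = [
  { cacheStatus: "hit" },
  { cacheStatus: "miss" },
  { cacheStatus: "miss" },
  { cacheStatus: "hit" },
];
console.log(cacheHitRatio(subrequestRecords)); // 0.5
```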


---

---
title: Status codes
description: Status Codes metrics in the Cloudflare dashboard Analytics app provide customers with a deeper insight into the distribution of errors that are occurring on their website per data center. A data center facility is where Cloudflare runs its servers that make up our edge network (current locations).
image: https://developers.cloudflare.com/core-services-preview.png
---


# Status codes

Note

Status Codes analytics by data center is exclusive to the [enterprise level of service ↗](https://www.cloudflare.com/plans/enterprise/contact/).

Status Codes metrics in the Cloudflare dashboard **Analytics** app provide customers with a deeper insight into the distribution of errors that are occurring on their website per data center. A data center facility is where Cloudflare runs its servers that make up our edge network ([current locations ↗](https://www.cloudflare.com/network/)).

HTTP status codes that appear in a response passing through our edge are displayed in analytics.

The `Origin Status Code` can help you investigate issues on your origin. If your origin returns a `5xx` error, Cloudflare's edge will forward this error to the end user. Comparing the `Edge Status Code` and `Origin Status Code` can help determine whether the issue is occurring on your origin or on the Cloudflare edge.
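
The comparison above can be sketched as a small helper. The field names are illustrative; the logic simply attributes a `5xx` to the origin when the origin itself returned one, and to the edge otherwise:

```javascript
// Locate where a 5xx error originated by pairing the edge and origin
// status codes for the same request.
function errorSource({ edgeStatus, originStatus }) {
  if (originStatus >= 500) return "origin"; // origin returned the 5xx
  if (edgeStatus >= 500) return "edge";     // generated at the Cloudflare edge
  return "none";
}

console.log(errorSource({ edgeStatus: 502, originStatus: 502 })); // "origin"
console.log(errorSource({ edgeStatus: 522, originStatus: 0 }));   // "edge"
console.log(errorSource({ edgeStatus: 200, originStatus: 200 })); // "none"
```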

Errors that originate from our edge servers (blank `502`, `503`, or `504` error page with just `Cloudflare`) are not reported as part of the error analytics.

You can filter out specific error(s) by selecting one or more in the legend. You can also exclude a particular error and it will no longer display as part of the graph.

Note

Users may also see `1xxx` errors, which are not reported. These are displayed as either `403` or `409` (edge) errors.

![Error analytics by Cloudflare data center](https://developers.cloudflare.com/_astro/status-codes.BbTZPg-P_ZDqqiT.webp) 

---

## Common edge status codes

* `400` \- Bad Request intercepted at the Cloudflare Edge (for example, missing or bad HTTP header)
* `403` \- Security functionality (for example, Web Application Firewall, Browser Integrity Check, [Cloudflare challenges](https://developers.cloudflare.com/cloudflare-challenges/), and most 1xxx error codes)
* `409` \- DNS errors typically in the form of 1000 or 1001 error code
* `413` \- File size upload exceeded the maximum size allowed (configured in the dashboard under **Network** \> **Maximum Upload Size**).
* `444` \- Used by Nginx to indicate that the server has returned no information to the client, and closed the connection. This error code is internal to Nginx and is **not** returned to the client.
* `499` \- Used by Nginx to indicate when a connection has been closed by the client while the server is still processing its request, making the server unable to send a status code back.

For more information, refer to [4xx Client Error](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/4xx-client-error/).

---

## Common origin status codes

* `400` \- Origin rejected the request due to bad or unsupported syntax sent by the application.
* `404` \- Returned only if the origin triggered a 404 response for a request.
* `4xx` \- Other client errors returned by the origin.
* `5xx` \- Server errors returned by the origin.

For more information, refer to [4xx Client Error](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/4xx-client-error/) and [Troubleshooting Cloudflare 5XX errors](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/).

---

## 52x errors

* `520` \- This is essentially a "catch-all" response for when the origin server returns something unexpected, or something that is not tolerated/cannot be interpreted by our edge (that is, protocol violation or empty response).
* `522` \- Our edge could not establish a TCP connection to the origin server.
* `523` \- Origin server is unreachable (for example, the origin IP changed but DNS was not updated, or due to network issues between our edge and the origin).
* `524` \- Our edge established a TCP connection, but the origin did not reply with an HTTP response before the connection timed out.
* `525` \- This error indicates that the SSL handshake between Cloudflare and the origin web server failed, either due to a network issue or a certificate issue at the origin.
* `526` \- The certificate configured at the origin is not valid.

For more information, refer to [Troubleshooting Cloudflare 5XX errors](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/).


---

---
title: Threat types
description: Cloudflare classifies the threats that it blocks or challenges. To help you understand more about your site’s traffic, the 'Type of Threats Mitigated' metric on the analytics page measures threats blocked or challenged by the following categories:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Threat types

Cloudflare classifies the threats that it blocks or challenges. To help you understand more about your site’s traffic, the 'Type of Threats Mitigated' metric on the analytics page measures threats blocked or challenged by the following categories:

## Bad browser

The source of the request was not legitimate or the request itself was malicious. Users would receive a [1010 error page](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-1xxx-errors/error-1010/) in their browser.

Cloudflare's [Browser Integrity Check](https://developers.cloudflare.com/waf/tools/browser-integrity-check/) looks for HTTP headers commonly abused by spammers and denies those requests access to your page. It also challenges visitors that do not have a user agent or have a non-standard user agent (commonly used by bots, crawlers, or automated visitors).

## Blocked hotlink

[Hotlink Protection](https://developers.cloudflare.com/waf/tools/scrape-shield/hotlink-protection/) ensures that other sites cannot use your bandwidth by building pages that link to images hosted on your origin server. This feature can be turned on and off by Cloudflare's customers.

## Human challenged

Visitors were presented with an interactive challenge page and failed to pass.

_Note: An interactive challenge page is a difficult to read word or set of numbers that only a human can translate. If entered incorrectly or not answered in a timely fashion, the request is blocked._

## Browser challenge

A bot gave an invalid answer to the JavaScript challenge (in most cases this will not happen, bots typically do not respond to the challenge at all, so "failed" JavaScript challenges would not get logged).

_Note: During a JavaScript challenge you will be shown an interstitial page for about five seconds while Cloudflare performs a series of mathematical challenges to make sure it is a legitimate human visitor._

## Bad IP

A request that came from an IP address that is not trusted by Cloudflare based on the threat score.

Previously, the threat score was a score from `0` (zero risk) to `100` (high risk) classifying the IP reputation of a visitor. Currently, the threat score is always `0` (zero).

## Country block

Requests from countries that were blocked based on the [user configuration](https://developers.cloudflare.com/waf/tools/ip-access-rules/) set in the WAF.

## IP block (user)

Requests from specific IP addresses that were blocked based on the [user configuration](https://developers.cloudflare.com/waf/tools/ip-access-rules/) set in the WAF.

## IP range block (/16)

A /16 IP range that was blocked based on the [user configuration](https://developers.cloudflare.com/waf/tools/ip-access-rules/) set in the WAF.

## IP range block (/24)

A /24 IP range that was blocked based on the [user configuration](https://developers.cloudflare.com/waf/tools/ip-access-rules/) set in the WAF.

## New Challenge (user)

[Challenge](https://developers.cloudflare.com/cloudflare-challenges/) based on user configurations set for visitor's IP in either WAF managed rules or custom rules, configured in **Security** \> **WAF**.

## Challenge error

Requests made by a bot that failed to pass the challenge.

_Note: An interactive challenge page is a difficult to read word or set of numbers that only a human can translate. If entered incorrectly or not answered in a timely fashion, the request is blocked._

## Bot Request

Request that came from a bot.

## Unclassified

Unclassified threats comprise a number of automatic blocks that are not related to the Browser Integrity Check (Bad Browser). These usually relate to Hotlink Protection, as well as other requests blocked at Cloudflare's global network before reaching your servers, based on the composition of the request (not its content).


---

---
title: Total threats stopped
description: Total Threats Stopped measures the number of 'suspicious' and 'bad' requests that were aimed at your site. Requests receive these labels as they enter Cloudflare's network:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Total threats stopped

Total Threats Stopped measures the number of 'suspicious' and 'bad' requests that were aimed at your site. Requests receive these labels as they enter Cloudflare's network:

* **Legitimate:** Request passed directly to your site.
* **Suspicious:** Request has been challenged with a [Cloudflare challenge](https://developers.cloudflare.com/cloudflare-challenges/).
* **Bad:** Request has been blocked because of our Browser Integrity Check, or because of user-configured settings like WAF rules or IP Access rules.

In addition to threat analytics you can also monitor search engine crawlers going to your websites. For most websites, threats and crawlers make up 20% to 50% of traffic.


---

---
title: Zone Analytics
description: The Cloudflare zone analytics is a major component of the overall Cloudflare Analytics product line.  Specifically, this app gives you access to a wide range of metrics, collected at the website or domain level.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Zone Analytics

Cloudflare Zone Analytics is a major component of the overall Cloudflare Analytics product line. Specifically, this app gives you access to a wide range of metrics collected at the website or domain level.

Note

Read [Cloudflare Analytics](https://developers.cloudflare.com/analytics/faq/about-analytics/) for general information about all of Cloudflare's analytics offerings, including the characteristics of the data that Cloudflare captures and processes.

---

## View your website analytics

To view metrics for your website, in the Cloudflare dashboard, go to the **Analytics & Logs** page.

[ Go to **HTTP Traffic** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/traffic) 

Once it loads, you can find tabs for **Traffic**, **Security**, **Performance**, **DNS**, **Workers**, and **Logs** (Enterprise domains only). To understand the various metrics available, refer to _Review your website metrics_ below.

---

## Review your website metrics

This section outlines the metrics available under each Analytics app tab. Before proceeding, note that each tab may contain:

* One or more panels to further categorize the underlying metrics.
* A dropdown (on the panel's top right) to filter metrics for a specific time period. The time period you can select may vary based on the Cloudflare plan that your domain is associated with.

Note

Cloudflare analytics are case sensitive for paths and URIs. Make sure that filters or queries use the correct case.

Below is a summary of each Analytics app tab.

### HTTP Traffic

#### Free plan

These metrics include legitimate user requests as well as crawlers and threats. The HTTP Traffic tab features the following panels: 

* **Web Traffic** \- Displays metrics for _Requests_, _Bandwidth_, and _Unique Visitors_. If you are using Cloudflare Workers, subrequests data will not be visible in zone Traffic Analytics. Instead, you can find subrequests analytics under the **Workers & Pages** tab in the **Overview** section. Refer to [Worker Analytics](https://developers.cloudflare.com/analytics/account-and-zone-analytics/analytics-with-workers/#worker-analytics) for more information.
* **Web Traffic Requests by Country** \- An interactive map that breaks down the number of requests by country. This panel also includes a **Top Traffic Countries / Regions** data table that displays the countries with the highest number of requests (up to five, if the data exists).

#### Pro, Business, or Enterprise plan

Note

Privacy-first HTTP Traffic Analytics are available on the Pro, Business, and Enterprise plans.

Analytics are based on Cloudflare's edge logs, with no need for third party scripts or trackers. The HTTP Traffic tab features the following metrics:

* **Requests** \- An HTTP request. A typical page view requires many requests. If you are using Cloudflare Workers, subrequests data will not be visible in zone HTTP Traffic Analytics. Instead, you can find subrequests analytics under the **Workers & Pages** tab in the **Overview** section. Refer to [Worker Analytics](https://developers.cloudflare.com/analytics/account-and-zone-analytics/analytics-with-workers/#worker-analytics) for more information.
* **Data Transfer** \- Total HTTP data transferred in responses.
* **Page views** \- A page view is defined as a successful HTTP response with a content-type of HTML.
* **Visits** \- A visit is defined as a [page view](#page-views) that originated from a different website, or direct link. Cloudflare checks where the HTTP referer does not match the hostname. One visit can consist of multiple page views.
* **API Requests** \- An HTTP request for API data.
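
The visit heuristic described above (referer host differs from the request hostname, or there is no referer at all) can be sketched as follows; the record shape is illustrative:

```javascript
// A page view counts as a visit when it arrived from a different
// website or from a direct link (no referer).
function isVisit(pageView) {
  if (!pageView.referer) return true; // direct link
  const refererHost = new URL(pageView.referer).hostname;
  return refererHost !== pageView.hostname;
}

// Internal navigation: referer host matches, not a new visit.
console.log(isVisit({ hostname: "example.com", referer: "https://example.com/a" })); // false
// Arrival from another site: counts as a visit.
console.log(isVisit({ hostname: "example.com", referer: "https://search.example.net/" })); // true
```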

To receive more detailed metrics, select **Add filter**. You can also filter each metric by:

* Cache status
* Data center
* Source ASN
* Country
* Source device type
* Source IP
* Referer host
* Host
* HTTP method
* HTTP version
* Path
* Query string
* Content type
* Edge status code
* Origin status code
* Security Action
* Security Source
* Source browser
* Source operating system
* Source user agent
* X-Requested-With header

In addition, the following filters are available to Enterprise [Bot Management](https://developers.cloudflare.com/bots/get-started/bot-management/) customers only.

* Source JA4 fingerprint
* Source JA3 fingerprint

To change the time period, use the dropdown menu on the right-hand side above the graph. You can also drag to zoom on the graph.

### Security

For this tab, the number and type of charts may vary based on existing data and customer plan. Most of the metrics in this tab come from the Cloudflare Firewall app. The panels available include:

* **Threats** \- Displays a data summary and an area chart showing threats against the site.
* **Threats by Country** \- Is an interactive map highlighting the countries where threats originated. It also includes data tables with statistics on **Top Threat Countries / Regions** and **Top Crawlers / Bots.**
* **Rate Limiting** (add-on service) - Features a line chart highlighting matching and blocked requests, based on rate limits. To learn more, consult [Rate Limiting Analytics](https://developers.cloudflare.com/waf/reference/legacy/old-rate-limiting/#analytics).
* **Overview** \- Displays a set of pie charts for: **Total Threats Stopped**, **Traffic Served Over SSL**, and **Types of Threats Mitigated**. If available, the expandable **Details** link displays a table with numerical data.

### Performance

The metrics aggregated under this tab span multiple Cloudflare services. The panels available include:

* **Origin Performance (Argo)** (add-on service) - Displays metrics related to response time between the Cloudflare edge network and origin servers for the last 48 hours. For additional details, refer to [Argo Analytics](https://developers.cloudflare.com/argo-smart-routing/analytics/).
* **Overview** \- Displays a set of pie charts for: **Client HTTP Version Used**, **Bandwidth Saved**, and **Content Type Breakdown**. If available, the expandable **Details** link displays a table with numerical data.

### Workers

This panel features metrics for Cloudflare Workers. To learn more, read [Cloudflare analytics with Workers](https://developers.cloudflare.com/analytics/account-and-zone-analytics/analytics-with-workers/).

### Logs

The Logs tab is not a metrics feature. Instead, customers on the Enterprise plan can enable the [Cloudflare Logs Logpush](https://developers.cloudflare.com/logs/logpush/) service. You can use Logpush to download and analyze data using any analytics tool of your choice.


---

---
title: About Cloudflare Analytics
description: In an effort to make analytics an ubiquitous component of all Cloudflare's products, Cloudflare has implemented, and continues to evolve, several ways in which customers can access and gain insights from Internet properties on Cloudflare.
image: https://developers.cloudflare.com/core-services-preview.png
---


# About Cloudflare Analytics

In an effort to make analytics a ubiquitous component of all Cloudflare's products, Cloudflare has implemented, and continues to evolve, several ways in which customers can access and gain insights from Internet properties on Cloudflare.

You can access root-level analytics that give you an overview of metadata related to your Cloudflare account, analytics related to specific properties and products, and the GraphQL API that gives you more control over how you visualize the analytics and log information available on the Cloudflare dashboard.

Refer to [Types of analytics](https://developers.cloudflare.com/analytics/types-of-analytics/) for more information regarding this subject.

## How Cloudflare captures and processes analytics data

The underlying datasets that Cloudflare Analytics captures and processes share the following characteristics:

* All metrics reflect traffic proxied through the Cloudflare network (also known as orange-clouded), as configured via DNS records in the Cloudflare DNS app. Note that for a [CNAME setup](https://developers.cloudflare.com/dns/zone-setups/partial-setup/), Cloudflare is unable to offer DNS metrics.
* Cloudflare does not count traffic for unproxied DNS records. However, if your site is not proxied through Cloudflare but Cloudflare is your authoritative DNS server, then we are able to collect DNS metrics.
* Cloudflare can only proxy information for traffic targeting [specific ports](https://developers.cloudflare.com/fundamentals/reference/network-ports/).
* In determining the originating country, Cloudflare uses the IP address associated with each request. Learn about [Configuring Cloudflare IP Geolocation](https://developers.cloudflare.com/network/ip-geolocation/).

## Apparent data discrepancies

It is possible that your Cloudflare metrics do not fully align with data for the same site as reported by other analytics sources, such as Google Analytics and web server logs.

Once Cloudflare identifies a unique IP address for a request, we count that request as a visit. Therefore, the number of visitors Cloudflare Analytics shows is likely higher than what other analytics services report.

For example, Google Analytics and other web-based analytics programs use JavaScript on the web browser to track visitors. As a result, Google Analytics does not record threats, bots, and automated crawlers because those requests typically do not trigger JavaScript. Also, these services do not track visitors who disable JavaScript on their browser or who leave a page before it fully loads.

Finally, it is likely that unique visitor data from the Cloudflare Analytics app is greater than your search analytics unique pageviews. This is because pageviews reflect when someone visits a page via a web browser and loads the entire page. However, when another site or service like a bot, plugin, or API is consuming partial content from your site (but not loading a full page), this counts as a unique visitor in Cloudflare and not as a pageview.

## About missing metrics

You may not be seeing metrics on Cloudflare Analytics for the following reasons:

* You only recently signed up for Cloudflare. Metrics are delayed 24 hours for domains on a free Cloudflare plan.
* If you signed up directly with Cloudflare, your nameservers might not be pointing to Cloudflare at your registrar just yet. Registrars can take 24-72 hours to update their nameservers. Metrics will not start gathering until we detect the nameservers pointing to Cloudflare.
* If you signed up through a Cloudflare [hosting partner option ↗](https://www.cloudflare.com/partners/), something might not be configured correctly. Contact the hosting partner for support.
* Some browser extensions designed to block ads may prevent analytics from loading. To address this issue, disable the ad block extension or allow `cloudflare.com` on it.

Note

Activation through a hosting partner works via a [CNAME setup](https://developers.cloudflare.com/dns/zone-setups/partial-setup/) on the `www` record. If most of your traffic actually goes to `domain.com`, [forward your traffic](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/) from `domain.com` to `www.domain.com`.

## Why does the analytics data on the **Overview** page not match what I have under **View More Analytics**?

The Overview page shows analytics based on all traffic, including subrequests. However, when you navigate to **Analytics & Logs** \> **HTTP Traffic**, the metrics (for example, `Requests`, `Data`, `Visits`) are filtered to show only end user traffic (that is, `requestSource = eyeball`).

As a result, subrequests are excluded from the **HTTP Traffic** view, which can lead to discrepancies between the numbers shown in **Overview** and those displayed in other analytics sections of the dashboard.
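For comparison, if you query the GraphQL Analytics API yourself, you can reproduce the **HTTP Traffic** behavior by applying the same end-user filter. The snippet below is a sketch in Python: the dataset and filter field names (`httpRequestsAdaptiveGroups`, `requestSource`) appear on this page, but `build_eyeball_query` and the exact query shape are illustrative, and `your-zone-tag` is a placeholder.

```python
# Sketch: build a GraphQL Analytics API request body that applies the same
# end-user filter as the HTTP Traffic view. The dataset and filter names
# (httpRequestsAdaptiveGroups, requestSource) come from this page; the
# overall query shape is an assumption, not the definitive schema.

def build_eyeball_query(zone_tag: str, date_geq: str, date_lt: str) -> dict:
    query = f"""
    {{
      viewer {{
        zones(filter: {{ zoneTag: "{zone_tag}" }}) {{
          httpRequestsAdaptiveGroups(
            limit: 100
            filter: {{
              datetime_geq: "{date_geq}"
              datetime_lt: "{date_lt}"
              requestSource: "eyeball"
            }}
          ) {{
            count
          }}
        }}
      }}
    }}
    """
    return {"query": query}

payload = build_eyeball_query(
    "your-zone-tag", "2024-09-01T00:00:00Z", "2024-09-02T00:00:00Z"
)
```

Dropping the `requestSource` filter from a query like this would include subrequests, which is why its totals can exceed those shown under **HTTP Traffic**.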


---

---
title: GraphQL API inconsistent results
description: If you run the same GraphQL Analytics API query multiple times and receive slightly different results, this is caused by Adaptive Bit Rate (ABR) sampling. ABR dynamically adjusts data resolution based on query complexity and timing, which can result in slight variations between query runs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# GraphQL API inconsistent results

If you run the same GraphQL Analytics API query multiple times and receive slightly different results, this is caused by Adaptive Bit Rate (ABR) sampling. ABR dynamically adjusts data resolution based on query complexity and timing, which can result in slight variations between query runs.

To reduce variation, query shorter timeframes (daily or weekly instead of monthly), use aggregated datasets (nodes with the `Groups` suffix), and request confidence intervals to understand data quality. For more information, refer to [Sampling](https://developers.cloudflare.com/analytics/graphql-api/sampling/).

## What is sampling?

Cloudflare's data pipeline handles over 700 million events per second across the global network. Processing all of this data in real time for every query would be prohibitively expensive and time-consuming.

Sampling analyzes a subset of data rather than every individual data point. Cloudflare uses Adaptive Bit Rate (ABR) sampling to ensure queries complete quickly, even when working with large datasets.

ABR stores data at multiple resolutions:

* **100%**: full data (used for smaller datasets)
* **10%**: a 10% sample (medium resolution)
* **1%**: a 1% sample (lower resolution)

When you run a query, ABR dynamically selects the best resolution based on query complexity, time range requested, number of rows to retrieve, and current system load.

## Why do results vary between query runs?

Results can vary for several reasons:

* **Dynamic resolution selection** — ABR may choose different sampling resolutions on different query runs based on system conditions.
* **Long time ranges** — Querying 30 days at once is an expensive operation that triggers more aggressive sampling.
* **High query complexity** — Complex queries with many filters or aggregations may be sampled differently.
* **System load** — During high-traffic periods, the system may apply more aggressive sampling to ensure fair resource distribution.

For example, running the same 30-day query twice might return 3,500 objects one time and 3,600 objects another time. This indicates different sampling resolutions were used.

## Can I trust sampled data?

Yes. Sampled data is highly reliable and provides insights as dependable as those derived from full datasets. Cloudflare's sampling techniques capture the essential characteristics of the entire dataset.

Aggregated metrics (totals, averages, percentiles) are extrapolated based on the sample size, so reported metrics accurately represent the entire dataset. Results based on thousands of rows are highly likely to be representative.

Note

Sampling may not capture extremely rare events with very low occurrence rates.

## How can I reduce variation in my query results?

### Query shorter time ranges

Instead of querying an entire month at once, break queries into smaller intervals (daily or weekly).

Before (more variable):

```
datetime_geq: "2024-09-01T00:00:00Z"
datetime_lt: "2024-10-01T00:00:00Z"
```

After (more consistent):

```
datetime_geq: "2024-09-01T00:00:00Z"
datetime_lt: "2024-09-02T00:00:00Z"
```

Then aggregate the results client-side. Smaller time windows are less likely to trigger aggressive sampling thresholds.
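The splitting-and-summing step can be sketched in Python; `run_query` below is a hypothetical stand-in for whatever function executes your GraphQL query for one window and returns a count.

```python
from datetime import datetime, timedelta

# Sketch: split one large time range into daily windows, query each
# window separately, then aggregate the per-day results client-side.
# run_query is a hypothetical stand-in for your GraphQL API call.

def daily_windows(start: str, end: str) -> list:
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    t0, t1 = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    windows = []
    while t0 < t1:
        nxt = min(t0 + timedelta(days=1), t1)
        windows.append((t0.strftime(fmt), nxt.strftime(fmt)))
        t0 = nxt
    return windows

def total_requests(start: str, end: str, run_query) -> int:
    # Each small query is less likely to trigger aggressive sampling.
    return sum(run_query(geq, lt) for geq, lt in daily_windows(start, end))

# Example with a fake query function returning a constant per day:
count = total_requests(
    "2024-09-01T00:00:00Z", "2024-10-01T00:00:00Z",
    run_query=lambda geq, lt: 1000,
)
# September has 30 days, so count == 30000.
```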

### Use aggregated datasets

Prefer data nodes with the `Groups` suffix over raw adaptive datasets. Aggregated data is pre-processed and less subject to sampling variability.

For example, use `httpRequestsAdaptiveGroups` instead of raw event data.

### Add explicit sorting

Always include `orderBy` in your queries to ensure consistent result ordering:

```
orderBy: [datetime_ASC]
```

### Use confidence intervals

For adaptive datasets, request [confidence intervals](https://developers.cloudflare.com/analytics/graphql-api/features/confidence-intervals/) to understand data quality and verify sampling:

```
confidence(level: 0.95) {
  count {
    estimate
    lower
    upper
    sampleSize
  }
}
```

A higher `sampleSize` indicates more reliable results.
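One way to act on a confidence response programmatically is to compute the interval's half-width relative to the estimate: a small relative margin suggests the sampled result is precise enough for your use case. The response values and the 5% threshold below are made up for illustration.

```python
# Sketch: judge sampling quality from a confidence-interval response.
# The response values below are made up for illustration.

def relative_margin(ci: dict) -> float:
    """Half-width of the confidence interval as a fraction of the estimate."""
    return (ci["upper"] - ci["lower"]) / (2 * ci["estimate"])

ci = {"estimate": 10000, "lower": 9800, "upper": 10200, "sampleSize": 5000}
margin = relative_margin(ci)  # (10200 - 9800) / 20000 = 0.02, i.e. about 2%
assert margin < 0.05  # treat under 5% as acceptable for this use case
```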

## Quick reference

| Issue                                | Mitigation                                                          |
| ------------------------------------ | ------------------------------------------------------------------- |
| Results vary between runs            | Query shorter time ranges (daily or weekly instead of monthly)      |
| Aggressive sampling on large queries | Break queries into smaller time intervals and aggregate client-side |
| Need consistent ordering             | Add an `orderBy` clause to all queries                              |
| Need to verify data quality          | Request confidence intervals to check sample size and accuracy      |
| Using raw adaptive data              | Switch to aggregated datasets (nodes with the `Groups` suffix)      |

## Related resources

* [Understanding Sampling in Cloudflare Analytics](https://developers.cloudflare.com/analytics/sampling/)
* [GraphQL API Sampling](https://developers.cloudflare.com/analytics/graphql-api/sampling/)
* [Confidence Intervals](https://developers.cloudflare.com/analytics/graphql-api/features/confidence-intervals/)
* [GraphQL API Limits](https://developers.cloudflare.com/analytics/graphql-api/limits/)
* [Adaptive Bit Rate blog post ↗](https://blog.cloudflare.com/explaining-cloudflares-abr-analytics/)


---

---
title: Other FAQs
description: There is a number of different types of traffic which may originate from CLOUDFLARENET ASN 13335; just because there is a lot of traffic from this AS, it likely does not indicate a DDoS attack.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Other FAQs

## Why do I see a large amount of traffic from CLOUDFLARENET ASN 13335 in Analytics? Does this indicate a DDoS attack?

A number of different types of traffic may originate from **CLOUDFLARENET ASN 13335**; a large volume of traffic from this AS does not, by itself, indicate a DDoS attack.

Some sources of traffic from ASN 13335 include:

* [Workers subrequests](https://developers.cloudflare.com/workers/runtime-apis/fetch/)
* [WARP](https://developers.cloudflare.com/warp-client/known-issues-and-faq/#does-warp-reveal-my-ip-address-to-websites-i-visit)
* [iCloud Private Relay ↗](https://blog.cloudflare.com/icloud-private-relay/) (For reference, iCloud Private Relay’s egress IP addresses are available in this [CSV form ↗](https://mask-api.icloud.com/egress-ip-ranges.csv))
* [Cloudflare Privacy Proxy ↗](https://blog.cloudflare.com/building-privacy-into-internet-standards-and-how-to-make-your-app-more-private-today/)
* Other Cloudflare features like [Health Checks](https://developers.cloudflare.com/health-checks/)


---

---
title: Workers Analytics Engine FAQs
description: Below you will find answers to our most commonly asked questions.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Workers Analytics Engine FAQs

Below you will find answers to our most commonly asked questions.

## Sampling

### Could I just use many unique index values to get better unique counts?

No. Adding a large number of index values comes with drawbacks: the tradeoff is that reading across many indices is slow.

In practice, due to how ABR works, reading from many indices in one query will result in low-resolution data, possibly unusably low.

On the other hand, if you pick a good index that aligns with how you read the data, your queries will run faster and you will get higher resolution results.

### What if I need to index on multiple values?

It is possible to concatenate multiple values in your index field. So if you want to index on user ID and hostname, you can write, for example `"$userID:$hostname"` into your index field.
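The concatenation itself is simple, but it helps to use a separator that cannot appear in the first field so the value can be split back apart at read time. A minimal sketch, with hypothetical `user_id` and `hostname` values:

```python
# Sketch: compose a compound index value from two fields, and split it
# back apart at read time. Field names and values are illustrative.
# The separator must not occur in the first field (user IDs here).

SEP = ":"

def make_index(user_id: str, hostname: str) -> str:
    return f"{user_id}{SEP}{hostname}"

def split_index(index: str) -> tuple:
    # maxsplit=1 tolerates a separator appearing later in the hostname part
    user_id, hostname = index.split(SEP, 1)
    return user_id, hostname

idx = make_index("user42", "example.com")  # "user42:example.com"
assert split_index(idx) == ("user42", "example.com")
```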

Note that, based on your query pattern, it may make sense to write the same dataset with different indices. It is a common misconception that one should avoid "double-writing" data.

Thanks to sampling, the cost of writing data multiple times can be relatively low. However, reading data inefficiently can result in significant expenses or low-quality results due to sampling.

### How do I know if my data is sampled?

You can check the `_sample_interval` field. Note, however, that this field alone does not tell you whether your results are accurate.

You can tell when data is sampled at read time because sample intervals will be multiples of powers of 10, for example `20` or `700`. There is no hard and fast rule for when sampling starts at read time, but in practice reading longer periods (or more index values) will result in a higher sample interval.
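Extrapolation from sampled rows works by weighting each row by its `_sample_interval`: a row with `_sample_interval = 20` stands in for roughly 20 underlying events. A minimal sketch of that arithmetic (the rows below are made up):

```python
# Sketch: estimate the true event count from sampled rows by summing
# each row's _sample_interval. Row data is made up for illustration.

rows = [
    {"_sample_interval": 1,  "blob1": "GET"},   # unsampled row = 1 event
    {"_sample_interval": 20, "blob1": "GET"},   # stands in for ~20 events
    {"_sample_interval": 20, "blob1": "POST"},
]

estimated_events = sum(r["_sample_interval"] for r in rows)
# 1 + 20 + 20 = 41 estimated underlying events from 3 stored rows
```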

### Why is data missing?

Sampling is based largely on the choice of index, as well as other factors like the time range queried and the number of indices read. If you are reading from a larger index over a longer time period and have filtered to a relatively small subgroup within that index, that subgroup may be missing from the results due to sampling.

If you need to read accurate results for that subgroup, we suggest that you add that field to your index (refer to [What if I need to index on multiple values](https://developers.cloudflare.com/analytics/faq/wae-faqs/#what-if-i-need-to-index-on-multiple-values)).

### Can I trust sampled data? Are my results accurate?

Sampled data is highly reliable, particularly when a carefully selected index is used.

Admittedly, it is difficult at present to prove that the results returned by ABR queries are within a certain error bound. As a rule of thumb, check the number of rows read by using `count()`; think of this like the count of pixels in your image. The more rows read, the more accurate the results. (The flipside is that the `_sample_interval` field does not tell you very much about whether your results are accurate.) If you are extrapolating from only one or two rows, it is unlikely you have a representative result; if you are extrapolating from thousands of rows, it is very likely that your results are quite accurate.

In the near future, we plan to expose the [margin of error ↗](https://en.wikipedia.org/wiki/Margin%5Fof%5Ferror) along with query results so that you can see precisely how accurate your results are.

### How are bursts handled?

Equitable sampling exists both to normalize differences between groups, and also to handle large spikes of traffic to a given index. Equalization happens every few seconds; if you are writing many events very close in time, then it is expected that they will be sampled at write time. The sample interval for a given index will vary from moment to moment, based on the current rate of data being written.

### How much traffic will trigger sampling?

There is no fixed rule determining when sampling will be triggered.

We have observed that for workloads like our global CDN, which distribute load around our network, each index value needs about 100 data points per second before sampling is noticeable at all.

Depending on your workload and how you use Workers Analytics Engine, sampling may start at a higher or lower threshold than this. For example, if you are writing out many data points from a single worker execution, it is more likely that your data will be sampled.

