---
title: Cloudflare Logs
description: These logs are helpful for debugging, identifying configuration adjustments, and creating analytics, especially when combined with logs from other sources, such as your application server. For information about the types of data Cloudflare collects, refer to Cloudflare's Types of analytics.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Cloudflare Logs

Detailed logs that contain metadata generated by our products.

These logs are helpful for debugging, identifying configuration adjustments, and creating analytics, especially when combined with logs from other sources, such as your application server. For information about the types of data Cloudflare collects, refer to [Cloudflare's Types of analytics](https://developers.cloudflare.com/analytics/types-of-analytics/).

---

## Features

### Logpush

Push your request or event logs to your cloud service provider using Logpush, which can be configured via the Cloudflare dashboard or API.

[ Use Logpush ](https://developers.cloudflare.com/logs/logpush/) 

### Instant Logs

View HTTP request logs instantly in the Cloudflare dashboard or the CLI.

[ Use Instant Logs ](https://developers.cloudflare.com/logs/instant-logs/) 

### Logpull (legacy)

Consume request logs over HTTP using Cloudflare Logpull, a REST API designed for log retrieval.

[ Use Logpull (legacy) ](https://developers.cloudflare.com/logs/logpull/) 

---

## Related products

**[Log Explorer](https://developers.cloudflare.com/log-explorer/)** 

Store and explore your Cloudflare logs directly within the Cloudflare dashboard or API.

**[Audit Logs](https://developers.cloudflare.com/fundamentals/account/account-security/review-audit-logs/)** 

Summarize the history of changes made within your Cloudflare account.

**[Web Analytics](https://developers.cloudflare.com/web-analytics/)** 

Provides privacy-first analytics without changing your DNS or using Cloudflare's proxy.

---

## More resources

[Plans](https://www.cloudflare.com/products/cloudflare-logs/) 

Compare available Cloudflare plans

[Pricing](https://www.cloudflare.com/plans/#overview) 

Explore pricing options for Logs


---

---
title: Logpush
description: Logpush delivers logs in batches as quickly as possible, with no minimum batch size, potentially delivering files more than once per minute. This capability enables Cloudflare to provide information almost in real time, in smaller file sizes.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Logpush

Logpush delivers logs in batches as quickly as possible, with no minimum batch size, potentially delivering files more than once per minute. This capability enables Cloudflare to provide information almost in real time, in smaller file sizes.

The push frequency is automatic and cannot be adjusted—Cloudflare pushes logs in batches as soon as possible. However, users can configure the batch size [using the API](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#max-upload-parameters) for improved control in case the log destination has specific requirements.
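For example, here is a minimal sketch of tuning the batch size on an existing job, assuming the `max_upload_bytes` and `max_upload_interval_seconds` parameters described on the linked max-upload-parameters page (the values shown are illustrative, not recommendations):

```python
import requests

# Placeholders to fill in for your account.
ZONE_ID = "<ZONE_ID>"
JOB_ID = 123456  # numeric Logpush job ID
API_TOKEN = "<API_TOKEN>"

# Cap batches at ~100 MB and flush at least once per minute (illustrative values).
resp = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush/jobs/{JOB_ID}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "max_upload_bytes": 100 * 1024 * 1024,
        "max_upload_interval_seconds": 60,
    },
)
resp.raise_for_status()
print(resp.json()["result"])
```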

Important limitation

Logpush only pushes logs once as they become available and cannot backfill historical data. If your job is disabled or fails, logs generated during that period are permanently lost. This is why configuring [health notifications](https://developers.cloudflare.com/logs/logpush/logpush-health/) is essential for early detection of issues.

Logpush does not offer storage or search functionality for logs; its primary aim is to send logs to your destination as soon as they arrive.

Cloudflare Logpush supports pushing logs to storage services, SIEMs, and log management providers via the Cloudflare dashboard or API.

Cloudflare aims to support additional services in the future. Interested in a particular service? Take this [survey ↗](https://goo.gl/forms/0KpMfae63WMPjBmD2).

## Estimating log volume

Before setting up a Logpush job, you can estimate the total volume of data that will be pushed to your destination. The volume depends on your traffic, selected fields, and compression.

### Quick sizing for HTTP Requests

A quick sizing estimate for an [HTTP Requests](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http%5Frequests/) dataset:

* \~100–250 bytes per request (compressed, depending on fields selected)
* 1M requests/day → \~100–250 MB/day
* 30M requests/month → \~3–7.5 GB/month

### Daily storage by traffic volume

* 100k req/day → \~25–50 MB/day
* 1M req/day → \~250–500 MB/day
* 10M req/day → \~2.5–5 GB/day
* 100M req/day → \~25–50 GB/day

These ranges reflect field selection, compression, and whether you include extra fields or [custom fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/). Other datasets (Firewall, Workers, Load Balancing) add volume separately.

For precise estimates, you can [sample your logs via Logpull](https://developers.cloudflare.com/logs/logpull/additional-details/#estimating-daily-data-volume) using a 1-hour sample.
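As a rough sanity check, the quick-sizing arithmetic above can be scripted; the bytes-per-request range below is the estimate quoted on this page, not a measured value:

```python
# Estimate Logpush volume for the HTTP Requests dataset using the
# ~100-250 bytes per request (compressed) range quoted above.
LOW_BYTES, HIGH_BYTES = 100, 250

def volume_mb(num_requests: int) -> tuple[float, float]:
    """Return the (low, high) estimated volume in MB for a number of requests."""
    return (num_requests * LOW_BYTES / 1e6, num_requests * HIGH_BYTES / 1e6)

low, high = volume_mb(1_000_000)
print(f"1M requests -> ~{low:.0f}-{high:.0f} MB")  # ~100-250 MB/day
low, high = volume_mb(30_000_000)
print(f"30M requests -> ~{low / 1000:.1f}-{high / 1000:.1f} GB")  # ~3.0-7.5 GB/month
```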

## Limits

There is currently a limit of **4 Logpush jobs per zone**. Trying to create a job once the limit has been reached results in the error message: `creating a new job is not allowed: exceeded max jobs allowed`.

## Availability

|              | Free | Pro | Business | Enterprise |
| ------------ | ---- | --- | -------- | ---------- |
| Availability | No   | No  | No       | Yes        |

Note

Users without an Enterprise plan can still access [Workers Trace Events Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/) by subscribing to the [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing/) plan.


---

---
title: Logpush alerts and analytics
description: Logpush jobs may fail for a few reasons, for instance because the destination is unreachable, because of a change in permissions at the customers’ origin, or because a Logpush job did not complete at least one successful push in the last 24 hours.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Logpush alerts and analytics

Logpush jobs may fail for a few reasons: for instance, because the destination is unreachable, because permissions changed at the customer's origin, or because the job did not complete at least one successful push in the last 24 hours.

With alerting and analytics, you can monitor the health of your Logpush jobs and find out when a job fails. You can receive alerts, and you can also retrieve analytics about your Logpush jobs' health via GraphQL.

Alerts are sent via the [Cloudflare Notifications](https://developers.cloudflare.com/notifications/) system. They can be sent via email or webhook. When subscribed to the job disablement notification, you will receive at most one alert per job per 24 hours. The notification email contains the job ID and destination configuration.

Failing Logpush Job Disabled

**Who is it for?**

Enterprise customers who use [Logpush](https://developers.cloudflare.com/logs/) and want to monitor their job health.

**Other options / filters**

* Notification Name: A custom name for the notification.
* Description (optional): A custom description for the notification.
* Notification Email (can be multiple emails): The email address of the recipient for the notification.

**Included with**

Enterprise plans.

**What should you do if you receive one?**

The notification email contains the destination name for the failing Logpush job, which you can use to identify the affected zone. A job can fail for multiple reasons; start by verifying that the destination endpoint is healthy and that the necessary credentials are still valid. You can also check that the destination has allowlisted [Cloudflare IPs](https://www.cloudflare.com/ips/).

## Enable alerts

You can add an alert for **Failing Logpush Job Disabled** via the **Notifications** section of the dashboard. Note that alerts can be configured at the account level and apply to all jobs within an account.

1. In the Cloudflare dashboard, go to the **Notifications** page.  
[ Go to **Notifications** ](https://dash.cloudflare.com/?to=/:account/notifications)
2. Next, select **Add**.
3. Select the alert **Failing Logpush Job Disabled**.
4. Configure the alert: choose a name, add an optional description, select the notification services (such as webhooks), and enter the email address where you want to be notified.
5. Select **Save**.

When you complete these steps, you will receive an email alert if your Logpush job is disabled.

## Enable Logpush health analytics

Customers can query Logpush job health metrics via the [GraphQL API](https://developers.cloudflare.com/analytics/graphql-api/). The name of the dataset is `logpushHealthAdaptiveGroups` and the schema can be explored using the [GraphQL API](https://developers.cloudflare.com/analytics/graphql-api/getting-started/explore-graphql-schema/).

Here is a query to get the count of how many times jobs pushing to S3 failed.

```graphql
query {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      logpushHealthAdaptiveGroups(
        filter: {
          datetime_gt: "2022-08-15T00:00:00Z"
          destinationType: "s3"
          status_neq: 200
        }
        limit: 10
      ) {
        count
        dimensions {
          jobId
          status
          destinationType
        }
      }
    }
  }
}
```
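If you prefer to run this programmatically, here is a minimal sketch that posts the query above to the GraphQL endpoint with Python; the token and zone tag are placeholders you must supply:

```python
import requests

API_TOKEN = "<API_TOKEN>"  # token with analytics read access
ZONE_TAG = "<ZONE_ID>"     # the zone tag to filter on

# The query body shown above, wrapped with a variable declaration.
QUERY = """
query ($zoneTag: string) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      logpushHealthAdaptiveGroups(
        filter: { datetime_gt: "2022-08-15T00:00:00Z", destinationType: "s3", status_neq: 200 }
        limit: 10
      ) {
        count
        dimensions { jobId status destinationType }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.cloudflare.com/client/v4/graphql",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"query": QUERY, "variables": {"zoneTag": ZONE_TAG}},
)
resp.raise_for_status()
print(resp.json()["data"]["viewer"]["zones"])
```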

Note

If you get a `1105` status code error when checking your Logpush job health, it indicates a DNS resolution issue. This means Cloudflare is unable to resolve the target hostname for the Logpush job. To resolve this, check with your DNS service provider and confirm the hostname can be publicly resolved.


---

---
title: Manage Logpush with cURL
description: You can manage your Cloudflare Logpush service from the command line using cURL.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Manage Logpush with cURL

You can manage your Cloudflare Logpush service from the command line using cURL.

Before getting started, review the following documentation:

* [API configuration](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/)

Note

The examples below are for zone-scoped datasets. Account-scoped datasets should use `/accounts/{account_id}` instead of `/zones/{zone_id}`.

## Step 1 - Get ownership challenge

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Get ownership challenge

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/ownership" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2"
  }'
```

### Parameters

* **destination\_conf** \- Refer to [Destination](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#destination) for details.

### Response

A challenge file will be written to the destination, and the filename will be in the response (the filename may be expressed as a path if appropriate for your destination). For example:

```json
{
  "success": true,
  "errors": [],
  "messages": [],
  "result": {
    "filename": "burritobot/logs/ownership-challenge.txt",
    "valid": true,
    "message": ""
  }
}
```

You will need to provide the token contained in this file when creating a job in the next step.
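For an S3 destination, here is a small sketch of reading that token, using the bucket and path from the example response above (boto3 credentials are assumed to come from your environment):

```python
import boto3

# Download the challenge file that Cloudflare wrote to the bucket and read the
# token; bucket/key mirror the example "filename" in the response above.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="burritobot", Key="logs/ownership-challenge.txt")
ownership_challenge = obj["Body"].read().decode().strip()

print(ownership_challenge)  # use as "ownership_challenge" when creating the job
```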

Note

When using Sumo Logic, you may find it helpful to have [Live Tail ↗](https://help.sumologic.com/05Search/Live-Tail/About-Live-Tail) open to see the challenge file as soon as it is uploaded.

## Step 2 - Create a job

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "dataset": "http_requests",
    "output_options": {
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    },
    "ownership_challenge": "<OWNERSHIP_CHALLENGE_TOKEN>"
  }'
```

### Parameters

* **name** (optional) - We suggest using your domain name as the job name; the name cannot be changed after the job is created.
* **destination\_conf** \- Refer to [Destination](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#destination) for details.
* **dataset** \- The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets; this parameter cannot be changed after the job is created.
* **output\_options** (optional) - Refer to [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/).  
   * Typically includes the desired fields and timestamp format.  
   * Set the timestamp format to `RFC 3339` (`"timestamp_format": "rfc3339"`) for:  
         * Google BigQuery usage.  
         * Automated timestamp parsing within Sumo Logic; refer to [timestamps from Sumo Logic ↗](https://help.sumologic.com/03Send-Data/Sources/04Reference-Information-for-Sources/Timestamps%2C-Time-Zones%2C-Time-Ranges%2C-and-Date-Formats) for details.
* **ownership\_challenge** \- Challenge token required to prove destination ownership.
* **kind** (optional) - Used to differentiate between Logpush and Edge Log Delivery jobs. Refer to [Kind](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#kind) for details.
* **filter** (optional) - Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for details.

### Response

In the response, you get a newly-created job ID. For example:

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

## Step 3 - Enable (update) a job

Start by retrieving information about a specific job, using a job ID:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Get Logpush job details

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

Note that by default a job is not enabled (`"enabled": false`).

If you do not remember your job ID, you can retrieve it using your zone ID:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

List Logpush jobs

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

Next, to enable the job, send an update request:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Update Logpush job

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "enabled": true
  }'
```

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

Once the job is enabled, you will start receiving logs within a few minutes and then in batches as soon as possible until you disable the job. For zones with very high request volume, it may take several hours before you start receiving logs for the first time.

In addition to modifying `enabled`, you can also update the value for **output\_options**. To modify **destination\_conf**, you will need to request an ownership challenge and provide the associated token with your update request. You can also delete your current job and create a new one.
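Here is a sketch of that destination-change flow in Python (the `read_challenge_token` helper is hypothetical; fetch the token however your destination stores it, as in Step 1):

```python
import requests

API = "https://api.cloudflare.com/client/v4"
ZONE_ID, JOB_ID, TOKEN = "<ZONE_ID>", 123456, "<API_TOKEN>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
NEW_DEST = "s3://<NEW_BUCKET_PATH>?region=us-west-2"

def read_challenge_token() -> str:
    # Hypothetical helper: read the challenge file Cloudflare just wrote to
    # NEW_DEST (for example, via boto3 for S3) and return its contents.
    raise NotImplementedError

# 1. Ask Cloudflare to write a fresh ownership challenge to the new destination.
requests.post(
    f"{API}/zones/{ZONE_ID}/logpush/ownership",
    headers=HEADERS,
    json={"destination_conf": NEW_DEST},
).raise_for_status()

# 2. Update the job with the new destination and the token from the challenge file.
requests.put(
    f"{API}/zones/{ZONE_ID}/logpush/jobs/{JOB_ID}",
    headers=HEADERS,
    json={"destination_conf": NEW_DEST, "ownership_challenge": read_challenge_token()},
).raise_for_status()
```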

Once a job has been enabled and has started executing, the **last\_complete** field will show the time when the last batch of logs was successfully sent to the destination:

### Request to get job by ID and see **last\_complete** info

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Get Logpush job details

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "last_complete": "2018-08-09T21:26:00Z",
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

## Optional - Delete a job

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Delete Logpush job

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request DELETE \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

Be careful when deleting a job because this action cannot be reversed.

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": {},
  "success": true
}
```

## Optional - Retrieve your job

Retrieve a specific job, using the job ID:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Get Logpush job details

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": [
    {
      "id": <JOB_ID>,
      "dataset": "http_requests",
      "enabled": true,
      "name": "<DOMAIN_NAME>",
      "output_options": {
        "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
        "timestamp_format": "rfc3339"
      },
      "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
      "last_complete": null,
      "last_error": null,
      "error_message": null
    }
  ],
  "success": true
}
```

Retrieve all jobs for all datasets:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

List Logpush jobs

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": [
    {
      "id": <JOB_ID>,
      "dataset": "spectrum_events",
      "enabled": true,
      "name": "<DOMAIN_NAME>",
      "output_options": {
        "field_names": ["Application", "ClientAsn", "ClientIP", "ColoCode", "Event", "OriginIP", "Status"]
      },
      "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
      "last_complete": "2019-10-01T00:25:00Z",
      "last_error": null,
      "error_message": null
    },
    {
      "id": <JOB_ID>,
      "dataset": "http_requests",
      "enabled": false,
      "name": "<DOMAIN_NAME>",
      "output_options": {
        "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
        "timestamp_format": "rfc3339"
      },
      "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
      "last_complete": "2019-09-24T21:15:00Z",
      "last_error": null,
      "error_message": null
    }
  ]
}
```

## Optional - Update **output\_options**

If you want to add (or remove) fields, change the timestamp format, or enable protection against the `Log4j - CVE-2021-44228` vulnerability, first retrieve the current **output\_options** for your zone.

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Get Logpush job details

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "kind": "",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "last_complete": "2021-12-14T19:56:49Z",
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

Next, edit the **output\_options** as desired and send a `PUT` request. The following example enables the **CVE-2021-44228** redaction option.

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Update Logpush job

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "output_options": {
        "CVE-2021-44228": true,
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    }
  }'
```

Note that at this time, the **CVE-2021-44228** option is not available through the UI, and updating your Logpush job through the UI will remove this option.

### Response

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "kind": "",
    "enabled": true,
    "name": null,
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "last_complete": "2021-12-14T20:02:19Z",
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```


---

---
title: Manage Logpush with Python
description: You can manage your Cloudflare Logpush service using Python. In the script below you can find example requests to create a job, retrieve job details, update job settings, and delete a Logpush job.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Manage Logpush with Python

You can manage your Cloudflare Logpush service using Python. In the script below you can find example requests to create a job, retrieve job details, update job settings, and delete a Logpush job.

Note

The examples below are for zone-scoped datasets. Account-scoped datasets should use `<ACCOUNT_ID>` instead of `<ZONE_ID>`.

Python

```python
import json
import requests

url = "https://api.cloudflare.com/client/v4"

x_auth_email = "<EMAIL>"
x_auth_key = "<API_KEY>"

zone_id = "<ZONE_ID>"
destination_conf = "s3://<BUCKET_NAME>/logs?region=us-west-1"

logpush_url = url + "/zones/%s/logpush" % zone_id

headers = {
    "X-Auth-Email": x_auth_email,
    "X-Auth-Key": x_auth_key,
    "Content-Type": "application/json",
}

# Create job (destinations such as S3 may also require an "ownership_challenge";
# refer to the API configuration page)
r = requests.post(
    logpush_url + "/jobs",
    headers=headers,
    data=json.dumps({"dataset": "http_requests", "destination_conf": destination_conf}),
)
print(r.status_code, r.text)
assert r.status_code == 201
assert r.json()["result"]["enabled"] is False

# Keep the ID of the new job
job_id = r.json()["result"]["id"]

# Get job
r = requests.get(logpush_url + "/jobs/%s" % job_id, headers=headers)
print(r.status_code, r.text)
assert r.status_code == 200

# Get all jobs for a zone
r = requests.get(logpush_url + "/jobs", headers=headers)
print(r.status_code, r.text)
assert r.status_code == 200
assert len(r.json()["result"]) > 0

# Update job
r = requests.put(logpush_url + "/jobs/%s" % job_id, headers=headers, data=json.dumps({"enabled": True}))
print(r.status_code, r.text)
assert r.status_code == 200
assert r.json()["result"]["enabled"] is True

# Delete job
r = requests.delete(logpush_url + "/jobs/%s" % job_id, headers=headers)
print(r.status_code, r.text)
assert r.status_code == 200
```


---

---
title: Logpush Health Dashboards
description: Logpush Health Dashboards give you a clear view into the performance and reliability of your Logpush jobs. You can monitor the status of log delivery, diagnose issues, and understand the volume of data being sent to your configured destinations. This helps you ensure that critical log data for security, compliance, and observability is always flowing as expected.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Logpush Health Dashboards

Logpush Health Dashboards give you a clear view into the performance and reliability of your Logpush jobs. You can monitor the status of log delivery, diagnose issues, and understand the volume of data being sent to your configured destinations. This helps you ensure that critical log data for security, compliance, and observability is always flowing as expected.

## Stay informed with health notifications

Configure [Logpush health notifications](https://developers.cloudflare.com/logs/logpush/logpush-health/) to receive alerts when your Logpush job is disabled or experiencing errors. Early detection is critical since Logpush cannot backfill logs—once data is dropped, it is permanently lost.

Health notifications work alongside the Health Dashboard to provide both real-time alerts and historical analysis of your Logpush job performance.

---

## Access Health Dashboards

1. In the **Cloudflare dashboard**, go to the **Logpush** page at either the account or domain (zone) level.
2. Go to the **Health** tab.
3. Select the job you want to analyze.
4. Specify the time range you want to review.

Alternatively, from the **Jobs** tab, locate the job you want to analyze, hover over its **Job Health (24h)** column, and select **View Health**. You will be redirected to the **Health** tab, where you can select the desired time range for analysis.

### Data availability and API access

* The **Health Dashboard** displays up to **30 days** of health metrics for each Logpush job in the Cloudflare dashboard.
* The raw health metrics can be queried via the `logpushHealthAdaptiveGroups` dataset in the GraphQL API.
* You can explore or test queries using the [Cloudflare GraphQL Explorer ↗](https://graphql.cloudflare.com/explorer).

## Key concepts in job health

### Log line

A single log entry generated by Cloudflare, such as an HTTP request, DNS query, or Access event.

### Batch

A group of logs that Cloudflare uploads together to your destination as a single file or request. A batch is also referred to as a file.

### Upload

A single attempt to upload a batch of logs to your destination. If the first attempt fails, Cloudflare automatically retries until the upload succeeds or the retry limit is reached. Each upload can have one of three outcomes: **Successful**, **Retry attempts**, or **Failed**.

#### Successful

Indicates that a batch of logs was uploaded to your destination without errors or timeouts. Once an upload succeeds, the batch is marked as delivered and no further retries occur.

#### Retry attempts

Additional upload attempts made after an initial failure. The count includes the first failed attempt. Retries continue until the batch is successfully delivered or the retry limit is reached.

#### Failed

Indicates that all upload attempts for a batch were exhausted without success. When a batch fails, Cloudflare cannot deliver its logs to your destination, and all logs in that batch are dropped. These logs are permanently lost.

## Health dashboard flow

The **Logpush Health Dashboard** provides two complementary views that help you monitor and troubleshoot log delivery: **Upload Health** and **Upload Reliability**.

Each view highlights a different aspect of job performance — what was delivered and how reliably it was delivered.

### Upload Health

**Upload Health** helps you understand how much data was successfully uploaded and where uploads failed or data was dropped. This view answers: Are uploads succeeding, and are logs reaching the destination?

#### Charts and metrics

* **Batch Upload Success vs. Failure**: Displays the number of batches that were successfully uploaded versus those that failed.  
   * **Successful Uploads** \- Total number of batches successfully uploaded.  
   * **Failed Uploads** \- Total number of batches that failed to upload due to connection or destination issues.
* **Log Lines Uploaded**: Tracks the total number of log lines successfully uploaded to your destination.  
   * **Uploaded Log Lines** \- Total number of log lines successfully delivered.  
   * **Dropped Log Lines** \- Total number of log lines that could not be delivered after all retry attempts.
* **Data Volume**: Shows the total volume of log data uploaded (in bytes), both compressed and uncompressed.  
   * **Uncompressed Data (raw)** \- Total size of log data before compression.  
   * **Compressed Data (uploaded)** \- Total size of log data after compression, representing the actual bytes transmitted.

#### When to use

Start here to assess overall data delivery health:

* High upload success and stable data volume indicate a healthy Logpush job.
* Drops, spikes, or failed uploads suggest delivery issues — proceed to **Upload Reliability** to investigate root causes.

### Upload Reliability

Upload Reliability helps you identify factors affecting reliability, stability, and latency across all upload attempts (including retries and failures). This view answers: Are uploads stable and efficient?

#### Charts and metrics

* **Uploaded Logs by Status Code**: Shows the number of batches that were successful, failed, or retried, categorized by status code.  
   * **Success Rate** \- Percentage of batches successfully uploaded.  
   * **Successful Uploads** \- Total number of batches successfully completed.
* **Upload Duration**: Shows the average time taken to complete each batch upload, broken down by status code.  
   * **Destination Availability** \- How often Cloudflare successfully connected to your destination and completed uploads.  
   * **Average Upload Duration** \- Average time taken to upload logs after they are generated.
* **Retry Attempts**: Displays the number of retries made after failed uploads, broken down by status code.  
   * **Retry Attempts** \- Total number of upload attempts made after previous failures (includes the first failed attempt).

#### When to use

Use this view to troubleshoot reliability issues:

* High latency, frequent retries, or low destination availability indicate potential instability in the destination endpoint or network.
* Combine with **Upload Health** metrics to correlate delivery success with underlying reliability patterns.

## Troubleshooting guide

The Logpush Health Dashboards help you monitor the status, reliability, and performance of your Logpush jobs. Use this guide to interpret each chart, identify the root cause of anomalies, and take corrective action.

| Chart name                                  | Symptom                                       | What it means                                                                                                                                 | Possible causes                                                                                                                                                                                                | Recommended actions                                                                                                                                                                                                                |
| ------------------------------------------- | --------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Batch Upload Success vs Failure**         | Failed uploads                                | Cloudflare could not deliver batches after all retry attempts. These batches are marked as **failed**, and all log lines in them are dropped. | \- Destination endpoint unavailable or rejecting connections (expired credentials, downtime).  \- Uploads timing out due to large batch sizes or network latency.  \- Destination throttling or rate limiting. | \- Verify destination credentials and endpoint health.  \- Reduce batch size in the Logpush job configuration.  \- Ensure the destination can handle the expected upload rate.  \- Contact Cloudflare Support if failures persist. |
| **Log Lines Uploaded**                      | Dropped log lines or reduced delivery volume  | Fewer logs are being delivered than expected, often due to failed uploads or dropped batches.                                                 | \- Spike in failed uploads.  \- Destination ingestion limits or partial uploads.                                                                                                                               | \- Compare **Log Lines Uploaded** and **Data Volume** charts for dips.  \- Check destination for ingestion errors or rate limiting.  \- Review recent Logpush job configuration changes.                                           |
| **Data Volume (Compressed & Uncompressed)** | Unexpected drop in data volume                | Delivered data volume is lower than expected, suggesting compression inefficiencies or dropped batches.                                       | \- Failed uploads or incomplete deliveries.  \- Destination rejecting uploads due to size or quota limits.                                                                                                     | \- Review compression settings and batch size.  \- Verify destination storage capacity.  \- Check for spikes in failed uploads or retries.                                                                                         |
| **Uploaded Logs by Status Code**            | High number of retries or failed status codes | Uploads fail on the first attempt but succeed on retries.                                                                                     | \- Temporary destination downtime or throttling.  \- Network instability between Cloudflare and the destination.                                                                                               | \- Review retry and failure distribution by status code.  \- Compare with **Destination Availability** for correlation.  \- Reduce batch size.                                                                                     |
| **Retry Attempts**                          | Frequent retry activity                       | Uploads are repeatedly failing and retried multiple times.                                                                                    | \- Destination instability or transient errors.  \- High latency or slow acknowledgements from the destination.                                                                                                | \- Verify destination uptime and ingestion rate.  \- Ensure destination is not throttling requests.  \- Occasional retries are expected; persistent spikes require review.                                                         |
| **Avg. Upload Duration**                    | Long upload times                             | Uploads are taking longer than expected, indicating latency or oversized batches.                                                             | \- Large batches or uncompressed payloads.  \- Network or regional latency.  \- Destination processing delays.                                                                                                 | \- Review **Avg. Upload Duration** trends.  \- Reduce batch size for faster uploads.  \- Verify destination throughput and rate limit settings.                                                                                    |
| **Destination Availability**                | Low or unstable availability                  | Cloudflare cannot consistently connect to your destination.                                                                                   | \- Destination downtime, DNS errors, or authentication issues.  \- Firewall or network restrictions blocking Cloudflare.                                                                                       | \- Check **Destination Availability** for dips.  \- Confirm destination credentials and endpoint uptime.  \- Review allowlists or network access settings.                                                                         |

## Understanding retry behavior

Logpush is designed to handle temporary destination issues through automatic retries. When your destination is temporarily unavailable, Logpush will retry approximately five times over five minutes. However, if Cloudflare persistently receives errors from your destination and cannot keep up with incoming batches, Logpush will eventually drop logs.

If errors continue for a prolonged period, Logpush assumes the destination is permanently unavailable and disables your push job. You can re-enable the job once the destination issue is resolved.

Note

These retry counts and timeframes are approximations. Actual behavior may vary based on the nature of the error and destination response times.

To monitor retry behavior and destination availability, use the [Health Dashboard](#upload-reliability) metrics, particularly the **Retry Attempts** and **Destination Availability** charts.


---

---
title: Logpush job setup
image: https://developers.cloudflare.com/core-services-preview.png
---

# Logpush job setup


---

---
title: API configuration
description: The table below summarizes the job operations available for both Logpush and Edge Log Delivery jobs. Make sure that Account-scoped datasets use /accounts/{account_id} and Zone-scoped datasets use /zones/{zone_id}. For more information, refer to the Datasets page.
image: https://developers.cloudflare.com/core-services-preview.png
---

# API configuration

## Endpoints

The table below summarizes the job operations available for both Logpush and Edge Log Delivery jobs. Make sure that account-scoped datasets use `/accounts/{account_id}` and zone-scoped datasets use `/zones/{zone_id}`. For more information, refer to the [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) page.

You can find the `{zone_id}` and `{account_id}` values as described in [Find zone and account IDs](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids/). The `{job_id}` argument is numeric, such as 123456. The `{dataset_id}` argument indicates the log category (such as `http_requests` or `audit_logs`).

| Operation | Description                                 | API                                                                                                                             |
| --------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| POST      | Create job                                  | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/jobs/methods/create/)                      |
| GET       | Retrieve job details                        | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/datasets/subresources/jobs/methods/get/)   |
| GET       | Retrieve all jobs for all datasets          | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/jobs/methods/list/)                        |
| GET       | Retrieve all jobs for a dataset             | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/datasets/subresources/jobs/methods/get/)   |
| GET       | Retrieve all available fields for a dataset | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/datasets/subresources/fields/methods/get/) |
| PUT       | Update job                                  | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/jobs/methods/update/)                      |
| DELETE    | Delete job                                  | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/jobs/methods/delete/)                      |
| POST      | Check whether destination exists            | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/validate/methods/destination/)             |
| POST      | Get ownership challenge                     | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/ownership/methods/validate/)               |
| POST      | Validate ownership challenge                | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/ownership/methods/validate/)               |
| POST      | Validate log options                        | [Documentation](https://developers.cloudflare.com/api/resources/logpush/subresources/validate/methods/origin/)                  |

For concrete examples, refer to the tutorials in [Logpush examples](https://developers.cloudflare.com/logs/logpush/examples/).

## Connecting

The Logpush API requires credentials like any other Cloudflare API.

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

List Logpush jobs

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

## Ownership

Before creating a new job, ownership of the destination must be proven.

To issue an ownership challenge token to your destination:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Get ownership challenge

```bash
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/ownership" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2"
  }'
```

A challenge file will be written to the destination, and the filename will be in the response (the filename may be expressed as a path, if appropriate for your destination):

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": "",
    "filename": "<PATH_TO_CHALLENGE_FILE>.txt"
  },
  "success": true
}
```

You will need to provide the token contained in the file when creating a job.

Note

When using Sumo Logic, you may find it helpful to have [Live Tail ↗](https://help.sumologic.com/05Search/Live-Tail/About-Live-Tail) open to see the challenge file as soon as it is uploaded.

## Destination

You can specify your cloud service provider destination via the required **destination\_conf** parameter.

Note

As of May 2022, Logpush jobs are no longer required to have unique destinations; more than one job can write to the same destination.

The `destination_conf` parameter must follow this format:

```txt
<scheme>://<destination-address>
```

Supported schemes are listed below; each is tailored to a specific provider such as R2 or S3, and generic `https` destinations are also covered:

* `r2`
* `gs`
* `s3`
* `sumo`
* `https`
* `azure`
* `splunk`
* `sentinelone`
* `datadog`

The `destination-address` should generally be provided by the destination provider. However, for certain providers, we require the `destination-address` to follow a specific format:

* **Cloudflare R2** (scheme `r2`): bucket path + account ID + R2 access key ID + R2 secret access key; for example: `r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`
* **AWS S3** (scheme `s3`): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: `s3://bucket/[dir]?region=<REGION>[&sse=AES256]`
* **Datadog** (scheme `datadog`): Datadog endpoint URL + Datadog API key + optional parameters; for example: `datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>`
* **Google Cloud Storage** (scheme `gs`): bucket + optional directory; for example: `gs://bucket/[dir]`
* **Microsoft Azure** (scheme `azure`): service-level SAS URL with `https` replaced by `azure` + optional directory added before query string; for example: `azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>`
* **New Relic** (use scheme `https`): New Relic endpoint URL which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for EU + a license key + a format; for example: for US `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"` and for EU `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
* **Splunk** (scheme `splunk`): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: `splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>`
* **Sumo Logic** (scheme `sumo`): HTTP source address URL with `https` replaced by `sumo`; for example: `sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`
* **SentinelOne** (scheme `sentinelone`): SentinelOne endpoint URL + SentinelOne sourcetype + SentinelOne authorization token; for example: `sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>`

For **R2**, **S3**, **Google Cloud Storage**, and **Azure**, you can organize logs into daily subdirectories by including the special placeholder `{DATE}` in the URL path. This placeholder will automatically be replaced with the date in the `YYYYMMDD` format (for example, `20180523`).

For example:

* `s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256`
* `azure://myblobcontainer/logs/{DATE}?[QueryString]`

This approach is useful when you want your logs grouped by day.

For more information on the value for your cloud storage provider, consult the following conventions:

* [AWS S3 CLI ↗](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) (S3Uri path argument type)
* [Google Cloud Storage CLI ↗](https://cloud.google.com/storage/docs/gsutil) (Syntax for accessing resources)
* [Microsoft Azure Shared Access Signature ↗](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
* [Sumo Logic HTTP Source ↗](https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source)

To check if a destination is already in use:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Check destination exists

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/validate/destination/exists" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "destination_conf": "s3://foo"
  }'
```

Response

```
{
  "errors": [],
  "messages": [],
  "result": {
    "exists": false
  },
  "success": true
}
```

## Name

A human-readable, optional job name that does not need to be unique. We recommend choosing a meaningful name, such as the domain name, to help you easily identify and manage your job. You can update the name later if needed.
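
For example, a minimal sketch of renaming an existing job via a job update call (assuming the job ID is stored in `$JOB_ID`):

Update Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "example.com-http-requests"
  }'
```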

## Kind

The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs. For Logpush jobs, this parameter can be left empty or omitted. For Edge Log Delivery jobs, set `"kind": "edge"`. Currently, Edge Log Delivery is only supported for the `http_requests` dataset.

Note

The kind parameter cannot be used to update existing Logpush jobs. You can only specify the kind parameter when creating a new job.

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "dataset": "http_requests",
    "output_options": {
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    },
    "kind": "edge"
  }'
```

## Options

The `logpull_options` parameter has been replaced with the custom log formatting `output_options` parameter. Refer to the [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/) documentation for instructions on configuring these options and updating your existing jobs to use them.

If you are still using `logpull_options`, here are the options that you can customize:

1. **Fields** (optional): Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the currently available fields. The list of fields is also accessible directly from the API: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields`. Default fields: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default` (see the example after this list).
2. **Timestamp format** (optional): The format in which timestamp fields will be returned. Value options: `unixnano` (nanoseconds unit - default), `unix` (seconds unit), `rfc3339` (seconds unit).
3. **Redaction for CVE-2021-44228** (optional): This option will replace every occurrence of `${` with `x{`. To enable it, set `"CVE-2021-44228": true`.
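
For example, a minimal sketch of retrieving the default fields for the `http_requests` dataset (assuming the same token permissions as the other examples on this page):

Get default fields

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/datasets/http_requests/fields/default" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```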

Note

The **CVE-2021-44228** parameter can only be set through the API at this time. Updating your Logpush job through the dashboard will set this option to false.

To check if the selected `logpull_options` are valid:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Validate origin

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/validate/origin" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true",
    "dataset": "http_requests"
  }'
```

Response

```
{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": ""
  },
  "success": true
}
```

## Configuration change timing

When you modify a Logpush job configuration, changes do not take effect immediately.

### Destination changes

If you reconfigure a job to use a new destination, logs may continue to be sent to the old destination for approximately 10-15 minutes during the transition period. This delay allows the system to complete in-flight uploads and propagate the new configuration across Cloudflare's network.

### Field changes

When you add new fields to an existing Logpush job, the new fields will appear in your logs within approximately 10-15 minutes. This timing is an estimate and may vary based on system load.

Note

These timeframes are estimates. If you need to verify that changes have taken effect, monitor your destination for the updated log format or check the [Health Dashboard](https://developers.cloudflare.com/logs/logpush/logpush-health/) for recent uploads.

## Filter

Use filters to select the events to include and/or remove from your logs. For more information, refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/).

## Sampling rate

Value can range from `0.0` (exclusive) to `1.0` (inclusive). `sample=0.1` means return 10% (1 in 10) of all records. The default value is `1`, meaning logs will be unsampled.
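
For example, a minimal sketch of sampling 10% of records on an existing job (assuming `$JOB_ID` and a job that uses `output_options`; note that supplying `output_options` in an update may replace the whole object, so include your full configuration in practice):

Update sampling rate

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "output_options": {
      "sample_rate": 0.1
    }
  }'
```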

### Understanding `sample_rate` and `SampleInterval`

The `sample_rate` parameter and `SampleInterval` field are independent mechanisms that operate at different stages of the logging pipeline:

* **`sample_rate`**: A configuration parameter you set on your Logpush job to control what percentage of logs are delivered to your destination (0.0-1.0). For example, setting `sample_rate: 0.1` delivers approximately 10% of logs.
* **`SampleInterval`**: A data field that appears in certain datasets (particularly [Network Analytics Logs](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/network%5Fanalytics%5Flogs/)) indicating upstream sampling applied during data collection. A `SampleInterval` of 1000 means the log entry represents 1 in 1000 packets.

The `sample_rate` you configure applies on top of any pre-existing sampling. If your data already has `SampleInterval: 1000` and you set `sample_rate: 0.1`, you receive approximately 1 in 10,000 of the original events (1000 × 10).

Note

When customer-configured sampling is applied, the `SampleInterval` field value in the logs is not modified. When there is no internal sampling, `SampleInterval` will always be 1 regardless of your configured `sample_rate`.

## Max upload parameters

These parameters control the size of each upload batch — not how quickly data is delivered. Use them to prevent overloading your destination with uploads that are too large or too small.

| Parameter                     | Description                                                               | Default               |
| ----------------------------- | ------------------------------------------------------------------------- | --------------------- |
| `max_upload_bytes`            | Maximum uncompressed file size of a batch of logs.                        | Varies by destination |
| `max_upload_records`          | Maximum number of log lines per batch.                                    | 100,000               |
| `max_upload_interval_seconds` | Maximum time span of log data per batch (used during catch-up scenarios). | Varies by destination |

Note

These settings influence upload size, not delivery latency. Logpush processes data approximately once per minute, regardless of these parameter values. Adjusting these settings results in smaller or larger uploads per batch, which can help you avoid overloading destinations that have memory or request-size constraints.

### When to adjust these parameters

* Reduce `max_upload_records` if your destination struggles with large payloads or runs out of memory processing big batches.
* Increase `max_upload_records` if you want fewer, larger files (for example, when pushing to object storage like R2 or S3).
* For destinations like Datadog that have strict payload limits, Logpush automatically uses smaller batch sizes (for example, 1,000 rows).
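
As a sketch, lowering the batch size on an existing job (assuming `$JOB_ID`; the max upload parameters are top-level job fields):

Update max upload records

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "max_upload_records": 10000
  }'
```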

Tip

If you need to estimate the number of files generated for cost planning (for example, R2 write operations), run Logpush for a representative period and measure the actual output. The number of uploads depends on your data volume and cannot be precisely calculated in advance.

## Custom fields

You can add custom fields to your HTTP request log entries in the form of HTTP request headers, HTTP response headers, and cookies. Custom fields configuration applies to all the Logpush jobs in a zone that use the HTTP requests dataset. To learn more, refer to [Custom fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/).

## Audit

The following Logpush actions are recorded in **Cloudflare Audit Logs**: create, update, and delete job.


---

---
title: Custom fields
description: The HTTP requests dataset includes most standard log information by default. However, if you need to capture additional request or response headers or cookies, you can use custom fields to tailor the logs to your specific needs
image: https://developers.cloudflare.com/core-services-preview.png
---


# Custom fields

The HTTP requests dataset includes most standard log information by default. However, if you need to capture additional request or response headers or cookies, you can use custom fields to tailor the logs to your specific needs.

Custom fields are configured per zone and, once set up, are enabled for all Logpush jobs in that zone that use the HTTP requests dataset and include the request headers, response headers, or cookie fields. You can log these fields in their raw form or as transformed values.

By default:

* **Request headers** are logged as **raw values**.
* **Response headers** are logged as **transformed values**.

This default behavior can be changed. You can configure either request or response headers to be logged as raw or transformed, depending on your needs, but not both for the same header.

Custom fields can be enabled via API or the Cloudflare dashboard.

Note

Custom fields are only available for the [HTTP requests dataset](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http%5Frequests/).

Note

There is no way to automatically forward all custom headers in Logpush without manually specifying each one. Each request or response header must be individually configured using Custom Fields.

## Enable custom rules via API

Use the [Rulesets API](https://developers.cloudflare.com/ruleset-engine/rulesets-api/) to create a rule that configures custom fields. For more information on concepts like phases, rulesets, and rules, as well as the available API operations, refer to the [Ruleset Engine](https://developers.cloudflare.com/ruleset-engine/) documentation.

To configure custom fields:

1. Create a rule to configure the list of custom fields.
2. Include the `Cookies`, `RequestHeaders`, and/or `ResponseHeaders` fields in your Logpush job.

### 1. Create a rule to configure the list of custom fields

Create a rule configuring the list of custom fields in the `http_log_custom_fields` phase at the zone level. Set the rule action to `log_custom_field` and the rule expression to `true`.

The `action_parameters` object that you must include in the rule that configures the list of custom fields should have the following structure:

```
"action_parameters": {
  // select raw (default) or transformed request header
  "request_fields": [
    { "name": "<http_request_header_raw>" }
  ],
  "transformed_request_fields": [
    { "name": "<http_request_header_transformed>" }
  ],
  // select raw or transformed (default) response header
  "response_fields": [
    { "name": "<http_response_header_transformed>" }
  ],
  "raw_response_fields": [
    { "name": "<http_response_header_raw>" }
  ],
  "cookie_fields": [
    { "name": "<cookie_name>" }
  ]
}
```

Ensure that your rule definition complies with the following:

* You must include at least one of the following arrays in the `action_parameters` object: `request_fields`, `transformed_request_fields`, `response_fields`, `raw_response_fields`, and `cookie_fields`.
* You must enter HTTP request and response header names in lower case.
* Cookie names are case sensitive — you must enter cookie names with the same capitalization they have in the HTTP request.
* You must set the rule expression to `true`.
* You can only log raw or transformed values for either request or response headers but not both for the same header.

Perform the following steps to create the rule:

1. Use the [List zone rulesets](https://developers.cloudflare.com/ruleset-engine/rulesets-api/view/#list-existing-rulesets) operation to check if there is already an [entry point ruleset](https://developers.cloudflare.com/ruleset-engine/about/rulesets/#entry-point-ruleset) for the `http_log_custom_fields` phase at the zone level (you can only have one entry point ruleset per phase):  
List zone rulesets  
```  
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets" \  
  --request GET \  
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"  
```  
If there is an entry point ruleset for the `http_log_custom_fields` phase (that is, a ruleset with `"kind": "zone"` and `"phase": "http_log_custom_fields"`), take note of the ruleset ID.
2. (Optional) If the response did not include a ruleset with `"kind": "zone"` and `"phase": "http_log_custom_fields"`, create the phase entry point ruleset using the [Create a zone ruleset](https://developers.cloudflare.com/ruleset-engine/rulesets-api/create/) operation:  
Create a zone ruleset  
```  
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets" \  
  --request POST \  
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \  
  --json '{  
    "name": "Zone-level phase entry point",  
    "kind": "zone",  
    "description": "This ruleset configures custom log fields.",  
    "phase": "http_log_custom_fields"  
  }'  
```  
Take note of the ruleset ID included in the response.
3. Use the [Update a zone ruleset](https://developers.cloudflare.com/ruleset-engine/rulesets-api/update/) operation to define the rules of the entry point ruleset you found (or created in the previous step), adding a rule with the custom fields configuration. The rules you include in the request will replace all the rules in the ruleset.  
The following example configures custom fields with the names of the HTTP request headers, HTTP response headers, and cookies you wish to include in Logpush logs:  
Update a zone ruleset  
```  
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID" \  
  --request PUT \  
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \  
  --json '{  
    "rules": [  
        {  
            "action": "log_custom_field",  
            "expression": "true",  
            "description": "Set Logpush custom fields for HTTP requests",  
            "action_parameters": {  
                "request_fields": [  
                    {  
                        "name": "content-type"  
                    },  
                    {  
                        "name": "x-forwarded-for"  
                    }  
                ],  
                "transformed_request_fields": [  
                    {  
                        "name": "host"  
                    }  
                ],  
                "response_fields": [  
                    {  
                        "name": "server"  
                    },  
                    {  
                        "name": "content-type"  
                    }  
                ],  
                "raw_response_fields": [  
                    {  
                        "name": "allow"  
                    }  
                ],  
                "cookie_fields": [  
                    {  
                        "name": "__ga"  
                    },  
                    {  
                        "name": "accountNumber"  
                    },  
                    {  
                        "name": "__cfruid"  
                    }  
                ]  
            }  
        }  
    ]  
  }'  
```  
Response  
```  
{  
  "result": {  
    "id": "<RULESET_ID>",  
    "name": "Zone-level phase entry point",  
    "description": "This ruleset configures custom log fields.",  
    "kind": "zone",  
    "version": "2",  
    "rules": [  
      {  
        "id": "<RULE_ID_1>",  
        "version": "1",  
        "action": "log_custom_field",  
        "action_parameters": {  
          "request_fields": [  
            { "name": "content-type" },  
            { "name": "x-forwarded-for" }  
          ],  
          "transformed_request_fields": [{ "name": "host" }],  
          "response_fields": [  
            { "name": "server" },  
            { "name": "content-type" }  
          ],  
          "raw_response_fields": [{ "name": "allow" }],  
          "cookie_fields": [  
            { "name": "__ga" },  
            { "name": "accountNumber" },  
            { "name": "__cfruid" }  
          ]  
        },  
        "expression": "true",  
        "description": "Set Logpush custom fields for HTTP requests",  
        "last_updated": "2021-11-21T11:02:08.769537Z",  
        "ref": "<RULE_REF_1>",  
        "enabled": true  
      }  
    ],  
    "last_updated": "2021-11-21T11:02:08.769537Z",  
    "phase": "http_log_custom_fields"  
  },  
  "success": true,  
  "errors": [],  
  "messages": []  
}  
```

#### Record duplicate response header values

Some headers sent from the origin — such as `set-cookie` — may have multiple values that you want to capture. You can use the Rulesets API to specify which headers should have all their values logged.

Update a zone ruleset

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "rules": [
        {
            "action": "log_custom_field",
            "expression": "true",
            "description": "Set Logpush custom fields for HTTP requests",
            "action_parameters": {
                "response_fields": [
                    {
                        "name": "set-cookie",
                        "preserve_duplicates": true
                    }
                ]
            }
        }
    ]
  }'
```

Note that `preserve_duplicates` applies to both `response_fields` and `raw_response_fields`. If there are no transform rules that affect a header, including `preserve_duplicates` in either `response_fields` or `raw_response_fields` should achieve the same result.

In this example, all values of the `set-cookie` headers will be logged. They will appear as an array of string values under `ResponseFields`, for example:

```
{
  // ...
  "ResponseFields": {
    "set-cookie": ["name1=val1", "name2=val2", ...]
  }
}
```

You can use a Worker or custom logic at your Logpush destination to extract these values.
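
For example, a minimal sketch using `jq` against newline-delimited JSON logs downloaded from your destination (the file name is illustrative):

```
# Print every captured set-cookie value from each log line,
# skipping lines where the field is absent
jq -r '.ResponseFields["set-cookie"][]?' logs.ndjson
```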

### 2. Include the custom fields in your Logpush job

Next, include `Cookies`, `RequestHeaders`, `ResponseHeaders`, and/or `ResponseFields`, depending on your custom field configuration, in the list of fields of the `output_options` job parameter when creating or updating a job. The logs will contain the configured custom fields and their values in the request/response.

For example, consider the following request that creates a job that includes custom fields:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "dataset": "http_requests",
    "output_options": {
        "field_names": [
            "RayID",
            "EdgeStartTimestamp",
            "Cookies",
            "RequestHeaders",
            "ResponseHeaders"
        ],
        "timestamp_format": "rfc3339"
    },
    "ownership_challenge": "<OWNERSHIP_CHALLENGE_TOKEN>"
  }'
```

Note for Cloudflare Access users

If you are a Cloudflare Access user, as of March 2022 you have to manually add the `cf-access-user` user identity header to your logs by creating a custom fields ruleset or adding the `cf-access-user` HTTP request header to your custom fields configuration. Additionally, make sure that you include the `RequestHeaders` field in your Logpush job.

## Enable custom fields via dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page.  
[ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. In the **Custom log fields** section, select **Edit Custom Fields**.
3. Select **Set new Custom Field**.
4. From the **Field Type** dropdown, select _Request Header_, _Response Header_ or _Cookies_ and type the **Field Name**.
5. When you are done, select **Save**.

## Use case: Logging mTLS certificate headers

To log mTLS certificate details (such as `cf-cert-subject-dn` or `cf-cert-issuer-dn`) in Logpush, you need to:

1. Enable the [Add TLS client auth headers](https://developers.cloudflare.com/rules/transform/managed-transforms/reference/#add-tls-client-auth-headers) Managed Transform to inject the certificate headers.
2. Configure Logpush custom fields using `transformed_request_fields` (not `request_fields`) to capture these Cloudflare-injected headers.
3. Ensure your Logpush job includes the `RequestHeaders` field.

The mTLS headers are injected by Cloudflare after the client request is received, so they must be captured using `transformed_request_fields` rather than `request_fields`.
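
As a sketch, the rule from step 2 might look like the following (same entry point ruleset as above; remember that the `rules` array replaces all existing rules in the ruleset):

Update a zone ruleset

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "rules": [
        {
            "action": "log_custom_field",
            "expression": "true",
            "description": "Log mTLS client certificate headers",
            "action_parameters": {
                "transformed_request_fields": [
                    { "name": "cf-cert-subject-dn" },
                    { "name": "cf-cert-issuer-dn" }
                ]
            }
        }
    ]
  }'
```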

For more information on configuring client certificates, refer to [mTLS authentication](https://developers.cloudflare.com/ssl/client-certificates/enable-mtls/).

## Limitations

* Custom fields allow 100 headers per field type — this applies separately to `request_fields`, `transformed_request_fields`, `response_fields`, `raw_response_fields`, and `cookie_fields`.
* The request header `Range` is currently not supported by Custom Fields.
* Transformed and raw values for request and response headers are available only via the API and cannot be set through the UI.


---

---
title: Datasets
description: The datasets below describe the fields available by log category:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Datasets

The datasets below describe the fields available by log category:

* [Zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/)
* [Account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/)

## API

The list of fields can also be accessed directly from the API using the following endpoints:

* For zone-scoped datasets: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/<DATASET>/fields`
* For account-scoped datasets: `https://api.cloudflare.com/client/v4/accounts/{account_id}/logpush/datasets/<DATASET>/fields`

The `<DATASET>` argument indicates the log category. For example, `http_requests`, `spectrum_events`, `firewall_events`, `nel_reports`, or `dns_logs`.
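
For example, a minimal sketch of listing the fields for the account-scoped `audit_logs` dataset (substitute your own account ID and token):

```
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/logpush/datasets/audit_logs/fields" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```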

## Availability

* The availability of Logpush dataset fields depends on your subscription plan.
* Zone-scoped HTTP requests are available in both Logpush and Logpull.
* [Custom fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/) for HTTP requests are only available in Logpush.
* All other datasets are only available through Logpush.

## Deprecation

Deprecated fields remain available to prevent breaking existing jobs. They may eventually become empty values if completely removed. Customers are encouraged to migrate away from deprecated fields if they are using them.

## Recommendation

For log field **ClientIPClass**, Cloudflare recommends using [bot tags](https://developers.cloudflare.com/bots/concepts/bot-tags/) to classify IPs.

## Additional resources

For more information on logs available in Cloudflare Zero Trust, refer to [Zero Trust logs](https://developers.cloudflare.com/cloudflare-one/insights/logs/).


---

---
title: Access requests
description: The descriptions below detail the fields available for access_requests.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Access requests

The descriptions below detail the fields available for `access_requests`.

## Action

Type: `string`

The type of record: _login_ | _logout_.

## Allowed

Type: `bool`

Whether the request was allowed or denied.

## AppDomain

Type: `string`

The domain of the Application that Access is protecting.

## AppUUID

Type: `string`

Access Application UUID.

## Connection

Type: `string`

Identity provider used for the login.

## Country

Type: `string`

Request's country of origin.

## CreatedAt

Type: `int or string`

The date and time the corresponding access request was made (for example, '2021-07-27T00:01:07Z').

## Email

Type: `string`

Email of the user who logged in.

## IPAddress

Type: `string`

The IP address of the client.

## PurposeJustificationPrompt

Type: `string`

Message prompted to the client when accessing the application.

## PurposeJustificationResponse

Type: `string`

Justification given by the client when accessing the application.

## RayID

Type: `string`

Identifier of the request.

## TemporaryAccessApprovers

Type: `array[string]`

List of approvers for this access request.

## TemporaryAccessDuration

Type: `int`

Approved duration for this access request.

## UserUID

Type: `string`

The UID of the user who logged in.
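
Putting these fields together, a hypothetical `access_requests` log line (all values are illustrative, not real data) might look like:

```
{
  "Action": "login",
  "Allowed": true,
  "AppDomain": "app.example.com",
  "AppUUID": "a1b2c3d4-1234-5678-9abc-def012345678",
  "Connection": "onetimepin",
  "Country": "us",
  "CreatedAt": "2021-07-27T00:01:07Z",
  "Email": "user@example.com",
  "IPAddress": "203.0.113.1",
  "RayID": "674454f0aa5f5faf"
}
```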


---

---
title: Audit Logs
description: The descriptions below detail the fields available for audit_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Audit Logs

The descriptions below detail the fields available for `audit_logs`.

## ActionResult

Type: `bool`

Whether the action was successful.

## ActionType

Type: `string`

Type of action taken.

## ActorEmail

Type: `string`

Email of the actor.

## ActorID

Type: `string`

Unique identifier of the actor in Cloudflare's system.

## ActorIP

Type: `string`

Physical network address of the actor.

## ActorType

Type: `string`

Type of user that started the audit trail.

## ID

Type: `string`

Unique identifier of an audit log.

## Interface

Type: `string`

Entry point or interface of the audit log.

## Metadata

Type: `object`

Additional audit log-specific information. Metadata is organized in key:value pairs. Key and Value formats can vary by ResourceType.

## NewValue

Type: `object`

Contains the new value for the audited item.

## OldValue

Type: `object`

Contains the old value for the audited item.

## OwnerID

Type: `string`

The identifier of the user who performed the action or on whose behalf the action was performed. If a user performed the action themselves, this value will be the same as the ActorID.

## ResourceID

Type: `string`

Unique identifier of the resource within Cloudflare's system.

## ResourceType

Type: `string`

The type of resource that was changed.

## When

Type: `int or string`

When the change happened.


---

---
title: Audit Logs V2
description: The descriptions below detail the fields available for audit_logs_v2.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Audit Logs V2

The descriptions below detail the fields available for `audit_logs_v2`.

## AccountID

Type: `string`

The Cloudflare account ID.

## AccountName

Type: `string`

The Cloudflare account name.

## ActionDescription

Type: `string`

Description of action taken.

## ActionResult

Type: `string`

Whether the action was successful.

## ActionTimestamp

Type: `int or string`

When the change happened.

## ActionType

Type: `string`

Type of action taken.

## ActorContext

Type: `string`

Context of the actor.

## ActorEmail

Type: `string`

Email of the actor.

## ActorID

Type: `string`

Unique identifier of the actor in Cloudflare's system.

## ActorIPAddress

Type: `string`

Physical network address of the actor.

## ActorTokenDetails

Type: `object`

Details of how the actor is authenticated.

## ActorType

Type: `string`

Type of user that started the audit trail.

## AuditLogID

Type: `string`

Unique identifier of an audit log.

## Raw

Type: `object`

Raw data.

## ResourceID

Type: `string`

Unique identifier of the resource within Cloudflare's system.

## ResourceProduct

Type: `string`

Resource product.

## ResourceRequest

Type: `object`

Resource request.

## ResourceResponse

Type: `object`

Resource response.

## ResourceScope

Type: `string`

Resource scope.

## ResourceType

Type: `string`

The type of resource that was changed.

## ResourceValue

Type: `object`

Resource value.

## ZoneID

Type: `string`

The Cloudflare zone ID.

## ZoneName

Type: `string`

The Cloudflare zone name.


---

---
title: Browser Isolation User Actions
description: The descriptions below detail the fields available for biso_user_actions.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Browser Isolation User Actions

The descriptions below detail the fields available for `biso_user_actions`.

## AccountID

Type: `string`

The Cloudflare account ID.

## Decision

Type: `string`

The decision applied ('allow' or 'block').

## DomainName

Type: `string`

The domain name in the URL.

## Metadata

Type: `string`

Additional information specific to a user action (JSON string).

## Timestamp

Type: `int or string`

The date and time of the user action.

## Type

Type: `string`

The user action type ('copy', 'paste', 'download', etc.).

## URL

Type: `string`

The URL of the webpage where a user action was performed.

## UserEmail

Type: `string`

The user email.

## UserID

Type: `string`

The user ID.


---

---
title: CASB Findings
description: The descriptions below detail the fields available for casb_findings.
image: https://developers.cloudflare.com/core-services-preview.png
---


# CASB Findings

The descriptions below detail the fields available for `casb_findings`.

## AssetDisplayName

Type: `string`

Asset display name (for example, 'My File Name.docx').

## AssetExternalID

Type: `string`

Unique identifier for an asset of this type. Format will vary by policy vendor.

## AssetLink

Type: `string`

URL to the asset. This may not be available for some policy vendors and asset types.

## AssetMetadata

Type: `object`

Metadata associated with the asset. Structure will vary by policy vendor.

## DetectedTimestamp

Type: `int or string`

Date and time the finding was first identified (for example, '2021-07-27T00:01:07Z').

## FindingTypeDisplayName

Type: `string`

Human-readable name of the finding type (for example, 'File Publicly Accessible Read Only').

## FindingTypeID

Type: `string`

UUID of the finding type in Cloudflare's system.

## FindingTypeSeverity

Type: `string`

Severity of the finding type (for example, 'High').

## InstanceID

Type: `string`

UUID of the finding in Cloudflare's system.

## IntegrationDisplayName

Type: `string`

Human-readable name of the integration (for example, 'My Google Workspace Integration').

## IntegrationID

Type: `string`

UUID of the integration in Cloudflare's system.

## IntegrationPolicyVendor

Type: `string`

Human-readable vendor name of the integration's policy (for example, 'Google Workspace Standard Policy').


---

---
title: Device posture results
description: The descriptions below detail the fields available for device_posture_results.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Device posture results

The descriptions below detail the fields available for `device_posture_results`.

## ClientVersion

Type: `string`

The Zero Trust client version at the time of upload.

## DeviceID

Type: `string`

The device ID that performed the posture upload.

## DeviceManufacturer

Type: `string`

The manufacturer of the device that the Zero Trust client is running on.

## DeviceModel

Type: `string`

The model of the device that the Zero Trust client is running on.

## DeviceName

Type: `string`

The name of the device that the Zero Trust client is running on.

## DeviceSerialNumber

Type: `string`

The serial number of the device that the Zero Trust client is running on.

## DeviceType

Type: `string`

The Zero Trust client operating system type.

## Email

Type: `string`

The email used to register the device with the Zero Trust client.

## OSVersion

Type: `string`

The operating system version at the time of upload.

## PolicyID

Type: `string`

The posture check ID associated with this device posture result.

## PostureCheckName

Type: `string`

The name of the posture check associated with this device posture result.

## PostureCheckType

Type: `string`

The type of the Zero Trust client check or service provider check.

## PostureEvaluatedResult

Type: `bool`

Whether this posture upload passes the associated posture check, given the posture check's requirements at the time of the timestamp.

## PostureExpectedJSON

Type: `object`

JSON object of what the posture check expects from the Zero Trust client.

## PostureReceivedJSON

Type: `object`

JSON object of what the Zero Trust client actually uploads.

## RegistrationID

Type: `string`

The UUID of the device registration associated with this posture result.

## Timestamp

Type: `int or string`

The date and time the corresponding device posture upload was performed (for example, '2021-07-27T00:01:07Z'). To specify the timestamp format, refer to [Output types](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/#output-types).

## UserUID

Type: `string`

The UID of the user who registered the device.


---

---
title: DEX Application Tests
description: The descriptions below detail the fields available for dex_application_tests.
image: https://developers.cloudflare.com/core-services-preview.png
---


# DEX Application Tests

The descriptions below detail the fields available for `dex_application_tests`.

## AccountID

Type: `string`

The Cloudflare account ID.

## ClientPlatform

Type: `string`

The client's operating system.

## ClientVersion

Type: `string`

The WARP client version.

## ColoCode

Type: `string`

The Colo code where the WARP client is connected to Cloudflare.

## DeviceID

Type: `string`

The unique device ID.

## DeviceRegistrationID

Type: `string`

The unique ID for the device registration.

## ExecutionContext

Type: `string`

Whether the test traffic was run inside or outside of the tunnel. Can be `inTunnel` or `outOfTunnel`.

## HTTPClientIPASN

Type: `int`

HTTP test client IP autonomous system number, for example `13335`. HTTP tests only.

## HTTPClientIPASO

Type: `string`

HTTP test client IP autonomous system organization, for example `Cloudflare, Inc.`. HTTP tests only.

## HTTPClientIPAddress

Type: `string`

HTTP test client IP address. HTTP tests only.

## HTTPClientIPCity

Type: `string`

HTTP test client IP city name in English language, for example `Los Angeles`. HTTP tests only.

## HTTPClientIPCountryISO

Type: `string`

HTTP test client IP country ISO code, for example `US` for the United States. HTTP tests only.

## HTTPClientIPNetmask

Type: `string`

HTTP test client IP netmask. HTTP tests only.

## HTTPClientIPStateISO

Type: `string`

HTTP test client IP state ISO code, for example `CA` for California. HTTP tests only.

## HTTPClientIPVersion

Type: `string`

HTTP test client IP version. HTTP tests only.

## HTTPClientIPZip

Type: `string`

HTTP test client IP postal code, for example `90001`. HTTP tests only.

## HTTPConnectEndMs

Type: `int`

HTTP test result connect end, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPConnectStartMs

Type: `int`

HTTP test result connect start, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPDomainLookupEndMs

Type: `int`

HTTP test result domain lookup end, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPDomainLookupStartMs

Type: `int`

HTTP test result domain lookup start, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPErrorMessage

Type: `string`

HTTP test result error message. HTTP tests only.

## HTTPMethod

Type: `string`

HTTP test method. HTTP tests only.

## HTTPRedirectEndMs

Type: `int`

HTTP test redirect end timestamp, in milliseconds elapsed since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPRedirectStartMs

Type: `int`

HTTP test redirect start timestamp, in milliseconds elapsed since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPRequestStartMs

Type: `int`

HTTP test result request start, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPResponseBody

Type: `string`

HTTP response body. HTTP tests only.

## HTTPResponseBodyBytes

Type: `int`

Size of the HTTP response body. HTTP tests only.

## HTTPResponseEndMs

Type: `int`

HTTP test result response end, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPResponseHeaderBytes

Type: `int`

HTTP test result header bytes. HTTP tests only.

## HTTPResponseHeaders

Type: `array[object]`

HTTP response headers, for example `[{"name": "Content-Type", "value": "text/html"}]`. HTTP tests only.

## HTTPResponseStartMs

Type: `int`

HTTP test result response start, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPSecureConnectionStartMs

Type: `int`

HTTP test result secure connection start, in milliseconds since test start. HTTP tests only. Refer to [Resource timing ↗](https://developer.mozilla.org/en-US/docs/Web/API/Resource%5FTiming%5FAPI/Using%5Fthe%5FResource%5FTiming%5FAPI) for more details.

## HTTPServerIPASN

Type: `int`

HTTP test server IP autonomous system number, for example `13335`. HTTP tests only.

## HTTPServerIPASO

Type: `string`

HTTP test server IP autonomous system organization, for example `Cloudflare, Inc.`. HTTP tests only.

## HTTPServerIPAddress

Type: `string`

HTTP test server IP address. HTTP tests only.

## HTTPServerIPCity

Type: `string`

HTTP test server IP city name in English language, for example `Los Angeles`. HTTP tests only.

## HTTPServerIPCountryISO

Type: `string`

HTTP test server IP country ISO code, for example `US` for the United States. HTTP tests only.

## HTTPServerIPNetmask

Type: `string`

HTTP test server IP netmask. HTTP tests only.

## HTTPServerIPStateISO

Type: `string`

HTTP test server IP state ISO code, for example `CA` for California. HTTP tests only.

## HTTPServerIPVersion

Type: `string`

HTTP test server IP version. HTTP tests only.

## HTTPServerIPZip

Type: `string`

HTTP test server IP postal code, for example `90001`. HTTP tests only.

## HTTPStatusCode

Type: `int`

HTTP test result status code. HTTP tests only.

## HTTPURL

Type: `string`

HTTP test target URL. HTTP tests only.

## TestID

Type: `string`

The test ID for which the result was uploaded.

## TestType

Type: `string`

The type of test. Can be `traceroute` or `http`.

## Timestamp

Type: `int or string`

Test start time.

## TracerouteDestinationHostname

Type: `string`

Traceroute test result destination hostname. Traceroute tests only.

## TracerouteDestinationIPASN

Type: `int`

Traceroute test destination IP autonomous system number, for example `13335`. Traceroute tests only.

## TracerouteDestinationIPASO

Type: `string`

Traceroute test destination IP autonomous system organization, for example `Cloudflare, Inc.`. Traceroute tests only.

## TracerouteDestinationIPAddress

Type: `string`

Traceroute test destination IP address. Traceroute tests only.

## TracerouteDestinationIPCity

Type: `string`

Traceroute test destination IP city name in English language, for example `Los Angeles`. Traceroute tests only.

## TracerouteDestinationIPCountryISO

Type: `string`

Traceroute test destination IP country ISO code, for example `US` for the United States. Traceroute tests only.

## TracerouteDestinationIPNetmask

Type: `string`

Traceroute test destination IP netmask. Traceroute tests only.

## TracerouteDestinationIPStateISO

Type: `string`

Traceroute test destination IP state ISO code, for example `CA` for California. Traceroute tests only.

## TracerouteDestinationIPVersion

Type: `string`

Traceroute test destination IP version. Traceroute tests only.

## TracerouteDestinationIPZip

Type: `string`

Traceroute test destination IP postal code, for example `90001`. Traceroute tests only.

## TracerouteDurationMs

Type: `int`

Traceroute test result duration in milliseconds. Traceroute tests only.

## TracerouteHops

Type: `array[object]`

Traceroute test result hops, for example `[{"errors": ["timeout", "host unreachable"], "ip": {"address": "192.0.2.0", "asn": 13335, "aso": "Cloudflare, Inc.", "location": {"city": "Los Angeles", "countryISO": "US", "stateISO": "CA", "zip": "90001"}, "netmask": "255.255.255.0", "version": "v4"}, "name": "router1.example.com", "pathID": 1, "received": 3, "rtts": [10, 12, 11], "sent": 3, "ttl": 60}]`. Traceroute tests only.

## TracerouteMaxTTL

Type: `int`

Traceroute test result maximum TTL value. Traceroute tests only.

## TracerouteProtocol

Type: `string`

Traceroute test result protocol. Can be `icmp`, `udp`, or `tcp`. Traceroute tests only.

## TracerouteSize

Type: `int`

Traceroute test result packet size in bytes. Traceroute tests only.

## TracerouteSourceIPASN

Type: `int`

Traceroute test source IP autonomous system number, for example `13335`. Traceroute tests only.

## TracerouteSourceIPASO

Type: `string`

Traceroute test source IP autonomous system organization, for example `Cloudflare, Inc.`. Traceroute tests only.

## TracerouteSourceIPAddress

Type: `string`

Traceroute test source IP address. Traceroute tests only.

## TracerouteSourceIPCity

Type: `string`

Traceroute test source IP city name in English language, for example `Los Angeles`. Traceroute tests only.

## TracerouteSourceIPCountryISO

Type: `string`

Traceroute test source IP country ISO code, for example `US` for the United States. Traceroute tests only.

## TracerouteSourceIPNetmask

Type: `string`

Traceroute test source IP netmask. Traceroute tests only.

## TracerouteSourceIPStateISO

Type: `string`

Traceroute test source IP state ISO code, for example `CA` for California. Traceroute tests only.

## TracerouteSourceIPVersion

Type: `string`

Traceroute test source IP version. Traceroute tests only.

## TracerouteSourceIPZip

Type: `string`

Traceroute test source IP postal code, for example `90001`. Traceroute tests only.

## TracerouteStatus

Type: `string`

Traceroute test result status. Can be `destinationReached`, `lastHopFailed`, or `maxHopsExhausted`. Traceroute tests only.

## TracerouteTimeEnd

Type: `int or string`

Traceroute test result time end. Traceroute tests only.

## TracerouteVersion

Type: `string`

The version of the WARP traceroute client. Traceroute tests only.

## TunnelType

Type: `string`

The tunnel type the device uses to establish a connection to the edge, if any. Can be `http2`, `masque`, or `wireguard`.

## UserEmail

Type: `string`

The Access user email.

## UserID

Type: `string`

The Access user ID.


---

---
title: DEX Device State Events
description: The descriptions below detail the fields available for dex_device_state_events.
image: https://developers.cloudflare.com/core-services-preview.png
---


# DEX Device State Events

The descriptions below detail the fields available for `dex_device_state_events`.

## AccountID

Type: `string`

The Cloudflare account ID.

## AlwaysOn

Type: `bool`

Whether the WARP daemon is configured to reconnect automatically or not.

## AppFirewallEnabled

Type: `bool`

Whether the application-level firewall is enabled or disabled.

## BatteryCharging

Type: `bool`

Whether the battery is charging or not.

## BatteryCycles

Type: `int`

The number of battery cycles. May not be available on all platforms.

## BatteryPercentage

Type: `float`

The percentage of battery remaining, from 0 to 1.

## CPUPercentage

Type: `float`

The percentage of CPU utilization, from 0 to 1.

## CPUPercentageByApp

Type: `array[object]`

The top applications by percentage of CPU used, for example `[{"name": "app0", "percentage": 0.55}, {"name": "app1", "percentage": 0.45}]`.

## ClientPlatform

Type: `string`

The client's operating system.

## ClientVersion

Type: `string`

The WARP client version.

## ConnectionType

Type: `string`

The type of connection the device has. Can be `cellular`, `ethernet`, or `wifi`.

## DeviceID

Type: `string`

The unique device ID.

## DeviceIPv4Address

Type: `string`

The device's private IPv4 address.

## DeviceIPv4Netmask

Type: `string`

The device's private IPv4 netmask.

## DeviceIPv6Address

Type: `string`

The device's private IPv6 address.

## DeviceIPv6Netmask

Type: `string`

The device's private IPv6 netmask.

## DeviceRegistrationID

Type: `string`

The unique ID for the device registration.

## DiskReadBPS

Type: `int`

The number of disk bytes read per second.

## DiskUsagePercentage

Type: `float`

Disk usage, expressed as a fraction from 0 to 1.

## DiskWriteBPS

Type: `int`

The number of disk bytes written per second.

## DoHSubdomain

Type: `string`

The WARP client's DoH subdomain.

## ExperimentalExtra

Type: `object`

Additional unstructured data sent by the WARP client. This field may change at any time.

## FirewallEnabled

Type: `bool`

Whether the system-level firewall is enabled or disabled.

## GatewayIPv4Address

Type: `string`

The private IPv4 address of the gateway/router the device is connected to.

## GatewayIPv4Netmask

Type: `string`

The private IPv4 netmask of the gateway/router the device is connected to.

## GatewayIPv6Address

Type: `string`

The private IPv6 address of the gateway/router the device is connected to.

## GatewayIPv6Netmask

Type: `string`

The private IPv6 netmask of the gateway/router the device is connected to.

## HandshakeLatencyMs

Type: `int`

When WARP is connected, the tunnel's estimated latency in milliseconds. When disconnected, the value is `-1`.

## ISPIPv4ASN

Type: `int`

The public IPv4 autonomous system number of the device assigned by the ISP, for example `13335`.

## ISPIPv4ASO

Type: `string`

The public IPv4 autonomous system organization of the device assigned by the ISP, for example `Cloudflare Inc`.

## ISPIPv4Address

Type: `string`

The public IPv4 address of the device assigned by the ISP.

## ISPIPv4City

Type: `string`

The public IPv4 city name, in English, of the device assigned by the ISP, for example `San Francisco`.

## ISPIPv4CountryISO

Type: `string`

The public IPv4 country ISO code of the device assigned by the ISP, for example `US` for the United States.

## ISPIPv4Netmask

Type: `string`

The public IPv4 netmask of the device assigned by the ISP.

## ISPIPv4StateISO

Type: `string`

The public IPv4 state ISO code of the device assigned by the ISP, for example `CA` for California.

## ISPIPv4Zip

Type: `string`

The public IPv4 postal code of the device assigned by the ISP, for example `90001`.

## ISPIPv6ASN

Type: `int`

The public IPv6 autonomous system number of the device assigned by the ISP, for example `13335`.

## ISPIPv6ASO

Type: `string`

The public IPv6 autonomous system organization of the device assigned by the ISP, for example `Cloudflare Inc`.

## ISPIPv6Address

Type: `string`

The public IPv6 address of the device assigned by the ISP.

## ISPIPv6City

Type: `string`

The public IPv6 city name, in English, of the device assigned by the ISP, for example `San Francisco`.

## ISPIPv6CountryISO

Type: `string`

The public IPv6 country ISO code of the device assigned by the ISP, for example `US` for the United States.

## ISPIPv6Netmask

Type: `string`

The public IPv6 netmask of the device assigned by the ISP.

## ISPIPv6StateISO

Type: `string`

The public IPv6 state ISO code of the device assigned by the ISP, for example `CA` for California.

## ISPIPv6Zip

Type: `string`

The public IPv6 postal code of the device assigned by the ISP, for example `90001`.

## Mode

Type: `string`

The WARP client connection mode, for example, `warp+doh`, `proxy`.

## NetworkReceivedBPS

Type: `int`

The number of network bytes received per second.

## NetworkSSID

Type: `string`

The SSID of the network the device is connected to, max 32 characters.

## NetworkSentBPS

Type: `int`

The number of network bytes sent per second.

## RAMAvailableKB

Type: `int`

The total available RAM in kilobytes.

## RAMUsedPercentage

Type: `float`

RAM utilization, expressed as a fraction from 0 to 1.

## RAMUsedPercentageByApp

Type: `array[object]`

The top applications by percentage of RAM used, for example `[{"name": "app0", "percentage": 0.55}, {"name": "app1", "percentage": 0.45}]`.

## Status

Type: `string`

The WARP client connection status, for example, `connected`, `paused`.

## SwitchLocked

Type: `bool`

Whether the WARP client was configured to always be enabled.

## Timestamp

Type: `int or string`

Event timestamp.

## TunnelStatsDownstream

Type: `object`

WARP tunnel downstream statistics, focused on MASQUE tunnels, for example `{"rttUs": 5, "minRttUs": 1, "rttVarUs": 1, "packetsSent": 100, "packetsLost": 50, "packetsRetransmitted": 25, "bytesSent": 1000, "bytesLost": 500, "bytesRetransmitted": 250}`.

## TunnelStatsUpstream

Type: `object`

WARP tunnel upstream statistics, focused on MASQUE tunnels, for example `{"rttUs": 5, "minRttUs": 1, "rttVarUs": 1, "packetsSent": 100, "packetsLost": 50, "packetsRetransmitted": 25, "bytesSent": 1000, "bytesLost": 500, "bytesRetransmitted": 250}`.
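
Since both tunnel stats objects share the same shape, loss ratios fall out directly from the counters. A minimal sketch, assuming the field names shown in the examples above:

```python
# Derive loss ratios from a TunnelStatsDownstream/TunnelStatsUpstream object,
# using the field names shown in the examples above.
def tunnel_loss_ratios(stats: dict) -> dict:
    packets_sent = stats.get("packetsSent", 0)
    bytes_sent = stats.get("bytesSent", 0)
    return {
        "packet_loss": stats.get("packetsLost", 0) / packets_sent if packets_sent else 0.0,
        "byte_loss": stats.get("bytesLost", 0) / bytes_sent if bytes_sent else 0.0,
    }

example = {"rttUs": 5, "minRttUs": 1, "rttVarUs": 1, "packetsSent": 100,
           "packetsLost": 50, "packetsRetransmitted": 25, "bytesSent": 1000,
           "bytesLost": 500, "bytesRetransmitted": 250}
print(tunnel_loss_ratios(example))  # {'packet_loss': 0.5, 'byte_loss': 0.5}
```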

## TunnelType

Type: `string`

The tunnel type the device uses to establish a connection to the edge, if any. Can be `http2`, `masque`, or `wireguard`.

## WarpColoCode

Type: `string`

The colo code where the client is connected to our API, for example, `DFW` or `none`.

## WiFiStrengthDBM

Type: `int`

The WiFi strength in decibel-milliwatts (dBm), on a scale between -30 and -90.


---

---
title: DLP Forensic Copies
description: The descriptions below detail the fields available for dlp_forensic_copies.
image: https://developers.cloudflare.com/core-services-preview.png
---


# DLP Forensic Copies

The descriptions below detail the fields available for `dlp_forensic_copies`.

## AccountID

Type: `string`

Cloudflare account ID.

## Datetime

Type: `int or string`

The date and time the corresponding HTTP request was made.

## ForensicCopyID

Type: `string`

The unique ID for this particular forensic copy.

## GatewayRequestID

Type: `string`

Cloudflare request ID, as found in Gateway logs.

## Headers

Type: `object`

String key-value pairs for a selection of HTTP headers on the associated request/response.

## Payload

Type: `string`

Captured request/response data, base64-encoded.
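
Since the field is base64-encoded, it has to be decoded before the captured bytes can be inspected. A minimal sketch using only the Python standard library (the record value here is hypothetical):

```python
import base64

# Hypothetical forensic-copy record; only the base64-encoded Payload matters here.
record = {"Payload": base64.b64encode(b"POST /login HTTP/1.1\r\nHost: example.com").decode()}

raw = base64.b64decode(record["Payload"])
print(raw.split(b"\r\n")[0])  # b'POST /login HTTP/1.1'
```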

## Phase

Type: `string`

Phase of the HTTP request this forensic copy was captured from (that is, "request" or "response").

## TriggeredRuleID

Type: `string`

The ID of the Gateway firewall rule that triggered this forensic copy.


---

---
title: DNS Firewall Logs
description: The descriptions below detail the fields available for dns_firewall_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# DNS Firewall Logs

The descriptions below detail the fields available for `dns_firewall_logs`.

## ClientResponseCode

Type: `int`

Integer value of the response code Cloudflare presents to the client. Response code follows [IANA parameters ↗](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6).

## ClusterID

Type: `string`

The ID of the cluster which handled this request.

## ColoCode

Type: `string`

IATA airport code of the data center that received the request.

## EDNSSubnet

Type: `string`

IPv4 or IPv6 address information corresponding to the [EDNS Client Subnet (ECS)](https://developers.cloudflare.com/glossary/?term=ecs) forwarded by recursive resolvers. Not all resolvers send this information.

## EDNSSubnetLength

Type: `int`

Size of the [EDNS Client Subnet (ECS)](https://developers.cloudflare.com/glossary/?term=ecs) in bits. For example, if the last octet of an IPv4 address is omitted (`192.0.2.x.`), the subnet length will be 24.
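
Combining `EDNSSubnet` with `EDNSSubnetLength` reconstructs the client subnet as a network prefix. A minimal sketch with Python's standard `ipaddress` module:

```python
import ipaddress

# Rebuild the ECS prefix from the two log fields.
edns_subnet = "192.0.2.0"   # EDNSSubnet
edns_subnet_length = 24     # EDNSSubnetLength

network = ipaddress.ip_network(f"{edns_subnet}/{edns_subnet_length}", strict=False)
print(network)  # 192.0.2.0/24
```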

## QueryDO

Type: `bool`

Indicates if the client is capable of handling a signed response (DNSSEC answer OK).

## QueryName

Type: `string`

Name of the query that was sent.

## QueryRD

Type: `bool`

Indicates if the client requested recursive resolution (Recursion Desired).

## QuerySize

Type: `int`

The size of the query sent from the client in bytes.

## QueryTCP

Type: `bool`

Indicates if the query from the client was made via TCP (if false, then UDP).

## QueryType

Type: `int`

Integer value of query type. For more information refer to [Query type ↗](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4).

## ResponseCached

Type: `bool`

Whether the response was cached or not.

## ResponseCachedStale

Type: `bool`

Whether the response was cached stale. In other words, the TTL had expired and the upstream nameserver was not reachable.

## ResponseReason

Type: `string`

Short descriptions with more context around the final DNS Firewall response. Refer to [response reasons](https://developers.cloudflare.com/dns/dns-firewall/analytics/) for more information.

## SourceIP

Type: `string`

IP address of the client (IPv4 or IPv6).

## Timestamp

Type: `int or string`

Timestamp at which the query occurred.

## UpstreamIP

Type: `string`

IP of the upstream nameserver (IPv4 or IPv6).

## UpstreamResponseCode

Type: `int`

Integer value of the response code from the upstream nameserver. Response code follows [IANA parameters ↗](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6).

## UpstreamResponseTimeMs

Type: `int`

Upstream response time in milliseconds.


---

---
title: Email Security Alerts
description: The descriptions below detail the fields available for email_security_alerts.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Email Security Alerts

The descriptions below detail the fields available for `email_security_alerts`.

## AlertID

Type: `string`

The canonical ID for an Email Security Alert (for example, '4WtWkr6nlBz9sNH-2024-08-28T15:32:35').

## AlertReasons

Type: `array[string]`

Human-readable list of findings which contributed to this message's final disposition.

## Attachments

Type: `array[object]`

List of objects containing metadata of attachments contained in this message (for example, [{"Md5": "91f073bd208689ddbd248e8989ecae90", "Sha1": "62b77e14e2c43049c45b5725018e78d0f9986930", "Sha256": "3b57505305e7162141fd898ed87d08f92fc42579b5047495859e56b3275a6c06", "Ssdeep": "McAQ8tPlH25e85Q2OiYpD08NvHmjJ97UfPMO47sekO:uN9M553OiiN/OJ9MM+e3", "Name": "attachment.gif", "ContentTypeProvided": "image/gif", "ContentTypeComputed": "application/x-msi", "Encrypted": true, "Decrypted": true}, ...]).

## CC

Type: `array[string]`

Email address portions of the CC header provided by the sender (for example, 'firstlast@cloudflare.com').

## CCName

Type: `array[string]`

Name portions of the CC header provided by the sender (for example, 'First Last').

## FinalDisposition

Type: `string`

Final disposition attributed to the message.   
Possible values are _unset_ | _malicious_ | _suspicious_ | _spoof_ | _spam_ | _bulk_.

## From

Type: `string`

Email address portion of the From header provided by the sender (for example, 'firstlast@cloudflare.com').

## FromName

Type: `string`

Name portion of the From header provided by the sender (for example, 'First Last').

## Links

Type: `array[string]`

List of links detected in this message, benign or otherwise; limited to 100 in total.

## MessageDeliveryMode

Type: `string`

The message's mode of transport to Email Security.   
Possible values are _unset_ | _api_ | _direct_ | _bcc_ | _journal_ | _retroScan_.

## MessageID

Type: `string`

Value of the Message-ID header provided by the sender.

## Origin

Type: `string`

The origin of the message.   
Possible values are _unset_ | _internal_ | _external_ | _secondPartyInternal_ | _thirdPartyInternal_ | _outbound_.

## OriginalSender

Type: `string`

The original sender address as determined by Email Security mail processing (for example, 'firstlast@cloudflare.com').

## ReplyTo

Type: `string`

Email address portion of the Reply-To header provided by the sender (for example, 'firstlast@cloudflare.com').

## ReplyToName

Type: `string`

Name portion of the Reply-To header provided by the sender (for example, 'First Last').

## SMTPEnvelopeFrom

Type: `string`

Value of the SMTP MAIL FROM command provided by the sender (for example, 'First Last firstlast@cloudflare.com').

## SMTPEnvelopeTo

Type: `array[string]`

Values of the SMTP RCPT TO command provided by the sender (for example, 'First Last firstlast@cloudflare.com').

## SMTPHeloServerIP

Type: `string`

IPv4/v6 of the SMTP HELO server.

## SMTPHeloServerIPAsName

Type: `string`

Autonomous System Name of the SMTP HELO server's IP.

## SMTPHeloServerIPAsNumber

Type: `string`

Autonomous System Number of the SMTP HELO server's IP.

## SMTPHeloServerIPGeo

Type: `string`

SMTP HELO server geolocation info (for example, 'US/NV/Las Vegas').
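
The geolocation string packs country, region, and city into one slash-separated value. A minimal sketch, assuming the three-part format shown in the example (values with fewer segments would need guarding):

```python
# Split the slash-separated geo string, format 'US/NV/Las Vegas'.
geo = "US/NV/Las Vegas"
country, region, city = geo.split("/", 2)
print(country, region, city)  # US NV Las Vegas
```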

## SMTPHeloServerName

Type: `string`

Hostname provided by the SMTP HELO server.

## Subject

Type: `string`

Value of the Subject header provided by the sender.

## ThreatCategories

Type: `array[string]`

Threat categories attributed by Email Security processing (for example, 'CredentialHarvester', 'Dropper').

## Timestamp

Type: `int or string`

Start time of message processing (for example, '2024-08-28T15:32:35Z'). To specify the timestamp format, refer to [Output types](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/#output-types).

## To

Type: `array[string]`

Email address portions of the To header provided by the sender (for example, 'firstlast@cloudflare.com').

## ToName

Type: `array[string]`

Name portions of the To header provided by the sender (for example, 'First Last').


---

---
title: Gateway DNS
description: The descriptions below detail the fields available for gateway_dns.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Gateway DNS

The descriptions below detail the fields available for `gateway_dns`.

## AccountID

Type: `string`

Cloudflare account ID.

## ApplicationID

Type: `int`

ID of the application the domain belongs to (for example, 1, 2). Set to 0 when no ApplicationID is matched.

## ApplicationName

Type: `string`

Name of the application the domain belongs to (for example, 'Cloudflare Dashboard').

## AuthoritativeNameServerIPs

Type: `array[string]`

The IPs of the authoritative nameservers that provided the answers, if any (for example ['203.0.113.1', '203.0.113.2']).

## CNAMECategoryIDs

Type: `array[int]`

ID or IDs of the categories that the intermediate CNAME domains belong to (for example, [7,12,28,122,129,163]).

## CNAMECategoryNames

Type: `array[string]`

Name or names of the categories that the intermediate CNAME domains belong to (for example, ['Photography', 'Weather']).

## CNAMEs

Type: `array[string]`

Resolved intermediate CNAME domains (for example, ['alias.example.com']).

## CNAMEsReversed

Type: `array[string]`

Resolved intermediate CNAME domains in reverse (for example, ['com.example.alias']).

## ColoCode

Type: `string`

The name of the data center that received the DNS query (for example, 'SJC', 'MIA', 'IAD').

## ColoID

Type: `int`

The ID of the data center that received the DNS query (for example, 46, 72, 397).

## CustomResolveDurationMs

Type: `int`

The time it took for the custom resolver to respond.

## CustomResolverAddress

Type: `string`

IP address and port used to resolve the query via the custom DNS resolver, if any.

## CustomResolverPolicyID (deprecated)

Type: `string`

Custom resolver policy UUID, if matched. Deprecated by ResolverPolicyID.

## CustomResolverPolicyName (deprecated)

Type: `string`

Custom resolver policy name, if matched. Deprecated by ResolverPolicyName.

## CustomResolverResponse

Type: `string`

Status of the custom resolver response.

## Datetime

Type: `int or string`

The date and time the corresponding DNS request was made (for example, '2021-07-27T00:01:07Z').

## DeviceID

Type: `string`

UUID of the device where the HTTP request originated from (for example, 'dad71818-0429-11ec-a0dc-000000000000').

## DeviceName

Type: `string`

The name of the device where the HTTP request originated from (for example, 'Laptop MB810').

## DoHSubdomain

Type: `string`

The destination DoH subdomain the DNS query was made to.

## DoTSubdomain

Type: `string`

The destination DoT subdomain the DNS query was made to.

## DstIP

Type: `string`

The destination IP address the DNS query was made to (for example, '104.16.132.229').

## DstPort

Type: `int`

The destination port used at the edge. The port changes based on the protocol used by the DNS query (for example, 0).

## EDEErrors

Type: `array[int]`

List of returned Extended DNS Error Codes (for example, [2, 3]).

## Email

Type: `string`

Email used to authenticate the client (for example, 'user@test.com').

## InitialCategoryIDs

Type: `array[int]`

ID or IDs of the categories that the queried domains belong to (for example, [7,12,28,122,129,163]).

## InitialCategoryNames

Type: `array[string]`

Name or names of the categories that the queried domains belong to (for example, ['Photography', 'Weather']).

## InitialResolvedIPs

Type: `array[string]`

The IPs used to correlate existing FQDN matching policy between Gateway DNS and Gateway proxy.

## InternalDNSDurationMs

Type: `int`

The time it took for the internal DNS to respond.

## InternalDNSFallbackStrategy

Type: `string`

The fallback strategy applied over the internal DNS response. Empty if no fallback strategy was applied.

## InternalDNSRCode

Type: `int`

The return code sent back by the internal DNS service.

## InternalDNSViewID

Type: `string`

The DNS internal view identifier that was sent to the internal DNS service.

## InternalDNSZoneID

Type: `string`

The DNS zone identifier returned by the internal DNS service.

## IsResponseCached

Type: `bool`

Whether the response was served from cache.

## Location

Type: `string`

Name of the location the DNS request is coming from. Location is created by the customer (for example, 'Office NYC').

## LocationID

Type: `string`

UUID of the location the DNS request is coming from. Location is created by the customer (for example, '7bdc7a9c-81d3-4816-8e56-000000000000').

## MatchedCategoryIDs

Type: `array[int]`

ID or IDs of the categories that the policy matched for the domain (for example, [7,12,28,122,129,163]).

## MatchedCategoryNames

Type: `array[string]`

Name or names of the categories that the policy matched for the domain (for example, ['Photography', 'Weather']).

## MatchedIndicatorFeedIDs

Type: `array[int]`

ID or IDs of the indicator feed(s) that the policy matched for the domain (for example, [7,12]).

## MatchedIndicatorFeedNames

Type: `array[string]`

Name or names of the indicator feed(s) that the policy matched for the domain (for example, ['Vendor Malware Feed', 'Vendor CoC Feed']).

## Policy (deprecated)

Type: `string`

Name of the policy that was applied, if any (for example, '7bdc7a9c-81d3-4816-8e56-de1acad3dec5').

## PolicyID

Type: `string`

ID of the policy/rule that was applied (if any).

## PolicyName

Type: `string`

Name of the policy that was applied (if any).

## Protocol

Type: `string`

The protocol used for the DNS query by the client (for example, 'udp').

## QueryApplicationIDs

Type: `array[int]`

ID or IDs of the applications the queried domain belongs to (for example, [1, 51]).

## QueryApplicationNames

Type: `array[string]`

Name or names of the applications the queried domain belongs to (for example, ['Cloudflare Dashboard']).

## QueryCategoryIDs

Type: `array[int]`

Union of all category IDs: initial categories, resolved IP categories, and intermediate CNAME categories.

## QueryCategoryNames

Type: `array[string]`

Union of all category names: initial categories, resolved IP categories, and intermediate CNAME categories.

## QueryID

Type: `string`

Globally unique identifier of the query.

## QueryIndicatorFeedIDs

Type: `array[int]`

ID or IDs of indicator feed(s) that the domain belongs to (for example, [7,12,28]).

## QueryIndicatorFeedNames

Type: `array[string]`

Name or names of indicator feed(s) that the domain belongs to (for example, ['Vendor Malware Feed', 'Vendor CoC Feed', 'Vendor Phishing Feed']).

## QueryName

Type: `string`

The query name (for example, 'example.com'). Cloudflare will surface '.' for root server queries in your logs.

## QueryNameReversed

Type: `string`

Query name in reverse (for example, 'com.example'). Cloudflare will surface '.' for root server queries in your logs.
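
The reversed form is the domain's labels in reverse order, which makes sorting and grouping related hostnames by suffix straightforward. A minimal sketch of the transformation:

```python
# Reverse the dot-separated labels of a query name: 'example.com' -> 'com.example'.
def reverse_labels(query_name: str) -> str:
    return ".".join(reversed(query_name.split(".")))

print(reverse_labels("example.com"))  # com.example
```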

## QuerySize

Type: `int`

The size of the DNS request in bytes (for example, 151).

## QueryType

Type: `int`

The type of DNS query (for example, 1, 28, 15, or 16).

## QueryTypeName

Type: `string`

The type of DNS query (for example, 'A', 'AAAA', 'MX', or 'TXT').

## RCode

Type: `int`

The return code sent back by the DNS resolver.

## RData (deprecated)

Type: `array[object]`

The rdata objects (for example, [{"type":"5","data":"dns-packet-placeholder..."}]).

## RedirectTargetURI

Type: `string`

Custom URI to which the user was redirected, if any.

## RegistrationID

Type: `string`

The UUID of the device registration from which the HTTP request originated (for example, 'dad71818-0429-11ec-a0dc-000000000000').

## RequestContextCategoryIDs

Type: `array[int]`

ID or IDs of the category that was sent to Gateway in the EDNS request for filtering (for example, [7,12,28,122,129,163]).

## RequestContextCategoryNames

Type: `array[string]`

Name or names of the category that was sent to Gateway in the EDNS request for filtering (for example, ['Photography', 'Weather']).

## ResolvedIPCategoryIDs

Type: `array[int]`

ID or IDs of the categories that the IPs in the response belong to (for example, [7,12,28,122,129,163]).

## ResolvedIPCategoryNames

Type: `array[string]`

Name or names of the categories that the IPs in the response belong to (for example, ['Photography', 'Weather']).

## ResolvedIPContinentCodes

Type: `array[string]`

Continent code of each resolved IP, if any (for example ['NA', 'EU']).

## ResolvedIPCountryCodes

Type: `array[string]`

Country code of each resolved IP, if any (for example ['US', 'PT']).

## ResolvedIPs

Type: `array[string]`

The resolved IPs in the response, if any (for example ['203.0.113.1', '203.0.113.2']).

## ResolverDecision

Type: `string`

Result of the DNS query (for example, 'overrideForSafeSearch').

## ResolverPolicyID

Type: `string`

Resolver policy UUID, if any matched.

## ResolverPolicyName

Type: `string`

Resolver policy name, if any matched.

## ResourceRecords

Type: `array[object]`

The rdata objects (for example, [{"type":"5","data":"dns-packet-placeholder..."}]).

## ResourceRecordsJSON

Type: `string`

String that represents the JSON array with the returned resource records (for example, '[{"name": "example.com", "type": "CNAME", "class": "IN", "ttl": 3600, "rdata": "cname.example.com."}]').
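
Unlike `ResourceRecords`, this field is a JSON array serialized as a string, so it needs a second parse after the log line itself has been decoded. A minimal sketch:

```python
import json

# The field value is itself a JSON-encoded string.
resource_records_json = (
    '[{"name": "example.com", "type": "CNAME", "class": "IN",'
    ' "ttl": 3600, "rdata": "cname.example.com."}]'
)

for record in json.loads(resource_records_json):
    print(record["name"], record["type"], record["rdata"])
```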

## SrcIP

Type: `string`

The source IP address making the DNS query (for example, '104.16.132.229').

## SrcIPContinentCode

Type: `string`

Continent code of the source IP address making the DNS query (for example, 'NA').

## SrcIPCountryCode

Type: `string`

Country code of the source IP address making the DNS query (for example, 'US').

## SrcPort

Type: `int`

The port used by the client when they sent the DNS request (for example, 0).

## TimeZone

Type: `string`

Time zone used to calculate the current time, if a matched rule was scheduled with it.

## TimeZoneInferredMethod

Type: `string`

Method used to pick the time zone for the schedule (from rule, from user IP, or from local time).

## UserID

Type: `string`

User identity where the HTTP request originated from (for example, '00000000-0000-0000-0000-000000000000').


---

---
title: Gateway HTTP
description: The descriptions below detail the fields available for gateway_http.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Gateway HTTP

The descriptions below detail the fields available for `gateway_http`.

## AccountID

Type: `string`

Cloudflare account tag.

## Action

Type: `string`

Action performed by gateway on the HTTP request.

## AppControlInfo

Type: `object`

Information about application control operations, APIs, and groups that matched the HTTP request.

## ApplicationIDs

Type: `array[int]`

IDs of the applications that matched the HTTP request parameters.

## ApplicationNames

Type: `array[string]`

Names of the applications that matched the HTTP request parameters.

## ApplicationStatuses

Type: `array[string]`

Statuses of the applications that matched the HTTP request parameters.

## BlockedFileHash

Type: `string`

Hash of the file blocked in the response, if any.

## BlockedFileName

Type: `string`

File name blocked in the request, if any.

## BlockedFileReason

Type: `string`

Reason file was blocked in the response, if any.

## BlockedFileSize

Type: `int`

Size in bytes of the file blocked in the response, if any.

## BlockedFileType

Type: `string`

File type blocked in the response (for example, 'exe' or 'bin'), if any.

## CategoryIDs

Type: `array[int]`

IDs of the categories that matched the HTTP request parameters.

## CategoryNames

Type: `array[string]`

Names of the categories that matched the HTTP request parameters.

## Datetime

Type: `int or string`

The date and time the corresponding HTTP request was made.

## DestinationIP

Type: `string`

Destination IP of the request.

## DestinationIPContinentCode

Type: `string`

Continent code of the destination IP of the HTTP request (for example, 'NA').

## DestinationIPCountryCode

Type: `string`

Country code of the destination IP of the HTTP request (for example, 'US').

## DestinationPort

Type: `int`

Destination port of the request.

## DeviceID

Type: `string`

UUID of the device where the HTTP request originated from.

## DeviceName

Type: `string`

The name of the device where the HTTP request originated from (for example, 'Laptop MB810').

## DownloadMatchedDlpProfileEntries

Type: `array[string]`

List of matched DLP entries in the HTTP request.

## DownloadMatchedDlpProfiles

Type: `array[string]`

List of matched DLP profiles in the HTTP request.

## DownloadedFileNames

Type: `array[string]`

List of files downloaded in the HTTP request.

## Email

Type: `string`

Email used to authenticate the client.

## FileInfo

Type: `object`

Information about files detected within the HTTP request.

## ForensicCopyStatus

Type: `string`

Status of any associated forensic copies that may have been captured during the request.

## HTTPHost

Type: `string`

Content of the host header in the HTTP request.

## HTTPMethod

Type: `string`

HTTP request method.

## HTTPStatusCode

Type: `int`

HTTP status code gateway returned to the user. Zero if nothing was returned (for example, client disconnected).

## HTTPVersion

Type: `string`

Version name for the HTTP request.

## IsIsolated

Type: `bool`

Whether the request was isolated with Cloudflare Browser Isolation.

## PolicyID

Type: `string`

The gateway policy UUID applied to the request, if any.

## PolicyName

Type: `string`

The name of the gateway policy applied to the request, if any.

## PrivateAppAUD

Type: `string`

The private app AUD, if any.

## ProxyEndpoint

Type: `string`

The proxy endpoint used on the HTTP request, if any.

## Quarantined

Type: `bool`

If the request content was quarantined.

## RedirectTargetURI

Type: `string`

Custom URI to which the user was redirected, if any.

## Referer

Type: `string`

Contents of the referer header in the HTTP request.

## RegistrationID

Type: `string`

The UUID of the device registration from which the HTTP request originated.

## RequestID

Type: `string`

Cloudflare request ID. This might be empty on bypass action.

## SessionID

Type: `string`

Network session ID.

## SourceIP

Type: `string`

Source IP of the request.

## SourceIPContinentCode

Type: `string`

Continent code of the source IP of the request (for example, 'NA').

## SourceIPCountryCode

Type: `string`

Country code of the source IP of the request (for example, 'US').

## SourceInternalIP

Type: `string`

Local LAN IP of the device. Only available when connected via a GRE/IPsec tunnel on-ramp.

## SourcePort

Type: `int`

Source port of the request.

## URL

Type: `string`

HTTP request URL.

## UntrustedCertificateAction

Type: `string`

Action taken when an untrusted origin certificate error occurs (for example, expired certificate, mismatched common name, invalid certificate chain, signed by non-public CA). One of _none_ | _block_ | _error_ | _passThrough_.

## UploadMatchedDlpProfileEntries

Type: `array[string]`

List of matched DLP entries in the HTTP request.

## UploadMatchedDlpProfiles

Type: `array[string]`

List of matched DLP profiles in the HTTP request.

## UploadedFileNames

Type: `array[string]`

List of files uploaded in the HTTP request.

## UserAgent

Type: `string`

Contents of the user agent header in the HTTP request.

## UserID

Type: `string`

User identity where the HTTP request originated from.

## VirtualNetworkID

Type: `string`

The identifier of the virtual network the device was connected to, if any.

## VirtualNetworkName

Type: `string`

The name of the virtual network the device was connected to, if any.


---

---
title: Gateway Network
description: The descriptions below detail the fields available for gateway_network.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Gateway Network

The descriptions below detail the fields available for `gateway_network`.

## AccountID

Type: `string`

Cloudflare account tag.

## Action

Type: `string`

Action performed by gateway on the session.

## ApplicationIDs

Type: `array[int]`

IDs of the applications that matched the session parameters.

## ApplicationNames

Type: `array[string]`

Names of the applications that matched the session parameters.

## CategoryIDs

Type: `array[int]`

IDs of the categories that matched the session parameters.

## CategoryNames

Type: `array[string]`

Names of the categories that matched the session parameters.

## Datetime

Type: `int or string`

The date and time the corresponding network session was established (for example, '2021-07-27T00:01:07Z').

## DestinationIP

Type: `string`

Destination IP of the network session.

## DestinationIPContinentCode

Type: `string`

Continent code of the destination IP of the network session (for example, 'NA').

## DestinationIPCountryCode

Type: `string`

Country code of the destination IP of the network session (for example, 'US').

## DestinationPort

Type: `int`

Destination port of the network session.

## DetectedProtocol

Type: `string`

Detected traffic protocol of the network session.

## DeviceID

Type: `string`

UUID of the device where the network session originated from.

## DeviceName

Type: `string`

The name of the device where the network session originated from (for example, 'Laptop MB810').

## Email

Type: `string`

Email associated with the user identity where the network session originated from.

## OverrideIP

Type: `string`

Overridden IP of the network session, if any.

## OverridePort

Type: `int`

Overridden port of the network session, if any.

## PolicyID

Type: `string`

Identifier of the policy/rule that was applied, if any.

## PolicyName

Type: `string`

The name of the gateway policy applied to the session, if any.

## ProxyEndpoint

Type: `string`

The proxy endpoint used on this network session, if any.

## RegistrationID

Type: `string`

The UUID of the device registration from which the network session originated.

## SNI

Type: `string`

Content of the SNI for the TLS network session, if any.

## SessionID

Type: `string`

The session identifier of this network session.

## SourceIP

Type: `string`

Source IP of the network session.

## SourceIPContinentCode

Type: `string`

Continent code of the source IP of the network session (for example, 'NA').

## SourceIPCountryCode

Type: `string`

Country code of the source IP of the network session (for example, 'US').

## SourceInternalIP

Type: `string`

Local LAN IP of the device. Only available when connected via a GRE/IPsec tunnel on-ramp.

## SourcePort

Type: `int`

Source port of the network session.

## Transport (deprecated)

Type: `string`

Transport protocol used for this session.   
Possible values are _tcp_ | _quic_ | _udp_. Deprecated, please use TransportProtocol instead.

## TransportProtocol

Type: `string`

Transport protocol used for this session.   
Possible values are _tcp_ | _quic_ | _udp_.

## UserID

Type: `string`

User identity where the network session originated from.

## VirtualNetworkID

Type: `string`

The identifier of the virtual network the device was connected to, if any.

## VirtualNetworkName

Type: `string`

The name of the virtual network the device was connected to, if any.


---

---
title: IPSec Logs
description: The descriptions below detail the fields available for ipsec_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# IPSec Logs

The descriptions below detail the fields available for `ipsec_logs`.

## Level

Type: `string`

The level of the log.

## LocalIP

Type: `string`

The local IP address associated with the log.

## LocalPort

Type: `int`

The local port associated with the log.

## Message

Type: `string`

The log message. IKEv2 ciphersuite is logged here for handshake messages.

## RemoteIP

Type: `string`

The remote IP address associated with the log.

## RemotePort

Type: `int`

The remote port associated with the log.

## Timestamp

Type: `int or string`

Timestamp at which the log occurred.


---

---
title: Magic IDS Detections
description: The descriptions below detail the fields available for magic_ids_detections.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Magic IDS Detections

The descriptions below detail the fields available for `magic_ids_detections`.

## Action

Type: `string`

What action was taken on the packet. Possible values are _pass_ | _block_.

## ColoCity

Type: `string`

The city where the detection occurred.

## ColoCode

Type: `string`

The IATA airport code corresponding to where the detection occurred.

## DestinationIP

Type: `string`

The destination IP of the packet which triggered the detection.

## DestinationPort

Type: `int`

The destination port of the packet which triggered the detection. It is set to 0 if the protocol field is set to _any_.

## Protocol

Type: `string`

The layer 4 protocol of the packet which triggered the detection. Possible values are _tcp_ | _udp_ | _any_. Variant _any_ means a detection occurred at a lower layer (such as IP).

## SignatureID

Type: `int`

The signature ID of the detection.

## SignatureMessage

Type: `string`

The signature message of the detection. Describes what the packet is attempting to do.

## SignatureRevision

Type: `int`

The signature revision of the detection.

## SourceIP

Type: `string`

The source IP of the packet which triggered the detection.

## SourcePort

Type: `int`

The source port of the packet which triggered the detection. It is set to 0 if the protocol field is set to _any_.

## Timestamp

Type: `int or string`

A timestamp of when the detection occurred.


---

---
title: MCP Portal Logs
description: The descriptions below detail the fields available for mcp_portal_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# MCP Portal Logs

The descriptions below detail the fields available for `mcp_portal_logs`.

## ClientCountry

Type: `string`

Country code of the client IP address.

## ClientIP

Type: `string`

IP address of the client that initiated the request.

## ColoCode

Type: `string`

Colo code of the data center that processed the request (for example, 'DFW').

## Datetime

Type: `int or string`

The date and time the request was made.

## Error

Type: `string`

The error message, if the request failed and additional information is available.

## Method

Type: `string`

The JSON-RPC method of the request (for example, 'tools/call', 'prompts/get', 'resources/read').

## PortalAUD

Type: `string`

Audience tag of the MCP Portal.

## PortalID

Type: `string`

Unique identifier of the MCP Portal.

## PromptGetName

Type: `string`

For prompts/get requests, the name of the prompt being fetched.

## ResourceReadURI

Type: `string`

For resources/read requests, the URI of the resource being fetched.

## ServerAUD

Type: `string`

Audience tag of the upstream MCP Server.

## ServerID

Type: `string`

Unique identifier of the upstream MCP Server.

## ServerResponseDurationMs

Type: `int`

The time in milliseconds it took for the upstream MCP server to respond.

## ServerURL

Type: `string`

URL of the upstream MCP Server.

## SessionID

Type: `string`

Unique identifier of the stateful MCP session associated with the request.

## Success

Type: `bool`

If the request succeeded.

## ToolCallName

Type: `string`

For tools/call requests, the name of the tool being called.

## UserEmail

Type: `string`

Email address of the authenticated user who performed the request.

## UserID

Type: `string`

Unique identifier of the authenticated user who performed the request.


---

---
title: Network Analytics Logs
description: The descriptions below detail the fields available for network_analytics_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Network Analytics Logs

The descriptions below detail the fields available for `network_analytics_logs`.

## AttackCampaignID

Type: `string`

Unique identifier of the attack campaign that this packet was a part of, if any.

## AttackID

Type: `string`

Unique identifier of the mitigation that matched the packet, if any.

## AttackVector

Type: `string`

Descriptive name of the type of attack that this packet was a part of, if any. Only for packets matching rules contained within the Cloudflare L3/4 managed ruleset.

## ColoCity

Type: `string`

The city where the Cloudflare data center that received the packet is located.

## ColoCode

Type: `string`

The Cloudflare data center that received the packet (nearest IATA airport code).

## ColoCountry

Type: `string`

The country where the Cloudflare data center that received the packet is located (ISO 3166-1 alpha-2).

## ColoGeoHash

Type: `string`

The latitude and longitude where the Cloudflare data center that received the packet is located (Geohash encoding).

## ColoName

Type: `string`

The unique site identifier of the Cloudflare data center that received the packet (for example, 'ams01', 'sjc01', 'lhr01').

## DNSQueryName

Type: `string`

The DNS query name (domain) that was queried, if the packet is a DNS query.

## DNSQueryType

Type: `string`

The DNS query type (for example, A, AAAA, MX, TXT), if the packet is a DNS query.

## Datetime

Type: `int or string`

The date and time the event occurred at the edge.

## DestinationASN

Type: `int`

The ASN associated with the destination IP of the packet.

## DestinationASNName

Type: `string`

The name of the ASN associated with the destination IP of the packet.

## DestinationCountry

Type: `string`

The country where the destination IP of the packet is located (ISO 3166-1 alpha-2).

## DestinationGeoHash

Type: `string`

The latitude and longitude where the destination IP of the packet is located (Geohash encoding).

## DestinationPort

Type: `int`

Value of the Destination Port header field in the TCP or UDP packet.

## Direction

Type: `string`

The direction in relation to customer network.   
Possible values are _ingress_ | _egress_.

## GREChecksum

Type: `int`

Value of the Checksum header field in the GRE packet.

## GREEtherType

Type: `int`

Value of the EtherType header field in the GRE packet.

## GREHeaderLength

Type: `int`

Length of the GRE packet header, in bytes.

## GREKey

Type: `int`

Value of the Key header field in the GRE packet.

## GRESequenceNumber

Type: `int`

Value of the Sequence Number header field in the GRE packet.

## GREVersion

Type: `int`

Value of the Version header field in the GRE packet.

## ICMPChecksum

Type: `int`

Value of the Checksum header field in the ICMP packet.

## ICMPCode

Type: `int`

Value of the Code header field in the ICMP packet.

## ICMPType

Type: `int`

Value of the Type header field in the ICMP packet.

## IPDestinationAddress

Type: `string`

Value of the Destination Address header field in the IPv4 or IPv6 packet.

## IPDestinationSubnet

Type: `string`

Computed subnet of the Destination Address header field in the IPv4 or IPv6 packet (/24 for IPv4; /64 for IPv6).

## IPFragmentOffset

Type: `int`

Value of the Fragment Offset header field in the IPv4 or IPv6 packet.

## IPHeaderLength

Type: `int`

Length of the IPv4 or IPv6 packet header, in bytes.

## IPMoreFragments

Type: `int`

Value of the More Fragments header field in the IPv4 or IPv6 packet.

## IPProtocol

Type: `int`

Value of the Protocol header field in the IPv4 or IPv6 packet.

## IPProtocolName

Type: `string`

Name of the protocol specified by the Protocol header field in the IPv4 or IPv6 packet.

## IPSourceAddress

Type: `string`

Value of the Source Address header field in the IPv4 or IPv6 packet.

## IPSourceSubnet

Type: `string`

Computed subnet of the Source Address header field in the IPv4 or IPv6 packet (/24 for IPv4; /64 for IPv6).

## IPTTL

Type: `int`

Value of the TTL header field in the IPv4 packet or the Hop Limit header field in the IPv6 packet.

## IPTTLBuckets

Type: `int`

Value of the TTL header field in the IPv4 packet or the Hop Limit header field in the IPv6 packet, with the last digit truncated.

## IPTotalLength

Type: `int`

Total length of the IPv4 or IPv6 packet, in bytes.

## IPTotalLengthBuckets

Type: `int`

Total length of the IPv4 or IPv6 packet, in bytes, with the last two digits truncated.
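
Both bucket fields are decimal truncations of their source fields. A minimal sketch of one plausible reading, where the dropped digits are zeroed (rounded down):

```python
# Decimal truncation assumed for the bucketed fields (dropped digits zeroed).
def ttl_bucket(ttl: int) -> int:
    return (ttl // 10) * 10            # IPTTLBuckets: last digit truncated

def total_length_bucket(length: int) -> int:
    return (length // 100) * 100       # IPTotalLengthBuckets: last two digits truncated

print(ttl_bucket(57))             # 50
print(total_length_bucket(1434))  # 1400
```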

## IPv4Checksum

Type: `int`

Value of the Checksum header field in the IPv4 packet.

## IPv4DSCP

Type: `int`

Value of the Differentiated Services Code Point header field in the IPv4 packet.

## IPv4DontFragment

Type: `int`

Value of the Don't Fragment header field in the IPv4 packet.

## IPv4ECN

Type: `int`

Value of the Explicit Congestion Notification header field in the IPv4 packet.

## IPv4Identification

Type: `int`

Value of the Identification header field in the IPv4 packet.

## IPv4Options

Type: `string`

List of Options numbers included in the IPv4 packet header.

## IPv6DSCP

Type: `int`

Value of the Differentiated Services Code Point header field in the IPv6 packet.

## IPv6ECN

Type: `int`

Value of the Explicit Congestion Notification header field in the IPv6 packet.

## IPv6ExtensionHeaders

Type: `string`

List of Extension Header numbers included in the IPv6 packet header.

## IPv6FlowLabel

Type: `int`

Value of the Flow Label header field in the IPv6 packet.

## IPv6Identification

Type: `int`

Value of the Identification extension header field in the IPv6 packet.

## MitigationReason

Type: `string`

Reason for applying a mitigation to the packet, if any.   
Possible values are _BLOCKED_ | _RATE\_LIMITED_ | _UNEXPECTED_ | _CHALLENGE\_NEEDED_ | _CHALLENGE\_PASSED_ | _NOT\_FOUND_ | _OUT\_OF\_SEQUENCE_ | _ALREADY\_CLOSED_.

## MitigationScope

Type: `string`

Whether the packet matched a local or global mitigation, if any.   
Possible values are _local_ | _global_.

## MitigationSystem

Type: `string`

Which Cloudflare system sampled the packet.   
Possible values are _dosd_ | _flowtrackd_ | _magic-firewall_.

## Outcome

Type: `string`

The action that Cloudflare systems took on the packet.   
Possible values are _pass_ | _drop_.

## PFPCustomTag

Type: `int`

The custom network analytics tag set by the Programmable Flow Protection program, if any.

## ProtocolState

Type: `string`

State of the packet in the context of the protocol, if any.   
Possible values are _OPEN_ | _NEW_ | _CLOSING_ | _CLOSED_.

## RuleID

Type: `string`

Unique identifier of the rule contained within the Cloudflare L3/4 managed ruleset that this packet matched, if any.

## RuleName

Type: `string`

Human-readable name of the rule contained within the Cloudflare L3/4 managed ruleset that this packet matched, if any.

## RulesetID

Type: `string`

Unique identifier of the Cloudflare L3/4 managed ruleset containing the rule that this packet matched, if any.   
Possible values are _3b64149bfa6e4220bbbc2bd6db589552_.

## RulesetOverrideID

Type: `string`

Unique identifier of the rule within the account's root `ddos_l4` phase ruleset which resulted in an override of the default sensitivity or action being applied/evaluated, if any.

## SampleInterval

Type: `int`

The sample interval is the inverse of the sample rate. For example, a sample interval of 1000 means that this packet was randomly sampled from 1 in 1000 packets. Sample rates are dynamic and based on the volume of traffic.
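
Because each logged packet stands in for `SampleInterval` real packets, an estimate of total volume weights each sample by its own interval. A minimal sketch, assuming records already parsed into dicts:

```python
# Each sampled record represents SampleInterval real packets; intervals can
# vary per record, so weight each sample individually.
records = [
    {"SampleInterval": 1000},
    {"SampleInterval": 1000},
    {"SampleInterval": 100},
]

estimated_total_packets = sum(r["SampleInterval"] for r in records)
print(estimated_total_packets)  # 2100
```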

## SourceASN

Type: `int`

The ASN associated with the source IP of the packet.

## SourceASNName

Type: `string`

The name of the ASN associated with the source IP of the packet.

## SourceCountry

Type: `string`

The country where the source IP of the packet is located (ISO 3166-1 alpha-2).

## SourceGeoHash

Type: `string`

The latitude and longitude where the source IP of the packet is located (Geohash encoding).

## SourcePort

Type: `int`

Value of the Source Port header field in the TCP or UDP packet.

## TCPAcknowledgementNumber

Type: `int`

Value of the Acknowledgement Number header field in the TCP packet.

## TCPChecksum

Type: `int`

Value of the Checksum header field in the TCP packet.

## TCPDataOffset

Type: `int`

Value of the Data Offset header field in the TCP packet.

## TCPFlags

Type: `int`

Value of the Flags header field in the TCP packet.

## TCPFlagsString

Type: `string`

Human-readable string representation of the Flags header field in the TCP packet.
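
Since `TCPFlags` is the raw flags byte, it can also be decoded with the standard TCP flag bit positions; `TCPFlagsString` provides the same information precomputed. A small illustrative decoder:

```ts
// Decode the raw TCPFlags byte using the standard TCP flag bit positions.
// This mirrors what TCPFlagsString already provides in human-readable form.
const TCP_FLAG_BITS: Array<[number, string]> = [
  [0x01, "FIN"], [0x02, "SYN"], [0x04, "RST"], [0x08, "PSH"],
  [0x10, "ACK"], [0x20, "URG"], [0x40, "ECE"], [0x80, "CWR"],
];

function decodeTcpFlags(flags: number): string[] {
  return TCP_FLAG_BITS.filter(([bit]) => (flags & bit) !== 0).map(([, name]) => name);
}

// decodeTcpFlags(0x12) -> ["SYN", "ACK"]
```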

## TCPMSS

Type: `int`

Value of the MSS option header field in the TCP packet.

## TCPOptions

Type: `string`

List of Options numbers included in the TCP packet header.

## TCPSACKBlocks

Type: `string`

List of the SACK Blocks option header in the TCP packet.

## TCPSACKPermitted

Type: `int`

Value of the SACK Permitted option header in the TCP packet.

## TCPSequenceNumber

Type: `int`

Value of the Sequence Number header field in the TCP packet.

## TCPTimestampECR

Type: `int`

Value of the Timestamp Echo Reply option header in the TCP packet.

## TCPTimestampValue

Type: `int`

Value of the Timestamp option header in the TCP packet.

## TCPUrgentPointer

Type: `int`

Value of the Urgent Pointer header field in the TCP packet.

## TCPWindowScale

Type: `int`

Value of the Window Scale option header in the TCP packet.

## TCPWindowSize

Type: `int`

Value of the Window Size header field in the TCP packet.

## UDPChecksum

Type: `int`

Value of the Checksum header field in the UDP packet.

## UDPPayloadLength

Type: `int`

Value of the Payload Length header field in the UDP packet.

## Verdict

Type: `string`

The action that Cloudflare systems think should be taken on the packet.   
Possible values are _pass_ | _drop_.


---

---
title: Sinkhole HTTP Logs
description: The descriptions below detail the fields available for sinkhole_http_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Sinkhole HTTP Logs

The descriptions below detail the fields available for `sinkhole_http_logs`.

## AccountID

Type: `string`

The Account ID.

## Body

Type: `string`

The request body.

## BodyLength

Type: `int`

The length of the request body.

## DestAddr

Type: `string`

The destination IP address of the request.

## Headers

Type: `string`

The request headers. If a header has multiple values, the values are comma separated. Each header is separated by the escaped newline character (\\n).
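
Given that encoding, a parser can split on newlines and then on the first colon. A sketch (the field name comes from this dataset; the helper itself is illustrative):

```ts
// Parse the sinkhole Headers string: headers are separated by "\n" and
// multiple values for one header are comma separated, per the description above.
function parseSinkholeHeaders(raw: string): Map<string, string[]> {
  const headers = new Map<string, string[]>();
  for (const line of raw.split("\n")) {
    if (!line) continue;
    const idx = line.indexOf(":");
    if (idx === -1) continue; // skip malformed lines
    const name = line.slice(0, idx).trim();
    const values = line.slice(idx + 1).split(",").map((v) => v.trim());
    headers.set(name, values);
  }
  return headers;
}
```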

## Host

Type: `string`

The host the request was sent to.

## Method

Type: `string`

The request method.

## Password

Type: `string`

The request password.

## R2Path

Type: `string`

The path to the object within the R2 bucket linked to this sinkhole that stores overflow body and header data. Blank if neither headers nor body was larger than 256 bytes.
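
If the overflow data needs to be retrieved, a Worker with an R2 binding to the linked bucket could fetch the object at this path. A minimal sketch, assuming a hypothetical binding name `SINKHOLE_BUCKET`:

```ts
// Minimal Worker sketch: fetch overflow header/body data from the R2 bucket
// linked to the sinkhole, using an R2Path value from a log record.
// `SINKHOLE_BUCKET` is a hypothetical R2 binding name.
interface Env {
  SINKHOLE_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const r2Path = new URL(request.url).searchParams.get("path");
    if (!r2Path) return new Response("missing ?path=", { status: 400 });
    const object = await env.SINKHOLE_BUCKET.get(r2Path);
    if (!object) return new Response("not found", { status: 404 });
    return new Response(object.body);
  },
};
```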

## Referrer

Type: `string`

The referrer of the request.

## SinkholeID

Type: `string`

The ID of the Sinkhole that logged the HTTP Request.

## SrcAddr

Type: `string`

The sender's IP address.

## Timestamp

Type: `int or string`

The date and time the sinkhole HTTP request was logged.

## URI

Type: `string`

The request Uniform Resource Identifier.

## URL

Type: `string`

The request Uniform Resource Locator.

## UserAgent

Type: `string`

The request user agent.

## Username

Type: `string`

The request username.


---

---
title: SSH Logs
description: The descriptions below detail the fields available for ssh_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---

# SSH Logs

The descriptions below detail the fields available for `ssh_logs`.

## AccountID

Type: `string`

Cloudflare account ID.

## ClientAddress

Type: `string`

The source address of the SSH command.

## Datetime

Type: `int or string`

The timestamp in UTC of when this message was sent.

## Error

Type: `string`

An SSH error. Only used if an error has occurred.

## PTY

Type: `string`

This is used by certain program types to synchronize local and remote SSH terminal state.

## Payload

Type: `string`

The captured request/response data, in asciicast v2 format. This includes the command associated with the 'exec' program type.

## ProgramFinishDatetime

Type: `int or string`

The timestamp in UTC of the SSH program termination. This is empty until the program ends.

## ProgramID

Type: `string`

The SSH program ID. A single SSH session can have multiple programs running.

## ProgramStartDatetime

Type: `int or string`

The timestamp in UTC of the SSH program creation.

## ProgramType

Type: `string`

The SSH program being run. The options are 'shell' (opens an interactive terminal), 'exec' (executes a single specified command), 'x11' (an interactive graphical environment), 'direct-tcpip' (direct tunneling), and 'forwarded-tcpip' (reverse tunneling).

## ServerAddress

Type: `string`

The destination address for the SSH session.

## SessionFinishDatetime

Type: `int or string`

The timestamp in UTC of the SSH session termination. This is empty until the session ends.

## SessionID

Type: `string`

SSH session ID.

## SessionStartDatetime

Type: `int or string`

The timestamp in UTC of the SSH session creation.

## TargetID

Type: `string`

The identifier of the target being accessed.

## UserEmail

Type: `string`

User email address.

## UserID

Type: `string`

Cloudflare user ID.

## Username

Type: `string`

The principal user being accessed on the SSH server's machine. This will be empty if an error was thrown when establishing the connection.


---

---
title: WARP Config Changes
description: The descriptions below detail the fields available for warp_config_changes.
image: https://developers.cloudflare.com/core-services-preview.png
---

# WARP Config Changes

The descriptions below detail the fields available for `warp_config_changes`.

## AccountIDFrom

Type: `string`

The Cloudflare account ID the user switched from.

## AccountIDTo

Type: `string`

The Cloudflare account ID the user switched to.

## AccountNameFrom

Type: `string`

The name of the account the user switched from.

## AccountNameTo

Type: `string`

The name of the account the user switched to.

## ConfigNameFrom

Type: `string`

The name of the config the user switched from.

## ConfigNameTo

Type: `string`

The name of the config the user switched to.

## DeviceID

Type: `string`

Physical device ID.

## DeviceRegistrationID

Type: `string`

Device registration ID.

## Hostname

Type: `string`

The device hostname.

## SerialNumber

Type: `string`

The device serial number.

## Timestamp

Type: `int or string`

Time the event was ingested.

## UserEmail

Type: `string`

The Access user email.


---

---
title: WARP Toggle Changes
description: The descriptions below detail the fields available for warp_toggle_changes.
image: https://developers.cloudflare.com/core-services-preview.png
---

# WARP Toggle Changes

The descriptions below detail the fields available for `warp_toggle_changes`.

## AccountID

Type: `string`

The Cloudflare account ID when the toggle happened.

## AccountName

Type: `string`

The account name when the toggle happened.

## DeviceID

Type: `string`

Physical device ID.

## DeviceRegistrationID

Type: `string`

Device registration ID.

## Hostname

Type: `string`

The device hostname.

## SerialNumber

Type: `string`

The device serial number.

## Timestamp

Type: `int or string`

Time the event was ingested.

## Toggled

Type: `bool`

Indicates whether the device was toggled or not.

## UserEmail

Type: `string`

The Access user email.


---

---
title: Workers Trace Events
description: The descriptions below detail the fields available for workers_trace_events.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Workers Trace Events

The descriptions below detail the fields available for `workers_trace_events`.

## CPUTimeMs

Type: `int`

The amount of CPU time used by the Worker script, in milliseconds.

## DispatchNamespace

Type: `string`

The Cloudflare Worker dispatch namespace.

## Entrypoint

Type: `string`

The name of the entrypoint class in which the Worker began execution.

## Event

Type: `object`

Details about the source event.

## EventTimestampMs

Type: `int`

The timestamp of when the event was received, in milliseconds.

## EventType

Type: `string`

The event type that triggered the invocation.   
Possible values are _fetch_.

## Exceptions

Type: `array[object]`

List of uncaught exceptions during the invocation.

## Logs

Type: `array[object]`

List of console messages emitted during the invocation.

## Outcome

Type: `string`

The outcome of the Worker script invocation.   
Possible values are _ok_ | _exception_.

## ScriptName

Type: `string`

The Cloudflare Worker script name.

## ScriptTags

Type: `array[string]`

A list of user-defined tags used to categorize the Worker.

## ScriptVersion

Type: `object`

The version of the script that was invoked.

## WallTimeMs

Type: `int`

The elapsed time in milliseconds between the start of a Worker invocation, and when the Workers Runtime determines that no more JavaScript needs to run. Specifically, this measures the wall-clock time that the JavaScript context remained open. For example, when returning a response with a large body, the Workers runtime can, in some cases, determine that no more JavaScript needs to run, and closes the JS context before all the bytes have passed through and been sent. Alternatively, if you use the `waitUntil()` API to perform work without blocking the return of a response, this work may continue executing after the response has been returned, and will be included in `WallTimeMs`.
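
To see how `waitUntil()` extends this measurement, consider a minimal Worker sketch: the response returns immediately, but the deferred task keeps the JavaScript context open, so its duration is counted in `WallTimeMs`. The logging URL is illustrative:

```ts
// Minimal Worker sketch: the response is returned right away, but the
// waitUntil() task keeps the JS context open, so its duration is included
// in WallTimeMs for this invocation.
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    ctx.waitUntil(
      // Deferred work: continues running after the response has been sent.
      fetch("https://example.com/collect", {
        method: "POST",
        body: JSON.stringify({ url: request.url }),
      }),
    );
    return new Response("ok"); // the client receives this immediately
  },
};
```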


---

---
title: Zero Trust Network Session Logs
description: The descriptions below detail the fields available for zero_trust_network_sessions.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Zero Trust Network Session Logs

The descriptions below detail the fields available for `zero_trust_network_sessions`.

## AccountID

Type: `string`

Cloudflare account ID.

## BytesReceived

Type: `int`

The number of bytes sent from the origin to the client during the network session.

## BytesSent

Type: `int`

The number of bytes sent from the client to the origin during the network session.

## ClientTCPHandshakeDurationMs

Type: `int`

Duration of handshaking the TCP connection between the client and Cloudflare in milliseconds.

## ClientTLSCipher

Type: `string`

TLS cipher suite used in the connection between the client and Cloudflare.

## ClientTLSHandshakeDurationMs

Type: `int`

Duration of handshaking the TLS connection between the client and Cloudflare in milliseconds.

## ClientTLSVersion

Type: `string`

TLS protocol version used in the connection between the client and Cloudflare.

## ConnectionCloseReason

Type: `string`

The reason for closing the connection, only applicable for TCP.   
Possible values are _CLIENT\_CLOSED_ | _CLIENT\_IDLE\_TIMEOUT_ | _CLIENT\_TLS\_ERROR_ | _CLIENT\_ERROR_ | _ORIGIN\_CLOSED_ | _ORIGIN\_TLS\_ERROR_ | _ORIGIN\_ERROR_ | _ORIGIN\_UNREACHABLE_ | _ORIGIN\_UNROUTABLE_ | _PROXY\_CONN\_REFUSED_ | _UNKNOWN_ | _MISMATCHED\_IP\_VERSIONS_ | _TOO\_MANY\_ACTIVE\_SESSIONS\_FOR\_ACCOUNT_ | _TOO\_MANY\_ACTIVE\_SESSIONS\_FOR\_USER_ | _TOO\_MANY\_NEW\_SESSIONS\_FOR\_ACCOUNT_ | _TOO\_MANY\_NEW\_SESSIONS\_FOR\_USER_.

## ConnectionReuse

Type: `bool`

Whether the TCP connection was reused for multiple HTTP requests.

## DestinationTunnelID

Type: `string`

Identifier of the Cloudflare One connector to which the network session was routed, if any (for example, Cloudflare Tunnel or a WARP device).

## DetectedProtocol

Type: `string`

Detected traffic protocol of the network session.

## DeviceID

Type: `string`

Identifier of the client device which initiated the network session, if applicable (for example, WARP Device ID).

## DeviceName

Type: `string`

Name of the client device which initiated the network session, if applicable (for example, WARP Device ID).

## EgressColoName

Type: `string`

The name of the Cloudflare data center from which traffic egressed to the origin.

## EgressIP

Type: `string`

Source IP used when egressing traffic from Cloudflare to the origin.

## EgressPort

Type: `int`

Source port used when egressing traffic from Cloudflare to the origin.

## EgressRuleID

Type: `string`

Identifier of the egress rule that was applied by the Secure Web Gateway, if any.

## EgressRuleName

Type: `string`

The name of the egress rule that was applied by the Secure Web Gateway, if any.

## Email

Type: `string`

Email address associated with the user identity which initiated the network session.

## IngressColoName

Type: `string`

The name of the Cloudflare data center to which traffic ingressed.

## InitialOriginIP

Type: `string`

The IP used to correlate existing FQDN matching policy between Gateway DNS and Gateway proxy.

## Offramp

Type: `string`

The type of destination to which the network session was routed.   
Possible values are _INTERNET_ | _MAGIC_ | _CFD\_TUNNEL_ | _WARP_.

## OriginIP

Type: `string`

The IP of the destination ("origin") for the network session.

## OriginPort

Type: `int`

The port of the destination origin for the network session.

## OriginTLSCertificateIssuer

Type: `string`

The issuer of the origin TLS certificate.

## OriginTLSCertificateValidationResult

Type: `string`

The result of validating the TLS certificate of the origin.   
Possible values are _VALID_ | _EXPIRED_ | _REVOKED_ | _HOSTNAME\_MISMATCH_ | _NONE_ | _UNKNOWN_.

## OriginTLSCipher

Type: `string`

TLS cipher suite used in the connection between Cloudflare and the origin.

## OriginTLSHandshakeDurationMs

Type: `int`

Duration of handshaking the TLS connection between Cloudflare and the origin in milliseconds.

## OriginTLSVersion

Type: `string`

TLS protocol version used in the connection between Cloudflare and the origin.

## Protocol

Type: `string`

Network protocol used for this network session.   
Possible values are _TCP_ | _UDP_ | _ICMP_ | _ICMPV6_.

## RegistrationID

Type: `string`

Identifier of the client registration which initiated the network session, if applicable (for example, WARP Registration ID).

## ResolvedFQDN

Type: `string`

The fully qualified domain name of the destination.

## RuleEvaluationDurationMs

Type: `int`

The time taken by the Secure Web Gateway to apply applicable Network, HTTP, and Egress rules to the network session, in milliseconds.

## SNI

Type: `string`

The server name indication (SNI) value from the TLS handshake, if applicable.

## SessionEndTime

Type: `int or string`

The network session end timestamp with nanosecond precision.

## SessionID

Type: `string`

The identifier of this network session.

## SessionStartTime

Type: `int or string`

The network session start timestamp with nanosecond precision.
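
Because `SessionStartTime` and `SessionEndTime` arrive as either an integer or a string, a duration helper has to handle both. A sketch, assuming integer values are Unix nanoseconds (matching the nanosecond precision noted above) and string values are RFC 3339; confirm against your Logpush job's configured timestamp format:

```ts
// Sketch: compute a session duration in milliseconds from SessionStartTime /
// SessionEndTime. Assumes integers are Unix nanoseconds and strings are
// RFC 3339 timestamps; adjust to your job's timestamp format.
function toMillis(ts: number | string): number {
  if (typeof ts === "number") return ts / 1e6; // nanoseconds -> milliseconds
  return Date.parse(ts); // RFC 3339 string -> milliseconds
}

function sessionDurationMs(start: number | string, end: number | string): number {
  return toMillis(end) - toMillis(start);
}
```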

## SourceIP

Type: `string`

Source IP of the network session.

## SourceInternalIP

Type: `string`

Local LAN IP of the device. Only available when connected via a GRE/IPsec tunnel on-ramp.

## SourcePort

Type: `int`

Source port of the network session.

## UserID

Type: `string`

User identity from which the network session originated. Only applicable for WARP device clients.

## VirtualNetworkID

Type: `string`

Identifier of the virtual network configured for the client.


---

---
title: CMB support by dataset
image: https://developers.cloudflare.com/core-services-preview.png
---

# CMB support by dataset


---

---
title: DNS logs
description: The descriptions below detail the fields available for dns_logs.
image: https://developers.cloudflare.com/core-services-preview.png
---

# DNS logs

The descriptions below detail the fields available for `dns_logs`.

## ColoCode

Type: `string`

IATA airport code of the data center that received the request.

## EDNSSubnet

Type: `string`

IPv4 or IPv6 address information corresponding to the [EDNS Client Subnet (ECS)](https://developers.cloudflare.com/glossary/?term=ecs) forwarded by recursive resolvers. Not all resolvers send this information.

## EDNSSubnetLength

Type: `int`

Size of the [EDNS Client Subnet (ECS)](https://developers.cloudflare.com/glossary/?term=ecs) in bits. For example, if the last octet of an IPv4 address is omitted (`192.0.2.x`), the subnet length will be 24.

## QueryName

Type: `string`

Name of the query that was sent.

## QueryType

Type: `int`

Integer value of query type. For more information refer to [Query type ↗](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4).
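
For readability when analyzing pushed records, the integer can be mapped back to its IANA mnemonic. A partial, illustrative mapping (values taken from the IANA registry linked above):

```ts
// Partial map from DNS query type integers to IANA mnemonics, for making
// dns_logs records easier to read. Extend from the IANA registry as needed.
const QUERY_TYPES: Record<number, string> = {
  1: "A", 2: "NS", 5: "CNAME", 6: "SOA", 12: "PTR",
  15: "MX", 16: "TXT", 28: "AAAA", 33: "SRV", 43: "DS",
  46: "RRSIG", 48: "DNSKEY", 65: "HTTPS", 255: "ANY",
};

function queryTypeName(qtype: number): string {
  return QUERY_TYPES[qtype] ?? `TYPE${qtype}`; // RFC 3597 style for unknown types
}
```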

## ResponseCached

Type: `bool`

Whether the response was cached or not.

## ResponseCode

Type: `int`

Integer value of response code. For more information refer to [Response code ↗](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6).

## SourceIP

Type: `string`

IP address of the client (IPv4 or IPv6).

## Timestamp

Type: `int or string`

Timestamp at which the query occurred.


---

---
title: Firewall events
description: The descriptions below detail the fields available for firewall_events.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Firewall events

The descriptions below detail the fields available for `firewall_events`.

## Action

Type: `string`

The code of the first-class action the Cloudflare Firewall took on this request.   
Possible actions are _unknown_ | _allow_ | _block_ | _challenge_ | _jschallenge_ | _log_ | _connectionclose_ | _challengesolved_ | _challengebypassed_ | _jschallengesolved_ | _jschallengebypassed_ | _bypass_ | _managedchallenge_ | _managedchallengenoninteractivesolved_ | _managedchallengeinteractivesolved_ | _managedchallengebypassed_.

## ClientASN

Type: `int`

The AS number (ASN) of the visitor.

## ClientASNDescription

Type: `string`

The ASN of the visitor as a string.

## ClientCountry

Type: `string`

Country from which the request originated.

## ClientIP

Type: `string`

The visitor's IP address (IPv4 or IPv6).

## ClientIPClass

Type: `string`

The classification of the visitor's IP address, possible values are: _unknown_ | _badHost_ | _searchEngine_ | _allowlist_ | _monitoringService_ | _noRecord_ | _scan_ | _tor_.

## ClientRefererHost

Type: `string`

The referer host.

## ClientRefererPath

Type: `string`

The referer path requested by the visitor.

## ClientRefererQuery

Type: `string`

The referer query string requested by the visitor.

## ClientRefererScheme

Type: `string`

The referer URL scheme requested by the visitor.

## ClientRequestHost

Type: `string`

The HTTP hostname requested by the visitor.

## ClientRequestMethod

Type: `string`

The HTTP method used by the visitor.

## ClientRequestPath

Type: `string`

The path requested by the visitor.

## ClientRequestProtocol

Type: `string`

The version of HTTP protocol requested by the visitor.

## ClientRequestQuery

Type: `string`

The query string requested by the visitor.

## ClientRequestScheme

Type: `string`

The URL scheme requested by the visitor.

## ClientRequestUserAgent

Type: `string`

Visitor's user-agent string.

## ContentScanObjResults

Type: `array[string]`

List of content scan results.

## ContentScanObjSizes

Type: `array[int]`

List of content object sizes.

## ContentScanObjTypes

Type: `array[string]`

List of content types.

## Datetime

Type: `int or string`

The date and time the event occurred at the edge.

## Description

Type: `string`

The description of the rule triggered by this request.

## EdgeColoCode

Type: `string`

The airport code of the Cloudflare data center that served this request.

## EdgeResponseStatus

Type: `int`

HTTP response status code returned to browser.

## FraudUserID

Type: `string`

A unique identifier generated by the Fraud Detection system for each user during any action determined by the fraud event type.

## Kind

Type: `string`

The kind of event. Currently, the only possible value is _firewall_.

## LeakedCredentialCheckResult

Type: `string`

Result of the check for [leaked credentials](https://developers.cloudflare.com/waf/detections/leaked-credentials/).   
Possible results are: _password\_leaked_ | _username\_and\_password\_leaked_ | _username\_password\_similar_ | _username\_leaked_ | _clean_.

## MatchIndex

Type: `int`

Rules match index in the chain. The last matching rule will have MatchIndex _0_. If another rule matched before the last one, it will have MatchIndex _1_. The same applies to any other matching rules, which will have a MatchIndex value of _2_, _3_, and so on.
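
So, to reconstruct the order in which rules matched for a single request, the events sharing a RayID can be sorted by descending MatchIndex. A sketch (field names come from this dataset; the grouping helper is illustrative):

```ts
// Sketch: order the firewall events for one request by match order.
// MatchIndex 0 is the *last* matching rule, so descending MatchIndex
// yields first-to-last evaluation order.
interface FirewallEvent {
  RayID: string;
  RuleID: string;
  MatchIndex: number;
  Action: string;
}

function matchOrder(events: FirewallEvent[], rayId: string): FirewallEvent[] {
  return events
    .filter((e) => e.RayID === rayId)
    .sort((a, b) => b.MatchIndex - a.MatchIndex);
}
```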

## Metadata

Type: `object`

Additional product-specific information. Metadata is organized in key:value pairs. Key and Value formats can vary by Cloudflare security product and can change over time.

## OriginResponseStatus

Type: `int`

HTTP origin response status code returned to browser.

## OriginatorRayID

Type: `string`

The RayID of the request that issued the challenge/jschallenge.

## RayID

Type: `string`

The RayID of the request.

## Ref

Type: `string`

The user-defined identifier for the rule triggered by this request. Use refs to label your rules individually alongside the Cloudflare-provided RuleID. You can set refs via the [Rulesets API](https://developers.cloudflare.com/ruleset-engine/rulesets-api/) for some security products.

## RuleID

Type: `string`

The Cloudflare security product-specific RuleID triggered by this request.

## Source

Type: `string`

The Cloudflare security product triggered by this request.   
Possible sources are _unknown_ | _asn_ | _country_ | _ip_ | _iprange_ | _securitylevel_ | _zonelockdown_ | _waf_ | _firewallrules_ | _uablock_ | _ratelimit_ | _bic_ | _hot_ | _l7ddos_ | _validation_ | _botfight_ | _apishield_ | _botmanagement_ | _dlp_ | _firewallmanaged_ | _firewallcustom_ | _apishieldschemavalidation_ | _apishieldtokenvalidation_ | _apishieldsequencemitigation_.


---

---
title: HTTP requests
description: The descriptions below detail the fields available for http_requests.
image: https://developers.cloudflare.com/core-services-preview.png
---

# HTTP requests

The descriptions below detail the fields available for `http_requests`.

## BotDetectionIDs

Type: `array[int]`

List of IDs that correlate to the Bot Management Heuristic detections made on a request. Available only for Bot Management customers. To enable this feature, contact your account team.

## BotDetectionTags

Type: `array[string]`

List of tags that correlate to the Bot Management Heuristic detections made on a request. Available only for Bot Management customers. To enable this feature, contact your account team.

## BotScore

Type: `int`

Cloudflare Bot Score. Scores below 30 are commonly associated with automated traffic. Available only for Bot Management customers. To enable this feature, contact your account team.
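
As a quick example of using this field downstream, a filter over pushed `http_requests` records can flag likely automated traffic. The threshold of 30 comes from the description above; treating a score of 0 as "not computed" is an assumption here:

```ts
// Sketch: flag likely automated requests from http_requests records using
// BotScore. Scores below 30 are commonly automated per the field description;
// excluding 0 as "not computed" is an assumption.
interface HttpRequestRecord {
  BotScore: number;
  ClientIP: string;
  ClientRequestURI: string;
}

function likelyAutomated(records: HttpRequestRecord[]): HttpRequestRecord[] {
  return records.filter((r) => r.BotScore > 0 && r.BotScore < 30);
}
```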

## BotScoreSrc

Type: `string`

Detection engine responsible for generating the Bot Score.   
Possible values are _Not Computed_ | _Heuristics_ | _Machine Learning_ | _Behavioral Analysis_ | _Verified Bot_ | _JS Fingerprinting_ | _Cloudflare Service_. Available only for Bot Management customers. To enable this feature, contact your account team.

## BotTags

Type: `array[string]`

Type of bot traffic (if available). Refer to [Bot Tags](https://developers.cloudflare.com/bots/concepts/bot-tags/) for the list of potential values. Available only for Bot Management customers. To enable this feature, contact your account team.

## CacheCacheStatus

Type: `string`

Cache status.   
Possible values are _unknown_ | _miss_ | _expired_ | _updating_ | _stale_ | _hit_ | _ignored_ | _bypass_ | _revalidated_ | _dynamic_ | _stream\_hit_ | _deferred_   
"dynamic" means that a request is not eligible for cache. This can mean, for example that it was blocked by the firewall. Refer to [Cloudflare cache responses](https://developers.cloudflare.com/cache/concepts/cache-responses/) for more details.

## CacheReserveUsed

Type: `bool`

Cache Reserve was used to serve this request.

## CacheResponseBytes

Type: `int`

Number of bytes returned by the cache.

## CacheResponseStatus (deprecated)

Type: `int`

HTTP status code returned by the cache to the edge. All requests (including non-cacheable ones) go through the cache. Refer also to the CacheCacheStatus field.

## CacheTieredFill

Type: `bool`

Tiered Cache was used to serve this request.

## ClientASN

Type: `int`

Client AS number.

## ClientCity

Type: `string`

Approximate city of the client.

## ClientCountry

Type: `string`

2-letter ISO-3166 country code of the client IP address.

## ClientDeviceType

Type: `string`

Client device type.

## ClientIP

Type: `string`

IP address of the client.

## ClientIPClass

Type: `string`

Client IP class.   
Possible values are _unknown_ | _badHost_ | _searchEngine_ | _allowlist_ | _monitoringService_ | _noRecord_ | _scan_ | _tor_.

## ClientLatitude

Type: `string`

Approximate latitude of the client.

## ClientLongitude

Type: `string`

Approximate longitude of the client.

## ClientMTLSAuthCertFingerprint

Type: `string`

The SHA256 fingerprint of the certificate presented by the client during mTLS authentication. Only populated on the first request on an mTLS connection.

## ClientMTLSAuthStatus

Type: `string`

The status of mTLS authentication. Only populated on the first request on an mTLS connection.   
Possible values are _unknown_ | _ok_ | _absent_ | _untrusted_ | _notyetvalid_ | _expired_.

## ClientRegionCode

Type: `string`

The ISO-3166-2 region code of the client IP address.

## ClientRequestBytes

Type: `int`

Number of bytes in the client request.

## ClientRequestHost

Type: `string`

Host requested by the client.

## ClientRequestMethod

Type: `string`

HTTP method of client request.

## ClientRequestPath

Type: `string`

URI path requested by the client, which includes only the path portion of the requested URL, without the query string.

## ClientRequestProtocol

Type: `string`

HTTP protocol of client request.

## ClientRequestReferer

Type: `string`

HTTP request referrer.

## ClientRequestScheme

Type: `string`

The URL scheme requested by the visitor.

## ClientRequestSource

Type: `string`

Identifies requests as coming from an external source or another service within Cloudflare. Refer to [ClientRequestSource field](https://developers.cloudflare.com/logs/reference/clientrequestsource/) for the list of potential values.

## ClientRequestURI

Type: `string`

URI requested by the client, which includes the full path and query string of the requested URL.

## ClientRequestUserAgent

Type: `string`

User agent reported by the client.

## ClientSSLCipher

Type: `string`

Client SSL cipher.

## ClientSSLProtocol

Type: `string`

Client SSL (TLS) protocol. The value "none" means that SSL was not used.

## ClientSrcPort

Type: `int`

Client source port.

## ClientTCPRTTMs

Type: `int`

The smoothed average of TCP round-trip time (SRTT). For the initial request on a connection, this is measured only during connection setup. For a subsequent request on the same connection, it is measured over the entire connection lifetime up until the time that request is received.

## ClientXRequestedWith

Type: `string`

X-Requested-With HTTP header.

## ContentScanObjResults

Type: `array[string]`

List of content scan results.

## ContentScanObjSizes

Type: `array[int]`

List of content object sizes.

## ContentScanObjTypes

Type: `array[string]`

List of content types.

## Cookies

Type: `object`

String key-value pairs for cookies. This field is populated based on [Logpush Custom fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/), which need to be configured.

## EdgeCFConnectingO2O

Type: `bool`

True if the request looped through multiple zones on the Cloudflare edge. This is considered an O2O request.

## EdgeColoCode

Type: `string`

IATA airport code of the data center that received the request.

## EdgeColoID

Type: `int`

Cloudflare edge data center ID.

## EdgeEndTimestamp

Type: `int or string`

Timestamp at which the edge finished sending response to the client.

## EdgePathingOp

Type: `string`

Indicates what type of response was issued for this request (unknown = no specific action).

## EdgePathingSrc

Type: `string`

Details how the request was classified based on security checks (unknown = no specific classification).

## EdgePathingStatus

Type: `string`

Indicates what data was used to determine the handling of this request (unknown = no data).

## EdgeRequestHost

Type: `string`

Host header on the request from the edge to the origin.

## EdgeResponseBodyBytes

Type: `int`

Size of the HTTP response body returned to clients.

## EdgeResponseBytes

Type: `int`

Number of bytes returned by the edge to the client.

## EdgeResponseCompressionRatio

Type: `float`

The edge response compression ratio is calculated as the ratio between the sizes of the original and compressed responses.

## EdgeResponseContentType

Type: `string`

Edge response Content-Type header value.

## EdgeResponseStatus

Type: `int`

HTTP status code returned by Cloudflare to the client.

## EdgeServerIP

Type: `string`

IP of the edge server making a request to the origin. Possible responses are string in IPv4 or IPv6 format, or empty string. Empty string means that there was no request made to the origin server.

## EdgeStartTimestamp

Type: `int or string`

Timestamp at which the edge received request from the client.

## EdgeTimeToFirstByteMs

Type: `int`

Total view of Time To First Byte as measured at Cloudflare's edge. Starts after a TCP connection is established and ends when Cloudflare begins returning the first byte of a response to eyeballs. Includes TLS handshake time (for new connections) and origin response time.

## FraudAttack

Type: `string`

The primary attack or use case detected in the request by Fraud detections.

## FraudDetectionIDs

Type: `array[int]`

List of IDs that correlate to the Fraud detections made on a request.

## FraudDetectionTags

Type: `array[string]`

List of tags that correlate to the Fraud detections made on a request.

## FraudEmailRisk

Type: `string`

Risk of a specific email address.   
Possible values are _low_ | _medium_ | _high_.

## FraudUserID

Type: `string`

A unique identifier generated by the Fraud Detection system for each user during any action determined by the fraud event type.

## JA3Hash

Type: `string`

The MD5 hash of the JA3 fingerprint used to profile SSL/TLS clients. Available only for Bot Management customers. To enable this feature, contact your account team.

## JA4

Type: `string`

The JA4 fingerprint used to profile SSL/TLS clients. Available only for Bot Management customers. To enable this feature, contact your account team.

## JA4Signals

Type: `object`

Inter-request statistics computed for this JA4 fingerprint. JA4Signals field is organized in key:value pairs, where values are numbers. Available only for Bot Management customers. To enable this feature, contact your account team.

## JSDetectionPassed

Type: `string`

Whether the request passed background JavaScript Detection.   
Possible values are _passed_ | _failed_ | _missing_. Available only for Bot Management customers. To enable this feature, contact your account team.

## LeakedCredentialCheckResult

Type: `string`

Result of the check for [leaked credentials](https://developers.cloudflare.com/waf/detections/leaked-credentials/).   
Possible results are: _password\_leaked_ | _username\_and\_password\_leaked_ | _username\_password\_similar_ | _username\_leaked_ | _clean_.

## OriginDNSResponseTimeMs

Type: `int`

Time taken to receive a DNS response for an origin name. Usually takes a few milliseconds, but may be longer if a CNAME record is used.

## OriginIP

Type: `string`

IP of the origin server.

## OriginRequestHeaderSendDurationMs

Type: `int`

Time taken to send request headers to origin after establishing a connection. Note that this value is usually 0.

## OriginResponseBytes (deprecated)

Type: `int`

Number of bytes returned by the origin server.

## OriginResponseDurationMs

Type: `int`

Upstream response time, measured from the first datacenter that receives a request. Includes time taken by Argo Smart Routing and Tiered Cache, plus time to connect and receive a response from origin servers. This field replaces OriginResponseTime.

## OriginResponseHTTPExpires

Type: `string`

Value of the origin 'expires' header in RFC1123 format.

## OriginResponseHTTPLastModified

Type: `string`

Value of the origin 'last-modified' header in RFC1123 format.

## OriginResponseHeaderReceiveDurationMs

Type: `int`

Time taken for origin to return response headers after Cloudflare finishes sending request headers.

## OriginResponseStatus

Type: `int`

Status returned by the upstream server. The value 0 means that there was no response received from the origin server and the response was served by Cloudflare's Edge. However, if the zone has a Worker running on it, the value 0 could be the result of a Workers subrequest made to the origin.

## OriginResponseTime (deprecated)

Type: `int`

Number of nanoseconds it took the origin to return the response to the edge.

## OriginSSLProtocol

Type: `string`

SSL (TLS) protocol used to connect to the origin.

## OriginTCPHandshakeDurationMs

Type: `int`

Time taken to complete TCP handshake with origin. This will be 0 if an origin connection is reused.

## OriginTLSHandshakeDurationMs

Type: `int`

Time taken to complete TLS handshake with origin. This will be 0 if an origin connection is reused.

## ParentRayID

Type: `string`

Ray ID of the parent request if this request was made using a Worker script.

## PayPerCrawlStatus

Type: `string`

Pay Per Crawl outcome, when applicable (for example, request enabled for charging and not blocked by a WAF rule).

## RayID

Type: `string`

ID of the request.

## RequestHeaders

Type: `object`

String key-value pairs for request headers. This field is populated based on [Logpush Custom fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/), which need to be configured.

## ResponseHeaders

Type: `object`

String key-value pairs for response headers. This field is populated based on [Logpush Custom fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/), which need to be configured.

## SecurityAction

Type: `string`

Action of the security rule that triggered a terminating action, if any.

## SecurityActions

Type: `array[string]`

Array of actions the Cloudflare security products performed on this request. The individual security products associated with this action can be found in SecuritySources and their respective rule IDs can be found in SecurityRuleIDs. The length of the array is the same as SecurityRuleIDs and SecuritySources.   
Possible actions are _unknown_ | _allow_ | _block_ | _challenge_ | _jschallenge_ | _log_ | _connectionClose_ | _challengeSolved_ | _challengeBypassed_ | _jschallengeSolved_ | _jschallengeBypassed_ | _bypass_ | _managedChallenge_ | _managedChallengeNonInteractiveSolved_ | _managedChallengeInteractiveSolved_ | _managedChallengeBypassed_ | _rewrite_ | _forceConnectionClose_ | _skip_.

## SecurityRuleDescription

Type: `string`

Description of the security rule that triggered a terminating action, if any.

## SecurityRuleID

Type: `string`

Rule ID of the security rule that triggered a terminating action, if any.

## SecurityRuleIDs

Type: `array[string]`

Array of rule IDs of the security product that matched the request. The security product associated with the rule ID can be found in SecuritySources. The length of the array is the same as SecurityActions and SecuritySources.

## SecuritySources

Type: `array[string]`

Array of security products that matched the request. The same product can appear multiple times, which indicates different rules or actions that were activated. The rule IDs can be found in SecurityRuleIDs, and the actions can be found in SecurityActions. The length of the array is the same as SecurityRuleIDs and SecurityActions.   
Possible sources are _unknown_ | _asn_ | _country_ | _ip_ | _ipRange_ | _securityLevel_ | _zoneLockdown_ | _waf_ | _firewallRules_ | _uaBlock_ | _rateLimit_ | _bic_ | _hot_ | _l7ddos_ | _validation_ | _botFight_ | _apiShield_ | _botManagement_ | _dlp_ | _firewallManaged_ | _firewallCustom_ | _apiShieldSchemaValidation_ | _apiShieldTokenValidation_ | _apiShieldSequenceMitigation_.
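
Because SecurityActions, SecurityRuleIDs, and SecuritySources are parallel arrays of equal length, they can be zipped back into per-rule records. A sketch:

```ts
// Sketch: zip the parallel SecurityActions / SecurityRuleIDs / SecuritySources
// arrays from one http_requests record into per-rule-match objects.
interface SecurityMatch {
  source: string;
  ruleId: string;
  action: string;
}

function securityMatches(rec: {
  SecurityActions: string[];
  SecurityRuleIDs: string[];
  SecuritySources: string[];
}): SecurityMatch[] {
  return rec.SecuritySources.map((source, i) => ({
    source,
    ruleId: rec.SecurityRuleIDs[i],
    action: rec.SecurityActions[i],
  }));
}
```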

## SmartRouteColoID

Type: `int`

The Cloudflare data center used to connect to the origin server if Argo Smart Routing is used.

## UpperTierColoID

Type: `int`

The "upper tier" data center that was checked for a cached copy if Tiered Cache is used.

## VerifiedBotCategory

Type: `string`

The category of verified bot.

## WAFAttackScore

Type: `int`

Overall request score generated by the WAF detection module.

## WAFFlags (deprecated)

Type: `string`

Additional configuration flags: _simulate (0x1)_ | _null_.

## WAFMatchedVar (deprecated)

Type: `string`

The full name of the most-recently matched variable.

## WAFRCEAttackScore

Type: `int`

WAF score for an RCE attack.

## WAFSQLiAttackScore

Type: `int`

WAF score for an SQLi attack.

## WAFXSSAttackScore

Type: `int`

WAF score for an XSS attack.

## WebAssetsLabelsManaged

Type: `array[string]`

Cloudflare-defined labels matched for the request.

## WebAssetsOperationID

Type: `string`

UUID of the matched web asset operation.

## WorkerCPUTime

Type: `int`

Amount of time in microseconds spent executing a Worker, if any.

## WorkerScriptName

Type: `string`

The Worker script name that made the request.

## WorkerStatus

Type: `string`

Status returned from Worker daemon.

## WorkerSubrequest

Type: `bool`

Whether or not this request was a Worker subrequest.

## WorkerSubrequestCount

Type: `int`

Number of subrequests issued by a Worker when handling this request.

## WorkerWallTimeUs

Type: `int`

The elapsed time in microseconds between the start of a Worker invocation, and when the Workers Runtime determines that no more JavaScript needs to run. Specifically, this measures the wall-clock time that the JavaScript context remained open. For example, when returning a response with a large body, the Workers runtime can, in some cases, determine that no more JavaScript needs to run, and closes the JS context before all the bytes have passed through and been sent. Alternatively, if you use the `waitUntil()` API to perform work without blocking the return of a response, this work may continue executing after the response has been returned, and will be included in `WorkerWallTimeUs`.

## ZoneName

Type: `string`

The human-readable name of the zone (for example, 'cloudflare.com').


---

---
title: NEL reports
description: The descriptions below detail the fields available for nel_reports.
image: https://developers.cloudflare.com/core-services-preview.png
---

# NEL reports

The descriptions below detail the fields available for `nel_reports`.

## ClientIPASN

Type: `int`

Client ASN.

## ClientIPASNDescription

Type: `string`

Client ASN description.

## ClientIPCountry

Type: `string`

Client country.

## LastKnownGoodColoCode

Type: `string`

IATA airport code of the data center the client connected to.

## Phase

Type: `string`

The phase of the connection in which the error occurred; _dns_ | _connection_ | _application_ | _unknown_.

## Timestamp

Type: `int or string`

Timestamp of the error report.

## Type

Type: `string`

The type of error in the phase.


---

---
title: Page Shield events
description: The descriptions below detail the fields available for page_shield_events.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Page Shield events

The descriptions below detail the fields available for `page_shield_events`.

## Action

Type: `string`

The action which was taken against the violation.   
Possible values are _log_ | _allow_.

## CSPDirective

Type: `string`

The violated directive in the report.

## Host

Type: `string`

The host the resource was seen on.

## PageURL

Type: `string`

The page URL the violation was seen on.

## PolicyID

Type: `string`

The ID of the policy which was violated.

## ResourceType

Type: `string`

The resource type of the violated directive. Possible values are 'script', 'connection', or 'other' for unmonitored resource types.

## Timestamp

Type: `int or string`

The timestamp of when the report was received.

## URL

Type: `string`

The resource URL.

## URLContainsCDNCGIPath (deprecated)

Type: `bool`

Whether the resource URL contains the '/cdn-cgi/' path.

## URLHost

Type: `string`

The domain host of the URL.


---

---
title: Spectrum events
description: The descriptions below detail the fields available for spectrum_events.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Spectrum events

The descriptions below detail the fields available for `spectrum_events`.

## Application

Type: `string`

The unique public ID of the application on which the event occurred.

## ClientAsn

Type: `int`

Client AS number.

## ClientBytes

Type: `int`

The number of bytes read from the client by the Spectrum service.

## ClientCountry

Type: `string`

Country of the client IP address.

## ClientIP

Type: `string`

Client IP address.

## ClientMatchedIpFirewall

Type: `string`

Whether the connection matched any IP Firewall rules. UNKNOWN = No match or Firewall not enabled for Spectrum; _UNKNOWN_ | _ALLOW_ | _BLOCK\_ERROR_ | _BLOCK\_IP_ | _BLOCK\_COUNTRY_ | _BLOCK\_ASN_ | _WHITELIST\_IP_ | _WHITELIST\_COUNTRY_ | _WHITELIST\_ASN_.

## ClientPort

Type: `int`

Client port.

## ClientProto

Type: `string`

Transport protocol used by client; _tcp_ | _udp_ | _unix_.

## ClientTcpRtt

Type: `int`

The TCP round-trip time in nanoseconds between the client and Spectrum.

## ClientTlsCipher

Type: `string`

The cipher negotiated between the client and Spectrum. An unknown cipher is returned as "UNK."

## ClientTlsClientHelloServerName

Type: `string`

The server name in the Client Hello message from client to Spectrum.

## ClientTlsProtocol

Type: `string`

The TLS version negotiated between the client and Spectrum; _unknown_ | _none_ | _SSLv3_ | _TLSv1_ | _TLSv1.1_ | _TLSv1.2_ | _TLSv1.3_.

## ClientTlsStatus

Type: `string`

Indicates state of TLS session from the client to Spectrum; _UNKNOWN_ | _OK_ | _INTERNAL\_ERROR_ | _INVALID\_CONFIG_ | _INVALID\_SNI_ | _HANDSHAKE\_FAILED_ | _KEYLESS\_RPC_.

## ColoCode

Type: `string`

IATA airport code of the data center that received the request.

## ConnectTimestamp

Type: `int or string`

Timestamp at which both legs of the connection (client/edge, edge/origin or nexthop) were established.

## DisconnectTimestamp

Type: `int or string`

Timestamp at which the connection was closed.

## Event

Type: `string`

The event type; _connect_ | _disconnect_ | _clientFiltered_ | _tlsError_ | _resolveOrigin_ | _originError_.

## IpFirewall

Type: `bool`

Whether IP Firewall was enabled at time of connection.

## OriginBytes

Type: `int`

The number of bytes read from the origin by Spectrum.

## OriginIP

Type: `string`

Origin IP address.

## OriginPort

Type: `int`

Origin port.

## OriginProto

Type: `string`

Transport protocol used by origin; _tcp_ | _udp_ | _unix_.

## OriginTcpRtt

Type: `int`

The TCP round-trip time in nanoseconds between Spectrum and the origin.

## OriginTlsCipher

Type: `string`

The cipher negotiated between Spectrum and the origin. An unknown cipher is returned as "UNK."

## OriginTlsFingerprint

Type: `string`

SHA256 hash of origin certificate. An unknown SHA256 hash is returned as an empty string.

## OriginTlsMode

Type: `string`

If and how the upstream connection is encrypted; _unknown_ | _off_ | _flexible_ | _full_ | _strict_.

## OriginTlsProtocol

Type: `string`

The TLS version negotiated between Spectrum and the origin; _unknown_ | _none_ | _SSLv3_ | _TLSv1_ | _TLSv1.1_ | _TLSv1.2_ | _TLSv1.3_.

## OriginTlsStatus

Type: `string`

The state of the TLS session from Spectrum to the origin; _UNKNOWN_ | _OK_ | _INTERNAL\_ERROR_ | _INVALID\_CONFIG_ | _INVALID\_SNI_ | _HANDSHAKE\_FAILED_ | _KEYLESS\_RPC_.

## ProxyProtocol

Type: `string`

Which form of proxy protocol is applied to the given connection; _off_ | _v1_ | _v2_ | _simple_.

## Status

Type: `int`

A code indicating the reason for connection closure.

## Timestamp

Type: `int or string`

Timestamp at which the event took place.


---

---
title: Zaraz Events
description: The descriptions below detail the fields available for zaraz_events.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Zaraz Events

The descriptions below detail the fields available for `zaraz_events`.

## Body

Type: `object`

Zaraz incoming request body.

## EventDetails

Type: `object`

Zaraz log event details.

## EventType

Type: `string`

Zaraz log event name.

## IP

Type: `string`

Zaraz incoming request client IP address.

## RequestHeaders

Type: `object`

Zaraz incoming request headers.

## TimestampStart

Type: `int or string`

Zaraz log event timestamp.

## URL

Type: `string`

Zaraz incoming request URL.


---

---
title: Edge Log Delivery
description: Edge Log Delivery allows customers to send logs directly from Cloudflare’s edge to their destination of choice. You can configure the maximum interval for your log batches between 30 seconds and five minutes. However, you cannot specify a minimum interval for log batches, meaning that log files may be sent in shorter intervals than the maximum specified. Compared to Logpush, Edge Log Delivery sends logs with lower latency, more frequently, and in smaller batches.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Edge Log Delivery

Edge Log Delivery allows customers to send logs directly from Cloudflare’s edge to their destination of choice. You can configure the maximum interval for your log batches between 30 seconds and five minutes. However, you cannot specify a minimum interval for log batches, meaning that log files may be sent in shorter intervals than the maximum specified. Compared to Logpush, Edge Log Delivery sends logs with lower latency, more frequently, and in smaller batches.

Edge Log Delivery is only available for HTTP request logs. Refer to the [API configuration](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#kind) page for steps on how to configure a job to use Edge Log Delivery.
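For reference, a minimal sketch of creating such a job via the API; the `kind` field is what selects Edge Log Delivery, and the destination and header values here are placeholders:

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "edge-log-delivery-example",
    "dataset": "http_requests",
    "kind": "edge",
    "destination_conf": "https://logs.example.com?header_Authorization=Basic%20REDACTED"
  }'
```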


---

---
title: Enable destinations
description: Enable pushing logs to your storage service, SIEM solution, or log management provider.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable destinations

Enable pushing logs to your storage service, SIEM solution, or log management provider.

Note

You will need to allowlist IP addresses to accept incoming Cloudflare Logpush traffic. Refer to [Cloudflare IPs ↗](https://www.cloudflare.com/ips/) for the complete list of IPs. If you prefer a dedicated IP, you can use [Dedicated Egress IPs for Cloudflare Logpush](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/egress-ip/).
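If you automate that allowlist, the public IPs endpoint returns the current ranges; the following is a sketch that assumes `jq` is installed and uses the documented `ipv4_cidrs`/`ipv6_cidrs` response fields:

```
# Fetch the current Cloudflare IP ranges for allowlisting.
curl --silent "https://api.cloudflare.com/client/v4/ips" \
  | jq -r '.result.ipv4_cidrs[], .result.ipv6_cidrs[]'
```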

* [ Enable Cloudflare R2 ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/r2/)
* [ Enable HTTP destination ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/http/)
* [ Enable Amazon S3 ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/)
* [ Enable S3-compatible endpoints ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints/)
* [ Enable Datadog ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/datadog/)
* [ Enable Elastic ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/elastic/)
* [ Enable Google Cloud Storage ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/google-cloud-storage/)
* [ Enable BigQuery ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/bigquery/)
* [ Enable Microsoft Azure ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/)
* [ Enable New Relic ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/new-relic/)
* [ Enable SentinelOne ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/sentinelone/)
* [ Enable Splunk ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/splunk/)
* [ Enable Sumo Logic ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/sumo-logic/)
* [ Enable Amazon Kinesis ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/kinesis/)
* [ Enable IBM QRadar ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/ibm-qradar/)
* [ Enable IBM Cloud Logs ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/ibm-cloud-logs/)
* [ Enable other providers ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/other-providers/)
* [ Third-party integrations ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/third-party/)
* [ Dedicated Egress IP for Logpush ](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/egress-ip/)


---

---
title: Enable Amazon S3
description: Cloudflare Logpush supports pushing logs directly to Amazon S3 via the Cloudflare dashboard or via API. Customers that use AWS GovCloud locations should use our S3-compatible endpoint and not the Amazon S3 endpoint.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Amazon S3

Cloudflare Logpush supports pushing logs directly to Amazon S3 via the Cloudflare dashboard or via API. Customers that use AWS GovCloud locations should use our **S3-compatible endpoint** and not the **Amazon S3 endpoint**.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
1. In **Select a destination**, choose **Amazon S3**.
2. Enter or select the following destination information:  
   * **Bucket** - S3 bucket name  
   * **Path** - bucket location within the storage container  
   * **Organize logs into daily subfolders** (recommended)  
   * **Bucket region**  
   * If your policy requires it, select [AWS SSE-S3 AES256 Server Side Encryption ↗](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html).  
   * For **Grant Cloudflare access to upload files to your bucket**, make sure your bucket has a [policy ↗](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-policies-s3.html#iam-policy-ex0) (if you did not add it already):  
         * Copy the JSON policy, then go to your bucket in the Amazon S3 console, paste the policy in **Permissions** > **Bucket Policy**, and select **Save**.

When you are done entering the destination details, select **Continue**.

1. To prove ownership, Cloudflare will send a challenge file to your designated destination. To find the token, select the **Open** button in the **Overview** tab of the ownership challenge file. Paste the **Ownership Token** into the Cloudflare dashboard to verify your access to the bucket, and select **Continue**.
2. Select the dataset to push to the storage service.
3. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
4. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
5. Select **Submit** once you are done configuring your logpush job.

## Create and get access to an S3 bucket

Cloudflare uses Amazon Identity and Access Management (IAM) to gain access to your S3 bucket. The Cloudflare IAM user needs `PutObject` permission for the bucket.

Logs are written into that bucket as gzipped objects using the S3 Access Control List (ACL) `bucket-owner-full-control` permission.

For illustrative purposes, imagine that you want to store logs in the bucket `burritobot`, in the `logs` directory. The S3 URL would then be `s3://burritobot/logs`.
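In API terms, that bucket and path map onto the job's `destination_conf`. A sketch, where the region is an assumed value and the `{DATE}` placeholder corresponds to the daily-subfolder option:

```
s3://burritobot/logs/{DATE}?region=us-west-1
```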

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

  
To enable Logpush to Amazon S3:

1. Create an S3 bucket. Refer to [instructions from Amazon ↗](https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html).  
Note  
Buckets in China regions (`cn-north-1`, `cn-northwest-1`) are currently not supported.
2. Edit and paste the policy below into **S3** > **Bucket** > **Permissions** > **Bucket Policy**, replacing the `Resource` value with your own bucket path. The `AWS` `Principal` is owned by Cloudflare and should not be changed.

```
{
  "Id": "<POLICY_ID>",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1506627150918",
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::burritobot/logs/*",
      "Principal": {
        "AWS": ["arn:aws:iam::391854517948:user/cloudflare-logpush"]
      }
    }
  ]
}
```

Note

Logpush uses multipart upload for S3. Aborted uploads will result in incomplete files remaining in your bucket. To minimize your storage costs, Amazon recommends configuring a lifecycle rule using the `AbortIncompleteMultipartUpload` action. Refer to [Uploading and copying objects using multipart upload ↗](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html#mpu-abort-incomplete-mpu-lifecycle-config).
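As a sketch of such a lifecycle rule using the AWS CLI (the bucket name, prefix, and seven-day window are assumptions for this example):

```
aws s3api put-bucket-lifecycle-configuration \
  --bucket burritobot \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-incomplete-logpush-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }]
  }'
```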


---

---
title: Enable Microsoft Azure
description: Cloudflare Logpush supports pushing logs directly to Microsoft Azure via the Cloudflare dashboard or via API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Microsoft Azure

Cloudflare Logpush supports pushing logs directly to Microsoft Azure via the Cloudflare dashboard or via API.

Note

The [Microsoft Sentinel](https://developers.cloudflare.com/analytics/analytics-integrations/sentinel/) integration for Cloudflare is available in two connector versions.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
1. In **Select a destination**, choose **Microsoft Azure**.
2. Enter or select the following destination details:  
   * **SAS URL** - a pre-signed URL that grants access to Azure Storage resources. Refer to [Azure storage documentation ↗](https://learn.microsoft.com/en-us/azure/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer?tabs=macos#shared-access-signature-sas-url) for more information on generating a SAS URL using Azure Storage Explorer. The service must be set to Blob-only (`ss=b`), and the resource type must be set to Object-only (`srt=o`).  
   * **Path** - bucket location within the storage container  
   * **Organize logs into daily subfolders** (recommended)

When you are done entering the destination details, select **Continue**.

1. Select the dataset to push to the storage service.
2. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
3. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
4. Select **Submit** once you are done configuring your logpush job.

## Create and get access to a Blob Storage container

Cloudflare uses a shared access signature (SAS) token to gain access to your Blob Storage container. You will need to provide `Write` permission and an expiration period of at least five years, so that you do not have to worry about the SAS token expiring.

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

  
To enable Logpush to Azure:

1. Create a Blob Storage container. Refer to [instructions from Azure ↗](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal).
2. Create a [shared access signature (SAS) ↗](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview) to secure and restrict access to your blob storage container. Use [Storage Explorer ↗](https://learn.microsoft.com/en-us/azure/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer) to navigate to your container and right click to create a signature. Set the signature to expire at least five years from now and only provide write permission.
3. Provide the SAS URL when prompted by the Logpush API or UI.

Note

Logpush will stop pushing logs if your SAS token expires, which is why an expiration period of at least five years is required. To renew your SAS token, update the `destination_conf` parameter in your Logpush job via the API.
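A minimal sketch of that renewal, assuming an existing job ID and a freshly generated SAS URL:

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "destination_conf": "azure://<BLOB_CONTAINER_PATH>?<NEW_SAS_QUERY_STRING>"
  }'
```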

## Troubleshooting Azure destinations

### signedResourceTypes error

When configuring an Azure destination, the SAS (Shared Access Signature) token must be set to a blob container with write-only permissions. The service must be Blob-only (`ss=b`), and the resource type must be Object-only (`srt=o`).

If the SAS token uses different settings, you will receive the following error:

```
signedResourceTypes must be Object only (srt=o)
```

To resolve this error, regenerate your SAS token using [Storage Explorer ↗](https://learn.microsoft.com/en-us/azure/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer) with the correct permissions:

* Service: Blob-only (`ss=b`)
* Resource type: Object-only (`srt=o`)
* Permissions: Write-only
* Expiration: At least five years from now


---

---
title: Enable BigQuery
description: Configure Logpush to send batches of Cloudflare logs to BigQuery.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable BigQuery

Configure Logpush to send batches of Cloudflare logs to BigQuery.

BigQuery supports loading up to 1,500 jobs per table per day (including failures), with up to 10 million files in each load. That means you can load into BigQuery approximately once per minute and include up to 10 million files per load. For more information, refer to BigQuery's quotas for load jobs.

Logpush delivers batches of logs as soon as possible, which means you could receive more than one batch of files per minute. Ensure your BigQuery job is configured to ingest files on a given time interval, like every minute, as opposed to when files are received. Ingesting files into BigQuery as each Logpush file is received could exhaust your BigQuery quota quickly.
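For example, if your Logpush job writes to Google Cloud Storage, a cron-driven load that runs once per minute might look like the following sketch (bucket, dataset, and table names are assumptions; in practice, track or move files that have already been loaded to avoid duplicate rows):

```
# Load gzipped NDJSON log files from GCS into a BigQuery table.
bq load \
  --source_format=NEWLINE_DELIMITED_JSON \
  --autodetect \
  my_dataset.cloudflare_http_requests \
  "gs://my-log-bucket/http_requests/*.log.gz"
```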

For a community-supported example of how to set up a scheduled load job with BigQuery, refer to [Cloudflare + Google Cloud | Integrations repository ↗](https://github.com/cloudflare/cloudflare-gcp/tree/master/logpush-to-bigquery). Note that this repository is provided on a best-effort basis and is not maintained routinely.


---

---
title: Enable Datadog
description: Cloudflare Logpush supports pushing logs directly to Datadog via the Cloudflare dashboard or via API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Datadog

Cloudflare Logpush supports pushing logs directly to Datadog via the Cloudflare dashboard or via API.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
1. In **Select a destination**, choose **Datadog**.
2. Enter or select the following destination information:  
   * **Datadog URL Endpoint**, which can be either of the following. You can find the difference at [Datadog API reference ↗](https://docs.datadoghq.com/api/latest/logs/).  
         * v1: `http-intake.logs.datadoghq.com/v1/input`  
         * v2: `http-intake.logs.datadoghq.com/api/v2/logs`

* **Datadog API Key**, which can be retrieved by following [these steps ↗](https://docs.datadoghq.com/account%5Fmanagement/api-app-keys/#add-an-api-key-or-client-token).
* **Service**, **Hostname**, **Datadog ddsource field**, and **ddtags** fields can be set as URL parameters. For more information, refer to the [Logs section ↗](https://docs.datadoghq.com/api/latest/logs/) in Datadog's documentation. While these parameters are optional, they can be useful for indexing or processing logs. Note that the values of these parameters may contain special characters, which should be URL-encoded.
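For example, a `ddtags` value of `env:prod,source:logpush` would be appended to the endpoint URL in encoded form (the parameter values here are illustrative):

```
&service=cloudflare&host=example.com&ddtags=env%3Aprod%2Csource%3Alogpush
```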

When you are done entering the destination details, select **Continue**.

1. Select the dataset to push to the storage service.
2. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
3. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
4. Select **Submit** once you are done configuring your logpush job.

## Manage via API

To set up a Datadog Logpush job:

1. Create a job with the appropriate endpoint URL and authentication parameters.
2. Enable the job to begin pushing logs.

Note

Unlike configuring Logpush jobs for AWS S3, GCS, or Azure, there is no ownership challenge when configuring Logpush to Datadog.

Ensure **Log Share** permissions are enabled, before attempting to read or configure a Logpush job. For more information refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

### 1. Create a job

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

* **name** (optional) - Use your domain name as the job name.
* **destination_conf** - A log destination consisting of an endpoint URL, authorization header, and zero or more optional parameters that Datadog supports, in the string format below.  
   * `<DATADOG_ENDPOINT_URL>`: The Datadog HTTP logs intake endpoint, which can be either of the following. You can find the difference at [Datadog API reference ↗](https://docs.datadoghq.com/api/latest/logs/).  
         * v1: `https://http-intake.logs.datadoghq.com/v1/input`  
         * v2: `https://http-intake.logs.datadoghq.com/api/v2/logs`
* `<DATADOG_API_KEY>`: The Datadog API token can be retrieved by following [these steps ↗](https://docs.datadoghq.com/account%5Fmanagement/api-app-keys/#add-an-api-key-or-client-token). For example, `20e6d94e8c57924ad1be3c29bcaee0197d`.
* `ddsource`: Set to `cloudflare`.
* `service`, `host`, `ddtags`: Optional parameters allowed by Datadog.


```
"datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>"
```

* **dataset** - The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/).

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>",
    "output_options": {
      "field_names": [
        "ClientIP",
        "ClientRequestHost",
        "ClientRequestMethod",
        "ClientRequestURI",
        "EdgeEndTimestamp",
        "EdgeResponseBytes",
        "EdgeResponseStatus",
        "EdgeStartTimestamp",
        "RayID"
      ],
      "timestamp_format": "rfc3339"
    },
    "dataset": "http_requests"
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

### 2. Enable (update) a job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. You will use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Update Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "enabled": true
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

Note

The Datadog destination is exclusive to new jobs and might not be backward compatible with older jobs. Create a new job if you want to send your logs directly to Datadog, rather than modifying an existing one. If you modify an existing job that pushes to another destination so that it pushes to Datadog instead, you may observe errors.

Note

To analyze and visualize Cloudflare metrics using the Cloudflare Integration tile for Datadog, follow the steps in the [Datadog Analytics integration page](https://developers.cloudflare.com/analytics/analytics-integrations/datadog/).

## Limitations

Note the following Logpush sending limitations, as described in the [Datadog documentation ↗](https://docs.datadoghq.com/api/latest/logs/).

Send your logs to your Datadog platform over HTTP. Limits per HTTP request are the following:

* Maximum content size per payload (uncompressed): 5 MB
* Maximum size for a single log: 1 MB
* Maximum array size if sending multiple logs in an array: 1,000 entries

Warning

The above limits are hardcoded defaults. It is not possible to override these limitations using the Logpush configuration values, `max_upload_records` or `max_upload_bytes`.

These limitations may result in noticeable log ingestion delay within Datadog following high-traffic events. Logpush does not drop unsent logs, so all logs will eventually be uploaded to Datadog.


---

---
title: Dedicated Egress IP for Logpush
description: This guide covers Dedicated CDN Egress IPs and Logpush configuration and testing instructions to enable log delivery with a fixed, dedicated egress IP.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Dedicated Egress IP for Logpush

This guide covers [Dedicated CDN Egress IPs](https://developers.cloudflare.com/smart-shield/configuration/dedicated-egress-ips/) and Logpush configuration and testing instructions to enable log delivery with a fixed, dedicated egress IP.

## Prerequisites

To use Logpush with a dedicated egress IP, you will need to have [Smart Shield Advanced](https://developers.cloudflare.com/smart-shield/get-started/#smart-shield-advanced) with Dedicated CDN Egress IPs (formerly known as Aegis). Note that the Dedicated CDN Egress IPs pool is associated with a zone, not with an account. To use Logpush with dedicated IPs, traffic must be routed to a single zone.

The general approach is to have your Logpush job proxy its data through a Cloudflare zone with Dedicated CDN Egress IPs enabled on the way to your desired destination. This way your destination only needs to allowlist the provisioned dedicated egress IPs of your proxy zone.

As a prerequisite, you need to create a dedicated zone or use an existing zone. If using an existing zone, be aware that the zone's egress will be restricted to Dedicated CDN Egress IPs. Make sure all services using that zone will not be impacted.

It is recommended to use a separate, dedicated zone as a proxy to avoid impacting production systems. If you choose to create a new zone, follow the [steps](https://developers.cloudflare.com/registrar/get-started/register-domain/) to register a new domain with Cloudflare.

The following example shows how to set up Logpush and Dedicated CDN Egress IPs to proxy an HTTPS destination, but the proxying should work for any supported Logpush destination, as all destinations use the HTTP protocol underneath.

## 1. Provision a dedicated egress IP pool

1. Work with your Cloudflare account team to purchase [Dedicated CDN Egress IPs](https://developers.cloudflare.com/smart-shield/configuration/dedicated-egress-ips/) for your zone.
2. (Optional but recommended) Request two IPs — one in PDX-B and one in SJC-A — to ensure coverage across regions.
3. Confirm the pool ID once it is provisioned.

## 2. Configure a zone

1. Register or use an existing zone for the dedicated egress IPs pool.
2. Contact your account team to get the ID for your dedicated egress IPs pool.
3. Make a `PATCH` request to the [Edit Zone Setting](https://developers.cloudflare.com/api/resources/zones/subresources/settings/methods/edit/) endpoint:
* Specify `aegis` as the setting ID in the URL.
* In the request body, set `enabled` to `true` and use the ID from the previous step as `pool_id`.

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Zone Settings Write`

Edit zone setting

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/aegis" \
  --request PATCH \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "id": "aegis",
    "value": {
      "enabled": true,
      "pool_id": "<YOUR_EGRESS_POOL_ID>"
    }
  }'
```

## 3. Proxy zone setup

1. In your zone, add a DNS record (CNAME or A/AAAA) with the **Target** set to your HTTP destination endpoint.
![Create a DNS record in the Cloudflare dashboard to define the HTTP destination endpoint](https://developers.cloudflare.com/_astro/endpoint.DmFFJC-j_14G61L.webp) 
2. If needed, configure [origin rules](https://developers.cloudflare.com/rules/origin-rules/) to specify a custom port. This is useful if your destination only accepts traffic on a non-standard port, for example `12345`. You can configure `logpush.yourdestinationendpoint.com` (without specifying a port, as Cloudflare by default only proxies traffic on HTTP/HTTPS ports) to proxy to `yourdestinationendpoint.com:12345`.

## 4. Configure Logpush

1. Create a Logpush job with the following details:
* Destination: HTTP
* Endpoint: Use the domain/path set up above (the Cloudflare dashboard will auto-validate the destination). Use the server name specified in the **Name** section of the DNS record, in this case `logpush.yourdestinationendpoint.com`.
![Enter destination details when creating a Logpush job in the Cloudflare dashboard](https://developers.cloudflare.com/_astro/destination-details.imLwZlEZ_PT9vI.webp) 
* Configuration: Select dataset, job name, filters, and fields. Refer to the [Logpush documentation](https://developers.cloudflare.com/logs/logpush/) for more details.
2. Check the destination to confirm that logs are received.

## 5. Secure your proxy zone endpoint

The proxy zone hostname is publicly resolvable, but traffic passes through Cloudflare's edge where you can apply security controls. Use the following best practices to protect your endpoint.

### Add a secret header with WAF validation

Add a secret token as an HTTP header in your Logpush job, then create a WAF rule to block requests without it. This is the recommended approach for most deployments.

**Configure Logpush with a secret header**

Any URL parameter starting with `header_` becomes an HTTP header in the request. When creating or updating your Logpush job, add the secret header to your destination URL:

```
https://logpush.yourdestinationendpoint.com?header_X-Logpush-Secret=YOUR_RANDOM_SECRET_TOKEN
```

Generate a strong random token using `openssl rand -hex 32`.

**Create a WAF custom rule**

In the proxy zone, go to **Security** > **WAF** > **Custom rules** and create a rule to block requests without the correct secret header.

* **Expression:**  
```  
(http.host eq "logpush.yourdestinationendpoint.com" and all(http.request.headers["x-logpush-secret"][*] ne "YOUR_RANDOM_SECRET_TOKEN"))  
```
* **Action:** Block

### Add ASN-based filtering

For defense in depth, add a rule to only allow traffic from Cloudflare's ASNs. Logpush traffic originates from Cloudflare's network (ASN 13335, 132892, or 202623).

* **Expression:**  
```  
(http.host eq "logpush.yourdestinationendpoint.com" and not ip.geoip.asnum in {13335 132892 202623})  
```
* **Action:** Block

Note

ASN filtering alone is insufficient because other Cloudflare customers' traffic also originates from these ASNs. Always combine with secret header validation.

### Use Access Service Tokens for high-security environments

For stronger authentication, use [Cloudflare Access Service Tokens](https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/) for machine-to-machine authentication. Create a Service Token in the Zero Trust dashboard, then configure Logpush with the Access headers:

```
https://logpush.yourdestinationendpoint.com?header_CF-Access-Client-Id=YOUR_CLIENT_ID&header_CF-Access-Client-Secret=YOUR_CLIENT_SECRET
```

### Verify your security configuration

Test that your WAF rules are blocking unauthorized requests:


```
$ curl https://logpush.yourdestinationendpoint.com
# Expected: error code: 1020

$ curl -H "X-Logpush-Secret: wrong-token" https://logpush.yourdestinationendpoint.com
# Expected: error code: 1020
```

Check Cloudflare Analytics for the proxy zone to confirm Logpush traffic is flowing, and monitor WAF events to ensure unauthorized requests are blocked.


---

---
title: Enable Elastic
description: Push your Cloudflare logs to Elastic for instant visibility and insights. Enabling this integration with Elastic comes with a predefined dashboard to view all of your Cloudflare observability and security data with ease.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Elastic

Push your Cloudflare logs to Elastic for instant visibility and insights. Enabling this integration with Elastic comes with a predefined dashboard to view all of your Cloudflare observability and security data with ease.

The Cloudflare Logpush integration can be used in three different modes to collect data:

* **HTTP Endpoint mode** \- Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent.
* **AWS S3 polling mode** \- Cloudflare writes data to S3, and the Elastic Agent polls the S3 bucket by listing its contents and reading new files.
* **AWS S3 SQS mode** \- Cloudflare writes data to S3, S3 pushes a new object notification to SQS, the Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple Agents can be used in this mode.

Note

Elastic recommends the AWS S3 SQS mode.
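As a sketch of wiring the S3 bucket to SQS for that mode using the AWS CLI (the bucket name, queue ARN, and prefix are assumptions; refer to AWS documentation for details):

```
aws s3api put-bucket-notification-configuration \
  --bucket my-log-bucket \
  --notification-configuration '{
    "QueueConfigurations": [{
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:cloudflare-logs",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": { "Key": { "FilterRules": [{ "Name": "prefix", "Value": "http_requests/" }] } }
    }]
  }'
```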

## Enable Logpush Job in Cloudflare

Determine which method you want to use, and configure the appropriate Logpush job in the Cloudflare dashboard or via the API.

Elastic supports the default JSON format.

To push logs to an object storage for short term storage and buffering before ingesting into Elastic (recommended), follow the instructions to configure a Logpush job to push logs to [AWS S3](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/), [Google Cloud Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/google-cloud-storage/), or [Azure Blob Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/).

To use the [HTTP Endpoint mode](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/http/), use the API to push logs to an HTTP endpoint backed by your Elastic Agent.

For additional security, configure the same custom header and value on both the Logpush job and your Elastic Agent's HTTP endpoint.

For example, to create a job with a header and value for a particular dataset:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<PUBLIC_DOMAIN>",
    "destination_conf": "https://<PUBLIC_DOMAIN>:<PUBLIC_PORT>?header_<SECRET_HEADER>=<SECRET_VALUE>",
    "dataset": "http_requests",
    "output_options": {
      "field_names": [
        "RayID",
        "EdgeStartTimestamp"
      ],
      "timestamp_format": "rfc3339"
    }
  }'
```

## Enable the Integration in Elastic

Once the Logpush job is configured, follow Elastic's instructions for [setting up the Integration ↗](https://docs.elastic.co/integrations/cloudflare%5Flogpush) in the Elastic app.

## View Dashboards

Log in to your [Elastic account ↗](https://www.elastic.co/) to view prebuilt dashboards and configure alerts.


---

---
title: Enable Google Cloud Storage
description: Cloudflare Logpush supports pushing logs directly to Google Cloud Storage (GCS) via the Cloudflare dashboard or via API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Google Cloud Storage

Cloudflare Logpush supports pushing logs directly to Google Cloud Storage (GCS) via the Cloudflare dashboard or via API.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
1. In **Select a destination**, choose **Google Cloud Storage**.
2. Enter or select the following destination details:  
   * **Bucket** - GCS bucket name  
   * **Path** - bucket location within the storage container  
   * **Organize logs into daily subfolders** (recommended)  
   * For **Grant Cloudflare access to upload files to your bucket**, make sure your bucket has added Cloudflare’s IAM as a user with a [Storage Object Admin role ↗](https://cloud.google.com/storage/docs/access-control/iam-roles).

When you are done entering the destination details, select **Continue**.

1. To prove ownership, Cloudflare will send a challenge file to your designated destination. To find the token, select the **Open** button in the **Overview** tab of the ownership challenge file. Paste the **Ownership Token** into the Cloudflare dashboard to verify your access to the bucket, and select **Continue**.
2. Select the dataset to push to the storage service.
3. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
4. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
5. Select **Submit** once you are done configuring your logpush job.

## Create and get access to a GCS bucket

Cloudflare uses Google Cloud Identity and Access Management (IAM) to gain access to your bucket. The Cloudflare IAM service account needs admin permission for the bucket.

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

  
To enable Logpush to GCS:

1. Create a GCS bucket. Refer to [instructions from GCS ↗](https://cloud.google.com/storage/docs/creating-buckets#storage-create-bucket-console).
2. In **Storage** > **Browser** > **Bucket** > **Permissions**, add the member `logpush@cloudflare-data.iam.gserviceaccount.com` with the `Storage Object Admin` permission.
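Equivalently from the command line, a sketch using `gsutil` (replace the bucket name with your own):

```
gsutil iam ch \
  serviceAccount:logpush@cloudflare-data.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://my-log-bucket
```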

## Compression and decompressive transcoding

Logpush always delivers log files in gzip-compressed format. When uploading to GCS, Logpush sets `Content-Encoding: gzip` on the object metadata.

GCS performs [decompressive transcoding ↗](https://cloud.google.com/storage/docs/transcoding) by default. This means that when a client downloads an object stored with `Content-Encoding: gzip`, GCS may automatically decompress the file in transit if the client does not include `Accept-Encoding: gzip` in the request headers. When this happens, the downloaded file contains uncompressed data even though the filename retains the `.gz` extension.

To download log files in their original compressed format, include `Accept-Encoding: gzip` in your download request headers. For example, when using gsutil:

```
gsutil -h "Accept-Encoding: gzip" cp gs://your-bucket/path/file.log.gz .
```
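Alternatively, a sketch that downloads through the GCS JSON API directly (it assumes `gcloud` credentials are available; the bucket and object names are placeholders, and the object name must be URL-encoded):

```
curl --header "Accept-Encoding: gzip" \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  --output file.log.gz \
  "https://storage.googleapis.com/storage/v1/b/your-bucket/o/path%2Ffile.log.gz?alt=media"
```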


---

---
title: Enable HTTP destination
description: Cloudflare Logpush now supports the ability to send logs to configurable HTTP endpoints.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable HTTP destination

Cloudflare Logpush supports sending logs to configurable HTTP endpoints.

Note that when using Logpush to HTTP endpoints, Cloudflare customers are expected to perform their own authentication of the pushed logs. For example, customers may specify a secret token in the URL or an HTTP header of the Logpush destination.

Endpoint requirements

Cloudflare expects that the endpoint is available over HTTPS, using a trusted certificate. The endpoint must accept `POST` requests.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
1. In **Select a destination**, choose **HTTP destination**.
2. Enter the **HTTP endpoint** where you want to send the logs, and select **Continue**. You can use `"header_*"` URL parameters to set request headers, for example, to pass an authentication token to your HTTP endpoint.
3. Select the dataset to push to the storage service.
4. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
5. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339`(default),`Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
6. Select **Submit** once you are done configuring your logpush job.

## Manage via API

To create a Logpush job, make a `POST` request to the [Logpush job creation endpoint URL](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/) with the appropriate parameters.

The supported parameters are as follows:

* Fields that are unchanged from other sources:  
   * **dataset** (required): For example, `http_requests`.  
   * **name** (optional): We suggest using your domain name as the job name.  
   * **output\_options** (optional): Refer to [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/) to configure fields, sample rate, and timestamp format.
* Unique fields:  
   * **destination\_conf**: Where to send the logs. This consists of an endpoint URL and HTTP headers used.  
         * Any `"header_*"` URL parameters will be used to set request headers.  
                  * The HTTPS endpoint cannot have custom URL parameters that conflict with any `"header_*"` URL parameters you have set.  
                  * These parameters must be properly URL-encoded (that is, use `"%20"` for a space); otherwise, some special characters may be decoded incorrectly.  
         * `destination_conf` may have more URL parameters in addition to special `"header_*"` parameters.  
                  * Non-URL-encoded special characters will be encoded when uploading.  
         * Example: `https://logs.example.com?header_Authorization=Basic%20REDACTED&tags=host:theburritobot.com,dataset:http_requests`  
   * **max\_upload\_bytes** (optional): The maximum uncompressed file size of a batch of logs. This setting value must be between 5 MB and 1 GB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.  
   * **max\_upload\_records** (optional): The maximum number of log lines per batch. This setting must be between 1,000 and 1,000,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.

Note

The `ownership_challenge` parameter is not required to create a Logpush job to an HTTP endpoint. To validate the destination, Cloudflare uploads a gzipped test file named `test.txt.gz` whose compressed content is `{"content":"tests"}`. Make sure your endpoint accepts this upload; otherwise, job creation fails with an error such as `error validating destination: error writing object: error uploading`.
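
To check your endpoint ahead of job creation, you can simulate this validation upload yourself. This is a hedged sketch, not Cloudflare's exact request: it assumes the gzipped body is sent with a `Content-Encoding: gzip` header, and it reuses the example endpoint and authorization header from above.

Terminal window

```
# Mimic the test.txt.gz validation upload: a gzipped {"content":"tests"} body
echo '{"content":"tests"}' | gzip | curl --request POST \
  --header "Content-Encoding: gzip" \
  --header "Authorization: Basic REDACTED" \
  --data-binary @- \
  "https://logs.example.com"
```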

## Example curl request

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "theburritobot.com-https",
    "output_options": {
        "field_names": [
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    },
    "destination_conf": "https://logs.example.com?header_Authorization=Basic%20REDACTED&tags=host:theburritobot.com,dataset:http_requests",
    "max_upload_bytes": 5000000,
    "max_upload_records": 1000,
    "dataset": "http_requests",
    "enabled": true
  }'
```


---

---
title: Enable IBM Cloud Logs
description: Cloudflare Logpush supports pushing logs directly to IBM Cloud Logs via dashboard or API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable IBM Cloud Logs

Cloudflare Logpush supports pushing logs directly to IBM Cloud Logs via dashboard or API.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. In **Select a destination**, choose **IBM Cloud Logs**.
5. Enter the following destination information:
* **HTTP Source Address** \- For example, `ibmcl://<INSTANCE_ID>.ingress.<REGION>.logs.cloud.ibm.com/logs/v1/singles`.
* **IBM API Key** \- For more information, refer to the [IBM Cloud Logs documentation ↗](https://cloud.ibm.com/docs/cloud-logs).

When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
8. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
9. Select **Submit** once you are done configuring your logpush job.

## Manage via API

To set up an IBM Cloud Logs job:

1. Create a job with the appropriate endpoint URL and authentication parameters.
2. Enable the job to begin pushing logs.

Note

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

### 1\. Create a job

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

* **name** (optional) - Use your domain name as the job name.
* **output\_options** (optional) - This parameter is used to define the desired output format and structure. Below are the configurable fields:  
   * output\_type  
   * timestamp\_format  
   * batch\_prefix and batch\_suffix  
   * record\_prefix and record\_suffix  
   * record\_delimiter
* **destination\_conf** \- A log destination consisting of Instance ID, Region, and [IBM API Key ↗](https://cloud.ibm.com/docs/account?topic=account-iamtoken%5Ffrom%5Fapikey) in the string format below.

`ibmcl://<INSTANCE_ID>.ingress.<REGION>.logs.cloud.ibm.com/logs/v1/singles?ibm_api_key=<IBM_API_KEY>`

* **max\_upload\_records** (optional) - The maximum number of log lines per batch. This must be at least 1,000 lines. Note that there is no way to specify a minimum number of log lines per batch. This means that log files may contain many fewer lines than specified.
* **max\_upload\_bytes** (optional) - The maximum uncompressed file size for a batch of logs. We recommend a default value of 2 MB per upload based on IBM's limits, which our system will enforce for this destination. Since minimum file sizes cannot be set, log files may be smaller than the specified batch size.
* **dataset** \- The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "output_options": {
        "output_type": "ndjson",
        "timestamp_format": "rfc3339",
        "batch_prefix": "[",
        "batch_suffix": "]",
        "record_prefix": "{\"applicationName\":\"ibm-platform-log\",\"subsystemName\":\"internet-svcs:logpush\",\"text\":{",
        "record_suffix": "}}",
        "record_delimiter": ","
    },
    "destination_conf": "ibmcl://<INSTANCE_ID>.ingress.<REGION>.logs.cloud.ibm.com/logs/v1/singles?ibm_api_key=<IBM_API_KEY>",
    "max_upload_bytes": 2000000,
    "dataset": "http_requests",
    "enabled": true
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "dataset": "http_requests",
    "destination_conf": "ibmcl://<INSTANCE_ID>.ingress.<REGION>.logs.cloud.ibm.com/logs/v1/singles?ibm_api_key=<IBM_API_KEY>",
    "enabled": true,
    "error_message": null,
    "id": <JOB_ID>,
    "kind": "",
    "last_complete": null,
    "last_error": null,
    "output_options": {
      "output_type": "ndjson",
      "timestamp_format": "rfc3339",
      "batch_prefix": "[",
      "batch_suffix": "]",
      "record_prefix": "{\"applicationName\":\"ibm-platform-log\",\"subsystemName\":\"internet-svcs:logpush\",\"text\":{",
      "record_suffix": "}}",
      "record_delimiter": ","
    },
    "max_upload_bytes": 2000000,
    "name": "<DOMAIN_NAME>"
  },
  "success": true
}
```

### 2\. Enable (update) a job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. You will use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/)is required:
* `Logs Write`

Update Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "enabled": true
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "dataset": "http_requests",
    "destination_conf": "ibmcl://<INSTANCE_ID>.ingress.<REGION>.logs.cloud.ibm.com/logs/v1/singles?ibm_api_key=<IBM_API_KEY>",
    "enabled": true,
    "error_message": null,
    "id": <JOB_ID>,
    "kind": "",
    "last_complete": null,
    "last_error": null,
    "output_options": {
      "output_type": "ndjson",
      "timestamp_format": "rfc3339",
      "batch_prefix": "[",
      "batch_suffix": "]",
      "record_prefix": "{\"applicationName\":\"ibm-platform-log\",\"subsystemName\":\"internet-svcs:logpush\",\"text\":{",
      "record_suffix": "}}",
      "record_delimiter": ","
    },
    "max_upload_bytes": 2000000,
    "name": "<DOMAIN_NAME>"
  },
  "success": true
}
```


---

---
title: Enable IBM QRadar
description: To configure a QRadar/Cloudflare integration you have the option to use one of the following methods:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable IBM QRadar

To configure a QRadar/Cloudflare integration, you can use one of the following methods:

* [HTTP Receiver protocol](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/ibm-qradar/#http-receiver-protocol)
* [Amazon AWS S3 Rest API](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/ibm-qradar/#amazon-aws-s3-rest-api)

## HTTP Receiver Protocol

To send Cloudflare logs to QRadar you need to create a [Logpush job to HTTP endpoints](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/http/) via API. Below you can find two curl examples of how to send Cloudflare Firewall events and Cloudflare HTTP events to QRadar.

### Cloudflare Firewall events

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<NAME>",
    "output_options": {
        "field_names": [
            "Action",
            "ClientIP",
            "ClientASN",
            "ClientASNDescription",
            "ClientCountry",
            "ClientIPClass",
            "ClientRefererHost",
            "ClientRefererPath",
            "ClientRefererQuery",
            "ClientRefererScheme",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestPath",
            "ClientRequestProtocol",
            "ClientRequestQuery",
            "ClientRequestScheme",
            "ClientRequestUserAgent",
            "EdgeColoCode",
            "EdgeResponseStatus",
            "Kind",
            "MatchIndex",
            "Metadata",
            "OriginResponseStatus",
            "OriginatorRayID",
            "RayID",
            "RuleID",
            "Source",
            "Datetime"
        ],
        "timestamp_format": "rfc3339"
    },
    "destination_conf": "<QRADAR_URL>:<LOG_SOURCE_PORT>",
    "max_upload_bytes": 5000000,
    "max_upload_records": 1000,
    "dataset": "firewall_events",
    "enabled": true
  }'
```

### Cloudflare HTTP events

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<NAME>",
    "output_options": {
        "field_names": [
            "ClientRequestMethod",
            "EdgeResponseStatus",
            "ClientIP",
            "ClientSrcPort",
            "CacheCacheStatus",
            "ClientCountry",
            "ClientDeviceType",
            "ClientIPClass",
            "ClientMTLSAuthCertFingerprint",
            "ClientMTLSAuthStatus",
            "ClientRegionCode",
            "ClientRequestBytes",
            "ClientRequestHost",
            "ClientRequestPath",
            "ClientRequestProtocol",
            "ClientRequestReferer",
            "ClientRequestScheme",
            "ClientRequestSource",
            "ClientRequestURI",
            "ClientRequestUserAgent",
            "ClientSSLCipher",
            "ClientSSLProtocol",
            "ClientXRequestedWith",
            "EdgeEndTimestamp",
            "EdgeRequestHost",
            "EdgeResponseBodyBytes",
            "EdgeResponseBytes",
            "EdgeServerIP",
            "EdgeStartTimestamp",
            "SecurityActions",
            "SecurityRuleIDs",
            "SecuritySources",
            "OriginIP",
            "OriginResponseStatus",
            "OriginSSLProtocol",
            "ParentRayID",
            "RayID",
            "SecurityAction",
            "WAFAttackScore",
            "SecurityRuleID",
            "SecurityRuleDescription",
            "WAFSQLiAttackScore",
            "WAFXSSAttackScore"
        ],
        "timestamp_format": "rfc3339"
    },
    "destination_conf": "<QRADAR_URL>:<LOG_SOURCE_PORT>",
    "max_upload_bytes": 5000000,
    "max_upload_records": 1000,
    "dataset": "http_requests",
    "enabled": true
  }'
```

Cloudflare checks the accessibility of the IP address and port, and validates the certificate of the HTTP Receiver log source. If all parameters are valid, a Logpush job is created and starts sending events to the HTTP Receiver log source.

## Amazon AWS S3 Rest API

When you use the Amazon S3 REST API protocol, IBM QRadar collects Cloudflare Log events from an Amazon S3 bucket. To use this option, you need to:

1. Create an [Amazon S3 bucket ↗](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html) to store your Cloudflare Logs. Make a note of the bucket name and the AWS access key ID and secret access key with sufficient permissions to write to the bucket.
2. [Enable a Logpush to Amazon S3](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/).
3. In the AWS Management Console, go to the Amazon S3 service. Create a bucket endpoint to allow Cloudflare to send logs directly to the S3 bucket.
4. Follow the steps in [Integrate Cloudflare Logs with QRadar by using the Amazon AWS S3 REST API protocol ↗](https://www.ibm.com/docs/en/dsm?topic=configuration-cloudflare-logs).
5. Test the configuration by generating some logs in Cloudflare and ensuring that they are delivered to the S3 bucket and subsequently forwarded to QRadar (see the delivery check below).
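
As a quick sanity check for step 5, you can list the most recently delivered objects in the bucket with the AWS CLI (a minimal sketch; the bucket name and path are placeholders):

Terminal window

```
# List delivered Logpush objects, newest entries last
aws s3 ls s3://<BUCKET_NAME>/<BUCKET_PATH>/ --recursive --human-readable | tail
```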


---

---
title: Enable Amazon Kinesis
description: Logpush supports Amazon Kinesis as a destination for all datasets. Each Kinesis record that Logpush sends will contain a batch of GZIP-compressed data in newline-delimited JSON format (by default), or in the format specified in the output_options parameter when the job was created.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Amazon Kinesis

Logpush supports [Amazon Kinesis ↗](https://aws.amazon.com/kinesis/) as a destination for all datasets. Each Kinesis record that Logpush sends will contain a batch of GZIP-compressed data in newline-delimited JSON format (by default), or in the format specified in the [output\_options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/) parameter when the job was created.

## Configure Kinesis using STS Assume Role (recommended)

1. Create an IAM role for Cloudflare Logpush to assume, with the following trust relationship:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::391854517948:user/cloudflare-logpush"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

2. Ensure that the IAM role has permissions to perform the `PutRecord` action on your Kinesis stream. Replace `<AWS_REGION>`, `<YOUR_AWS_ACCOUNT_ID>`, and `<STREAM_NAME>` with your own values:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "kinesis:PutRecord",
            "Resource": "arn:aws:kinesis:<AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:stream/<STREAM_NAME>"
        }
    ]
}
```
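
As a sketch of how the role and policy above could be created with the AWS CLI (the role name, policy name, and file paths are illustrative, not prescribed by Cloudflare):

Terminal window

```
# Create the role with the trust policy that lets Cloudflare's Logpush user assume it
aws iam create-role \
  --role-name cloudflare-logpush-role \
  --assume-role-policy-document file://trust-policy.json

# Attach the inline policy that allows PutRecord on the target stream
aws iam put-role-policy \
  --role-name cloudflare-logpush-role \
  --policy-name cloudflare-logpush-kinesis \
  --policy-document file://kinesis-policy.json
```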

3. Create a Logpush job, using the following format for the `destination_conf` field:

Terminal window

```
kinesis://<STREAM_NAME>?region=<AWS_REGION>&sts-assume-role-arn=arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>
```

4. (Optional) When using STS Assume Role, you can include `sts-external-id` as a `destination_conf` parameter so it is included in your Logpush job's requests to Kinesis. Refer to [Securely Using External ID for Accessing AWS Accounts Owned by Others ↗](https://aws.amazon.com/blogs/apn/securely-using-external-id-for-accessing-aws-accounts-owned-by-others/) for more information.

Terminal window

```
kinesis://<STREAM_NAME>?region=<AWS_REGION>&sts-assume-role-arn=arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>&sts-external-id=<EXTERNAL_ID>
```

### STS Assume Role example

Terminal window

```
curl https://api.cloudflare.com/client/v4/zones/$ZONE_TAG/logpush/jobs \
-H 'Authorization: Bearer <API_TOKEN>' \
-H 'Content-Type: application/json' -d '{
  "name": "kinesis",
  "destination_conf": "kinesis://<STREAM_NAME>?region=<AWS_REGION>&sts-assume-role-arn=arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>",
  "dataset": "http_requests",
  "enabled": true
}'
```

## Configure Kinesis using IAM Access Keys

When configuring your Logpush job using IAM Access Keys, ensure that the IAM user has permission to perform the `PutRecord` action on your Kinesis stream:

Terminal window

```
kinesis://<STREAM_NAME>?region=<AWS_REGION>&access-key-id=<AWS_ACCESS_KEY_ID>&secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

### IAM Access Key example

Terminal window

```
curl https://api.cloudflare.com/client/v4/zones/$ZONE_TAG/logpush/jobs \
-H 'Authorization: Bearer <API_TOKEN>' \
-H 'Content-Type: application/json' -d '{
  "name": "kinesis",
  "destination_conf": "kinesis://<STREAM_NAME>?region=<AWS_REGION>&access-key-id=<AWS_ACCESS_KEY_ID>&secret-access-key=<AWS_SECRET_ACCESS_KEY>",
  "dataset": "http_requests",
  "enabled": true
}'
```


---

---
title: Enable New Relic
description: Cloudflare Logpush supports pushing logs directly to New Relic via the Cloudflare dashboard or via API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable New Relic

Cloudflare Logpush supports pushing logs directly to New Relic via the Cloudflare dashboard or via API.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. In **Select a destination**, choose **New Relic**.
5. Enter the **New Relic Logs Endpoint**:

* US: `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
* EU: `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`

Use the region that matches your New Relic account. The **License key** can be found in the New Relic dashboard; it can be retrieved by following [these steps ↗](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#manage-license-key).
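
To verify the license key before creating the job, you can send a test event to the same Log API endpoint (a minimal sketch; the payload is illustrative):

Terminal window

```
curl "https://log-api.newrelic.com/log/v1" \
  --header "Api-Key: <NR_LICENSE_KEY>" \
  --header "Content-Type: application/json" \
  --data '[{"message": "cloudflare logpush destination test"}]'
```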

When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
8. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
9. Select **Submit** once you are done configuring your logpush job.

## Manage via API

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

### 1\. Create a job

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

* **name** (optional) - Use your domain name as the job name.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/).  
Note  
To query Cloudflare logs, New Relic requires timestamp fields to be sent as UNIX timestamps.
* **destination\_conf** \- A log destination consisting of an endpoint URL, a license key, and a format in the string format below.  
   * `<NR_ENDPOINT_URL>`: The New Relic HTTP logs intake endpoint, which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for the EU, depending on the region that has been set on your New Relic account.  
   * `<NR_LICENSE_KEY>`: This key can be found on the New Relic dashboard and it can be retrieved by following [these steps ↗](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#manage-license-key).  
   * `format`: The format is `cloudflare`.  
   US: `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`  
   EU: `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
* **max\_upload\_records** (optional) - The maximum number of log lines per batch. This must be at least 1,000 lines. Note that there is no way to specify a minimum number of log lines per batch. This means that log files may contain many fewer lines than specified.
* **max\_upload\_bytes** (optional) - The maximum uncompressed file size of a batch of logs. This must be at least 5 MB. Note that there is no way to set a minimum file size. This means that log files may be much smaller than this batch size. Nevertheless, it is recommended to set this parameter to 5,000,000.
* **dataset** \- The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "output_options": {
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "unix"
    },
    "destination_conf": "https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare",
    "max_upload_bytes": 5000000,
    "dataset": "http_requests",
    "enabled": true
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "dataset": "http_requests",
    "destination_conf": "https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare",
    "enabled": true,
    "error_message": null,
    "id": <JOB_ID>,
    "kind": "",
    "last_complete": null,
    "last_error": null,
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "unix"
    },
    "max_upload_bytes": 5000000,
    "name": "<DOMAIN_NAME>"
  },
  "success": true
}
```

### 2\. Enable (update) a job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. You will use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Update Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "enabled": true
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "dataset": "http_requests",
    "destination_conf": "https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare",
    "enabled": true,
    "error_message": null,
    "id": <JOB_ID>,
    "kind": "",
    "last_complete": null,
    "last_error": null,
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "unix"
    },
    "max_upload_bytes": 5000000,
    "name": "<DOMAIN_NAME>"
  },
  "success": true
}
```

Note

To analyze and visualize Cloudflare metrics using the Cloudflare Network Logs quickstart, follow the steps in the [New Relic Analytics integration page](https://developers.cloudflare.com/analytics/analytics-integrations/new-relic/).


---

---
title: Enable other providers
description: Cloudflare Logpush supports pushing logs to a limited set of service providers. However, you can configure Logpush via the API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable other providers

Cloudflare Logpush supports pushing logs to a limited set of service providers. However, you can configure Logpush via the API.

## Manage via the Cloudflare dashboard

Refer to [Enable destinations](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/) for the list of services you can configure to use with Logpush through the Cloudflare dashboard. Interested in a different service? Take this [survey ↗](https://docs.google.com/forms/d/e/1FAIpQLScwOSabROywVajpMX2ZYCVl3saYs11cP4NIC8QR-wmOAnxOtA/viewform).

## Manage via API

The Cloudflare Logpush API allows you to configure and manage jobs via create, retrieve, update, and delete operations (CRUD).
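
For example, you can list the jobs currently configured for a zone with a `GET` request to the same jobs endpoint used throughout this documentation:

Terminal window

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```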

With Logpush, you can create a job to upload logs of the metadata Cloudflare collects in batches as soon as possible to your cloud service provider. The default number of jobs that you can set up per dataset per domain is four, but you can set up more jobs depending on your plan and subscriptions.

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

To get started:

1. Set up a storage provider and grant Cloudflare access. Your storage provider may request your Cloudflare API credentials and other information including:  
   * Email address  
   * Cloudflare API key  
   * Zone ID  
   * Destination access details for your cloud service provider
2. Configure your Logpush job. For more information on how to configure a Logpush job, refer to [API configuration](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/).


---

---
title: Enable Cloudflare R2
description: Cloudflare Logpush supports pushing logs directly to R2. You can do so via the automatic setup (Cloudflare creates an R2 bucket for you), or you can create your own R2 bucket with the custom setup. The automatic setup is ideal for quickly setting up a bucket or for testing purposes. Instead, use the custom setup if you need full control over the configuration.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Cloudflare R2

Cloudflare Logpush supports pushing logs directly to R2. You can do so via the automatic setup (Cloudflare creates an R2 bucket for you), or you can create your own R2 bucket with the custom setup. The automatic setup is ideal for quickly setting up a bucket or for testing purposes. Instead, use the custom setup if you need full control over the configuration.

For more information about R2, refer to the [Cloudflare R2](https://developers.cloudflare.com/r2/) documentation.

Note

If you want to set up R2 as a destination for a zone on [FedRAMP High ↗](https://www.cloudflare.com/cloudflare-for-government/), you need to use an [S3-compatible endpoint](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints/) with the following `Endpoint URL`: `<ACCOUNT_ID>.r2.fed.cloudflarestorage.com`
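
Combined with the `destination_conf` format described in [Enable S3-compatible endpoints](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints/), a FedRAMP High destination might look like the following sketch (the `region=auto` value and the credential placeholders are assumptions based on R2's S3 API conventions):

Terminal window

```
s3://<BUCKET_NAME>/<BUCKET_PATH>?region=auto&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>&endpoint=<ACCOUNT_ID>.r2.fed.cloudflarestorage.com
```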

## Automatic setup

If you want to use the automatic setup for your logpush job:

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. Select **R2 Object Storage - automatic** as destination.
5. Next, select the dataset and the storage region you want to use.
6. To finalize, select **Create Logpush job**.

Your setup should now be complete. If you require full control over the configuration, consider using the custom setup instead.

## Custom setup

Cloudflare Logpush supports pushing logs directly to R2 via the Cloudflare dashboard or via API.

Before getting started:

* Create an R2 bucket and set up R2 API tokens.  
   1. Go to the R2 UI > **Create bucket**.  
   2. Select **Manage R2 API Tokens**.  
   3. Select **Create API token**.  
   4. Under **Permission**, select **Edit** permissions for your token.  
   5. Copy the Secret Access Key and Access Key ID. You will need these when setting up your Logpush job.
* Ensure that you have the following permissions:  
   * R2 write, Logshare Edit.

### Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. In **Select a destination**, choose **R2 Object Storage**.
5. Enter or select the following destination details:  
   * **Bucket** \- R2 bucket name  
   * **Path** \- bucket location, for example, `cloudflare-logs/http_requests/example.com`  
   * **Organize logs into daily subfolders** (recommended)  
   * Under **Authentication**, add your **R2 Access Key ID** and **R2 Secret Access Key**. Refer to [Manage R2 API tokens ↗](https://dash.cloudflare.com/b54f07a6c269ecca2fa60f1ae4920c99/r2/api-tokens) for more information.

When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
8. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
9. Select **Submit** once you are done configuring your logpush job.

### Manage via API

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

* **name** (optional) - Use your domain name as the job name.
* **destination\_conf** \- A log destination consisting of bucket path, account ID, R2 access key ID and R2 secret access key.

Note

We recommend adding the `{DATE}` parameter in the `destination_conf` to separate your logs into daily subfolders.

Terminal window

```
r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>
```

* **dataset** \- The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [API configuration options](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#options).

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "output_options": {
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    },
    "destination_conf": "r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>",
    "dataset": "http_requests",
    "enabled": true
  }'
```

## Download logs from R2

Once your logs are stored in R2, you can download them using various methods:

### Dashboard

1. In the Cloudflare dashboard, go to the **R2** page.  
[ Go to **Overview** ](https://dash.cloudflare.com/?to=/:account/r2/overview)
2. Select your bucket.
3. From your bucket's page, locate the desired log file.
4. Select the **...** icon next to the file to download it.
![Log files list](https://developers.cloudflare.com/_astro/logs-r2.BSx83Q8__1KKCo.webp) 

### AWS CLI

Cloudflare R2 is S3-compatible, so you can use the AWS CLI to interact with it.

* Configure the AWS CLI with your R2 credentials and your account's S3 API endpoint (`https://<ACCOUNT_ID>.r2.cloudflarestorage.com`).
* Use the `aws s3 cp` command to download the log file:

Terminal window

```
aws s3 cp s3://<BUCKET-NAME>/<PATH-TO-LOG-FILE> <LOCAL-DESTINATION> \
  --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com
```

Replace `<BUCKET-NAME>`, `<PATH-TO-LOG-FILE>`, `<LOCAL-DESTINATION>`, and `<ACCOUNT_ID>` with your specific details.

Downloaded files are gzipped, so they must be decompressed before you can open them in a text editor.
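
For example, to decompress a downloaded file and pretty-print its NDJSON lines (a minimal sketch; `jq` is optional and used only for formatting):

Terminal window

```
gunzip -c <PATH-TO-LOG-FILE> | jq .
```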


---

---
title: Enable S3-compatible endpoints
description: Cloudflare Logpush supports pushing logs to S3-compatible destinations via the Cloudflare dashboard or via API, including:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable S3-compatible endpoints

Cloudflare Logpush supports pushing logs to S3-compatible destinations via the Cloudflare dashboard or via API, including:

* [Alibaba Cloud OSS ↗](https://www.alibabacloud.com/help/doc-detail/64919.htm#title-37m-7gl-xy2)
* [Backblaze B2 ↗](https://www.backblaze.com/b2/docs/s3%5Fcompatible%5Fapi.html)
* [DigitalOcean Spaces ↗](https://www.digitalocean.com/docs/spaces/)
* [IBM Cloud Object Storage ↗](https://cloud.ibm.com/apidocs/cos/cos-compatibility)
* [JD Cloud Object Storage Service ↗](https://docs.jdcloud.com/en/object-storage-service/introduction-2)
* [Linode Object Storage ↗](https://www.linode.com/products/object-storage/)
* [Oracle Cloud Object Storage ↗](https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm)
* On-premise [Ceph Object Gateway ↗](https://docs.ceph.com/en/latest/radosgw/s3/)

For more information about Logpush and the current production APIs, refer to [Cloudflare Logpush](https://developers.cloudflare.com/logs/logpush/) documentation.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. In **Select a destination**, choose **S3-Compatible**.
5. Enter or select the following destination information:  
   * **Bucket** \- S3-compatible bucket name  
   * **Path** \- bucket location within the storage container  
   * **Organize logs into daily subfolders** (recommended)  
   * **Endpoint URL** \- The URL without the bucket name or path. For example, `sfo2.digitaloceanspaces.com`.  
   * **Bucket region**  
   * **Access Key ID**  
   * **Secret Access Key**

When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
8. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
9. Select **Submit** once you are done configuring your logpush job.

## Manage via API

To set up S3-compatible endpoints:

1. Create a job with the appropriate endpoint URL and authentication parameters.
2. Enable the job to begin pushing logs.

Note

Unlike Logpush jobs to Amazon S3, there is no ownership challenge with S3-compatible APIs.

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

### 1\. Create a job

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

* **name** (optional) - Use your domain name as the job name.
* **destination\_conf** \- A log destination consisting of an endpoint name, bucket name, bucket path, region, access-key-id, and secret-access-key in the following string format:

Terminal window

```
"s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>"
```

Note

`<ENDPOINT_URL>` is the URL without the bucket name or path. For example: `endpoint=sfo2.digitaloceanspaces.com`.

* **dataset** \- The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/).

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>",
    "output_options": {
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    },
    "dataset": "http_requests"
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

### 2\. Enable (update) a job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. You will use the job ID returned from the previous step in the URL, and send `{"enabled": true}` in the request body.

Example request using cURL:

Terminal window

```
curl --request PUT \
https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/jobs/{job_id} \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Content-Type: application/json" \
--data '{
  "enabled": true
}'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "s3://<BUCKET_NAME>/<BUCKET_PATH>?region=<REGION>&access-key-id=<ACCESS_KEY_ID>&secret-access-key=<SECRET_ACCESS_KEY>&endpoint=<ENDPOINT_URL>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```


---

---
title: Enable SentinelOne
description: The HTTP Event Collector (HEC) is a reliable method to send log data to SentinelOne Singularity Data Lake. Cloudflare Logpush supports pushing logs directly to SentinelOne HEC via the Cloudflare dashboard or API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable SentinelOne

The HTTP Event Collector (HEC) is a reliable method to send log data to SentinelOne Singularity Data Lake. Cloudflare Logpush supports pushing logs directly to SentinelOne HEC via the Cloudflare dashboard or API.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. In **Select a destination**, choose **SentinelOne**.
5. Enter or select the following destination information:  
   * **SentinelOne HEC URL**  
   * **Auth Token** - Event Collector token.  
   * **Source Type** - For example, `marketplace-cloudflare-latest`.

When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your Logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
8. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
9. Select **Submit** once you are done configuring your Logpush job.

## Manage via API

To set up a SentinelOne Logpush job:

1. Create a job with the appropriate endpoint URL and authentication parameters.
2. Enable the job to begin pushing logs.

Note

Unlike Logpush jobs for AWS S3, GCS, or Azure, Logpush to SentinelOne does not require an ownership challenge.

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

### 1. Create a job

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

* **name** (optional) - Use your domain name as the job name.
* **destination\_conf** - A log destination consisting of an endpoint URL, source type, and authorization header in the string format below.  
   * **SENTINELONE\_ENDPOINT\_URL**: The SentinelOne raw HTTP Event Collector URL. For example: `ingest.us1.sentinelone.net/services/collector/raw`. Cloudflare expects the SentinelOne endpoint to be `/services/collector/raw` when configuring and setting up the Logpush job.  
   * **SENTINELONE\_AUTH\_TOKEN**: The SentinelOne authorization token, URL-encoded. For example: `Bearer 0e6d94e8c-5792-4ad1-be3c-29bcaee0197d`.  
   * **SOURCE\_TYPE**: The SentinelOne source type. For example: `marketplace-cloudflare-latest`.

Terminal window

```
"sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>"
```
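
For instance, substituting the example values from this page (and URL-encoding the space in the token as `%20`) yields a destination string like the following sketch; the endpoint and token shown are this page's placeholders, not live credentials:

```
"sentinelone://ingest.us1.sentinelone.net/services/collector/raw?sourcetype=marketplace-cloudflare-latest&header_Authorization=Bearer%200e6d94e8c-5792-4ad1-be3c-29bcaee0197d"
```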

* **dataset** - The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/). For timestamp, Cloudflare recommends using `timestamps=rfc3339`.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>",
    "output_options": {
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    },
    "dataset": "http_requests"
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

### 2. Enable (update) a job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. Use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Update Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "enabled": true
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

Refer to the [Logpush FAQ](https://developers.cloudflare.com/logs/faq/logpush/) for troubleshooting information.


---

---
title: Enable Splunk
description: The HTTP Event Collector (HEC) is a reliable method to receive data from Splunk Enterprise or Splunk Cloud Platform. Cloudflare Logpush supports pushing logs directly to Splunk HEC via the Cloudflare dashboard or API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Splunk

The [HTTP Event Collector (HEC) ↗](https://dev.splunk.com/enterprise/docs/devtools/httpeventcollector/) is a reliable method to receive data from Splunk Enterprise or Splunk Cloud Platform. Cloudflare Logpush supports pushing logs directly to Splunk HEC via the Cloudflare dashboard or API.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. In **Select a destination**, choose **Splunk**.
5. Enter or select the following destination information:  
   * **Splunk HEC URL**  
   * **Channel ID** - This is a random GUID that you can generate using [guidgenerator.com ↗](https://guidgenerator.com/).  
   * **Auth Token** - Event Collector token prefixed with the word `Splunk`. For example: `Splunk 1234EXAMPLEKEY`.  
   * **Source Type** - For example, `cloudflare:json`. If you are using the [Cloudflare App for Splunk ↗](https://splunkbase.splunk.com/app/4501), refer to the appropriate source type for the corresponding datasets under the **Details** section. For instance, for Zero Trust Access requests logs, the source type is `cloudflare:access`.  
   * **Use insecure skip verify option** (not recommended).

When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your Logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
8. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
9. Select **Submit** once you are done configuring your Logpush job.

## Manage via API

To set up a Splunk Logpush job:

1. Create a job with the appropriate endpoint URL and authentication parameters.
2. Enable the job to begin pushing logs.

Note

Unlike Logpush jobs for AWS S3, GCS, or Azure, Logpush to Splunk does not require an ownership challenge.

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

### 1. Create a job

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

* **name** (optional) - Use your domain name as the job name.
* **destination\_conf** - A log destination consisting of an endpoint URL, channel ID, insecure-skip-verify flag, source type, and authorization header in the string format below.  
   * **<SPLUNK\_ENDPOINT\_URL>**: The Splunk raw HTTP Event Collector URL with port. For example: `splunk.cf-analytics.com:8088/services/collector/raw`.  
         * Cloudflare expects the Splunk endpoint to be `/services/collector/raw` when configuring and setting up the Logpush job.  
         * Ensure you have enabled HEC in Splunk. Refer to [Splunk Analytics Integrations](https://developers.cloudflare.com/analytics/analytics-integrations/splunk/) for information on how to set up HEC in Splunk.  
         * An API request may fail with a 504 error when an incorrect URL is provided. Splunk Cloud endpoint URLs usually contain `http-inputs-` or similar text before the hostname.  
   * **<SPLUNK\_CHANNEL\_ID>**: A unique channel ID. This is a random GUID that you can generate by:  
         * Using an online tool like the [GUID generator ↗](https://www.guidgenerator.com/).  
         * Using the command line. For example: `python -c 'import uuid; print(uuid.uuid4())'`.  
   * **<INSECURE\_SKIP\_VERIFY>**: Boolean value. Cloudflare recommends setting this value to `false`. Setting this value to `true` is equivalent to using the `-k` option with `curl` as shown in Splunk examples and is **not** recommended. Only set this value to `true` when HEC uses a self-signed certificate.  
Note  
Cloudflare highly recommends setting this value to `false`. Refer to the [Logpush FAQ](https://developers.cloudflare.com/logs/faq/logpush/) for more information.  
   * **<SOURCE\_TYPE>**: The Splunk source type. For example: `cloudflare:json`.  
   * **<SPLUNK\_AUTH\_TOKEN>**: The Splunk authorization token, URL-encoded and prefixed with the word `Splunk`. For example: `Splunk e6d94e8c-5792-4ad1-be3c-29bcaee0197d`.

Terminal window

```
"splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>"
```
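
As a concrete sketch using this page's example values (the channel GUID below is illustrative, and the space in the `Splunk` token is URL-encoded as `%20`):

```
"splunk://splunk.cf-analytics.com:8088/services/collector/raw?channel=04fb4d49-1d20-4d35-bcbb-d4c2f5da8f66&insecure-skip-verify=false&sourcetype=cloudflare:json&header_Authorization=Splunk%20e6d94e8c-5792-4ad1-be3c-29bcaee0197d"
```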

* **dataset** - The category of logs you want to receive. Refer to [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/). For timestamp, Cloudflare recommends using `timestamps=rfc3339`.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>",
    "output_options": {
        "field_names": [
            "ClientIP",
            "ClientRequestHost",
            "ClientRequestMethod",
            "ClientRequestURI",
            "EdgeEndTimestamp",
            "EdgeResponseBytes",
            "EdgeResponseStatus",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "timestamp_format": "rfc3339"
    },
    "dataset": "http_requests"
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

### 2. Enable (update) a job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. Use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Update Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --request PUT \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "enabled": true
  }'
```

Response:

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

Refer to the [Logpush FAQ](https://developers.cloudflare.com/logs/faq/logpush/) for troubleshooting information.

### 3. Create WAF custom rule for Splunk HEC endpoint (optional)

If your logpush destination hostname is proxied through Cloudflare, and you have the Cloudflare Web Application Firewall (WAF) turned on, you may be challenged or blocked when Cloudflare makes a request to Splunk HTTP Event Collector (HEC). To make sure this does not happen, you have to create a [custom rule](https://developers.cloudflare.com/waf/custom-rules/) that allows Cloudflare to bypass the HEC endpoint.

**New dashboard**

1. In the Cloudflare dashboard, go to the **Security rules** page.  
[ Go to **Security rules** ](https://dash.cloudflare.com/?to=/:account/:zone/security/security-rules)
2. Select **Create rule** > **Custom rules**.
3. Enter a descriptive name for the rule (for example, `Splunk`).
4. Under **When incoming requests match**, use the **Field**, **Operator**, and **Value** dropdowns to create a rule. After finishing each row, select **And** to create the next row of rules. Refer to the table below for the values you should input:  
| Field            | Operator | Value                                                               |  
| ---------------- | -------- | ------------------------------------------------------------------- |  
| Request Method   | equals   | POST                                                                |  
| Hostname         | equals   | Your Splunk endpoint hostname. For example: splunk.cf-analytics.com |  
| URI Path         | equals   | /services/collector/raw                                             |  
| URI Query String | contains | channel                                                             |  
| AS Num           | is in    | 13335, 132892, 202623                                               |  
| User Agent       | equals   | Go-http-client/2.0                                                  |
5. After inputting the values as shown in the table, you should have an Expression Preview with the values you added for your specific rule. The example below reflects the hostname `splunk.cf-analytics.com`.  
```  
(http.request.method eq "POST" and http.host eq "splunk.cf-analytics.com" and http.request.uri.path eq "/services/collector/raw" and http.request.uri.query contains "channel" and ip.geoip.asnum in {13335 132892 202623} and http.user_agent eq "Go-http-client/2.0")  
```
6. Under the **Then** > **Choose an action** dropdown, select _Skip_.
7. Under **WAF components to skip**, select _All managed rules_.
8. Select **Deploy**.

**Old dashboard**

1. Log in to the [Cloudflare dashboard ↗](https://dash.cloudflare.com/) and select your account. Go to **Security** > **WAF** > **Custom rules**.
2. Select **Create rule** and enter a descriptive name for it (for example, `Splunk`).
3. Under **When incoming requests match**, use the **Field**, **Operator**, and **Value** dropdowns to create a rule. After finishing each row, select **And** to create the next row of rules. Refer to the table below for the values you should input:  
| Field            | Operator | Value                                                               |  
| ---------------- | -------- | ------------------------------------------------------------------- |  
| Request Method   | equals   | POST                                                                |  
| Hostname         | equals   | Your Splunk endpoint hostname. For example: splunk.cf-analytics.com |  
| URI Path         | equals   | /services/collector/raw                                             |  
| URI Query String | contains | channel                                                             |  
| AS Num           | is in    | 13335, 132892, 202623                                               |  
| User Agent       | equals   | Go-http-client/2.0                                                  |
4. After inputting the values as shown in the table, you should have an Expression Preview with the values you added for your specific rule. The example below reflects the hostname `splunk.cf-analytics.com`.  
```  
(http.request.method eq "POST" and http.host eq "splunk.cf-analytics.com" and http.request.uri.path eq "/services/collector/raw" and http.request.uri.query contains "channel" and ip.geoip.asnum in {13335 132892 202623} and http.user_agent eq "Go-http-client/2.0")  
```
5. Under the **Then** > **Choose an action** dropdown, select _Skip_.
6. Under **WAF components to skip**, select _All managed rules_.
7. Select **Deploy**.

The WAF should now ignore requests made to Splunk HEC by Cloudflare.

Note

To analyze and visualize Cloudflare Logs using the Cloudflare App for Splunk, follow the steps in the [Splunk Analytics integration page](https://developers.cloudflare.com/analytics/analytics-integrations/splunk/).

## Troubleshooting Splunk destinations

### Validating destination errors

If you receive a validation error while setting up a Splunk job, check the following:

* **Endpoint URL**: Cloudflare only supports the Splunk HEC raw endpoint over HTTPS. Verify your endpoint URL is correct and includes the port (typically `:8088`).
* **Authentication token**: Ensure the Splunk authentication token is URL-encoded and prefixed with `Splunk`. For example, use `%20` for spaces in the token.
* **Certificate configuration**: Certificates generated by Splunk or third-party certificates must have the **Common Name** field match the Splunk server's domain name. Otherwise, you may see errors like: `x509: certificate is valid for SplunkServerDefaultCert, not <YOUR_INSTANCE>.splunkcloud.com`.

### Understanding insecure-skip-verify

The `insecure-skip-verify` parameter, when set to `true`, makes an insecure connection to Splunk. This is equivalent to using the `-k` option with `curl` and is **not recommended**.

**Why this parameter exists**: Certificates generated by Splunk or third-party certificates should have the **Common Name** field match the Splunk server's domain name. When they do not match (especially with default certificates generated by Splunk on startup), pushes will fail unless certificates are fixed. This parameter exists for rare scenarios where you cannot access or modify certificates, such as with Splunk Cloud instances that do not allow changing server configurations.

Warning

Cloudflare highly recommends setting `insecure-skip-verify` to `false`. Only set this to `true` when HEC uses a self-signed certificate and fixing the certificates is not possible.

### Verifying HEC before setup

Before creating a Logpush job, verify that your Splunk HEC is working correctly by publishing test events through `curl` without the `-k` flag and with `insecure-skip-verify=false`:

Terminal window

```
curl "https://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=false&sourcetype=<SOURCE_TYPE>" \
--header "Authorization: Splunk <SPLUNK_AUTH_TOKEN>" \
--data '{"BotScore":99,"BotScoreSrc":"Machine Learning","CacheCacheStatus":"miss","CacheResponseBytes":2478}'
```

Expected response:

```
{"text":"Success","code":0}
```

### Network port requirements

Cloudflare expects the HEC network port to be configured to `:443` or `:8088`. Other ports are not supported.

### Cloudflare Splunk App integration

Logpush integrates with the [Cloudflare App for Splunk ↗](https://splunkbase.splunk.com/app/4501/). As long as you ingest logs using the `cloudflare:json` source type, you can use the Cloudflare Splunk App to analyze and visualize your logs.

For detailed setup instructions, refer to [Splunk Analytics integration](https://developers.cloudflare.com/analytics/analytics-integrations/splunk/).


---

---
title: Enable Sumo Logic
description: Cloudflare Logpush supports pushing logs directly to Sumo Logic via the Cloudflare dashboard or via API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enable Sumo Logic

Cloudflare Logpush supports pushing logs directly to Sumo Logic via the Cloudflare dashboard or via API.

## Manage via the Cloudflare dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Select **Create a Logpush job**.
4. In **Select a destination**, choose **Sumo Logic**.
5. Enter the **HTTP Source Address**. To get the HTTP Source Address (URL), configure a [Sumo Logic Hosted Collector ↗](https://help.sumologic.com/docs/send-data/hosted-collectors/) with an [HTTP Logs & Metrics Source ↗](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/logs-metrics/). Note that the same collector can be used for multiple Logpush jobs, but each job must have a dedicated source. When you are done entering the destination details, select **Continue**.
6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your Logpush job:  
   * Enter the **Job name**.  
   * Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.  
   * In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
8. In **Advanced Options**, you can:  
   * Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).  
   * Select a [sampling rate](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.  
   * Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
9. Select **Submit** once you are done configuring your Logpush job.

## Configure a Hosted Collector

Cloudflare can send logs to a Hosted Collector with **HTTP Logs & Metrics** as the source. Once you have set up a collector, you simply provide the HTTP Source Address (a unique URL) to which logs can be posted.

Ensure **Log Share** permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the [Roles section](https://developers.cloudflare.com/logs/logpush/permissions/#roles).

  
To enable Logpush to Sumo Logic:

1. Configure a Hosted Collector. Refer to [instructions from Sumo Logic ↗](https://help.sumologic.com/docs/send-data/hosted-collectors/configure-hosted-collector/).
2. Configure an HTTP Logs & Metrics Source. Refer to [instructions from Sumo Logic ↗](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/). The last step indicates how to get the HTTP Source Address (URL).
3. Provide the HTTP Source Address (URL) when prompted by the Logpush API or UI.
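
Optionally, before creating the job, you can sanity-check the source by posting a test message to the HTTP Source Address with cURL. This is a minimal sketch; `<HTTP_SOURCE_ADDRESS>` is the URL Sumo Logic generated for your source, and the payload is an arbitrary test message. Sumo Logic's HTTP source accepts plain POST bodies, so a `200 OK` response indicates the collector is reachable:

```
curl --request POST "<HTTP_SOURCE_ADDRESS>" \
  --data '{"message": "Cloudflare Logpush connectivity test"}'
```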

Notes

* Logpush will stop working if you regenerate the HTTP Source Address (URL). Refer to [generate a new URL for an HTTP Source from Sumo Logic ↗](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/generate-new-url/). To use the new URL, you will have to get a new ownership challenge and update the destination for your job.
* Sumo Logic may impose throttling and caps on your log ingestion to prevent your account from using **On-Demand Capacity**. Refer to [manage ingestion ↗](https://help.sumologic.com/docs/manage/ingestion-volume/log-ingestion/).
* To analyze and visualize Cloudflare Logs using the Cloudflare App for Sumo Logic, follow the steps in the Sumo Logic integration documentation to [install the Cloudflare App ↗](https://help.sumologic.com/docs/integrations/saas-cloud/cloudflare/#installing-the-cloudflare-app) and [view the Cloudflare dashboards ↗](https://help.sumologic.com/docs/integrations/saas-cloud/cloudflare/#viewing-the-cloudflare-dashboards).


---

---
title: Axiom
image: https://developers.cloudflare.com/core-services-preview.png
---


# Axiom


---

---
title: Exabeam
image: https://developers.cloudflare.com/core-services-preview.png
---


# Exabeam


---

---
title: Sekoia
image: https://developers.cloudflare.com/core-services-preview.png
---


# Sekoia


---

---
title: Taegis
image: https://developers.cloudflare.com/core-services-preview.png
---


# Taegis


---

---
title: Filters
image: https://developers.cloudflare.com/core-services-preview.png
---


# Filters

The following table lists the supported comparison operators, with example values. Filters are added as escaped JSON strings formatted as `{"key":"<field>","operator":"<comparison_operator>","value":"<value>"}`.

* Refer to the [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) page for a list of fields related to each dataset.
* Comparison operators define how values must relate to fields in the log line for an expression to return true.
* Values represent the data associated with fields.

| Name                            | Operator Notation | String | Int | Bool | Array | Object | Example                                                              |
| ------------------------------- | ----------------- | ------ | --- | ---- | ----- | ------ | -------------------------------------------------------------------- |
| Equal                           | eq                | ✅      | ✅   | ✅    | ❌     | ❌      | {"key":"ClientRequestHost","operator":"eq","value":"example.com"}    |
| Not equal                       | !eq               | ✅      | ✅   | ✅    | ❌     | ❌      | {"key":"ClientCountry","operator":"!eq","value":"ca"}                |
| Less than                       | lt                | ❌      | ✅   | ❌    | ❌     | ❌      | {"key":"BotScore","operator":"lt","value":"30"}                      |
| Less than or equal              | leq               | ❌      | ✅   | ❌    | ❌     | ❌      | {"key":"BotScore","operator":"leq","value":"30"}                     |
| Greater than                    | gt                | ❌      | ✅   | ❌    | ❌     | ❌      | {"key":"BotScore","operator":"gt","value":"30"}                      |
| Greater than or equal           | geq               | ❌      | ✅   | ❌    | ❌     | ❌      | {"key":"BotScore","operator":"geq","value":"30"}                     |
| Starts with                     | startsWith        | ✅      | ❌   | ❌    | ❌     | ❌      | {"key":"ClientRequestPath","operator":"startsWith","value":"/foo"}   |
| Ends with                       | endsWith          | ✅      | ❌   | ❌    | ❌     | ❌      | {"key":"ClientRequestPath","operator":"endsWith","value":"/foo"}     |
| Does not start with             | !startsWith       | ✅      | ❌   | ❌    | ❌     | ❌      | {"key":"ClientRequestPath","operator":"!startsWith","value":"/foo"}  |
| Does not end with               | !endsWith         | ✅      | ❌   | ❌    | ❌     | ❌      | {"key":"ClientRequestPath","operator":"!endsWith","value":"/foo"}    |
| Contains                        | contains          | ✅      | ❌   | ❌    | ✅     | ❌      | {"key":"ClientRequestPath","operator":"contains","value":"/static"}  |
| Does not contain                | !contains         | ✅      | ❌   | ❌    | ✅     | ❌      | {"key":"ClientRequestPath","operator":"!contains","value":"/static"} |
| Value is in a set of values     | in                | ✅      | ✅   | ❌    | ❌     | ❌      | {"key":"EdgeResponseStatus","operator":"in","value":[200,201]}       |
| Value is not in a set of values | !in               | ✅      | ✅   | ❌    | ❌     | ❌      | {"key":"EdgeResponseStatus","operator":"!in","value":[200,201]}      |

The filter field is limited to approximately 30 operators and 1,000 bytes. Anything exceeding these limits will return an error.

Note

Filtering is not supported on the following data types: `objects`, `array[object]`.

For the Firewall events dataset, the following fields are not supported: `Action`, `Description`, `Kind`, `MatchIndex`, `Metadata`, `OriginatorRayID`, `RuleID`, and `Source`.

## Logical Operators

* Filters can be combined using the `AND` and `OR` logical operators.
* Logical operators can be nested.

Here are some examples of how the logical operators can be implemented. `X`, `Y` and `Z` are used to represent filter criteria:

* X AND Y AND Z - `{"where":{"and":[{X},{Y},{Z}]}}`
* X OR Y OR Z - `{"where":{"or":[{X},{Y},{Z}]}}`
* X AND (Y OR Z) - `{"where":{"and":[{X}, {"or":[{Y},{Z}]}]}}`
* (X AND Y) OR Z - `{"where":{"or":[{"and": [{X},{Y}]},{Z}]}}`

Logpush filters act as a pass-through gate, not an exclusion list. When multiple conditions are joined with AND:

* All conditions must evaluate to TRUE for the log to be pushed.
* If any single condition is FALSE, the log is excluded.

A common misconception is interpreting the filter as `exclude logs matching ALL conditions` rather than `include logs matching ALL conditions`.
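
For example, the following filter (shown unescaped for readability, using operators from the table above) includes only logs where both conditions are true: a request for `https://example.com/static/app.js` is pushed, while a request to any other host is dropped even if its path starts with `/static`. The host and path values are illustrative:

```
{"where":{"and":[{"key":"ClientRequestHost","operator":"eq","value":"example.com"},{"key":"ClientRequestPath","operator":"startsWith","value":"/static"}]}}
```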

## Set filters via API or dashboard

Filters can be set via API or the Cloudflare dashboard. Note that using a filter is optional, but if used, it must contain the `where` key.

### API

Here is an example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Write`

Create Logpush job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "static-assets",
    "output_options": {
        "field_names": [
            "ClientIP",
            "EdgeStartTimestamp",
            "RayID"
        ],
        "sample_rate": 0.1,
        "timestamp_format": "rfc3339",
        "CVE-2021-44228": true
    },
    "dataset": "http_requests",
    "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}",
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2"
  }'
```

### Dashboard

To set filters through the dashboard:

1. In the Cloudflare dashboard, go to the **Logpush** page at the account or domain (also known as zone) level.  
For account: [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/logs)  
For domain (also known as zone): [ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Select the dataset you want to push to a storage service. Depending on your choice, you have access to [account-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Below **Select data fields**, in the **Filter** section, you can set up your filters.
4. You need to select a [dataset field](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/), an [Operator](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/#logical-operators), and a **Value**.
5. You can connect more filters using the `AND` and `OR` logical operators.
6. Select **Next** to continue setting up your Logpush job.


---

---
title: Log Output Options
description: Jobs in Logpush now have a new key, output_options, which replaces logpull_options and allows for more flexible formatting. You can modify output_options via the API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Log Output Options

Jobs in Logpush now have a new key, **output\_options**, which replaces **logpull\_options** and allows for more flexible formatting. You can modify **output\_options** via the API.

## Replace logpull\_options

Previously, Logpush jobs could be customized by specifying the list of fields, sampling rate, and timestamp format in **logpull\_options** as [URL-encoded parameters](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#options). For example:

```
{
  "id": <JOB_ID>,
  "dataset": "http_requests",
  "enabled": false,
  "name": "<DOMAIN_NAME>",
  "logpull_options": "fields=ClientIP,EdgeStartTimestamp,RayID&sample=0.1&timestamps=rfc3339",
  "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2"
}
```

We have replaced this with **output\_options** as it is used for both Logpull and Logpush.

```
{
  "id": <JOB_ID>,
  "dataset": "http_requests",
  "enabled": false,
  "name": "<DOMAIN_NAME>",
  "output_options": {
    "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
    "sample_rate": 0.1,
    "timestamp_format": "rfc3339"
  },
  "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2"
}
```

## Output types

By default, Logpush outputs each record as a single line of JSON (also known as `ndjson`).

With **output\_options** you can switch to CSV or single JSON object, further customize prefixes, suffixes, delimiters, or provide your own record template (in a stripped-down version of Go [text/template ↗](https://pkg.go.dev/text/template) syntax).

The **output\_options** object has the following settings:

* **field\_names**: array of strings. For the moment, there is no option to add all fields at once; you need to specify the field names individually.
* **output\_type**: string to specify output type, such as `ndjson` or `csv` (default `ndjson`). This sets default values for the rest of the settings depending on the chosen output type. Some formatting rules (like string quoting) are different between output types.
* **batch\_prefix**: string to be prepended before each batch.
* **batch\_suffix**: string to be appended after each batch.
* **record\_prefix**: string to be prepended before each record.
* **record\_suffix**: string to be appended after each record.
* **record\_template**: string to use as template for each record instead of the default comma-separated list. All fields used in the template must be present in **field\_names** as well, otherwise they will end up as `null`. Format as a Go text/template without any standard functions (like conditionals, loops, sub-templates, etc.). The template can only consist of these three types of tokens:  
   * Action: this is either a `{{ .Field }}` or a `{{ "constant text" }}`.  
   * Text: this is just constant text in-between the `{{ actions }}`.  
   * Comment: the `{{/* comments */}}` are silently dropped.
* **record\_delimiter**: string to be inserted in-between the records as separator.
* **field\_delimiter**: string to join fields. Will be ignored when **record\_template** is set.
* **timestamp\_format**: string to specify the format for timestamps. Supported values are:  
   * `unixnano` — nanoseconds unit  
   * `unix` — seconds unit  
   * `rfc3339` — seconds unit, for example: `2024-02-17T23:52:01Z`  
   * `rfc3339ms` — milliseconds unit, for example: `2024-02-17T23:52:01.123Z`  
   * `rfc3339ns` — nanoseconds unit, for example: `2024-02-17T23:52:01.123456789Z`  
Default timestamp formats apply unless explicitly set. The dashboard defaults to `rfc3339` and the API defaults to `unixnano`.
* **sample\_rate**: floating number to specify sampling rate (default 1.0: no sampling). Sampling is applied on top of filtering, regardless of the current sample\_interval of the data.
* **CVE-2021-44228**: bool, default false. If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`.

## Examples

Specifying **field\_names** and **output\_type** will result in the remaining options being configured as below for the specified **output\_type**:

### ndjson

Default output\_options for `ndjson`

```
{
  "record_prefix": "{",
  "record_suffix": "}\n",
  "field_delimiter": ","
}
```

Example output\_options

```
"output_options": {
  "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
  "output_type": "ndjson"
}
```

Example output

```
{"ClientIP":"89.163.242.206","EdgeStartTimestamp":1506702504433000200,"RayID":"3a6050bcbe121a87"}
{"ClientIP":"89.163.242.207","EdgeStartTimestamp":1506702504433000300,"RayID":"3a6050bcbe121a88"}
{"ClientIP":"89.163.242.208","EdgeStartTimestamp":1506702504433000400,"RayID":"3a6050bcbe121a89"}
```

* `ndjson` with different field names:

Example output\_options

```
"output_options": {
  "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
  "output_type": "ndjson",
  "record_template": "\"client-ip\":{{.ClientIP}},\"timestamp\":{{.EdgeStartTimestamp}},\"ray-id\":{{.RayID}}"
}
```

Example output

```
{"client-ip":"89.163.242.206","timestamp":1506702504433000200,"ray-id":"3a6050bcbe121a87"}
{"client-ip":"89.163.242.207","timestamp":1506702504433000300,"ray-id":"3a6050bcbe121a88"}
{"client-ip":"89.163.242.208","timestamp":1506702504433000400,"ray-id":"3a6050bcbe121a89"}
```

A literal containing double curly braces (`{{}}`), for example `"double{{curly}}braces"`, can be inserted following the Go text/template raw string convention, that is, ``"{{`doublecurlybraces`}}"``.

### csv

Default output\_options for CSV

```
{
  "record_suffix": "\n",
  "field_delimiter": ","
}
```

Example output\_options

```
"output_options": {
  "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
  "output_type": "csv"
}
```

Example output

```
"89.163.242.206",1506702504433000200,"3a6050bcbe121a87"
"89.163.242.207",1506702504433000300,"3a6050bcbe121a88"
"89.163.242.208",1506702504433000400,"3a6050bcbe121a89"
```

### csv/json variants

Based on the above, other formats similar to CSV or JSON are also supported:

* csv with header:

Example output\_options

```
"output_options": {
  "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
  "output_type": "csv",
  "batch_prefix": "ClientIP,EdgeStartTimestamp,RayID\n"
}
```

Example output

```
ClientIP,EdgeStartTimestamp,RayID
"89.163.242.206",1506702504433000200,"3a6050bcbe121a87"
"89.163.242.207",1506702504433000300,"3a6050bcbe121a88"
"89.163.242.208",1506702504433000400,"3a6050bcbe121a89"
```

* tsv with header:

Example output\_options

```
"output_options": {
  "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
  "output_type": "csv",
  "batch_prefix": "ClientIP\tEdgeStartTimestamp\tRayID\n",
  "field_delimiter": "\t"
}
```

Example output

```
ClientIP EdgeStartTimestamp  RayID
"89.163.242.206"    1506702504433000200 "3a6050bcbe121a87"
"89.163.242.207"    1506702504433000300 "3a6050bcbe121a88"
"89.163.242.208"    1506702504433000400 "3a6050bcbe121a89"
```

* json with nested object:

Example output\_options

```
"output_options": {
  "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
  "output_type": "ndjson",
  "batch_prefix": "{\"events\":[",
  "batch_suffix": "\n]}\n",
  "record_prefix": "\n  {\"info\":{",
  "record_suffix": "}}",
  "record_delimiter": ","
}
```

Example output

```
{
  "events": [
    {
      "info": {
        "ClientIP": "89.163.242.206",
        "EdgeStartTimestamp": 1506702504433000200,
        "RayID": "3a6050bcbe121a87"
      }
    },
    {
      "info": {
        "ClientIP": "89.163.242.207",
        "EdgeStartTimestamp": 1506702504433000300,
        "RayID": "3a6050bcbe121a88"
      }
    },
    {
      "info": {
        "ClientIP": "89.163.242.208",
        "EdgeStartTimestamp": 1506702504433000400,
        "RayID": "3a6050bcbe121a89"
      }
    }
  ]
}
```

## How to migrate

To migrate your jobs from **logpull\_options** to the new **output\_options**, take these steps:

1. Change the `&fields=ClientIP,EdgeStartTimestamp,RayID` parameter to an array in `output_options.field_names`.
2. Change the `&sample=0.1` parameter to `output_options.sample_rate`.
3. Change the `&timestamps=rfc3339` parameter to `output_options.timestamp_format`.
4. Change the `&CVE-2021-44228=true` parameter to `output_options.CVE-2021-44228`.

For example, if logpull\_options are `fields=ClientIP,EdgeStartTimestamp,RayID&sample=0.1&timestamps=rfc3339&CVE-2021-44228=true`, the output\_options would be:

```
"output_options": {
  "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
  "sample_rate": 0.1,
  "timestamp_format": "rfc3339",
  "CVE-2021-44228": true
}
```


---

---
title: Ownership challenge FAQ
description: The ownership challenge is a one-time verification that proves you have read access to a destination bucket before Cloudflare pushes logs to it. This mechanism prevents you from accidentally configuring a Logpush job that pushes data to a bucket you do not control.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Ownership challenge FAQ

The ownership challenge is a one-time verification that proves you have read access to a destination bucket before Cloudflare pushes logs to it. This mechanism prevents you from accidentally configuring a Logpush job that pushes data to a bucket you do not control.

## How it works

When you create a Logpush job to a storage destination, Cloudflare requires you to prove ownership of that destination:

1. You request an ownership challenge for your `destination_conf`.
2. Cloudflare writes a JWT to an `ownership-challenge.txt` file in your bucket.
3. You read the token from your bucket and submit it with your job creation request.
4. Cloudflare validates the token and creates the job.

For step-by-step instructions, refer to [Manage Logpush with cURL](https://developers.cloudflare.com/logs/logpush/examples/example-logpush-curl/).
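
As a rough sketch of that flow with cURL (the `ownership` endpoint and `ownership_challenge` field follow the linked tutorial; the `$ZONE_ID` and `$CLOUDFLARE_API_TOKEN` variables and the S3 destination are assumed placeholders):

```
# Step 2: ask Cloudflare to write ownership-challenge.txt to the destination.
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/ownership" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{"destination_conf": "s3://<BUCKET_PATH>?region=us-west-2"}'

# Steps 3-4: read the token from the file in your bucket, then create the job with it.
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "name": "<DOMAIN_NAME>",
    "dataset": "http_requests",
    "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
    "ownership_challenge": "<TOKEN_FROM_FILE>"
  }'
```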

## What the challenge protects against

The ownership challenge primarily protects you from accidental misconfiguration. Without this verification, you could inadvertently configure a job to push to a bucket you did not intend—for example, pushing to a bucket that is actually owned by someone else.

The challenge also prevents malicious scenarios where someone could:

* Point a Logpush job at another customer's bucket
* Push bogus or malicious log data to that bucket
* Pollute or corrupt the victim's log storage

Note

You may find the ownership challenge cumbersome because it is unusual and difficult to script via Terraform. However, it exists to prevent costly mistakes.

## Challenge token structure

The ownership challenge is a JSON Web Token (JWT) containing claims that bind it to a specific context. The token includes:

* **Object type** \- Whether the job is zone-scoped, account-scoped, or tenant-scoped
* **Object ID** \- The specific zone or account identifier
* **Destination configuration** \- The full destination configuration string
* **Destination fingerprint** \- A hash of the bucket name and paths/prefixes
* **Expiration** \- The token expires after 7 days

When you submit the challenge token, Cloudflare validates that all claims match your job creation request. This prevents the token from being reused for a different account, zone, or destination.
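
Decoded, the token payload might look something like the following. The claim names here are purely illustrative; Cloudflare does not document the exact schema:

```
{
  "object_type": "zone",
  "object_id": "<ZONE_ID>",
  "destination_conf": "s3://my-logs-bucket/http_requests?region=us-east-1",
  "destination_fingerprint": "<HASH_OF_BUCKET_NAME_AND_PREFIX>",
  "exp": 1700000000
}
```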

## Security considerations

### Can a compromised token be exploited?

In practice, an attack using a compromised ownership challenge token is extremely unlikely. An attacker would need:

1. Access to your Cloudflare account (to match the object ID in the token)
2. Knowledge of the exact bucket name and paths/prefixes (to match the destination fingerprint)
3. To act within 7 days (before the challenge expires)

Your bucket's IAM/access controls and Cloudflare account security are the primary security layers, not the ownership challenge token.

### Best practices

* **Delete the challenge file after job creation** \- Once your Logpush job is created, you can safely delete the `ownership-challenge.txt` file from your bucket.
* **Restrict bucket permissions** \- Grant write access only to Cloudflare's service accounts. For AWS S3, grant `PutObject` permission to `arn:aws:iam::391854517948:user/cloudflare-logpush`. For GCS, grant `Storage Object Admin` to `logpush@cloudflare-data.iam.gserviceaccount.com`.
* **Monitor your Logpush jobs** \- Use the [Logpush health dashboards](https://developers.cloudflare.com/logs/logpush/logpush-health/) to monitor job status and detect anomalies.

## Which destinations require an ownership challenge?

| Destination           | Ownership challenge required       |
| --------------------- | ---------------------------------- |
| AWS S3                | Yes (or use access key/secret key) |
| Google Cloud Storage  | Yes                                |
| Azure Blob Storage    | Yes                                |
| Sumo Logic            | Yes                                |
| S3-compatible storage | No                                 |
| HTTP endpoints        | No                                 |
| Datadog               | No                                 |
| Splunk                | No                                 |
| New Relic             | No                                 |

For destinations that do not require an ownership challenge, Cloudflare uses alternative authentication methods such as API keys or tokens.

## Related resources

* [API configuration](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/)
* [Manage Logpush with cURL](https://developers.cloudflare.com/logs/logpush/examples/example-logpush-curl/)
* [Logpush permissions](https://developers.cloudflare.com/logs/logpush/permissions/)
* [Enable AWS S3 destination](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/)
* [Enable GCS destination](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/google-cloud-storage/)


---

---
title: Parse Cloudflare Logs JSON data
description: After downloading your Cloudflare Logs data, you can use different tools to parse and analyze your logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Parse Cloudflare Logs JSON data

After downloading your Cloudflare Logs data, you can use different tools to parse and analyze your logs.

One such tool for parsing your JSON log data is `jq`.

Refer to [Download jq ↗](https://jqlang.github.io/jq/download/) for more information on obtaining and installing `jq`.

Note

`jq` is a powerful command-line tool for parsing JSON data and performing certain types of analysis. To perform more detailed analysis, consider a full-fledged data analysis system, such as _Kibana_.

## Aggregate fields

To aggregate a field appearing in the log, such as by IP address, URI, or referrer, you can use several `jq` commands. This is useful to identify any patterns in traffic; for example, to identify your most popular pages or to block an attack.

The following examples match on a field name and provide a count of each field instance, sorted in ascending order by count.

```
jq -r .ClientRequestURI logs.json | sort -n | uniq -c | sort -n | tail
```

```
2 /nginx-logo.png
2 /poweredby.png
2 /testagain
3 /favicon.ico
3 /testing
3 /testing123
6 /test
7 /testing1234
10 /cdn-cgi/nexp/dok3v=1613a3a185/cloudflare/rocket.js
54 /
```

```
jq -r .ClientRequestUserAgent logs.json | sort -n | uniq -c | sort -n | tail
```

```
1 python-requests/2.9.1
2 Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.56 Safari/537.17
4 Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36
5 curl/7.47.2-DEV
36 Mozilla/5.0 (X11; Linux x86_64; rv:44.0) Gecko/20100101 Firefox/44.0
51 curl/7.46.0-DEV
```

```
jq -r .ClientRequestReferer logs.json | sort -n | uniq -c | sort -n | tail
```

```
2 http://example.com/testagain
3 http://example.com/testing
5 http://example.com/
5 http://example.com/testing123
7 http://example.com/testing1234
77 null
```

## Filter fields

Another common use case involves filtering data for a specific field value and then aggregating after that. This helps answer questions like _Which URLs saw the most 502 errors?_ For example:

```
jq 'select(.OriginResponseStatus == 502) | .ClientRequestURI' logs.json | sort -n | uniq -c | sort -n | tail
```

```
1 "/favicon.ico"
1 "/testing"
3 "/testing123"
6 "/test"
6 "/testing1234"
18 "/"
```

To find out the top IP addresses blocked by the Cloudflare WAF, use the following query:

```
jq -r 'select(.SecurityAction == "block") | .ClientIP' logs.json | sort -n | uniq -c | sort -n
```

```
1 127.0.0.1
```

## Show cached requests

To retrieve your cache ratios, try the following query:

```
jq -r '.CacheCacheStatus' logs.json | sort -n | uniq -c | sort -n
```

```
3 hit
3 null
3 stale
4 expired
6 miss
81 unknown
```

## Show TLS versions

To find out which TLS versions your visitors are using — for example, to decide if you can disable TLS versions that are older than 1.2 — use the following query:

```
jq -r '.ClientSSLProtocol' logs.json | sort -n | uniq -c | sort -n
```

```
42 none
58 TLSv1.2
```
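
Beyond counting, `jq` can also reshape records for downstream tools. For example, this sketch extracts a few fields from each log line into CSV:

```
jq -r '[.ClientIP, .ClientRequestURI, .EdgeResponseStatus] | @csv' logs.json > logs.csv
```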


---

---
title: Permissions
description: Below is a description of the available permissions for tokens and roles as they relate to Logs. For information about how to create an API token, refer to Creating API tokens.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Permissions

Below is a description of the available permissions for tokens and roles as they relate to Logs. For information about how to create an API token, refer to [Creating API tokens](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/).

## Tokens

* **Logs: Read** \- Grants read access to logs using Logpull or Instant Logs.
* **Logs: Write** \- Grants read and write access to Logpull and Logpush, and read access to Instant Logs. Note that all Logpush API operations require **Logs: Write** permission because Logpush jobs contain sensitive information.

Note

* **Zone-scoped datasets** require a **zone-scoped token**.
* **Account-scoped datasets** require an **account-scoped token**.

Permissions must be explicitly configured at the appropriate level (zone or account) to ensure access to the desired API endpoints.

## Roles

The **Super Administrator**, **Administrator**, and **Log Share** roles have full access to Logpull, Logpush, and Instant Logs.

Only roles with **Log Share** edit permissions can read and configure Logpush jobs because job configurations may contain sensitive information.

The **Administrator Read only** and **Log Share Reader** roles only have access to Instant Logs and Logpull. These roles do not have permissions to view the configuration of Logpush jobs.

### Zero Trust datasets

To view, create, update, or delete Logpush jobs for Zero Trust datasets (Access, Gateway, and DEX), users must have both the `Logs Edit` and `Zero Trust: PII Read` permissions.

If you encounter the error `reading job for product '<product>' is not allowed (1004)`, this indicates that the API token you are using does not have the required permissions. Ensure your token or user account has both permissions listed above.

For more details, refer to the [Logpush Permission Update for Zero Trust Datasets ↗](https://developers.cloudflare.com/changelog/2025-11-05-logpush-permissions-update/).

### Assign or remove a role

To check the list of members in your account, or to manage roles and permissions:

1. Navigate to the [Cloudflare dashboard ↗](https://dash.cloudflare.com/login) and select your account.
2. From your Account Home, go to **Manage Account** \> **Members**.
3. Enter a member’s email address to add them to your account, and select **Invite**.
4. Alternatively, scroll down to the **Members** card to find a list of members with their status and role.

For more information, refer to [Managing roles within your Cloudflare account](https://developers.cloudflare.com/fundamentals/manage-members/).


---

---
title: Instant Logs
description: Instant Logs allows Cloudflare customers to access a live stream of the traffic for their domain from the Cloudflare dashboard or from a command-line interface (CLI). Seeing data in real time allows you to investigate an attack, troubleshoot, debug or test out changes made to your network. Instant Logs is lightweight, simple to use and does not require any additional setup.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Instant Logs

Instant Logs allows Cloudflare customers to access a live stream of the traffic for their domain from the Cloudflare dashboard or from a command-line interface (CLI). Seeing data in real time allows you to investigate an attack, troubleshoot, debug or test out changes made to your network. Instant Logs is lightweight, simple to use and does not require any additional setup.

## Availability

|              | Free | Pro | Business | Enterprise |
| ------------ | ---- | --- | -------- | ---------- |
| Availability | No   | No  | Yes      | Yes        |

## Instant Logs via Cloudflare Dashboard

1. In the Cloudflare dashboard, go to the **Instant Logs** page.  
[ Go to **Instant Logs** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/instant-logs)
2. Select **Start streaming**.
3. (optional) Select **Add filter** to narrow down the events to be shown.

Fields supported in our [HTTP requests dataset](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http%5Frequests/) can be used when you add filters. Some fields that require an additional subscription are not supported in the dashboard; to filter on them, you will need to use the CLI instead.

Once a filter is selected and the stream has started, only log lines that match the filter criteria will appear. Filters are not applied retroactively to logs already showing in the dashboard.

## Instant Logs via CLI

### 1. Create an Instant Logs Job

Create a session by sending a `POST` request to the Instant Logs job endpoint with the following parameters:

* **Fields** \- List any field available in the [HTTP requests dataset](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http%5Frequests/).
* **Sample** \- The sample parameter is the sample rate of the records set by the client: `"sample": 1` means 100% of records, `"sample": 10` means 10% (1 in 10), and so on.

Note

Instant Logs has a maximum supported data rate. For high volume domains, we sample on the server side, as indicated by the `"sampleInterval"` parameter returned in the logs.

* **Filters** \- Use filters to drill down into specific events. Filters consist of three parts: key, operator and value.

All supported operators can be found in the [Filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) page.

Below we have three examples of filters:

```
# Filter when client IP country is not Canada:
"filter": "{\"where\":{\"and\":[{\"key\":\"ClientCountry\",\"operator\":\"neq\",\"value\":\"ca\"}]}}"
```

```
# Filter when the status code returned from Cloudflare is either 200 or 201:
"filter": "{\"where\":{\"and\":[{\"key\":\"EdgeResponseStatus\",\"operator\":\"in\",\"value\":[200,201]}]}}"
```

```
# Filter when the request path contains "/static" and the request hostname is "example.com":
"filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}"
```

Example request using cURL:

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:
* `Logs Read`

Create Instant Logs job

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/edge/jobs" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID",
    "sample": 100,
    "filter": "",
    "kind": "instant-logs"
  }'
```

Response:

The response will include a new field called **destination\_conf**. The value of this field is your unique WebSocket address that will receive messages from Cloudflare's global network.

```
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID",
    "sample": 100,
    "filter": "",
    "destination_conf": "wss://logs.cloudflare.com/instant-logs/ws/sessions/<SESSION_ID>",
    "kind": "instant-logs"
  },
  "success": true
}
```

### 2. Connect to WebSocket

Using a CLI utility like [Websocat ↗](https://github.com/vi/websocat), you can connect to the WebSocket and start immediately receiving logs.

```
websocat wss://logs.cloudflare.com/instant-logs/ws/sessions/<SESSION_ID>
```

Response:

Once connected to the websocket, you will receive messages of line-delimited JSON.
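
Each message is a single JSON object on its own line. An illustrative example with the fields requested above plus the server-side `sampleInterval` (all values are made up):

```
{"ClientIP":"203.0.113.1","ClientRequestHost":"example.com","ClientRequestMethod":"GET","ClientRequestURI":"/","EdgeEndTimestamp":1661794000123000000,"EdgeResponseBytes":69045,"EdgeResponseStatus":200,"EdgeStartTimestamp":1661793999876000000,"RayID":"740fceb0fa9d8e11","sampleInterval":1}
```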

### Angle Grinder

Now that you have a connection to Cloudflare's websocket and are receiving logs from Cloudflare's global network, you can start slicing and dicing the logs. A handy tool for this is [Angle Grinder ↗](https://github.com/rcoh/angle-grinder). Angle Grinder lets you apply filtering, transformations and aggregations on stdin with first class JSON support. For example, to get the number of visitors from each country you can sum the number of events by the `ClientCountry` field.

```
websocat wss://logs.cloudflare.com/instant-logs/ws/sessions/<SESSION_ID> | agrind '* | json | sum(sampleInterval) by ClientCountry'
```

Response:

| **ClientCountry** | **\_sum** |
| ----------------- | --------- |
| pt                | 4         |
| fr                | 3         |
| us                | 3         |
| om                | 2         |
| ar                | 1         |
| au                | 1         |

## Datasets available

For the moment, `HTTP requests` is the only dataset supported. In the future, we will expand to other datasets.

## Export

You can use the **Export** button to download the table of logs that appears in the dashboard in JSON format.

## Limits

Instant Logs has three limits in place:

* Only one active Instant Logs session is allowed per zone.
* The maximum session time is 60 minutes.
* The session closes if you stop listening to the socket for more than five minutes.

If any of these limits is reached, the logs stream will automatically stop.

## Connect with us

If you have any feature requests or notice any bugs, share your feedback directly with us by joining the [Cloudflare Developers community on Discord ↗](https://discord.cloudflare.com).


---

---
title: Logs Engine
description: Logs Engine gives you the ability to store your logs in R2 and query them directly.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Logs Engine

Logs Engine gives you the ability to store your logs in R2 and query them directly.

Note

Logs Engine is going to be replaced by Log Explorer. For further details, consult the [Log Explorer](https://developers.cloudflare.com/log-explorer/) documentation and to request access, complete the [sign-up form ↗](https://cloudflare.com/lp/log-explorer/).

## Store logs in R2

* Set up a [Logpush to R2](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/r2/) job.
* Create an [R2 access key](https://developers.cloudflare.com/r2/api/tokens/) with at least R2 read permissions.
* Ensure that you have Logshare read permissions.
* Alternatively, create a Cloudflare API token with the following permissions:
  * Account scope
  * Logs read permissions

## Query logs

You can use the API to query and download your logs by time range or [RayID](https://developers.cloudflare.com/fundamentals/reference/cloudflare-ray-id/).

## Authentication

The following headers are required for all API calls:

* `X-Auth-Email` \- the Cloudflare account email address associated with the domain
* `X-Auth-Key` \- the Cloudflare API key

Alternatively, API tokens with Logs edit permissions can also be used for authentication:

* `Authorization: Bearer <API_TOKEN>`

### Required headers

In addition to the required authentication headers mentioned, the following headers are required for the API to access logs stored in your R2 bucket.

* `R2-access-key-id` (required) - [R2 Access Key Id](https://developers.cloudflare.com/r2/api/tokens/)
* `R2-secret-access-key` (required) - [R2 Secret Access Key](https://developers.cloudflare.com/r2/api/tokens/)

## List files

List relevant R2 objects containing logs matching the provided query parameters, using the endpoint `GET /accounts/{accountId}/logs/list`.

### Query parameters

* `start` (required) string (TimestampRFC3339) - Start time in RFC 3339 format, for example `start=2022-06-06T16:00:00Z`.
* `end` (required) string (TimestampRFC3339) - End time in RFC 3339 format, for example `end=2022-06-06T17:00:00Z`.
* `bucket` (required) string (Bucket) - R2 bucket name, for example `bucket=cloudflare-logs`.
* `prefix` string (Prefix) - R2 bucket prefix logs are stored under, for example `prefix=http_requests/example.com/{DATE}`.
* `limit` number (Limit) - Maximum number of results to return, for example `limit=100`.
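
For example, a sketch of a list request over a one-hour window (substitute your own account ID, bucket, and credentials):

```
curl --globoff "https://api.cloudflare.com/client/v4/accounts/{account_id}/logs/list?start=2022-06-06T16:00:00Z&end=2022-06-06T17:00:00Z&bucket=cloudflare-logs&prefix=http_requests/example.com/{DATE}" \
--header "Authorization: Bearer <API_TOKEN>" \
--header "R2-Access-Key-Id: <R2_ACCESS_KEY_ID>" \
--header "R2-Secret-Access-Key: <R2_SECRET_ACCESS_KEY>"
```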

## Retrieve logs by time range

Stream logs stored in R2 that match the provided query parameters, using the endpoint `GET /accounts/{accountId}/logs/retrieve`.

### Query parameters

* `start` (required) string (TimestampRFC3339) - Start time in RFC 3339 format, for example `start=2022-06-06T16:00:00Z`
* `end` (required) string (TimestampRFC3339) - End time in RFC 3339 format, for example `end=2022-06-06T17:00:00Z`
* `bucket` (required) string (Bucket) - R2 bucket name, for example `bucket=cloudflare-logs`
* `prefix` string (Prefix) - R2 bucket prefix logs are stored under, for example `prefix=http_requests/example.com/{DATE}`

### Example API request

```
curl --globoff "https://api.cloudflare.com/client/v4/accounts/{account_id}/logs/retrieve?start=2022-06-01T16:00:00Z&end=2022-06-01T16:05:00Z&bucket=cloudflare-logs&prefix=http_requests/example.com/{DATE}" \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "R2-Access-Key-Id: <R2_ACCESS_KEY_ID>" \
--header "R2-Secret-Access-Key: <R2_SECRET_ACCESS_KEY>"
```

Results can be piped to a file using `> logs.json`.

Additionally, if you want to receive the raw GZIP bytes without them being transparently decompressed by your client, include the header `--header "Accept-Encoding: gzip"`.

## Retrieve logs by Ray ID

Using your logs stored in R2, the Logpull RayID Lookup feature allows you to query an indexed time range for the presence of a RayID and return the matching result. This feature is available to users with the Logpull RayID Lookup beta subscription.

Looking up a RayID is a two-step process: a time range must first be indexed before you can request a record by its RayID.

Indexes automatically expire after seven days of no usage.

### Index a time range

Before executing your query, you can specify the time range you would like to index in order to narrow down the scope of the query. In the following example, we index one minute of logs stored in the R2 bucket `"cloudflare-logs"` under the prefix `"http_requests/{DATE}"`.

### Example API request

```
curl https://api.cloudflare.com/client/v4/accounts/{account_id}/logs/rayids/index \
--header "Authorization: Bearer <API_TOKEN>" \
--header "R2-Access-Key-Id: <R2_ACCESS_KEY_ID>" \
--header "R2-Secret-Access-Key: <R2_SECRET_ACCESS_KEY>" \
--header "Content-Type: application/json" \
--data-raw '{
  "start": "2022-08-16T20:30:00Z",
  "end": "2022-08-16T20:31:00Z",
  "bucket": "cloudflare-logs",
  "prefix": "http_requests/example.com/{DATE}"
}'
```

## Lookup a RayID

After indexing a time range, perform a `GET` request with the RayID. If a matching result is found in the indexed time range, the record will be returned. Note that the parameters have moved from the request body into the URL. The `--globoff` (`-g`) flag is required to prevent the `{DATE}` parameter from being misinterpreted by cURL.

### Example API request

```
curl --globoff "https://api.cloudflare.com/client/v4/accounts/{account_id}/logs/rayids/<RAY_ID>?bucket=cloudflare-logs&prefix=http_requests/example.com/{DATE}" \
--header "Authorization: Bearer <API_TOKEN>" \
--header "R2-Access-Key-Id: <R2_ACCESS_KEY_ID>" \
--header "R2-Secret-Access-Key: <R2_SECRET_ACCESS_KEY>"
```

## Troubleshooting

I am getting an error when accessing the API

* **Error**: Time range returned too many results. Try reducing the time range and try again.

HTTP status code `422` will be returned if the time range between the start and end parameters is too wide. High volume zones can produce many log files in R2, so reduce your start and end time range until you find a duration that works for your log volume.

* **Error**: Provided token does not have the required features enabled.

Contact your account representative to have the beta Logpull RayID Lookup subscription added to your account.

How do I know what time range to index?

Currently, there is no process to index logs as they arrive. If you have the RayID and know the time the request was made, try indexing the next 5-10 minutes of logs after the request was completed.

What is the time delay between when an event happens and when I can query for it?

Logpush delivers logs in batches as soon as possible, generally in less than one minute. After this, logs can be accessed using Logs Engine.

Does R2 have retention controls?

R2 does not currently have retention controls in place. You can query back as far as when you created the Logpush job.

Which datasets is Logs Engine compatible with?

The retrieval API is compatible with all the datasets we support. The full list is available in the [Datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/) section.


---

---
title: Logpull
description: Cloudflare Logpull is a REST API for consuming request logs over HTTP. These logs contain data related to the connecting client, the request path through the Cloudflare network, and the response from the origin web server. This data is useful for enriching existing logs on an origin server. Logpull is available to customers on the Enterprise plan.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Logpull

Cloudflare Logpull is a REST API for consuming request logs over HTTP. These logs contain data related to the connecting client, the request path through the Cloudflare network, and the response from the origin web server. This data is useful for enriching existing logs on an origin server. Logpull is available to customers on the Enterprise plan.

Warning

Logpull is considered a legacy feature and we recommend using [Logpush](https://developers.cloudflare.com/logs/logpush/) or [Logs Engine](https://developers.cloudflare.com/logs/r2-log-retrieval/) instead for better performance and functionality.

Review the following content to learn more about Logpull.

* [ Understanding the basics ](https://developers.cloudflare.com/logs/logpull/understanding-the-basics/)
* [ Enabling log retention ](https://developers.cloudflare.com/logs/logpull/enabling-log-retention/)
* [ Requesting logs ](https://developers.cloudflare.com/logs/logpull/requesting-logs/)
* [ Additional details ](https://developers.cloudflare.com/logs/logpull/additional-details/)

## Availability

|              | Free | Pro | Business | Enterprise |
| ------------ | ---- | --- | -------- | ---------- |
| Availability | No   | No  | No       | Yes        |

### Limitation

Logpull is unavailable when the Customer Metadata Boundary (CMB) is set outside the US region. Specifically, it does not work when CMB is restricted to the EU-only setting. For more details, refer to the [Cloudflare Data Localization](https://developers.cloudflare.com/data-localization/) documentation.


---

---
title: Additional details
description: To estimate the amount of data for a zone per day (the number of log lines and the amount of bytes they take up), request a 1% or 10% sample of data for a 1-hour period (use 10% if your volume is low). Note that start=2018-12-15T00:00:00Z and end=2018-12-15T01:00:00Z span a 1-hour period, and sample=0.1.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Additional details

## Estimating daily data volume

To estimate the amount of data for a zone per day (the number of log lines and the amount of bytes they take up), request a 1% or 10% sample of data for a 1-hour period (use 10% if your volume is low). Note that `start=2018-12-15T00:00:00Z` and `end=2018-12-15T01:00:00Z` span a 1-hour period, and `sample=0.1`.

```
curl "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received?start=2018-12-15T00:00:00Z&end=2018-12-15T01:00:00Z&sample=0.1" \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
> sample.log
```

```
wc -l sample.log
```

```
83 sample.log
```

```
ls -lh sample.log
```

```
-rw-r--r-- 1 mik mik 25K Dec 17 15:49 sample.log
```

Based on this information, the approximate number of messages/day is 19,920 (83 × 10 × 24), and the byte size is 6MB (25K × 10 × 24). The size estimate is based on the default response field set. Changing the response field set (refer to [Fields](https://developers.cloudflare.com/logs/logpull/requesting-logs/#fields)) will change the response size.

To get a good estimate of daily traffic, it is best to get at least 30 log lines in your hourly sample. If the response size is too small (or too large), adjust the sample value, not the time range.

## Compression

Responses are compressed by default (gzip). `cURL` decompresses responses transparently, unless called with:

`--header "Accept-Encoding: gzip"`

In that case, the output remains gzipped. Compressed data is approximately 5-10% of its uncompressed size. This means that a 1GB uncompressed response gets compressed down to 50-100MB.
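
For example, to keep the compressed bytes on disk and only decompress when inspecting them, a sketch reusing the sampling request above:

```
curl "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received?start=2018-12-15T00:00:00Z&end=2018-12-15T01:00:00Z&sample=0.1" \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
--header "Accept-Encoding: gzip" > sample.log.gz

gunzip -c sample.log.gz | head
```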

## Service expectations

### Successful requests

If the response size or timeout limit is exceeded, or there is any problem fetching the response, a `200` status will be returned and the response will end with the non-JSON text line "Error streaming data." Because responses are streamed, there is no way to identify the error ahead of time. A response is successful if it does not end with the "Error streaming data." text line.

Once you receive a successful response for a given zone and time range, the following is true for all subsequent requests:

* The number and content of returned records will be the same.
* The order of records returned may (and is likely to) be different.

### Response fields

Regarding the inclusion of the **fields** parameter:

* When fields are explicitly included in the request URL, the fields returned will not change.
* When not specified in the URL, the default fields are returned.
* The default fields may change at any time.

### Limits

The following usage restrictions apply:

* **Rate limits:** Exceeding these limits results in a `429` error response:
  * 15 requests/min per zone.
  * 180 requests/min per user (email address).
* **Time range:** The maximum difference between the **start** and **end** parameters is 1 hour.
* **Response size:** The maximum response size is 10 GiB per request, which is roughly 15 million records when about 55 fields are selected (more records can be retrieved when fewer fields are selected, because each record is smaller).
* **Timeout:** The response will fail with a terminated connection after 10 minutes.
* **Stream timeout:** The request will be terminated with a `408` error response if the connection is idle for 30 seconds. This usually means the request is too exhaustive; frequent timeouts (more than 12 per hour) will cause subsequent queries to be blocked with status code `429` for 1 hour. To avoid this:
  * try requesting records with a smaller number of fields, or
  * try smaller **start** and **end** parameters.


---

---
title: Enabling log retention
description: By default, your HTTP request logs are not retained. When using the Logpull API for the first time, you will need to enable retention. You can also turn off retention at any time. Note that after retention is turned off, previously saved logs will be available until the retention period expires (refer to Data retention period).
image: https://developers.cloudflare.com/core-services-preview.png
---


# Enabling log retention

By default, your HTTP request logs are not retained. When using the Logpull API for the first time, you will need to enable retention. You can also turn off retention at any time. Note that after retention is turned off, previously saved logs will be available until the retention period expires (refer to [Data retention period](https://developers.cloudflare.com/logs/logpull/understanding-the-basics/#data-retention-period)).

## Endpoints

There are two endpoints for managing log retention:

* `GET /logs/control/retention/flag` \- returns the current status of retention
* `POST /logs/control/retention/flag` \- turns retention on or off

Note

In the Linux examples below we use the optional [jq](https://developers.cloudflare.com/logs/logpush/parsing-json-log-data/) tool to help parse the response data.

To make a `POST` call, you must have zone-scoped `edit` permissions, such as Super Administrator, Administrator, or Log Share. Refer to [Make API calls](https://developers.cloudflare.com/fundamentals/api/how-to/make-api-calls/) for more information.

## Example API requests using cURL

### Check log retention status

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:

* `Logs Write`
* `Logs Read`

Get log retention flag:

Linux

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logs/control/retention/flag" \
  --request GET \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

CMD

```
curl.exe "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/control/retention/flag" ^
--header "Authorization: Bearer <API_TOKEN>"
```

PowerShell

```
$uri = "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/control/retention/flag"
$headers = @{"Authorization" = "Bearer <API_TOKEN>"}
Invoke-RestMethod -Uri $uri -Method Get -Headers $headers
```

If the zone has log retention [enabled](https://developers.cloudflare.com/logs/logpull/enabling-log-retention/#enabled-response), you get the value `true`; a value of `false` is returned when it is [disabled](https://developers.cloudflare.com/logs/logpull/enabling-log-retention/#disabled-response).

### Turn on log retention

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:

* `Logs Write`

Update log retention flag:

Linux

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logs/control/retention/flag" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "flag": true
  }'
```

CMD

```
curl.exe "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/control/retention/flag" ^
--request POST ^
--header "Authorization: Bearer <API_TOKEN>" ^
--header "Content-Type: application/json" ^
--data "{""flag"": true}"
```

PowerShell

```
$uri = "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/control/retention/flag"
$headers = @{"Authorization" = "Bearer <API_TOKEN>"}
$bodyFlag = @{flag = $true} | ConvertTo-Json
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $bodyFlag -ContentType "application/json"
```

#### Enabled response

```
{
  "flag": true
}
```

### Turn off log retention

Required API token permissions

At least one of the following [token permissions](https://developers.cloudflare.com/fundamentals/api/reference/permissions/) is required:

* `Logs Write`

Update log retention flag:

Linux

```
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logs/control/retention/flag" \
  --request POST \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --json '{
    "flag": false
  }'
```

CMD

```
curl.exe "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/control/retention/flag" ^
--request POST ^
--header "Authorization: Bearer <API_TOKEN>" ^
--header "Content-Type: application/json" ^
--data "{""flag"": false}"
```

PowerShell

```
$uri = "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/control/retention/flag"
$headers = @{"Authorization" = "Bearer <API_TOKEN>"}
$bodyFlag = @{flag = $false} | ConvertTo-Json
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $bodyFlag -ContentType "application/json"
```

#### Disabled response

```
{
  "flag": false
}
```

## Audit

Turning log retention on or off is recorded in [Cloudflare Audit Logs](https://developers.cloudflare.com/fundamentals/account/account-security/review-audit-logs/#access-audit-logs).


---

---
title: Requesting logs
description: The three endpoints supported by the Logpull API are:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Requesting logs

## Endpoints

The three endpoints supported by the Logpull API are:

* `GET /logs/received` \- returns HTTP request log data based on the parameters specified
* `GET /logs/received/fields` \- returns the list of all available log fields
* `GET /logs/rayids/{ray_id}` \- returns HTTP request log data matching `{ray_id}`

## Required authentication headers

The following headers are required for all endpoint calls:

* `X-Auth-Email` \- the Cloudflare account email address associated with the domain
* `X-Auth-Key` \- the Cloudflare API key

Alternatively, API tokens with Logs Read permissions can also be used for authentication:

* `Authorization: Bearer <API_TOKEN>`

## Parameters

The API expects endpoint parameters in the GET request query string. The following are example formats:

`logs/received`

```
https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received?start=<unix|rfc3339>&end=<unix|rfc3339>[&count=<int>][&sample=<float>][&fields=<FIELDS>][&timestamps=<string>][&CVE-2021-44228=<boolean>]
```

`logs/rayids/{ray_id}`

```
https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/rayids/{ray_id}?[fields=<FIELDS>][&timestamps=<string>]
```

The following table describes the parameters available:

| Parameter      | Description                                                                                                                                                                                                                                                                                             | Applies to                  | Required |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- | -------- |
| start          | \- Inclusive \- Timestamp formatted as UNIX (UTC by definition), UNIX Nano, or rfc3339. To specify rfc3339 time zone in URL query parameters, the URL needs to be encoded, like this start=2024-08-07T07:00:00%2B08:00&end=2024-08-07T07:01:00%2B08:00. \- Must be no more than 7 days earlier than now | /logs/received              | Yes      |
| end            | \- Exclusive \- Same format as _start_ \- Must be at least 1 minute earlier than now and later than _start_                                                                                                                                                                                             | /logs/received              | Yes      |
| count          | \- Return up to that many records \- Do not include if returning all records \- Results are not sorted; therefore, different data for repeated requests is likely \- Applies to number of total records returned, not number of sampled records                                                         | /logs/received              | No       |
| sample         | \- Return only a sample of records \- Do not include if returning all records \- Value can range from 0.0 (exclusive) to 1.0 (inclusive) \- sample=0.1 means return 10% (1 in 10) of all records \- Results are random; therefore, different numbers of results for repeated requests are likely        | /logs/received              | No       |
| fields         | \- Comma-separated list of fields to return \- If empty, the default list is returned                                                                                                                                                                                                                   | /logs/received /logs/rayids | No       |
| timestamps     | \- Format in which timestamp fields will be returned \- Value options are: unixnano (default), unix, rfc3339 \- Timestamps returned as integers for unix and unixnano and as strings for rfc3339                                                                                                        | /logs/received /logs/rayids | No       |
| CVE-2021-44228 | \- Optional redaction for [CVE-2021-44228 ↗](https://www.cve.org/CVERecord?id=CVE-2021-44228). This option will replace every occurrence of the string ${ with x{.  For example: CVE-2021-44228=true                                                                                                    | /logs/received              | No       |

Note

The maximum time range from **start** to **end** cannot exceed 1 hour. Because **start** is inclusive and **end** is exclusive, consecutive ranges neither overlap nor miss data. For example, to get all the data for every minute starting at 10AM, use:

`start=2018-05-15T10:00:00Z&end=2018-05-15T10:01:00Z`, then `start=2018-05-15T10:01:00Z&end=2018-05-15T10:02:00Z`, and so on.
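
A minimal polling loop over consecutive minutes could look like the following sketch. It assumes GNU `date` and that `ZONE_ID`, `EMAIL`, and `API_KEY` are set in your environment:

```
#!/usr/bin/env bash
# Pull one hour of logs, one minute at a time, into an NDJSON file.
start="2018-05-15T10:00:00Z"
for i in $(seq 0 59); do
  s=$(date -u -d "$start + $i minutes" +%Y-%m-%dT%H:%M:%SZ)
  e=$(date -u -d "$start + $((i + 1)) minutes" +%Y-%m-%dT%H:%M:%SZ)
  curl -s "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logs/received?start=$s&end=$e" \
    --header "X-Auth-Email: $EMAIL" \
    --header "X-Auth-Key: $API_KEY" >> logs.ndjson
done
```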

## Example API requests using cURL

`logs/received`

```
curl "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received?start=2017-07-18T22:00:00Z&end=2017-07-18T22:01:00Z&count=1&fields=ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID" \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>"
```

`logs/rayids/{ray_id}`

```
curl "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/rayids/{ray_id}?timestamps=rfc3339" \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>"
```

Note

The IATA code returned as part of the **RayID** does not need to be included in the request. For example, if you have a **RayID** such as `49ddb3e70e665831-DFW`, only include `49ddb3e70e665831` in your request.

## Fields

Unless specified in the **fields** parameter, the API returns a limited set of log fields. This default field set may change at any time. The list of all available fields is at:

`https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received/fields`

The order in which fields are specified does not matter, and the order of fields in the response is not specified.

Using a bash subshell and `jq`, you can download the logs with all available fields without manually copying and pasting the fields into the request. For example:

```
FIELDS=$(curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received/fields \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>" \
| jq '. | to_entries[] | .key' -r | paste -sd "," -)

curl "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received?start=2017-07-18T22:00:00Z&end=2017-07-18T22:01:00Z&count=1&fields=$FIELDS" \
--header "X-Auth-Email: <EMAIL>" \
--header "X-Auth-Key: <API_KEY>"
```

Refer to [Download jq ↗](https://jqlang.github.io/jq/download/) for more information on obtaining and installing `jq`.

Refer to [HTTP request fields](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http%5Frequests) for the currently available fields.


---

---
title: Understanding the basics
description: The basic access pattern is give me all the logs for zone Z for minute M where the minute M refers to the time the log entries were written to disk in Cloudflare's log aggregation system.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Understanding the basics

## Access pattern

The basic access pattern is _give me all the logs for zone Z for minute M_ where the minute _M_ refers to the time the log entries were written to disk in Cloudflare's log aggregation system.

To start, try running your query every minute. If responses are too small, go up to 5 minutes as this will be appropriate for most zones. If the responses are too large, try going down to 15 seconds.

If your zone has so many logs that it takes longer than 1 minute to read 1 minute worth of logs, run 2 workers staggered, each requesting 1 minute worth of logs every 2 minutes.

Data returned by the API will not change on repeat calls. The order of messages in the response may be different, but the number and content of the messages will always be the same for a given query as long as the response code is `200` and there is no error reading the response body.

Because our log processing system ingests data in batches, most zones with less than 1 million requests per minute will have "empty" minutes. Queries for such a minute result in responses with status `200` but no data in the body. This does not mean that there were no requests proxied by Cloudflare for that minute. It just means that our system did not process a batch of logs for that zone in that minute.

## Order of the data returned

The `logs/received` API endpoint exposes data by time received, which is the time the event was written to disk in the Cloudflare Logs aggregation system.

Ordering by log aggregation time instead of log generation time results in lower (faster) log pipeline latency and deterministic log pulls. Functionally, it is similar to tailing a log file or reading from _rsyslog_ (albeit in chunks).

This means that to obtain logs for a given time range, you can issue one call for each consecutive minute (or other time range). Because log lines are batched by time received and made available, there is no late arriving data. A response for a given minute will never change. You do not have to repeatedly poll a given time range to receive logs as they converge on our aggregation system.

## Format of the data returned

The Logpull API returns data in NDJSON format, whereby each log line is a valid JSON object. Major analysis tools like Google BigQuery and AWS Kinesis require this format.

To turn the resulting log data into a JSON array with one array element per log line, you can use the `jq` tool. Essentially, you pipe the API response into _jq_ using the _slurp_ (or simply _s_) flag:

`<API request data> | jq -s`

Refer to [Download jq ↗](https://jqlang.github.io/jq/download/) for more information on obtaining and installing `jq`.

The following is a sample log with default fields:

```
{
  "ClientIP": "89.163.242.206",
  "ClientRequestHost": "www.theburritobot.com",
  "ClientRequestMethod": "GET",
  "ClientRequestURI": "/static/img/testimonial-hipster.png",
  "EdgeEndTimestamp": 1506702504461999900,
  "EdgeResponseBytes": 69045,
  "EdgeResponseStatus": 200,
  "EdgeStartTimestamp": 1506702504433000200,
  "RayID": "3a6050bcbe121a87"
}
```

## Data retention period

You can query for logs starting from 1 minute in the past (relative to the actual time that you make the query) and go back at least 3 days and up to 7 days. For longer durations, we recommend using [Logpush](https://developers.cloudflare.com/logs/logpush/).


---

---
title: Changelog
image: https://developers.cloudflare.com/core-services-preview.png
---


# Changelog


---

---
title: Audit Logs
description: Audit Logs v2 is now generally available to all Cloudflare customers.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Audit Logs

[ Subscribe to RSS ](https://developers.cloudflare.com/changelog/rss/audit-logs.xml) 

## 2026-03-10

**Audit logs (version 2) - General Availability**
Audit Logs v2 is now generally available to all Cloudflare customers.

![Audit Logs v2 GA](https://developers.cloudflare.com/_astro/auditlogsv2.C3pqAR33_1qYU5j.webp) 

Audit Logs v2 provides a unified and standardized system for tracking and recording all user and system actions across Cloudflare products. Built on Cloudflare's API Shield / OpenAPI gateway, logs are generated automatically without requiring manual instrumentation from individual product teams, ensuring consistency across ~95% of Cloudflare products.

**What's available at GA:**

* **Standardized logging** — Audit logs follow a consistent format across all Cloudflare products, making it easier to search, filter, and investigate activity.
* **Expanded product coverage** — ~95% of Cloudflare products covered, up from ~75% in v1.
* **Granular filtering** — Filter by actor, action type, action result, resource, raw HTTP method, zone, and more. Over 20 filter parameters available via the API.
* **Enhanced context** — Each log entry includes authentication method, interface (API or dashboard), Cloudflare Ray ID, and actor token details.
* **18-month retention** — Logs are retained for 18 months. Full history is accessible via the API or Logpush.

**Access:**

* **Dashboard**: Go to **Manage Account** > **Audit Logs**. Audit Logs v2 is shown by default.
* **API**: `GET https://api.cloudflare.com/client/v4/accounts/{account_id}/logs/audit`
* **Logpush**: Available via the `audit_logs_v2` account-scoped dataset.
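
For example, a sketch of a query against the API endpoint above over a fixed time window (the `since`/`before` parameter names are an assumption; check the API reference for the full filter list):

```
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/logs/audit?since=2026-03-01T00:00:00Z&before=2026-03-10T00:00:00Z" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```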

**Important notes:**

* Approximately 30 days of logs from the Beta period (back to ~February 8, 2026) are available at GA. These Beta logs will expire on ~April 9, 2026. Logs generated after GA will be retained for the full 18 months. Older logs remain available in Audit Logs v1.
* The UI query window is limited to 90 days for performance reasons. Use the API or Logpush for access to the full 18-month history.
* `GET` requests (view actions) and `4xx` error responses are not logged at GA. `GET` logging will be selectively re-enabled for sensitive read operations in a future release.
* Audit Logs v1 continues to run in parallel. A deprecation timeline will be communicated separately.
* Before and after values — the ability to see what a value changed from and to — is a highly requested feature and is on our roadmap for a post-GA release. In the meantime, we recommend using Audit Logs v1 for before and after values. Audit Logs v1 will continue to run in parallel until this feature is available in v2.

For more details, refer to the [Audit Logs v2 documentation](https://developers.cloudflare.com/fundamentals/account/account-security/audit-logs/).

## 2025-08-22

  
**Audit logs (version 2) - Logpush Beta Release**   

[Audit Logs v2 dataset](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/audit%5Flogs%5Fv2/) is now available via Logpush.

This expands on earlier releases of Audit Logs v2 in the [API](https://developers.cloudflare.com/changelog/2025-03-27-automatic-audit-logs-beta-release/) and [Dashboard UI](https://developers.cloudflare.com/changelog/2025-07-29-audit-logs-v2-ui-beta/).

We recommend creating a new Logpush job for the Audit Logs v2 dataset.

Timelines for General Availability (GA) of Audit Logs v2 and the retirement of Audit Logs v1 will be shared in upcoming updates.

For more details on Audit Logs v2, refer to the [Audit Logs documentation ↗](https://developers.cloudflare.com/fundamentals/account/account-security/audit-logs/).

## 2025-07-29

  
**Audit logs (version 2) - UI Beta Release**   

The Audit Logs v2 UI is now available to all Cloudflare customers in Beta. This release builds on the public [Beta of the Audit Logs v2 API](https://developers.cloudflare.com/changelog/product/audit-logs/) and introduces a redesigned user interface with powerful new capabilities to make it easier to investigate account activity.

**Enabling the new UI**

To try the new user interface, go to **Manage Account > Audit Logs**. The previous version of Audit Logs remains available and can be re-enabled at any time using the **Switch back to old Audit Logs** link in the banner at the top of the page.

**New Features:**

* **Advanced Filtering**: Filter logs by actor, resource, method, and more for faster insights.
* **On-hover filter controls**: Easily include or exclude values in queries by hovering over fields within a log entry.
* **Detailed Log Sidebar**: View rich context for each log entry without leaving the main view.
* **JSON Log View**: Inspect the raw log data in a structured JSON format.
* **Custom Time Ranges**: Define your own time windows to view historical activity.
* **Infinite Scroll**: Seamlessly browse logs without clicking through pages.
![Audit Logs v2 new UI](https://developers.cloudflare.com/_astro/Audit_logs_v2_filters.Bacd1IHg_f0dJz.webp) 

For more details on Audit Logs v2, see the [Audit Logs documentation ↗](https://developers.cloudflare.com/fundamentals/account/account-security/audit-logs/).

**Known issues**

* A small number of audit logs may currently be unavailable in Audit Logs v2. In some cases, fields such as actor information may be missing from individual entries. We are actively working to improve coverage and completeness for General Availability.
* Export to CSV is not supported in the new UI.

We are actively refining the Audit Logs v2 experience and welcome your feedback. You can share overall feedback by clicking the thumbs up or thumbs down icons at the top of the page, or provide feedback on specific audit log entries using the thumbs icons next to each audit log line or by filling out our [feedback form ↗](https://docs.google.com/forms/d/e/1FAIpQLSfXGkJpOG1jUPEh-flJy9B13icmcdBhveFwe-X0EzQjJQnQfQ/viewform?usp=sharing).

## 2025-03-27

  
**Audit logs (version 2) - Beta Release**   

The latest version of audit logs streamlines audit logging by automatically capturing all user and system actions performed through the Cloudflare Dashboard or public APIs. This update leverages Cloudflare’s existing API Shield to generate audit logs based on OpenAPI schemas, ensuring a more consistent and automated logging process.

Availability: Audit logs (version 2) is now in Beta, with support limited to **API access**.

Use the following API endpoint to retrieve audit logs:

```
GET https://api.cloudflare.com/client/v4/accounts/<account_id>/logs/audit?since=<date>&before=<date>
```

You can access detailed documentation for the audit logs (version 2) Beta API release [here ↗](https://developers.cloudflare.com/api/resources/accounts/subresources/logs/subresources/audit/methods/list/).

**Key Improvements in the Beta Release:**

* **Automated & standardized logging**: Logs are now generated automatically using a standardized system, replacing manual, team-dependent logging. This ensures consistency across all Cloudflare services.
* **Expanded product coverage**: Increased audit log coverage from 75% to 95%. Key API endpoints such as `/accounts`, `/zones`, and `/organizations` are now included.
* **Granular filtering**: Logs now follow a uniform format, enabling precise filtering by actions, users, methods, and resources—allowing for faster and more efficient investigations.
* **Enhanced context and traceability**: Each log entry now includes detailed context, such as the authentication method used, the interface (API or Dashboard) through which the action was performed, and mappings to Cloudflare Ray IDs for better traceability.
* **Comprehensive activity capture**: Expanded logging to include GET requests and failed attempts, ensuring that all critical activities are recorded.

**Known Limitations in Beta**

* Error handling for the API is not implemented.
* There may be gaps or missing entries in the available audit logs.
* UI is unavailable in this Beta release.
* System-level logs and User-Activity logs are not included.

Support for these features is coming as part of the GA release later this year. For more details, including a sample audit log, check out our blog post: [Introducing Automatic Audit Logs ↗](https://blog.cloudflare.com/introducing-automatic-audit-logs/)


---

---
title: Logs
description: Logpush now supports higher-precision timestamp formats for log output. You can configure jobs to output timestamps at millisecond or nanosecond precision. This is available in both the Logpush UI in the Cloudflare dashboard and the Logpush API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Logs

[ Subscribe to RSS ](https://developers.cloudflare.com/changelog/rss/logs.xml) 

## 2026-03-25

  
**Logpush — More granular timestamps**   

Logpush now supports higher-precision timestamp formats for log output. You can configure jobs to output timestamps at millisecond or nanosecond precision. This is available in both the Logpush UI in the Cloudflare dashboard and the [Logpush API](https://developers.cloudflare.com/api/resources/logpush/subresources/jobs/).

To use the new formats, set `timestamp_format` in your Logpush job's `output_options`:

* `rfc3339ms` — `2024-02-17T23:52:01.123Z`
* `rfc3339ns` — `2024-02-17T23:52:01.123456789Z`

Default timestamp formats apply unless explicitly set. The dashboard defaults to `rfc3339` and the API defaults to `unixnano`.
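
As a minimal sketch, a zone-scoped job could be switched to millisecond precision via the API as follows, assuming `<zone_id>`, `<job_id>`, and `<api_token>` placeholders (depending on your job, you may need to resend your other `output_options` settings alongside `timestamp_format`):

```
curl --request PUT "https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/jobs/<job_id>" \
  --header "Authorization: Bearer <api_token>" \
  --header "Content-Type: application/json" \
  --data '{"output_options":{"timestamp_format":"rfc3339ms"}}'
```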

For more information, refer to the [Log output options](https://developers.cloudflare.com/logs/logpush/logpush-job/log-output-options/) documentation.

## 2026-03-09

  
**New MCP Portal Logs dataset and new fields across multiple Logpush datasets in Cloudflare Logs**   

Cloudflare has added new fields across multiple [Logpush datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/):

#### New dataset

* **MCP Portal Logs**: A new dataset with fields including `ClientCountry`, `ClientIP`, `ColoCode`, `Datetime`, `Error`, `Method`, `PortalAUD`, `PortalID`, `PromptGetName`, `ResourceReadURI`, `ServerAUD`, `ServerID`, `ServerResponseDurationMs`, `ServerURL`, `SessionID`, `Success`, `ToolCallName`, `UserEmail`, and `UserID`.

#### New fields in existing datasets

* **DEX Application Tests**: `HTTPRedirectEndMs`, `HTTPRedirectStartMs`, `HTTPResponseBody`, and `HTTPResponseHeaders`.
* **DEX Device State Events**: `ExperimentalExtra`.
* **Firewall Events**: `FraudUserID`.
* **Gateway HTTP**: `AppControlInfo` and `ApplicationStatuses`.
* **Gateway DNS**: `InternalDNSDurationMs`.
* **HTTP Requests**: `FraudEmailRisk`, `FraudUserID`, and `PayPerCrawlStatus`.
* **Network Analytics Logs**: `DNSQueryName`, `DNSQueryType`, and `PFPCustomTag`.
* **WARP Toggle Changes**: `UserEmail`.
* **WARP Config Changes**: `UserEmail`.
* **Zero Trust Network Session Logs**: `SNI`.

For the complete field definitions for each dataset, refer to [Logpush datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/).

## 2025-12-11

  
**SentinelOne as Logpush destination**   

Cloudflare Logpush now supports **SentinelOne** as a native destination.

Logs from Cloudflare can be sent to [SentinelOne AI SIEM ↗](https://www.sentinelone.com/) via [Logpush](https://developers.cloudflare.com/logs/logpush/). The destination can be configured through the Logpush UI in the Cloudflare dashboard or by using the [Logpush API](https://developers.cloudflare.com/api/resources/logpush/subresources/jobs/).

For more information, refer to the [Destination Configuration](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/sentinelone/) documentation.

## 2025-11-11

  
**Logpush Health Dashboards**   

We’re excited to introduce **Logpush Health Dashboards**, giving customers real-time visibility into the status, reliability, and performance of their [Logpush](https://developers.cloudflare.com/logs/logpush/) jobs. Health dashboards make it easier to detect delivery issues, monitor job stability, and track performance across destinations. The dashboards are divided into two sections:

* **Upload Health**: See how much data was successfully uploaded, where drops occurred, and how your jobs are performing overall. This includes data completeness, success rate, and upload volume.
* **Upload Reliability**: Diagnose issues impacting stability, retries, or latency, and monitor key metrics such as retry counts, upload duration, and destination availability.
![Health Dashboard](https://developers.cloudflare.com/_astro/Health-Dashboard.CP0mV0IW_Z1GdXr6.webp) 

Health Dashboards can be accessed from the Logpush page in the Cloudflare dashboard at the account or zone level, under the Health tab. For more details, refer to our [**Logpush Health Dashboards**](https://developers.cloudflare.com/logs/logpush/logpush-health) documentation, which includes a comprehensive troubleshooting guide to help interpret and resolve common issues.

## 2025-11-05

  
**Logpush Permission Update for Zero Trust Datasets**   

[Permissions](https://developers.cloudflare.com/logs/logpush/permissions/) for managing Logpush jobs related to [Zero Trust datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/) (Access, Gateway, and DEX) have been updated to improve data security and enforce appropriate access controls.

To view, create, update, or delete Logpush jobs for Zero Trust datasets, users must now have both of the following permissions:

* Logs Edit
* Zero Trust: PII Read

Note

Update your UI, API, or Terraform configurations to include the new permissions. Without the additional permission, requests for Zero Trust datasets will fail due to insufficient access.

## 2025-10-27

  
**Azure Sentinel Connector**   

Logpush now supports integration with [Microsoft Sentinel ↗](https://www.microsoft.com/en-us/security/business/siem-and-xdr/microsoft-sentinel). The new Azure Sentinel Connector, built on Microsoft's Codeless Connector Framework (CCF), is now available. This solution replaces the previous Azure Functions-based connector, offering significant improvements in security, data control, and ease of use for customers. Logpush customers can send logs to Azure Blob Storage and configure this new Sentinel Connector to ingest those logs directly into Microsoft Sentinel.

This upgrade significantly streamlines log ingestion, improves security, and provides greater control:

* Simplified Implementation: Easier for engineering teams to set up and maintain.
* Cost Control: New support for Data Collection Rules (DCRs) allows you to filter and transform logs at ingestion time, offering potential cost savings.
* Enhanced Security: CCF provides a higher level of security compared to the older Azure Functions connector.
* Data Lake Integration: Includes native integration with Data Lake.

Find the new solution [here ↗](https://marketplace.microsoft.com/en-us/product/azure-application/cloudflare.azure-sentinel-solution-cloudflare-ccf?tab=Overview) and refer to [Cloudflare's developer documentation ↗](https://developers.cloudflare.com/analytics/analytics-integrations/sentinel/#supported-logs:~:text=WorkBook%20fields,-Analytic%20rules) for more information on the connector, including setup steps, supported logs, and Microsoft's resources.

## 2025-08-22

  
**Dedicated Egress IP for Logpush**   

Cloudflare Logpush can now deliver logs using fixed, dedicated egress IPs. By routing Logpush traffic through a Cloudflare zone enabled with [Aegis IP](https://developers.cloudflare.com/smart-shield/configuration/dedicated-egress-ips/), your log destination only needs to allow Aegis IPs, making setup more secure.

Highlights:

* Fixed egress IPs ensure your destination only accepts traffic from known addresses.
* Works with any supported Logpush destination.
* Recommended to use a dedicated zone as a proxy for easier management.

To get started, work with your Cloudflare account team to provision Aegis IPs, then configure your Logpush job to deliver logs through the proxy zone. For full setup instructions, refer to the [Logpush documentation](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/egress-ip/).

## 2025-08-13

  
**IBM Cloud Logs as Logpush destination**   

Cloudflare Logpush now supports IBM Cloud Logs as a native destination.

Logs from Cloudflare can be sent to [IBM Cloud Logs ↗](https://www.ibm.com/products/cloud-logs) via [Logpush](https://developers.cloudflare.com/logs/logpush/). The setup can be done through the Logpush UI in the Cloudflare dashboard or by using the [Logpush API](https://developers.cloudflare.com/api/resources/logpush/subresources/jobs/). The integration requires an IBM Cloud Logs HTTP Source Address and an IBM API Key. The feature also allows for filtering events and selecting specific log fields.

For more information, refer to the [Destination Configuration](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/ibm-cloud-logs/) documentation.

## 2025-04-18

  
**Custom fields raw and transformed values support**   

Custom Fields now support logging both **raw and transformed values** for request and response headers in the HTTP requests dataset.

These fields are configured per zone and apply to all Logpush jobs in that zone that include request or response headers. Each header can be logged in only one format, either raw or transformed, not both.

By default:

* Request headers are logged as raw values
* Response headers are logged as transformed values

These defaults can be overridden to suit your logging needs.

Note

Transformed and raw values for request and response headers are available **only via the API** and cannot be set through the UI.

For more information, refer to the [Custom fields](https://developers.cloudflare.com/logs/logpush/logpush-job/custom-fields/) documentation.

## 2025-03-06

  
**One-click Logpush Setup with R2 Object Storage**   

We’ve streamlined the [Logpush](https://developers.cloudflare.com/logs/logpush/) setup process by integrating R2 bucket creation directly into the Logpush workflow!

Now, you no longer need to navigate multiple pages to manually create an R2 bucket or copy credentials. With this update, you can seamlessly **configure a Logpush job to R2 in just one click**, reducing friction and making setup faster and easier.

This enhancement makes it easier for customers to adopt Logpush and R2.

For more details, refer to our [Logs](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/r2/) documentation.

## 2024-10-08

  
**New fields added to Gateway-related datasets in Cloudflare Logs**   

Cloudflare has introduced new fields to two Gateway-related datasets in Cloudflare Logs:

* **Gateway HTTP**: `ApplicationIDs`, `ApplicationNames`, `CategoryIDs`, `CategoryNames`, `DestinationIPContinentCode`, `DestinationIPCountryCode`, `ProxyEndpoint`, `SourceIPContinentCode`, `SourceIPCountryCode`, `VirtualNetworkID`, and `VirtualNetworkName`.
* **Gateway Network**: `ApplicationIDs`, `ApplicationNames`, `DestinationIPContinentCode`, `DestinationIPCountryCode`, `ProxyEndpoint`, `SourceIPContinentCode`, `SourceIPCountryCode`, `TransportProtocol`, `VirtualNetworkID`, and `VirtualNetworkName`.


---

---
title: Glossary
description: Review the definitions for terms used across Cloudflare's Logs documentation.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Glossary

Review the definitions for terms used across Cloudflare's Logs documentation.

| Term        | Definition                                                                                                                                                                                                                                                                                                                                                    |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| debugging   | The process of identifying and resolving errors or issues within software applications or systems, often facilitated by analyzing log data.                                                                                                                                                                                                                   |
| deprecation | Deprecation in software development involves officially labeling a feature as outdated. While a deprecated software feature remains within the software, users are warned and encouraged to adopt alternatives. Eventually, deprecated features may be removed. This approach ensures backward compatibility and gives programmers time to update their code. |
| event       | An occurrence or happening that is significant and worthy of being recorded in a log.                                                                                                                                                                                                                                                                         |
| log         | A chronological record of events, actions, or transactions, typically used for tracking and troubleshooting purposes.                                                                                                                                                                                                                                         |
| log file    | A file containing a collection of log entries, usually stored in a structured or semi-structured format.                                                                                                                                                                                                                                                      |
| logging     | The process of recording events, actions, or transactions in a log.                                                                                                                                                                                                                                                                                           |
| timestamp   | A data field indicating the date and time when an event occurred, often used for sequencing and analysis.                                                                                                                                                                                                                                                     |


---

---
title: Logpush MCP server
image: https://developers.cloudflare.com/core-services-preview.png
---


# Logpush MCP server


---

---
title: Audit Logs MCP server
image: https://developers.cloudflare.com/core-services-preview.png
---


# Audit Logs MCP server


---

---
title: FAQ
description: Below you will find answers to the most commonly asked questions regarding Cloudflare Logs. If you cannot find the answer you are looking for, go to the community page and post your question there.
image: https://developers.cloudflare.com/core-services-preview.png
---


# FAQ

Below you will find answers to the most commonly asked questions regarding Cloudflare Logs. If you cannot find the answer you are looking for, go to the [community page ↗](https://community.cloudflare.com/) and post your question there.

---

## General FAQ

For general questions about Logs.

[ General FAQ ❯ ](https://developers.cloudflare.com/logs/faq/general-faq/) 

## Logpush

For questions about Logpush.

[ Logpush ❯ ](https://developers.cloudflare.com/logs/faq/logpush/) 

## Instant Logs

For questions about Instant Logs.

[ Instant Logs ❯ ](https://developers.cloudflare.com/logs/faq/instant-logs/) 

## Logpull API

For questions about the Logpull API.

[ Logpull API ❯ ](https://developers.cloudflare.com/logs/faq/logpull-api/) 

## Common calculations

For questions about common calculations.

[ Common calculations ❯ ](https://developers.cloudflare.com/logs/faq/common-calculations/) 

## Random hostnames

For questions about unexpected hostnames in HTTP logs for partial zones.

[ Random hostnames ❯ ](https://developers.cloudflare.com/logs/faq/random-hostnames-partial-zones/) 


---

---
title: Common calculations FAQ
description: Learn more about calculating bytes served by the origin and bandwidth usage.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Common calculations FAQ

[❮ Back to FAQ](https://developers.cloudflare.com/logs/faq/)

### How can I calculate bytes served by the origin from Cloudflare Logs?

The best way to calculate bytes served by the origin is to use the `CacheResponseBytes` field in Cloudflare Logs and to include only requests that were served by the origin. Make sure to filter out `OriginResponseStatus` values `0` (no origin response) and `304` (revalidated response).
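
As a rough local sketch, assuming newline-delimited JSON logs saved to a `logs.json` file (a hypothetical filename), the sum could be computed with `jq`:

```
jq -s 'map(select(.OriginResponseStatus != null and .OriginResponseStatus != 0 and .OriginResponseStatus != 304) | .CacheResponseBytes) | add' logs.json
```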

### How do I calculate bandwidth usage for my zone?

Bandwidth (or data transfer) can be calculated by summing the `EdgeResponseBytes` field in HTTP request logs. Some types of requests are not factored into bandwidth calculations. To include only relevant requests in the calculation, add the filter `ClientRequestSource = 'eyeball'`.
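
A comparable sketch under the same assumptions (newline-delimited JSON logs in a local `logs.json` file):

```
jq -s 'map(select(.ClientRequestSource == "eyeball") | .EdgeResponseBytes) | add' logs.json
```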


---

---
title: General FAQ
description: Review frequently asked questions about Cloudflare Logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# General FAQ

[❮ Back to FAQ](https://developers.cloudflare.com/logs/faq/)

### Once a request has passed through the Cloudflare network, how soon are the logs available?

When using **Logpush**, logs are pushed in batches as soon as possible. For example, if you receive a file at 10:10, the file consists of logs that were processed before 10:10.

When using **Logpull**, logs become available in approximately one to five minutes. Cloudflare requires that calls to the **Logpull API** be for time periods at least one minute in the past. For example, if it is 9:43 now, you can ask for logs processed between 9:41 and 9:42. The response will include logs for requests that passed through our network between 9:41 and 9:42 and potentially earlier. Usually Cloudflare's processing takes between three and four minutes, so when you ask for that same time period, you may also see logs of requests that passed through our network at 9:39 or earlier.

These timings are only a guideline, not a guarantee, and may depend on network conditions, the request volume for your domain, and other factors. Although we try to get the logs to you as fast as possible, we prioritize not losing log data over speed. On rare occasions, you may experience a longer delay. In this case, you do not need to take any action. The logs will be available as soon as they are processed.

### Are logs available for customers who are not on an Enterprise plan?

Not yet, but we are planning to make them available to other customer plans in the future.

### When pulling or pushing logs, I occasionally come across a time period with no data, even though I am sure my domain received requests at that time. Is this an expected behavior?

Yes. The time period for which you pull or receive logs is based on our processing time, not the time the requests passed through our network. Empty responses do not mean there were no requests during that time period, just that we did not process any logs for your domain during that time.

### Can I receive logs in a format other than JSON?

Not at this time. Talk to your Cloudflare account team or [Cloudflare Support](https://developers.cloudflare.com/support/contacting-cloudflare-support/) if you are interested in other formats and we will consider them for the future.

### Is it possible to track cache purge requests in the logs?

Yes. As of November 25, 2025, cache purge requests are tracked in [Audit Logs v2](https://developers.cloudflare.com/fundamentals/account/account-security/audit-logs/).

### At which stage are HTTP requests logged?

Requests are logged only after they successfully reach our proxy. This means that requests that fail during the TCP or TLS handshake between the client and the Cloudflare proxy will not be available in the logs.


---

---
title: Instant Logs FAQ
description: Review frequently asked questions about Instant Logs.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Instant Logs FAQ

[❮ Back to FAQ](https://developers.cloudflare.com/logs/faq/)

### I am getting an HTTP 301 when attempting to connect to my WebSocket. What can I do?

Make sure you are using the `wss://` protocol when connecting to your WebSocket.

### I am getting an HTTP 429. What can I do?

Connection requests are rate limited. Try your request again after waiting a few minutes.

### Why am I not receiving data?

First, double-check whether you have a filter defined. If you do, it may be too strict (or incorrect), which would end up dropping all your data.

If you are confident in your filter, check the sample rate you used when creating the session. For example, a sample of 100 means you will receive one log for every 100 requests to your zone.

Finally, make sure the destination is proxied through Cloudflare (also known as orange clouded). We cannot log your request if it does not go through Cloudflare's global network.

### I am getting an error fetching my data. How can I solve this?

Make sure you have the correct permissions. To use Instant Logs you need Super Administrator, Administrator, Log Share, or Log Share Reader permissions.


---

---
title: Logpull API FAQ
description: Review frequently asked questions about the Logpull API.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Logpull API FAQ

[❮ Back to FAQ](https://developers.cloudflare.com/logs/faq/)

### How long are logs retained?

Cloudflare makes logs available for at least three days and up to seven days. If you need your logs for a longer time period, download and store them locally.

### I am asking for logs for the time window of 16:10-16:13. However, the timestamps in the logs show requests that are before this time period. Why does that happen?

When you make a call for the time period of 16:10-16:13, you are actually asking for the logs that were received and processed by our system during that time (hence the endpoint name, `logs/received`). The received time is the time the logs are written to disk. There is some delay between the time the request hits the Cloudflare edge and the time it is received and processed. The **request time** is what you see in the log itself: **EdgeStartTimestamp** and **EdgeEndTimestamp** tell you when the edge started and stopped processing the request.
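For illustration, a Logpull request for that window might look like the following sketch, where `<zone_id>` and `<api_token>` are placeholders and the date is arbitrary:

```
curl "https://api.cloudflare.com/client/v4/zones/<zone_id>/logs/received?start=2024-01-15T16:10:00Z&end=2024-01-15T16:13:00Z" \
  --header "Authorization: Bearer <api_token>"
```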

The advantage of basing the responses on the **time received** rather than the request or edge time is not needing to worry about late-arriving logs. As long as you are calling our API for continuous time segments, you will always get all of your logs without making duplicate calls. If we based the response on request time, you could never be sure that all the logs for that request time had been processed.


---

---
title: Logpush FAQ
description: Review frequently asked questions about Logpush.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Logpush FAQ

[❮ Back to FAQ](https://developers.cloudflare.com/logs/faq/)

Note

The Logpush FAQ entries have been integrated into the main [Logpush documentation](https://developers.cloudflare.com/logs/logpush/) for better discoverability and context. Please refer to the relevant product pages for detailed information.


---

---
title: Random hostnames
description: Why unexpected hostnames appear in HTTP logs for partial zones.
image: https://developers.cloudflare.com/core-services-preview.png
---


# Random hostnames

[❮ Back to FAQ](https://developers.cloudflare.com/logs/faq/)

### Why do I see hostnames in my HTTP logs that I did not configure?

If you use a [partial (CNAME) zone setup](https://developers.cloudflare.com/dns/zone-setups/partial-setup/), you may see hundreds of random hostnames in your HTTP request logs despite only proxying a few DNS records. This is caused by Host header manipulation attacks, not a bug in Cloudflare logging.

### What causes this?

Attackers use a technique called Host header injection:

1. They discover the Cloudflare IP addresses serving your proxied hostname (for example, via DNS lookup of a known proxied subdomain).
2. They send HTTP requests directly to those IPs with forged `Host` headers containing random subdomain guesses.
3. Cloudflare logs the `Host` header value as-is in the `ClientRequestHost` field.
4. The requests reach Cloudflare because they target valid Cloudflare IPs — but the attacker controls the `Host` header content.

The [http.host field](https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/http.host/) contains the `Host` header from the original request, which means attacker-controlled values appear in your logs.

### Why are partial zones susceptible?

With partial (CNAME) zones:

* Only specific hostnames point to Cloudflare via CNAME at your authoritative DNS provider.
* Cloudflare does not control the full zone, so it cannot validate that incoming `Host` headers match configured records.
* Attackers can enumerate subdomains by sending requests to known-good IPs with guessed `Host` headers.

### How do I identify this pattern?

| Indicator                      | What to look for                                                                                                                        |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------- |
| **Request count distribution** | Legitimate hostnames have thousands of requests. Suspicious hostnames have only two to five requests each.                              |
| **Hostname patterns**          | Sequential numbers (0-0, 0-56, 007), common words (admin, api, test, staging), or internal service names (airflow, consul, prometheus). |
| **Source IPs**                 | Suspicious requests often come from a small set of IPs (scanner infrastructure).                                                        |
| **Response codes**             | Many 4xx responses (hostname not found, SSL mismatch).                                                                                  |
| **DNS correlation**            | Suspicious hostnames do not appear in DNS query logs.                                                                                   |
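
One way to surface this pattern is to count requests per hostname in your raw logs; a minimal sketch, assuming newline-delimited JSON logs in a local `logs.json` file:

```
jq -r .ClientRequestHost logs.json | sort | uniq -c | sort -n
```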

### Example data pattern

```
"ClientRequestHost","_count"
"legitimate-proxied.example.com","12498"    # Real traffic
"another-proxied.example.com","6082"        # Real traffic
"0-0.example.com","2"                       # Scanner
"admin.example.com","2"                     # Scanner
"api-staging.example.com","2"               # Scanner
"1234567890.example.com","2"                # Scanner
```

### How do I block these requests?

Create a [WAF custom rule](https://developers.cloudflare.com/waf/custom-rules/) that only allows requests with valid `Host` headers:

```
Expression:
(http.host ne "proxied-hostname-1.example.com" and
 http.host ne "proxied-hostname-2.example.com" and
 http.host ne "proxied-hostname-3.example.com")

Action: Block
```

Tip

Use a hostname list if you have many proxied hostnames, or use a wildcard match if you use a consistent subdomain pattern.

### Can I filter these from my logs instead?

Yes. If you prefer cleaner logs without blocking traffic:

* **At Logpush level** — Filter the job to include only known-good hostnames using [Logpush filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/); see the sketch after this list.
* **At SIEM level** — Filter or exclude hostnames with request counts below a threshold during log analysis.
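
As a rough sketch of such a filter, reusing the example hostnames above (the filter is attached to the Logpush job as a JSON expression; adjust keys and values to your zone):

```
{"where":{"or":[{"key":"ClientRequestHost","operator":"eq","value":"proxied-hostname-1.example.com"},{"key":"ClientRequestHost","operator":"eq","value":"proxied-hostname-2.example.com"}]}}
```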

### Are these requests reaching my origin?

Possibly, if the `Host` header happens to match a configured hostname or if you have a default or catch-all origin. Check `EdgeResponseStatus` and `OriginResponseStatus` to see if origins were contacted.

### Is this a security risk?

The risk is low to moderate. The main concerns are:

* Information disclosure if error pages reveal internal details.
* Resource consumption if requests reach your origin.
* Log noise that makes real attacks harder to identify.

### Why do suspicious hostnames have exactly two requests?

Automated scanners typically send one to two requests per subdomain guess — one initial probe and possibly one retry. This uniform distribution is a reliable indicator of scanning activity.

### How do I verify my solution is working?

After implementing a WAF rule:

1. Check **Firewall Events** for blocked requests matching your rule.
2. Compare log volume before and after — suspicious hostnames should disappear.
3. Verify legitimate traffic is unaffected by checking request counts for real hostnames.


---

---
title: 2023-02-01 - Updates to security fields
description: Cloudflare will deploy some updates to security-related fields in Cloudflare Logs. These updates will affect the following datasets:
image: https://developers.cloudflare.com/core-services-preview.png
---


# 2023-02-01 - Updates to security fields

Cloudflare will deploy some updates to security-related fields in Cloudflare Logs. These updates will affect the following datasets:

* [HTTP Requests](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http%5Frequests/)
* [Firewall Events](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/firewall%5Fevents/)

## Timeline

To minimize possible impacts on our customers' existing SIEM configurations, these updates will happen in two phases according to the following timeline:

### Phase 1 (February 1, 2023)

For the log fields being added, Cloudflare will gradually start adding them to logs datasets.

For the log fields being renamed, Cloudflare will:

* **Add new fields** with the same data as the fields that will be removed in phase 2 (described in this document as old fields). These new fields will become gradually available. Refer to the next sections for details.
* **Announce the deprecation of the old fields.** Cloudflare will remove these fields from logs datasets on August 1, 2023.

For the log fields being removed, Cloudflare is announcing them as deprecated. Their removal from logs datasets will occur on August 1, 2023.

In addition to these Cloudflare Logs changes, Cloudflare will also add new security-related fields to the following [GraphQL datasets](https://developers.cloudflare.com/analytics/graphql-api/features/data-sets/):

* `httpRequestsAdaptive`
* `httpRequestsAdaptiveGroups`
* `firewallEventsAdaptive`
* `firewallEventsAdaptiveGroups`
* `firewallEventsAdaptiveByTimeGroups`

### Phase 2 (August 1, 2023)

For the log fields being renamed, Cloudflare will remove the old fields from the Cloudflare logs datasets. From August 1, 2023 onwards, only the new fields will be available.

For the log fields being removed, Cloudflare will also remove them from the Cloudflare logs datasets. From August 1, 2023 onwards, these fields will no longer be available.

## Concepts

The following concepts are used below in the reviewed field descriptions:

* **Terminating action:** One of the following actions:  
   * `block`  
   * `js_challenge`  
   * `managed_challenge`  
   * `challenge` (_Interactive Challenge_)

For more information on these actions, refer to the [Actions](https://developers.cloudflare.com/ruleset-engine/rules-language/actions/) reference in the Rules language documentation.

* **Security rule:** One of the following rule types:  
   * [WAF managed rule](https://developers.cloudflare.com/waf/managed-rules/)  
   * [WAF custom rule](https://developers.cloudflare.com/waf/custom-rules/)  
   * [WAF rate limiting rule](https://developers.cloudflare.com/waf/rate-limiting-rules/)

## HTTP Requests dataset changes

The following fields will be renamed in the [HTTP Requests](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http%5Frequests/) dataset according to the two-phase strategy outlined in the [timeline](#timeline):

| New field name          | Type         | Description                                                                        | Old field name (deprecated on Aug 1, 2023) |
| ----------------------- | ------------ | ---------------------------------------------------------------------------------- | ----------------------------------------- |
| SecurityRuleID          | String       | Rule ID of the security rule that triggered a terminating action, if any.          | WAFRuleID                                 |
| SecurityRuleDescription | String       | Rule description of the security rule that triggered a terminating action, if any. | WAFRuleMessage                            |
| SecurityAction          | String       | Rule action of the security rule that triggered a terminating action, if any.      | WAFAction                                 |
| SecurityRuleIDs         | String Array | Array of security rule IDs that matched the request.                               | FirewallMatchesRuleIDs                    |
| SecurityActions         | String Array | Array of actions that Cloudflare security products performed on the request.       | FirewallMatchesActions                    |
| SecuritySources         | String Array | Array of Cloudflare security products that matched the request.                    | FirewallMatchesSources                    |

The following fields are now deprecated and they will be removed from the HTTP Requests dataset on August 1, 2023:

| Deprecated field name | Notes                                                                 |
| --------------------- | --------------------------------------------------------------------- |
| WAFProfile            | Used in the previous version of WAF managed rules (now deprecated).   |
| EdgeRateLimitAction   | Used in the previous version of rate limiting rules (now deprecated). |
| EdgeRateLimitID       | Used in the previous version of rate limiting rules (now deprecated). |
| SecurityLevel         | N/A                                                                   |

## Firewall Events dataset changes

The following fields will be added to the [Firewall Events](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/firewall%5Fevents/) dataset:

| Field name  | Type   | Description                                                        |
| ----------- | ------ | ------------------------------------------------------------------ |
| Description | String | The description of the rule triggered by the request.              |
| Ref         | String | The user-defined identifier for the rule triggered by the request. |

## Changes to GraphQL datasets

Cloudflare will add the following fields to the `httpRequestsAdaptive` and `httpRequestsAdaptiveGroups` datasets:

| Field name     | Type   | Description                                                              |
| -------------- | ------ | ------------------------------------------------------------------------ |
| securityAction | String | Action of the security rule that triggered a terminating action, if any. |
| securitySource | String | Source of the security rule that triggered a terminating action, if any. |

Cloudflare will also add the following field to the `firewallEventsAdaptive`, `firewallEventsAdaptiveGroups`, and `firewallEventsAdaptiveByTimeGroups` datasets:

| Field name  | Type   | Description                                           |
| ----------- | ------ | ----------------------------------------------------- |
| description | String | The description of the rule triggered by the request. |

These new fields will become gradually available.

For more information on the available datasets, refer to [GraphQL datasets](https://developers.cloudflare.com/analytics/graphql-api/features/data-sets/).

## Update your Logpush jobs and SIEM systems

Cloudflare will not update existing Logpush jobs to use the renamed fields. You will need to update the jobs according to the instructions provided below.

After updating Logpush jobs, you may need to update external filters or reports in your SIEM systems to reflect the log field changes.

### Update Logpush job in the dashboard

1. In the Cloudflare dashboard, go to the **Logpush** page.  
[ Go to **Logpush** ](https://dash.cloudflare.com/?to=/:account/:zone/analytics/logs)
2. Select **Edit** next to the Logpush job you wish to edit.
3. Under **Select data fields**, update the fields in your job. The new security log fields are available under **General**.
4. Select **Save changes**.

### Update Logpush job via API

Follow the instructions in [Update output_options](https://developers.cloudflare.com/logs/logpush/examples/example-logpush-curl/#optional---update-output%5Foptions) to update the fields in the Logpush job.
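
For instance, a minimal sketch of updating a job's field list via the API, with placeholder IDs and token (send your complete desired field list, since the update replaces the previous selection):

```
curl --request PUT "https://api.cloudflare.com/client/v4/zones/<zone_id>/logpush/jobs/<job_id>" \
  --header "Authorization: Bearer <api_token>" \
  --header "Content-Type: application/json" \
  --data '{"output_options":{"field_names":["RayID","ClientIP","EdgeStartTimestamp","SecurityAction","SecurityRuleID"]}}'
```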

### Update Logpush job via Terraform

If you are already managing Logpush jobs via Terraform, update the `logpull_options` in your existing [cloudflare_logpush_job ↗](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/logpush%5Fjob) Terraform resource. For example:

```
resource "cloudflare_logpush_job" "example_job" {
  enabled          = true
  zone_id          = "<ZONE_ID>"
  name             = "My-logpush-job"

  # Before (deprecated fields):
  # logpull_options = "fields=RayID,ClientIP,EdgeStartTimestamp,WAFAction,WAFProfile&timestamps=rfc3339"

  # After (renamed security fields):
  logpull_options  = "fields=RayID,ClientIP,EdgeStartTimestamp,SecurityAction&timestamps=rfc3339"

  destination_conf = "r2://cloudflare-logs/http_requests/date={DATE}?account-id=${var.account_id}&access-key-id=${cloudflare_api_token.logpush_r2_token.id}&secret-access-key=${sha256(cloudflare_api_token.logpush_r2_token.value)}"
  dataset          = "http_requests"
}
```


---

---
title: ClientRequestSource field
description: The possible values for the ClientRequestSource field are the following:
image: https://developers.cloudflare.com/core-services-preview.png
---


# ClientRequestSource field

The possible values for the `ClientRequestSource` field are the following:

| Value | Request source     | Description                                                                                                                             |
| ----- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------- |
| 0     | unknown            | Should never happen.                                                                                                                    |
| 1     | eyeball            | A request from an end user. If you want to count requests made to the Cloudflare edge, the query should filter on `requestSource = eyeball`. |
| 2     | purge              | A request made by Cloudflare's purge system.                                                                                            |
| 3     | alwaysOnline       | A request made by Cloudflare's Always Online crawler.                                                                                   |
| 4     | healthcheck        | A request made by Cloudflare's Health Check system.                                                                                     |
| 5     | edgeWorkerFetch    | A fetch request made from an edge Worker.                                                                                               |
| 6     | edgeWorkerCacheAPI | A cache API call made from an edge Worker.                                                                                              |
| 7     | edgeWorkerKV       | A KV call made from an edge Worker.                                                                                                     |
| 8     | imageResizing      | Requests made by Cloudflare's Image Resizing product.                                                                                   |
| 9     | orangeToOrange     | A request that comes from another orange clouded zone.                                                                                  |
| 10    | sslDetector        | A request made by Cloudflare's [SSL Detector system ↗](https://blog.cloudflare.com/ssl-tls-recommender/).                               |
| 11    | earlyHintsCache    | An [Early Hint request ↗](https://blog.cloudflare.com/early-hints/).                                                                    |
| 12    | inBrowserChallenge | An end user request caused by a Cloudflare security product (Challenges, JavaScript Detections). These requests never reach the origin. |
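
When working with raw log files rather than GraphQL, the same filter can be applied directly; a minimal sketch, assuming newline-delimited JSON logs in a local `logs.json` file where the field is emitted as its string name:

```
jq -c 'select(.ClientRequestSource == "eyeball")' logs.json
```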


---

---
title: Pathing status
description: Cloudflare issues the following Edge Pathing Statuses:
image: https://developers.cloudflare.com/core-services-preview.png
---


# Pathing status

## Understand pathing

Cloudflare issues the following **Edge Pathing Statuses**:

* **EdgePathingSrc** (pathing source): The stage that made the routing decision.
* **EdgePathingOp** (pathing operation): The specific action or operation taken.
* **EdgePathingStatus** (pathing status): Additional information complementing the **EdgePathingOp**.

### EdgePathingSrc

**EdgePathingSrc** refers to the system that last handled the request before an error occurred or the request was passed to the cache server. Typically, this will be the macro/reputation list. Possible pathing sources include:

* `err`
* `sslv` (SSL verification checker)
* `bic` (browser integrity check)
* `hot` (hotlink protection)
* `macro` (the reputation list)
* `skip` (Always Online or cdnjs resources)
* `user` (user firewall rule)

For example:

```
jq -r .EdgePathingSrc logs.json | sort -n | uniq -c | sort -n | tail
```

```
  1 err
  5 user
 93 macro
```

### EdgePathingOp

**EdgePathingOp** indicates how the request was handled. `wl` is a request that passed all checks and went to your origin server. Other possible values are:

* `errHost` (host header mismatch, DNS errors, etc.)
* `ban` (blocked by IP address, range, etc.)

For example:

```
jq -r .EdgePathingOp logs.json | sort -n | uniq -c | sort -n | tail
```

```
  1 errHost
 97 wl
```

### EdgePathingStatus

**EdgePathingStatus** is the value **EdgePathingSrc** returns. With a pathing source of `macro`, `user`, or `err`, the pathing status indicates the list where the IP address was found. `nr` is the most common value and it means that the request was not flagged by a security check. Some values indicate the class of user; for example, `se` means search engine.

For example:

```
jq -r .EdgePathingStatus logs.json | sort -n | uniq -c | sort -n | tail
```

```
  1 dnsErr
  5 ip
 92 nr
```

## How does pathing map to Threat Analytics?

Certain combinations of pathing have been labeled in the Cloudflare **Threat Analytics** feature (in the **Analytics** app in the Cloudflare dashboard). The mapping is as follows:

| Pathing         | Label                |
| --------------- | -------------------- |
| bic.ban.unknown | Bad browser          |
| hot.ban.unknown | Blocked hotlink      |
| hot.ban.ip      |                      |
| macro.ban.ip    | Bad IP               |
| user.ban.ctry   | Country block        |
| user.ban.ip     | IP block (user)      |
| user.ban.ipr16  | IP range block (/16) |
| user.ban.ipr24  | IP range block (/24) |
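
To reproduce these triples from your own logs, you can join the three fields in the same `src.op.status` order the table uses. A sketch against the same `logs.json` export used elsewhere on this page:

```
# Build src.op.status triples and count them, matching the mapping above
jq -r '"\(.EdgePathingSrc).\(.EdgePathingOp).\(.EdgePathingStatus)"' logs.json | sort | uniq -c | sort -n
```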

## Understand response fields

The response status appears in three places in a request:

* **edgeResponse**
* **cacheResponse**
* **originResponse**

In your logs, the edge is what first accepts a visitor's request. The cache then accepts the request and either forwards it to your origin or responds from the cache. It is possible to have a request that has only an **edgeResponse** or a request that has an **edgeResponse** and a **cacheResponse**, but no **originResponse**.

This is how you can see where a request terminated. Requests with only an **edgeResponse** likely hit a security check or a processing error. Requests with an **edgeResponse** and a **cacheResponse** were either served from the cache or encountered an error contacting your origin server. Requests with an **originResponse** went all the way to your origin server, and any errors in them were served directly from the origin.

For example, the following query shows the status code and pathing information for all requests that terminated at the Cloudflare edge:

```
jq -r 'select(.OriginResponseStatus == null) | select(.CacheResponseStatus == null) | "\(.EdgeResponseStatus) / \(.EdgePathingSrc) / \(.EdgePathingStatus) / \(.EdgePathingOp)"' logs.json | sort -n | uniq -c | sort -n
```

```
  1 403 / macro / nr / wl
  1 409 / err / dnsErr / errHost
```
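
The complementary query, for requests that reached the cache but never your origin (served from the cache, or an error contacting the origin), follows the same pattern under the same assumptions:

```
# Requests with an edgeResponse and a cacheResponse but no originResponse
jq -r 'select(.OriginResponseStatus == null) | select(.CacheResponseStatus != null) | "\(.EdgeResponseStatus) / \(.CacheResponseStatus)"' logs.json | sort -n | uniq -c | sort -n
```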

Pathing information is broken down into the following categories:

## Errors

These occur when a request fails validation performed by the Cloudflare network. Example cases include:

* Cloudflare is unable to look up a domain or zone.
* An attempt to use an origin server's IP address improperly.
* Domain ownership is unclear (for example, the domain is not on Cloudflare).

| EdgePathingStatus  | Description                                               | EdgePathingOp | Status Code |
| ------------------ | --------------------------------------------------------- | ------------- | ----------- |
| cyclic             | Cloudflare loop.                                          | err\_host     | 403         |
| dns\_err           | Unable to resolve.                                        | err\_host     | 409         |
| reserved\_ip       | DNS points to local or disallowed IP.                     | err\_host     | 403         |
| reserved\_ip6      | DNS points to local or disallowed IPv6 address.           | err\_host     | 403         |
| bad\_host          | Bad or no Host header.                                    | err\_host     | 403         |
| no\_existing\_host | Ownership lookup failed: host possibly not on Cloudflare. | err\_host     | 409         |
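
To see which of these error paths appear in your own traffic, a sketch in the same style as the earlier examples:

```
# Status code and pathing status for requests that hit the error stage
jq -r 'select(.EdgePathingSrc == "err") | "\(.EdgeResponseStatus) / \(.EdgePathingStatus)"' logs.json | sort | uniq -c | sort -n
```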

## User-based actions

These occur for actions triggered by a user's configuration for a specific IP address (or IP range).

| EdgePathingStatus                                  | Description                                   | EdgePathingOp | EdgePathingSrc | Status Code |
| -------------------------------------------------- | --------------------------------------------- | ------------- | -------------- | ----------- |
| `asnum` `ip` `ipr24` `ipr16` `ip6` `ip6r64` `ip6r48` `ip6r32` `ctry` | The request was blocked.                          | ban           | user           | 403         |
| `asnum` `ip` `ipr24` `ipr16` `ip6` `ip6r64` `ip6r48` `ip6r32` `ctry` | The request was allowed. The WAF will not execute. | wl            | user           | n/a         |
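
To see which of these user-configured lists are responsible for blocks, the same jq pattern applies:

```
# Which user lists (ip, ipr24, ctry, ...) triggered bans
jq -r 'select(.EdgePathingSrc == "user" and .EdgePathingOp == "ban") | .EdgePathingStatus' logs.json | sort | uniq -c | sort -n
```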

## Firewall Rules

Cloudflare Firewall Rules (deprecated) trigger actions based on matching customer-defined rules.

| EdgePathingStatus       | Description              | EdgePathingOp |
| ----------------------- | ------------------------ | ------------- |
| filter\_based\_firewall | The request was blocked. | ban           |
| filter\_based\_firewall | The request was allowed. | wl            |

## Zone Lockdown

**Zone Lockdown** blocks visitors to particular URIs where the visitor's IP is not allowlisted.

| EdgePathingStatus | Description        | EdgePathingOp | EdgePathingSrc |
| ----------------- | ------------------ | ------------- | -------------- |
| zl                | Lock down applied. | ban           | user           |

## Firewall User-Agent Block

Challenge (interactive or non-interactive) or block visitors whose browser sends a User-Agent header that matches a specific string.

| EdgePathingStatus | Description         | EdgePathingOp | EdgePathingSrc |
| ----------------- | ------------------- | ------------- | -------------- |
| ua                | Blocked User-Agent. | ban           | user           |

## Browser Integrity Check

Assert whether the source of the request is illegitimate or the request itself is malicious.

| EdgePathingStatus | Description      | EdgePathingOp | EdgePathingSrc |
| ----------------- | ---------------- | ------------- | -------------- |
| empty             | Blocked request. | ban           | bic            |

## Hot Linking

Prevent hot linking from other sites.

| EdgePathingStatus | Description      | EdgePathingOp | EdgePathingSrc |
| ----------------- | ---------------- | ------------- | -------------- |
| empty             | Blocked request. | ban           | hot            |

## L7 DDoS mitigation

Drop DDoS attacks through L7 mitigation.

| EdgePathingStatus | Description      | EdgePathingOp | EdgePathingSrc |
| ----------------- | ---------------- | ------------- | -------------- |
| l7ddos            | Blocked request. | ban           | protect        |

## IP Reputation (MACRO)

The macro stage comprises many different paths, categorized by the reputation of the visitor's IP address.

| EdgePathingStatus | Description                                                                                                                               | EdgePathingOp | EdgePathingSrc |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | ------------- | -------------- |
| nr                | There is no reputation data for the IP and no action is being taken.                                                                      | wl            | macro          |
| wl                | IP is explicitly allowlisted.                                                                                                             | wl            | macro          |
| scan              | IP is explicitly allowlisted and categorized as a security scanner.                                                                       | wl            | macro          |
| mon               | IP is explicitly allowlisted and categorized as a Monitoring Service.                                                                     | wl            | macro          |
| bak               | IP is explicitly allowlisted and categorized as a Backup Service.                                                                         | wl            | macro          |
| mob               | IP is explicitly allowlisted and categorized as Mobile Proxy Service.                                                                     | wl            | macro          |
| se                | IP is explicitly allowlisted as it belongs to a search engine crawler and no action is taken.                                             | wl            | macro          |
| grey              | IP is greylisted (suspected to be bad) but the request was either for a favicon or security is turned off and as such, it is allowlisted. | wl            | macro          |
| bad\_ok           | The reputation score of the IP is bad but the request was either for a favicon or security is turned off and as such, it is allowlisted.  | wl            | macro          |
| unknown           | The pathing\_status is unknown and the request is being processed as normal.                                                              | wl            | macro          |
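
To break traffic down by reputation category, following the same pattern as the earlier examples:

```
# Distribution of reputation statuses for requests handled by the macro stage
jq -r 'select(.EdgePathingSrc == "macro") | .EdgePathingStatus' logs.json | sort | uniq -c | sort -n
```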

## Rate Limiting

| EdgePathingStatus | Description                   | EdgePathingOp | EdgePathingSrc |
| ----------------- | ----------------------------- | ------------- | -------------- |
| rate\_limit       | Dropped request.              | ban           | user           |
| rate\_limit       | IP is explicitly allowlisted. | simulate      | user           |

## Special cases

| EdgePathingStatus                                         | Description                         | EdgePathingOp | EdgePathingSrc |
| --------------------------------------------------------- | ----------------------------------- | ------------- | -------------- |
| ao\_crawl                                                 | AO (Always Online) crawler request. | wl            | skip           |
| cdnjs                                                     | Request to a cdnjs resource.        | wl            | skip           |
| forced                                                    | Certain challenge forced by Cloudflare's special headers. |               |                |

---

---
title: Security fields
description: The Security fields contain rules to block requests that contain specific types of content.
image: https://developers.cloudflare.com/core-services-preview.png
---

# Security fields

The Security fields report the security rules that matched a request, including rules that block requests containing specific types of content.

## SecurityActions

| Value                                | Action         | Description                                                                                                                                               |
| ------------------------------------ | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| unknown                              | Unknown        | Take no other action.                                                                                                                                     |
| allow                                | Allow          | Bypass all subsequent rules.                                                                                                                              |
| block                                | Drop           | Block with an HTTP status code of 403, 429, or any other 4XX status code.                                                                                 |
| challenge                            | Challenge Drop | Issue an interactive challenge.                                                                                                                           |
| jschallenge                          | Challenge Drop | Issue a non-interactive challenge.                                                                                                                        |
| log                                  | Log            | Take no action other than logging the event.                                                                                                              |
| connectionClose                      | Close          | Close connection.                                                                                                                                         |
| challengeSolved                      | Allow          | Allow once interactive challenge solved.                                                                                                                  |
| challengeBypassed                    | Allow          | Interactive challenge is not issued again because the visitor had previously passed an interactive challenge and a valid cf\_clearance cookie is present. |
| jschallengeSolved                    | Allow          | Allow once non-interactive challenge solved.                                                                                                              |
| jschallengeBypassed                  | Allow          | Non-interactive challenge not issued because the visitor had previously passed a non-interactive or interactive challenge.                                |
| bypass                               | Allow          | Bypass all subsequent firewall rules.                                                                                                                     |
| managedChallenge                     | Challenge Drop | Issue managed challenge.                                                                                                                                  |
| managedChallengeNonInteractiveSolved | Allow          | Allow once the managed challenge is solved via non-interactive interstitial page.                                                                         |
| managedChallengeInteractiveSolved    | Allow          | Allow once the managed challenge is solved via interactive interstitial page.                                                            |
| managedChallengeBypassed             | Allow          | Challenge was not presented because visitor had clearance from previous challenge.                                                                        |
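
To tally these actions in an export, a sketch assuming the HTTP requests dataset in `logs.json`, where `SecurityActions` is emitted as an array (one entry per rule that matched the request):

```
# Count each security action across all requests; `[]?` iterates the
# array and skips records where the field is absent or null
jq -r '.SecurityActions[]?' logs.json | sort | uniq -c | sort -n
```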

## SecuritySources

| Value           | Description                                                                                                      |
| --------------- | ---------------------------------------------------------------------------------------------------------------- |
| unknown         | Used if an event is received from a new source but the logging system has not been updated.                      |
| asn             | Allow or block based on autonomous system number.                                                                |
| country         | Allow or block based on country.                                                                                 |
| ip              | Allow or block based on IP address.                                                                              |
| ipRange         | Allow or block based on range of IP addresses.                                                                   |
| securityLevel   | Allow or block based on requester's security level.                                                              |
| zoneLockdown    | Restrict all access to a specific zone.                                                                          |
| waf             | Allow or block based on the WAF product settings. This is the WAF/managed rules system that is being phased out. |
| firewallRules   | Allow or block based on a zone's firewall rules configuration (deprecated).                                      |
| uaBlock         | Allow or block based on the Cloudflare User Agent Blocking product settings.                                     |
| rateLimit       | Allow or block based on a rate limiting rule, whether set by you or by Cloudflare.                               |
| bic             | Allow or block based on the Browser Integrity Check product settings.                                            |
| hot             | Allow or block based on the Hotlink Protection product settings.                                                 |
| l7ddos          | Allow or block based on the L7 DDoS product settings.                                                            |
| validation      | Allow or block based on a request that is invalid (cannot be customized).                                        |
| botFight        | Allow or block based on the Bot Fight Mode (classic) product settings.                                           |
| botManagement   | Allow or block based on the Bot Management product settings.                                                     |
| dlp             | Allow or block based on the Data Loss Prevention product settings.                                               |
| firewallManaged | Allow or block based on WAF Managed Rules' settings.                                                             |
| firewallCustom  | Allow or block based on a rule configured in WAF custom rules.                                                   |
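
Assuming `SecuritySources` and `SecurityActions` are parallel arrays in your export (the nth source pairing with the nth action), you can join them to see which product took which action:

```
# Pair each rule source with the action it took, then count the pairs
jq -r 'select(.SecuritySources != null) | [.SecuritySources, .SecurityActions] | transpose[] | join(" / ")' logs.json | sort | uniq -c | sort -n
```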

---

---
title: WAF fields
description: The Web Application Firewall (WAF) contains rules managed by Cloudflare to block requests that contain malicious content.
image: https://developers.cloudflare.com/core-services-preview.png
---

# WAF fields

The Web Application Firewall (WAF) contains rules managed by Cloudflare to block requests that contain malicious content.

## WAF Action

| Value | Action          | Description                                  |
| ----- | --------------- | -------------------------------------------- |
| 0     | Unknown         | Take no other action.                        |
| 1     | Allow           | Bypass all subsequent WAF rules.             |
| 2     | Drop            | Block with an HTTP 403 response.             |
| 3     | Challenge Allow | Issue a Managed Challenge.                   |
| 4     | Challenge Drop  | Unused.                                      |
| 5     | Log             | Take no action other than logging the event. |

## Deprecated fields for internal Cloudflare use

The values of these fields are subject to change by Cloudflare at any time and are irrelevant for customer data analysis:

* WAFFlags
* WAFMatchedVar
