Supported Data Types
📘 Logs | 📈 Metrics | 🚦 Traces
Configuration
| Field | Type | Default | Required | Description |
|---|---|---|---|---|
| Name | String | none | true | Unique identifier within Sawmills. |
| Region | String | "us-east-1" | true | AWS region. |
| S3 Bucket | String | none | true | S3 bucket name. |
| Role ARN | String | none | false | The Role ARN to be assumed. |
| File Prefix | String | none | false | Prefix for the S3 key (root directory inside the bucket). |
| Output Format | String | OTLP JSON | false | Format used to produce output data (see Output Format options below). |
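These fields presumably map onto an S3 exporter configuration inside the collector pipeline. As a rough sketch (assuming a layout like the OpenTelemetry Collector's `awss3` exporter; the exact keys Sawmills generates may differ, and the bucket, prefix, and role values are illustrative only), the required fields above might translate to:

```yaml
# Hypothetical sketch: assumes the destination is backed by an awss3-style
# exporter; key names and all values below are illustrative, not the exact
# configuration Sawmills produces.
exporters:
  awss3:
    s3uploader:
      region: us-east-1                                             # Region
      s3_bucket: my-archive-bucket                                  # S3 Bucket (illustrative)
      s3_prefix: sawmills/logs                                      # File Prefix (illustrative)
      role_arn: arn:aws:iam::123456789012:role/sawmills-s3-writer   # Role ARN (illustrative)
    marshaler: otlp_json                                            # Output Format: OTLP JSON (default)
```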
Advanced Options
| Field | Type | Default | Required | Description |
|---|---|---|---|---|
| REST API Endpoint | String | none | false | Overrides the endpoint instead of constructing it from the Region and S3 Bucket fields. |
| S3 Force Path Style | Boolean | false | false | Set this to true to force the request to use path-style addressing. |
| Use SSL | Boolean | true | false | Set this to false to disable SSL when sending requests. |
| Compression | String | none | false | Compression applied to the output file. |
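The advanced options are mainly useful for S3-compatible stores and transport tweaks. Continuing the same hypothetical exporter sketch (the endpoint, option names, and values are assumptions, not the exact configuration Sawmills emits), overriding the endpoint for a path-style, non-SSL, gzip-compressed target might look like:

```yaml
# Hypothetical sketch of the advanced options, using the same assumed
# awss3-style layout; the endpoint and all values are illustrative only.
exporters:
  awss3:
    s3uploader:
      s3_bucket: my-archive-bucket
      endpoint: http://minio.internal:9000  # REST API Endpoint override (illustrative)
      s3_force_path_style: true             # S3 Force Path Style
      disable_ssl: true                     # Use SSL = false
      compression: gzip                     # Compression
```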
Output Format
The Output Format determines how data is serialized to AWS S3.
- OTLP JSON (default): The OpenTelemetry Protocol format represented as JSON.
- OTLP (Protobuf): The OpenTelemetry Protocol format represented as Protocol Buffers. A single protobuf message is written into each object.
- Sumo Logic (JSON): The Sumo Logic Installed Collector Archive format (logs only).
- Raw payload: Exports the log body as a string (logs only).
- NDJSON (.json.gz): Newline-delimited JSON, gzipped, hourly-partitioned (dt=YYYYMMDD/hour=HH/…). One log record per line. Compatible with Datadog Logs Rehydration, Splunk Generic S3 input, AWS Glue / Splunk FSS3, and other archive readers. (Logs only.)
- Parquet: Apache Parquet files, hourly-partitioned. Compatible with Splunk FSS3, Snowflake, AWS Athena, and other analytics tools. (Logs only.)
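For illustration, with a File Prefix of archive/, the hourly-partitioned formats above write objects under date and hour partitions. A sketch of the resulting key layout, where the bucket and file-name placeholders are purely illustrative:

```text
s3://<s3-bucket>/archive/dt=20240115/hour=09/<file>.json.gz   # NDJSON
s3://<s3-bucket>/archive/dt=20240115/hour=09/<file>.parquet   # Parquet
```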
Snowflake Table Schema for Parquet
When using the Parquet output format, you can create a matching Snowflake table.
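A minimal sketch of such a table, assuming the Parquet files carry typical OTLP log columns (timestamp, severity, body, and attribute/resource maps); the column names and types below are assumptions and must be aligned with the actual Parquet schema of the exported files:

```sql
-- Hypothetical schema sketch: column names and types are assumptions
-- and should be adjusted to match the exported Parquet files.
CREATE TABLE IF NOT EXISTS sawmills_logs (
    timestamp           TIMESTAMP_NTZ,
    observed_timestamp  TIMESTAMP_NTZ,
    severity_text       VARCHAR,
    severity_number     NUMBER,
    body                VARCHAR,
    attributes          VARIANT,   -- log record attributes as a semi-structured map
    resource_attributes VARIANT,   -- resource attributes as a semi-structured map
    trace_id            VARCHAR,
    span_id             VARCHAR
);
```

Parquet files can then be loaded with COPY INTO using MATCH_BY_COLUMN_NAME, provided the table columns line up with the Parquet field names.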
AWS Credential Configuration
The Sawmills collector runs on a Kubernetes cluster and is deployed via Helm charts. To provide AWS credentials, pass `extraEnvs` in `values.yaml` as shown below:
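A minimal sketch, assuming the chart's `extraEnvs` accepts a standard Kubernetes environment-variable list and that the credentials live in a pre-created secret; the secret name and keys are illustrative only:

```yaml
# values.yaml (sketch): inject the standard AWS SDK credential variables.
# The secret name "aws-credentials" and its keys are illustrative only.
extraEnvs:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: secret-access-key
```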