Sawmills Pipelines are workflows that collect, process, and deliver data to where it is needed. Pipelines let you manage this flow of information by efficiently collecting telemetry, transforming it, and delivering it to destination systems for monitoring, analysis, and storage.

How Pipelines Work

A pipeline is composed of multiple connected stages:
  • Source: The starting point where data is ingested (e.g., logs and traces from your application or metrics from cloud services).
  • Processing: Intermediate steps where data is transformed, filtered, or enriched. Examples include adjusting timestamps, aggregating metrics, and removing unnecessary information, which reduces noise and lowers the cost of storage and third-party ingestion and processing.
  • Destination: The endpoint where processed data is sent, such as a monitoring platform, alerting system, or storage service.
By setting up a pipeline, you automate how data flows from start to finish.
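
For illustration only, the following is a minimal, hypothetical sketch of these three stages in plain Python. It is not Sawmills code; it simply mirrors the flow described above: a source producing log records, a processing step that drops noisy DEBUG entries to cut cost, and a destination that ships what remains.

```python
from dataclasses import dataclass

@dataclass
class LogRecord:
    timestamp: float
    severity: str
    body: str

def source() -> list[LogRecord]:
    """Stand-in for ingested data (e.g., application logs)."""
    return [
        LogRecord(1700000000.0, "DEBUG", "cache miss for key user:42"),
        LogRecord(1700000001.0, "ERROR", "payment service timeout"),
    ]

def drop_debug(records: list[LogRecord]) -> list[LogRecord]:
    """Processing stage: remove noisy DEBUG records to reduce storage and ingestion cost."""
    return [r for r in records if r.severity != "DEBUG"]

def destination(records: list[LogRecord]) -> None:
    """Stand-in for a monitoring platform or storage backend."""
    for r in records:
        print(f"shipped: {r.severity} {r.body}")

# Data flows source -> processing -> destination, just as in a pipeline.
destination(drop_debug(source()))
```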

Steps to Build a Pipeline

  1. Go to the Pipeline page.
  2. Click Create New Pipeline. The system will create a new, empty pipeline.

Step 1: Settings

  1. Give the pipeline a descriptive name that captures its purpose.
  2. Optionally, add a detailed description for the pipeline.

Step 2: Add a Source

  1. Click Add Source on the left side.
  2. Select the appropriate source. Remember, a Source is where data originates before reaching the Sawmills Collector.
  3. Configure the Source’s settings according to the provided instructions. For detailed configuration options, refer to the Sources documentation.
  4. Review and save your changes.

Step 3: Add a Destination

  1. Click Add Destination on the right side.
  2. Select the appropriate destination. Remember, a Destination is where you want to send the data, such as an observability platform or storage solution.
  3. Configure the Destination’s settings by following the provided instructions. For more details, check the Destinations documentation.
  4. Review and save your changes.
Notes:
  • You can add multiple sources and multiple destinations to a single pipeline.
  • Data flows from all sources to all destinations, creating a many-to-many relationship.

Step 4: Deploy the Pipeline

Now that you have built your pipeline, deploy it to one or more collectors. To deploy a pipeline to a Collector, click the Deployments button in the top-right corner. This displays all available collectors and their statuses. Select all collectors, or only the ones you want, and then proceed with the deployment.
Deployment to a collector may take several minutes. This delay ensures that the collector’s configuration is updated without causing any data loss.

Step 5: Send Data to the Pipeline

To start sending data to your pipeline, direct data to the Ingress endpoint of the Kubernetes cluster where the Sawmills Collector is installed.
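The exact protocol and endpoint depend on the source you configured. As one hedged illustration, assuming the pipeline’s source accepts OTLP over gRPC and using a placeholder ingress hostname, an application could export traces to it with the OpenTelemetry Python SDK like this:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Placeholder endpoint: replace with the actual Ingress address of the
# Kubernetes cluster where the Sawmills Collector is installed.
exporter = OTLPSpanExporter(endpoint="collector-ingress.example.com:4317", insecure=True)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-app")
with tracer.start_as_current_span("checkout"):
    pass  # application work; the span is exported to the pipeline's source
```

This is a sketch, not a required setup; any agent or SDK that can reach the ingress endpoint and speak a protocol supported by your configured source will work.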
Pipelines start in Draft mode. Once a pipeline is deployed, its Live view becomes accessible; use the toggle above the pipeline to switch between Live and Draft modes.

Live & Draft

  • Draft: This is where you can edit a pipeline.
  • Live: Once a pipeline is deployed, its changes appear on the Live tab along with telemetry data statistics. To edit a pipeline, select the Draft tab.
Following these steps will enable you to set up a pipeline that controls and automates the flow of your telemetry data.