Exporting to New Relic using CloudFlow
Monitoring with New Relic
Here we provide a simple example of how to scrape CloudFlow metrics into New Relic by running the Prometheus Agent on CloudFlow itself, in a project separate from your production workload projects. The basic idea is to run a Prometheus agent that regularly scrapes CloudFlow's /federate endpoint to fetch metrics for your entire account and writes the results into New Relic using "remote write". Below we show how to make a deployment to CloudFlow that runs 24x7 in a single location, so that a single Prometheus agent collects metrics for all projects in your account.
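Before setting anything up, it can help to see what the agent will scrape. The scrape job configured later in this guide is equivalent to the following curl request against CloudFlow's federate endpoint (substitute your own account ID and API token); it should return every metric series in your account in Prometheus exposition format:

# Query CloudFlow's /federate endpoint for all metric series in the account
curl -G 'https://console.section.io/prometheus/account/CLOUDFLOW_ACCOUNT_ID/federate' \
  --data-urlencode 'match[]={__name__=~".+"}' \
  -H 'Authorization: Bearer CLOUDFLOW_API_TOKEN'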
Obtain the following information from your instance of New Relic:
- Go here and follow New Relic's instructions to create a New Relic remote write endpoint. You should end up with something that looks like:
remote_write:
  - url: https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=<chosen nickname>
    authorization:
      credentials: <new relic credential>
Configuration
The following YAML file defines a ConfigMap with configuration for the Prometheus agent. Replace CLOUDFLOW_ACCOUNT_ID, CLOUDFLOW_API_TOKEN, NEW_RELIC_REMOTE_WRITE_URL, and NEW_RELIC_REMOTE_WRITE_CREDENTIAL accordingly. Learn how to obtain the CLOUDFLOW items here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheusagent-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s     # Set the scrape interval to every 30 seconds. Default is every 1 minute.
      evaluation_interval: 30s # Evaluate rules every 30 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

      # Attach these labels to any time series or alerts when communicating with
      # external systems (federation, remote storage, Alertmanager).
      external_labels:
        monitor: 'cloudflow-monitor'

    # A scrape configuration containing exactly one endpoint to scrape:
    # CloudFlow's /federate endpoint for your account.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'cloudflow-federation'
        metrics_path: '/prometheus/account/CLOUDFLOW_ACCOUNT_ID/federate'
        params:
          'match[]':
            - '{__name__=~".+"}'
        scheme: 'https'
        authorization:
          type: Bearer
          credentials: CLOUDFLOW_API_TOKEN
        static_configs:
          - targets: ['console.section.io']

    remote_write:
      - url: NEW_RELIC_REMOTE_WRITE_URL
        authorization:
          credentials: NEW_RELIC_REMOTE_WRITE_CREDENTIAL
Deploy it with kubectl apply -f configmap.yaml.
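If you prefer to keep the placeholders in the file and substitute them at apply time, a quick sed pipeline works; the account ID, token, and license key values below are hypothetical:

# Fill in the placeholders (example values only) and apply the result
sed -e 's|CLOUDFLOW_ACCOUNT_ID|1234|' \
    -e 's|CLOUDFLOW_API_TOKEN|example-api-token|' \
    -e 's|NEW_RELIC_REMOTE_WRITE_URL|https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=cloudflow|' \
    -e 's|NEW_RELIC_REMOTE_WRITE_CREDENTIAL|example-license-key|' \
    configmap.yaml | kubectl apply -f -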
Deployment
The following deployment will run the Prometheus agent on CloudFlow.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prometheusagent
  name: prometheusagent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheusagent
  template:
    metadata:
      labels:
        app: prometheusagent
    spec:
      containers:
        - image: prom/prometheus
          imagePullPolicy: Always
          name: prometheusagent
          volumeMounts:
            - name: prometheusagent-config
              mountPath: /etc/prometheus
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "500m"
      volumes:
        - name: prometheusagent-config
          configMap:
            name: prometheusagent-config
Deploy it with kubectl apply -f deployment.yaml.
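After applying, confirm that the agent pod is running and check its logs for scrape or remote-write errors (standard kubectl commands, assuming your CloudFlow kubeconfig is active):

# Check that the agent pod is running
kubectl get pods -l app=prometheusagent

# Follow the agent's logs
kubectl logs deploy/prometheusagent -f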
Location Strategy
By default, CloudFlow will run this project in 2 locations. We only need to collect metrics from your account once, so let's provide a location optimizer strategy that runs the project in only a single location. Read more about location strategies.
apiVersion: v1
kind: ConfigMap
data:
  strategy: |
    {
      "strategy": "SolverServiceV1",
      "params": {
        "policy": "dynamic",
        "minimumLocations": 1,
        "maximumLocations": 1
      }
    }
metadata:
  name: location-optimizer
Deploy it with kubectl apply -f location-optimizer.yaml.
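To double-check what was stored, you can print the strategy back out:

# Print the stored strategy JSON
kubectl get configmap location-optimizer -o jsonpath='{.data.strategy}'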
View Metrics in New Relic
Log in to your New Relic account to see CloudFlow metrics. New Relic may have automatically provisioned a dashboard for you when the remote-write integration was initially set up and you received your URL and credentials.
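If no dashboard was provisioned, you can still explore the data from New Relic's query builder. As a starting point, New Relic tags Prometheus remote-write data with instrumentation.provider = 'prometheus', so a NRQL query along these lines lists the metric names that have arrived:

FROM Metric SELECT uniques(metricName) WHERE instrumentation.provider = 'prometheus' SINCE 30 minutes ago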