Other monitoring and logging solutions
Cloud Service provides a Prometheus-compatible metrics endpoint that you can connect to your own monitoring infrastructure, as well as Postgres logs by way of blob storage.
If you're using your own cloud account, contact Cloud Service Support to enable this feature on your clusters.
Metrics
You can access metrics in Prometheus format if you request this feature from Cloud Service Support. You can retrieve the hostname and port for your clusters by using the Prometheus URL available on the Monitoring and logging tab on each cluster's detail page in the Console.
These example metrics can help you get started.
Patterns for accessing metrics
A common pattern for metric shipping is to have the vendor-supplied agent scrape the metrics endpoint and send the metrics to the desired platform.
For more information on some common monitoring services, see:
Self-managed Grafana (not Grafana Cloud): Grafana Prometheus datasource
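If you run your own Prometheus (for example, as the datasource behind self-managed Grafana), the scrape job can be sketched as below. The job name, scheme, and metrics path are illustrative assumptions, and the placeholder target stands in for the hostname and port from the Prometheus URL on the Monitoring and logging tab:

```yaml
scrape_configs:
  - job_name: "cloud-service-postgres"   # illustrative name
    scheme: https
    metrics_path: /metrics               # use the path from the Console's Prometheus URL
    static_configs:
      - targets: ["<cluster-hostname>:<port>"]  # from the Monitoring and logging tab
```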
Logs
You can view your logs in your cloud provider's blob storage solution if you request this feature from Cloud Service Support. You can retrieve the location of your object storage on the Monitoring and logging tab on your cluster's detail page in the Console.
The general pattern for getting logs out of blob storage and into your monitoring solution is to write a custom serverless function that watches the blob storage and uploads new log data to the desired platform.
Watching for logs on AWS
You can use Python code to read from the S3 bucket and then call the AWS APIs to upload the logs to your monitoring solution. For an example that targets CloudWatch, see aws-load-balancer-logs-to-cloudwatch.
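As a rough sketch of that pattern (not the aws-load-balancer-logs-to-cloudwatch project itself), a Lambda function triggered by S3 event notifications can read each new log object and forward its lines to CloudWatch Logs. The log group name is illustrative, and the sketch assumes the log group and stream already exist:

```python
import time

def s3_objects_from_event(event):
    """Extract (bucket, key) pairs from an S3 event notification payload."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

def handler(event, context):
    # boto3 is preinstalled in the Lambda runtime; imported here so the
    # parsing helper above stays testable outside AWS.
    import boto3
    s3 = boto3.client("s3")
    logs = boto3.client("logs")
    for bucket, key in s3_objects_from_event(event):
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        events = [
            {"timestamp": int(time.time() * 1000), "message": line}
            for line in body.decode("utf-8").splitlines() if line
        ]
        if events:
            logs.put_log_events(
                logGroupName="/cloud-service/postgres",  # illustrative name
                logStreamName=key,
                logEvents=events,
            )
```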
Watching for logs on Azure
You can use Azure Functions to read the Azure Blob Storage object and then use your monitoring solution's API to upload the data.
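A minimal sketch of such a function, assuming UTF-8 text logs: in a real deployment, the `blob` argument would be the `azure.functions.InputStream` bound by a blob trigger in the function's configuration, and the logging call would be replaced with a call to your monitoring solution's ingest API:

```python
import logging

def split_log_lines(blob_bytes):
    """Decode a log blob into individual non-empty lines (assumes UTF-8 text)."""
    return [line for line in blob_bytes.decode("utf-8").splitlines() if line]

def main(blob):
    # Blob-trigger entry point. In Azure, `blob` is an
    # azure.functions.InputStream; here it only needs a .read() method.
    for line in split_log_lines(blob.read()):
        # Replace with a call to your monitoring vendor's upload API.
        logging.info("forwarding log line: %s", line)
```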
Uploading logs to common third-party providers
After your function observes new log data, you can use your monitoring provider's API to push log data to their platform.
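For providers that accept logs over HTTPS, the push step can be sketched generically as below. The payload fields, ingest URL, and Bearer auth header are illustrative assumptions, not any specific vendor's schema; every provider defines its own endpoint, authentication, and batch-size limits:

```python
import json
import urllib.request

def build_batch(lines, source="cloud-service-postgres"):
    """Shape log lines into a generic JSON batch.
    Field names are illustrative; check your vendor's documented schema."""
    return [{"message": line, "source": source} for line in lines]

def push_logs(lines, ingest_url, api_key):
    # Generic HTTPS push; consult your vendor's docs for the real
    # endpoint, auth header, and ingestion limits.
    req = urllib.request.Request(
        ingest_url,
        data=json.dumps(build_batch(lines)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```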
Some platform providers have limitations regarding the ingestion of logs. Read the vendor documentation carefully.
Datadog
Dynatrace
AWS: S3 log forwarder
Azure: Azure log forwarder
Warning
Currently, the Dynatrace integration works only if you can stream the logs from Azure Storage to Azure Event Hub.
New Relic
CLI command
You can also get the metrics and logs URLs by using the cluster show-monitoring-urls CLI command. See Logging and metrics CLI command for more information.