So you heard about VictoriaMetrics and its claims of better performance at lower resource usage, and you want to see for yourself. After all, who believes everything the authors say about their code? :)
For a meaningful comparison between VictoriaMetrics and Prometheus, you first need to get the same metrics into VM. Prometheus has been in your stack for months, with, say, six months of metrics accumulated. How do you get that data into VM?
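One common approach is VictoriaMetrics' own vmctl migration tool, which can read a Prometheus TSDB snapshot and push its samples into a running VM instance. A rough sketch of the flow (the snapshot path, snapshot name, and VM address below are illustrative assumptions):

```shell
# Ask Prometheus to take a TSDB snapshot via its admin API
# (requires Prometheus to run with --web.enable-admin-api).
curl -XPOST http://localhost:9090/api/v1/admin/tsdb/snapshot

# The snapshot appears under Prometheus' data directory, e.g.
# /var/lib/prometheus/snapshots/<snapshot-name>.

# Feed the snapshot into VictoriaMetrics with vmctl.
# --vm-addr points at the target VM instance (assumed to be local here).
vmctl prometheus \
  --prom-snapshot=/var/lib/prometheus/snapshots/20240101T000000Z-abcdef \
  --vm-addr=http://localhost:8428
```

Migration speed and exact flags vary between vmctl versions, so check `vmctl prometheus --help` for your build before running this against real data.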
Use ZFS Boot Environments to safely perform system upgrades
What are boot environments?
In a nutshell, boot environments are ZFS filesystems that are marked bootable. The idea is that you can have multiple boot environments and boot into the one you like by setting it as active.
Specifically, a bootable filesystem is set with a bootfs property on a boot pool. E.g. our system has a ZFS pool called bootpool and we can see what the current bootable filesystem is by looking at the bootfs property:
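Inspecting and switching boot environments might look like this; the pool name bootpool comes from the text above, while the boot environment name in the last command is a made-up example:

```shell
# Show which dataset the loader will boot from
zpool get bootfs bootpool

# On FreeBSD, bectl gives a higher-level view of the boot environments
bectl list

# Mark a different boot environment as active for the next boot
# (the BE name here is hypothetical)
bectl activate 13.2-RELEASE-p1
```

The point of the `bectl activate` step is that nothing changes until the next reboot, so a bad upgrade can be rolled back by simply activating the previous environment again.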
Running Caddy server on your origin? Here's how to configure your log format to get all the interesting fields
This post talks about Caddy, an HTTP server that’s easy to get up and running, is lightweight, and has a module for exposing metrics in the native Prometheus format, so we like it a lot. In this case we’re using Caddy to host a small static gallery, generated from the image post-processing suite Lightroom.
Caddy runs in a FreeBSD jail (OS-level virtualisation), which is hosted on a fairly powerful physical machine.
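A minimal Caddyfile `log` directive for such a site, assuming Caddy v2 (the hostname and paths are placeholders, not the actual site):

```
gallery.example.com {
    root * /usr/local/www/gallery
    file_server

    log {
        # Write structured access logs to a file in JSON,
        # which keeps all request fields machine-parseable
        output file /var/log/caddy/access.log
        format json
    }
}
```

JSON output is the easiest starting point, since every field Caddy records survives intact and can be filtered or reshaped later in the log pipeline.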
How to process bulk logs and still have hardware resources left for other tasks
Problem statement
Recently I had to send a sizeable amount of logs into our log pipeline: 23,781,261 log lines, to be exact. Our log pipeline is the standard ELK stack, plus Filebeat, a lightweight log shipper from Elastic that forwards logs from the central log server into Logstash.
Here’s a diagram to illustrate the entire flow of logs in our system:
Nodes -> Syslog-NG -> Central log server (Filebeat) -> Logstash -> Elasticsearch

Pushing this number of logs with pretty much standard configurations of Filebeat and ELK completely saturated disk IO on the server running Elasticsearch (that node uses spinning disks, so this is not a difficult feat to achieve).
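One way to stop a bulk import from drowning Elasticsearch is to slow Filebeat down at the source. A sketch of filebeat.yml settings that cap throughput (the paths, host, and values are illustrative; verify the option names against your Filebeat version's reference):

```yaml
# filebeat.yml (fragment) -- throttle how aggressively events are shipped
filebeat.inputs:
  - type: log
    paths:
      - /var/log/bulk/*.log
    # Limit how many files are harvested in parallel
    harvester_limit: 2

output.logstash:
  hosts: ["logstash.internal:5044"]
  # Smaller batches and a single worker reduce pressure downstream
  bulk_max_size: 512
  worker: 1
```

The trade-off is a longer total import time, but the indexing node keeps enough IO headroom to serve its other workloads.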
Provision a simple GCP VM instance
This is a simple how-to for provisioning a VM instance on the Google Cloud Platform. This is intended for example purposes and the VM will mostly have default configuration. The VM size chosen fits into the Always Free usage tier.
We’re using the gcloud CLI tool from the Cloud SDK here. Refer to the quickstart documentation to get it installed and set up.
If you haven’t yet used the gcloud tool, take a few minutes to get familiar with it.
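With gcloud configured, provisioning comes down to a single command. The instance name and zone below are arbitrary examples, and e2-micro in a US region is what currently fits the Always Free tier (verify against Google's current free-tier terms):

```shell
# Create a small Debian VM that fits the Always Free usage tier
gcloud compute instances create free-tier-vm \
  --machine-type=e2-micro \
  --zone=us-west1-b \
  --image-family=debian-12 \
  --image-project=debian-cloud

# Confirm the instance is up
gcloud compute instances list
```

Everything not specified on the command line falls back to project defaults, which is exactly what we want for an example-purposes VM.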