
NOTE: I'm using the Vorteil open source toolkit and the vorteil-elk GitHub repo
One of the key considerations when moving to Vorteil is the idea of building "stateless" applications. According to most articles and definitions, a stateless application stores no data, transaction history or state, and treats each transaction or execution as a new instance.
But how about broadening the definition of stateless applications to stateless machines? For us, a stateless machine has (at least) the following attributes:
- No logging or data being written to the disk, container or VM
- The ability to destroy and replace machines / applications in place.
- Upgrades — a big NO!
I should probably think of a better term than “stateless”. But for now — I don’t have anything better …
The ask
So a prospective customer challenged us to create a "stateless machine" for the ELK stack:
- It had to be a single-instance application with the Elasticsearch, Logstash and Kibana components installed
- It needed to be small, as they would be deploying a LOT of small instances
- It had to be fast and resource efficient
- No upgrades on running machines — destroy and replace
So we set out to build one …
Getting the ELK stack on Vorteil
If you've been following all the articles, you'll know that I tend to choose the easy way out ... this was the exception.
This time, I built the machine from scratch.
How did I start?
- Download the Elasticsearch Linux generic archive and extract it into a directory (commands sketched below)
- Download the Logstash Linux generic archive and extract it into the same directory
- Download the Kibana Linux generic archive and extract it into the same directory
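If you're following along, the downloads look something like this (the version number, URL patterns and directory name are illustrative; grab whichever release you need):
$ VERSION=7.9.0
$ mkdir -p elk
$ curl -LO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${VERSION}-linux-x86_64.tar.gz
$ curl -LO https://artifacts.elastic.co/downloads/logstash/logstash-${VERSION}.tar.gz
$ curl -LO https://artifacts.elastic.co/downloads/kibana/kibana-${VERSION}-linux-x86_64.tar.gz
$ for f in *.tar.gz; do tar -xzf "$f" -C elk/; done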
At least this part was painless … I also used the following command to import shared objects into the project:
$ vorteil projects import-shared-objects
Making it “stateless”
Since we're building a "stateless" machine (we'll test this by destroying the machine and rebuilding it), I'm using NFS to store the Elasticsearch data, Logstash data and Kibana configuration:
Elasticsearch uses:
- $ES_HOME/config for configuration files, specifically elasticsearch.yml
- $ES_HOME/data for data storage
- $ES_HOME/logs for log files
Logstash uses:
- $LOGSTASH_HOME/config for configuration files
- $LOGSTASH_HOME/data for data storage
- $LOGSTASH_HOME/logs for log files
Kibana uses:
- $KIBANA_HOME/config for configuration files
- $KIBANA_HOME/data for data storage
Below is a graphical illustration of the NFS map I’ve created for my ELK stack mount points:

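In plain text, the same map looks roughly like this (the share-side paths are my own naming, based on the directories listed above):
/elasticsearch/config -> $ES_HOME/config
/elasticsearch/data -> $ES_HOME/data
/elasticsearch/logs -> $ES_HOME/logs
/logstash/config -> $LOGSTASH_HOME/config
/logstash/data -> $LOGSTASH_HOME/data
/logstash/logs -> $LOGSTASH_HOME/logs
/kibana/config -> $KIBANA_HOME/config
/kibana/data -> $KIBANA_HOME/data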
A simple Logstash pipeline
Next, I created a simple TCP listener (port 10100) with a Logstash configuration along the following lines (the Elasticsearch output settings here are illustrative):
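input {
  tcp {
    port  => 10100
    codec => json    # the Vorteil machines ship JSON-formatted logs
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]    # Elasticsearch runs on the same instance
    index => "vorteil-%{+YYYY.MM.dd}"     # illustrative index name
  }
}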
This will receive the Vorteil Fluent Bit output (which we set in the Vorteil configuration files).
The NFS share
This was pretty simple: I just created an NFS share on AWS Elastic File System (EFS):

Next, I mounted the file system and copied the "config" directories for each of the applications over (commands sketched below). Once everything was set up, the structure of my NFS file share looked like this:

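For reference, mounting the EFS share and copying the config directories over amounts to something like this (the file-system endpoint is a placeholder):
$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs4 -o nfsvers=4.1 fs-xxxxxxxx.efs.ap-southeast-2.amazonaws.com:/ /mnt/efs
$ sudo mkdir -p /mnt/efs/{elasticsearch,logstash,kibana}
$ cp -r elk/elasticsearch-*/config /mnt/efs/elasticsearch/
$ cp -r elk/logstash-*/config /mnt/efs/logstash/
$ cp -r elk/kibana-*/config /mnt/efs/kibana/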
The VCFG file
Finally, my Vorteil configuration file has all the NFS shares and components mapped and ready to go!
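The NFS mappings in the VCFG look something like this; treat it as a sketch (the EFS endpoint is a placeholder, and the exact [[nfs]] field names should be checked against the Vorteil docs):
[[nfs]]
mount_point = "/elasticsearch"
server = "fs-xxxxxxxx.efs.ap-southeast-2.amazonaws.com:/elasticsearch"
[[nfs]]
mount_point = "/logstash"
server = "fs-xxxxxxxx.efs.ap-southeast-2.amazonaws.com:/logstash"
[[nfs]]
mount_point = "/kibana"
server = "fs-xxxxxxxx.efs.ap-southeast-2.amazonaws.com:/kibana"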
Provision to AWS
Create the AMI
Let’s configure the Vorteil open source toolkit AWS provisioner:
Step 1: create the provisioner configuration file aws.conf:
./vorteil provisioners new amazon-ec2 aws.conf -k <KEY> -s <SECRET> -b <BUCKET>
Step 2: provision my Vorteil machine to AWS:
./vorteil images provision . ../aws --name elk-stack

Create an AWS machine (CLI sketch below):
- t2.medium-sized machine (2 CPUs, 4 GB memory)
- 3 GB storage
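From the CLI, that's along these lines (the AMI ID is a placeholder):
$ aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.medium --block-device-mappings 'DeviceName=/dev/xvda,Ebs={VolumeSize=3}'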

And the ELK stack is running!!!

Inject some data
So for the next step, I injected some data into the running ELK stack by starting up a couple of Vorteil machines on my local Mac with the following configuration:
[[logging]]
config = ["Name=tcp", "Host=ec2-13-54-93-76.ap-southeast-2.compute.amazonaws.com", "Port=10100", "Format=json", "tls=Off"]
type = "system"

[[logging]]
config = ["Name=tcp", "Host=ec2-13-54-93-76.ap-southeast-2.compute.amazonaws.com", "Port=10100", "Format=json", "tls=Off"]
type = "kernel"

[[logging]]
config = ["Name=tcp", "Host=ec2-13-54-93-76.ap-southeast-2.compute.amazonaws.com", "Port=10100", "Format=json", "tls=Off"]
type = "programs"
The machines running are all Rancher k3s instances on Vorteil, and the log files being sent to ELK are containerd.log files.
See below: messages received successfully!

DESTROY and REBUILD!
Moment of truth:
- Terminate the running machine
- Create a new machine from the AMI
- See if logging and configurations persist!?
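With the AWS CLI, the destroy step is a one-liner (the instance ID is a placeholder), and the rebuild is the same run-instances call as before:
$ aws ec2 terminate-instances --instance-ids i-xxxxxxxxxxxxxxxxx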
See the side by side comparison below (and NOTE the different IP addresses):

The whole process of deploying the new instance took about 5 minutes, which includes the start-up time for the AWS EC2 instance and the ELK stack itself.
What I have is the following …
A simple way to deploy the ELK stack:
- A 2.5 GB Vorteil package which contains Elasticsearch, Logstash and Kibana
- All of my configuration files, data and log files stored on an NFS server (this could also have been a separate mount point — doesn’t need to be NFS)
- Upgrades to ALL components are as simple as downloading the latest version and extracting it
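In practice, upgrading a component is just a download, an extract and a re-provision (the version here is illustrative):
$ curl -LO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.1-linux-x86_64.tar.gz
$ tar -xzf elasticsearch-7.9.1-linux-x86_64.tar.gz -C elk/
$ ./vorteil images provision . ../aws --name elk-stack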
The GitHub repo is available here with instructions if you want to give it a go, and if you want to see how it's done, check out the video below!
