Here’s how I got a proof-of-concept etcd cluster up and running with Vagrant. The goal was to find a clean, minimal process so I could better understand what’s going on. As you’ll see, this approach is clearly for learning and experimentation, not for production!

The one stumbling block I ran into had to do with etcd’s leader elections being thrown off because the guest OSes’ system clocks weren’t synced. I resolved this by ensuring that the guest additions were installed. A simple way to do this is to use the vagrant-vbguest plugin:

vagrant plugin install vagrant-vbguest
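
Once the VMs are up (the Vagrantfile is below), you can sanity-check the fix. If I remember right, the plugin adds a vagrant vbguest subcommand that can report the guest additions status, and comparing date across the guests is a quick way to confirm the clocks agree:

# Report the guest additions status for each VM
vagrant vbguest --status

# Compare clocks across the three nodes (names match the Vagrantfile below)
for node in etcd0 etcd1 etcd2; do
  vagrant ssh $node -c date
done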

Here’s the Vagrantfile:

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "file", source: "./start.sh", destination: "start.sh"

  config.vm.define "etcd0" do |etcd0|
    etcd0.vm.network "private_network", ip: "192.168.60.10"
  end

  config.vm.define "etcd1" do |etcd1|
    etcd1.vm.network "private_network", ip: "192.168.60.11"
  end

  config.vm.define "etcd2" do |etcd2|
    etcd2.vm.network "private_network", ip: "192.168.60.12"
  end
end

It creates three VMs, copies the start.sh script to each, and assigns each a private IP address. Here’s start.sh:

#!/bin/bash -x

if [ ! -f /usr/local/bin/etcd ]
then
  curl -L -s https://github.com/coreos/etcd/releases/download/v2.3.1/etcd-v2.3.1-linux-amd64.tar.gz -o - | \
    tar zxf - etcd-v2.3.1-linux-amd64/etcd -O > /usr/local/bin/etcd
  chmod +x /usr/local/bin/etcd
fi

# The private-network address is the second IP reported by hostname -I
# (the first belongs to Vagrant's default NAT interface)
IP=$(hostname -I | awk '{ print $2 }')

ETCD0=192.168.60.10
ETCD1=192.168.60.11
ETCD2=192.168.60.12

case $IP in
$ETCD0) NAME=etcd0;;
$ETCD1) NAME=etcd1;;
$ETCD2) NAME=etcd2;;
esac

etcd --name $NAME \
  --initial-advertise-peer-urls=http://$IP:2380 \
  --listen-peer-urls=http://$IP:2380 \
  --listen-client-urls=http://$IP:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=http://$IP:2379 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster=etcd0=http://$ETCD0:2380,etcd1=http://$ETCD1:2380,etcd2=http://$ETCD2:2380 \
  --initial-cluster-state=new

Once provisioned, SSH into each machine, run start.sh as root, and pay attention to the output. The script installs etcd if it’s not already there, then starts it, passing it the addresses of all three cluster members.
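
Concretely, the whole dance looks something like this (the file provisioner drops start.sh in the vagrant user’s home directory, and sudo is needed because the script writes to /usr/local/bin):

# From the directory containing the Vagrantfile and start.sh
vagrant up

# Then, in a separate terminal for each node:
vagrant ssh etcd0        # likewise etcd1 and etcd2
sudo bash ~/start.sh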

The etcd invocation is more or less copied directly from the examples on the etcd project’s GitHub wiki, along with some conditional logic to pick the machine’s name based on its private IP.
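
To confirm the cluster actually formed, you can poke etcd’s v2 HTTP API from the host or from any of the guests (the client URLs are advertised on the private IPs). This is just a quick smoke test, not part of the setup above; the key name is arbitrary:

# List the cluster members — all three nodes should appear
curl -s http://192.168.60.10:2379/v2/members

# Write a key through one node...
curl -s -XPUT http://192.168.60.10:2379/v2/keys/greeting -d value="hello"

# ...and read it back through another to confirm replication
curl -s http://192.168.60.11:2379/v2/keys/greeting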