Adventures of a wannabe geek!

Ranting within

Deploying Kibana Using Nginx as an SSL Proxy

In my last post, I described how I use Packer and Terraform to deploy an ElasticSearch cluster. To make the logs stored in ElasticSearch searchable, I use Kibana. Following the same pattern, I deploy Kibana by building an AMI with Packer and then creating the infrastructure with Terraform. The Packer template already takes into account that I want to use nginx as a proxy.

Building Kibana AMIs with Packer and Ansible

The template looks as follows:

{
  "variables": {
    "ami_id": "",
    "private_subnet_id": "",
    "security_group_id": "",
    "packer_build_number": ""
  },
  "description": "Kibana Image",
  "builders": [
    {
      "ami_name": "kibana-{{user `packer_build_number`}}",
      "availability_zone": "eu-west-1a",
      "iam_instance_profile": "app-server",
      "instance_type": "t2.small",
      "region": "eu-west-1",
      "run_tags": {
        "role": "packer"
      },
      "security_group_ids": [
        "{{user `security_group_id`}}"
      ],
      "source_ami": "{{user `ami_id`}}",
      "ssh_timeout": "10m",
      "ssh_username": "ubuntu",
      "subnet_id": "{{user `private_subnet_id`}}",
      "tags": {
        "Name": "kibana-packer-image"
      },
      "type": "amazon-ebs"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [ "sleep 10" ]
    },
    {
      "type": "shell",
      "script": "install_dependencies.sh",
      "execute_command": "echo '' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'"
    },
    {
      "type": "ansible-local",
      "playbook_file": "kibana.yml",
      "extra_arguments": [
        "--module-path=./modules"
      ],
      "playbook_dir": "../../"
    }
  ]
}

The install_dependencies.sh script is as described previously.
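For reference, a minimal sketch of what such a script needs to do is below: the ansible-local provisioner requires Ansible to already be present on the instance. The exact package steps here are an assumption, not the script from the previous post:

```shell
#!/bin/bash
# Hypothetical sketch of install_dependencies.sh: install Ansible so the
# ansible-local Packer provisioner can run the playbook on the instance.
set -e

apt-get update
apt-get install -y software-properties-common
apt-add-repository -y ppa:ansible/ansible
apt-get update
apt-get install -y ansible
```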

The Ansible playbook for Kibana looks as follows:

- hosts: all
  sudo: yes

  pre_tasks:
    - ec2_tags:
    - ec2_facts:

  roles:
    - base
    - kibana
    - reverse_proxied

The playbook applies a base role with the common pieces of my system (e.g. Logstash, Sensu client, Prometheus node_exporter) and then proceeds to install Kibana and set it up behind the reverse proxy.

The Kibana role looks as follows:

- name: Download Kibana
  get_url: url=https://download.elasticsearch.org/kibana/kibana/kibana-{{ kibana_version }}-linux-x64.tar.gz dest=/tmp/kibana-{{ kibana_version }}-linux-x64.tar.gz mode=0440

- name: Untar Kibana
  command: tar xzf /tmp/kibana-{{ kibana_version }}-linux-x64.tar.gz -C /opt creates=/opt/kibana-{{ kibana_version }}-linux-x64

- name: Link to Kibana Directory
  file: src=/opt/kibana-{{ kibana_version }}-linux-x64
        dest=/opt/kibana
        state=link
        force=yes

- name: Link Kibana to ElasticSearch
  lineinfile: >
    dest=/opt/kibana/config/kibana.yml
    regexp="^elasticsearch_url:"
    line='elasticsearch_url: "{{ elasticsearch_url }}"'

- name: Create Kibana Init Script
  copy: src=initd.conf dest=/etc/init.d/kibana mode=755 owner=root

- name: Ensure Kibana is running
  service: name=kibana state=started
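The initd.conf file copied above isn't shown in this post. A minimal sketch of such a SysV init script, assuming the Kibana 4 binary at the /opt/kibana symlink created earlier, might look like this:

```sh
#!/bin/sh
# Hypothetical sketch of initd.conf: a minimal init script that runs the
# Kibana binary in the background and tracks it with a pidfile.
case "$1" in
  start)
    start-stop-daemon --start --background \
      --make-pidfile --pidfile /var/run/kibana.pid \
      --exec /opt/kibana/bin/kibana
    ;;
  stop)
    start-stop-daemon --stop --pidfile /var/run/kibana.pid
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: /etc/init.d/kibana {start|stop|restart}"
    exit 1
    ;;
esac
```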

The reverse_proxied Ansible role looks as follows:

- name: download private key file
  command: aws s3 cp {{ reverse_proxy_private_key_s3_path }} /etc/ssl/private/{{ reverse_proxy_private_key }}

- name: private key permissions
  file: path=/etc/ssl/private/{{ reverse_proxy_private_key }} mode=600

- name: download certificate file
  command: aws s3 cp {{ reverse_proxy_cert_s3_path }} /etc/ssl/certs/{{ reverse_proxy_cert }}

- name: download DH 2048bit encryption
  command: aws s3 cp {{ reverse_proxy_dh_pem_s3_path }} /etc/ssl/{{ reverse_proxy_dh_pem }}

- name: certificate permissions
  file: path=/etc/ssl/certs/{{ reverse_proxy_cert }} mode=644

- apt: pkg=nginx

- name: remove default nginx site from sites-enabled
  file: path=/etc/nginx/sites-enabled/default state=absent

- template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf mode=644 owner=root group=root

- file: path=/var/log/nginx
        mode=0755
        state=directory

- service: name=nginx state=restarted

This role downloads a private SSL key and a certificate from an S3 bucket whose access is controlled through IAM. This allows us to configure nginx to act as a proxy. The nginx proxy template is available to view.
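To give a flavour of what nginx.conf.j2 does with the role variables (the real template is linked above; this is only an illustrative sketch), it terminates SSL with the downloaded key, certificate, and DH parameters, and proxies each entry in proxy_urls to its upstream port:

```nginx
# Hypothetical sketch of nginx.conf.j2 (not the template linked above).
events {}

http {
  server {
    listen 80;
    return 301 https://$host$request_uri;
  }

  server {
    listen 443 ssl;

    ssl_certificate     /etc/ssl/certs/{{ reverse_proxy_cert }};
    ssl_certificate_key /etc/ssl/private/{{ reverse_proxy_private_key }};
    ssl_dhparam         /etc/ssl/{{ reverse_proxy_dh_pem }};

    {% for proxy in proxy_urls %}
    location {{ proxy.reverse_proxy_url }} {
      proxy_pass http://127.0.0.1:{{ proxy.reverse_proxy_upstream_port }};
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
    }
    {% endfor %}
  }
}
```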

We can then pass a number of variables to our role for use within Ansible:

reverse_proxy_private_key: mydomain.key
reverse_proxy_private_key_s3_path: s3://my-bucket/certs/mydomain/mydomain.key
reverse_proxy_cert: mydomain.crt
reverse_proxy_cert_s3_path: s3://my-bucket/certs/mydomain/mydomain.crt
reverse_proxy_dh_pem_s3_path: s3://my-bucket/certs/dhparams.pem
reverse_proxy_dh_pem: dhparams.pem
proxy_urls:
  - reverse_proxy_url: /
    reverse_proxy_upstream_port: 3000
kibana_version: 4.1.0
elasticsearch_url: http://myes.com:9200

This allows me to easily change the nginx configuration to patch security vulnerabilities.

Deploying Kibana with Terraform

The infrastructure for the Kibana cluster is now straightforward to define. The Terraform script looks as follows:

resource "aws_security_group" "kibana" {
  name = "kibana-sg"
  description = "Kibana Security Group"
  vpc_id = "${aws_vpc.default.id}"

  ingress {
    from_port = 443
    to_port   = 443
    protocol  = "tcp"
    security_groups = ["${aws_security_group.kibana_elb.id}"]
  }

  ingress {
    from_port = 80
    to_port   = 80
    protocol  = "tcp"
    security_groups = ["${aws_security_group.kibana_elb.id}"]
  }

  egress {
    from_port = "0"
    to_port = "0"
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "Kibana Node"
  }
}

resource "aws_security_group" "kibana_elb" {
  name = "kibana-elb-sg"
  description = "Kibana Elastic Load Balancer Security Group"
  vpc_id = "${aws_vpc.default.id}"

  ingress {
    from_port = 443
    to_port   = 443
    protocol  = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 80
    to_port   = 80
    protocol  = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = "0"
    to_port = "0"
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "Kibana Load Balancer"
  }
}

resource "aws_elb" "kibana_elb" {
  name = "kibana-elb"
  subnets = ["${aws_subnet.primary-private.id}","${aws_subnet.secondary-private.id}","${aws_subnet.tertiary-private.id}"]
  security_groups = ["${aws_security_group.kibana_elb.id}"]
  cross_zone_load_balancing = true
  connection_draining = true
  internal = true

  listener {
    instance_port      = 443
    instance_protocol  = "tcp"
    lb_port            = 443
    lb_protocol        = "tcp"
  }

  listener {
    instance_port      = 80
    instance_protocol  = "tcp"
    lb_port            = 80
    lb_protocol        = "tcp"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    interval            = 10
    target              = "TCP:443"
    timeout             = 5
  }
}

resource "aws_launch_configuration" "kibana_launch_config" {
  image_id = "${var.kibana_ami_id}"
  instance_type = "${var.kibana_instance_type}"
  iam_instance_profile = "app-server"
  key_name = "${aws_key_pair.terraform.key_name}"
  security_groups = ["${aws_security_group.kibana.id}","${aws_security_group.node.id}"]
  enable_monitoring = false

  root_block_device {
    volume_size = "${var.kibana_volume_size}"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "kibana_autoscale_group" {
  name = "kibana-autoscale-group"
  availability_zones = ["${aws_subnet.primary-private.availability_zone}","${aws_subnet.secondary-private.availability_zone}","${aws_subnet.tertiary-private.availability_zone}"]
  vpc_zone_identifier = ["${aws_subnet.primary-private.id}","${aws_subnet.secondary-private.id}","${aws_subnet.tertiary-private.id}"]
  launch_configuration = "${aws_launch_configuration.kibana_launch_config.id}"
  min_size = 2
  max_size = 100
  health_check_type = "EC2"
  load_balancers = ["${aws_elb.kibana_elb.name}"]

  tag {
    key = "Name"
    value = "kibana"
    propagate_at_launch = true
  }

  tag {
    key = "role"
    value = "kibana"
    propagate_at_launch = true
  }

  tag {
    key = "elb_name"
    value = "${aws_elb.kibana_elb.name}"
    propagate_at_launch = true
  }

  tag {
    key = "elb_region"
    value = "${var.aws_region}"
    propagate_at_launch = true
  }
}

This allows me to scale my system up or down just by changing the values in my Terraform configuration. When the instances are instantiated, they are added to the ELB and become available to serve traffic.
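The script above references a handful of input variables. A sketch of how they might be declared is below; the names come from the script, but the defaults are purely illustrative:

```hcl
# Hypothetical variable declarations for the values referenced above.
variable "aws_region" {
  default = "eu-west-1"
}

variable "kibana_ami_id" {
  description = "AMI built by the Packer template above"
}

variable "kibana_instance_type" {
  default = "t2.small"
}

variable "kibana_volume_size" {
  default = "20"
}
```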