
Azure Kubernetes Service (AKS): Azure AD SAML based Single Sign on to secure Elasticsearch and Kibana and securing communications in ELK

This is the second part of the series on deploying Elasticsearch, Logstash and Kibana (ELK) to an Azure Kubernetes Service cluster. In this article I am going to share the steps needed to enable Azure AD SAML based single sign on to secure Elasticsearch and Kibana hosted in AKS. I will also go through the steps needed to secure communications in the ELK cluster. The first part is

Azure Kubernetes Service (AKS): Deploying Elasticsearch, Logstash and Kibana (ELK) and consume messages from Azure Event Hub

Using SAML SSO for Elasticsearch with AAD means that Elasticsearch does not need to be seeded with any user accounts from the directory. Instead, Elasticsearch is able to rely on the claims sent within a SAML token in response to successful authentication to determine identity and privileges. I have referred to this article to enable SAML based single sign on for Elasticsearch.

Kibana, as the user facing component, interacts with the user’s browser and receives all the SAML messages that the Azure AD sends to the Elastic Stack Service Provider. Elasticsearch implements most of the functionality a SAML Service Provider needs. It holds all SAML related configuration in the form of an authentication realm and it also generates all SAML messages required and passes them to Kibana to be relayed to the user’s browser. It finally consumes all SAML Responses that Kibana relays to it, verifies them, extracts the necessary authentication information and creates the internal authentication tokens based on that. The component diagram has been updated to add Azure AD SAML based SSO integration.

The dev tools used to develop these components are Visual Studio for Mac/VS Code, AKS Dashboard, kubectl, bash and openssl. The code snippets in this article are mostly yaml snippets and are included for reference only, as formatting may get distorted; please refer to the GitHub repository for the formatted resources.

Azure AD - Create an Enterprise Application

Before any messages can be exchanged between an Identity Provider and a Service Provider, each one needs to know the configuration details and capabilities of the other. This configuration and these capabilities are encoded in an XML document, the SAML metadata of the SAML entity. Thus we need to create a non-gallery enterprise application within Azure AD. Please refer to these documents at msdn and elastic for more details; the main pointers are listed below

  • In Azure Portal, navigate to Azure Active Directory > Enterprise Applications > New application > Non-gallery application and create an Application.
  • Configure Single Sign on and specify the Identifier and Reply URL. You need to replace the KIBANA_URL placeholder with the DNS name of your Kibana endpoint, as displayed below. Please note that the Reply URL needs to be an HTTPS endpoint. I will also cover securing the ELK cluster in this article, where I am going to share the steps needed to enable TLS for Kibana.
  • Add a new User Claim by specifying 'Name' and 'Namespace' and select 'user.assignedroles' as the source attribute. Role mappings will be defined in Elasticsearch for this claim, based on which Elasticsearch roles are assigned to each user. I have specified the name as role. Based on the values you specify, you will need to update elasticsearch.yml accordingly.
  • Application Roles: The application roles are defined in the application's registration manifest. When a user signs into the application, Azure AD emits a roles claim for each role that the user has been granted, individually or through group membership. In order to add a role, you will need to edit the manifest for this App by navigating to Azure Active Directory > App Registrations > Select Application and Edit Manifest. Please refer to this document about creating App Roles in Azure AD. Create a few roles by editing the manifest as displayed below and save the manifest.
  • Assign Role to users: In order to assign roles to users, add a few users. Navigate to Azure Active Directory > Enterprise Applications > Select application > Users and Groups > Add User and assign roles you created in previous step to these users. These roles are going to be mapped to Elasticsearch roles using role_mapping API in subsequent sections.
  • Finally, download the Federation Metadata Xml and take note of the Azure AD Identifier URL, as these are going to be needed later for setting up SAML based SSO in Elasticsearch. Since I specified Elasticsearch as the application name, the name of this file is Elasticsearch.xml.
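As a sketch, the appRoles section of the application manifest might look like the following. The role values match the mappings used later in this article, but the id GUIDs, display names and descriptions are hypothetical; each id must be a unique GUID in your manifest.

```json
"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "displayName": "superuser",
    "id": "00000000-0000-0000-0000-000000000001",
    "isEnabled": true,
    "description": "Maps to the superuser Elasticsearch role",
    "value": "superuser"
  },
  {
    "allowedMemberTypes": [ "User" ],
    "displayName": "kibanauser",
    "id": "00000000-0000-0000-0000-000000000002",
    "isEnabled": true,
    "description": "Maps to the kibana_user Elasticsearch role",
    "value": "kibanauser"
  }
]
```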

Encrypting communications in ELK Cluster

X-Pack security enables you to encrypt traffic to, from, and within your Elasticsearch cluster. Connections are secured using Transport Layer Security (TLS), which is commonly referred to as "SSL". TLS requires X.509 certificates to perform encryption and authentication of the application that is being communicated with. In order for the communication between nodes to be truly secure, the certificates must be validated. The recommended approach for validating certificate authenticity in an Elasticsearch cluster is to trust the certificate authority (CA) that signed the certificate. By doing this, as nodes are added to your cluster they just need to use a certificate signed by the same CA and the node is automatically allowed to join the cluster. Additionally, it is recommended that the certificates contain subject alternative names (SAN) that correspond to the node’s IP address and DNS name so that hostname verification can be performed.

However, I will be generating self signed certificates using the elasticsearch-certutil command line tool. You can read more about setting up TLS on a cluster in the Elasticsearch documentation. I already have an Elasticsearch cluster running, created during the first part of this series, and I am going to use it to create the self signed certificates. If you already have certificates, you can skip the steps listed below.

  • Generate a new local CA by running the command bin/elasticsearch-certutil ca. The output file name is elastic-stack-ca.p12 and the password is Password1$. Change the name and password based on your preferences.
  • Generate a self signed certificate: bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns sample-elasticsearch. Enter the password for the CA (elastic-stack-ca.p12). The output file name is elastic-certificates.p12 and the password is Password1$. Change the name and password based on your preferences. The --dns parameter value 'sample-elasticsearch' is the Elasticsearch host name and is needed for host name verification (especially for the Logstash pipeline, since hostname verification can't be turned off when ssl is enabled).
  • Generate a cert to configure Kibana to encrypt communications between the browser and the Kibana server: bin/elasticsearch-certutil cert --pem --ca elastic-stack-ca.p12 --dns KIBANA_URL. Update the --dns parameter value based on the DNS name of your Kibana endpoint; this is the same value which was specified during SAML configuration. The output is a zip archive; if you unzip it, you will see the instance.crt and instance.key files which we are going to need for setting up TLS in Kibana.

I have generated the certs on the existing Elasticsearch container, thus the next step is to copy these certs from the AKS container to the local machine. I will use the kubectl command line tool to copy these files from the AKS container to the certs folder on my machine as displayed below

kubectl cp default/sample-elasticsearch-0:elastic-stack-ca.p12 /Users/username/certs/elastic-stack-ca.p12

kubectl cp default/sample-elasticsearch-0:elastic-certificates.p12 /Users/username/certs/elastic-certificates.p12

kubectl cp default/sample-elasticsearch-0:CERTS_ZIP_FILE /Users/username/certs/CERTS_ZIP_FILE

After copying these files to the local machine, I used the OpenSSL utility to create the elastic-stack-ca.pem file which Kibana and Logstash will need to communicate with the TLS enabled Elasticsearch:

openssl pkcs12 -in /Users/username/certs/elastic-stack-ca.p12 -clcerts -nokeys -chain -out /Users/username/certs/elastic-stack-ca.pem
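If you want to sanity check this PKCS#12 to PEM conversion without the cluster certs, the same command shape can be exercised end to end with a throwaway key pair. The file names below are hypothetical and only openssl is assumed:

```shell
# Create a throwaway self signed cert and key (stand-in for the elastic CA)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=sample-elasticsearch" -keyout demo-key.pem -out demo-cert.pem
# Bundle them into a PKCS#12 archive, the format elasticsearch-certutil produces
openssl pkcs12 -export -in demo-cert.pem -inkey demo-key.pem \
  -out demo-ca.p12 -passout 'pass:Password1$'
# Extract the certificate back out as PEM, mirroring the command above
openssl pkcs12 -in demo-ca.p12 -clcerts -nokeys \
  -passin 'pass:Password1$' -out demo-ca.pem
# The PEM file should now contain exactly one certificate block
grep -c "BEGIN CERTIFICATE" demo-ca.pem
```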

Now that the certificates needed to enable TLS are ready, the next step is to create the Elasticsearch, Logstash and Kibana resources. I am going to create custom docker images for Elasticsearch, Logstash and Kibana by bundling the files needed to enable TLS and set up SAML based SSO. The solution structure of the source code, which you can download from GitHub, is displayed below; you can see that I have copied the cert files created in the steps above and Elasticsearch.xml (the Federation Metadata Xml). You will also see a Dockerfile per folder, which I will need to create the custom images. I haven't committed the certificates and the IDP federation metadata xml, thus you need to copy the files you have created to the corresponding folders before building the docker images.

The next step is to create the Elasticsearch, Logstash and Kibana resources in AKS. Part 1 of this series provides a detailed overview of these resources, thus in this article I will only cover the differences and changes needed to enable SSO and TLS as compared to the Part 1 files.


Elasticsearch

Copy elastic-stack-ca.p12, elastic-certificates.p12 and Elasticsearch.xml to the Elasticsearch folder. No changes are needed in the ES_PersistentVolumeClaim.yml and ES_Service.yml files. The complete yaml files needed to enable TLS and deploy Elasticsearch to AKS can be downloaded from GitHub.

Elasticsearch Dockerfile

The Elasticsearch Dockerfile copies the certs and the federation metadata xml to the config folder of Elasticsearch and adds the keystore and truststore passwords as secure settings. You need to update the file names, setting names and passwords based on the values you specified in the previous steps (the setting names below assume the xpack.ssl keystore/truststore configuration used in elasticsearch.yml).

COPY --chown=elasticsearch:elasticsearch elastic-stack-ca.p12 /usr/share/elasticsearch/config/
COPY --chown=elasticsearch:elasticsearch elastic-certificates.p12 /usr/share/elasticsearch/config/
COPY --chown=elasticsearch:elasticsearch Elasticsearch.xml /usr/share/elasticsearch/config/
RUN /bin/bash -c "echo Password1$ | /usr/share/elasticsearch/bin/elasticsearch-keystore add -xf xpack.ssl.keystore.secure_password"
RUN /bin/bash -c "echo Password1$ | /usr/share/elasticsearch/bin/elasticsearch-keystore add -xf xpack.ssl.truststore.secure_password"

The commands to build and publish docker images to Docker Hub are

  • Build Docker Image: docker build -t testelasticsearch .
  • Tag Docker Image: docker tag testelasticsearch {YOUR_DOCKER_REPO}
  • Publish Image to Docker Hub: docker push {YOUR_DOCKER_REPO}

Please update {YOUR_DOCKER_REPO} placeholder with your docker repository.

Create a Kubernetes ConfigMap

The main pointers about changes needed to this resource, as compared to the first part of this series, are

  • SSL is enabled for Elasticsearch: true and true
  • Certificate verification mode is set: xpack.ssl.verification_mode: certificate
  • The keystore and truststore type is PKCS12, the path is the location of the cert bundled in the image (elastic-certificates.p12) and the password is Password1$. Update these values based on your specified values.
  • Cross origin resource sharing is enabled: http.cors.enabled: true
  • Two realms are specified, i.e. native and saml, under xpack: security: authc: realms:. The order of the native realm is 0 so it will take precedence over the saml realm.
  • idp.metadata.path is path of the Federation metadata xml which was downloaded from Azure AD SAML configuration.
  • idp.entity_id is the Azure AD identifier URL which was copied from Azure AD SAML configuration.
  • sp.entity_id is the Kibana endpoint
  • sp.acs is the Kibana reply URL
  • sp.logout is the Kibana logout URL
  • attributes.principal is mapped to the namespace of the name user claim configured in Azure AD
  • attributes.groups is mapped to the namespace of the role user claim configured in Azure AD

apiVersion: v1
kind: ConfigMap
  name: sample-elasticsearch-configmap
  namespace: default
  elasticsearch.yml: | "sample-elasticsearch-cluster"
    discovery.zen.minimum_master_nodes: 1
    # Update max_local_storage_nodes value based on number of nodes
    node.max_local_storage_nodes: 1 true true true
    xpack.monitoring.collection.enabled: true
    xpack.license.self_generated.type: trial
    xpack.ssl.keystore.type: PKCS12
    xpack.ssl.keystore.path: "elastic-certificates.p12"
    xpack.ssl.keystore.password: Password1$
    xpack.ssl.truststore.type: PKCS12
    xpack.ssl.truststore.path: "elastic-certificates.p12"
    xpack.ssl.truststore.password: Password1$
    xpack.ssl.verification_mode: certificate
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    http.max_header_size: 16kb
        realms:
            type: native
            order: 0
            type: saml
            order: 2
            idp.metadata.path: "/usr/share/elasticsearch/config/Elasticsearch.xml"
            idp.entity_id: "AZURE_AD_IDENTIFIER_URL"
            sp.entity_id: "https://KIBANA_URL:5601/"
            sp.acs: "https://KIBANA_URL:5601/api/security/v1/saml"
            sp.logout: "https://KIBANA_URL:5601/logout"
            attributes.principal: ""
            attributes.groups: ""
  role_mapping.yml: |

Create a Kubernetes StatefulSet

Update the ES_Statefulset.yaml file to point to your Docker image: image: YOUR_ELASTICSEARCH_IMAGE
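For reference, the relevant fragment of ES_Statefulset.yaml looks roughly like the following; the container name is illustrative and YOUR_ELASTICSEARCH_IMAGE stands for the repo/tag you pushed above:

```yaml
spec:
  template:
    spec:
      containers:
        - name: sample-elasticsearch
          # Replace with the custom image pushed to your Docker repository
          image: YOUR_ELASTICSEARCH_IMAGE
```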


Kibana

Copy instance.crt, instance.key and elastic-stack-ca.pem to the Kibana folder. The complete yaml files needed to deploy Kibana to AKS can be downloaded from GitHub.

Kibana Dockerfile

The Kibana Dockerfile copies instance.crt and instance.key, needed to enable TLS for Kibana, along with elastic-stack-ca.pem so that Kibana can trust the certificate presented by Elasticsearch.

COPY instance.crt /usr/share/kibana/config/
COPY instance.key /usr/share/kibana/config/
COPY elastic-stack-ca.pem /usr/share/kibana/config/

Create a Kubernetes ConfigMap

The main pointers about changes needed to this resource, as compared to the first part of this series, are

  • Elasticsearch is TLS enabled elasticsearch.url: https://sample-elasticsearch:9200
  • elasticsearch.ssl.certificateAuthorities is set to path of elastic-stack-ca.pem
  • SSL is enabled server.ssl.enabled: true
  • server.ssl.key is set to the path of instance.key
  • server.ssl.certificate is set to the path of instance.crt
  • xpack.monitoring.elasticsearch.ssl.verificationMode is set to certificate
  • elasticsearch.ssl.verificationMode is set to certificate
  • is set to [saml, basic]. The order is important here, as Kibana will use the saml realm unless a basic authorization header is passed in the request, in which case it will give precedence to the native realm.
  • is specified so that, if the Kibana instance is behind a proxy, Kibana knows how to form its public URL

apiVersion: v1
kind: ConfigMap
  name: sample-kibana-configmap
  namespace: default
  kibana.yml: | sample-kibana ""
    elasticsearch.url: https://sample-elasticsearch:9200
    xpack.monitoring.ui.container.elasticsearch.enabled: true
    elasticsearch.username: kibana
    elasticsearch.password: Password1$ "something_at_least_32_characters"
    elasticsearch.ssl.certificateAuthorities: "/usr/share/kibana/config/elastic-stack-ca.pem"
    server.ssl.enabled: true
    server.ssl.key: "/usr/share/kibana/config/instance.key"
    server.ssl.certificate: "/usr/share/kibana/config/instance.crt"
    xpack.monitoring.elasticsearch.ssl.verificationMode: certificate
    elasticsearch.ssl.verificationMode: certificate [saml, basic]
    server.xsrf.whitelist: [/api/security/v1/saml] https KIBANA_URL

Create a Kubernetes Deployment

Update the Kibana_Deployment.yaml file to point to your Docker image: image: YOUR_KIBANA_IMAGE

Create a Kubernetes Service

Update the Kibana_Service.yml file to use port 5601, since we specified port 5601 for Kibana during the SAML configuration and in elasticsearch.yml.

apiVersion: v1
kind: Service
  name: sample-kibana
    component: sample-kibana
  type: LoadBalancer
    component: sample-kibana
    - name: http
      port: 5601
      targetPort: http


Logstash

Copy elastic-stack-ca.pem to the Logstash folder. The yaml files needed to deploy Logstash to AKS can be downloaded from GitHub.

Logstash Dockerfile

The Logstash Dockerfile copies the cert needed for Logstash to communicate with the TLS enabled Elasticsearch.

COPY elastic-stack-ca.pem /usr/share/logstash/config/

Create a Logstash ConfigMap

The main pointers about changes needed to this resource, as compared to the first part of this series, are

  • Elasticsearch is TLS enabled xpack.monitoring.elasticsearch.url: https://sample-elasticsearch:9200
  • is set to point to elastic-stack-ca.pem
  • Pipeline config for azureeventhubs.cfg is updated to
    • Enable ssl ssl => true
    • Set cacert to point to elastic-stack-ca.pem

apiVersion: v1
kind: ConfigMap
  name: sample-logstash-configmap
  namespace: default
  logstash.yml: |
    xpack.monitoring.elasticsearch.url: https://sample-elasticsearch:9200
    dead_letter_queue.enable: true
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.elasticsearch.password: Password1$ [ "/usr/share/logstash/config/elastic-stack-ca.pem" ]
  pipelines.yml: |
    - azureeventhubs
      path.config: "/usr/share/logstash/azureeventhubs.cfg"
  azureeventhubs.cfg: |
    input {
      azure_event_hubs {
        event_hub_connections => ["{AZURE_EVENT_HUB_CONNECTION_STRING};EntityPath=logstash"]
        threads => 2
        decorate_events => true
        consumer_group => "$Default"
        storage_container => "logstash"
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["sample-elasticsearch:9200"]
        user => "elastic"
        password => "Password1$"
        index => "azureeventhub-%{+YYYY.MM.dd}"
        ssl => true
        cacert => "/usr/share/logstash/config/elastic-stack-ca.pem"
      }
    }
  logstash.conf: |

Create a Logstash Deployment

Update the Logstash_Deployment.yaml file to point to your Docker image: image: YOUR_LOGSTASH_IMAGE

Deploy resources to AKS

Deploy the Elasticsearch, Logstash and Kibana resources to AKS. Once the external endpoint is created for the Kibana service, which takes a while, proceed to the next step to update the public IP with the DNS name of Kibana.

Update a public IP resource with a DNS name

The IP address of the Kibana Service external endpoint is mapped to a DNS name using the commands displayed below. You need to set IP and DNSNAME based on your values before running these commands.

bash-3.2$ PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)
bash-3.2$ az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME

Map Users to Elasticsearch Roles using role_mapping API

Elasticsearch's role_mapping API is used to map user roles to Elasticsearch roles. The mapping between groups and the role user claim, along with the mapping between principal and the name user claim, was defined in elasticsearch.yml:

attributes.principal: ""
attributes.groups: ""

A few examples of using the role_mapping API to map roles are

  • This maps all the users of the saml1 realm to the kibana_user Elasticsearch role:

PUT /_xpack/security/role_mapping/aad-saml-allusers
{
  "roles": [ "kibana_user" ],
  "enabled": true,
  "rules": {
    "field": { "": "saml1" }
  }
}

  • This maps users of the saml1 realm having the groups field value superuser to the superuser Elasticsearch role:

PUT /_xpack/security/role_mapping/aad-saml-superuser
{
  "roles": [ "superuser" ],
  "enabled": true,
  "rules": {
    "all": [
      { "field": { "": "saml1" } },
      { "field": { "groups": "superuser" } }
    ]
  }
}

  • This maps users of the saml1 realm having the groups field value kibanauser to the kibana_user Elasticsearch role:

PUT /_xpack/security/role_mapping/aad-saml-kibanauser
{
  "roles": [ "kibana_user" ],
  "enabled": true,
  "rules": {
    "all": [
      { "field": { "": "saml1" } },
      { "field": { "groups": "kibanauser" } }
    ]
  }
}

This completes all the changes needed to enable SAML based SSO and secure communications in ELK. Browse to https://KIBANA_URL:5601 and Kibana will redirect to Azure AD; after a successful login you will be redirected back to Kibana. Browsers will warn since self signed certs are used, and after granting access the Kibana Dashboard will launch.

Browse to https://KIBANA_URL:5601/api/security/v1/me; after authentication it will display the mapped Elasticsearch roles and the saml claims of the user.

On sending messages to Azure Event Hub using the utility I shared in the first part, messages get ingested into Elasticsearch through the Logstash pipeline. The source code of this utility can be downloaded from GitHub.

You can download the source code for this article from the GitHub repository.
