
Docker? Amazon ECS? Splunk? How they now all seamlessly work together


Today the Amazon EC2 Container Service (ECS) team announced they have added the Splunk native logging driver to the newest version of the ECS agent. This means it’s now easier to implement a comprehensive monitoring solution for running your containers at scale. At Splunk, we’re incredibly excited about this integration because customers running containers in ECS can now receive all the benefits of the logging driver, like better data classification & searching, support for flexible RBAC, and easy and scalable data collection built on top of the Splunk HTTP Event Collector (HEC).

The following is a guest blog post by David Potes, AWS Solutions Architect:

Monitoring containers has been somewhat of a challenge in the past, but the ECS team has been hard at work making it easy to integrate your container logs and metrics into key monitoring ecosystems. Recently, they have added native logging to Splunk in the latest version of the ECS agent. In this article, we’ll look at how to get this up and running and present a few examples of how to get greater insight into your Docker containers on ECS.

If you don’t already have Splunk, that’s OK! You can download a 60-day trial of Splunk, or sign up for a Splunk Cloud trial.

How It Works

Using Amazon EC2 Container Service (ECS)?  The Splunk logging driver is now a supported option.  You can set the Splunk logging driver per container in your Task Definition under the “Log configuration” section.  All log messages are sent to Splunk over a secure channel, giving you additional access control and additional data classification options for the logs collected from your Docker ecosystem.

Not Using ECS? No problem!

You can configure Splunk logging as the default logging driver by passing the correct options to the Docker daemon, or you can set it at runtime for a specific container.
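For example, here is a minimal sketch of a daemon-level default, assuming a standard Docker installation that reads /etc/docker/daemon.json (the host name and token below are placeholders):

{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-url": "https://splunkhost:8088",
    "splunk-token": "<your token>",
    "splunk-insecureskipverify": "true"
  }
}

Restart the Docker daemon after editing the file; any container started without an explicit --log-driver flag will then log to Splunk.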

The receiver will be the HTTP Event Collector (HEC), a highly scalable and secure engine built into Splunk 6.3.0 or later. Our traffic will be secured by both a security token and SSL encryption. One of the great things about HEC is that it’s simple to use with either Splunk Enterprise or Splunk Cloud. There’s no need to deploy a forwarder to gather data, since the logging driver handles all of this for you.

Setting Up the HTTP Event Collector

The first step is to set up the HEC and create a security token. In Splunk, select Settings > Data Inputs, click the “HTTP Event Collector” link, enable the collector, and create a new token. For the full instructions, please refer to our online docs.
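Once the token exists, it can be helpful to confirm the collector is reachable before configuring Docker. A quick check, assuming HEC is listening on its default port 8088 on a host named splunkhost and that you skip certificate verification with -k:

curl -k https://splunkhost:8088/services/collector/event \
    -H "Authorization: Splunk <your token>" \
    -d '{"event": "HEC connectivity test"}'

A small JSON response containing "Success" confirms the token and endpoint are working.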

Configuring your Docker Containers

First, make sure your ECS agent is up to date. Run the following to check your agent version:

curl -s 127.0.0.1:51678/v1/metadata | python -mjson.tool

Please refer to the ECS documentation for other options on how to check your ECS Container Agent version.

From an Amazon Linux image, getting the latest ECS agent version is simple. To update your ECS Container Agent, you can follow the instructions available here.
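On the ECS-optimized Amazon Linux AMI, the update typically amounts to refreshing the ecs-init package and restarting the agent service (a sketch; follow the linked instructions for your AMI and region):

sudo yum update -y ecs-init        # pull the latest agent package
sudo stop ecs && sudo start ecs    # restart the agent to pick up the new version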

Configuring the Splunk logging driver in EC2 Container Service (ECS)

You can set up your “Log configuration” options in the AWS Console for your EC2 Container Service.  Under your Task Definition, specify a new “Log configuration” for your existing “Container Definition” under the “STORAGE AND LOGGING” section.

  1. Set the “Log driver” option to splunk
  2. Specify the following mandatory log options (for more details, please reference the documentation):
    1. splunk-url
    2. splunk-token
    3. splunk-insecureskipverify – set to “true” – required if you don’t specify the certificate options (splunk-capath, splunk-caname)
  3. Specify any other optional parameters (e.g., tag, labels, splunk-source, etc.)
  4. Click on the “Update” button to update your configurations

Figure 2: Sample configuration of Log configuration

Here’s a sample JSON Log configuration for a Task Definition:

"logConfiguration": {
    "logDriver": "splunk",
    "options": {
        "splunk-url": "https://splunkhost:8088",
        "splunk-token": "<your token>",
        "tag": "{{.Name}}",
        "splunk-insecureskipverify": "true"
    }
}
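If you prefer the AWS CLI to the console, a task definition file containing a logConfiguration block like the one above can be registered directly (my-task-def.json is a placeholder filename):

aws ecs register-task-definition --cli-input-json file://my-task-def.json

Any new tasks or services launched from that task definition revision will then log to Splunk.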
Configuring the Splunk logging driver by overriding the Docker daemon logging options

Now we will set our logging options in the Docker daemon. We can set Splunk logging on a per-container basis or define it as a default in the Docker daemon settings. We will also specify some additional details at runtime to be passed along with our JSON payloads to help identify the source data.

docker run --log-driver=splunk \
    --log-opt splunk-token=<your token> \
    --log-opt splunk-url=https://splunkhost:8088 \
    --log-opt splunk-capath=/path/to/cert/cacert.pem \
    --log-opt splunk-caname=SplunkServerDefaultCert \
    --log-opt tag="{{.Name}}/{{.ID}}" \
    --log-opt source=mytestsystem \
    --log-opt index=test \
    --log-opt sourcetype=apache \
    --log-opt labels=location \
    --log-opt env=TEST \
    --env "TEST=false" \
    --label location=us-west-2 \
    your/application

 

The splunk-token is the security token we generated when setting up the HEC.

The splunk-url is the target address of your Splunk Cloud or Splunk Enterprise system.

The next two lines define the name of, and the local path to, the Splunk CA certificate. If you would rather not deploy the CA to your systems, set splunk-insecureskipverify to true instead; it is required whenever you don’t specify the certificate options (splunk-capath, splunk-caname), though it does reduce the security of your configuration.
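If you do want certificate verification, the CA certificate for a default Splunk Enterprise installation can typically be found at $SPLUNK_HOME/etc/auth/cacert.pem, so distributing it to a Docker host might look like this (user names, host names, and paths are placeholders, assuming Splunk is installed under /opt/splunk):

scp user@splunkhost:/opt/splunk/etc/auth/cacert.pem /path/to/cert/cacert.pem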

The tag will add the name of the container and the container ID. Using the {{.ID}} option adds only the first 12 characters of the container ID; use {{.FullID}} if you want the full ID.

We can send labels and env values, if these are specified in the container. If there is a collision between a label and env value, the env value will take precedence.

Optional, but recommended: you can also set the sourcetype, the source, and the target index for your Splunk implementation.

Now that we have started the container with Splunk logging options, events should begin to populate in our Splunk searches shortly after the container is running. With the default sourcetype, you can use the following search to see your data: sourcetype=httpevent (if you set sourcetype as in the example above, search for sourcetype=apache instead).
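From there you can start slicing the data by the metadata the driver attaches. As a hedged example, assuming the driver’s default inline JSON format, which wraps each log line in an event containing a tag field, a search like this would count events per container:

sourcetype=httpevent
| spath output=container path=tag
| stats count by container

The exact field names depend on the splunk-format option and your configuration, so adjust the spath path accordingly.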

Here’s a sample of a container log message logged by the Splunk logging driver:

Figure 3 and Figure 4: Sample container log messages as seen in Splunk

And there you have it. Container monitoring can bring additional complexity to your infrastructure, but it doesn’t have to bring complexity to your job. It’s that easy to configure Splunk to monitor your Docker containers on ECS and in your AWS infrastructure.

Thanks,
David Potes
AWS Solutions Architect

