Querying AWS Service Logs using Amazon Athena

The Amazon Web Services ecosystem is growing rapidly to meet market demands and new challenges. In its present state, there is virtually no cloud computing challenge you cannot solve with tools and services already available in AWS. Many of the available services are designed to be serverless, simplifying their use and integration further.

Amazon Athena is a good example of a serverless service that is invaluable to many users. At its core, Athena is a query service that lets you run standard SQL against your data without managing any infrastructure. Athena was originally designed to work with data stored in Amazon S3 buckets, which makes it a natural fit for querying AWS service logs from a variety of sources.

This article covers how to query logs from a variety of AWS services through Amazon Athena, finishing with a working example focused on Application Load Balancer logs.

From Logs to Data

To fully utilize Amazon Athena for querying service logs, we need to take a closer look at the fundamentals first. For starters, data queried by Athena needs to reside in S3 buckets, and most service logs can be configured to use S3 as their storage destination. This means you can easily query logs from services like AWS CloudTrail and Amazon EMR without complex setups.

Amazon Athena is also flexible enough to be optimized for specific queries. The service is already capable of completing queries within seconds, even when the data set is large, and basic performance tuning can boost its performance even further (more on that below).

Lastly, there are multiple libraries that can be used to define and automate the process. Python scripts running on AWS Glue ETL, for instance, let you make log processing part of a longer pipeline. This approach gives you more control over the jobs run on the compute side of the equation, and it is easy to integrate thanks to libraries such as Athena Glue Service Logs (AGSlogger).

Working with CloudTrail and CloudFront

Amazon Athena is the perfect tool to use for querying CloudTrail logs. You can query an entire set of logs by setting the log location to a folder (e.g., s3://MyLogFiles/AWSLogs/) or focus on specific parts of the data stored in a particular subfolder.

From the CloudTrail console, open Event history and choose Run advanced queries in Amazon Athena. Identify the S3 bucket that stores your CloudTrail logs, then choose Create table to begin leveraging Athena for querying service logs. Manual creation of the table is also possible.

Once a table is created, you can begin running standard SQL queries as needed. Statements such as SELECT * FROM cloudtraillogs work right away, and you can add WHERE clauses as needed. Fields in the log are identified automatically when the table is created, so columns such as eventname and requestparameters are available as filters, as the sketch below illustrates.
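As a minimal sketch of the kind of filter this enables, the query below assumes the table is named cloudtraillogs as above; the event name is just an illustrative value.

-- List recent console logins recorded by CloudTrail.
-- Assumes a table named cloudtraillogs created as described above.
SELECT eventtime, useridentity.username, sourceipaddress
FROM cloudtraillogs
WHERE eventname = 'ConsoleLogin'
ORDER BY eventtime DESC
LIMIT 10;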

A good way to improve the performance of your queries is to specify a LIMIT and to keep the table optimized. You can do the same with CloudFront logs, but you will have to create a table manually before you can start leveraging Athena for this purpose; a sketch of such a table follows.
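As a minimal sketch of a manually created CloudFront table, the DDL below assumes the standard tab-separated CloudFront access log format with its two header lines; the bucket path is a placeholder, and only the leading columns are declared (trailing fields in the log are simply ignored).

-- Illustrative CloudFront log table; adjust LOCATION to your own bucket.
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs (
    `date` date,
    time string,
    location string,
    bytes bigint,
    request_ip string,
    method string,
    host string,
    uri string,
    status int,
    referrer string,
    user_agent string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://your-cloudfront-logs-bucket/'
TBLPROPERTIES ('skip.header.line.count' = '2');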

Services and Optimizations

Aside from CloudTrail and CloudFront, there are other services whose logs you can query using Athena. AWS WAF logs, Network Load Balancer logs, Application Load Balancer logs (more on those below), and VPC Flow Logs can all be organized into tables and processed as structured data, as the flow log sketch below illustrates.
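As one example, here is a minimal, hedged sketch of a VPC Flow Logs table based on the default space-separated flow log format; the table name and LOCATION are placeholders.

-- Illustrative VPC Flow Logs table for the default (version 2) format.
CREATE EXTERNAL TABLE IF NOT EXISTS vpc_flow_logs (
    version int,
    account string,
    interfaceid string,
    sourceaddress string,
    destinationaddress string,
    sourceport int,
    destinationport int,
    protocol int,
    numpackets int,
    numbytes bigint,
    starttime int,
    endtime int,
    action string,
    logstatus string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
LOCATION 's3://your-flow-logs-bucket/AWSLogs/<ACCOUNT-ID>/vpcflowlogs/<REGION>/'
TBLPROPERTIES ('skip.header.line.count' = '1');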

As mentioned earlier in the article, you can also refine and automate the process with the help of libraries running on EC2 instances or other services; Python libraries are the most common choice for scheduled routines and programmatic processing. The AWS CLI also gives you access to Athena as a service, so ad hoc queries are easy to run too.

Naturally, you want your queries to be optimized, and there are a few things you can do (a combined sketch follows this list):

  • Configure your data using Hive partitioning to make chunks of data easier to process. Your table can take partitions into account too, which means the entire data structure will be optimized for faster queries. Partitioning alone can lower query runtime to under 10 seconds on large data sets (we’re talking 100 GB large).
  • Optimize your ORDER BY clauses and set a LIMIT for your query. A simple LIMIT of 10000, combined with the right ORDER BY clause, can speed up your queries by as much as 98%.
  • Joins in your queries matter too. The way you order the tables being joined in the FROM clause greatly affects how quickly the query completes; putting the larger table on the left side of the join generally helps. Adding GROUP BY can optimize the query further.
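Here is a minimal sketch that combines these ideas; it assumes a hypothetical table partitioned by a day string column, so both the table and column names are illustrative.

-- Illustrative only: assumes a table partitioned by a 'day' column.
-- The partition predicate prunes the data scanned, while ORDER BY plus
-- LIMIT caps the sorting work.
SELECT client_ip, count(*) AS requests
FROM alb_logs_partitioned
WHERE day = '2020/01/15'
GROUP BY client_ip
ORDER BY requests DESC
LIMIT 100;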

Querying Application Load Balancer (ALB) AWS Service Logs using Amazon Athena

As mentioned, you can use access logs to analyze and understand traffic patterns to and from your load balancing instances and backend applications, whatever type of load balancer you’re using. Elastic Load Balancing doesn’t enable access logging by default. To turn it on, you need to specify an Amazon S3 bucket, and all Application Load Balancer and Classic Load Balancer access logs will then be stored in that bucket. When you need to troubleshoot or analyze the performance of your load balancer, you can use Athena to analyze the access logs in S3 as outlined in the steps below.

(The steps for querying Classic Load Balancer logs are very similar, if that’s what you need instead.)

  • First, create a database and a table. To do this, run the following statement in the Athena Query Editor to create the database.
CREATE DATABASE alb_db;
  • Don’t forget to create the database in the same AWS Region as your S3 bucket. 
  • Next, create a table for the ALB logs in the existing database. Copy and paste the following DDL statement into the Athena console, modify the values in LOCATION, and then run the query.
LOCATION 's3://your-alb-logs-directory/AWSLogs/<ACCOUNT-ID>/elasticloadbalancing/<REGION>/'
  • The following DDL creates the table in the existing database. The RegexSerDe maps each space-delimited field of an ALB log entry to a column, and the trailing new_field column captures any fields added to the log format later.
CREATE EXTERNAL TABLE IF NOT EXISTS alb_logs (
            type string,
            time string,
            elb string,
            client_ip string,
            client_port int,
            target_ip string,
            target_port int,
            request_processing_time double,
            target_processing_time double,
            response_processing_time double,
            elb_status_code string,
            target_status_code string,
            received_bytes bigint,
            sent_bytes bigint,
            request_verb string,
            request_url string,
            request_proto string,
            user_agent string,
            ssl_cipher string,
            ssl_protocol string,
            target_group_arn string,
            trace_id string,
            domain_name string,
            chosen_cert_arn string,
            matched_rule_priority string,
            request_creation_time string,
            actions_executed string,
            redirect_url string,
            lambda_error_reason string,
            new_field string
            )
            ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
            WITH SERDEPROPERTIES (
            'serialization.format' = '1',
            'input.regex' = 
        '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*)[:-]([0-9]*) ([-.0-9]*) ([-.0-9]*) ([-.0-9]*) (|[-0-9]*) (-|[-0-9]*) ([-0-9]*) ([-0-9]*) \"([^ ]*) ([^ ]*) (- |[^ ]*)\" \"([^\"]*)\" ([A-Z0-9-]+) ([A-Za-z0-9.-]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\" \"([^\"]*)\" ([-.0-9]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\"($| \"[^ ]*\")(.*)')
            LOCATION 's3://your-alb-logs-directory/AWSLogs/<ACCOUNT-ID>/elasticloadbalancing/<REGION>/';

The image below shows how to create a table with SQL queries in an existing database.


We can now verify that our data is accessible from the table by previewing it, as per the image below.

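If you prefer to run the preview yourself, a minimal query does the same job; this assumes the table name used above.

-- Preview the first few rows to confirm the table parses the logs correctly.
SELECT * FROM alb_logs LIMIT 10;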

Example Queries for Application Load Balancer (ALB) Logs

There are a number of use cases and example scenarios for leveraging ALB log queries. We’ve identified a few for you below. 

  • Use ALB log queries to list all client IP addresses that have accessed the ALB, and list out how many times.
  • Use case: For analysis and troubleshooting.
  • Run the following query to set this up: 
SELECT client_ip, count(*) AS request_count
FROM alb_logs
GROUP BY client_ip
ORDER BY request_count DESC;
  • Use ALB log queries to list all targets that the ALB is routing traffic to and how many times the ALB has routed requests to each target, by percentage distribution.
  • Use case: To identify potential target traffic imbalances
  • Run the following query to set this up: 
SELECT target_ip, (count(target_ip) * 100.0 / (SELECT count(*) FROM alb_logs))
AS backend_traffic_percentage
FROM alb_logs
GROUP BY target_ip
ORDER BY backend_traffic_percentage DESC;
  • Use ALB log queries to list the times that a client request wasn’t routed because the listener rule forwarded the request to an empty target group (HTTP 503 error).
  • Use case: To troubleshoot HTTP 503 errors.
  • Run the following query to set this up: 
SELECT * FROM alb_logs
WHERE elb_status_code = '503'
LIMIT 10;
  • Use ALB log queries to list clients in descending order, by the amount of data (in megabytes) that each client sent in their requests to the ALB.
  • Use case: To analyze traffic distribution and patterns.
  • Run the following query to set this up: 
SELECT client_ip, sum(received_bytes) / 1000000.0 AS client_datareceived_megabytes
FROM alb_logs
GROUP BY client_ip
ORDER BY client_datareceived_megabytes DESC;
  • Use ALB log queries to list each time in a specified date range when the target processing time was more than 5 seconds.
  • Use case: To troubleshoot latency in a specified time frame.
  • Run the following query to set this up:
SELECT * FROM alb_logs
WHERE (parse_datetime(time, 'yyyy-MM-dd''T''HH:mm:ss.SSSSSS''Z''')
    BETWEEN parse_datetime('2019-12-06-00:00:00', 'yyyy-MM-dd-HH:mm:ss')
    AND parse_datetime('2019-12-06-02:00:00', 'yyyy-MM-dd-HH:mm:ss'))
AND (target_processing_time >= 5.0);

Using Amazon Athena to query service logs isn’t a new approach, but it is an approach that can be very useful when you have a complex AWS environment to manage. Don’t forget that you can automate other tasks based on queries too. There is no doubt that Amazon Athena will be an indispensable tool for cloud admins going forward.


Ibexlabs is an experienced DevOps & Managed Services provider and an AWS consulting partner. Our AWS Certified DevOps consultancy team evaluates your infrastructure and makes recommendations based on your individual business or personal requirements. Contact us today to set up a free consultation and discuss a custom-built solution tailored just for you.

Premkumar Kadile
