AWS Cloud Operations & Migrations Blog

Visualize AWS Service Catalog Product Usage in an AWS Organization with Amazon QuickSight

 

AWS Service Catalog is a widely used service that simplifies the management of tools, services, and resources in AWS accounts for organizations. This service empowers end users to provision products vetted by their organization in their environments with confidence in security and compliance. Portfolios are shared with AWS accounts in an AWS Organization, from which end users deploy only the approved AWS services (products) they need.

This post provides a logging and reporting solution for customers with a hub-and-spoke account setup, where the hub account is used to create products and portfolios that are then shared with and provisioned in spoke accounts. An enterprise using this type of setup may have many hundreds of spoke accounts across different Regions, in which different teams provision the products available to them. Since this activity is tracked only in the corresponding spoke accounts, a centralized monitoring and logging solution that gives the business a consolidated overview of this activity from the hub account becomes imperative for effective governance.

The following solution implements this and surfaces the KPIs the business needs to understand and track the products provisioned across all of the spoke accounts, by using:

  1. A business intelligence dashboard in Amazon QuickSight that shares the relevant KPIs.
  2. A query tool in Amazon Athena to pull specific details.

This lets the business understand product usage and make data-driven decisions on its cloud adoption roadmap and retrospectives.

Prerequisites

The following services are required:

AWS Organizations
AWS Service Catalog
Amazon QuickSight
Amazon DynamoDB
AWS Lambda
Amazon CloudWatch
AWS CloudFormation
Amazon Athena

Overview

The AWS Service Catalog reporting implementation tracks when AWS Service Catalog products are provisioned in each of the spoke accounts in a given Organization. Normally, provisioned products are only visible in each account’s AWS Service Catalog console. With this implementation, however, the provisioned products and their details are aggregated in the hub account. Once they’re aggregated and stored in a central DynamoDB table, the provisioned product details can be queried with Athena and visualized in QuickSight.

  • A user provisions, updates, or terminates a product in a spoke account.
  • An event rule in the spoke account captures the matching AWS CloudTrail event and forwards it to the hub account’s custom event bus.
    • If the event cannot be delivered to the hub account’s custom event bus, it is sent to the hub account’s Amazon SQS dead-letter queue (DLQ).
  • Both the custom event bus and the DLQ can trigger the event processor Lambda function.
  • The event processor Lambda checks the event payload to determine whether it’s a ProvisionProduct, UpdateProvisionedProduct, or TerminateProvisionedProduct event.
    • If the event is ProvisionProduct, then create a new row item in the audit DynamoDB table.
    • If the event is UpdateProvisionedProduct, then update the matching provisionedProductId item in the audit DynamoDB table.
    • If the event is TerminateProvisionedProduct, then delete the matching provisionedProductId item from the audit DynamoDB table.
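
As a concrete illustration of this routing, here is a minimal sketch of what the event processor Lambda’s handler could look like. The table name, the attributes written, and the payload access paths are assumptions for illustration, not the solution’s actual code.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("service-catalog-audit")  # hypothetical table name


def handler(event, context):
    """Route AWS Service Catalog CloudTrail events to the audit table (illustrative)."""
    detail = event.get("detail", {})
    event_name = detail.get("eventName")
    record = (detail.get("responseElements") or {}).get("recordDetail", {})
    key = {"provisionedProductId": record.get("provisionedProductId")}

    if event_name == "ProvisionProduct":
        # New provisioned product: create a row in the audit table.
        table.put_item(Item={**key,
                             "account": detail.get("recipientAccountId"),
                             "productId": record.get("productId"),
                             "productName": record.get("productName")})
    elif event_name == "UpdateProvisionedProduct":
        # Existing provisioned product: refresh the artifact details on the matching item.
        table.update_item(
            Key=key,
            UpdateExpression="SET provisioningArtifactId = :a, provisioningArtifactName = :n",
            ExpressionAttributeValues={
                ":a": record.get("provisioningArtifactId"),
                ":n": record.get("provisioningArtifactName"),
            },
        )
    elif event_name == "TerminateProvisionedProduct":
        # Provisioned product terminated: delete the matching item.
        table.delete_item(Key=key)
```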

Processing events

Events are processed by the event processor Lambda functions provisioned in every Region to which the hub StackSet is deployed. Each function writes to the central DynamoDB table located in the hub account’s primary Region. An Amazon SQS dead-letter queue is also present in each of these Regions in the hub account; it accepts events from spoke accounts in the same Region that could not be delivered to the hub account’s custom event bus.

Events are also archived in the hub account’s primary Region for future historical replay. Initiating a replay causes the event processor Lambda to process all of the events from the last 90 days. The Lambda will not create duplicate entries if an AWS Service Catalog product has already been inserted into the table.
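
One way to achieve that idempotency is a conditional write that only inserts an item when its provisionedProductId is not already in the table. The following is a sketch under that assumption; the table name is hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table name; the real name comes from the audit hub stack.
table = boto3.resource("dynamodb").Table("service-catalog-audit")


def put_if_new(item):
    """Insert an audit item only if its provisionedProductId isn't already stored."""
    try:
        table.put_item(
            Item=item,
            # Reject the write when an item with this key already exists.
            ConditionExpression="attribute_not_exists(provisionedProductId)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise  # Unexpected failure: surface it.
        # Item was already written by the original event, so the replay is a no-op.
```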

Amazon DynamoDB

The values input into the audit DynamoDB table are derived from the CloudTrail event emitted from the product origin account. The values from the recordDetail nested schema are parsed, along with some other pertinent values.

Currently parsed keys

Key                       Example
productId                 “prod-unvdcyl6aixxo”
account                   “123456796951”
createdTime               “Jul 23, 2021 12:32:33 AM”
pathId                    “lpv2-me43omhkcvwku”
productName               “Amazon EC2 Linux”
provisionedProductId      “pp-nehfvaqsy3ikk”
provisionedProductName    “Amazon_EC2_Linux-07230016”
provisionedProductType    “CFN_STACK”
provisioningArtifactId    “pa-7aaaieidrzllo”
provisioningArtifactName  “v1.0”
recordErrors              []
recordId                  “rec-x2p3curxfi3u4”
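
For illustration, the following sketch shows how these keys could be assembled into a single DynamoDB item, using the example values above; the helper function and the exact payload paths are assumptions rather than the solution’s actual parsing code.

```python
def build_audit_item(detail):
    """Build the DynamoDB item from a CloudTrail event's recordDetail (illustrative)."""
    record = detail["responseElements"]["recordDetail"]
    return {
        "provisionedProductId": record["provisionedProductId"],          # "pp-nehfvaqsy3ikk"
        "provisionedProductName": record["provisionedProductName"],      # "Amazon_EC2_Linux-07230016"
        "provisionedProductType": record["provisionedProductType"],      # "CFN_STACK"
        "productId": record["productId"],                                # "prod-unvdcyl6aixxo"
        "productName": record["productName"],                            # "Amazon EC2 Linux"
        "provisioningArtifactId": record["provisioningArtifactId"],      # "pa-7aaaieidrzllo"
        "provisioningArtifactName": record["provisioningArtifactName"],  # "v1.0"
        "pathId": record["pathId"],                                      # "lpv2-me43omhkcvwku"
        "recordId": record["recordId"],                                  # "rec-x2p3curxfi3u4"
        "recordErrors": record.get("recordErrors", []),                  # []
        "createdTime": str(record["createdTime"]),                       # "Jul 23, 2021 12:32:33 AM"
        "account": detail["recipientAccountId"],                         # "123456796951"
    }
```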

Deploy the solution

  • Designate an account that will act as the hub which aggregates all of the data and hosts the DynamoDB table. Refer to the Simplify sharing your AWS Service Catalog portfolios in an AWS Organizations setup post if you don’t currently have AWS Service Catalog in your organization.
  • Deploy the organization_infra_stack.yaml CloudFormation Stack in the hub account’s primary Region. The template will ask for the following parameters:
    • OrganizationId: The Organization ID is unique to your organization. Retrieve this value from the console under Services, Management & Governance, Organizations, e.g., “o-f4sp1mk5g5”.
    • ResourceNamePrefix: Prefix for naming all of the resources created by this CloudFormation template, e.g., “service-catalog”. You may leave the default value.
  • Create a zip package of the service_catalog_audit.py file found inside the lambda/service_catalog_audit/ directory, and name the zip package “service_catalog_audit.zip”. Refer to the documentation on how to deploy Python Lambda functions with .zip file archives. Then, place the zip package in the lambda/service_catalog_audit/ directory.
  • Upload the Lambda directory to the newly created source-files-<account_id>-<region> Amazon Simple Storage Service (Amazon S3) bucket. Refer to the documentation on uploading objects to the S3 bucket.  After the upload to the bucket is complete, the prefix structure will be “lambda/service_catalog_audit/”.
  • Deploy the audit_hub_stack.yaml as a CloudFormation StackSet. Designate one account in your Organization as a hub account with a primary Region:
    • Choose self-service permissions.
    • Choose sc-stackset-parent-role as the admin role. This role was created by the organization_infra_stack.yaml CloudFormation Stack.
    • Type in sc-stackset-child-role as the execution role. This role was created by the organization_infra_stack.yaml CloudFormation Stack.
    • Choose audit_hub_stack.yaml as the template source.
    • The template will ask for the following parameters:
      • OrganizationId: The Organization ID is unique to your organization. Retrieve this value from the console under Services, Management & Governance, Organizations, e.g., “o-xxxxxxxxxx”.
      • ResourceNamePrefix: Prefix for naming all of the resources created by this CloudFormation template, e.g., “service-catalog”. You may leave the default value.
      • PrimaryRegion: Primary Region to deploy central audit resources, such as the DynamoDB table, the Amazon EventBridge event bus, and the Athena table e.g., “us-east-1”.
      • S3BucketName: Provide the name of the S3 bucket that has the Lambda deployment packages. This is the S3 bucket created by the organization_infra_stack.yaml CloudFormation stack, e.g., “source-files-<account_id>-<region>”.
      • S3KeyPrefix: Provide the directory of the S3 bucket that has the Lambda deployment packages, e.g., “lambda/service_catalog_audit/”. You may leave the default value.
    • Set deployment options:
      • Select deploy stacks in accounts.
        • Enter the AWS account ID of the desired hub account.
      • Specify the Regions where you’d like to deploy this stack:
        • Select the Region that you’ve input for PrimaryRegion parameter specified above. This is where the DynamoDB table, the EventBridge event bus, and the Athena table will reside.
        • Select any other Regions where users may provision AWS Service Catalog products. These Regions will have the Amazon SQS DLQ and the event processor Lambda function.
  • Deploy the audit_spoke_stack.yaml as a CloudFormation StackSet to all of the spoke accounts where AWS Service Catalog is utilized to provision products.
    • Choose Service-managed permissions.
    • The template will ask for the following parameters:
      • HubAccountId: The AWS account ID of the hub account created in the previous step.
      • PrimaryRegion: Primary Region where Hub central audit resources were deployed, such as the DynamoDB table, EventBridge event bus, and the Athena table, e.g., “us-east-1”.
      • ResourceNamePrefix: Prefix for naming all of the resources created by this CloudFormation template, e.g., “service-catalog”. You may leave the default value.
    • Deployment targets can either be your entire Organization or specific Organizational Units (OUs) where AWS Service Catalog products will be provisioned.
    • Select all of the Regions matching the hub account deployment from the previous step.
  • To test, create a product in a spoke account and verify that an item was inserted into the DynamoDB table found in the hub account’s primary Region.
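
If you prefer to check programmatically, a boto3 sketch like the following reads a few items back from the audit table; the table name and Region are placeholders for the values from your deployment.

```python
import boto3

# Placeholders: use the table name and primary Region from your hub deployment.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("service-catalog-audit")

# The audit table is small right after a first test, so a scan is fine for a smoke test.
resp = table.scan(Limit=10)
for item in resp.get("Items", []):
    print(item.get("provisionedProductId"), item.get("productName"), item.get("account"))
```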

Athena DynamoDB Connector

The Amazon Athena DynamoDB connector allows Athena to communicate with DynamoDB. To enable this connector, navigate to the AWS Serverless Application Repository in the hub account’s console and deploy a pre-built version of the connector.

Select the “AthenaDynamoDBConnector” application:

The audit hub stack in the hub account creates an Athena spill bucket that should be used when creating this connector. The name of the bucket can be found in the outputs of the stack.
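
If you’d rather retrieve the bucket name programmatically, a sketch like the following could read it from the stack’s outputs; the stack name and output key shown here are assumptions, so check your StackSet instance’s actual values.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical stack name: StackSet instances get generated names, so look yours up first.
stack = cfn.describe_stacks(StackName="audit-hub-stack")["Stacks"][0]
for output in stack.get("Outputs", []):
    if "Spill" in output["OutputKey"]:  # assumed output key containing the spill bucket name
        print(output["OutputValue"])
```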

In the console, navigate to Athena, and select “Data sources”. Then, create a new data source with DynamoDB as the source:

Name the data source “dynamo”, and select the Lambda function created by the “AthenaDynamoDBConnector” serverless application:

The data source will be created with the associated “default” database:

Now you can query the DynamoDB table with SQL in the Athena query editor:
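
For example, an aggregate query such as the following could also be run from code with boto3; the table name, column names, and results location are assumptions to adapt to your environment.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Assumed table and column names; the DynamoDB connector exposes attributes in lowercase.
query = """
SELECT productname, count(*) AS provisioned_count
FROM "dynamo"."default"."service-catalog-audit"
GROUP BY productname
"""

athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://YOUR-ATHENA-RESULTS-BUCKET/"},
)
```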

QuickSight

QuickSight will let us visualize the data stored in DynamoDB via the Athena connector deployed in the previous section.

Create aws-quicksight-s3-consumers-role-v0 role

In QuickSight, select the upper right drop-down menu, and select “Manage QuickSight”. On the “Security and permissions” page, under “QuickSight access to AWS services”, select “Manage”:

Deselect and reselect “Amazon Athena”, then select “Next”. When prompted, select “Lambda”, and select the DynamoDB connector Lambda function.

This creates the “aws-quicksight-s3-consumers-role-v0” role with the required permissions to invoke the DynamoDB connector Lambda function.

Create a dashboard

First, we must create a dataset in QuickSight that points to the Athena catalog. In the console, navigate to QuickSight, then select “Datasets” in the left menu pane. Select “New dataset” to create a new dataset, and select Athena:

Create a new data source pointing to the Athena workgroup:

Select the DynamoDB table created by the audit_hub_stack.yaml CloudFormation StackSet:

Either choose to import the data into SPICE, which provides faster performance with periodic data loads, or query your data directly, which returns up-to-date results at the expense of slower queries.

Create a new analysis with the newly created dataset:

Finally, create visualizations using the data from the dataset. Here are several examples:

Clean up

  1. Delete the audit_spoke_stack.yaml CloudFormation StackSet:
    1. First delete the stack instances from the stack set.
    2. Then delete the stack set.
  2. Delete the audit_hub_stack.yaml CloudFormation StackSet:
    1. Delete the contents of the S3 buckets created by this stack set.
    2. First delete the stack instances from the stack set.
    3. Then delete the stack set.
  3. Delete the QuickSight resources:
    1. Delete the dashboard.
    2. Delete the analysis.
    3. Delete the dataset.

Troubleshooting

EventBridge may fail to send events from the spoke accounts to the hub account’s custom event bus due to networking or access issues. In this case, the event is forwarded to the Amazon SQS dead-letter queue in the hub account. These events will eventually be processed by the event processor Lambda, and the corresponding SQS messages will be deleted.

The event processor Lambda may fail to process an event due to a change in the event schema or a transient issue. In this case, check the event processor Lambda function’s CloudWatch Logs to determine the root cause.

Conclusion

In this post, we created a mechanism to aggregate AWS Service Catalog product activity within an organization and present the information in a QuickSight dashboard. This information can be used to understand AWS Service Catalog portfolio usage and the adoption of AWS services, and to make decisions on developing new AWS Service Catalog products to enable self-service across your organization.

Authors:

Samruth Reddy

Samruth Reddy is a Sr. DevOps Consultant in AWS ProServe working on automation tooling, security and infrastructure implementations, and promoting DevOps methodologies and practices to his customers.

Siddhi Shah

Siddhi Shah joined AWS Professional Services as an Engagement Manager in the Shared Delivery Practice in December 2019, bringing six years of prior industry experience. Throughout her career, she has gained expertise in managing cloud operational changes, strategic management of cloud technology, agile methodology, project management, quality assurance, systems analysis, planning and control, databases, business analysis, and business intelligence. Siddhi is based out of San Diego, California, and outside of work, she enjoys hiking, dancing, and exploring different cities with her friends.

Gautam Nambiar

Gautam Nambiar is a DevOps Consultant with Amazon Web Services. He is particularly interested in architecting and building automated solutions, MLOps Pipelines, and creating reusable and secure DevOps best practice patterns. In his spare time, he likes playing and watching soccer.