Audit companion for the AWS PCI DSS Quick Start

If you’ve supported a Payment Card Industry Data Security Standard (PCI DSS) assessment as a Qualified Security Assessor (QSA), or as part of a technical team facing an assessment, it’s likely that you have spent a lot of time collecting and analyzing evidence against PCI DSS requirements. In this blog post, I show you how to use automation to reduce the pain of manual, repetitive evidence collection, so that you can focus on the compliance architecture rather than on gathering evidence.

Organizations involved in payment card processing—including merchants, processors, acquirers, issuers, and service providers—are required to comply with PCI DSS. Compliance is validated annually through an assessment, the goal of which is to evaluate the organization’s security policies, procedures, and resource configurations against the applicable PCI DSS controls.

The PCI DSS assessment is based on the state of the system at a point in time, and the assessor is required to collect specific evidence for each requirement as specified in the testing procedure. Evidence collection has been one of the tedious aspects of PCI DSS assessments—for both the team collecting the evidence and the assessor evaluating it—for the following reasons:

    • The technical team must describe how the implemented security controls meet each requirement and provide supporting evidence, while the assessor must perform the specified testing procedure.

    • The team collecting the evidence has to spend a lot of time taking screenshots or exporting data related to system configurations. This is made more time consuming by the requirement that assessors observe a large enough sample of in-scope systems to validate that the controls being tested are uniform across all systems.

    • The assessor must review each item of evidence, which again is a manual, repetitive process. The assessor’s task is further complicated by the fact that screenshots cannot be standardized, so the assessor has to review each one carefully to ensure nothing is missed.

Additional challenges are introduced when the assessed environment is hosted on a cloud platform:

    • The elasticity of cloud infrastructure multiplies the scale of the assessment and the associated time and complexity. Because Amazon Web Services (AWS) services are exposed via APIs and operations teams are adopting infrastructure as code, multiple instances of identical infrastructure can be provisioned quickly by changing elements of the code. The universal population of in-scope systems is thus dynamic, which makes the sample selection process tricky. As an assessor, you need to verify that the sample selection portrays the overall environment. Sometimes it’s more practical to evaluate the base configuration that governs the elasticity of the infrastructure than to assess each instantiation. For example, if you are assessing Amazon Elastic Compute Cloud (Amazon EC2) instances configured in an Amazon EC2 Auto Scaling group, it would be better to evaluate the base Amazon Machine Image (AMI) used to instantiate the EC2 instances and the associated change management process, instead of evaluating each EC2 instance.

    • Because the assessment is based on the state of the system at a point in time, it doesn’t reflect the ephemeral nature of the configuration of a cloud infrastructure. As an assessor, you must understand that there might be times when a resource doesn’t have the required security controls, yet this might not indicate noncompliance. There might be responsive or automatic corrective controls that are event-based and run after a specific event, such as the creation or update of a resource. For example, an Amazon Simple Storage Service (Amazon S3) bucket might be created unencrypted, and an AWS Lambda responder function might detect the unencrypted bucket and add encryption to its configuration. If an evidence collection script runs between the time the S3 bucket was created and the time the Lambda responder function ran, the script would report noncompliance, which isn’t entirely correct: running the collection script again would show the same bucket to be encrypted and therefore in compliance. As an assessor, you have to consider the possibility of event-based security controls (see the example after this list).

    • The manual assessment of evidence doesn’t align with the DevOps and CI/CD model used to manage cloud infrastructure. Security control enforcement is mostly automated, as are infrastructure provisioning and deprovisioning. As mentioned earlier, most organizations use infrastructure as code and CI/CD to build and deploy their infrastructure, which is why manually reviewing each resource configuration doesn’t add much value. If an organization has used infrastructure as code, it’s possible that neither the team providing the evidence nor the assessor will have console access at all.
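
For example, an evidence script can probe a bucket’s encryption state directly. The bucket name below is a placeholder; run in the window before the responder function fires, the call fails, and run again afterward, it returns the encryption configuration:

# Fails with a ServerSideEncryptionConfigurationNotFound error while the bucket
# is unencrypted; succeeds once the responder function has added encryption.
aws s3api get-bucket-encryption --bucket <bucket-name>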

Let’s see how automation can be used to make this process smoother. Because the infrastructure being assessed is managed through code (infrastructure as code), it’s best to align the compliance assessment accordingly (compliance as code). AWS manages the control plane with a high degree of automation, which leads to a consistent user experience, and this consistency in turn allows customers to confidently rely on test results collected via their own automation. The consistency is backed by increased visibility into user and resource activity: AWS Management Console actions and service API calls are recorded by AWS CloudTrail and stored as event logs, providing a consistent dataset of who did what on the platform and when. This provides two benefits:

    • It helps the technical team save time and effort during evidence collection, year over year. This is usually time that the team has to take away from their primary job. It also fits well with how they are accustomed to managing their environment.

    • For the assessor, it makes the evidence easier to analyze, which makes it practical to use a much larger sample set to validate the consistency of the environment. This also makes the overall process less prone to human error. The assessor can spend more time focusing on the overall governance model, such as reviewing security controls within the CI/CD pipeline, which can prevent deployment of misconfigured resources.

Use AWS CLI for automation

Time for a demonstration of how you, as a member of a technical team, can achieve automation by using AWS Command Line Interface (AWS CLI) commands to gather evidence for a specific PCI DSS requirement. For this, you need an infrastructure and associated AWS resources to run the AWS CLI commands against. In this blog post, I use the AWS PCI DSS Quick Start infrastructure as the sample; it provides a variety of AWS resources and an environment that adheres to applicable PCI DSS requirements. The Quick Start is scoped to run in a single AWS account, so I first show you how to run AWS CLI commands assuming all resources are in a single account, and then how to run those commands against resources hosted in separate AWS accounts.

Prerequisites

    1. Deploy the AWS PCI DSS Quick Start. Follow the deployment guide to deploy the PCI DSS Quick Start infrastructure. This is your sample in-scope environment.

    2. Deploy a security read-only AWS Identity and Access Management (IAM) role in the account, to be used to gain access to AWS resources. The IAM role uses the AWS managed policy ReadOnlyAccess, which provides read-only access to all AWS services and resources. Alternatively, you can use a customer managed policy scoped to only the services and actions that you need.

    3. Install the AWS CLI. The AWS CLI is available in two versions. For the examples in this post, install AWS CLI version 2.

    4. Configure the AWS CLI to access your AWS environment via the read-only role, following the instructions for using an IAM role in the AWS CLI. You can also provide your assessor temporary access to your in-scope AWS environment so that they can independently gather evidence and validate requirements. You can limit the assessor’s access by using an IAM role with time or source IP restrictions.
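
As a minimal sketch, the role configuration in ~/.aws/config can look like the following; the profile name, account ID, and role name are placeholders rather than values from the Quick Start:

[profile pci-readonly]
role_arn = arn:aws:iam::111122223333:role/security-read-only
source_profile = default

You can then append --profile pci-readonly to any of the commands in this post.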

AWS CLI command building steps

Let’s look at a subset of PCI DSS requirements to see how you can use AWS CLI commands to gather evidence for those requirements. I start with one PCI DSS requirement and examine how to form the exact AWS CLI command based on the testing procedure. For the rest, I provide some sample controls so that you can see the flow:

    • PCI DSS v3.2.1 requirement 1.1.4 – This requires firewalls at each internet connection and between any perimeter networks and your internal network.

    • Description of AWS implementation – In AWS, segmentation is achieved by using security groups in an Amazon Virtual Private Cloud (Amazon VPC). Security groups are attached to the elastic network interfaces of EC2 instances and so represent a firewall at every network connection point. In the PCI DSS Quick Start architecture, the public subnets simulate a traditional perimeter network.

    • PCI DSS testing procedure – Review the network configuration to verify that a firewall is in place at each internet connection and between any perimeter networks and the internal network, according to the documented configuration standards and network diagrams.

    • Description of evidence – In the PCI DSS Quick Start reference implementation, “networks” are represented by Amazon VPC subnets and “firewalls” correspond to security groups. Hence the list of security groups, the rules within those groups, and the subnet configuration can be considered evidence that identifies the segmentation boundaries. Next, we will start building queries to gather that information.

You can use the describe-security-groups command, which describes a specified security group or all of your security groups in the AWS account. The AWS documentation for this command provides options for multiple input parameters so that you can build the command to look for a specific security group.

describe-security-groups
[--filters <value>]
[--group-ids <value>]
[--group-names <value>]
[--dry-run | --no-dry-run]
[--cli-input-json | --cli-input-yaml]
[--starting-token <value>]
[--page-size <value>]
[--max-items <value>]
[--generate-cli-skeleton <value>]
[--cli-auto-prompt <value>]

To review all security groups

For now, run the command without any input parameters so that you can review all of the security groups:

aws ec2 describe-security-groups --output json

In the preceding command, you explicitly set the output to JSON format so that you can export it to a file to review, preserve, or process later. The other available output formats are YAML, YAML stream, text, and table. If you don’t specify an output format, JSON is the default. Throughout the rest of this post, I only explicitly specify an output format when I need the output to be formatted as something other than JSON.
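
For example, to preserve the evidence in a dated file (the file name is only an illustration):

# Save the full security group configuration as point-in-time evidence
aws ec2 describe-security-groups --output json > security-groups-$(date +%F).json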

To specify the fields to return

The output of the preceding command includes all of the configuration details of all of the security groups in the AWS account and AWS Region where you ran the command. If you want to see only specific details, you can add the --query parameter, as shown in the following command, to display only specific fields. For example, if you want the output to include only the names and IDs of the security groups, you would update the command as follows:

aws ec2 describe-security-groups --query "SecurityGroups[].{Name:GroupName,ID:GroupId}"

Sample describe-security-groups output

The following is the output of the preceding CLI command, run against the PCI DSS Quick Start environment.

[
    {
        "Name": "Compliance-PCI-ManagementVpcTemplate-1OEU0UTNZQGJL-rSecurityGroupVpcNat-1EZMK95HCA31N",
        "ID": "sg-02007b846ef47c4de"
    },
    {
        "Name": "Compliance-PCI-ApplicationTemplate-1T8BJ35G1WMLK-rSecurityGroupAppInstance-12I54OAKCJUYZ",
        "ID": "sg-025c33917a73505e5"
    },
    {
        "Name": "Compliance-PCI-ApplicationTemplate-1T8BJ35G1WMLK-rSecurityGroupWeb-16DMMPQX2EMC8",
        "ID": "sg-030534d6dc5ec5c44"
    },
    {
        "Name": "Compliance-PCI-ProductionVpcTemplate-4ZPJEBD041NA-rSecurityGroupSSHFromProd-G08GLE3WBB9E",
        "ID": "sg-030af0754eaf1461b"
    },
    {
        "Name": "Compliance-PCI-ApplicationTemplate-1T8BJ35G1WMLK-rSecurityGroupWebInstance-6Z6RPYK3CVI0",
        "ID": "sg-03d08b47099c0bc25"
    },
    {
        "Name": "Compliance-PCI-ApplicationTemplate-1T8BJ35G1WMLK-rSecurityGroupApp-15L7J8MP4JRMQ",
        "ID": "sg-054563171d6a5e3e0"
    },
    {
        "Name": "RDS SecurityGroup",
        "ID": "sg-06b56cdd8e030a8cd"
    },
    {
        "Name": "Compliance-PCI-ManagementVpcTemplate-1OEU0UTNZQGJL-rSecurityGroupSSHFromMgmt-17XKKVB0TOWEH",
        "ID": "sg-078567fe7006c0878"
    },
    {
        "Name": "Compliance-PCI-ProductionVpcTemplate-4ZPJEBD041NA-rSecurityGroupMgmtBastion-1AB1R9YWTZL38",
        "ID": "sg-099c1044562e623db"
    },
    {
        "Name": "Compliance-PCI-ProductionVpcTemplate-4ZPJEBD041NA-rSecurityGroupVpcNat-VR63GTH85MQV",
        "ID": "sg-0a3dd80f226787d0d"
    },
    {
        "Name": "Compliance-PCI-ManagementVpcTemplate-1OEU0UTNZQGJL-rSecurityGroupBastion-8QSYS07WZP2P",
        "ID": "sg-0a7c561d4bffb1c2b"
    },
    {
        "Name": "Compliance-PCI-ApplicationTemplate-1T8BJ35G1WMLK-rSecurityGroupRDS-1T559TABMR55J",
        "ID": "sg-0a844344df4dbf9b4"
    }
]

To filter security groups by using a tag

There may be a scenario where not all of the resources in a particular AWS account are in scope for your PCI DSS assessment. You can use a tag to denote the resources that are in scope. For example, you can add a tag called PCI to a security group and set the tag value to True to mark the group as in scope. You can then add the --filters parameter to the describe-security-groups command to obtain the details of only the security groups that have that tag.

aws ec2 describe-security-groups --filters Name=tag:PCI,Values=True
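
You can also combine --filters with the --query parameter shown earlier, for example to list only the names and IDs of the in-scope groups:

aws ec2 describe-security-groups --filters Name=tag:PCI,Values=True --query "SecurityGroups[].{Name:GroupName,ID:GroupId}"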

Now that you have the security group details, the next step per the testing procedure for PCI DSS requirement 1.1.4 is to collect the Amazon VPC subnet details. For this, you can use the describe-subnets AWS CLI command, which describes one or more subnets in the AWS account. The input parameter options are:

describe-subnets
[--filters <value>]
[--subnet-ids <value>]
[--dry-run | --no-dry-run]
[--cli-input-json | --cli-input-yaml]
[--starting-token <value>]
[--page-size <value>]
[--max-items <value>]
[--generate-cli-skeleton <value>]
[--cli-auto-prompt <value>]

To review all subnets

Use the following describe-subnets command to obtain a list of all the subnets, with information such as the Availability Zone, the IP address space, the VPC each subnet resides in, and whether the subnet is public. As explained in the previous security group query building steps, you can use the --query parameter to filter the output. For improved readability, I display the output in tabular format.

aws ec2 describe-subnets --query 'Subnets[].{VPCID:VpcId,AvailabilityZone:AvailabilityZone,CIDR:CidrBlock,PublicSubnet:MapPublicIpOnLaunch}' --output table

The following is the output of the command in tabular format:

| DescribeSubnets |
+------------------+------------------+---------------+--------------------------+
| AvailabilityZone | CIDR | PublicSubnet | VPCID |
+------------------+------------------+---------------+--------------------------+
| us-east-2b | 10.0.12.0/24 | False | vpc-0794eb767622ebf4c |
| us-east-2a | 10.100.96.0/21 | False | vpc-0acd134bfc919d9b6 |
| us-east-2c | 172.31.32.0/20 | True | vpc-d6cf66bd |
| us-east-2a | 10.0.1.0/24 | False | vpc-0794eb767622ebf4c |
| us-east-2a | 10.10.20.0/24 | False | vpc-07ec5a6781fba5e80 |
| us-east-2b | 10.100.20.0/24 | False | vpc-0acd134bfc919d9b6 |
| us-east-2a | 10.0.11.0/24 | False | vpc-0794eb767622ebf4c |
| us-east-2a | 172.31.0.0/20 | True | vpc-d6cf66bd |
| us-east-2b | 10.0.2.0/24 | False | vpc-0794eb767622ebf4c |
| us-east-2b | 10.10.2.0/24 | False | vpc-07ec5a6781fba5e80 |
| us-east-2b | 10.100.208.0/21 | False | vpc-0acd134bfc919d9b6 |
| us-east-2b | 10.10.30.0/24 | False | vpc-07ec5a6781fba5e80 |
| us-east-2b | 10.100.112.0/21 | False | vpc-0acd134bfc919d9b6 |
| us-east-2a | 10.100.192.0/21 | False | vpc-0acd134bfc919d9b6 |
| us-east-2a | 10.100.10.0/24 | False | vpc-0acd134bfc919d9b6 |
| us-east-2b | 172.31.16.0/20 | True | vpc-d6cf66bd |
| us-east-2a | 10.10.1.0/24 | False | vpc-07ec5a6781fba5e80 |
+------------------+------------------+---------------+--------------------------+

You now have all the commands you need to gather the relevant information and produce evidence for PCI DSS requirement 1.1.4, as per its testing procedure. Let’s now take a look at some other PCI DSS requirements so that you become comfortable with the query building process.

Other PCI DSS requirements and AWS CLI commands

In this section, we’ll review other requirements from PCI DSS v3.2.1 and the corresponding AWS CLI commands.

Requirement 1.1.5

Description of groups, roles, and responsibilities for management of network components.

Testing procedure

Interview personnel responsible for management of network components to confirm that roles and responsibilities are assigned as documented.

Description of evidence

The list of IAM resources, such as users, roles, and groups, and the policies attached to them identify permissions at a granular level. Assessors can review these policies and, with automation, identify who has access to manage network components and verify that the access matches the documented roles and responsibilities. Note that when policies conflict, IAM resolves them through its policy evaluation logic, in which an explicit deny overrides any allow.

AWS CLI v2 command

IAM users and user policy details

aws iam list-users --query 'Users[*].{UserName:UserName,CreateDate:CreateDate}' --output text

aws iam list-user-policies --user-name <user-name>

aws iam get-user-policy --user-name <user-name> --policy-name <policy-name>

IAM groups and policy details

aws iam list-groups --query 'Groups[*].{GroupName:GroupName,CreateDate:CreateDate}' --output text

aws iam list-group-policies --group-name <group-name>

aws iam get-group-policy --group-name <group-name> --policy-name <policy-name>

IAM roles and policy details

aws iam list-roles --query 'Roles[*].{RoleName:RoleName,CreateDate:CreateDate}' --output text

aws iam list-role-policies --role-name <role-name>

aws iam get-role-policy --role-name <role-name> --policy-name <policy-name>
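
Running the per-identity commands by hand for every user, group, and role doesn’t scale. The following loop is a sketch that enumerates the inline policies of every IAM user; the same pattern works for groups and roles:

# List the inline policies attached to each IAM user in the account
for user in $(aws iam list-users --query 'Users[].UserName' --output text); do
  echo "Inline policies for ${user}:"
  aws iam list-user-policies --user-name "${user}" --output text
done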

Requirement 1.2.2

Secure and synchronize router configuration files.

Testing procedure

Examine router configuration files to verify they are secured from unauthorized access.

Description of evidence

The AWS CloudFormation template for this architecture is retained in an S3 bucket. The template can be retrieved using AWS CLI and compared to the current route table to validate that they match. You can also use drift detection on the AWS CloudFormation template to identify if there have been any unexpected changes.

AWS CLI v2 command

aws ec2 describe-route-tables --output table
aws cloudformation get-template --stack-name <stack-name> --output text
aws cloudformation detect-stack-drift --stack-name <stack-name>
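
Note that drift detection runs asynchronously: detect-stack-drift returns a detection ID, which you then pass to a status call. A sketch, with the stack name as a placeholder:

# Start drift detection and capture the detection ID
DETECTION_ID=$(aws cloudformation detect-stack-drift --stack-name <stack-name> --query 'StackDriftDetectionId' --output text)

# Poll the detection status; StackDriftStatus reports whether the stack has drifted
aws cloudformation describe-stack-drift-detection-status --stack-drift-detection-id "${DETECTION_ID}"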

Requirement 1.3.6

Place system components that store cardholder data—such as a database—in an internal network zone, segregated from the perimeter network and other untrusted networks.

Testing procedure

Examine firewall and router configurations to verify that system components that store cardholder data are on an internal network zone, segregated from the perimeter network and other untrusted networks.

Description of evidence

The route table entries and metadata from the EC2 instances and Amazon Relational Database Service (Amazon RDS) DB instances identify the network placement of the application servers and the database.

AWS CLI v2 command

aws ec2 describe-route-tables --output table
aws rds describe-db-instances --query 'DBInstances[].{DBName:DBName,VpcId:DBSubnetGroup.VpcId,Subnets:DBSubnetGroup.Subnets}'
aws ec2 describe-instances --query 'Reservations[*].{PrivateIP:Instances[0].PrivateIpAddress, VpcId:Instances[0].VpcId,InstanceId:Instances[0].InstanceId, SubnetId:Instances[0].SubnetId}' --output table
aws cloudformation get-template --stack-name <stack-name> --output text

Requirement 2.2.3

Implement additional security features for any required services, protocols, or daemons that are considered to be insecure.

Testing procedure

Inspect configuration settings to verify that security features are documented and implemented for all insecure services, daemons, or protocols.

Description of evidence

The security group rules will show port 443 as the incoming port for the load balancers. The load balancer configuration will show the use of TLS policies. The S3 bucket policy will also show the use of TLS connections.

AWS CLI v2 command

aws ec2 describe-security-groups --query 'SecurityGroups[].{VpcId:VpcId,SecurityGroupName:GroupName,IngressRule:IpPermissions,EgressRules:IpPermissionsEgress}'
aws elb describe-load-balancers --load-balancer-names <load-balancer-name> --query "LoadBalancerDescriptions[].{ActivePolicy:ListenerDescriptions}" --output table
aws s3api get-bucket-policy --bucket <bucket-name>
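
As a quick automated check, the following sketch tests whether the bucket policy references the aws:SecureTransport condition key at all; the bucket name is a placeholder:

# Extract the raw policy document and search it for the TLS-only condition key
aws s3api get-bucket-policy --bucket <bucket-name> --query Policy --output text | grep aws:SecureTransport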

Requirement 2.4

Maintain an inventory of system components that are in scope for PCI DSS.

Testing procedure

Examine system inventory to verify that a list of hardware and software components is maintained and includes a description of the function or use of each.

Description of evidence

The resources that were created with the PCI Quick Start template can be retrieved from the AWS CLI for an inventory of cardholder data environment (CDE) components.

AWS CLI v2 command

aws cloudformation describe-stack-resources --stack-name <stack-name> --query 'StackResources[].{StackName:StackName,ResourceType:ResourceType,ResourceStatus:ResourceStatus}' --output table
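
If the environment spans several stacks, you can enumerate the deployed stacks first instead of naming each one. In this sketch, the status filter keeps only successfully created or updated stacks:

# Inventory the resources of every successfully deployed stack
for stack in $(aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE --query 'StackSummaries[].StackName' --output text); do
  aws cloudformation describe-stack-resources --stack-name "${stack}" --query 'StackResources[].{StackName:StackName,ResourceType:ResourceType,ResourceStatus:ResourceStatus}' --output table
done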

Requirement 8.2.3

Passwords and passphrases must meet the following:

  • Require a length of at least seven characters.
  • Contain both numeric and alphabetic characters.

Alternatively, the passwords and passphrases must have complexity and strength at least equivalent to the parameters specified above.

Testing procedure

For a sample of system components, inspect system configuration settings to verify that user password parameters are set to require at least the following strength and complexity:

  • Require a length of at least seven characters.
  • Contain both numeric and alphabetic characters.

Description of evidence

The AWS account password policy can be retrieved from the AWS CLI.

AWS CLI v2 command

aws iam get-account-password-policy
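
To surface only the fields relevant to this requirement, you can add a --query parameter; the field names below come from the get-account-password-policy output:

aws iam get-account-password-policy --query 'PasswordPolicy.{MinimumLength:MinimumPasswordLength,RequireNumbers:RequireNumbers,RequireLowercase:RequireLowercaseCharacters,RequireUppercase:RequireUppercaseCharacters}'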

Requirement 8.5

Do not use group, shared, or generic IDs, passwords, or other authentication methods as follows:

  • Generic user IDs are disabled or removed.
  • Shared user IDs don’t exist for system administration and other critical functions.
  • Shared and generic user IDs aren’t used to administer any system components.

Testing procedure

For a sample of system components, examine user ID lists to verify the following:

  • Generic user IDs are disabled or removed.
  • Shared user IDs for system administration activities and other critical functions don’t exist.
  • Shared and generic user IDs aren’t used to administer any system components.

Description of evidence

IAM groups and roles are used in place of the generic root AWS user. Notification of root account activity is sent through an Amazon Simple Notification Service (Amazon SNS) topic. The Quick Start uses an Amazon CloudWatch alarm as a detective control against use of the generic root account. By reviewing the alarm history, it’s possible to validate whether the root account is being used regularly.

AWS CLI v2 command

aws cloudwatch describe-alarms
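
To drill into a specific alarm, such as the root account usage alarm created by the Quick Start, you can review its state changes; the alarm name is a placeholder:

# Show the state transitions recorded for the root-usage alarm
aws cloudwatch describe-alarm-history --alarm-name <alarm-name> --history-item-type StateUpdate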

Requirement 10.1

Implement audit trails to link all access to system components to each individual user.

Testing procedure

Verify, through observation and interviewing the system administrator, that:

  • Audit trails are enabled and active for system components.
  • Access to system components is linked to individual users.

Description of evidence

AWS CLI can be used to retrieve details of the CloudTrail logs, which are enabled by the Quick Start template and configured to capture API call information for all Regions and global services, such as IAM.

AWS CLI v2 command

aws cloudtrail describe-trails --query 'trailList[*].{Name:Name,IncludeGlobalServiceEvents:IncludeGlobalServiceEvents,LogFileValidationEnabled:LogFileValidationEnabled,IsMultiRegionTrail:IsMultiRegionTrail}' --output table
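
To confirm that a trail is not only configured but actively logging, you can also check its status; the trail name is a placeholder:

# The IsLogging field in the output confirms the trail is currently recording events
aws cloudtrail get-trail-status --name <trail-name>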

Requirement 10.7

Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis. The history can be available online, archived, or restorable from backup.

Testing procedure

Examine security policies and procedures to verify that they define the following:

  • Audit log retention policies.
  • Procedures for retaining audit logs for at least one year, with a minimum of three months immediately available online.

Description of evidence

An S3 bucket storing all logs in a central location can be configured with lifecycle policies that move logs to different storage classes according to the data retention policy defined by the organization. The AWS CLI can be used to retrieve the bucket lifecycle policy of the S3 buckets that store the central logs.

AWS CLI v2 command

aws s3api get-bucket-lifecycle-configuration --bucket <bucket-name>

Automation: next steps

Once you’re comfortable using scripts to interact programmatically with AWS services, you can design more complex automation scenarios by using an AWS software development kit (SDK). AWS SDKs let you access and manage AWS services with your preferred development language or platform. Instead of running evidence collection tasks manually through the AWS CLI, you can use the SDKs to build automation scripts that bundle multiple evidence collection tasks together. You can insert evaluation logic into your automation script, such as reporting only on security groups that have port 22 allowed and are attached to instances that reside in a public subnet. These scripts can be run in various ways, such as via Lambda functions or containerized applications, which can then be configured to run periodically or in response to specific events, such as configuration changes, to gather relevant assessment evidence.
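
To give a flavor of that evaluation logic, the following is a minimal sketch built from the same CLI commands used throughout this post; an SDK script would implement the same flow in your preferred language. The filter names are documented Amazon EC2 filters, and nothing here is specific to the Quick Start:

# Security groups that have an ingress rule scoped to port 22
SG_IDS=$(aws ec2 describe-security-groups \
    --filters Name=ip-permission.from-port,Values=22 Name=ip-permission.to-port,Values=22 \
    --query 'SecurityGroups[].GroupId' --output text)

# Subnets that assign public IP addresses to instances launched in them
PUBLIC_SUBNETS=$(aws ec2 describe-subnets \
    --query 'Subnets[?MapPublicIpOnLaunch==`true`].SubnetId' --output text)

# Report instances that combine both risk factors
for sg in ${SG_IDS}; do
  for subnet in ${PUBLIC_SUBNETS}; do
    aws ec2 describe-instances \
        --filters Name=instance.group-id,Values="${sg}" Name=subnet-id,Values="${subnet}" \
        --query 'Reservations[].Instances[].InstanceId' --output text
  done
done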

Additional considerations

Let’s talk about some additional items to consider if you’re going beyond the AWS PCI DSS Quick Start to an enterprise-scale payment environment. A typical PCI DSS environment is spread across more than one AWS account. AWS recommends that you host the CDE system components that directly handle cardholder data in one AWS account, and that you host the systems that have network connectivity to CDE systems, or that manage their security posture, in a separate AWS account. Establishing your best practice AWS environment has more information on AWS multi-account architecture recommendations.

A PCI DSS assessment is a point-in-time activity, but the cloud environment is fluid by nature. It can change due to explicit or implicit changes, such as resources automatically scaling or corrective security controls firing. You have to make sure that certain critical resources won’t change in ways that can negatively impact evidence collection. In particular, the IAM role used for scripted evidence collection must be secured from unwanted changes. You can use preventive controls such as service control policies (SCPs) in AWS Organizations to prevent unauthorized changes to the role and its policies by specifically denying write access on those resources (see the sketch after this paragraph). As an assessor, you might want to validate that there are controls in place to protect the IAM role used for scripted evidence collection. Additionally, you must make sure that the IAM role has appropriate permissions on the resources it gathers evidence from, or else the scripts will fail. Be aware that even with adequate IAM role permissions, a script can fail if other policies explicitly deny access to the resource; these can be SCPs applied to the organizational unit (OU) or resource policies attached to particular resources. The AWS documentation can help you better understand the IAM policy evaluation logic. You should monitor the relevant roles and policies to ensure that permission changes don’t negatively impact the automation around evidence collection. You can use AWS Config to monitor those roles and policies and then build appropriate notification or responsive behavior on detection of inappropriate changes.
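
A minimal sketch of such an SCP follows; the role name is a placeholder for your evidence-collection role, and you would attach the policy to the OU or account that hosts the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProtectEvidenceCollectionRole",
      "Effect": "Deny",
      "Action": [
        "iam:AttachRolePolicy",
        "iam:DetachRolePolicy",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:PutRolePolicy",
        "iam:UpdateRole",
        "iam:UpdateAssumeRolePolicy"
      ],
      "Resource": "arn:aws:iam::*:role/security-read-only"
    }
  ]
}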

As part of the evidence collection process, we have walked through how to collect information relevant to the testing procedures of the PCI DSS requirements detailed above. It is equally important to demonstrate that the evidence collection process covers all in-scope resources. A prerequisite for this is to correctly identify and maintain an ongoing inventory of PCI DSS in-scope resources. Traditionally, you had to maintain a static list of in-scope resources and manually update that list as the in-scope environment changed. This is very error prone, because it depends on manual checks to make sure the static inventory list tallies with the deployed in-scope environment. The difficulty is illustrated by the 2020 Verizon payment security report: of all the controls across the DSS, control 2.4 (maintain an inventory of system components that are in scope for PCI DSS) experienced the biggest increase in control gap, jumping from 5.6 percent in 2018 to 24.0 percent in 2019.

You can leverage automation to help reduce the probability of error and maintain an accurate inventory of in-scope resources, giving you a near-real-time representation of your deployed CDE. You can use resource tags and AWS Resource Groups to design your automation. You can apply a PCI DSS specific tag to all of your in-scope resources to act as an identifier; tags are words or phrases that act as metadata you can use to identify and organize your AWS resources. You can then use resource groups to group resources based on those tags or on the specific CloudFormation stacks that were used to instantiate the resources. Provided you deploy all of your resources via a combination of CloudFormation stacks, you can enforce tagging during creation. To learn more about tags and tagging strategies, refer to the AWS documentation.
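
For example, the Resource Groups Tagging API can return every resource in a Region that carries the in-scope tag used earlier in this post:

# List the ARNs of all resources tagged as in scope for PCI DSS
aws resourcegroupstaggingapi get-resources --tag-filters Key=PCI,Values=True --query 'ResourceTagMappingList[].ResourceARN'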

Conclusion

Audits and assessments are integral parts of ensuring organizations adhere to security best practices to protect their IT infrastructure, which in turn protects their business or client data. These activities rely on capturing and processing evidence to validate the efficiency of the security control lifecycle.

In this post, you’ve seen how you can use automation to make the process of evidence collection easier and less time consuming. Automation can be as basic as using AWS CLI commands to complete individual tasks. As you become more comfortable, you can develop complex scripts using AWS SDKs, implemented via Lambda functions. These functions can be run as needed, at scheduled intervals, or when there are resource configuration changes. Robust automation allows you, as a technical team supporting an assessment, to focus on your daily activities of supporting and securing the environment instead of spending the majority of your time gathering evidence. You can further leverage automation, through the use of resource tags, to maintain an accurate list of in-scope resources.

A positive side effect of automation is that you no longer have to stress about the multiple assessments and audits that you may have to support. If you are an assessor or auditor, you can encourage assessees to leverage automation for evidence collection. This helps make the assessment process easier in three ways. First, it allows you to focus more on validating controls that cannot be automated. Second, you can use a broader sample set to validate compliance with a particular DSS requirement, which provides assurance that the sample reviewed is an accurate representation of the entire in-scope system. Third, automation helps present the evidentiary information in a consistent, standard format, making communication between all stakeholders easier. As an assessor, you can avoid spending unproductive time wading through evidence provided in multiple formats and instead focus on extracting the right information from the data.

If you need assistance setting up your PCI compliance program on AWS, or want other architecting or implementation support on AWS, reach out to AWS Security Assurance Services or AWS Professional Services.