
How to add DNS filtering to your NAT instance with Squid

September 23, 2020: The Squid configuration file in this blog post and the associated YAML template have been updated.

September 4, 2019: We’ve updated this blog post, initially published on January 26, 2016. Major changes include: support of Amazon Linux 2, no longer having to compile Squid 3.5, and a high availability version of the solution across two Availability Zones.

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources on a virtual private network that you’ve defined. In an Amazon VPC, many people use network address translation (NAT) instances and NAT gateways to enable instances in a private subnet to initiate outbound traffic to the Internet, while preventing the instances from receiving inbound traffic initiated by someone on the Internet.

For security and compliance purposes, you might have to filter the requests initiated by these instances (also known as “egress filtering”). Using iptables rules, you can restrict outbound traffic on your NAT instance based on a predefined destination IP address or port. However, you may want to enforce more complex security policies, such as allowing requests to AWS endpoints only, or blocking fraudulent websites, which you can’t easily achieve by using iptables rules.

In this post, I discuss and give an example of how to use Squid, a leading open-source proxy, to implement a “transparent proxy” that can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet.

The solution architecture

In this section, I present the architecture of the high availability NAT solution and explain how to configure Squid to filter traffic transparently. Later in this post, I’ll provide instructions on how to implement and test the solution.

The following diagram illustrates how the components in this solution interact with each other. Squid Instance 1 intercepts HTTP/S requests sent by instances in Private Subnet 1, including the Testing Instance. Squid Instance 1 then initiates a connection with the destination host on behalf of the Testing Instance, which goes through the Internet gateway. This solution spans two Availability Zones, with Squid Instance 2 intercepting requests sent from the other Availability Zone. Note that you may adapt the solution to span additional Availability Zones.

Figure 1: The solution spans two Availability Zones

Intercepting and filtering traffic

In each Availability Zone, the route table associated with the private subnet sends the outbound traffic to the Squid instance (see Route Tables for a NAT Device). Squid intercepts the requested domain, then applies the following filtering policy:

  • For HTTP requests, Squid retrieves the requested host name from the Host header of the request and allows the request only if the host name matches one of the whitelisted domains.
  • For HTTPS requests, the HTTP payload is encrypted, so Squid instead retrieves the requested host name from the Server Name Indication (SNI) field of the TLS handshake, using the SslPeekAndSplice feature described below, and applies the same whitelist.

Note 1: Some older client-side software stacks do not support SNI. The following are the minimum versions of some important stacks and programming languages that support SNI: Python 2.7.9 and 3.2, Java 7 JSSE, wget 1.14, OpenSSL 0.9.8j, cURL 7.18.1.

Note 2: TLS 1.3 introduced an optional extension that allows the client to encrypt the SNI, which may prevent Squid from intercepting the requested domain.

The SslPeekAndSplice feature was introduced in Squid 3.5 and is implemented in the same Squid module as SslBump. To enable this module, Squid requires that you provide a certificate, though it will not be used to decode HTTPS traffic. The solution creates a certificate using OpenSSL:


mkdir /etc/squid/ssl
cd /etc/squid/ssl
openssl genrsa -out squid.key 4096
openssl req -new -key squid.key -out squid.csr -subj "/C=XX/ST=XX/L=squid/O=squid/CN=squid"
openssl x509 -req -days 3650 -in squid.csr -signkey squid.key -out squid.crt
cat squid.key squid.crt >> squid.pem

The following code shows the Squid configuration file. For HTTPS traffic, note the ssl_bump directives instructing Squid to “peek” (retrieve the SNI) and then “splice” (become a TCP tunnel without decoding) or “terminate” the connection depending on the requested host.


visible_hostname squid
cache deny all

# Log rotation and format
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %ssl::>sni %Sh/%<a %mt
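
The configuration continues with the intercept ports and the filtering rules. The following is a minimal sketch of what those directives can look like, based on the ports, whitelist path, and peek/splice/terminate behavior described in this post; the ACL names are illustrative assumptions and the exact file in the solution may differ:

# Handle HTTP requests intercepted on port 3129
http_port 3129 intercept

# Handle HTTPS requests intercepted on port 3130
https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
acl SSL_port port 443
http_access allow SSL_port

# Allow HTTP requests to whitelisted domains only
acl allowed_http_sites dstdomain "/etc/squid/whitelist.txt"
http_access allow allowed_http_sites

# Peek at the SNI, then splice whitelisted domains and terminate the rest
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist.txt"
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1 all
ssl_bump peek step2 allowed_https_sites
ssl_bump splice step3 allowed_https_sites
ssl_bump terminate step2 all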

The text file located at /etc/squid/whitelist.txt contains the list of whitelisted domains, with one domain per line. In this blog post, I’ll show you how to configure Squid to allow requests to *.amazonaws.com, which corresponds to AWS endpoints. Note that you can restrict access to a specific set of AWS services that you’ve defined (see Regions and Endpoints for a detailed list of endpoints), or you can set your own list of domains.
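
For example, a whitelist.txt that allows AWS endpoints can consist of a single line; in Squid syntax, the leading dot matches the domain and all of its subdomains:

.amazonaws.com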

Note: Another approach is to use VPC endpoints to privately connect your VPC to supported AWS services without requiring access over the Internet (see VPC Endpoints). Some supported AWS services allow you to create a policy that controls the use of the endpoint to access AWS resources (see VPC Endpoint Policies, and VPC Endpoints for a list of supported services).

You may have noticed that Squid listens on port 3129 for HTTP traffic and 3130 for HTTPS. Because Squid cannot directly listen on 80 and 443, you have to redirect the incoming requests from instances in the private subnets to the Squid ports using iptables. You do not have to enable IP forwarding or add any FORWARD rule, as you would do with a standard NAT instance.


sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130

The solution stores the files squid.conf and whitelist.txt in an Amazon Simple Storage Service (Amazon S3) bucket and runs the following script every minute on the Squid instances to download and update the Squid configuration from S3. This enables you to maintain the Squid configuration from a central location. Note that the script first validates the files with squid -k parse and only reloads the configuration with squid -k reconfigure if no error was found.


        cp /etc/squid/* /etc/squid/old/
        aws s3 sync s3://<configuration-bucket> /etc/squid
        squid -k parse && squid -k reconfigure || (cp /etc/squid/old/* /etc/squid/; exit 1)
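
To run this script every minute, you can schedule it with cron, for example with an entry in /etc/cron.d along these lines (the script path is an illustrative assumption):

* * * * * root /usr/local/bin/update-squid-conf.sh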

The solution then uses the CloudWatch agent on the Squid instances to collect and store Squid logs in Amazon CloudWatch Logs. The log group /filtering-nat-instance/cache.log contains the debug and error messages that Squid generates, and /filtering-nat-instance/access.log contains the access logs.

An access log record is a space-delimited string that has the following format:

<time> <response_time> <client_ip> <status_code> <size> <method> <url> <sni> <remote_host> <mime>

The following list describes the fields of an access log record:

  • time: Request time in seconds since epoch
  • response_time: Response time in milliseconds
  • client_ip: Client source IP address
  • status_code: Squid request status and HTTP response code sent to the client. For example, an HTTP request to an unallowed domain logs TCP_DENIED/403, and an HTTPS request to a whitelisted domain logs TCP_TUNNEL/200
  • size: Total size of the response sent to the client
  • method: Request method, like GET or POST
  • url: Request URL received from the client. Logged for HTTP requests only
  • sni: Domain name intercepted in the SNI. Logged for HTTPS requests only
  • remote_host: Squid hierarchy status and remote host IP address
  • mime: MIME content type. Logged for HTTP requests only

The following are a few examples of access log records:


1563718817.184 14 10.0.0.28 TCP_DENIED/403 3822 GET http://example.com/ - HIER_NONE/- text/html
1563718821.573 7 10.0.0.28 TAG_NONE/200 0 CONNECT 172.217.7.227:443 example.com HIER_NONE/- -
1563718872.923 32 10.0.0.28 TCP_TUNNEL/200 22927 CONNECT 52.216.187.19:443 calculator.s3.amazonaws.com ORIGINAL_DST/52.216.187.19 -

Designing a high availability solution

The Squid instances introduce a single point of failure for the private subnets. If a Squid instance fails, the instances in its associated private subnet cannot send outbound traffic anymore. The following diagram illustrates the architecture that I propose to address this situation within an Availability Zone.

Figure 2: The architecture to handle the failure of a Squid instance in an Availability Zone

Each Squid instance is launched in an Amazon EC2 Auto Scaling group that has a minimum size and a maximum size of one instance. A shell script is run at startup to configure the instances. That includes installing and configuring Squid (see Running Commands on Your Linux Instance at Launch).
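
As a simplified, hypothetical sketch of such a startup script on Amazon Linux 2, the main steps could look like the following (the actual script in the template also sets up the certificate, the periodic configuration sync, and the CloudWatch agent):

#!/bin/bash
# Install Squid, which is packaged on Amazon Linux 2, and enable it at boot
yum install -y squid
systemctl enable squid
# Download the Squid configuration and whitelist from the S3 bucket
aws s3 sync s3://<configuration-bucket> /etc/squid
# Redirect intercepted HTTP/S traffic to the Squid ports
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130
systemctl start squid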

The solution uses the CloudWatch agent and its procstat plugin to collect the CPU usage of the Squid process every 10 seconds. For each Squid instance, the solution creates a CloudWatch alarm that watches this custom metric and goes to an ALARM state when a data point is missing. This can happen, for example, when Squid crashes or the Squid instance fails. Note that for my use case, I consider watching the Squid process a sufficient way of determining the health status of a Squid instance, although it cannot detect situations where the Squid process is alive but unable to forward traffic. As a workaround, you can use an end-to-end monitoring approach, like using witness instances in the private subnets to send test requests at regular intervals and collect the custom metric.
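
For reference, here is a minimal sketch of the procstat section of a CloudWatch agent configuration that reports the CPU usage of the squid process every 10 seconds; the exact layout used by the solution’s template may differ:

cat <<'EOF' > /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
{
  "metrics": {
    "metrics_collected": {
      "procstat": [
        { "exe": "squid", "measurement": ["cpu_usage"], "metrics_collection_interval": 10 }
      ]
    }
  }
}
EOF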

When an alarm goes to ALARM state, CloudWatch sends a notification to an Amazon Simple Notification Service (Amazon SNS) topic, which then triggers an AWS Lambda function. The Lambda function marks the Squid instance as unhealthy in its Auto Scaling group, retrieves the list of healthy Squid instances based on the state of the other CloudWatch alarms, and updates the route tables that currently route traffic to the unhealthy Squid instance to instead route traffic to the first available healthy Squid instance. While the Auto Scaling group automatically replaces the unhealthy Squid instance, private instances can send outbound traffic through the Squid instance in the other Availability Zone.
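
Expressed as AWS CLI calls, the two key actions the Lambda function performs are equivalent to the following; the IDs are placeholders:

# Mark the failed Squid instance as unhealthy so that Auto Scaling replaces it
aws autoscaling set-instance-health --instance-id <unhealthy-squid-id> --health-status Unhealthy

# Point the default route of the impacted route table at a healthy Squid instance
aws ec2 replace-route --route-table-id <private-route-table-id> \
    --destination-cidr-block 0.0.0.0/0 --instance-id <healthy-squid-id>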

When the CloudWatch agent starts collecting the custom metric on the replacement Squid instance again, the alarm reverts to OK state. Similarly, CloudWatch sends a notification to the SNS topic, which then triggers the Lambda function. The Lambda function completes the lifecycle action (see Amazon EC2 Auto Scaling Lifecycle Hooks) to indicate that the replacement instance is ready to serve traffic, and updates the route table associated with the private subnet in the same Availability Zone to route traffic to the replacement instance.
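
Completing the lifecycle action maps to a single call, equivalent to the following; the names are placeholders:

aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE \
    --instance-id <replacement-squid-id> --lifecycle-hook-name <hook-name> \
    --auto-scaling-group-name <squid-asg-name>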

Implementing and testing the solution

Now that you understand the architecture behind this solution, you can follow the instructions in this section to implement and test the solution in your AWS account.

Implementing the solution

First, you’ll use AWS CloudFormation to provision the required resources. Choose the Launch Stack button below to open the CloudFormation console and create a stack from the template. Then, follow the on-screen instructions.

Select this image to open a link that starts building the CloudFormation stack

CloudFormation will create the following resources:

  • Two public subnets and two private subnets in the Amazon VPC.
  • Three route tables. The first route table is associated with the public subnets to make them publicly accessible. The other two route tables are associated with the private subnets.
  • An S3 bucket to store the Squid configuration files, and two Lambda-based custom resources to add the files squid.conf and whitelist.txt to the bucket.
  • An IAM role to grant the Squid instances permissions to read from the S3 bucket and use the CloudWatch agent.
  • A security group to allow HTTP and HTTPS traffic from instances in the private subnets.
  • A launch configuration to specify the template of the Squid instances. That includes commands to run at startup for automating the initial configuration.
  • Two Auto Scaling groups that use this launch configuration to launch the Squid instances.
  • A Lambda function to redirect the outbound traffic and recover a Squid instance when it fails.
  • Two CloudWatch alarms to watch the custom metric sent by the Squid instances and trigger the Lambda function when the health status of the Squid instances changes.
  • An EC2 instance in the first private subnet to test the solution, and an IAM role to grant this instance permissions to use the SSM agent. Session Manager, which I introduce in the next paragraph, uses this SSM agent (see Working with SSM Agent).

Testing the solution

After the stack creation has completed (it can take up to 10 minutes), connect to the Testing Instance using Session Manager, a capability of AWS Systems Manager that lets you manage instances through an interactive shell without the need to open an SSH port:

  1. Open the AWS Systems Manager console.
  2. In the navigation pane, choose Session Manager.
  3. Choose Start Session.
  4. For Target instances, choose the option button to the left of Testing Instance.
  5. Choose Start Session.

Note: Session Manager makes calls to several AWS endpoints (see Working with SSM Agent). If you prefer to restrict access to a defined set of AWS services, make sure you whitelist the associated domains.

After the connection is made, you can test the solution with the following commands. Only the last three requests should return a valid response, because Squid allows traffic to *.amazonaws.com only.


curl http://www.amazon.com
curl https://www.amazon.com
curl http://calculator.s3.amazonaws.com/index.html
curl https://calculator.s3.amazonaws.com/index.html
aws ec2 describe-regions --region us-east-1

To find the requests you just made in the access logs, here’s how to view the Squid logs in Amazon CloudWatch Logs:

  1. Open the Amazon CloudWatch console.
  2. In the navigation pane, choose Logs.
  3. For Log Groups, choose the log group /filtering-nat-instance/access.log.
  4. Choose Search Log Group to view and search the log records.

To test how the solution behaves when a Squid instance fails, you can manually terminate one of the Squid instances in the Amazon EC2 console. Then, watch the CloudWatch alarm change its state in the Amazon CloudWatch console, or watch the solution change the default route of the impacted route table in the Amazon VPC console.

You can now delete the CloudFormation stack to clean up the resources that were just created.

Discussion: Transparent or forward proxy?

The solution that I describe in this blog post is fully transparent for instances in the private subnets, which means that instances don’t have to be aware of the proxy and can make requests as if they were behind a standard NAT instance. An alternative solution is to deploy a forward proxy in your Amazon VPC and configure instances in private subnets to use it (see the post How to set up an outbound VPC proxy with domain whitelisting and content filtering for an example). In this section, I discuss some of the differences between the two solutions.

Supportability

A major drawback of forward proxies is that the proxy must be explicitly configured on every instance in the private subnets. For example, you can configure the HTTP_PROXY and HTTPS_PROXY environment variables on Linux instances, but some services or applications, like yum, need their own proxy configuration, or don’t support proxy usage at all. Note also that some AWS services and features, like Amazon EMR or Amazon SageMaker notebook instances, don’t support using a forward proxy at the time of this post. However, with TLS 1.3, a forward proxy is the only option to restrict outbound traffic if the SNI is encrypted.
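
For illustration, here is what per-instance configuration for a hypothetical forward proxy reachable at proxy.internal:3128 might look like on Linux:

# Environment variables honored by many, but not all, programs
export HTTP_PROXY=http://proxy.internal:3128
export HTTPS_PROXY=http://proxy.internal:3128
# yum, as noted above, needs its own proxy configuration
echo "proxy=http://proxy.internal:3128" | sudo tee -a /etc/yum.conf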

Scalability

Deploying a forward proxy on AWS usually consists of a load balancer distributing traffic to a set of proxy instances launched in an Auto Scaling group. Proxy instances can be launched or terminated dynamically depending on the demand (also called “horizontal scaling”). With transparent proxies, each route table can route traffic to a single instance at a time, and changing the type of that instance is the only way to increase or decrease the capacity (also called “vertical scaling”).

The solution I present in this post does not dynamically adapt the instance type of the Squid instances based on the demand. However, you might consider a mechanism in which the traffic from a private subnet is temporarily redirected through the other Availability Zone while the Squid instance is being relaunched by Auto Scaling with a smaller or larger instance type.

Mutualization

Deploying a centralized proxy solution and using it across multiple VPCs is a way of reducing cost and operational complexity.

With a forward proxy, instances in private subnets send IP packets to the proxy load balancer. Therefore, sharing a forward proxy across multiple VPCs only requires connectivity between the “instance VPCs” and a proxy VPC that has VPC Peering or equivalent capabilities.

With a transparent proxy, instances in private subnets send IP packets to the remote host. VPC Peering does not support transitive routing (see Unsupported VPC Peering Configurations) and cannot be used to share a transparent proxy across multiple VPCs. However, you can now use an AWS Transit Gateway that acts as a network transit hub to share a transparent proxy across multiple VPCs. I give an example in the next section.

Sharing the solution across multiple VPCs using AWS Transit Gateway

In this section, I give an example of how to share a transparent proxy across multiple VPCs using AWS Transit Gateway. The architecture is illustrated in the following diagram. For the sake of simplicity, the diagram does not include Availability Zones.

Figure 3: The architecture for a transparent proxy across multiple VPCs using AWS Transit Gateway

Here’s how instances in the private subnet of “VPC App” can make requests via the shared transparent proxy in “VPC Shared” (a sketch of the corresponding routes follows this list):

  1. When instances in VPC App make HTTP/S requests, the network packets they send have the public IP address of the remote host as the destination address. These packets are forwarded to the transit gateway, based on the route table associated with the private subnet.
  2. The transit gateway receives the packets and forwards them to VPC Shared, based on the default route of the transit gateway route table.
  3. Note that the transit gateway attachment resides in the transit gateway subnet. When the packets arrive in VPC Shared, they are forwarded to the Squid instance because the next destination is determined based on the route table associated with the transit gateway subnet.
  4. The Squid instance makes requests on behalf of the source instance (“Instances” in the schema). Then, it sends the response to the source instance. The packets that it emits have the IP address of the source instance as the destination address and are forwarded to the transit gateway according to the route table associated with the public subnet.
  5. The transit gateway receives the response packets and forwards them to VPC App.
  6. Finally, the response reaches the source instance.
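
As a sketch of the routing described above, the two default routes can be created with calls equivalent to the following; all IDs are placeholders:

# VPC App, private subnet route table: send outbound traffic to the transit gateway
aws ec2 create-route --route-table-id <app-private-route-table-id> \
    --destination-cidr-block 0.0.0.0/0 --transit-gateway-id <transit-gateway-id>

# VPC Shared, transit gateway subnet route table: send outbound traffic to the Squid instance
aws ec2 create-route --route-table-id <shared-tgw-subnet-route-table-id> \
    --destination-cidr-block 0.0.0.0/0 --instance-id <squid-instance-id>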

In a high availability deployment, you might have one transit gateway subnet per Availability Zone that sends traffic to the Squid instance that resides in the same Availability Zone, or to the Squid instance in another Availability Zone if the instance in the same Availability Zone fails.

You might use AWS Transit Gateway to implement a transparent proxy solution that scales horizontally. This enables you to add or remove proxy instances based on the demand, instead of changing the instance type. With this approach, you must deploy a fleet of proxy instances (launched by an Auto Scaling group, for example) and attach a VPN connection between each instance and the transit gateway. The proxy instances need to support ECMP (“Equal Cost Multipath routing”; see Transit Gateways) to equally spread the outbound traffic between instances. I don’t describe this alternative architecture further in this blog post.

Conclusion

In this post, I’ve shown how you can use Squid to implement a high availability solution that filters outgoing traffic to the Internet and helps meet your security and compliance needs, while being fully transparent for the back-end instances in your VPC. I’ve also discussed the key differences between transparent proxies and forward proxies. Finally, I gave an example of how to share a transparent proxy solution across multiple VPCs using AWS Transit Gateway.

If you have any questions or suggestions, please leave a comment below or on the Amazon VPC forum.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Nicolas Malaval

Nicolas is a Solutions Architect for Amazon Web Services. He lives in Paris and helps our healthcare customers in France adopt cloud technologies and innovate with AWS. Before that, he spent three years as a Consultant for AWS Professional Services, working with enterprise customers.