F5 Agility Labs - Index¶
Welcome¶
Welcome to the Public Cloud Architectures I: Deploying F5 BIG-IP Virtual Edition in AWS lab at F5 Agility 2020.
The content contained here leverages a full DevOps CI/CD pipeline and is sourced from the GitHub repository at https://github.com/TonyMarfil/f5-agility-labs-f5-in-aws-1. Bugs and requests for enhancements can be submitted by opening an issue within the repository.
Getting Started¶
Your instructor will provide a URL where you can access your lab environment.
Note
All work for this lab can be performed exclusively from the Linux jumphost. No installation or interaction with your local system is required.
Connecting to the Lab Environment¶
Your instructor will provide directions on how to connect to the Ravello Portal.
Class - Public Cloud Architectures I: Deploying F5 BIG-IP Virtual Edition in AWS¶
This class covers the following topics:
- Deploying AWS environments with CloudFormation Templates and Terraform
- Service Discovery iApp for dynamically populating pool members using instance tags
- Cross Availability Zone HA with F5
- Autoscale WAF
- Logging to Cloudwatch
Infrastructure As Code¶
This lab will use HashiCorp Terraform and AWS CloudFormation templates to deploy two common F5 use cases:
- Cross Availability Zone High Availability
- Autoscale WAF
The CloudFormation templates used in this lab are hosted in the official F5 Github repository:
https://github.com/F5Networks/f5-aws-cloudformation
Connecting to the Lab¶
Important
Your student account and shortUrl value will be announced at the start of the lab.
- For this lab, a Linux Remote Desktop jump host will be provided as a consistent starting point.
- Though the public cloud environment runs on a shared AWS account, every student will build and work in a dedicated AWS VPC.
- A convenient way to work through the lab is to split your screen in half: one side for the lab environment, the other side for the lab guide.
Lab Variables¶
The lab will make use of unique variables to provide access to the lab and isolate student environments.
Variable Name | Variable Value |
---|---|
shortUrl | Unique key that provides access to this lab (i.e. abc123) |
emailid | Account name for each student (i.e. user01@f5lab.com) |
Launch Remote Desktop Session to Linux¶

- Look for ubuntu1. Note the username / password. Click on the rdp link to download the rdp file, then open it to launch a Remote Desktop Session to the ubuntu1 client.
- Alternatively, you can copy and paste the ubuntu1 IP address into your Remote Desktop client to modify settings.
- Local Resources => Keyboard => Apply Windows key combinations: On the remote computer. This will allow you to quickly toggle (ALT + TAB) between windows inside the Remote Desktop Session.
- Login with username / password

SSH to the F5-Super-NetOps docker container¶
From the Linux desktop, click on the upper-left-hand corner “Activities” to reveal the application Dock.
Click to launch the terminal application.

From the terminal, invoke the ‘snops’ command alias to ssh to the f5-super-netops docker container, entering the password default when prompted. Then substitute user (su) to root, again with the password default.
snops
default
su -
default

Set Variables¶
Export your student account and short URL path variables.
Your student account will be used to create an AWS console login and provide unique names for infrastructure that you create in the shared AWS account.
The short URL path will be used to grant access to the shared AWS account both via the AWS API and as the password for the AWS web console. Replace the emailid and shortUrl values below with the student account name and short URL assigned to you at the start of the lab.
Attention
REPLACE THE EXAMPLE VALUES WITH THE VALUES PROVIDED TO YOU BY YOUR INSTRUCTOR.
Copy and paste the commands below to accomplish the steps above.
export emailid=user55@f5lab.com
export shortUrl=abc123
printenv

The printenv command will echo all of your environment variables. Look for emailid and shortUrl and confirm the exported variables are correct.
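To narrow the output to just those two variables, a quick filter works (a minimal sketch; printenv and grep are standard on the Linux jumphost):
printenv | grep -E 'emailid|shortUrl'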
Initialize your Lab Environment¶
This will create AWS credentials that you will use to access the shared AWS account.
You will:
- Change to your home directory.
- Clone the git repository for this lab.
- Change to the working directory.
- Run the start script.
Copy and paste the commands below to accomplish the steps above.
cd ~
git clone -b dev https://github.com/TonyMarfil/marfil-f5-terraform
cd ~/marfil-f5-terraform/
source start

Git clone completes successfully.

Attention
For a smooth ride, always invoke commands from inside the cloned git repository (marfil-f5-terraform). To check you're in the right place you can run the command pwd; the output should read /root/marfil-f5-terraform.
Launch Terraform¶
Now that we have created credentials to access the AWS account, we will use Terraform to deploy our lab environment.
Initialize terraform.
terraform init
Invoke terraform plan. This will output the changes that terraform will apply.
terraform plan
Terraform apply.
terraform apply


F5 AWS Lab Test application¶
Note the elb_dns_name value in terraform output. HTTP to this site from any browser to see the example lab application.
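If you prefer the command line, the same check can be done from the Super-NetOps terminal. This is a minimal sketch, assuming curl is available and elb_dns_name is defined in outputs.tf (the tr strips the quotes that newer terraform versions print around string outputs):
terraform output elb_dns_name
curl -sI http://$(terraform output elb_dns_name | tr -d '"')/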

What just happened?¶
This is the TL;DR version of the steps completed.
When you clone the git repository, you are pulling down a current version of the files you need to get started. These files are hosted on GitHub, the most popular online revision control service, and include:
- Onboarding scripts that create your AWS account and other prerequisites: f5-super-netops-install.sh, addUser.sh, export.sh.
- Terraform configuration files, a declarative, comprehensive representation of our entire application stack:
- main.tf - Every terraform configuration has a main.tf. This contains all of the AWS-specific (non-F5) environment configuration, including the web instances.
- f5-cloudformation.tf files - Terraform files that take the officially supported CloudFormation templates hosted in the official F5 GitHub repo (https://github.com/F5Networks/f5-aws-cloudformation) and fill in all of the prerequisite parameters so we don't have to do it manually.
- outputs.tf - Any variable in the outputs.tf file can be rendered to the console with ‘terraform output’ and is exposed to other command line tools.
- vars.tf - Variables for terraform.
- Handy utilities to help move the lab along with minimum fuss: lab-info, password-reset.
The start script takes care of all of the prerequisites for standing up an AWS environment (a rough sketch of the equivalent commands follows this list). Precisely, it:
- Installs all of the necessary software, including: terraform, the aws cli, and various other command line tools.
- Creates your AWS console login and api account and stores the keys locally for use by the AWS command line.
- Creates SSH keys for use by all of your EC2 instances: web servers and Big-IP virtual editions.
- Creates a self-signed SSL certificate for use in deploying https services.
- Sets the default region: us-east-1 (Virginia), ap-southeast-1 (Singapore), etc.
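For orientation only, the steps above correspond roughly to commands like the following. This is a hypothetical sketch, not the actual contents of the start script; the key values, region, and filenames are placeholders.
aws configure set aws_access_key_id AKIAEXAMPLEKEY          # placeholder access key
aws configure set aws_secret_access_key EXAMPLESECRETKEY    # placeholder secret key
aws configure set default.region us-east-1                  # default region used in this lab
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ""             # SSH key pair for the EC2 instances
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=lab.example" -keyout lab.key -out lab.crt   # self-signed certificate for HTTPS services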
The terraform files go into effect when you invoke terraform apply. This step makes use of all of the prerequisites from the step before to build the environment in AWS.
Exploring AWS¶
This lab will examine the AWS Lab Environment created previously.
Explore the F5 / AWS lab environment¶
Your instructor will share a view of the Big-IQ License Manager hosted on AWS. The class will see all of the instances dynamically licensed through Big-IQ.
When deploying to AWS you have flexible licensing options:
- Bring Your Own License (BYOL) - Can be transferred from one Virtual Edition environment to another (i.e. VMWare => AWS)
- Hourly - Launch an instance from the AWS self-service Marketplace portal and pay only for metered hourly use.
- Subscription - This is the option used in this lab. Every Big-IP launched will query the Big-IQ License Manager for a license. From Big-IQ we can revoke licenses as well.
- Enterprise License Agreement
Attention
Below is a snapshot of the Big-IQ License Manager dynamically licensing devices in AWS. Your instructor can show this to the class during a lab session.

Launch the Firefox browser. Click on the bookmark for the Amazon AWS Console link in the upper-left-hand corner. Login with emailid as the username and shortUrl as password.
Parameter | value |
---|---|
Account: | f5agility2018 |
User Name: | userxx@f5lab.com, change xx to your student number |
Password: | same as shortUrl / echo $shortUrl |

Attention
In the upper right-hand corner, ensure you are in the correct region. For example: N. Virginia region (us-east-1) is the default.

CloudFormation¶
Navigate to Services => Management Tools => CloudFormation. In the search field type your user account name (i.e. user99). You should see your CloudFormation deployment details. You launched two CloudFormation templates.

- ha-userxxf5labcom-vpc-xxxxxxxx - the Cross-Availability-Zone deployment, documented in the F5 GitHub repository: https://github.com/F5Networks/f5-aws-cloudformation/tree/master/supported/failover/across-net/via-api/3nic/existing-stack/bigiq

- waf-userXXf5labcom-vpc-xxxxxxxx - the Autoscale WAF deployment, documented in the F5 GitHub repository: https://github.com/F5Networks/f5-aws-cloudformation/tree/master/supported/autoscale/waf/via-lb/1nic/existing-stack/bigiq

- Click the Events tab. The F5 CloudFormation template records every successful or failed event here. Look for the final “CREATE_COMPLETE” at the top. This indicates all went well.

- Click on the Outputs tab. When CloudFormation deployments complete successfully, they can export key-value pairs you can use to integrate with other automation tools. For example, you can query these CloudFormation outputs to find out which region, availability zone, private IPs, and public IPs your F5 Big-IP Virtual Edition instance has been assigned (see the CLI sketch after this list).

- Click on the Resources tab. Here we see a map (resource type to unique id) of all the AWS resources that were deployed from the CloudFormation template.

- Click on the Parameters tab. We used terraform to stuff all of the necessary parameters into the CloudFormation template. Here you can see the CloudFormation parameter name and value provided.
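The same outputs can also be queried from the Super-NetOps terminal with the AWS CLI. This is a minimal sketch; replace the stack name with your own ha-userxx... stack name from the CloudFormation console:
aws cloudformation describe-stacks --stack-name ha-userxxf5labcom-vpc-xxxxxxxx --query 'Stacks[0].Outputs' --output table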

EC2¶
Navigate to Services => Compute => EC2 => INSTANCES => Instances. Enter your username in the search field (i.e. user99). The web application is hosted on web-az1.0 in one availability zone and web-az2.0 in another availability zone. Highlight web-az1.0.

- In the “Description” tab below, note the availability zone. Highlight web-az2.0 and do the same.

- Take a look at the tags big-IP1-ha… has been assigned. In public cloud deployments you can use tags (key-value pairs) to group your devices.

- Cloud-init. Version 13 of Big-IP supports cloud-init. Right click on BIGIP1 => Instance Settings => View/Change User Data. Cloud-init is the industry standard way to inject commands into an F5 cloud image to automate all aspects of the on-boarding process: https://cloud-init.io/.

Navigate to Services => Compute => EC2 => Key Pairs. Type your username in the search field (i.e. user99). You will see the ssh key that was created for you and uploaded to AWS. By default, F5 Big-IP VE appliances deployed to AWS do not have any default root or admin account access. You have to enable or create these accounts. Initially, you can only connect via ssh using your private key. From the Super-NetOps terminal, see if you can find the private key in your home directory.
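A couple of ways to look for the key from the shell (a minimal sketch; the exact filename and location depend on how the start script named the key):
ls -l ~/.ssh/
find ~ -maxdepth 2 \( -name '*.pem' -o -name 'id_rsa*' \) 2>/dev/null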

Navigate to Services => Compute => EC2 => LOAD BALANCING => Load Balancers. In the search filter enter your username. You should see two load balancers. One named tf-elb-* is your newly created AWS load balancer.

- Highlight the ‘Description’ tab. Note:
- Scheme: internet-facing
- Type: Classic

- Click the “Health Check” tab => [Edit health Check]. The classic load-balancer is limited to basic health checks.

- Click the “Listeners” tab => [Edit]. The classic load-balancer is limited to HTTP, HTTPS, TCP and SSL (no UDP).

Navigate to Services => Compute => EC2 => AUTO SCALING => Auto Scaling Groups. Highlight the “Activity History” tab. You can see that the autoscale WAF CloudFormation template created an auto scaling group. Read the Description and Cause.

- Click the “Scaling Policies” tab. Read through the polices to understand how the autoscale WAF deployment is programmed to both scale out during a surge and scale in when the surge subsides.

- Click the “Instances” tab. This is the single instance running the F5 WAF. Notice the instance is “Protected from: Scale in”. This means that AWS will guarantee a minimum of one F5 WAF instance is running at all times. If someone were to accidentally stop or terminate an instance, this policy would automatically trigger the creation of a new one.

VPC¶
Navigate to Services => Networking & Content Delivery => VPC. Click on VPCs. Enter your username in the search filter (i.e. user99). This is the Virtual Private Cloud (VPC) that has been dedicated to your lab environment. Select the Summary tab. You can see the IPv4 CIDR assigned is 10.0.0.0/16. Your on-premises datacenter has been assigned 10.1.0.0/16 so it does not conflict.

Github¶
- Fully supported F5 Networks Solutions are hosted in the official F5 Networks GitHub repository: https://github.com/f5networks
- We are running the lab from the F5 Super-NetOps container: https://github.com/f5devcentral/f5-super-netops-container
- AWS CloudFormation templates: https://github.com/F5Networks/f5-aws-cloudformation
- Native template formats are also available for Microsoft Azure (arm templates): https://github.com/F5Networks/f5-azure-arm-templates
- Native template formats are also available for Google Cloud Platform (gdm templates): https://github.com/F5Networks/f5-google-gdm-templates

Explore the F5 Big-IP Virtual Editions Deployed¶
In this lab we’ll take a close look at the Big-IP Virtual Editions deployed.
Explore the F5 Big-IP Virtual Editions Deployed¶
From the Super-NetOps terminal, run the handy lab-info utility. Confirm that “MCPD is up, System Ready” for all three of your instances.
lab-info

Attention
Do not attempt to reset the Big-IP password until MCPD is up, System Ready.
Initially, you can only login to an F5 Big-IP VE in AWS via SSH using an SSH key. You will have to enable admin and root password access. Invoke the reset-password utility with the IP address of each of your Big-IP VE’s as the argument. REPLACE THE x.x.x.x PLACEHOLDER WITH THE MANAGEMENT IP ADDRESSES OF YOUR THREE F5 BIG-IP VE’S. This will enable the admin account on all three of your Big-IP’s and change the password to the value of the shortUrl.
reset-password x.x.x.x
reset-password y.y.y.y
reset-password z.z.z.z
Run terraform output and note the value of elb_dns_name.
terraform output
Open a new tab in the Firefox browser. HTTP to elb_dns_name. Confirm the sample application is up.

Open a new tab in the Firefox browser. HTTPS to the MGMT URL of the BIG-IP Autoscale Instance. Don’t miss that the management port is :8443!
lab-info
Attention
This lab makes use of insecure self-signed certificates. Bypass the warnings by clicking on “Confirm Security Exception”.

Login with Username: admin Password: value of shortUrl.


Main => System => Resource Provisioning. Note an F5 WAF is provisioned for both LTM and ASM.

Main => Security => Application Security => Policies List. A starter “linux-low” policy has been deployed.

Click on “Learning and Blocking” settings to see exactly what a “linux-low” policy consists of. This starter policy is oftentimes imported into Big-IQ for central management.

Local Traffic => Virtual Server => Properties. A virtual server with a “catch-all” listener of 0.0.0.0/0 has been deployed.

The “linux-low” security policy is attached to this virtual server.

From the Super-NetOps terminal run “lab-info” and copy the value for WAF ELB -> URL. Open a new browser tab and HTTPS to the WAF ELB URL. Your sample application is protected behind an F5 WAF.


Login to either Big-IP1 or Big-IP2. Main => iApps => Application Services. The Cross-AZ HA Big-IP has been deployed with the F5 AWS HA iApp.

Extending and Securing your Cloud¶
This lab will use the Lab Environment created previously to explore other capabilities, including:
- Service Discovery
- Failover Across Availability Zones
We can now start configuring the Big-IPs to responsibly fulfill our part of the shared responsibility security model: https://aws.amazon.com/compliance/shared-responsibility-model/
Deploy the Service Discovery iApp on a BigIP Cluster across two Availability Zones¶
From the Super-NetOps terminal, run the handy lab-info utility. Copy the Big-IP1 MGMT IP.
lab-info
The Service Discovery iApp will automatically discover and populate nodes in the cloud based on tags. Open a new browser tab and HTTPS to the MGMT IP. Login to the Big-IP Configuration utility (Web UI).
- Username: admin
- Password: the value of shortUrl, which is unique to your lab.
Navigate to iApps => Application Services. Create a new iApp deployment:
- Name: service_discovery
- Template: choose f5.service_discovery from the dropdown.
Click [Finished]

Question | value |
---|---|
Name | service_discovery |
Template | f5.service_discovery |
Pool | |
What is the tag key on your cloud provider for the members of this pool? | findme |
What is the tag value on your cloud provider for the members of this pool? | web |
Do you want to create a new pool or use an existing one? | Create new pool… |
Application Health | |
Create a new health monitor or use an existing one? | http |
Finished


Local Traffic => Pools => track the newly created service_discovery_pool. Within 60 seconds it should light up green. The service_discovery iApp has discovered and auto-populated the service_discovery_pool with two web servers.
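You can reproduce the discovery the iApp performs with a tag-filtered EC2 query from the Super-NetOps terminal. This is a minimal sketch using the AWS CLI; the tag key and value match the iApp answers above:
aws ec2 describe-instances --filters "Name=tag:findme,Values=web" "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]' --output table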

Deploy an AWS High-Availability-aware virtual server across two Availability Zones¶
Login to the active Big-IP1 Configuration utility (Web UI). The “HA_Across_AZs” iApp will already be deployed in the Common partition.
Download the latest tcp iApp template from https://s3.amazonaws.com/f5-public-cloud/f5.tcp.v1.0.0rc2.tmpl.

iApps => Templates => Import. Import f5.tcp.v1.0.0rc2.tmpl to the primary Big-IP. The secondary Big-IP should pick up the configuration change automatically.



Deploy an iApp using the f5.tcp.v1.0.0rc2.tmpl template.
iApps => Application Services => Select the f5.tcp.v1.0.0rc2 template from the dropdown. Name: virtual_server_1.
Configure iApp: Select “Advanced” from “Template Selection”.

Traffic Group: UNCHECK “Inherit traffic group from current partition / path”
Question | value |
---|---|
Name: | virtual_server_1 |
Inherit traffic group from current partition / path | uncheck |
High Availability. What IP address do you want to use for the virtual server? | VIP IP of Big-IP1 |
What is the associated service port? | HTTP (80) |
What IP address do you wish to use for the TCP virtual server in the other data center or availability zone? | VIP IP of Big-IP2 |
Do you want to create a new pool or use an existing one? | service_discovery_pool |
From the Super-NetOps terminal, invoke terraform output and copy the value for Big-IP1 => VIP IP. Use this value in the iApp as explained in the chart above.

From the Super-NetOps terminal, invoke terraform output and copy the value for Big-IP2 => VIP IP. Use this value in the iApp as explained in the chart above.


The iApp will create two virtual servers on both Big-IP’s. The iApp deployment on Big-IP1 will automatically and immediately sync to Big-IP2.

From the Super-NetOps terminal, invoke terraform output and copy the value for the primary Big-IP’s Elastic IP. Open a browser tab and HTTP to this Elastic IP.

In order to enable request logging and apply a client SSL profile, let’s re-configure our TCP / Fast L4 virtual server to a Standard virtual server with an http profile applied.
iApps => Application Services => select the “virtual_server_1” iApp we just deployed.

Properties => uncheck/disable “Strict Updates”

Local Traffic => Virtual Servers => virtual_server1. Change only the values below and leave the rest as they are.
Question | value |
---|---|
Type | Standard |
Service Port | 443 / HTTPS |
HTTP Profile | http |
SSL Profile (Client) | clientssl |
[Update]


From the Super-NetOps terminal, invoke terraform output and copy the value for the primary Big-IP’s Elastic IP. Let’s test that the http profile and clientssl profile are working. Open a browser tab and HTTPS (different than before, when we accessed our example application via HTTP) to this Elastic IP.

Test Failover¶
From the Super-NetOps terminal, run the handy lab-info utility. Confirm that “MCPD is up, System Ready” for all three of your instances.
lab-info
From the HTTPS Configuration Utility (Web UI) of the active Big-IP device (Big-IP1 or Big-IP2): Device Management => Devices => [Force Offline]. Click [OK] to confirm.


From the Super-NetOps terminal, run the lab-info utility. Notice how the Elastic IP previously associated with Big-IP1 has now “floated over” and is associated with Big-IP2.
lab-info

HTTPS to the Elastic IP. We simulated a failover event and our sample application is still up. Because only the Big-IP has failed, not the whole Availability Zone, and the client is configured for persistence, the application is still served up from the same Availability Zone.

Now we’ll simulate an Availability Zone outage. From the HTTPS Configuration Utility (Web UI) of the active Big-IP device: Local Traffic => Pools => Members => Select the pool member in Availability Zone #1 (almost always the first pool member) and [Force Offline].

HTTPS to the Elastic IP. Hit refresh [F5] a few times to refresh the cache. Notice we are now connecting to the application in AZ#2.

Note
Traditional HA failover relies on Layer 2 connectivity and a heartbeat to trigger a fail-over event and move a ‘floating IP’ to a new active unit. There is no Layer 2 connectivity in the cloud across availability zones. The Big-IP will detect an availability zone outage or trouble with a Big-IP VE and the elastic IP will ‘float’ over to the new active device as you just saw.
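You can watch the same re-association from the Super-NetOps terminal. This is a minimal sketch; it lists every Elastic IP visible to your credentials, so look for the row whose PublicIp matches the Elastic IP reported by lab-info:
aws ec2 describe-addresses --query 'Addresses[].[PublicIp,InstanceId,NetworkInterfaceId]' --output table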

Logging to CloudWatch¶
F5 Virtual Editions support comprehensive request and security logging for compliance and troubleshooting using two AWS native features: S3 Buckets and CloudWatch. In this lab we’ll configure logging to CloudWatch.
LTM Request Logging to CloudWatch¶
From the Super-NetOps terminal, run the handy lab-info utility. Confirm that “MCPD is up, System Ready” for all three of your instances.
lab-info
From the AWS management console, navigate to Services => Management Tools => CloudWatch => Log Groups. In the search filter enter your username (i.e. user55). Terraform created a Log Group for you.

Click on your log group. Click on your log stream named “log-stream”. Notice the Message column has no messages.

Right-click and copy your log group name (i.e. user55labcom). Save in notepad or your preferred text editor / note taking method for later use.
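The same (still empty) log stream is also visible from the Super-NetOps terminal with the AWS CLI. This is a minimal sketch; substitute your own log group name:
aws logs describe-log-streams --log-group-name user55labcom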

For convenience working through the next few steps, split your screen into two halves: Super-NetOps terminal on the left and the Firefox or Chrome browser on the right. On a standard US/English Windows keyboard you can split the screen with <Windows Key + left arrow> and <Windows Key + right arrow>.
From your Super-NetOps terminal, there are multiple ways to see your AWS access keys. You can echo the environment variables:
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
…or you can cat the hidden ~/.aws/config file:
cat ~/.aws/config
Copy your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values. Save in notepad or your preferred text editor / note taking method for later use.
Create a new cloud_logger iApp. HTTPS to the Configuration Utility (Web UI) of Big-IP1 (assuming that is the ACTIVE device and not STANDBY).
iApps => Application Services => Name: cloudwatch. Template: f5.cloud_logger.v1.0.0. Click [Finished].


Question | value |
---|---|
Name | cloudwatch |
Template | f5.cloud_logger.v1.0.0 |
Which AWS region is the provider located in? | us-east-1 |
What is the access key you want to use for the API calls? | value of $AWS_ACCESS_KEY_ID |
What is the secret key you want to use for the API calls? | value of $AWS_SECRET_ACCESS_KEY |
What is the AWS CloudWatch Logs group name? | log group name i.e. user55labcom |
What is the AWS CloudWatch Logs group’s stream name? | log-stream |
Do you want to enable LTM Request logging? | Enable LTM request logging |

Click [Finished].

The logging components have been created!

HTTPS to the Configuration Utility (Web UI) of Big-IP2 (if that is the standby). Look in the upper left-hand corner. Confirm you are on the STANDBY. All of the iApp changes are kept in sync between active and standby devices.

HTTPS to the Configuration Utility (Web UI) of Big-IP1 (assuming that is the ACTIVE device and not STANDBY).
iApps => Application Services => virtual_server1.
Attention
Before completing the next few steps, DISABLE STRICT UPDATES for the f5.tcp.v1.0.0rc2 iApp named virtual_server1 in our example.
Local Traffic => Virtual Servers => virtual_server1_vs.10.0.1.x.
- Choose “Advanced” from the dropdown.
- Select SSL Profile(Client): clientssl
- Change HTTP Profile to “http”
- Request Logging Profile: cloudwatch_remote_logging
Click [Update].




Do the same for the second virtual server. Local Traffic => Virtual Servers => virtual_server1_vs.10.0.1.x.
- Choose “Advanced” from the dropdown.
- Select SSL Profile(Client): clientssl
- Change HTTP Profile to “http”
- Request Logging Profile: cloudwatch_remote_logging
Click [Update].



Run the lab-info command. Note the Elastic IP.
lab-info

HTTPS to the Elastic IP to test request logging. Refresh with [F5] key for 15 seconds to generate a modest amount of traffic.
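If you would rather generate the traffic from the Super-NetOps terminal, a small loop works. This is a minimal sketch; replace x.x.x.x with the Elastic IP from lab-info, and note -k is needed because of the self-signed certificate:
for i in $(seq 1 30); do curl -sk -o /dev/null https://x.x.x.x/; done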

Attention
Some lab testers reported an incompatibility issue with Mozilla Firefox on Linux and the AWS CloudWatch console. If Firefox doesn’t render the CloudWatch console, switch to Google Chrome for this part of the lab.
From the AWS Console, Services => Management Tools => CloudWatch => Log Groups. Select your log group and log-stream.

You will see the http request logs.

Expand a log entry to see more detail.

Copy the CLIENT_IP of a request and use this CLIENT_IP in the “Filter events” search filter. In production you would filter search results by attributes such as CLIENT-IP to home in on relevant logs.
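The equivalent filter is available from the CLI as well. This is a minimal sketch; substitute your own log group name and the CLIENT_IP you copied (quoting the IP inside the pattern makes CloudWatch treat it as a single term):
aws logs filter-log-events --log-group-name user55labcom --log-stream-names log-stream --filter-pattern '"10.0.x.x"'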


WAF HTTP Request and Security Logging to CloudWatch¶
HTTPS to the Configuration Utility (Web UI) of the BIG-IP Autoscale Instance: waf…
iApps => Application Services => waf-userxxf5labcom.

Properties => UNCHECK “Strict Updates”. [Update].

Create a new cloud_logger iApp. iApps => Application Services => Name: cloudwatch. Template: f5.cloud_logger.v1.0.0. Click [Finished].
Question | value |
---|---|
Name | cloudwatch |
Template | f5.cloud_logger.v1.0.0 |
Which AWS region is the provider located in? | us-east-1 |
What is the access key you want to use for the API calls? | value of $AWS_ACCESS_KEY_ID |
What is the secret key you want to use for the API calls? | value of $AWS_SECRET_ACCESS_KEY |
What is the AWS CloudWatch Logs group name? | log group name i.e. user55labcom |
What is the AWS CloudWatch Logs group’s stream name? | log-stream |
Do you want to enable ASM logging? | Enable ASM logging |
What ASM requests do you want to log? | Log all requests (verbose) |
Do you want to include ASM DOS logging? | Include DOS protection logging |
Do you want to enable LTM Request logging? | Enable LTM request logging |
What Request parameters do you want to send in the log? | leave defaults |
Click [Finished].

Local Traffic => Virtual Server => waf-userXXf5labcom_vs.

Change Request Logging Profile to cloudwatch_remote_logging.

Click [Update].

Local Traffic => Virtual Server => waf-userXXf5labcom_vs => Security => Policies.

Log Profile. Select cloudwatch_remote_logging. Click [Update].

From the Super-NetOps terminal, run the lab-info utility.
lab-info
HTTPS to the WAF ELB URL. Refresh the browser with <CTRL+F5> for 15 seconds to generate a modest amount of traffic.

Back in the CloudWatch console. Use the search term waf to see logs coming from your F5 WAF.

Autoscale WAF¶
Automatically scale out your Web Application Firewall to service a surge and scale in when surge subsides.
Autoscale WAF¶
HTTPS to the WAF ELB URL.

From the AWS console, navigate to Services => AUTO SCALING => Auto Scaling Groups. Filter on your username and select your waf-userxx… auto scaling group.
Select the ‘Instances’ tab below, and select your Instance ID (there should be only one). If your instance is “Protected from… Scale in” then it will always stay up regardless of scale up/down thresholds configured. It’s common to keep a single minimum WAF instance running at all times and scale the 2nd, 3rd, Nth WAF during surges.

Select the Scaling Policies tab. These policies were deployed via the CloudFormation template and can be changed via the CloudFormation template.
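The group and its policies can also be inspected from the Super-NetOps terminal. This is a minimal sketch; replace the group name with your own waf-userxx... auto scaling group name:
aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].[AutoScalingGroupName,MinSize,MaxSize,DesiredCapacity]' --output table
aws autoscaling describe-policies --auto-scaling-group-name waf-userxx...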

Login to the Configuration utility (Web UI) of the active BIG-IP Autoscale Instance at its MGMT IP on port 8443.
lab-info
In the Big-IP Configuration utility (Web UI) navigate to Security => Application Security => Security Policies => Active Policies. A “linux-low” policy was deployed via the CloudFormation template and is in Enforcement Mode: Blocking.

From the f5-super-netops container, let’s launch some traffic against the application behind our WAF and watch it autoscale to service the surge! Replace the https://waf-userxx… in the command below with the one in the output of lab-info and don’t miss that critical forward slash / at the end!
base64 /dev/urandom | head -c 3000 > payload
ab -t 120 -c 200 -T 'multipart/form-data; boundary=1234567890' -p payload https://waf-user11f5democom-xxxxxxxxx.us-east-1.elb.amazonaws.com/
Services => Compute => EC2 => INSTANCES => Instances. Filter on your username and after 60 seconds (the lowest configurable time threshold) hit refresh to see your 2nd autoscale WAF instance starting.

Clean Up Environment¶
The exciting promise of public cloud is not only to stand up application environments quickly, consistently and with minimum capex, but also the inverse: to tear down application environments quickly, cleanly and completely.
Clean up the lab environment¶
From the Super-NetOps terminal, clean up, then destroy the environment.
lab-cleanup
terraform destroy --force
Attention
You might need to run terraform destroy --force a second time. Watch the console output. Nothing serious: sometimes the Internet gateways take longer to delete than the time we have configured for terraform to timeout.



