F5 Agility Labs - Index

Welcome

Welcome to the Public Cloud Architectures I: Deploying F5 BIG-IP Virtual Edition in AWS lab at F5 Agility 2020

The content contained here leverages a full DevOps CI/CD pipeline and is sourced from the GitHub repository at https://github.com/TonyMarfil/f5-agility-labs-f5-in-aws-1. Bug reports and enhancement requests can be made by opening an issue within the repository.

Getting Started

Your instructor will provide a URL where you can access your lab environment.

Note

All work for this lab can be performed exclusively from the Linux jumphost. No installation or interaction with your local system is required.

Connecting to the Lab Environment

Your instructor will provide directions on how to connect to the Ravello Portal.

Class - Public Cloud Architectures I: Deploying F5 BIG-IP Virtual Edition in AWS

This class covers the following topics:

  • Deploying AWS environments with CloudFormation Templates and Terraform
  • Service Discovery iApp for dynamically populating pool members using instance tags
  • Cross Availability Zone HA with F5
  • Autoscale WAF
  • Logging to Cloudwatch

Infrastructure As Code

This lab will use HashiCorp Terraform and AWS CloudFormation templates to deploy two common F5 use cases:

  • Cross Availability Zone High Availability
  • Autoscale WAF

The CloudFormation templates used in this lab are hosted in the official F5 Github repository:

https://github.com/F5Networks/f5-aws-cloudformation

Connecting to the Lab

Important

Your student account and shortUrl value will be announced at the start of the lab.

  • For this lab, a Linux Remote Desktop jump host will be provided as a consistent starting point.
  • Though the public cloud environment runs on a shared AWS account, every student will build and work in a dedicated AWS VPC.
  • A convenient way to work through the lab is to split your screen in half: one side for the lab environment, the other side for the lab guide.

Lab Variables

The lab will make use of unique variables to provide access to the lab and isolate student environments.

Variable Name   Variable Value
shortUrl        Unique key that provides access to this lab (e.g. abc123)
emailid         Account name for each student (e.g. user01@f5lab.com)
Launch Remote Desktop Session to Linux
_images/1_ravello_portal1.png
  • Look for ubuntu1. Note the username / password. Click on the rdp link and download the rdp file, then click on the rdp file to launch a Remote Desktop Session to your client.
  • Alternatively, you can copy and paste the ubuntu1 IP address into your Remote Desktop client to modify settings.
    • Local Resources => Keyboard => Apply Windows key combinations: On the remote computer. This will allow you to quickly toggle (ALT + TAB) between windows inside the Remote Desktop Session.
  • Login with username / password
_images/2_rdp_logon.png
SSH to the F5-Super-NetOps docker container

From the Linux desktop, click on the upper-left-hand corner “Activities” to reveal the application Dock.

Click to launch the terminal application.

_images/3_terminal.png

From the terminal, invoke the ‘snops’ command alias to SSH to the f5-super-netops docker container, then switch user (su) to root.

snops
# password: default
su -
# password: default
_images/4_snops_login.png
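
To confirm the su - worked and you are now root inside the container, a quick sanity check:

whoami

The output should read root.
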
Set Variables

Export your student account and short URL path variables.

Your student account will be used to create an AWS console login and provide unique names for infrastructure that you create in the shared AWS account.

The short URL path will be used to grant access to the shared AWS account both via the AWS API and as the password for the AWS web console. Replace the emailid and shortUrl values below with the student account name and short URL assigned to you at the start of the lab.

Attention

REPLACE THE EXAMPLE VALUES WITH THE VALUES PROVIDED TO YOU BY YOUR INSTRUCTOR.

Copy and paste the commands below to accomplish the steps above.

export emailid=user55@f5lab.com
export shortUrl=abc123
printenv
_images/4a_export.png

The printenv command echoes all of your environment variables. Look for emailid and shortUrl and confirm the exported values are correct.
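
To zero in on just the two lab variables instead of the full printenv dump, you can filter:

printenv | grep -E 'emailid|shortUrl'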

Initialize your Lab Environment

This will create AWS credentials that you will use to access the shared AWS account.

You will:

  • Change to your home directory.
  • Clone the git repository for this lab.
  • Change to the working directory.
  • Run the start script.

Copy and paste the commands below to accomplish the steps above.

cd ~
git clone -b dev https://github.com/TonyMarfil/marfil-f5-terraform
cd ~/marfil-f5-terraform/
source start
_images/5_git_clone.png

Git clone completes successfully.

_images/6_git_clone_complete.png

Attention

For a smooth ride, always invoke commands from inside the cloned git repository (marfil-f5-terraform). To check that you’re in the right place, run pwd; the output should read /root/marfil-f5-terraform
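
As a copy-and-paste check:

pwd
# expected output: /root/marfil-f5-terraform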

Launch Terraform

Now that we have created credentials to access the AWS account, we will use Terraform to deploy our lab environment.

Initialize terraform.

terraform init

Invoke terraform plan. This will output the changes that terraform will apply.

terraform plan

Invoke terraform apply. This will build the environment in AWS.

terraform apply
_images/7_terraform_apply.png _images/8_terraform_apply_complete.png
F5 AWS Lab Test application

Note the elb_dns_name value in the terraform output. HTTP to this address from any browser to see the example lab application.

_images/9_http_elb_site.png
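
If you’d rather check from the terminal first, a quick curl works too (a sketch; on Terraform 0.14 and later you may need terraform output -raw elb_dns_name to strip the surrounding quotes):

curl -s "http://$(terraform output elb_dns_name)" | head
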
What just happened?

This is the TL;DR version of the steps completed.

When you clone the git repository, you are pulling down a current version of the files you need to get started. These files are hosted on GitHub, the most popular online revision-control service, and include:

  • Onboarding scripts that create your AWS account and other prerequisites: f5-super-netops-install.sh, addUser.sh, export.sh.
  • Terraform configuration files, a declarative, comprehensive representation of our entire application stack:
    • main.tf - Every terraform configuration has a main.tf. This contains all of the AWS-specific (non-F5) environment configuration, including the web instances.
    • f5-cloudformation.tf files - Terraform files that take the officially supported CloudFormation templates hosted in the official F5 GitHub repo (https://github.com/F5Networks/f5-aws-cloudformation) and fill in all of the prerequisite parameters so we don’t have to do it manually.
    • outputs.tf - Any variable in outputs.tf can be rendered to the console with terraform output and is exposed to other command-line tools.
    • vars.tf - Variables for terraform.
  • Handy utilities to help move the lab along with minimum fuss: lab-info, password-reset.

The start script takes care of all of the prerequisites for standing up an AWS environment. Specifically, it:

  • Installs all of the necessary software, including: terraform, the aws cli, and various other command line tools.
  • Creates your AWS console login and api account and stores the keys locally for use by the AWS command line.
  • Creates SSH keys for use by all of your EC2 instances: web servers and Big-IP virtual editions.
  • Creates a self-signed SSL certificate for use in deploying https services.
  • Sets the default region: us-east-1 (Virginia), ap-southeast-1 (Singapore), etc.

The terraform files go into effect when you invoke terraform apply. This step makes use of all of the prerequisites from the step before to build the environment in AWS.
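
Assuming the start script configured the AWS CLI as described above, you can verify the generated credentials before moving on:

aws sts get-caller-identity

This should return the account, user id, and ARN tied to your student account.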

Exploring AWS

This lab will examine the AWS Lab Environment created previously.

Explore the F5 / AWS lab environment

Your instructor will share a view of the Big-IQ License Manager hosted on AWS. The class will see all of the instances dynamically licensed through Big-IQ.

When deploying to AWS you have flexible licensing options:

  • Bring Your Own License (BYOL) - Can be transferred from one Virtual Edition environment to another (e.g. VMware => AWS)
  • Hourly - Launch an instance from the AWS self-service Marketplace portal and pay only for metered hourly use.
  • Subscription - This is the option used in this lab. Every Big-IP launched will query the Big-IQ License Manager for a license. From Big-IQ we can revoke licenses as well.
  • Enterprise License Agreement

Attention

Below is a snapshot of the Big-IQ License Manager dynamically licensing devices in AWS. Your instructor can show this to the class during a lab session.

_images/0a_bigiq_licenses.png

Launch the Firefox browser. Click on the bookmark for the Amazon AWS Console link in the upper-left-hand corner. Login with emailid as the username and shortUrl as password.

Parameter   Value
Account     f5agility2018
User Name   userxx@f5lab.com (change xx to your student number)
Password    same as shortUrl (echo $shortUrl)
_images/1_aws_console_login.png

Attention

In the upper right-hand corner, ensure you are in the correct region. For example: N. Virginia region (us-east-1) is the default.

_images/2_region_check.png
CloudFormation

Navigate to Services => Management Tools => CloudFormation. In the search field, type your user account name (e.g. user99). You should see your CloudFormation deployment details. You launched two CloudFormation templates.

_images/3_cloudformation_stacks.png _images/cft_cross-az-ha.png _images/cft_autoscale_waf.png
  • Click the Events tab. The F5 CloudFormation template records every successful or failed event here. Look for the final “CREATE_COMPLETE” at the top. This indicates all went well.
_images/5_cft_events.png
  • Click on the Outputs tab. When CloudFormation deployments complete successfully, they can export key-value pairs that you can use to integrate other automation tools. For example, you can query these CloudFormation outputs to find out which region, availability zone, private IPs, and public IPs your F5 Big-IP Virtual Edition instance has been assigned (see the CLI sketch after this list).
_images/6_cft_outputs.png
  • Click on the Resources tab. Here we see a map (resource type to unique id) of all the AWS resources that were deployed from the CloudFormation template.
_images/7_cft_resources.png
  • Click the Events tab. The F5 CloudFormation template records every successful or failed event here. Look for the final “CREATE_COMPLETE” at the top. This indicates all went well.
_images/8_cft_events.png
  • Click on the Parameters tab. We used terraform to stuff all of the necessary parameters into the CloudFormation template. Here you can see the CloudFormation parameter name and value provided.
_images/9_cft_parameters.png
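
The same outputs are queryable from the AWS CLI, for example (a sketch; substitute your own stack name as shown in the console):

aws cloudformation describe-stacks --stack-name <your-stack-name> --query 'Stacks[0].Outputs'
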
EC2

Navigate to Services => Compute => EC2 => INSTANCES => Instances. Enter your username in the search field (e.g. user99). The web application is hosted on web-az1.0 in one availability zone and web-az2.0 in another availability zone. Highlight web-az1.0.

_images/10_ec2_instances.png
  • In the “Description” tab below, note the availability zone. Highlight web-az2.0 and do the same.
_images/11_ec2_instance_description.png
  • Take a look at the tags that big-IP1-ha… has been assigned. In public cloud deployments you can use tags (key-value pairs) to group your devices.
_images/12_ec2_instance_tags.png
  • Cloud-init. Version 13 of Big-IP supports cloud-init. Right-click on BIGIP1 => Instance Settings => View/Change User Data. Cloud-init is the industry-standard way to inject commands into an F5 cloud image to automate all aspects of the onboarding process: https://cloud-init.io/. (A CLI alternative is sketched below.)
_images/13_cloud_init.png
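
You can read the same user data from the CLI as well (a sketch with a hypothetical instance ID; the attribute comes back base64-encoded):

aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute userData --query 'UserData.Value' --output text | base64 -d | head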

Navigate to Services => Compute => EC2 => NETWORK & SECURITY => Key Pairs. Type your username in the search field (e.g. user99). You will see the SSH key that was created for you and uploaded to AWS. By default, F5 Big-IP VE appliances deployed to AWS do not have any default root or admin account access; you have to enable or create these accounts. Initially, you can only connect via SSH using your private key. From the Super-NetOps terminal, see if you can find the private key in your home directory.

_images/14_keypair.png
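
Two places to start the hunt (assuming the start script dropped the key in a conventional spot):

ls -l ~/.ssh
find ~ -name '*.pem' 2>/dev/null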

Navigate to Services => Compute => EC2 => LOAD BALANCING => Load Balancers. In the search filter enter your username. You should see two load balancers. One named tf-elb-* is your newly created AWS load balancer.

_images/15_elb_description.png
  • Highlight the ‘Description’ tab. Note:
    • Scheme: internet-facing
    • Type: Classic
_images/16_elb_instances.png
  • Click the “Health Check” tab => [Edit Health Check]. The classic load balancer is limited to basic health checks.
_images/17_elb_health_checks_limited.png
  • Click the “Listeners” tab => [Edit]. The classic load balancer is limited to HTTP, HTTPS, TCP, and SSL (no UDP).
_images/18_elb_listeners_limited.png

Navigate to Services => Compute => EC2 => AUTO SCALING => Auto Scaling Groups. Highlight the “Activity History” tab. You can see that the autoscale WAF CloudFormation template created an auto scaling group. Read the Description and Cause.

_images/19_asg_activity.png
  • Click the “Scaling Policies” tab. Read through the policies to understand how the autoscale WAF deployment is programmed to both scale out during a surge and scale in when the surge subsides.
_images/20_asg_scaling_policies.png
  • Click the “Instances” tab. This shows the single instance running the F5 WAF. Notice the instance is “Protected from: Scale in”. This means AWS will guarantee a minimum of one F5 WAF instance is running at all times. If someone were to accidentally stop or terminate the instance, this policy would automatically trigger the creation of a new one.
_images/21_asg_instances.png
VPC

Navigate to Services => Networking & Content Delivery => VPC. Click on VPCs. Enter your username in the search filter (e.g. user99). This is the Virtual Private Cloud (VPC) that has been dedicated to your lab environment. Select the Summary tab. You can see the IPv4 CIDR assigned is 10.0.0.0/16. Your on-premises datacenter has been assigned 10.1.0.0/16 so the two ranges do not conflict.

_images/22_vpc.png
Github
_images/f5-github.png

Explore the F5 Big-IP Virtual Editions Deployed

In this lab we’ll take a close look at the Big-IP Virtual Editions deployed.

Explore the F5 Big-IP Virtual Editions Deployed

From the Super-NetOps terminal, run the handy lab-info utility. Confirm that “MCPD is up, System Ready” for all three of your instances.

lab-info
_images/1_mcp_up.png

Attention

Do not attempt to reset the Big-IP password until MCPD is up, System Ready.

Initially, you can only log in to an F5 Big-IP VE in AWS via SSH using an SSH key. You will have to enable admin and root password access. Invoke the reset-password utility with the IP address of each of your Big-IP VEs as the argument. REPLACE THE x.x.x.x PLACEHOLDERS WITH THE MANAGEMENT IP ADDRESSES OF YOUR THREE F5 BIG-IP VEs. This will enable the admin account on all three of your Big-IPs and change the password to the value of the shortUrl.

reset-password x.x.x.x
reset-password y.y.y.y
reset-password z.z.z.z

Run terraform output and note the value of elb_dns_name.

terraform output

Open a new tab in the Firefox browser. HTTP to elb_dns_name. Confirm the sample application is up.

_images/9_http_elb_site1.png

Open a new tab in the Firefox browser. HTTPS to the MGMT URL of the BIG-IP Autoscale Instance. Don’t miss that the management port is :8443!

lab-info

Attention

This lab makes use of insecure self-signed certificates. Bypass the warnings by clicking on “Confirm Security Exception”.

_images/2_cert_warning.png

Login with Username: admin, Password: value of shortUrl.

_images/3_https_login.png _images/4_https_waf.png

Main => System => Resource Provisioning. Note the F5 WAF is provisioned for both LTM and ASM.

_images/5_waf_provisioned.png

Main => Security => Application Security => Policies List. A starter “linux-low” policy has been deployed.

_images/6_waf_policy_1.png

Click on “Learning and Blocking” settings to see exactly what the “linux-low” policy consists of. This starter policy is often imported into Big-IQ for central management.

_images/7_waf_policy_2.png

Local Traffic => Virtual Server => Properties. A virtual server with a “catch-all” listener of 0.0.0.0/0 has been deployed.

_images/8_waf_virtual_server.png

The “linux-low” security policy is attached to this virtual server.

_images/9_waf_policy_enabled.png

From the Super-NetOps terminal run “lab-info” and copy the value for WAF ELB -> URL. Open a new browser tab and HTTPS to the WAF ELB URL. Your sample application is protected behind an F5 WAF.

_images/11_terraform_waf_url.png _images/12_https_waf.png

Login to either Big-IP1 or Big-IP2. Main => iApps => Application Services. The Cross-AZ HA Big-IP has been deployed with the F5 AWS HA iApp.

_images/13_ha_iapp.png

Extending and Securing your Cloud

This lab will use the Lab Environment created previously to explore other capabilities including

  • Service Discovery
  • Failover Across Availability Zones

We can now start configuring the Big-IPs to responsibly fulfill our part of the shared responsibility security model: https://aws.amazon.com/compliance/shared-responsibility-model/

Deploy the Service Discovery iApp on a BigIP Cluster across two Availability Zones

From the Super-NetOps terminal, run the handy lab-info utility. Copy the Big-IP1 MGMT IP.

lab-info

The Service Discovery iApp will automatically discover and populate nodes in the cloud based on tags. Open a new browser tab and HTTPS to the MGMT IP. Login to the Big-IP Configuration utility (Web UI).

  • Username: admin
  • Password: the value of shortUrl (unique to your lab).

Navigate to iApps => Application Services. Create a new iApp deployment:

  • Name: service_discovery
  • Template: choose f5.service_discovery from the dropdown.

Click [Finished]

_images/1_sd_iapp.png
Question                                                                     Value
Name                                                                         service_discovery
Template                                                                     f5.service_discovery
Pool
What is the tag key on your cloud provider for the members of this pool?    findme
What is the tag value on your cloud provider for the members of this pool?  web
Do you want to create a new pool or use an existing one?                     Create new pool…
Application Health
Create a new health monitor or use an existing one?                          http

Click [Finished].

_images/2_sd_iapp.png _images/3_sd_iapp.png

Local Traffic => Pools => watch the newly created service_discovery_pool. Within 60 seconds it should light up green: the service_discovery iApp has discovered and auto-populated the service_discovery_pool with the two web servers.

_images/4_sd_iapp_pool.png
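
If you want to double-check from the command line, you can SSH to either Big-IP and list pools with tmsh (a sketch; iApp-created objects live in an application folder, so look for service_discovery_pool in the recursive listing):

tmsh list ltm pool recursive
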
Deploy an AWS High-Availability-aware virtual server across two Availability Zones

Login to the active Big-IP1 Configuration utility (Web UI). The “HA_Across_AZs” iApp will already be deployed in the Common partition.

Download the latest tcp iApp template from https://s3.amazonaws.com/f5-public-cloud/f5.tcp.v1.0.0rc2.tmpl.

_images/1_download_tcp_iapp.png

iApps => Templates => Import. Import f5.tcp.v1.0.0rc2.tmpl to the primary Big-IP. The secondary Big-IP should pick up the configuration change automatically.

_images/2_create_tcp_iapp.png _images/3_open_tcp_iapp.png _images/4_upload_tcp_iapp.png

Deploy an iApp using the f5.tcp.v1.0.0rc2.tmpl template.

iApps => Application Services => select the f5.tcp.v1.0.0rc2 template from the dropdown. Name: virtual_server_1.

Configure iApp: Select “Advanced” from “Template Selection”.

_images/5_tcp_iapp_dropdown.png

Traffic Group: UNCHECK “Inherit traffic group from current partition / path”

Question                                                                                                       Value
Name                                                                                                           virtual_server_1
Inherit traffic group from current partition / path                                                            uncheck
High Availability. What IP address do you want to use for the virtual server?                                  VIP IP of Big-IP1
What is the associated service port?                                                                           HTTP (80)
What IP address do you wish to use for the TCP virtual server in the other data center or availability zone?  VIP IP of Big-IP2
Do you want to create a new pool or use an existing one?                                                       service_discovery_pool

From the Super-NetOps terminal, invoke terraform output and copy the value for Big-IP1 => VIP IP. Use this value in the iApp as described in the table above.

_images/6_terraform_copy_vip1.png

From the Super-NetOps terminal, invoke terraform output and copy the value for Big-IP2 => VIP IP. Use this value in the iApp as described in the table above.

_images/7_terraform_copy_vip2.png _images/8_vs_finish.png

The iApp will create two virtual servers on both Big-IPs. The iApp deployment on Big-IP1 will automatically and immediately sync to Big-IP2.

_images/9_two_vs.png

From the Super-NetOps terminal, invoke terraform output and copy the value for the primary Big-IP’s Elastic IP. Open a browser tab and HTTP to this Elastic IP.

_images/10_http_vs.png

In order to enable request logging and apply a client SSL profile, let’s re-configure our TCP / Fast L4 virtual server to a Standard virtual server with an http profile applied.

iApps => Application Services => select the “virtual_server_1” iApp we just deployed.

_images/11_select_iapp.png

Properties => uncheck/disable “Strict Updates”

_images/12_disable_strict.png

Local Traffic => Virtual Servers => virtual_server1. Change only the values below and leave the rest as they are.

Setting               Value
Type                  Standard
Service Port          443 / HTTPS
HTTP Profile          http
SSL Profile (Client)  clientssl

[Update]

_images/13_vs_changes_1.png _images/14_vs_changes_2.png

From the Super-NetOps terminal, invoke terraform output and copy the value for the primary Big-IP’s Elastic IP. Let’s test that the http profile and clientssl profile are working. Open a browser tab and HTTPS (different from before, when we accessed our example application via HTTP) to this Elastic IP.

_images/15_https_test_vs1.png
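
The same test from the Super-NetOps terminal (the -k flag skips certificate verification because the lab uses a self-signed certificate; replace the placeholder with your Elastic IP):

curl -sk https://<elastic-ip>/ | head
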
Test Failover

From the Super-NetOps terminal, run the handy lab-info utility. Confirm that “MCPD is up, System Ready” for all three of your instances.

lab-info

From the HTTPS Configuration Utility (Web UI) of the active Big-IP device: Device Management => Devices. [Force Offline]. Click [OK] to confirm.

_images/1_force_standby.png _images/2_force_standby_confirm.png

From the Super-NetOps terminal, run the lab-info utility. Notice how the Elastic IP previously associated with Big-IP1 has now “floated over” and is associated with Big-IP2.

lab-info
_images/3_terraform_failover_eip.png
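
You can also confirm the new association from the CLI (this lists every Elastic IP visible to your credentials, so look for yours):

aws ec2 describe-addresses --query 'Addresses[].[PublicIp,InstanceId]' --output table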

HTTPS to the Elastic IP. We simulated a failover event and our sample application is still up. Because only the Big-IP has failed, not the whole Availability Zone, and the client is configured for persistence, the application is still served up from the same Availability Zone.

_images/4_https_standby_az1.png

Now we’ll simulate an Availability Zone outage. From the HTTPS Configuration Utility (Web UI) of the active Big-IP: Local Traffic => Pools => Members => select the pool member in Availability Zone #1 (almost always the first pool member) and [Force Offline].

_images/5_disable_node1.png

HTTPS to the Elastic IP. Hit refresh [F5] a few times to refresh the cache. Notice we are now connecting to the application in AZ#2.

_images/5_https_standby_az2.png

Note

Traditional HA failover relies on Layer 2 connectivity and a heartbeat to trigger a failover event and move a ‘floating IP’ to the new active unit. There is no Layer 2 connectivity across availability zones in the cloud. Instead, the Big-IP detects an availability zone outage or trouble with a Big-IP VE, and the Elastic IP ‘floats’ over to the new active device, as you just saw.

_images/NewSharedResponsibilityModel.png

Logging to CloudWatch

F5 Virtual Editions support comprehensive request and security logging for compliance and troubleshooting using two AWS native features: S3 Buckets and CloudWatch. In this lab we’ll configure logging to CloudWatch.

LTM Request Logging to CloudWatch

From the Super-NetOps terminal, run the handy lab-info utility. Confirm that “MCPD is up, System Ready” for all three of your instances.

lab-info

From the AWS management console, navigate to Services => Management Tools => CloudWatch => Log Groups. In the search filter, enter your username (e.g. user55). Terraform created a Log Group for you.

_images/1_log-stream.png

Click on your log group. Click on your log stream named “log-stream”. Notice the Message column has no messages.

_images/2_log-stream.png

Right-click and copy your log group name (e.g. user55labcom). Save it in notepad or your preferred text editor / note-taking method for later use.

_images/3_copy_log-stream.png

For convenience working through the next few steps, split your screen into two halves: the Super-NetOps terminal on the left and the Firefox or Chrome browser on the right. On a standard US/English Windows keyboard you can split the screen with <Windows Key + left arrow> and <Windows Key + right arrow>.

From your Super-NetOps terminal, there are multiple ways to see your AWS access keys. You can echo the environment variables:

echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY

…or you can cat the hidden ~/.aws/config file:

cat ~/.aws/config

Copy your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values. Save in notepad or your preferred text editor / note taking method for later use.

Create a new cloud_logger iApp. HTTPS to the Configuration Utility (Web UI) of Big-IP1 (assuming that is the ACTIVE device and not STANDBY).

iApps => Application Services => Name: cloudwatch. Template: f5.cloud_logger.v1.0.0. Click [Finished].

_images/4_create_iapp.png _images/5_create_iapp.png
Question                                                    Value
Name                                                        cloudwatch
Template                                                    f5.cloud_logger.v1.0.0
Which AWS region is the provider located in?                us-east-1
What is the access key you want to use for the API calls?   value of $AWS_ACCESS_KEY_ID
What is the secret key you want to use for the API calls?   value of $AWS_SECRET_ACCESS_KEY
What is the AWS CloudWatch Logs group name?                 your log group name (e.g. user55labcom)
What is the AWS CloudWatch Logs group’s stream name?        log-stream
Do you want to enable LTM Request logging?                  Enable LTM request logging
_images/6_copy_access_keys.png

Click [Finished].

_images/7_cloud_logging_iapp_finished.png

The logging components have been created!

_images/7_cloud_logging_iapp_created.png

HTTPS to the Configuration Utility (Web UI) of Big-IP2 (if that is the standby). Look in the upper left-hand corner. Confirm you are on the STANDBY. All of the iApp changes are kept in sync between active and standby devices.

_images/8_standby_iapp_sync.png

HTTPS to the Configuration Utility (Web UI) of Big-IP1 (assuming that is the ACTIVE device and not STANDBY).

iApps => Application Services => virtual_server1.

Attention

Before completing the next few steps, DISABLE STRICT UPDATES for the f5.tcp.v1.0.0rc2 iApp named virtual_server1 in our example.

Local Traffic => Virtual Servers => virtual_server1_vs.10.0.1.x.

  • Choose “Advanced” from the dropdown.
  • Select SSL Profile(Client): clientssl
  • Change HTTP Profile to “http”
  • Request Logging Profile: cloudwatch_remote_logging

Click [Update].

_images/9_1st_virtual.png _images/10_virtual_advanced.png _images/11_request_logging_profile.png _images/12_1st_virtual_update.png

Do the same for the second virtual server. Local Traffic => Virtual Servers => virtual_server1_vs.10.0.1.x.

  • Choose “Advanced” from the dropdown.
  • Select SSL Profile(Client): clientssl
  • Change HTTP Profile to “http”
  • Request Logging Profile: cloudwatch_remote_logging

Click [Update].

_images/13_2nd_virtual.png _images/14_2nd_request_logging_profile.png _images/15_2nd_virtual_update.png

Run the lab-info command. Note the Elastic IP.

lab-info
_images/16_elastic_ip_for_testing.png

HTTPS to the Elastic IP to test request logging. Refresh with [F5] key for 15 seconds to generate a modest amount of traffic.

_images/17_refresh_https.png

Attention

Some lab testers reported an incompatibility issue with Mozilla Firefox on Linux and the AWS CloudWatch console. If Firefox doesn’t render the CloudWatch console, switch to Google Chrome for this part of the lab.

From the AWS Console, Services => Management Tools => CloudWatch => Log Groups. Select your log group and log-stream.

_images/18_log-stream.png

You will see the http request logs.

_images/18_log-stream_logs.png

Expand a log entry to see more detail.

_images/19_log-stream_expand.png

Copy the CLIENT_IP of a request and use this CLIENT_IP in the “Filter events” search field. In production you would filter search results by attributes such as CLIENT_IP to home in on relevant logs.

_images/20_log-stream_filter1.png _images/21_log-stream_filter2.png
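
The same filtering works from the CLI (a sketch; substitute your log group name and a CLIENT_IP you actually copied):

aws logs filter-log-events --log-group-name user55labcom --filter-pattern '"10.0.1.100"' --limit 10
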
WAF HTTP Request and Security Logging to CloudWatch

HTTPS to the Configuration Utility (Web UI) of the BIG-IP Autoscale Instance: waf…

iApps => Application Services => waf-userxxf5labcom.

_images/21a_disable_strict_updates.png

Properties => UNCHECK “Strict Updates”. [Update].

_images/21b_disable_strict_updates.png

Create a new cloud_logger iApp. iApps => Application Services => Name: cloudwatch. Template: f5.cloud_logger.v1.0.0. Click [Finished].

Question                                                    Value
Name                                                        cloudwatch
Template                                                    f5.cloud_logger.v1.0.0
Which AWS region is the provider located in?                us-east-1
What is the access key you want to use for the API calls?   value of $AWS_ACCESS_KEY_ID
What is the secret key you want to use for the API calls?   value of $AWS_SECRET_ACCESS_KEY
What is the AWS CloudWatch Logs group name?                 your log group name (e.g. user55labcom)
What is the AWS CloudWatch Logs group’s stream name?        log-stream
Do you want to enable ASM logging?                          Enable ASM logging
What ASM requests do you want to log?                       Log all requests (verbose)
Do you want to include ASM DOS logging?                     Include DOS protection logging
Do you want to enable LTM Request logging?                  Enable LTM request logging
What Request parameters do you want to send in the log?     leave defaults

Click [Finished].

_images/22_cloud_logging_waf.png

Local Traffic => Virtual Server => waf-userXXf5labcom_vs.

_images/24_request_logging_waf.png

Change Request Logging Profile to cloudwatch_remote_logging.

_images/25_request_logging_waf.png

Click [Update].

_images/26_request_logging_update.png

Local Traffic => Virtual Server => waf-userXXf5labcom_vs => Security => Policies.

_images/26a_request_logging_security.png

Log Profile. Select cloudwatch_remote_logging. Click [Update].

_images/26b_request_logging_security.png

From the Super-NetOps terminal, run the lab-info utility.

lab-info

HTTPS to the WAF ELB URL. Refresh the browser with <CTRL+F5> for 15 seconds to generate a modest amount of traffic.

_images/28_refresh_waf_url.png

Back in the CloudWatch console, use the search term waf to see logs coming from your F5 WAF.

_images/29_waf_cloudwatch.png

Autoscale WAF

Automatically scale out your Web Application Firewall to service a surge and scale in when the surge subsides.

Autoscale WAF

HTTPS to the WAF ELB URL.

_images/waf-example-site.png

From the AWS console, navigate to Services => AUTO SCALING => Auto Scaling Groups. Filter on your username and select your waf-userxx… auto scaling group.

Select the ‘Instances’ tab below and select your Instance ID (there should be only one). If your instance is “Protected from… Scale in”, it will always stay up regardless of the configured scale-out/scale-in thresholds. It’s common to keep a single minimum WAF instance running at all times and scale out the 2nd, 3rd, Nth WAF instances during surges.

_images/autoscale-pending.png

Select the Scaling Policies tab. These policies were deployed via the CloudFormation template and can be changed via the CloudFormation template.

_images/autoscale-policy2.png

Login to the Configuration utility (Web UI) of the active BIG-IP Autoscale Instance: MGMT IP on port 8443.

lab-info

In the Big-IP Configuration utility (Web UI), navigate to Security => Application Security => Security Policies => Active Policies. A “linux-low” policy was deployed via the CloudFormation template and is in Enforcement Mode: Blocking.

_images/waf-policy.png

From the f5-super-netops container, let’s launch some traffic against the application behind our WAF and watch it autoscale to service the surge! Replace the https://waf-userxx… URL in the command below with the one from the output of lab-info, and don’t miss the critical trailing forward slash / at the end!

base64 /dev/urandom | head -c 3000 > payload
ab -t 120 -c 200 -T 'multipart/form-data; boundary=1234567890' -p payload https://waf-user11f5democom-xxxxxxxxx.us-east-1.elb.amazonaws.com/

Services => Compute => EC2 => INSTANCES => Instances. Filter on your username and, after 60 seconds (the lowest configurable time threshold), hit refresh to see your 2nd autoscale WAF instance starting.

_images/autoscale-initializing.png
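
If you prefer to watch from the terminal, the AWS CLI can poll for the new instance too (a sketch; the Name-tag wildcard is an assumption based on the waf-userxx… naming above):

watch -n 15 "aws ec2 describe-instances --filters 'Name=tag:Name,Values=*waf*' --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table"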

Clean Up Environment

The exciting promise of public cloud is not only to stand up application environments quickly, consistently, and with minimal capex, but also the inverse: to tear down application environments quickly, cleanly, and completely.

Clean up the lab environment

From the Super-NetOps terminal, clean up, then destroy the environment.

lab-cleanup
terraform destroy --force

Attention

You might need to run terraform destroy --force a second time. Watch the console output. Nothing serious: sometimes the Internet gateways take longer to delete than terraform’s configured timeout allows.

_images/1_lab_cleanup.png _images/2_lab_cleanup.png _images/3_lab_cleanup.png _images/4_lab_cleanup.png