Terraform Cloud - Governance

Mocking & Testing

In this guide, we're going to illustrate how you can use the Sentinel Simulator with mocks of the Terraform Cloud Sentinel imports. A mock simulates the data that a Terraform plan would produce for each import. Since there are four Terraform Cloud Sentinel imports (tfplan, tfstate, tfconfig, and tfrun), there are four corresponding mocks. We'll use the tfplan mock.

Testing a policy in the Sentinel Simulator with mocks is faster because each test does not run a Terraform plan. Using the simulator avoids having to discard runs against workspaces and keeps those runs out of your workspace history. If you want to automate testing of policies that change frequently, the simulator is the best choice since each test will be faster and will not place extra load on your Terraform Cloud server.

Using the simulator with mocks carries the risk that human error will leave you with an invalid test of your Sentinel policy. In contrast, testing policies against actual Terraform code gives users more certainty that their Sentinel policies are really behaving as intended. In the near future, HashiCorp expects to release a mock generator that will make the creation of mocks for the four Terraform Sentinel imports less error-prone. At that point, using the Sentinel Simulator with mocks will become much easier.

Generating Mocks

Terraform Cloud makes it easy for users to generate the tfplan, tfconfig, tfstate, and tfrun mocks for any plan they have run in the past seven days. In the TFC UI, you can select a run from a workspace, expand the plan, and click the "Download Sentinel mocks" button to download a tar file with the mocks. You can also download the mocks with the TFC API.
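As a minimal sketch, requesting a mock bundle with curl might look like this, assuming the Plan Exports API with its sentinel-mock-bundle-v0 data type, a placeholder plan ID (plan-XXXXXXXX), and a TFC API token in the TOKEN environment variable:

# Request a Sentinel mock bundle for a plan; plan-XXXXXXXX is a placeholder
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{"data": {"type": "plan-exports", "attributes": {"data-type": "sentinel-mock-bundle-v0"}, "relationships": {"plan": {"data": {"id": "plan-XXXXXXXX", "type": "plans"}}}}}' \
  https://app.terraform.io/api/v2/plan-exports

The response includes the ID of the new plan export, which you can then pass to the plan export download endpoint to retrieve the tarball of mocks.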

Sentinel Mocks

Creating the Mock File Structure

Create a Sentinel Simulator configuration file called sentinel.json that tells the simulator to load the mocks. Because the mock paths in the file are relative, place sentinel.json in the directory that contains the "mocks" directory:

{
  "mock": {
    "tfconfig": "mocks/mock-tfconfig.sentinel",
    "tfplan": "mocks/mock-tfplan.sentinel",
    "tfstate": "mocks/mock-tfstate.sentinel"
  }
}

Create a Sentinel policy called check-policy.sentinel in the same directory as sentinel.json:

import "tfplan"
 main = rule {
     all tfplan.resources.aws_instance as _, instances {
         all instances as _, r {
             r.applied.tags else null is not null
         }
     }
 }
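
With these files in place, your working directory looks like this:

./
├── check-policy.sentinel
├── sentinel.json
└── mocks/
    ├── mock-tfconfig.sentinel
    ├── mock-tfplan.sentinel
    └── mock-tfstate.sentinel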

Running Mocks in Sentinel Simulator

You can now use the simulator to run your policy against the mock data by running the command sentinel apply check-policy.sentinel. You should see one line of output with "Pass" written in green.
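For example:

sentinel apply check-policy.sentinel
Pass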

You can make the simulator print a full trace, closer to what you would see when using the Sentinel policy in Terraform Cloud, by adding the -trace option before the name of the policy. In this case, the command would be sentinel apply -trace check-policy.sentinel. You should see output like this (with some extra text that we have trimmed):

Pass

TRUE - check-policy.sentinel:3:1 - Rule "main"

Running Tests in Sentinel Simulator

Writing and running test suites against your Sentinel policies can also be helpful in cases where you do not have mock data. Let's go over a simple Sentinel policy to understand how to write and run tests in Sentinel Simulator.

Defining a Policy

In this example, our Sentinel policy will check that AWS instances use the t2.medium instance type. Save this policy locally as instance.sentinel in the root of a directory called policies.

main = rule {
  instance_type_is_medium
}

instance_type_is_medium = rule {
  instance_type is "t2.medium"
}

This policy has two rules: main and instance_type_is_medium. In order for main to evaluate to true, instance_type must equal t2.medium. Rules can be composed of other rules, and when debugging or testing a policy, it is much easier to work with several small rules than with one rule that contains many conditions.
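To illustrate rule composition, a policy that also checked EBS optimization might split each condition into its own rule. This is a hypothetical sketch that assumes an ebs_optimized value is supplied to the policy the same way instance_type is:

instance_type_is_medium = rule {
  instance_type is "t2.medium"
}

# Hypothetical rule, shown only to illustrate composition
instance_is_ebs_optimized = rule {
  ebs_optimized is true
}

main = rule {
  instance_type_is_medium and instance_is_ebs_optimized
}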

Writing Passing Tests

First, we should implement a test that defines the global values expected by the policy. The test pre-populates the variables that are passed in to the policy and then specifies whether each rule should pass or fail based on the test data we've provided.

In your policies directory, create a directory called test. In that directory, create a subdirectory that matches the name of the Sentinel policy (instance). Create a JSON file called pass.json that describes a successful test scenario.
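The resulting layout looks like this (expected-failure.json is added in the next section):

policies/
├── instance.sentinel
└── test/
    └── instance/
        ├── pass.json
        └── expected-failure.json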

For the successful scenario, we'll define instance_type under the global key in the JSON. We want to simulate a case where instance_type is given to the policy with a value of t2.medium.

Under the test section of the JSON, specify that the main rule should evaluate to true. You could optionally list the instance_type_is_medium rule and specify that it is expected to evaluate to true as well.

{
  "global": {
    "instance_type": "t2.medium"
  },
  "test": {
    "main": true,
    "instance_type_is_medium": true
  }
}

Writing Failing Tests

To see the testing suite evaluate failures, let's write a test to describe incorrect data that should cause rules to fail. In the test directory for your policy, create a file called expected-failure.json with the following contents:

 "global": {
    "instance_type": "t2.nano"
  },
  "test": {
    "main": false,
    "instance_type_is_medium": false,
  }
}

This data specifies an instance_type that should cause the instance_type_is_medium rule to fail. That rule is also specified under test as expecting a false result.

Running Tests

Now that we have manually created scenarios in which our policies should pass and fail, we can run sentinel test to ensure we receive the information we expect.

cd ~/policies
sentinel test
PASS - instance.sentinel
  PASS - test/instance/expected-failure.json
  PASS - test/instance/pass.json

When we run these tests, both should pass. In expected-failure.json, we specified that the provided data should result in a false value for the rules; since the rules did evaluate to false as expected, the failing scenario counts as a passing test. For more trace information, run sentinel test with the -verbose flag.