
Setup and Implement Read


In these tutorials, you will write a custom provider against the API of a fictional coffee-shop application called HashiCups using the Terraform Plugin SDKv2. Through the process, you will learn how to create data sources, authenticate the provider to the HashiCups client, and create resources with CRUD functionality.

There are several reasons to author a custom Terraform provider, including:

  • Supporting an internal private cloud whose functionality is either proprietary or would not benefit the open source community.
  • Extending the capabilities of an existing provider with bug fixes, new features, or customizations.

In this tutorial, you will set up your Terraform provider development environment and create a coffees data source that will return all coffees HashiCups serves. To do this, you will:

  1. Set up your development environment.
    You will clone the HashiCups repository and check out the boilerplate branch, which contains a scaffold for a generic Terraform provider.
  2. Define the coffees data source.
    You will add a scaffold that defines an empty schema and functions to retrieve a list of coffees.
  3. Define the coffees schema.
    The schema defines properties that allow Terraform to recognize, reference and store the coffees data resource.
  4. Implement read.
    The read function sends a GET request to the /coffees endpoint, then maps the response to the schema defined in the previous step.
  5. Add coffees data source to the provider schema.
    This allows you to use the data source in your configuration.

»Prerequisites

To follow this tutorial, you need:

  • Go 1.13+ installed and configured.
  • the Terraform 0.14+ CLI installed locally. You can download Terraform from the HashiCorp releases site.
  • Docker and Docker Compose to run an instance of HashiCups locally.

»Set up your development environment

Clone the boilerplate branch of the Terraform HashiCups Provider repository. This serves as the boilerplate for your provider workspace.

$ git clone --branch boilerplate https://github.com/hashicorp/terraform-provider-hashicups

Navigate into the directory.

$ cd terraform-provider-hashicups

The HashiCups provider requires a running instance of HashiCups. Navigate to the docker_compose directory, then run docker-compose up to spin up a local instance of HashiCups on port 19090.

$ cd docker_compose && docker-compose up

Leave this terminal running.

In another terminal, verify that HashiCups is running by sending a request to its health check endpoint.

$ curl localhost:19090/health
ok

The directory should have the following structure.

$ tree -L 3
.
├── Makefile
├── README.md
├── docker_compose
│   ├── conf.json
│   └── docker-compose.yml
├── examples
│   ├── coffee
│   │   └── main.tf
│   └── main.tf
├── hashicups
│   └── provider.go
└── main.go

If you’re stuck, refer to the implement-read branch to see the changes implemented in this tutorial.

»Explore your development environment

The boilerplate includes the following:

  • Makefile contains helper functions used to build, package, and install the HashiCups provider.
    It's currently written for macOS Terraform provider development, but you can change the variables at the top of the file to match your OS_ARCH. If you're using Windows, update your Makefile as shown below. You can list the OS/architecture pairs Go supports by running go tool dist list.
    - BINARY=terraform-provider-${NAME}
    + BINARY=terraform-provider-${NAME}.exe
    - OS_ARCH=darwin_amd64
    + OS_ARCH=windows_amd64
    
    The install target is configured to install the provider into the appropriate subdirectory within the default macOS and Linux user plugins directory, as defined by the Terraform 0.13+ specifications. A sketch of the build and install targets follows this list.
  • docker_compose contains the files required to initialize a local instance of HashiCups.
  • examples contains sample Terraform configuration that can be used to test the HashiCups provider.
  • hashicups contains the main provider code. This will be where the provider's resources and data source implementations will be defined.
  • main.go is the main entry point. This file creates a valid, executable Go binary that Terraform Core can consume.
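
The build and install targets drive the rest of this tutorial, so here is a minimal sketch of what they likely look like, inferred from the darwin_amd64 install path shown in the Test the provider section; check the Makefile in your clone for the exact contents.

HOSTNAME=hashicorp.com
NAMESPACE=edu
NAME=hashicups
BINARY=terraform-provider-${NAME}
VERSION=0.2
OS_ARCH=darwin_amd64

build:
	go build -o ${BINARY}

install: build
	mkdir -p ~/.terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${NAME}/${VERSION}/${OS_ARCH}
	mv ${BINARY} ~/.terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${NAME}/${VERSION}/${OS_ARCH}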

»Explore main.go file

Open main.go in the root of the repository. The main function consumes the Plugin SDK's plugin library, which facilitates the RPC communication between Terraform Core and the plugin.

package main

import (
  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
  "github.com/hashicorp/terraform-plugin-sdk/v2/plugin"

  "terraform-provider-hashicups/hashicups"
)

func main() {
  plugin.Serve(&plugin.ServeOpts{
    ProviderFunc: func() *schema.Provider {
      return hashicups.Provider()
    },
  })
}

Notice the ProviderFunc returns a *schema.Provider from the hashicups package.

»Explore provider schema

The hashicups/provider.go file currently defines an empty provider.

package hashicups

import (
  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Provider -
func Provider() *schema.Provider {
  return &schema.Provider{
    ResourcesMap: map[string]*schema.Resource{},
    DataSourcesMap: map[string]*schema.Resource{},
  }
}

The helper/schema library is part of the Terraform Plugin SDK, not Terraform Core (note the import path above). It abstracts many of the complexities and ensures consistency between providers. The *schema.Provider type can accept:

  • the resources and data sources it supports (ResourcesMap and DataSourcesMap)
  • configuration keys (*schema.Schema properties in the Schema field)
  • any callbacks to configure the provider (ConfigureContextFunc)

You can use configuration keys and callbacks to authenticate and configure the provider. You will add them in the Add Authentication to a Provider tutorial.
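
To make those three pieces concrete, the sketch below shows roughly what a configured provider could look like. This is illustrative only; the username and password keys and the inline configure callback are hypothetical, and you will build the real configuration in the Add Authentication to a Provider tutorial, so don't add this code now.

package hashicups

import (
  "context"

  "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Provider - illustrative sketch only
func Provider() *schema.Provider {
  return &schema.Provider{
    // Configuration keys practitioners can set in the provider block.
    Schema: map[string]*schema.Schema{
      "username": &schema.Schema{
        Type:     schema.TypeString,
        Optional: true,
      },
      "password": &schema.Schema{
        Type:      schema.TypeString,
        Optional:  true,
        Sensitive: true,
      },
    },
    // Resources and data sources the provider supports.
    ResourcesMap:   map[string]*schema.Resource{},
    DataSourcesMap: map[string]*schema.Resource{},
    // Callback that reads the configuration keys and returns an API
    // client for the CRUD functions to use.
    ConfigureContextFunc: func(ctx context.Context, d *schema.ResourceData) (interface{}, diag.Diagnostics) {
      // Hypothetical: construct and return a HashiCups client here.
      return nil, nil
    },
  }
}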

»Build provider

Run the go mod init command to define this directory as the root of a module.

$ go mod init terraform-provider-hashicups
go: creating new go.mod: module terraform-provider-hashicups

Then, run go mod vendor to create a vendor directory that contains all the provider's dependencies.

$ go mod vendor

Next, build the provider using the Makefile.

$ make build
go build -o terraform-provider-hashicups

This runs the go build -o terraform-provider-hashicups command. Terraform searches for plugins in the format of terraform-<TYPE>-<NAME>. In the case above, the plugin is of type "provider" and of name "hashicups".

To verify things are working correctly, execute the recently created binary.

$ ./terraform-provider-hashicups
This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically

»Define coffees data resource

Now that you have created the provider, add the coffees data source. This data source will pull information on all coffees served by HashiCups.

Create a new file named data_source_coffee.go in the hashicups directory and add the following code snippet. As a general convention, Terraform providers define each data source in its own file, named after the data source and prefixed with data_source_.

The libraries imported here will be used in dataSourceCoffeesRead.

package hashicups

import (
  "context"
  "encoding/json"
  "fmt"
  "net/http"
  "strconv"
  "time"

  "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func dataSourceCoffees() *schema.Resource {
  return &schema.Resource{
    ReadContext: dataSourceCoffeesRead,
    Schema: map[string]*schema.Schema{},
  }
}

The coffees data source function returns a schema.Resource, which defines the schema and CRUD operations for the resource. Since Terraform data sources only read information (they do not create, update, or delete), only a read function (ReadContext) is defined.
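
For contrast, a managed resource wires up all four CRUD operations. The following is a hypothetical sketch (the resourceOrder* functions don't exist yet; you will write functions like these in the Implement Create, Update, and Delete tutorials):

// Hypothetical managed resource; unlike a data source, it defines
// create, update, and delete operations in addition to read.
func resourceOrder() *schema.Resource {
  return &schema.Resource{
    CreateContext: resourceOrderCreate,
    ReadContext:   resourceOrderRead,
    UpdateContext: resourceOrderUpdate,
    DeleteContext: resourceOrderDelete,
    Schema:        map[string]*schema.Schema{ /* ... */ },
  }
}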

»Define coffees schema

All Terraform resources must have a schema. The schema lets the provider map the API's JSON response to the resource's attributes.

The /coffees endpoint returns an array of coffees. The sample below shows a truncated output.

$ curl localhost:19090/coffees
[
 {
   "id": 1,
   "name": "Packer Spiced Latte",
   "teaser": "Packed with goodness to spice up your images",
   "description": "",
   "price": 350,
   "image": "/packer.png",
   "ingredients": [
     { "ingredient_id": 1 },
     { "ingredient_id": 2 },
     { "ingredient_id": 4 }
   ]
 },
 ## ...
]

Since the response returns a list of coffees, the coffees schema should reflect that. Update your coffees data source's schema with the following code snippet.

Schema: map[string]*schema.Schema{
  "coffees": &schema.Schema{
    Type:     schema.TypeList,
    Computed: true,
    Elem: &schema.Resource{
      Schema: map[string]*schema.Schema{
        "id": &schema.Schema{
          Type:     schema.TypeInt,
          Computed: true,
        },
        "name": &schema.Schema{
          Type:     schema.TypeString,
          Computed: true,
        },
        "teaser": &schema.Schema{
          Type:     schema.TypeString,
          Computed: true,
        },
        "description": &schema.Schema{
          Type:     schema.TypeString,
          Computed: true,
        },
        "price": &schema.Schema{
          Type:     schema.TypeInt,
          Computed: true,
        },
        "image": &schema.Schema{
          Type:     schema.TypeString,
          Computed: true,
        },
        "ingredients": &schema.Schema{
          Type:     schema.TypeList,
          Computed: true,
          Elem: &schema.Resource{
            Schema: map[string]*schema.Schema{
              "ingredient_id": &schema.Schema{
                Type:     schema.TypeInt,
                Computed: true,
              },
            },
          },
        },
      },
    },
  },
},

Notice that the coffees schema is a schema.TypeList of coffee (schema.Resource).

The coffee resource's properties should map to their respective values in the JSON response. In the above example response:

  • The coffee's id is 1, a schema.TypeInt.
  • The coffee's name is "Packer Spiced Latte", a schema.TypeString.
  • The coffee's ingredients attribute is an array of ingredient objects, a schema.TypeList whose elements are defined by a nested schema.Resource with its own map[string]*schema.Schema.

You can use various schema types to define complex data models. You will work with more complex schemas in the Implement Complex Read and Implement Create tutorials.
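
Once the data source is registered with the provider (you will do this shortly), a configuration can reference these nested values with ordinary index and attribute syntax. A hypothetical example, assuming a data block labeled all:

data "hashicups_coffees" "all" {}

output "first_coffee_name" {
  # Index into the computed list, then read a nested attribute.
  value = data.hashicups_coffees.all.coffees[0].name
}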

»Implement read

Now that you have defined the coffees schema, you can implement the dataSourceCoffeesRead function.

Add the following read function to your data_source_coffee.go file.

func dataSourceCoffeesRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
  client := &http.Client{Timeout: 10 * time.Second}

  // Warnings and errors can be collected in a slice type
  var diags diag.Diagnostics

  // Build a GET request against the HashiCups /coffees endpoint
  req, err := http.NewRequest("GET", fmt.Sprintf("%s/coffees", "http://localhost:19090"), nil)
  if err != nil {
    return diag.FromErr(err)
  }

  r, err := client.Do(req)
  if err != nil {
    return diag.FromErr(err)
  }
  defer r.Body.Close()

  // Decode the JSON array of coffee objects
  coffees := make([]map[string]interface{}, 0)
  err = json.NewDecoder(r.Body).Decode(&coffees)
  if err != nil {
    return diag.FromErr(err)
  }

  // Map the decoded response onto the coffees attribute in the schema
  if err := d.Set("coffees", coffees); err != nil {
    return diag.FromErr(err)
  }

  // always run: a timestamp ID forces this data source to refresh on every apply
  d.SetId(strconv.FormatInt(time.Now().Unix(), 10))

  return diags
}

This function sends a GET request to localhost:19090/coffees, then decodes the response into a []map[string]interface{}. Next, d.Set("coffees", coffees) maps the response body (a list of coffee objects) onto the Terraform coffees data source, assigning each value to its respective position in the schema. Finally, it calls SetId to set the resource ID.

Notice that this function returns a diag.Diagnostics type, which can carry multiple errors and warnings back to Terraform, giving users more robust error and warning messages. You can use the diag.FromErr() helper function to convert a Go error to a diag.Diagnostics type. You will work with diagnostics further in the Debug a Terraform Provider tutorial.
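
For example, instead of returning on the first problem, a read function could record a non-fatal warning and continue. A minimal sketch, with a hypothetical warning (the Summary and Detail text below is illustrative, not part of this tutorial):

// Hypothetical: append a warning to the diagnostics slice and keep going.
diags = append(diags, diag.Diagnostic{
  Severity: diag.Warning,
  Summary:  "Unexpected empty description",
  Detail:   "One or more coffees returned an empty description field.",
})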

Tip: This function deliberately avoids an API client library so the steps involved are explicit. Other tutorials in this collection use the HashiCups client library to abstract the CRUD functionality.

The existence of a non-blank ID tells Terraform that a resource was created. This ID can be any string value, but should be a value that Terraform can use to read the resource again. Since this data resource doesn't have a unique ID, you set the ID to the current UNIX time, which will force this resource to refresh during every Terraform apply.

When you create a resource with Terraform but delete it manually, Terraform should handle the discrepancy gracefully. If the API returns an error when the resource doesn't exist, the read function should first check whether the resource is available. If it isn't, the function should set the ID to an empty string so Terraform "destroys" the resource in state. The following code snippet is an example of how this could be implemented; you do not need to add this to your configuration for this tutorial.

if resourceDoesntExist {
  d.SetId("")
  return diags
}

»Add data source to provider

Now that you’ve defined your data source, you can add it to your provider.

In your provider.go file, add the coffees data source to the DataSourcesMap. The DataSourcesMap attribute takes a map from the data source name, hashicups_coffees, to the *schema.Resource defined in data_source_coffee.go. Resource and data source names must follow the <provider>_<resource_name> convention.

// Provider -
func Provider() *schema.Provider {
   return &schema.Provider{
       ResourcesMap: map[string]*schema.Resource{},
-       DataSourcesMap: map[string]*schema.Resource{},
+       DataSourcesMap: map[string]*schema.Resource{
+            "hashicups_coffees":     dataSourceCoffees(),
+       },
   }
}

»Test the provider

Now that you’ve implemented read and created the coffees data source, verify that it works.

First, navigate to the terraform-provider-hashicups root directory.

Then, build the binary and move it into your user Terraform plugins directory. This allows you to sideload and test your custom providers.

$ make install
go build -o terraform-provider-hashicups
mv terraform-provider-hashicups ~/.terraform.d/plugins/hashicorp.com/edu/hashicups/0.2/darwin_amd64

Navigate to the examples directory. This contains a sample Terraform configuration for the Terraform HashiCups provider.

$ cd examples
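
The configuration in examples/main.tf is likely similar to the following sketch, which sources the sideloaded provider and filters the coffees list down to the psl output shown below; check the file in your clone for the exact contents.

terraform {
  required_providers {
    hashicups = {
      version = "0.2"
      source  = "hashicorp.com/edu/hashicups"
    }
  }
}

data "hashicups_coffees" "all" {}

# Filter the full coffees list down to the Packer Spiced Latte entry.
output "psl" {
  value = {
    for coffee in data.hashicups_coffees.all.coffees :
    coffee.id => coffee
    if coffee.name == "Packer Spiced Latte"
  }
}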

Finally, initialize your workspace to refresh your HashiCups provider, then apply. This should return the properties of "Packer Spiced Latte" in your output.

$ terraform init && terraform apply --auto-approve
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

psl = {
  "1" = {
    "description" = ""
    "id" = 1
    "image" = "/packer.png"
    "ingredients" = tolist([
      {
        "ingredient_id" = 1
      },
      {
        "ingredient_id" = 2
      },
      {
        "ingredient_id" = 4
      },
    ])
    "name" = "Packer Spiced Latte"
    "price" = 350
    "teaser" = "Packed with goodness to spice up your images"
  }
}

»Next steps

Congratulations! You created your first Terraform provider and data source, which you can use to reference information from an API in your Terraform configuration.

If you were stuck during this tutorial, check out the implement-read branch to see the completed changes.

  • To learn more about the SDK v2, refer to the Terraform Plugin SDK v2 Upgrade tutorial.
  • To learn more about the Terraform Plugin SDK, refer to the Terraform Plugin SDK Documentation.
  • To learn more about how the plugins system in Terraform works, refer to the Terraform Plugins Documentation.
  • To learn more about provider source, refer to the Terraform provider source documentation.

