
Terraform Associate 004

The Terraform Associate 004 certification validates a professional's proficiency in using HashiCorp Terraform to manage infrastructure as code. It covers the fundamentals of provisioning and managing cloud resources across multiple providers with a declarative configuration language. Earning the TERRA_A_004 credential demonstrates the technical skills needed to automate infrastructure deployment and keep cloud environments consistent.



---------- Question 1
You have a set of legacy infrastructure resources that were created manually through the cloud provider console. You want to start managing these resources using Terraform without destroying and recreating them. Which Terraform command should you use to bring these existing resources under Terraform management?
  1. terraform plan -import
  2. terraform refresh
  3. terraform import
  4. terraform pull
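As context, a minimal sketch of the import workflow (the resource name, AMI, and instance ID below are hypothetical). `terraform import` populates state but does not generate configuration, so a matching resource block must exist first:

```hcl
# Stub declared before importing; fill in arguments afterwards by
# inspecting `terraform state show aws_instance.legacy`.
resource "aws_instance" "legacy" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"
}
```

Then `terraform import aws_instance.legacy i-0abc1234567890def` maps the existing instance to that resource address in state, leaving the running resource untouched.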

---------- Question 2
A team is developing a complex Terraform configuration that requires a specific version of the HashiCorp AWS provider to ensure compatibility with certain legacy resources. They also need to use a community-developed provider for a third-party SaaS platform. How should the team properly manage these provider requirements and their respective versions within the Terraform block?
  1. They should use the required_providers block to specify the source address and version constraints for both the official and community providers.
  2. They must manually download the provider binaries and place them in the root directory of their project before running any Terraform commands.
  3. Terraform automatically detects the required versions by scanning the resource blocks, so explicit version definitions in the configuration are considered bad practice.
  4. They should use the terraform init -upgrade command which will automatically generate the source and version requirements based on the resources used.
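A hedged sketch of such a terraform block (the community provider's source address and all version constraints are illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.67" # pin a range compatible with the legacy resources
    }
    # Hypothetical community provider for the SaaS platform.
    examplesaas = {
      source  = "examplecorp/examplesaas"
      version = ">= 1.2.0"
    }
  }
}
```

`terraform init` then resolves both providers from the registry namespaces given in `source`.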

---------- Question 3
A developer is troubleshooting a provider error that only occurs during the apply phase. They need to see the raw HTTP requests and responses between Terraform and the cloud API to identify the issue. How can they enable this level of detail in the console output?
  1. By setting the environment variable TF_LOG to the value TRACE before running the terraform apply command.
  2. By adding the --verbose flag to the terraform apply command and specifying the level as level-5.
  3. By enabling the verbose_logging = true argument inside the provider block for the specific cloud service being used.
  4. By running the terraform inspect command on the state file and looking for the network_logs attribute in the JSON output.
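For reference, enabling this level of detail might look like the following (the log file path is illustrative):

```shell
# TRACE is the most verbose level: every internal step plus the raw
# HTTP requests and responses exchanged with the provider API.
export TF_LOG=TRACE
# Optional: write the (very large) log to a file instead of stderr.
export TF_LOG_PATH="./terraform-trace.log"
echo "$TF_LOG"
```

With these set, the next `terraform apply` emits the full API traffic.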

---------- Question 4
An engineer is troubleshooting a provider-related error that only occurs during the apply phase. The error message printed by the CLI is too vague to identify the root cause. Which environment variable should the engineer set to see the raw API requests and responses between Terraform and the cloud provider, and what is the most detailed level available?
  1. Set the TF_LOG environment variable to TRACE to enable the most verbose logging output, which includes every internal step and API communication details.
  2. Set the TERRAFORM_DEBUG variable to 1 to enable the developer mode, which prints the stack trace of the Terraform binary whenever an error occurs during execution.
  3. Set the TF_LOG_LEVEL variable to INFO to filter out all non-essential messages and focus only on the specific API calls that returned an error code.
  4. Set the TF_VERBOSE_LOGGING variable to true and provide a path to a log file where the provider will dump its internal debugging information.
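As a sketch of the available levels: TF_LOG accepts TRACE, DEBUG, INFO, WARN, and ERROR, with TRACE the most detailed. Logging can also be scoped to one side of the plugin boundary:

```shell
# Capture raw provider/API traffic while keeping Terraform core quiet.
export TF_LOG_PROVIDER=TRACE
export TF_LOG_CORE=ERROR
echo "$TF_LOG_PROVIDER"
```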

---------- Question 5
A global enterprise is transitioning from manual infrastructure provisioning to using Terraform for their multi-cloud strategy involving AWS and Azure. They want to ensure that their infrastructure is reproducible and that they can use the same workflow regardless of the cloud provider. Which specific advantage of the Infrastructure as Code pattern allows them to manage these diverse environments using a single toolset while remaining service-agnostic?
  1. Terraform provides a single unified API that abstracts all cloud provider differences into a universal resource syntax that never changes between vendors.
  2. Terraform utilizes a provider-based architecture where specific plugins interface between the HCL core and the cloud APIs, allowing for a consistent workflow across multiple platforms.
  3. Terraform acts as a local agent on virtual machines to automatically detect manual changes and revert them to the desired state without requiring a central server.
  4. Terraform requires all cloud providers to adhere to the HashiCorp Standard Specification, which ensures that an S3 bucket and an Azure Blob use the exact same resource arguments.
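A minimal sketch of the provider-based architecture in practice (resource names, regions, and bucket names are illustrative): one configuration, one init/plan/apply workflow, two platform plugins:

```hcl
provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
}

# Each provider plugin translates these declarations into calls
# against its own platform API; the workflow is identical for both.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"
}

resource "azurerm_resource_group" "app" {
  name     = "example-app-rg"
  location = "eastus"
}
```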

---------- Question 6
A junior administrator is attempting to deploy a new infrastructure stack and has just finished writing the HCL files. They run the terraform plan command but receive an error stating that the provider plugins are missing. Which step of the Core Terraform Workflow did the administrator neglect to perform before attempting to generate an execution plan?
  1. The administrator forgot to run terraform validate to ensure that the syntax of the provider blocks was written correctly.
  2. The administrator skipped the terraform init command, which is responsible for downloading the necessary provider plugins and preparing the working directory.
  3. The administrator did not run terraform fmt, which is a mandatory step that must be completed before Terraform allows any planning operations.
  4. The administrator failed to create a manual .terraform directory and copy the provider binaries into it from the HashiCorp website.
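The failure and fix can be sketched as a short session (error text paraphrased):

```
$ terraform plan
Error: provider plugins are not installed ...

$ terraform init    # downloads provider plugins into .terraform/
$ terraform plan    # now succeeds: the working directory is prepared
```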

---------- Question 7
A developer needs to create a variable that can hold a list of objects, where each object contains a string for a username and a number for an age. Which Terraform complex type definition correctly represents this data structure to ensure strict type checking during execution?
  1. type = list(any)
  2. type = map(string)
  3. type = list(object({ username = string, age = number }))
  4. type = set(tuple([string, number]))
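A sketch of such a variable declaration with an illustrative default (the values are hypothetical):

```hcl
variable "users" {
  type = list(object({
    username = string
    age      = number
  }))
  default = [
    { username = "alice", age = 30 },
    { username = "bob", age = 42 },
  ]
}
```

Terraform rejects any element that is missing either attribute or supplies the wrong type for one.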

---------- Question 8
After successfully applying a configuration, a developer realizes they need to reformat their files to meet the team's style guidelines. They also want to remove all resources managed by the current configuration to avoid unnecessary costs. Which two Terraform commands should be used to achieve these specific goals of formatting and infrastructure removal?
  1. terraform fmt and terraform destroy
  2. terraform clean and terraform delete
  3. terraform style and terraform remove
  4. terraform lint and terraform terminate
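The two goals map onto a short session:

```
$ terraform fmt        # rewrites configuration files to the canonical style
$ terraform destroy    # plans and, after confirmation, removes all managed resources
```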

---------- Question 9
A system administrator notices that a security group rule was manually modified in the AWS Console, but the Terraform configuration remains unchanged. This discrepancy is known as drift. When the administrator runs a terraform plan, how does Terraform detect this drift, and what action will it propose to resolve the discrepancy in the next step?
  1. Terraform ignores the manual change because it only cares about what is in the configuration files and will do nothing until the code is updated.
  2. Terraform compares the current state file against the remote infrastructure during the refresh phase and proposes to revert the manual change to match the code.
  3. Terraform will automatically update the configuration files to match the manual changes made in the AWS Console to ensure the code is always current.
  4. Terraform will fail with a checksum mismatch error and refuse to run until the manual changes are deleted from the AWS Console by the administrator.
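For context, drift inspection can be separated from remediation (the `-refresh-only` flag is available in modern Terraform versions):

```
$ terraform plan -refresh-only   # refresh state against real infrastructure; report drift only
$ terraform plan                 # propose changes that revert the rule to match the configuration
```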

---------- Question 10
An engineer needs to create multiple AWS EC2 instances, but the number of instances varies based on the environment (development vs. production). They also need to ensure that each instance has a unique tag based on its index. Which Terraform feature is most appropriate for iterating over a list or count to create these resources efficiently within a single block?
  1. Use a series of nested if-else statements within the resource block to define each instance manually based on the environment name.
  2. Use the count meta-argument to specify the number of instances and use the count.index property to provide a unique value for each resource tag.
  3. Use the terraform loop command to execute the resource block multiple times, passing a different variable each time the command runs.
  4. Use the dynamic block syntax which is specifically designed for creating multiple top-level resources like EC2 instances from a map variable.
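A hedged sketch using the count meta-argument (the AMI ID and instance counts are illustrative):

```hcl
variable "instance_count" {
  type    = number
  default = 2 # e.g. 2 for development, a higher value for production
}

resource "aws_instance" "app" {
  count         = var.instance_count
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"

  tags = {
    Name = "app-server-${count.index}" # unique tag per instance
  }
}
```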


Are these questions useful?
Click here to get 360 more questions to pass this certification on the first try! An explanation for each answer is included!

Follow the LinkedIn channel below to stay updated on 89+ exams!
