Download AWS Certified Machine Learning Engineer - Associate.MLA-C01.ExamTopics.2026-02-20.106q.tqb

Vendor: Amazon
Exam Code: MLA-C01
Exam Name: AWS Certified Machine Learning Engineer - Associate
Date: Feb 20, 2026
File Size: 3 MB

How to open TQB files?

Files with the TQB (Taurus Question Bank) extension can be opened with Taurus Exam Studio.

Demo Questions

Question 1
An ML engineer needs to implement a solution to host a trained ML model. The rate of requests to the model will be inconsistent throughout the day.
The ML engineer needs a scalable solution that minimizes costs when the model is not in use. The solution also must maintain the model's capacity to respond to requests during times of peak usage.
Which solution will meet these requirements?
  A. Create AWS Lambda functions that have fixed concurrency to host the model. Configure the Lambda functions to automatically scale based on the number of requests to the model.
  B. Deploy the model on an Amazon Elastic Container Service (Amazon ECS) cluster that uses AWS Fargate. Set a static number of tasks to handle requests during times of peak usage.
  C. Deploy the model to an Amazon SageMaker endpoint. Deploy multiple copies of the model to the endpoint. Create an Application Load Balancer to route traffic between the different copies of the model at the endpoint.
  D. Deploy the model to an Amazon SageMaker endpoint. Create SageMaker endpoint auto scaling policies that are based on Amazon CloudWatch metrics to adjust the number of instances dynamically.
Correct answer: D
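
Why answer D fits: SageMaker endpoint auto scaling is configured through Application Auto Scaling. The sketch below is a minimal example, assuming hypothetical endpoint and variant names; it registers the variant as a scalable target and attaches a target-tracking policy on the built-in SageMakerVariantInvocationsPerInstance CloudWatch metric, so capacity grows at peak and shrinks when traffic is low.

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Placeholder endpoint and variant names.
    resource_id = "endpoint/my-endpoint/variant/AllTraffic"

    # Register the endpoint variant as a scalable target.
    autoscaling.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=1,  # scale in when idle to minimize cost
        MaxCapacity=4,  # headroom for peak usage
    )

    # Track a target number of invocations per instance per minute.
    autoscaling.put_scaling_policy(
        PolicyName="invocations-target-tracking",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 100.0,  # tunable per-instance load target
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
            "ScaleInCooldown": 300,
            "ScaleOutCooldown": 60,
        },
    )
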
Question 2
A company stores time-series data about user clicks in an Amazon S3 bucket. The raw data consists of millions of rows of user activity every day. ML engineers access the data to develop their ML models.
The ML engineers need to generate daily reports and analyze click trends over the past 3 days by using Amazon Athena. The company must retain the data for 30 days before archiving the data.
Which solution will provide the HIGHEST performance for data retrieval?
  A. Keep all the time-series data without partitioning in the S3 bucket. Manually move data that is older than 30 days to separate S3 buckets.
  B. Create AWS Lambda functions to copy the time-series data into separate S3 buckets. Apply S3 Lifecycle policies to archive data that is older than 30 days to S3 Glacier Flexible Retrieval.
  C. Organize the time-series data into partitions by date prefix in the S3 bucket. Apply S3 Lifecycle policies to archive partitions that are older than 30 days to S3 Glacier Flexible Retrieval.
  D. Put each day's time-series data into its own S3 bucket. Use S3 Lifecycle policies to archive S3 buckets that hold data that is older than 30 days to S3 Glacier Flexible Retrieval.
Correct answer: C
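
Why answer C fits: date-prefixed partitions (for example clicks/dt=2026-02-20/) let Athena prune everything outside the 3-day query window, and a lifecycle rule archives old partitions automatically. A minimal sketch, assuming a hypothetical bucket name and a clicks/ prefix:

    import boto3

    s3 = boto3.client("s3")

    # Transition date-partitioned objects to Glacier Flexible Retrieval
    # (storage class GLACIER) 30 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-clickstream-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-clicks-after-30-days",
                    "Filter": {"Prefix": "clicks/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )
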
Question 3
A credit card company has a fraud detection model in production on an Amazon SageMaker endpoint. The company develops a new version of the model. The company needs to assess the new model's performance by using live data, without affecting production end users.
Which solution will meet these requirements?
  A. Set up SageMaker Debugger and create a custom rule.
  B. Set up blue/green deployments with all-at-once traffic shifting.
  C. Set up blue/green deployments with canary traffic shifting.
  D. Set up shadow testing with a shadow variant of the new model.
Correct answer: D
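
Why answer D fits: shadow testing mirrors live traffic to the new model while only the production variant's responses reach end users. A minimal sketch of the endpoint configuration, with placeholder model names and instance settings:

    import boto3

    sm = boto3.client("sagemaker")

    sm.create_endpoint_config(
        EndpointConfigName="fraud-shadow-test-config",  # placeholder
        ProductionVariants=[
            {
                "VariantName": "production",
                "ModelName": "fraud-model-v1",  # current model (placeholder)
                "InstanceType": "ml.m5.xlarge",
                "InitialInstanceCount": 2,
                "InitialVariantWeight": 1.0,
            }
        ],
        ShadowProductionVariants=[
            {
                "VariantName": "shadow",
                "ModelName": "fraud-model-v2",  # candidate model (placeholder)
                "InstanceType": "ml.m5.xlarge",
                "InitialInstanceCount": 1,
                # Controls the share of production traffic replicated to the
                # shadow variant; its responses are logged, not returned.
                "InitialVariantWeight": 1.0,
            }
        ],
    )
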
Question 4
An ML engineer is using a training job to fine-tune a deep learning model in Amazon SageMaker Studio. The ML engineer previously used the same pre-trained model with a similar dataset. The ML engineer expects vanishing gradient, underutilized GPU, and overfitting problems.
The ML engineer needs to implement a solution to detect these issues and to react in predefined ways when the issues occur. The solution also must provide comprehensive real-time metrics during the training.
Which solution will meet these requirements with the LEAST operational overhead?
  A. Use TensorBoard to monitor the training job. Publish the findings to an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function to consume the findings and to initiate the predefined actions.
  B. Use Amazon CloudWatch default metrics to gain insights about the training job. Use the metrics to invoke an AWS Lambda function to initiate the predefined actions.
  C. Expand the metrics in Amazon CloudWatch to include the gradients in each training step. Use the metrics to invoke an AWS Lambda function to initiate the predefined actions.
  D. Use SageMaker Debugger built-in rules to monitor the training job. Configure the rules to initiate the predefined actions.
Correct answer: D
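
Why answer D fits: Debugger ships built-in rules that match the three expected issues and can trigger actions such as stopping the job. A minimal sketch using the SageMaker Python SDK; the training script, role, and versions are placeholders, and the GPU-utilization check is a profiler rule that reports findings rather than stopping the job:

    from sagemaker.debugger import ProfilerRule, Rule, rule_configs
    from sagemaker.pytorch import PyTorch

    # Predefined action: stop the training job when a rule fires.
    stop = rule_configs.ActionList(rule_configs.StopTraining())

    rules = [
        Rule.sagemaker(rule_configs.vanishing_gradient(), actions=stop),
        Rule.sagemaker(rule_configs.overfit(), actions=stop),
        ProfilerRule.sagemaker(rule_configs.LowGPUUtilization()),
    ]

    estimator = PyTorch(
        entry_point="train.py",  # placeholder training script
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        framework_version="2.1",
        py_version="py310",
        rules=rules,  # Debugger evaluates the rules in near-real time
    )
    estimator.fit({"training": "s3://my-bucket/train/"})  # placeholder input
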
Question 5
A company is planning to create several ML prediction models. The training data is stored in Amazon S3. The entire dataset is more than 5 TB in size and consists of CSV, JSON, Apache Parquet, and simple text files.
The data must be processed in several consecutive steps. The steps include complex manipulations that can take hours to finish running. Some of the processing involves natural language processing (NLP) transformations. The entire process must be automated.
Which solution will meet these requirements?
  A. Process data at each step by using Amazon SageMaker Data Wrangler. Automate the process by using Data Wrangler jobs.
  B. Use Amazon SageMaker notebooks for each data processing step. Automate the process by using Amazon EventBridge.
  C. Process data at each step by using AWS Lambda functions. Automate the process by using AWS Step Functions and Amazon EventBridge.
  D. Use Amazon SageMaker Pipelines to create a pipeline of data processing steps. Automate the pipeline by using Amazon EventBridge.
Correct answer: D
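
Why answer D fits: SageMaker Pipelines chains long-running processing steps into one automated workflow that an EventBridge rule can start on a schedule. A minimal two-step sketch; the scripts, role, and instance settings are placeholders:

    from sagemaker.sklearn.processing import SKLearnProcessor
    from sagemaker.workflow.pipeline import Pipeline
    from sagemaker.workflow.steps import ProcessingStep

    processor = SKLearnProcessor(
        framework_version="1.2-1",
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        instance_type="ml.m5.4xlarge",
        instance_count=1,
    )

    clean_step = ProcessingStep(
        name="CleanRawFiles",
        processor=processor,
        code="clean.py",  # placeholder script for CSV/JSON/Parquet/text cleanup
    )

    nlp_step = ProcessingStep(
        name="NlpTransforms",
        processor=processor,
        code="nlp_transform.py",  # placeholder NLP script
        depends_on=[clean_step],  # enforce consecutive execution
    )

    pipeline = Pipeline(name="DataPrepPipeline", steps=[clean_step, nlp_step])
    pipeline.upsert(role_arn="arn:aws:iam::123456789012:role/SageMakerRole")
    # An EventBridge schedule can then call StartPipelineExecution to automate runs.
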
Question 6
An ML engineer needs to use AWS CloudFormation to create an ML model that an Amazon SageMaker endpoint will host.
Which resource should the ML engineer declare in the CloudFormation template to meet this requirement?
  A. AWS::SageMaker::Model
  B. AWS::SageMaker::Endpoint
  C. AWS::SageMaker::NotebookInstance
  D. AWS::SageMaker::Pipeline
Correct answer: A
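
Why answer A fits: the AWS::SageMaker::Model resource declares the model itself; endpoint resources only reference it. A minimal template, written here as a Python dict and serialized to CloudFormation JSON, with placeholder image, artifact, and role values:

    import json

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "MyModel": {
                "Type": "AWS::SageMaker::Model",
                "Properties": {
                    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",
                    "PrimaryContainer": {
                        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
                        "ModelDataUrl": "s3://my-bucket/model.tar.gz",
                    },
                },
            }
        },
    }

    print(json.dumps(template, indent=2))

AWS::SageMaker::EndpointConfig and AWS::SageMaker::Endpoint resources would then reference this model to complete the hosting stack.
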
Question 7
A company wants to improve the sustainability of its ML operations.
Which actions will reduce the energy usage and computational resources that are associated with the company's training jobs? (Choose two.)
  A. Use Amazon SageMaker Debugger to stop training jobs when non-converging conditions are detected.
  B. Use Amazon SageMaker Ground Truth for data labeling.
  C. Deploy models by using AWS Lambda functions.
  D. Use AWS Trainium instances for training.
  E. Use PyTorch or TensorFlow with the distributed training option.
Correct answer: AD
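
Why answers A and D fit: stopping non-converging jobs avoids wasted compute, and Trainium is AWS's purpose-built, energy-efficient training silicon. A minimal sketch combining both, assuming a Neuron-compatible training script and container; the script, role, and versions are placeholders:

    from sagemaker.debugger import Rule, rule_configs
    from sagemaker.pytorch import PyTorch

    # Stop the job automatically when the loss stops improving.
    stop_rule = Rule.sagemaker(
        rule_configs.loss_not_decreasing(),
        actions=rule_configs.ActionList(rule_configs.StopTraining()),
    )

    estimator = PyTorch(
        entry_point="train.py",  # placeholder; must be Neuron-compatible for trn1
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        instance_count=1,
        instance_type="ml.trn1.2xlarge",  # AWS Trainium instance
        framework_version="1.13",
        py_version="py39",
        rules=[stop_rule],
    )
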
Question 8
A company has a large collection of chat recordings from customer interactions after a product release. An ML engineer needs to create an ML model to analyze the chat data. The ML engineer needs to determine the success of the product by reviewing customer sentiments about the product.
Which action should the ML engineer take to complete the evaluation in the LEAST amount of time?
  A. Use Amazon Rekognition to analyze sentiments of the chat conversations.
  B. Train a Naive Bayes classifier to analyze sentiments of the chat conversations.
  C. Use Amazon Comprehend to analyze sentiments of the chat conversations.
  D. Use random forests to classify sentiments of the chat conversations.
Correct answer: C
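
Why answer C fits: Comprehend provides sentiment analysis as a managed API call, so no model has to be trained. A minimal sketch with placeholder chat text:

    import boto3

    comprehend = boto3.client("comprehend")

    chats = [
        "I love the new release, setup took two minutes.",
        "The product keeps crashing and support never answers.",
    ]

    # Each result labels the text POSITIVE, NEGATIVE, NEUTRAL, or MIXED.
    response = comprehend.batch_detect_sentiment(TextList=chats, LanguageCode="en")
    for item in response["ResultList"]:
        print(item["Index"], item["Sentiment"], item["SentimentScore"])
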
Question 9
A company wants to predict the success of advertising campaigns by considering the color scheme of each advertisement. An ML engineer is preparing data for a neural network model. The dataset includes color information as categorical data.
Which technique for feature engineering should the ML engineer use for the model?
  A. Apply label encoding to the color categories. Automatically assign each color a unique integer.
  B. Implement padding to ensure that all color feature vectors have the same length.
  C. Perform dimensionality reduction on the color categories.
  D. One-hot encode the color categories to transform the color scheme feature into a binary matrix.
Correct answer: D
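
Why answer D fits: one-hot encoding gives each color its own binary column, avoiding the spurious ordinal relationship that label encoding would imply to a neural network. A minimal sketch with a toy dataset:

    import pandas as pd

    ads = pd.DataFrame({"color": ["red", "blue", "green", "blue"]})

    # Each category becomes a 0/1 column suitable as neural-network input.
    one_hot = pd.get_dummies(ads["color"], prefix="color", dtype=int)
    print(one_hot)
    #    color_blue  color_green  color_red
    # 0           0            0          1
    # 1           1            0          0
    # 2           0            1          0
    # 3           1            0          0
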
Question 10
A company uses Amazon Athena to query a dataset in Amazon S3. The dataset has a target variable that the company wants to predict.
The company needs to use the dataset in a solution to determine if a model can predict the target variable.
Which solution will provide this information with the LEAST development effort?
  A. Create a new model by using Amazon SageMaker Autopilot. Report the model's achieved performance.
  B. Implement custom scripts to perform data pre-processing, multiple linear regression, and performance evaluation. Run the scripts on Amazon EC2 instances.
  C. Configure Amazon Macie to analyze the dataset and to create a model. Report the model's achieved performance.
  D. Select a model from Amazon Bedrock. Tune the model with the data. Report the model's achieved performance.
Correct answer: A
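
Why answer A fits: Autopilot automates preprocessing, algorithm selection, and tuning, so it answers the feasibility question with almost no code. A minimal sketch; the job name, S3 paths, role, and target column are placeholders:

    import boto3

    sm = boto3.client("sagemaker")

    sm.create_auto_ml_job(
        AutoMLJobName="target-feasibility-check",  # placeholder
        InputDataConfig=[
            {
                "DataSource": {
                    "S3DataSource": {
                        "S3DataType": "S3Prefix",
                        "S3Uri": "s3://my-bucket/dataset/",  # placeholder
                    }
                },
                "TargetAttributeName": "target",  # placeholder column name
            }
        ],
        OutputDataConfig={"S3OutputPath": "s3://my-bucket/autopilot-output/"},
        RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    )
    # describe_auto_ml_job(...)["BestCandidate"] later reports the best model
    # and its objective metric for the feasibility report.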