[2020.10] Pass4itsure New Amazon DAS-C01 Exam Dumps, DAS-C01 Practice Test Questions

Pass4itsure has released the latest Amazon DAS-C01 exam dumps! You can get DAS-C01 VCE dumps and DAS-C01 PDF dumps from Pass4itsure (including the latest DAS-C01 exam questions), which will help ensure that you pass your DAS-C01 exam! The Pass4itsure DAS-C01 dumps in VCE and PDF have been updated: https://www.pass4itsure.com/das-c01.html

Latest Amazon AWS Certified Data Analytics - Specialty (DAS-C01) exam practice test

QUESTION 1
A media analytics company consumes a stream of social media posts. The posts are sent to an Amazon Kinesis data
stream partitioned on user_id. An AWS Lambda function retrieves the records and validates the content before loading
the posts into an Amazon Elasticsearch cluster. The validation process needs to receive the posts for a given user in the
order they were received. A data analyst has noticed that, during peak hours, the social media platform posts take more
than an hour to appear in the Elasticsearch cluster.
What should the data analyst do to reduce this latency?
A. Migrate the validation process to Amazon Kinesis Data Firehose.
B. Migrate the Lambda consumers from standard data stream iterators to an HTTP/2 stream consumer.
C. Increase the number of shards in the stream.
D. Configure multiple Lambda functions to process the stream.
Correct Answer: C
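
Option C maps to the Kinesis UpdateShardCount API. Below is a minimal boto3 sketch, assuming a hypothetical stream name, that doubles the open shard count; records are still routed by the user_id partition key, so per-user ordering is preserved.

```python
import boto3

kinesis = boto3.client("kinesis")
stream_name = "social-media-posts"  # hypothetical stream name

# Look up how many shards are currently open.
summary = kinesis.describe_stream_summary(StreamName=stream_name)
shard_count = summary["StreamDescriptionSummary"]["OpenShardCount"]

# UNIFORM_SCALING splits/merges shards evenly; records sharing a
# partition key (user_id) still land on a single shard, preserving order.
kinesis.update_shard_count(
    StreamName=stream_name,
    TargetShardCount=shard_count * 2,
    ScalingType="UNIFORM_SCALING",
)
```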


QUESTION 2
A media company has been performing analytics on log data generated by its applications. There has been a recent
increase in the number of concurrent analytics jobs running, and the overall performance of existing jobs is decreasing
as the number of new jobs is increasing. The partitioned data is stored in Amazon S3 One Zone-Infrequent Access (S3
One Zone-IA) and the analytic processing is performed on Amazon EMR clusters using the EMR File System (EMRFS)
with consistent view enabled. A data analyst has determined that it is taking longer for the EMR task nodes to list
objects in Amazon S3.
Which action would MOST likely increase the performance of accessing log data in Amazon S3?
A. Use a hash function to create a random string and add that to the beginning of the object prefixes when storing the
log data in Amazon S3.
B. Use a lifecycle policy to change the S3 storage class to S3 Standard for the log data.
C. Increase the read capacity units (RCUs) for the shared Amazon DynamoDB table.
D. Redeploy the EMR clusters that are running slowly to a different Availability Zone.
Correct Answer: D
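
For context, option C refers to the DynamoDB table that backs EMRFS consistent view. A minimal boto3 sketch of raising its provisioned read capacity; the table name is the EMRFS default, and the capacity figures are assumptions rather than recommendations.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# EmrFSMetadata is the default table EMRFS consistent view creates;
# adjust the name if the cluster was configured with a custom metadata table.
dynamodb.update_table(
    TableName="EmrFSMetadata",
    ProvisionedThroughput={
        "ReadCapacityUnits": 1000,  # assumed value; size to the observed throttling
        "WriteCapacityUnits": 500,  # must be supplied together with the RCUs
    },
)
```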

QUESTION 3
An insurance company has raw data in JSON format that is sent without a predefined schedule through an Amazon
Kinesis Data Firehose delivery stream to an Amazon S3 bucket. An AWS Glue crawler is scheduled to run every 8
hours to update the schema in the data catalog of the tables stored in the S3 bucket. Data analysts analyze the data
using Apache Spark SQL on Amazon EMR set up with AWS Glue Data Catalog as the metastore. Data analysts say
that, occasionally, the data they receive is stale. A data engineer needs to provide access to the most up-to-date data.
Which solution meets these requirements?
A. Create an external schema based on the AWS Glue Data Catalog on the existing Amazon Redshift cluster to query
new data in Amazon S3 with Amazon Redshift Spectrum.
B. Use Amazon CloudWatch Events with the rate (1 hour) expression to execute the AWS Glue crawler every hour.
C. Using the AWS CLI, modify the execution schedule of the AWS Glue crawler from 8 hours to 1 minute.
D. Run the AWS Glue crawler from an AWS Lambda function triggered by an s3:ObjectCreated:* event notification on
the S3 bucket.
Correct Answer: A
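
Option D wires an s3:ObjectCreated:* notification to a Lambda function that starts the crawler on demand. A minimal sketch of such a handler, assuming a hypothetical crawler name; StartCrawler raises CrawlerRunningException while a crawl is already in progress, so that case is swallowed.

```python
import boto3

glue = boto3.client("glue")
CRAWLER_NAME = "insurance-raw-json-crawler"  # hypothetical crawler name


def lambda_handler(event, context):
    # Invoked by an s3:ObjectCreated:* notification on the raw-data bucket.
    try:
        glue.start_crawler(Name=CRAWLER_NAME)
    except glue.exceptions.CrawlerRunningException:
        # A crawl is already running; the new object will be picked up
        # by that run or the next one, so this is safe to ignore.
        pass
    return {"status": "ok"}
```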

QUESTION 4
A company has 1 million scanned documents stored as image files in Amazon S3. The documents contain typewritten
application forms with information including the applicant’s first name, applicant’s last name, application date, application
type, and application text. The company has developed a machine-learning algorithm to extract the metadata values
from the scanned documents. The company wants to allow internal data analysts to analyze and find applications using
the applicant name, application date, or application text. The original images should also be downloadable. Cost control
is secondary to query performance.
Which solution organizes the images and metadata to drive insights while meeting the requirements?
A. For each image, use object tags to add the metadata. Use Amazon S3 Select to retrieve the files based on the
applicant’s name and application date.
B. Index the metadata and the Amazon S3 location of the image file in Amazon Elasticsearch Service. Allow the data
analysts to use Kibana to submit queries to the Elasticsearch cluster.
C. Store the metadata and the Amazon S3 location of the image file in an Amazon Redshift table. Allow the data
analysts to run ad-hoc queries on the table.
D. Store the metadata and the Amazon S3 location of the image files in an Apache Parquet file in Amazon S3, and
define a table in the AWS Glue Data Catalog. Allow data analysts to use Amazon Athena to submit custom queries.
Correct Answer: A
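
Option A attaches the extracted metadata to each image as S3 object tags. A minimal boto3 sketch with a hypothetical bucket, key, and tag values; note that S3 allows at most 10 tags per object.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical object and metadata values extracted by the ML algorithm.
s3.put_object_tagging(
    Bucket="scanned-application-forms",
    Key="forms/2020/10/application-000123.png",
    Tagging={
        "TagSet": [
            {"Key": "applicant_first_name", "Value": "Jane"},
            {"Key": "applicant_last_name", "Value": "Doe"},
            {"Key": "application_date", "Value": "2020-10-01"},
            {"Key": "application_type", "Value": "renewal"},
        ]
    },
)
```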

QUESTION 5
A media content company has a streaming playback application. The company wants to collect and analyze the data to
provide near-real-time feedback on playback issues. The company needs to consume this data and return results within
30 seconds according to the service-level agreement (SLA). The company needs the consumer to identify playback
issues, such as quality during a specified timeframe. The data will be emitted as JSON and may change schemas over
time.
Which solution will allow the company to collect data for processing while meeting these requirements?
A. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure an S3 event to trigger an AWS
Lambda function to process the data. The Lambda function will consume the data and process it to identify potential
playback issues. Persist the raw data to Amazon S3.
B. Send the data to Amazon Managed Streaming for Apache Kafka (Amazon MSK) and configure an Amazon Kinesis Data Analytics for Java
application as the consumer. The application will consume the data and process it to identify potential playback issues.
Persist the raw data to Amazon DynamoDB.
C. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure Amazon S3 to trigger an
event for AWS Lambda to process. The Lambda function will consume the data and process it to identify potential
playback issues. Persist the raw data to Amazon DynamoDB.
D. Send the data to Amazon Kinesis Data Streams and configure an Amazon Kinesis Data Analytics for Java application as
the consumer. The application will consume the data and process it to identify potential playback issues. Persist the raw
data to Amazon S3.
Correct Answer: B
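
The producer side of option B can be sketched with the kafka-python client; the MSK bootstrap broker, topic name, and event payload below are placeholders.

```python
import json
from kafka import KafkaProducer

# Placeholder MSK bootstrap broker and topic name.
producer = KafkaProducer(
    bootstrap_servers=["b-1.example.kafka.us-east-1.amazonaws.com:9092"],
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

event = {
    "session_id": "abc-123",
    "timestamp": "2020-10-01T12:00:00Z",
    "bitrate_kbps": 450,
    "buffering_ms": 1200,  # example playback-quality signal
}
producer.send("playback-events", value=event)
producer.flush()
```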

QUESTION 6
A financial company uses Apache Hive on Amazon EMR for ad-hoc queries. Users are complaining of sluggish
performance.
A data analyst notes the following:
Approximately 90% of the queries are submitted 1 hour after the market opens.
Hadoop Distributed File System (HDFS) utilization never exceeds 10%.
Which solution would help address the performance issues?
A. Create instance fleet configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch CapacityRemainingGB metric. Create an automatic scaling policy to
scale in the instance fleet based on the CloudWatch CapacityRemainingGB metric.
B. Create instance fleet configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch YARNMemoryAvailablePercentage metric. Create an automatic
scaling policy to scale in the instance fleet based on the CloudWatch YARNMemoryAvailablePercentage metric.
C. Create instance group configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch CapacityRemainingGB metric. Create an automatic scaling policy to
scale in the instance groups based on the CloudWatch CapacityRemainingGB metric.
D. Create instance group configurations for core and task nodes. Create an automatic scaling policy to scale out the
instance groups based on the Amazon CloudWatch YARNMemoryAvailablePercentage metric. Create an automatic
scaling policy to scale in the instance groups based on the CloudWatch YARNMemoryAvailablePercentage metric.
Correct Answer: C
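
All four options attach scaling rules with the EMR PutAutoScalingPolicy API, which operates on instance groups. A condensed boto3 sketch of a single scale-out rule; the cluster ID, instance group ID, metric, and threshold are placeholders, and a matching scale-in rule would be added alongside it.

```python
import boto3

emr = boto3.client("emr")

emr.put_auto_scaling_policy(
    ClusterId="j-XXXXXXXXXXXXX",        # placeholder cluster ID
    InstanceGroupId="ig-XXXXXXXXXXXX",  # placeholder task instance group
    AutoScalingPolicy={
        "Constraints": {"MinCapacity": 2, "MaxCapacity": 20},
        "Rules": [
            {
                "Name": "scale-out-on-low-yarn-memory",
                "Action": {
                    "SimpleScalingPolicyConfiguration": {
                        "AdjustmentType": "CHANGE_IN_CAPACITY",
                        "ScalingAdjustment": 2,
                        "CoolDown": 300,
                    }
                },
                "Trigger": {
                    "CloudWatchAlarmDefinition": {
                        # Swap in CapacityRemainingGB or
                        # YARNMemoryAvailablePercentage per the chosen option.
                        "MetricName": "YARNMemoryAvailablePercentage",
                        "Namespace": "AWS/ElasticMapReduce",
                        "ComparisonOperator": "LESS_THAN",
                        "Threshold": 15.0,
                        "Period": 300,
                        "Statistic": "AVERAGE",
                        "Unit": "PERCENT",
                    }
                },
            }
        ],
    },
)
```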

QUESTION 7
A large company has a central data lake to run analytics across different departments. Each department uses a
separate AWS account and stores its data in an Amazon S3 bucket in that account. Each AWS account uses the AWS
Glue Data Catalog as its data catalog. There are different data lake access requirements based on roles. Associate
analysts should only have read access to their departmental data. Senior data analysts can have access to multiple
departments including theirs, but for a subset of columns only.
Which solution achieves these required access patterns to minimize costs and administrative tasks?
A. Consolidate all AWS accounts into one account. Create different S3 buckets for each department and move all the
data from every account to the central data lake account. Migrate the individual data catalogs into a central data catalog
and apply fine-grained permissions to give to each user the required access to tables and databases in AWS Glue and
Amazon S3.
B. Keep the account structure and the individual AWS Glue catalogs on each account. Add a central data lake account
and use AWS Glue to catalog data from various accounts. Configure cross-account access for AWS Glue crawlers to
scan the data in each departmental S3 bucket to identify the schema and populate the catalog. Add the senior data
analysts into the central account and apply highly detailed access controls in the Data Catalog and Amazon S3.
C. Set up an individual AWS account for the central data lake. Use AWS Lake Formation to catalog the cross-account
locations. On each individual S3 bucket, modify the bucket policy to grant S3 permissions to the Lake Formation service-linked role. Use Lake Formation permissions to add fine-grained access controls to allow senior analysts to view specific
tables and columns.
D. Set up an individual AWS account for the central data lake and configure a central S3 bucket. Use an AWS Lake
Formation blueprint to move the data from the various buckets into the central S3 bucket. On each individual bucket,
modify the bucket policy to grant S3 permissions to the Lake Formation service-linked role. Use Lake Formation
permissions to add fine-grained access controls for both associate and senior analysts to view specific tables and
columns.
Correct Answer: B
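
Options C and D lean on Lake Formation fine-grained grants for the column-level restriction. A minimal boto3 sketch granting a senior-analyst role SELECT on a subset of columns; the account ID, role name, database, table, and column names are placeholders.

```python
import boto3

lakeformation = boto3.client("lakeformation")

lakeformation.grant_permissions(
    Principal={
        # Placeholder IAM role assumed by senior data analysts.
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/SeniorDataAnalyst"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_department",  # placeholder database
            "Name": "transactions",              # placeholder table
            "ColumnNames": ["order_id", "order_date", "region"],  # allowed subset
        }
    },
    Permissions=["SELECT"],
)
```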

QUESTION 8
A data analyst is using Amazon QuickSight for data visualization across multiple datasets generated by applications.
Each application stores files within a separate Amazon S3 bucket. AWS Glue Data Catalog is used as a central catalog
across all application data in Amazon S3. A new application stores its data within a separate S3 bucket. After updating
the catalog to include the new application data source, the data analyst created a new Amazon QuickSight data source
from an Amazon Athena table, but the import into SPICE failed.
How should the data analyst resolve the issue?
A. Edit the permissions for the AWS Glue Data Catalog from within the Amazon QuickSight console.
B. Edit the permissions for the new S3 bucket from within the Amazon QuickSight console.
C. Edit the permissions for the AWS Glue Data Catalog from within the AWS Glue console.
D. Edit the permissions for the new S3 bucket from within the S3 console.
Correct Answer: B
Reference: https://aws.amazon.com/blogs/big-data/harmonize-query-and-visualize-data-from-various-providers-using-aws-glue-amazon-athena-and-amazon-quicksight/

QUESTION 9
A company that produces network devices has millions of users. Data is collected from the devices on an hourly basis
and stored in an Amazon S3 data lake.
The company runs analyses on the last 24 hours of data flow logs for abnormality detection and to troubleshoot and
resolve user issues. The company also analyzes historical logs dating back 2 years to discover patterns and look for improvement opportunities.
The data flow logs contain many metrics, such as date, timestamp, source IP, and target IP. There are about 10 billion
events every day.
How should this data be stored for optimal performance?
A. In Apache ORC partitioned by date and sorted by source IP
B. In compressed .csv partitioned by date and sorted by source IP
C. In Apache Parquet partitioned by source IP and sorted by date
D. In compressed nested JSON partitioned by source IP and sorted by date
Correct Answer: D
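
For reference, the layout described in option A (Apache ORC partitioned by date and sorted by source IP) could be produced with PySpark roughly as below; the input path, output path, and column names are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("flow-log-layout").getOrCreate()

# Assumed input location and column names (dt, source_ip, target_ip, ...).
logs = spark.read.json("s3://flow-logs-raw/2020/10/")

(
    logs.repartition("dt")                  # group rows for each date partition
        .sortWithinPartitions("source_ip")  # sort by source IP inside each file
        .write.mode("append")
        .partitionBy("dt")                  # Hive-style date partitions
        .orc("s3://flow-logs-curated/")
)
```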

QUESTION 10
A technology company is creating a dashboard that will visualize and analyze time-sensitive data. The data will come in
through Amazon Kinesis Data Firehose with the buffer interval set to 60 seconds. The dashboard must support near-real-time data.
Which visualization solution will meet these requirements?
A. Select Amazon Elasticsearch Service (Amazon ES) as the endpoint for Kinesis Data Firehose. Set up a Kibana
dashboard using the data in Amazon ES with the desired analyses and visualizations.
B. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Read data into an Amazon SageMaker Jupyter
notebook and carry out the desired analyses and visualizations.
C. Select Amazon Redshift as the endpoint for Kinesis Data Firehose. Connect Amazon QuickSight with SPICE to
Amazon Redshift to create the desired analyses and visualizations.
D. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Use AWS Glue to catalog the data and Amazon
Athena to query it. Connect Amazon QuickSight with SPICE to Athena to create the desired analyses and
visualizations.
Correct Answer: A
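
Both the Amazon ES destination and the 60-second buffer interval in option A are properties of the Firehose delivery stream itself. A condensed boto3 sketch; the ARNs, domain, index name, and backup bucket are placeholders, and the S3 backup block appears only in minimal form because the API requires it.

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="playback-metrics",  # placeholder name
    DeliveryStreamType="DirectPut",
    ElasticsearchDestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-es-delivery",    # placeholder
        "DomainARN": "arn:aws:es:us-east-1:111122223333:domain/dashboards",  # placeholder
        "IndexName": "playback-metrics",
        "IndexRotationPeriod": "OneDay",
        # Deliver to the ES domain every 60 seconds or 5 MB, whichever comes first.
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
        "S3BackupMode": "FailedDocumentsOnly",
        "S3Configuration": {  # required backup location for failed records
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-es-delivery",
            "BucketARN": "arn:aws:s3:::playback-metrics-backup",
        },
    },
)
```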

QUESTION 11
A retail company is building its data warehouse solution using Amazon Redshift. As a part of that effort, the company is
loading hundreds of files into the fact table created in its Amazon Redshift cluster. The company wants the solution to
achieve the highest throughput and optimally use cluster resources when loading data into the company's fact table.
How should the company meet these requirements?
A. Use multiple COPY commands to load the data into the Amazon Redshift cluster.
B. Use S3DistCp to load multiple files into the Hadoop Distributed File System (HDFS) and use an HDFS connector to
ingest the data into the Amazon Redshift cluster.
C. Use LOAD commands equal to the number of Amazon Redshift cluster nodes and load the data in parallel into each
node.
D. Use a single COPY command to load the data into the Amazon Redshift cluster.
Correct Answer: B
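
For reference, the load referred to in options A and D is issued as a Redshift COPY statement. A minimal sketch that runs one COPY over an S3 prefix through the Redshift Data API; the cluster, database, user, S3 prefix, and IAM role are placeholders, and the file format is assumed to be gzipped CSV.

```python
import boto3

redshift_data = boto3.client("redshift-data")

copy_sql = """
    COPY sales_fact
    FROM 's3://retail-dw-staging/sales_fact/'      -- prefix covering all the files
    IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-copy'
    FORMAT AS CSV
    GZIP;
"""

redshift_data.execute_statement(
    ClusterIdentifier="retail-dw-cluster",  # placeholder cluster
    Database="analytics",                   # placeholder database
    DbUser="etl_user",                      # placeholder database user
    Sql=copy_sql,
)
```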

QUESTION 12
A company's marketing team has asked for help in identifying a high-performing long-term storage service for its
data based on the following requirements:
The data size is approximately 32 TB uncompressed.
There is a low volume of single-row inserts each day.
There is a high volume of aggregation queries each day.
Multiple complex joins are performed.
The queries typically involve a small subset of the columns in a table.
Which storage service will provide the MOST performant solution?
A. Amazon Aurora MySQL
B. Amazon Redshift
C. Amazon Neptune
D. Amazon Elasticsearch
Correct Answer: B

QUESTION 13
A mobile gaming company wants to capture data from its gaming app and make the data available for analysis
immediately. The data record size will be approximately 20 KB. The company is concerned about achieving optimal
throughput from each device. Additionally, the company wants to develop a data stream processing application with
dedicated throughput for each consumer.
Which solution would achieve this goal?
A. Have the app call the PutRecords API to send data to Amazon Kinesis Data Streams. Use the enhanced fan-out
feature while consuming the data.
B. Have the app call the PutRecordBatch API to send data to Amazon Kinesis Data Firehose. Submit a support case to
enable dedicated throughput on the account.
C. Have the app use Amazon Kinesis Producer Library (KPL) to send data to Kinesis Data Firehose. Use the enhanced
fan-out feature while consuming the data.
D. Have the app call the PutRecords API to send data to Amazon Kinesis Data Streams. Host the stream-processing
application on Amazon EC2 with Auto Scaling.
Correct Answer: D
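
Option A pairs the PutRecords API on the device side with an enhanced fan-out consumer, which gets dedicated read throughput per shard. A minimal boto3 sketch of both halves; the stream name, consumer name, partition key, and payload are placeholders.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "game-telemetry"  # placeholder stream name

# Producer side: batch up to 500 records per PutRecords call.
kinesis.put_records(
    StreamName=STREAM_NAME,
    Records=[
        {
            "PartitionKey": "device-42",  # placeholder device ID
            "Data": json.dumps({"event": "match_start"}).encode("utf-8"),
        }
    ],
)

# Consumer side: register an enhanced fan-out consumer, which receives its
# own 2 MB/s of read throughput per shard over HTTP/2 (SubscribeToShard).
stream_arn = kinesis.describe_stream_summary(StreamName=STREAM_NAME)[
    "StreamDescriptionSummary"
]["StreamARN"]
kinesis.register_stream_consumer(
    StreamARN=stream_arn,
    ConsumerName="playback-analytics-app",  # placeholder consumer name
)
```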

You may be interested in other Amazon exam practice tests. Click to view!

Amazon DAS-C01 dumps PDF free download

[100% free] Amazon DAS-C01 dumps pdf https://drive.google.com/file/d/1W74vC9fIOz324qmxpGm-c5ZnPEoq1_B0/view?usp=sharing

Pass4itsure discount code 2020

P.S.

This is a free Amazon DAS-C01 study guide for the AWS Certified Data Analytics - Specialty certification exam! It includes Amazon DAS-C01 PDF dumps, the DAS-C01 exam video, a DAS-C01 exam practice test & more free and paid resources! For the full set of Q&As, please visit https://www.pass4itsure.com/das-c01.html. Study hard and practice a lot. This will help you prepare for the DAS-C01 exam. Good luck!