[Newest Version] Free Geekcert Microsoft DP-100 PDF and Exam Questions Download 100% Pass Exam

The role-based DP-100 Designing and Implementing a Data Science Solution on Azure certification exam is a genuinely challenging task if you want to win a place in the IT industry, but you should not feel frustrated by the difficulties it presents. Geekcert gives you the most comprehensive version of the DP-100 VCE dumps, updated Jan 14, 2022. Get a complete hold on the DP-100 exam syllabus through Geekcert and build up your skills. What's more, the dumps are the latest available and will be a great help in your exam.

Pass the DP-100 exam on your first try with Geekcert! Geekcert provides 100% accurate exam brain dumps with the latest updates; download the free DP-100 demo to check them first. Geekcert offers the best DP-100 training and certification computer-based-training online resources, with the latest and most accurate real DP-100 exam questions and answers, so you can get your DP-100 certification easily.

Geekcert has its own expert team, which selects and publishes the latest DP-100 preparation materials from the Microsoft Official Exam-Center: https://www.geekcert.com/dp-100.html

The following are the DP-100 free dumps. Go through them to check the validity and accuracy of our DP-100 dumps. The free dumps are questions taken from the latest full DP-100 dumps; check these free questions to get a better understanding of the DP-100 exam.

Question 1:

You are developing a hands-on workshop to introduce Docker for Windows to attendees.

You need to ensure that workshop attendees can install Docker on their devices.

Which two prerequisite components should attendees install on the devices? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Microsoft Hardware-Assisted Virtualization Detection Tool

B. Kitematic

C. BIOS-enabled virtualization

D. VirtualBox

E. Windows 10 64-bit Professional

Correct Answer: CE

C: Make sure your Windows system supports Hardware Virtualization Technology and that virtualization is enabled; hardware virtualization support must be turned on in the BIOS settings.

E: To run Docker, your machine must have a 64-bit operating system running Windows 7 or higher.

References:

https://docs.docker.com/toolbox/toolbox_install_windows/

https://blogs.technet.microsoft.com/canitpro/2015/09/08/step-by-step-enabling-hyper-v-for-use-on-windows-10/


Question 2:

You are implementing a machine learning model to predict stock prices.

The model uses a PostgreSQL database and requires GPU processing.

You need to create a virtual machine that is pre-configured with the required tools.

What should you do?

A. Create a Data Science Virtual Machine (DSVM) Windows edition.

B. Create a Geo AI Data Science Virtual Machine (Geo-DSVM) Windows edition.

C. Create a Deep Learning Virtual Machine (DLVM) Linux edition.

D. Create a Deep Learning Virtual Machine (DLVM) Windows edition.

Correct Answer: A

In the DSVM, your training models can use deep learning algorithms on hardware that's based on graphics processing units (GPUs).

PostgreSQL is available for the following operating systems: Linux (all recent distributions), macOS (OS X) version 10.6 and newer (64-bit installers available), and Windows (64-bit installers available; tested on the latest versions and back to Windows 2012 R2).

Incorrect Answers:

B: The Azure Geo AI Data Science VM (Geo-DSVM) delivers geospatial analytics capabilities from Microsoft's Data Science VM. Specifically, this VM extends the AI and data science toolkits in the Data Science VM by adding ESRI's market-leading ArcGIS Pro Geographic Information System.

C, D: The DLVM is a template on top of the DSVM image. The packages, GPU drivers, and so on are all present in the DSVM image; the DLVM mainly exists for convenience during creation, because a DLVM can only be created on GPU VM instances on Azure.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview


Question 3:

You are developing deep learning models to analyze semi-structured, unstructured, and structured data types. You have the following data available for model building:

1. Video recordings of sporting events

2. Transcripts of radio commentary about events

3. Logs from related social media feeds captured during sporting events

You need to select an environment for creating the model. Which environment should you use?

A. Azure Cognitive Services

B. Azure Data Lake Analytics

C. Azure HDInsight with Spark MLib

D. Azure Machine Learning Studio

Correct Answer: A

Azure Cognitive Services expand on Microsoft's evolving portfolio of machine learning APIs and enable developers to easily add cognitive features (such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding) into their applications. The goal of Azure Cognitive Services is to help developers create applications that can see, hear, speak, understand, and even begin to reason. The catalog of services within Azure Cognitive Services can be categorized into five main pillars: Vision, Speech, Language, Search, and Knowledge.

References: https://docs.microsoft.com/en-us/azure/cognitive-services/welcome


Question 4:

You must store data in Azure Blob Storage to support Azure Machine Learning.

You need to transfer the data into Azure Blob Storage.

What are three possible ways to achieve the goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Bulk Insert SQL Query

B. AzCopy

C. Python script

D. Azure Storage Explorer

E. Bulk Copy Program (BCP)

Correct Answer: BCD

You can move data to and from Azure Blob storage using different technologies:

1. Azure Storage Explorer

2. AzCopy

3. Python (see the sketch below)

4. SSIS
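
As one illustration of the Python option, here is a minimal sketch using the azure-storage-blob (v12) SDK; the connection string, container name, and file name are assumptions for the example, not part of the exam material:

from azure.storage.blob import BlobServiceClient

# connect with an assumed storage account connection string
service = BlobServiceClient.from_connection_string("<your-connection-string>")

# get a client for an assumed container and target blob name
blob = service.get_blob_client(container="training-data", blob="sales.csv")

# upload a local file to Azure Blob Storage
with open("sales.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)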

References: https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-azure-blob


Question 5:

You are moving a large dataset from Azure Machine Learning Studio to a Weka environment.

You need to format the data for the Weka environment.

Which module should you use?

A. Convert to CSV

B. Convert to Dataset

C. Convert to ARFF

D. Convert to SVMLight

Correct Answer: C

Use the Convert to ARFF module in Azure Machine Learning Studio to convert datasets and results in Azure Machine Learning to the attribute-relation file format (ARFF) used by the Weka toolset.

The ARFF data specification for Weka supports multiple machine learning tasks, including data preprocessing, classification, and feature selection. In this format, data is organized by entities and their attributes, and is contained in a single text file.
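
For orientation, a tiny hypothetical ARFF file (illustrative only, not taken from the exam or the docs) looks like this:

@RELATION example

@ATTRIBUTE distance NUMERIC
@ATTRIBUTE fare NUMERIC

@DATA
2.5,12.0
4.1,18.5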

References: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/convert-to-arff


Question 6:

You plan to use a Deep Learning Virtual Machine (DLVM) to train deep learning models using Compute Unified Device Architecture (CUDA) computations.

You need to configure the DLVM to support CUDA. What should you implement?

A. Solid State Drives (SSD)

B. Central Processing Unit (CPU) speed increase by using overclocking

C. Graphic Processing Unit (GPU)

D. High Random Access Memory (RAM) configuration

E. Intel Software Guard Extensions (Intel SGX) technology

Correct Answer: C

A Deep Learning Virtual Machine is a pre-configured environment for deep learning using GPU instances.

References: https://azuremarketplace.microsoft.com/en-au/marketplace/apps/microsoft-ads.dsvm-deep-learning


Question 7:

You plan to use a Data Science Virtual Machine (DSVM) with the open source deep learning frameworks Caffe2 and PyTorch.

You need to select a pre-configured DSVM to support the frameworks.

What should you create?

A. Data Science Virtual Machine for Windows 2012

B. Data Science Virtual Machine for Linux (CentOS)

C. Geo AI Data Science Virtual Machine with ArcGIS

D. Data Science Virtual Machine for Windows 2016

E. Data Science Virtual Machine for Linux (Ubuntu)

Correct Answer: E

Caffe2 and PyTorch are supported by the Data Science Virtual Machine for Linux. Microsoft offers Linux editions of the DSVM on Ubuntu 16.04 LTS and CentOS 7.4. Only the DSVM on Ubuntu is preconfigured for Caffe2 and PyTorch.

Incorrect Answers:

D: Caffe2 and PyTorch are only supported in the Data Science Virtual Machine for Linux.

References: https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview


Question 8:

You are developing a data science workspace that uses an Azure Machine Learning service.

You need to select a compute target to deploy the workspace.

What should you use?

A. Azure Data Lake Analytics

B. Azure Databricks

C. Azure Container Service

D. Apache Spark for HDInsight

Correct Answer: C

Azure Container Instances can be used as a compute target for testing or development. Use it for low-scale CPU-based workloads that require less than 48 GB of RAM.
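
For context, a minimal sketch of such a dev/test deployment with the Azure ML SDK (ws, model, and inference_config are assumed to exist already):

from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

# request a small CPU-only container instance for dev/test inferencing
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# deploy the registered model as a web service on ACI
service = Model.deploy(ws, 'my-service', [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)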

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where


Question 9:

You are solving a classification task.

The dataset is imbalanced.

You need to select an Azure Machine Learning Studio module to improve the classification accuracy.

Which module should you use?

A. Permutation Feature Importance

B. Filter Based Feature Selection

C. Fisher Linear Discriminant Analysis

D. Synthetic Minority Oversampling Technique (SMOTE)

Correct Answer: D

Use the SMOTE module in Azure Machine Learning Studio (classic) to increase the number of underrepresented cases in a dataset used for machine learning. SMOTE is a better way of increasing the number of rare cases than simply duplicating existing cases.

You connect the SMOTE module to a dataset that is imbalanced. There are many reasons why a dataset might be imbalanced: the category you are targeting might be very rare in the population, or the data might simply be difficult to collect. Typically, you use SMOTE when the class you want to analyze is underrepresented.
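
Outside of Studio, the same technique is available in Python via the third-party imbalanced-learn package; a minimal sketch on synthetic data (all names and numbers here are illustrative):

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# build a deliberately imbalanced toy dataset (roughly 10% minority class)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)

# oversample the minority class with synthetic examples
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)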

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/smote


Question 10:

You are analyzing a dataset containing historical data from a local taxi company. You are developing a regression model.

You must predict the fare of a taxi trip.

You need to select performance metrics to correctly evaluate the regression model.

Which two metrics can you use? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. a Root Mean Square Error value that is low

B. an R-Squared value close to 0

C. an F1 score that is low

D. an R-Squared value close to 1

E. an F1 score that is high

F. a Root Mean Square Error value that is high

Correct Answer: AD

RMSE and R2 are both metrics for regression models.

A: Root mean squared error (RMSE) creates a single value that summarizes the error in the model. By squaring the difference, the metric disregards the difference between over-prediction and under-prediction.

D: The coefficient of determination, often referred to as R2, represents the predictive power of the model as a value between 0 and 1. Zero means the model is random (explains nothing); 1 means there is a perfect fit. However, caution should be used in interpreting R2 values, as low values can be entirely normal and high values can be suspect.
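
To make the two metrics concrete, a small sketch computing them with scikit-learn on made-up taxi fares (the numbers are illustrative only):

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# hypothetical true and predicted fares
y_true = np.array([12.0, 8.5, 20.0, 15.0])
y_pred = np.array([11.0, 9.0, 19.5, 16.0])

# RMSE: lower is better
rmse = np.sqrt(mean_squared_error(y_true, y_pred))

# R2: closer to 1 is better
r2 = r2_score(y_true, y_pred)

print(rmse, r2)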

Incorrect Answers:

C, E: F-score is used for classification models, not for regression models.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/evaluate-model


Question 11:

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using Azure Machine Learning to run an experiment that trains a classification model.

You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:

You plan to use this configuration to run a script that trains a random forest model and then tests it with validation data. The label values for the validation data are stored in a variable named y_test, and the predicted probabilities from the model are stored in a variable named y_predicted.

You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric.

Solution: Run the following code:

Does the solution meet the goal?

A. Yes

B. No

Correct Answer: A

Python printing/logging example: logging.info(message)

Destination: Driver logs, Azure Machine Learning designer
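
The code listing for the solution is not reproduced here, but a minimal sketch of the kind of logging that lets Hyperdrive optimize on AUC (assuming the HyperDriveConfig declares 'AUC' as its primary metric name, and using the y_test and y_predicted variables from the question) would be:

from azureml.core import Run
from sklearn.metrics import roc_auc_score

# get a handle to the current experiment run inside the training script
run = Run.get_context()

# compute AUC from the validation labels and predicted probabilities
auc = roc_auc_score(y_test, y_predicted)

# log the metric under the name the HyperDriveConfig expects
run.log("AUC", auc)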

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines


Question 12:

A set of CSV files contains sales records. All the CSV files have the same data schema.

Each CSV file contains the sales record for a particular month and has the filename sales.csv. Each file is stored in a folder that indicates the month and year when the data was recorded. The folders are in an Azure blob container for which a datastore has been defined in an Azure Machine Learning workspace. The folders are organized in a parent folder named sales to create the following hierarchical structure:

At the end of each month, a new folder with that month's sales file is added to the sales folder.

You plan to use the sales data to train a machine learning model based on the following requirements:

1. You must define a dataset that loads all of the sales data to date into a structure that can be easily converted to a dataframe.

2. You must be able to create experiments that use only data that was created before a specific previous month, ignoring any data that was added after that month.

3. You must register the minimum number of datasets possible.

You need to register the sales data as a dataset in Azure Machine Learning service workspace.

What should you do?

A. Create a tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file every month. Register the dataset with the name sales_dataset each month, replacing the existing dataset and specifying a tag named month indicating the month and year it was registered. Use this dataset for all experiments.

B. Create a tabular dataset that references the datastore and specifies the path 'sales/*/sales.csv', register the dataset with the name sales_dataset and a tag named month indicating the month and year it was registered, and use this dataset for all experiments.

C. Create a new tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file every month. Register the dataset with the name sales_dataset_MM-YYYY each month with appropriate MM and YYYY values for the month and year. Use the appropriate month-specific dataset for experiments.

D. Create a tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file. Register the dataset with the name sales_dataset each month as a new version and with a tag named month indicating the month and year it was registered. Use this dataset for all experiments, identifying the version to be used based on the month tag as necessary.

Correct Answer: B

Specify the path.

Example:

The following code gets the existing workspace and the desired datastore by name, and then passes the datastore and file locations to the path parameter to create a new TabularDataset, weather_ds.

from azureml.core import Workspace, Datastore, Dataset

datastore_name = 'your datastore name'

# get existing workspace
workspace = Workspace.from_config()

# retrieve an existing datastore in the workspace by name
datastore = Datastore.get(workspace, datastore_name)

# create a TabularDataset from 3 file paths in datastore
datastore_paths = [(datastore, 'weather/2018/11.csv'),
                   (datastore, 'weather/2018/12.csv'),
                   (datastore, 'weather/2019/*.csv')]

weather_ds = Dataset.Tabular.from_delimited_files(path=datastore_paths)
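
Applied to this question's scenario, a hedged sketch of option B (reusing the datastore and workspace objects from above; the month tag value is a placeholder) might look like:

# reference every monthly sales file with a single wildcard path
sales_ds = Dataset.Tabular.from_delimited_files(path=(datastore, 'sales/*/sales.csv'))

# register a single dataset covering all months to date
sales_ds = sales_ds.register(workspace=workspace,
                             name='sales_dataset',
                             tags={'month': 'MM-YYYY'})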


Question 13:

You use the following code to run a script as an experiment in Azure Machine Learning:

You must identify the output files that are generated by the experiment run.

You need to add code to retrieve the output file names.

Which code segment should you add to the script?

A. files = run.get_properties()

B. files= run.get_file_names()

C. files = run.get_details_with_logs()

D. files = run.get_metrics()

E. files = run.get_details()

Correct Answer: B

You can list all of the files that are associated with this run record by calling run.get_file_names().
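
A minimal sketch of retrieving those names after the run finishes (the workspace and experiment names are assumptions; the question's own submission code is not shown):

from azureml.core import Workspace, Experiment

# reconstruct a handle to the most recent run of an assumed experiment
ws = Workspace.from_config()
experiment = Experiment(ws, 'my-experiment')
run = next(experiment.get_runs())

# list the output files generated by the experiment run
files = run.get_file_names()
print(files)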

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-track-experiments


Question 14:

You create a deep learning model for image recognition on Azure Machine Learning service using GPU-based training.

You must deploy the model to a context that allows for real-time GPU-based inferencing.

You need to configure compute resources for model inferencing.

Which compute type should you use?

A. Azure Container Instance

B. Azure Kubernetes Service

C. Field Programmable Gate Array

D. Machine Learning Compute

Correct Answer: B

You can use Azure Machine Learning to deploy a GPU-enabled model as a web service. Deploying a model on Azure Kubernetes Service (AKS) is one option. The AKS cluster provides a GPU resource that is used by the model for inference.

Inference, or model scoring, is the phase where the deployed model is used to make predictions. Using GPUs instead of CPUs offers performance advantages on highly parallelizable computation.
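
For illustration, a hedged sketch of requesting GPU capacity when deploying with the Azure ML SDK to an existing AKS cluster (ws, model, inference_config, and aks_target are assumed to be defined elsewhere):

from azureml.core.model import Model
from azureml.core.webservice import AksWebservice

# request one GPU core per replica for real-time scoring
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1,
                                                       memory_gb=4,
                                                       gpu_cores=1)

# deploy the registered model as a GPU-backed web service
service = Model.deploy(ws, 'gpu-service', [model], inference_config,
                       deployment_config, deployment_target=aks_target)
service.wait_for_deployment(show_output=True)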

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-inferencing-gpus


Question 15:

You create a batch inference pipeline by using the Azure ML SDK. You run the pipeline by using the following code:

from azureml.pipeline.core import Pipeline

from azureml.core.experiment import Experiment

pipeline = Pipeline(workspace=ws, steps=[parallelrun_step])
pipeline_run = Experiment(ws, 'batch_pipeline').submit(pipeline)

You need to monitor the progress of the pipeline execution.

What are two possible ways to achieve this goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Option A

B. Option B

C. Option C

D. Option D

E. Option E

Correct Answer: DE

A batch inference job can take a long time to finish. This example monitors progress by using a Jupyter widget. You can also manage the job's progress by using:

1. Azure Machine Learning Studio.

2. Console output from the PipelineRun object.

from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
pipeline_run.wait_for_completion(show_output=True)

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-run-step#monitor-the-parallel-run-job

