Designing and Implementing a Data Science Solution on Azure (DP-100)


The DP-100 Azure Data Scientist training gives you the data science and machine learning knowledge you need to implement and run machine learning workloads on Microsoft Azure, in particular using the Azure Machine Learning service. This includes planning and creating a suitable working environment for data science workloads on Azure, running data experiments, and training models.


Classroom training

It is also possible to attend the training virtually: the same learning experience as in the classroom, seeing and hearing the trainer and your fellow participants, but from home. The schedule and costs remain the same.

A classroom course from Ictivity Training guarantees that you are trained to a high standard in a modern, comfortable learning environment by trainers who are experts in their field. You attend the training on consecutive days at one of our locations. During the classroom training you have access to modern equipment in a quiet learning environment. The training consists partly of theory, but you also get plenty of exercises that simulate day-to-day practice.

Ictivity Training has locations in the Netherlands in Utrecht (Vianen) and Eindhoven, and it is also possible to request a location of your choice. If you prefer not to travel, you can attend the training remotely via Virtual Classroom.

 

Customized and in-company training

This learning format starts with an intake meeting between a Learning Consultant from Ictivity Training and the client, in which we assess the starting point, the objectives, the practical context, and the expectations of the participant(s). Based on this information we create a tailored training program.

Benefits:

  • The content of the training is fully tailored to your specific knowledge needs.
  • The duration of the training is adjusted to your specific needs.
  • You decide the location of the training (in-company or at one of our locations).
  • The training schedule is aligned with your project planning.

This training is intended for data scientists in general, and in particular for data scientists who are responsible for training and deploying machine learning models.

Module 1: Introduction to Azure Machine Learning

In this module, you will learn how to provision an Azure Machine Learning workspace and use it to manage machine learning assets such as data, compute, model training code, logged metrics, and trained models. You will learn how to use the web-based Azure Machine Learning studio interface as well as the Azure Machine Learning SDK and developer tools like Visual Studio Code and Jupyter Notebooks to work with the assets in your workspace.
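
As a taste of the code covered in this module, the sketch below connects to a workspace and lists its compute targets, assuming the Azure ML Python SDK v1 (azureml-core); the workspace name, subscription ID, resource group, and region are placeholder values, not part of the course materials.

  from azureml.core import Workspace

  # Connect to an existing workspace via a downloaded config.json file,
  # or create a new one (names, IDs, and region below are placeholders).
  try:
      ws = Workspace.from_config()
  except Exception:
      ws = Workspace.create(name="my-aml-workspace",
                            subscription_id="<subscription-id>",
                            resource_group="my-resource-group",
                            location="westeurope")

  # List the compute targets registered in the workspace.
  for name, target in ws.compute_targets.items():
      print(name, target.type)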

  • Getting Started with Azure Machine Learning
  • Azure Machine Learning Tools

Lab : Creating an Azure Machine Learning Workspace
Lab : Working with Azure Machine Learning Tools

After completing this module, you will be able to

  • Provision an Azure Machine Learning workspace
  • Use tools and code to work with Azure Machine Learning

Module 2: No-Code Machine Learning with Designer

This module introduces the Designer tool, a drag and drop interface for creating machine learning models without writing any code. You will learn how to create a training pipeline that encapsulates data preparation and model training, and then convert that training pipeline to an inference pipeline that can be used to predict values from new data, before finally deploying the inference pipeline as a service for client applications to consume.

  • Training Models with Designer
  • Publishing Models with Designer

Lab : Creating a Training Pipeline with the Azure ML Designer
Lab : Deploying a Service with the Azure ML Designer

After completing this module, you will be able to

  • Use the Designer to train a machine learning model
  • Deploy a Designer pipeline as a service

Module 3: Running Experiments and Training Models

In this module, you will get started with experiments that encapsulate data processing and model training code, and use them to train machine learning models.
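
A minimal sketch of a code-based experiment, assuming the Azure ML Python SDK v1; the folder, script, environment file, experiment name, and model path are placeholders, and the training script is assumed to save its model to the outputs folder.

  from azureml.core import Workspace, Experiment, ScriptRunConfig, Environment

  ws = Workspace.from_config()

  # Environment with the packages the training script needs
  # (environment.yml is a placeholder conda specification file).
  env = Environment.from_conda_specification(name="train-env",
                                             file_path="environment.yml")

  # Run a training script (train.py is a placeholder) as an experiment.
  script_config = ScriptRunConfig(source_directory="training",
                                  script="train.py",
                                  environment=env)
  experiment = Experiment(workspace=ws, name="train-model")
  run = experiment.submit(config=script_config)
  run.wait_for_completion(show_output=True)

  # Register the model the script saved to its outputs folder.
  run.register_model(model_name="my-model",
                     model_path="outputs/model.pkl",
                     tags={"training-context": "script run"})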

  • Introduction to Experiments
  • Training and Registering Models

Lab : Running Experiments
Lab : Training and Registering Models

After completing this module, you will be able to

  • Run code-based experiments in an Azure Machine Learning workspace
  • Train and register machine learning models

Module 4: Working with Data

Data is a fundamental element in any machine learning workload, so in this module, you will learn how to create and manage datastores and datasets in an Azure Machine Learning workspace, and how to use them in model training experiments.
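
The sketch below registers a blob container as a datastore and builds a tabular dataset from it, assuming the Azure ML Python SDK v1; the storage account, container, key, paths, and dataset name are placeholders.

  from azureml.core import Workspace, Datastore, Dataset

  ws = Workspace.from_config()

  # Register a blob container as a datastore (names and key are placeholders).
  blob_store = Datastore.register_azure_blob_container(workspace=ws,
                                                       datastore_name="training_data",
                                                       container_name="data",
                                                       account_name="mystorageaccount",
                                                       account_key="<storage-key>")

  # Create a tabular dataset from CSV files in the datastore and register it.
  tab_ds = Dataset.Tabular.from_delimited_files(path=[(blob_store, "diabetes/*.csv")])
  tab_ds = tab_ds.register(workspace=ws, name="diabetes-dataset",
                           create_new_version=True)

  # Load the registered dataset into a pandas DataFrame for exploration.
  df = Dataset.get_by_name(ws, "diabetes-dataset").to_pandas_dataframe()
  print(df.head())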

  • Working with Datastores
  • Working with Datasets

Lab : Working with Datastores
Lab : Working with Datasets

After completing this module, you will be able to

  • Create and consume datastores
  • Create and consume datasets

Module 5: Compute Contexts

One of the key benefits of the cloud is the ability to leverage compute resources on demand, and use them to scale machine learning processes to an extent that would be infeasible on your own hardware. In this module, you'll learn how to manage environments that ensure runtime consistency for experiments, and how to create and use compute targets for experiment runs.
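
A sketch of defining an environment and provisioning a compute cluster with the Azure ML Python SDK v1; the environment name, package list, cluster name, and VM size are placeholders.

  from azureml.core import Workspace, Environment
  from azureml.core.conda_dependencies import CondaDependencies
  from azureml.core.compute import ComputeTarget, AmlCompute

  ws = Workspace.from_config()

  # Define and register a reusable environment with explicit dependencies.
  env = Environment("sklearn-env")
  env.python.conda_dependencies = CondaDependencies.create(
      conda_packages=["scikit-learn", "pandas"],
      pip_packages=["azureml-defaults"])
  env.register(workspace=ws)

  # Provision a scalable training cluster, or reuse it if it already exists.
  cluster_name = "aml-cluster"
  try:
      compute_target = ComputeTarget(workspace=ws, name=cluster_name)
  except Exception:
      config = AmlCompute.provisioning_configuration(vm_size="STANDARD_DS11_V2",
                                                     max_nodes=2)
      compute_target = ComputeTarget.create(ws, cluster_name, config)
      compute_target.wait_for_completion(show_output=True)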

  • Working with Environments
  • Working with Compute Targets

Lab : Working with Environments
Lab : Working with Compute Targets

After completing this module, you will be able to

  • Create and use environments
  • Create and use compute targets

Module 6: Orchestrating Operations with Pipelines

Now that you understand the basics of running workloads as experiments that leverage data assets and compute resources, it's time to learn how to orchestrate these workloads as pipelines of connected steps. Pipelines are key to implementing an effective Machine Learning Operationalization (ML Ops) solution in Azure, so you'll explore how to define and run them in this module.
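
A sketch of a two-step pipeline built with the Azure ML Python SDK v1; the script names, folder, compute target, and pipeline name are placeholders.

  from azureml.core import Workspace, Experiment
  from azureml.pipeline.core import Pipeline
  from azureml.pipeline.steps import PythonScriptStep

  ws = Workspace.from_config()

  # Two chained steps; scripts, folder, and compute target are placeholders.
  prep_step = PythonScriptStep(name="prepare data",
                               source_directory="pipeline_scripts",
                               script_name="prep_data.py",
                               compute_target="aml-cluster")
  train_step = PythonScriptStep(name="train model",
                                source_directory="pipeline_scripts",
                                script_name="train_model.py",
                                compute_target="aml-cluster")

  # Assemble and run the pipeline, then publish it as a REST endpoint.
  pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
  run = Experiment(ws, "training-pipeline").submit(pipeline)
  run.wait_for_completion(show_output=True)

  published = run.publish_pipeline(name="training-pipeline",
                                   description="Data prep and model training",
                                   version="1.0")
  print(published.endpoint)  # REST URL that can trigger the pipeline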

  • Introduction to Pipelines
  • Publishing and Running Pipelines

Lab : Creating a Pipeline
Lab : Publishing a Pipeline

After completing this module, you will be able to

  • Create pipelines to automate machine learning workflows
  • Publish and run pipeline services

Module 7: Deploying and Consuming Models

Models are designed to help decision making through predictions, so they're only useful when deployed and available for an application to consume. In this module, you will learn how to deploy models for real-time inferencing and for batch inferencing.
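
A sketch of deploying a registered model as a real-time service on Azure Container Instances with the Azure ML Python SDK v1; the model, environment, entry script, and service name are placeholders, and score.py is assumed to implement the usual init() and run() functions.

  from azureml.core import Workspace, Environment
  from azureml.core.model import Model, InferenceConfig
  from azureml.core.webservice import AciWebservice

  ws = Workspace.from_config()

  # Previously registered model and environment (names are placeholders);
  # score.py is assumed to define init() and run() for loading the model
  # and returning predictions.
  model = ws.models["my-model"]
  env = Environment.get(ws, "sklearn-env")
  inference_config = InferenceConfig(entry_script="score.py",
                                     source_directory="service",
                                     environment=env)

  # Deploy to Azure Container Instances as a lightweight real-time endpoint.
  deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
  service = Model.deploy(ws, "predict-service", [model],
                         inference_config, deployment_config)
  service.wait_for_deployment(show_output=True)
  print(service.scoring_uri)  # REST endpoint clients call with JSON input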

  • Real-time Inferencing
  • Batch Inferencing

Lab : Creating a Real-time Inferencing Service
Lab : Creating a Batch Inferencing Service

After completing this module, you will be able to

  • Publish a model as a real-time inference service
  • Publish a model as a batch inference service

Module 8: Training Optimal Models

By this stage of the course, you've learned the end-to-end process for training, deploying, and consuming machine learning models; but how do you ensure your model produces the best predictive outputs for your data? In this module, you'll explore how you can use hyperparameter tuning and automated machine learning to take advantage of cloud-scale compute and find the best model for your data.
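
A sketch of hyperparameter tuning with HyperDrive in the Azure ML Python SDK v1; the training script, compute target, search space, and metric name are placeholders, and the script is assumed to accept a --regularization argument and log a metric named "Accuracy".

  from azureml.core import Workspace, Experiment, ScriptRunConfig
  from azureml.train.hyperdrive import (HyperDriveConfig, RandomParameterSampling,
                                        choice, PrimaryMetricGoal)

  ws = Workspace.from_config()

  # Training script and compute target are placeholders; the script is
  # assumed to accept --regularization and log a metric named "Accuracy".
  script_config = ScriptRunConfig(source_directory="training",
                                  script="train.py",
                                  compute_target="aml-cluster")

  param_sampling = RandomParameterSampling({
      "--regularization": choice(0.001, 0.01, 0.1, 1.0)
  })

  hyperdrive_config = HyperDriveConfig(run_config=script_config,
                                       hyperparameter_sampling=param_sampling,
                                       policy=None,
                                       primary_metric_name="Accuracy",
                                       primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                                       max_total_runs=8,
                                       max_concurrent_runs=2)

  run = Experiment(ws, "hyperdrive-tuning").submit(hyperdrive_config)
  run.wait_for_completion(show_output=True)

  # Retrieve the best child run according to the primary metric.
  print(run.get_best_run_by_primary_metric().get_metrics())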

  • Hyperparameter Tuning
  • Automated Machine Learning

Lab : Tuning Hyperparameters
Lab : Using Automated Machine Learning

After completing this module, you will be able to

  • Optimize hyperparameters for model training
  • Use automated machine learning to find the optimal model for your data

Module 9: Interpreting Models

Many of the decisions made by organizations and automated systems today are based on predictions made by machine learning models. It's increasingly important to be able to understand the factors that influence the predictions made by a model, and to be able to determine any unintended biases in the model's behavior. This module describes how you can interpret models to explain how feature importance determines their predictions.
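
A local sketch of model interpretation with the TabularExplainer from the azureml-interpret / interpret-community packages; the public scikit-learn dataset and random forest model stand in for whatever model you want to explain.

  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split
  from interpret.ext.blackbox import TabularExplainer

  # Train a simple classifier on a public dataset as a stand-in model.
  data = load_breast_cancer()
  X_train, X_test, y_train, y_test = train_test_split(data.data, data.target,
                                                      test_size=0.3, random_state=0)
  model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

  # Explain the model's behavior in terms of overall feature importance.
  explainer = TabularExplainer(model, X_train,
                               features=data.feature_names,
                               classes=["malignant", "benign"])
  global_explanation = explainer.explain_global(X_test)
  print(global_explanation.get_feature_importance_dict())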

  • Introduction to Model Interpretation
  • Using Model Explainers

Lab : Reviewing Automated Machine Learning Explanations
Lab : Interpreting Models

After completing this module, you will be able to

  • Generate model explanations with automated machine learning
  • Use explainers to interpret machine learning models

Module 10: Monitoring Models

After a model has been deployed, it's important to understand how the model is being used in production, and to detect any degradation in its effectiveness due to data drift. This module describes techniques for monitoring models and their data.
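
A sketch of enabling Application Insights on a deployed service and configuring a data drift monitor with the azureml-datadrift package (SDK v1); the service name, dataset names, feature list, and compute target are placeholders, and the target dataset is assumed to include a timestamp column for drift analysis.

  from azureml.core import Workspace, Dataset
  from azureml.datadrift import DataDriftDetector

  ws = Workspace.from_config()

  # Enable Application Insights telemetry on an already deployed service
  # (the service name is a placeholder).
  service = ws.webservices["predict-service"]
  service.update(enable_app_insights=True)

  # Configure a data drift monitor that compares new data against a baseline;
  # dataset names, feature list, and compute target are placeholders.
  baseline = Dataset.get_by_name(ws, "diabetes-baseline")
  target = Dataset.get_by_name(ws, "diabetes-target")
  monitor = DataDriftDetector.create_from_datasets(ws, "diabetes-drift-monitor",
                                                   baseline, target,
                                                   compute_target="aml-cluster",
                                                   frequency="Week",
                                                   feature_list=["Age", "BMI"],
                                                   drift_threshold=0.3,
                                                   latency=24)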

  • Monitoring Models with Application Insights
  • Monitoring Data Drift

Lab : Monitoring a Model with Application Insights
Lab : Monitoring Data Drift

After completing this module, you will be able to

  • Use Application Insights to monitor a published model
  • Monitor data drift

You can book the exam for this training here: DP-100

Customer reviews of Ictivity Training

Average rating: 4.5 out of 5 stars (126 reviews)

  • Eric: 5 out of 5 stars
  • 4 out of 5 stars: "Informative training with lots of tips"
  • 4 out of 5 stars: "Carien gave clear explanations and wanted to share as much of her knowledge as possible."

Choose a learning format:

Code: DP-100
Learning format: Classroom
Duration: 3 days
Price: € 1,545 per person, excl. VAT
Start date: 16 Dec 2024
Location: Nieuwegein

We can also deliver this training as a customized course at your location or ours.