Generative AI for DevOps: Build Your Own LLM Server (English) (Virtual)
Start dates and locations
Virtual, 11 February 2025, 08:45-16:00, Day 1
Virtual, 28 May 2025, 08:45-16:00, Day 1
Description
Teaching method:
Virtual
General:
As a DevOps engineer, can you no longer live without ChatGPT, but is its use prohibited at work due to policy or sensitive data? Are you curious about how you can host your own Large Language Model (LLM)?
During this one-day workshop, Generative AI for DevOps Engineers, you will learn at AT Computing how to set up and use your own local version of "ChatGPT". This is done entirely with open-source technology, without sending your data to the cloud!
The day starts with an introduction to Generative AI, covering which LLMs are available and how to use them. We will dive into the hardware aspects and show you how to apply GPU acceleration in Virtual Machines and in containers.
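As an illustration of that last step, here is a minimal sketch of how you might verify from inside a VM or container that GPU acceleration is actually available. It assumes the PyTorch package is installed; the workshop itself does not prescribe a specific toolkit.

    import torch  # assumed to be installed; not prescribed by the course

    if torch.cuda.is_available():
        # The GPU has been passed through to this VM or container correctly.
        print("GPU acceleration available:", torch.cuda.get_device_name(0))
    else:
        print("No GPU detected; the LLM server will fall back to the CPU")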
Through various practical exercises, you will set up your own Large Language Model server and even create your own "model". You will learn how to connect a web-based client to your own Large Language Model, essentially creating your own ChatGPT clone.
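To give a feel for what "your own ChatGPT" looks like from the client side, the sketch below sends a prompt to a locally hosted LLM server from Python. The course description does not name the server software, so the Ollama-style HTTP API on localhost:11434 and the model name llama3 are assumptions; adjust them to whatever you set up in class.

    import requests

    # Assumed Ollama-style endpoint and model name; adjust to your own setup.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3",
              "prompt": "Explain blue/green deployments in two sentences.",
              "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])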
In addition, you will discover how to connect to the LLM API with Python, how to apply Retrieval Augmented Generation (RAG) to use your own documents with your LLM, and how to analyze images with your own LLM. We will also cover log analysis using your LLM.
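To give an idea of Retrieval Augmented Generation: the sketch below retrieves the most relevant snippet from your own documents and prepends it to the prompt. Word overlap is used only to keep the example self-contained; real setups typically retrieve with vector embeddings. The local server endpoint and model name are the same assumptions as in the previous sketch.

    import requests

    documents = [
        "Backup jobs run nightly at 02:00 via cron on host backup01.",
        "Deployments are done with Ansible playbooks stored in Git.",
    ]
    question = "When do the backups run?"

    # Toy retriever: pick the document with the largest word overlap.
    def overlap(doc: str) -> int:
        return len(set(doc.lower().split()) & set(question.lower().split()))

    context = max(documents, key=overlap)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    # Assumed Ollama-style endpoint and model name, as before.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    print(resp.json()["response"])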
Objective:
After following this course, you will understand the internal workings of GenAI in general and LLMs in particular. You will also know how to set up a local LLM server without sending your data to a Cloud environment.
Target audience:
This course is for IT professionals, system administrators, software developers, and Cloud and DevOps engineers who want to use large language models in their daily work.
Prerequisites:
To follow the course, experience with the (Linux) command-line interface is required, for example at the level of the Linux/UNIX Fundamentals or Linux Infrastructure courses. Knowledge of a programming language, preferably Python, is beneficial.
Topics:
- What is Generative AI?
- Which LLMs are available and how do you use them?
- Tokens, vectors, and parameters
- Hardware: CPU versus GPU, and how to apply GPU acceleration in Virtual Machines and in containers (Docker)
- Setting up an LLM server
- Creating your own LLM model
- Applying a web-based client to your own LLM
- Connecting to the LLM API with Python
- Using your own documents with your own LLM
- Analyzing images with your own LLM
- Log analysis using your LLM (see the sketch below)
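By way of illustration of that last topic, here is a sketch of log analysis with a local LLM: feed the tail of a log file to the model and ask for a summary of errors and warnings. The log path, endpoint, and model name are assumptions; use whatever applies to your own environment.

    import requests

    # Hypothetical log path; use a log file from your own system.
    with open("/var/log/syslog") as f:
        log_tail = "".join(f.readlines()[-200:])  # last 200 lines

    prompt = "Summarise the errors and warnings in this log:\n" + log_tail

    # Assumed Ollama-style endpoint and model name, as in the earlier sketches.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=300,
    )
    print(resp.json()["response"])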