New SkillUpWorks vertical
For DevOps β€’ SRE β€’ Kubernetes β€’ Cloud engineers

AI in DevOps, AIOps and LLMOps for Infrastructure Engineers

Learn how AI is changing incident response, observability, Kubernetes operations, automation and production troubleshooting. This page is not about replacing engineers. It is about helping engineers reason faster, reduce repetitive work and operate modern AI workloads safely.

4 learning tracks
10+ core interview areas
6 hands-on project ideas
0 hype-first promises
Conversation-based project learning

Learn AI in DevOps like a mentor is sitting with you

This page is written for an infrastructure engineer who is strong in Linux, DevOps, Kubernetes or cloud, but is new to installing, configuring and using AI locally.

πŸ‘¨β€πŸ«
Mentor

Dinesh, we will not start with buzzwords. We will build one real lab: a local AI DevOps troubleshooting assistant. While building it, you will learn what local AI is, how to install it, how to use it for Linux logs, Kubernetes events, Prometheus alerts and incident reports.

πŸ‘¨β€πŸ’»
Engineer

I know Linux and DevOps, but I do not have hands-on experience configuring AI. I want to understand from zero and also know where this helps in real operations.

πŸ‘¨β€πŸ«
Mentor

Perfect. We will configure AI locally first, then use it only as a safe assistant. It will summarize, explain and suggest validation steps. It will not directly change production.

Foundation project

Local AI DevOps Troubleshooting Assistant

The goal is to create a private lab where you can paste sanitized operational data and ask AI to summarize issues, explain alerts, prepare incident timelines and generate interview-ready explanations.

Browser
β†’
Open WebUI
β†’
Ollama
β†’
Local Model
β†’
Linux / Kubernetes / Prometheus data
Start from zero

First, what are we actually configuring?

Before commands, understand the moving parts. This makes the installation meaningful instead of blindly copying commands.

🧠

Local model

A model is the AI brain that reads your prompt and generates an answer. Running it locally means the model runs on your laptop, workstation, lab server or VPS instead of relying only on an external chat website.

βš™οΈ

Ollama

Ollama is the simple local runtime we use to pull, run and call models. Think of it as a model manager plus a local API for your AI lab.

πŸ’¬

Open WebUI

Open WebUI gives you a browser-based chat interface connected to your local model. It makes the lab easy to use without writing API calls every time.

Mentor note: Local AI is not automatically better for every case. It is useful for learning, private lab work, safer experimentation, prompt testing and internal troubleshooting workflows where you want more control.
Theory made simple

The mental model: what happens when you ask local AI a DevOps question?

Before you use AI for logs or Kubernetes events, understand the basic flow. Once this is clear, the commands and prompts make much more sense.

1️⃣

You provide context

The model does not automatically know your server, cluster, deployment, alert history or recent change. You must provide useful evidence such as logs, events, service status, alert labels, rollout history and the exact question you want answered.

2️⃣

The model predicts an answer

An LLM generates text based on the prompt and patterns it learned during training. It can summarize, classify, explain and suggest next checks, but it is not a monitoring system and it is not connected to your production environment unless you build that integration.

3️⃣

You validate with engineering evidence

The engineer must verify every AI suggestion using real signals: metrics, logs, traces, events, deployment history, configuration, network checks and runbooks. AI gives hypotheses. Production truth still comes from evidence.

Important terms you should know first

Model

The AI engine that generates responses. Example: a local model pulled through Ollama. Larger models may answer better but need more CPU/RAM/GPU.

Runtime

The software that runs the model. In our lab, Ollama plays this role by downloading models, running them and exposing a local API.

Prompt

The instruction you give the model. Good prompts include role, context, evidence, expected format and safety boundaries.

Context window

The amount of text the model can consider at once. This is why you should send focused logs from a specific time window instead of dumping everything.

Inference

The process where the model reads your prompt and generates an answer. On CPU this can be slower; on GPU it can be faster.

Grounding

Giving the model trusted internal data such as approved runbooks or sanitized incident notes so its answer is based on your actual environment.
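The context window limit above is why you bound evidence before pasting it. A minimal sketch of that habit (the service name is just an example):

```shell
# Keep evidence inside the context window: a bounded time range and a
# line cap instead of the whole journal (service name is an example).
journalctl -u nginx --since "30 minutes ago" --no-pager 2>/dev/null \
  | tail -n 200 > evidence.txt
wc -l evidence.txt
```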

Mentor note: In DevOps, AI becomes useful only when you combine it with good operational evidence. A weak engineer asks, β€œWhat is the root cause?” A strong engineer asks, β€œHere is the evidence. Summarize it, identify possible causes, and give me read-only validation checks.”
Lab preparation

What you need before installing

We keep the first version simple. You can run this on a Linux laptop, a small VPS, or a local VM. GPU is useful but not mandatory for learning.

βœ…

Minimum learning setup

  • Linux machine, VM or VPS
  • Docker installed for Open WebUI
  • 8 GB RAM minimum for lightweight models
  • Free disk space for model downloads
  • Basic terminal access
πŸš€

Better setup

  • 16 GB+ RAM
  • More CPU cores for smoother response
  • NVIDIA GPU if you want faster inference
  • Private network or VPN for remote access
  • Separate lab user account
πŸ‘¨β€πŸ’»
Engineer

Do I need a GPU to start?

πŸ‘¨β€πŸ«
Mentor

No. For learning, a CPU-based setup is enough. It may be slower, but you can still understand the workflow, prompts, safety rules and DevOps use cases. GPU becomes important when you want faster response or bigger models.

Practical choice

Which model and machine should you start with?

Do not start by chasing the biggest model. Start with a setup that works, then improve gradually.

Choice
Good for
What to remember
Small local model
Learning prompts, summarizing short logs, understanding the workflow, practicing interviews.
Fast enough on modest machines, but may miss complex context or produce weaker reasoning.
Bigger local model
Better summaries, richer explanations and more complex troubleshooting examples.
Needs more RAM/CPU/GPU. Do not assume bigger means always correct.
CPU-only lab
Beginners, VPS labs, basic testing, proof of concept.
Responses may be slow. That is acceptable for learning.
GPU lab
Faster inference, larger models, repeated team usage, more realistic internal assistant experiments.
Requires driver/CUDA/container runtime understanding and cost planning.
πŸ‘¨β€πŸ’»
Engineer

So should I worry about model names first?

πŸ‘¨β€πŸ«
Mentor

No. First make the workflow work: install runtime, run one model, connect the UI, test safe prompts, then use real DevOps scenarios. Model tuning comes later.

Step 1

Install Ollama and run your first local model

This is where AI becomes practical. You install a runtime, pull a model and ask it a simple Linux question.

What are we doing?

We are creating a local AI endpoint

After Ollama starts, your system can run models locally and expose an API on the machine. This API can later be used by scripts, tools or Open WebUI.

http://localhost:11434
Commands

Install and test

# Install Ollama on Linux
curl -fsSL https://ollama.com/install.sh | sh

# Pull a lightweight model for learning
ollama pull llama3.2

# Ask your first local AI question
ollama run llama3.2 "Explain journalctl to a Linux administrator."
Validation:

If the model responds, your local AI foundation is working. At this point you have not connected it to DevOps data yet. You have only confirmed that local AI inference works.
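You can also confirm the local API endpoint answers; /api/version and /api/tags are standard Ollama endpoints on the default port:

```shell
# Quick API check (default Ollama port; adjust if you changed it).
OLLAMA_URL="http://localhost:11434"
curl -s "$OLLAMA_URL/api/version" || echo "Ollama API not reachable"
curl -s "$OLLAMA_URL/api/tags"    || echo "No model list returned"
```

If /api/tags returns JSON listing your pulled models, scripts and Open WebUI will be able to use the same endpoint later.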

Command explanation

What each Ollama command means

This is important for beginners. Do not only copy commands; understand what each one changes in your lab.

curl -fsSL ... | sh

Downloads and runs the official Linux installer. In a production organization, you would review installation methods and package sources according to company policy.

ollama pull llama3.2

Downloads the model files to your machine. This is like pulling a container image, but for an AI model.

ollama run llama3.2

Starts an interactive prompt using that model. This is the fastest way to verify that local AI works.

localhost:11434

The local API endpoint that tools and scripts can call. Later, Open WebUI and shell scripts can talk to this endpoint.

Simple verification prompts

Explain the difference between systemctl status and journalctl.
Summarize what a Linux load average means.
Explain CrashLoopBackOff to a Kubernetes beginner.
Explain a Prometheus alert expression in simple terms.
Step 2

Add Open WebUI so you can use AI from a browser

Engineers can use the terminal, but a browser UI makes the workflow easier for long prompts, saved conversations and repeated learning.

πŸ‘¨β€πŸ’»
Engineer

Why do I need Open WebUI if Ollama already works from terminal?

πŸ‘¨β€πŸ«
Mentor

Ollama runs the model. Open WebUI gives you a clean interface to talk to that model, save prompts, compare outputs and make the assistant easier to use like an internal troubleshooting console.

🐳

Run Open WebUI with Docker

docker run -d \
  --name open-webui \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main

Then open http://SERVER-IP:3000 in your browser. On Linux, the --add-host flag maps host.docker.internal to the Docker host so the container can reach the Ollama API.

πŸ”

Access guidance

  • For a laptop lab, keep it local.
  • For a VPS, expose only behind VPN, firewall or reverse proxy authentication.
  • Do not expose your AI lab publicly without access control.
  • Do not paste secrets, private keys or customer data.
Step 3

Your first DevOps prompt should be safe and specific

Good AI usage starts with good context. Do not ask vague questions like β€œfix this server”. Ask for summary, hypotheses and validation steps.

Bad prompt

My server is down. Tell me the root cause and fix it.

This is unsafe because it asks AI to guess and possibly recommend actions without evidence.

Good prompt

You are assisting with Linux troubleshooting. Summarize the evidence, identify repeated errors, suggest possible causes, and provide read-only validation commands. Do not recommend destructive actions.

This creates boundaries and asks for support, not blind execution.

Prompt engineering for engineers

The 5-part prompt format for DevOps troubleshooting

You do not need fancy prompt tricks. You need a clear operational format that reduces guessing and keeps the assistant inside safe boundaries.

πŸ‘€

Role

Tell the model what role it should play: Linux troubleshooting assistant, Kubernetes incident reviewer, Prometheus alert explainer or SRE runbook helper.

πŸ“Œ

Context

Describe the system, namespace, service, alert time, recent deployment or current symptom. AI cannot infer your environment automatically.

πŸ“„

Evidence

Paste sanitized command output, logs, events, alert labels or configuration snippets. Use time-bounded data.

🧾

Output format

Ask for timeline, likely causes, validation commands, risk level and next questions. Structure makes answers easier to verify.

πŸ›‘οΈ

Safety boundary

Tell it not to suggest destructive changes, secret exposure, deletions, restarts or rollbacks; anything risky must be flagged as requiring human approval.

Reusable master prompt

You are assisting a DevOps/SRE engineer.

Role: Act as a troubleshooting assistant, not an automation executor.
Context: I will provide logs, events, metrics or command output.
Task: Summarize the evidence, identify patterns, list possible causes and provide read-only validation commands.
Safety: Do not recommend destructive actions. Do not assume root cause unless the evidence supports it. Mark uncertain items clearly.
Output format:
1. Short summary
2. Timeline
3. Repeated errors
4. Possible causes with confidence level
5. Read-only validation commands
6. Actions that require human approval
7. Questions I should ask next
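As a sketch, the master prompt can live in a variable and be combined with sanitized evidence before it reaches the model (the file name is an example; the interactive call at the end is commented out):

```shell
#!/usr/bin/env bash
# Sketch: combine the reusable master prompt with sanitized evidence.
EVIDENCE_FILE="${1:-evidence.txt}"   # example file name

MASTER_PROMPT='You are assisting a DevOps/SRE engineer.
Role: Act as a troubleshooting assistant, not an automation executor.
Task: Summarize the evidence, identify patterns, list possible causes
and provide read-only validation commands.
Safety: Do not recommend destructive actions.'

PROMPT="$MASTER_PROMPT

Evidence:
$(cat "$EVIDENCE_FILE" 2>/dev/null)"

printf '%s\n' "$PROMPT" > full-prompt.txt

# Interactive use: pipe the assembled prompt into the local model
# ollama run llama3.2 < full-prompt.txt
```

Keeping the prompt in one place means every lab scenario reuses the same role, task and safety boundary instead of retyping them.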
Example library

Practical prompt examples you can reuse in the lab

These examples make the page more useful for learners. They can copy, modify and practice with real DevOps scenarios.

Linux service prompt

You are assisting with Linux service troubleshooting.
Analyze the following systemctl and journalctl output.
Return timeline, main error, possible causes, read-only validation commands and unsafe actions to avoid.

Kubernetes pod prompt

You are assisting with Kubernetes triage.
Analyze pod describe output, previous logs, events and rollout history.
Return first visible failure, repeated error, possible hypotheses and validation commands.

Prometheus alert prompt

Explain this Prometheus alert to an SRE.
Include what the expression measures, what labels matter, possible causes, validation PromQL and Kubernetes checks.

Incident update prompt

Convert the verified incident notes into a short update for stakeholders.
Include impact, current status, mitigation, next update time and avoid blaming language.
Scenario 1

Use local AI to summarize Linux service logs

This is the first real operational use case. We collect evidence, sanitize it and ask the local assistant to summarize.

🐧

Scenario: Nginx is not responding

You are on-call. The website is not responding. You need fast context before deciding what to do.

Collect read-only evidence

systemctl status nginx --no-pager
journalctl -u nginx --since "30 minutes ago" --no-pager
ss -tulpn | grep ':80'
df -h
free -m
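A sketch that captures the evidence above into one file, ready to sanitize and paste into the assistant (the service name follows the scenario; errors from missing tools are captured too, not hidden):

```shell
#!/usr/bin/env bash
# Sketch: collect the read-only evidence from this scenario into one file.
OUT="nginx-evidence.txt"
{
  echo "### service status"; systemctl status nginx --no-pager
  echo "### journal (30m)";  journalctl -u nginx --since "30 minutes ago" --no-pager
  echo "### port 80";        ss -tulpn | grep ':80'
  echo "### disk";           df -h
  echo "### memory";         free -m
} > "$OUT" 2>&1
echo "Wrote $OUT"
```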

Ask the local AI assistant

You are assisting with Linux service troubleshooting.

Analyze the following output and return:
1. Timeline
2. Main visible error
3. Possible causes
4. Read-only validation commands
5. Unsafe actions to avoid

Do not assume root cause without evidence.
AI helps with: grouping repeated errors, summarizing logs, creating a timeline and suggesting validation checks.
Engineer verifies: service status, ports, disk, memory, config syntax, recent changes and actual user impact.
Do not automate: restarting services, deleting logs, disabling the firewall or changing config without human review.

Example AI output you should expect

  • Timeline: Service started failing after a config reload or dependency error appeared.
  • Main symptom: Nginx is not listening on expected port or fails config validation.
  • Validation: Run nginx -t, check port binding, inspect recent config changes and verify disk/memory.
  • Human decision: Only reload or rollback after confirming the configuration issue.
Scenario 2

Use local AI for Kubernetes CrashLoopBackOff triage

This scenario connects AI assistance with real Kubernetes troubleshooting signals.

☸️

Scenario: payment-api pod is CrashLoopBackOff

The model should not guess. It should help you organize logs, previous container output, events and rollout history.

Collect Kubernetes evidence

kubectl get pods -n payments
kubectl describe pod payment-api-xxxxx -n payments
kubectl logs payment-api-xxxxx -n payments --previous
kubectl get events -n payments --sort-by=.lastTimestamp
kubectl rollout history deployment/payment-api -n payments
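The same commands can be bundled into one evidence file for the assistant (namespace and pod name are the example values from this scenario):

```shell
#!/usr/bin/env bash
# Sketch: bundle the Kubernetes evidence from this scenario into one file.
NS="payments"
POD="payment-api-xxxxx"   # example pod name from this scenario
OUT="k8s-evidence.txt"
{
  echo "### describe";      kubectl describe pod "$POD" -n "$NS"
  echo "### previous logs"; kubectl logs "$POD" -n "$NS" --previous
  echo "### events";        kubectl get events -n "$NS" --sort-by=.lastTimestamp
  echo "### rollout";       kubectl rollout history deployment/payment-api -n "$NS"
} > "$OUT" 2>&1
echo "Wrote $OUT"
```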

Prompt template

You are assisting with Kubernetes incident triage.

Use the pod logs, events and rollout history.
Return:
1. Incident summary
2. First visible failure
3. Most repeated error
4. Possible causes
5. Commands to validate each cause
6. Actions requiring human approval
7. Interview-style explanation
Mentor note: In Kubernetes incidents, AI is useful for reading a lot of text quickly. But rollback, config change, scaling, secret update or manifest patching must remain human-approved.

Example reasoning path

A good AI answer should separate symptoms from causes. For CrashLoopBackOff, symptoms may include repeated restarts, failed health checks or application exceptions. Possible causes may include bad image, missing secret, invalid config, insufficient resources, dependency failure or application bug. The engineer validates each one using Kubernetes events, previous logs, deployment history and metrics.

Scenario 3

Use local AI to explain Prometheus alerts

Many engineers receive alerts but struggle to explain what the expression means, what signal it uses and what to check first.

πŸ”₯

Alert input

alertname: HighHTTP5xxRate
service: payment-api
namespace: payments
expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
for: 10m
🧠

AI prompt

Explain this Prometheus alert for a DevOps/SRE engineer.

Return:
1. What the alert means
2. What metric signal it uses
3. Possible causes
4. PromQL validation queries
5. Kubernetes checks
6. Customer impact questions
7. Priority assessment
πŸ‘¨β€πŸ«
Mentor

A good alert explanation does not stop at β€œ5xx is high”. It connects the metric, service, recent change, dependency health and customer impact.

What makes alert explanation useful?

The assistant should explain the metric, labels, threshold, duration and blast radius. It should also ask for related signals: request rate, latency, dependency errors, recent deployment, pod restarts and customer impact. A weak answer says β€œ5xx is high.” A strong answer explains what to inspect next and why.
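One read-only validation for this alert is comparing the 5xx rate against the total request rate through the Prometheus HTTP API. A sketch, assuming a reachable Prometheus URL (the hostname below is a placeholder; adjust it):

```shell
# Sketch: read-only validation of HighHTTP5xxRate via the Prometheus
# HTTP API. The Prometheus URL is an assumption; adjust for your lab.
PROM="http://prometheus.example.internal:9090"

# Error ratio for the affected service: 5xx rate divided by total rate
QUERY='sum(rate(http_requests_total{status=~"5..",service="payment-api"}[5m])) / sum(rate(http_requests_total{service="payment-api"}[5m]))'

curl -sG "$PROM/api/v1/query" --data-urlencode "query=$QUERY" \
  || echo "Prometheus not reachable from this machine"
```

A ratio query is more informative than the raw alert expression because a 5xx spike during a traffic spike tells a different story than a 5xx spike at normal load.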

Mini project

Build a small local wrapper script

After you understand manual prompting, the next step is a small script that collects evidence and sends it to your local AI API.

πŸ› οΈ

What the script does

  • Collects read-only Linux or Kubernetes output
  • Removes obvious sensitive values
  • Sends the sanitized context to local Ollama
  • Saves the AI summary to a markdown file
  • Requires human approval for any next action
πŸ“„

Example flow

collect_logs()
sanitize_context()
call_local_model()
save_incident_notes()
print_validation_commands()

# No automatic restart
# No automatic delete
# No automatic kubectl apply
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Summarize these sanitized Linux logs into timeline, possible causes and validation commands...",
  "stream": false
}'
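A minimal version of sanitize_context() can be a sed filter. The patterns below are illustrative only and must be extended for your environment; never rely on a short pattern list to catch every secret:

```shell
#!/usr/bin/env bash
# Sketch of sanitize_context(): mask obvious sensitive values before any
# text reaches a model. Patterns are illustrative, not complete.
sanitize_context() {
  sed -E \
    -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/<IP>/g' \
    -e 's/(password|token|secret)[=:][^[:space:]]+/\1=<REDACTED>/gI' \
    -e 's/eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/<JWT>/g'
}

echo 'login from 10.0.0.12 password=hunter2' | sanitize_context
# β†’ login from <IP> password=<REDACTED>
```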
Day-to-day usage

Where can an infrastructure engineer actually use this?

The value is not only in incidents. Local AI can help with learning, documentation, runbooks, interview preparation and safer operational analysis.

πŸ“š

Learning assistant

Ask it to explain Linux, Kubernetes, Prometheus or Terraform outputs in simple language while you validate from docs and labs.

🧯

Incident triage

Summarize noisy logs, events and alerts into a timeline, suspected areas and validation checklist.

πŸ“˜

Runbook support

Use approved runbooks as context and ask the assistant to explain steps, prerequisites, risk level and rollback considerations.

πŸ§ͺ

Interview practice

Turn each lab scenario into interview answers: what happened, what evidence you collected, how you validated and what you avoided.

πŸ“

Documentation

Draft incident notes, troubleshooting summaries, change review notes and post-incident action items after human review.

πŸ”

Change review

Ask for risks in a Kubernetes manifest, Terraform plan summary or CI/CD change, but validate manually before applying anything.

πŸ’Έ

Capacity and cost notes

Summarize usage signals and ask for questions to investigate. AI should not replace capacity planning data.

πŸ”

Security hygiene

Use it to create checklists, but never paste secrets. For real security work, follow approved company tools and processes.

Production safety

How to use AI without creating production risk

This is the most important part. AI should assist engineering judgment, not bypass it.

βœ…

Safe AI usage

  • Summarize sanitized logs
  • Explain alerts and metrics
  • Generate incident timelines
  • Suggest read-only validation commands
  • Draft incident updates
  • Search approved runbooks
β›”

Unsafe without approval

  • Restart production services
  • Delete pods, volumes or files
  • Apply Kubernetes manifests
  • Modify Terraform state
  • Change firewall/security rules
  • Rotate secrets or credentials


Learning path

How you should learn this from zero

This roadmap is for engineers who understand infrastructure but are new to AI tooling.

1. Understand what local AI is

Learn model, runtime, prompt, context window, inference, API and UI basics.

2. Install Ollama and Open WebUI

Run your first model locally and open a browser-based AI interface.

3. Practice Linux log summarization

Use systemd, journalctl, ports, disk and memory outputs as your first safe data set.

4. Practice Kubernetes incident triage

Use pod logs, events, rollout history and metrics to create incident summaries.

5. Build automation carefully

Create scripts that summarize and document, but do not directly change production.

6. Prepare interview answers

Explain architecture, safety controls, limitations, validation process and real scenarios.

Interview framing

Scenario-based questions from this project

These questions come from the actual lab, so answers sound practical instead of memorized.

Foundational

Why would you run AI locally for DevOps learning?

To understand model usage, privacy boundaries, prompt design and safe troubleshooting workflows without depending only on external tools.

Operational

How would you use AI for Linux service troubleshooting?

Collect read-only logs and service data, sanitize it, ask for summary and validation commands, then manually verify the suggestions.

Kubernetes

How would you use AI for CrashLoopBackOff?

Provide previous logs, pod description, events and rollout history. Use AI to summarize evidence, but validate image, config, resources and dependency issues yourself.

Advanced

How do you prevent AI from becoming a production risk?

Use sanitization, read-only defaults, approval gates, audit logs, restricted access, runbook grounding and human ownership of final actions.

Common mistakes

What beginners usually get wrong

These mistakes are normal. The goal is to avoid them early.

⚠️

Starting with automation first

Do not start by asking AI to run commands. Start by asking it to summarize, explain and suggest validation checks.

πŸ”“

Pasting secrets

Never paste tokens, private keys, passwords, customer data or internal confidential data into uncontrolled AI tools.

🧱

Skipping fundamentals

AI cannot replace Linux, networking, Kubernetes and observability fundamentals. It becomes useful only when the engineer can verify it.

🎯

Asking vague questions

Good prompts include role, context, evidence, desired output and safety boundaries.

πŸ•°οΈ

Ignoring recent changes

AI cannot know your latest deployment, config change or outage unless you provide that context.

πŸ“‰

No measurement

Track whether AI actually reduces triage time, improves notes or helps learning. Otherwise it may become another tool with no value.

Project extensions

Turn this into hands-on portfolio work

Once the local AI lab works, extend it into useful DevOps/SRE projects.

🐧

Linux log analyzer

Parse journalctl output, group errors, summarize timeline and generate safe validation checks.

journalctl -u nginx --since "30 minutes ago"
☸️

Kubernetes incident summarizer

Collect pod logs, events and rollout history, then create an incident summary and validation checklist.

kubectl get events -n payments
πŸ”₯

Prometheus alert explainer

Explain alert expression, labels, signal quality, likely causes and first validation checks.

HighHTTP5xxRate firing
πŸ“˜

Runbook assistant

Search approved runbooks and return safe steps with risk level and approval requirements.

runbook: kubelet-not-ready
πŸ“

Incident report drafter

Turn verified evidence into timeline, impact, cause, action items and follow-up notes.

incident-notes.md
πŸ’¬

ChatOps helper

Build a chat assistant that runs read-only checks and requires approval for risky steps.

/check namespace payments
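The safety property of such a helper is an allow-list: only named read-only checks run, everything else is rejected. A sketch (the /check command and the check names are hypothetical):

```shell
#!/usr/bin/env bash
# Sketch of a read-only ChatOps dispatcher. The check names are
# hypothetical; the point is the allow-list design: only read-only
# commands run, anything unlisted is refused.
check() {
  case "$1" in
    namespace) kubectl get pods -n "$2" ;;     # read-only
    events)    kubectl get events -n "$2" ;;   # read-only
    *)         echo "Unknown or unapproved check: $1"; return 1 ;;
  esac
}

check namespace payments || true   # tolerate a missing cluster in a dry run
```

Risky verbs (restart, delete, apply) simply never appear in the dispatcher; adding one should require a code review, not a chat message.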

Build the local AI DevOps lab first. Then use it to learn real scenarios.

This vertical is not about marketing AI. It is about teaching infrastructure engineers how AI tools work, how to configure them locally, how to use them safely, and how to apply them to real Linux, Kubernetes and incident-response work.

Recommended path: install Ollama β†’ add Open WebUI β†’ test prompts β†’ analyze Linux logs β†’ triage Kubernetes events β†’ explain Prometheus alerts β†’ build safe helper scripts.