

Security in Go LLM-based applications
Apr 29th, 2026
12:00PM - 1:00PM EDT
Remote
Secure Go-LLM apps: learn to block prompt injection, secure tool calls via least-privilege, and stop RAG data poisoning with robust patterns.
Free
Enroll Now
As more Go applications use LLMs in one way or another, security is more important than ever.
In this talk, we’ll explore design patterns to keep your application secure against a range of attacks, such as:
- Prompt injection — How injected instructions in user input or retrieved documents can hijack model behavior.
- Securing tool calls — Apply least-privilege principles to exposed functions, prevent command injection from model output, and enforce authorization checks before execution.
- RAG pipeline security — Guard against data poisoning and indirect injection via ingested documents. See how a malicious document in the vector DB can manipulate retrieval results and model responses.
Join us to explore this critical area and learn together how to write safer applications.

Instructor
Florin Pățan
Senior Engineer / Go Instructor
Florin has been working with Go daily for over a decade. He was involved in shaping GoLand, the Go IDE from JetBrains, and has consulted for several high-profile clients through Ardan Labs. Florin is actively engaged in AI development and builds resilient systems that help people worldwide.
Why Engineers & Teams Trust Ardan Labs
From the Lab
Where ideas get tested and shared. From the Lab is your inside look at the tools, thinking, and tech powering our work in Go, Rust, and Kubernetes. Discover our technical blogs, engineering insights, and YouTube videos created to support the developer community.
Explore our content:
Using Tools: A Meeting Scheduler
Miki Tebeka
Kronk AI: A Simpler Way to Build and Run AI Applications
Ardan Labs









