
Updates and recent posts about Kata Containers.
@kala shared a link, 3 weeks, 5 days ago
FAUN.dev()

What if you don't need MCP at all?

Most MCP servers stuffed into LLM agents are overcomplicated, slow to adapt, and hog context. The post calls them out for what they are: a mess. The alternative? Scrap the kitchen sink. Use Bash, lean Node.js/Puppeteer scripts, and a self-bootstrapping README. That’s it. Agents read the file, spin up the.. read more
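The post leans on Bash and Node.js/Puppeteer; as a rough Python sketch of the same pattern, an agent's whole toolchain can be a README it reads plus scripts it shells out to. The file names and scripts/ layout below are illustrative assumptions, not the article's exact setup.

```python
# Minimal sketch of the "no MCP" pattern: the agent's only contract is a
# README that documents runnable scripts, and the agent shells out to them.
# File names and the scripts/ layout are illustrative assumptions.
import subprocess
from pathlib import Path


def load_instructions(readme: Path = Path("README.md")) -> str:
    """Read the self-bootstrapping README that tells the agent what exists."""
    return readme.read_text(encoding="utf-8")


def run_tool(command: list[str], timeout: int = 120) -> str:
    """Run one documented script (Bash, Node.js, etc.) and capture its output."""
    result = subprocess.run(
        command, capture_output=True, text=True, timeout=timeout, check=False
    )
    return result.stdout if result.returncode == 0 else result.stderr


if __name__ == "__main__":
    # The LLM would receive the README as context and pick a command;
    # here one documented script is hard-coded as an example.
    print(load_instructions()[:500])
    print(run_tool(["bash", "scripts/fetch_page.sh", "https://example.com"]))
```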

@kala shared a link, 3 weeks, 5 days ago
FAUN.dev()

How to write a great agents.md: Lessons from over 2,500 repositories

A GitHub Copilot feature allows for custom agents defined in agents.md files. These agents act as specialists within a team, each with a specific role. The success of an agents.md file lies in providing a clear persona, executable commands, defined boundaries, specific examples, and detailed informati.. read more

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

AWS to Bare Metal Two Years Later: Answering Your Toughest Questions About Leaving AWS

OneUptime ditched the cloud bill and rolled their own dual-site setup. Think bare metal, orchestrated with MicroK8s, booted by Tinkerbell, patched together with Ceph, Flux, and Terraform. Result? 99.993% uptime and $1.2M/year saved, 76% cheaper than even well-optimized AWS. They run it all with just ~14 engine.. read more

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

Monitor network performance and traffic across your EKS clusters with Container Network Observability

Amazon EKS just leveled up with Container Network Observability - no extra tools needed. It now ships with service maps, flow tables, and performance metrics, all lit up by CloudWatch Network Flow Monitor. You get pod- and node-level network telemetry out of the box. Zoom in on service-to-service links. Si.. read more
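If you want to poke at what the flow monitor actually publishes before building dashboards, a generic boto3 sketch like the one below can list the metrics. The "AWS/NetworkFlowMonitor" namespace string is an assumption here, so check the announcement for the exact name.

```python
# Discover what CloudWatch Network Flow Monitor publishes for the cluster.
# list_metrics is a standard CloudWatch API; the namespace string below is
# an assumption used for illustration - verify it against the AWS docs.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="AWS/NetworkFlowMonitor"):
    for metric in page["Metrics"]:
        dims = {d["Name"]: d["Value"] for d in metric["Dimensions"]}
        print(metric["MetricName"], dims)
```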

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

S3 Storage Classes: Fast Access

A cost deep-dive breaks down three AWS S3 storage classes - Standard, Standard-IA, and Glacier Instant Retrieval - with sharp, interactive visualizations. It maps out the tradeoffs: storage cost, access frequency, and early deletion pain. Key tipping points surface: use Standard-IA if you read the objec.. read more
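The break-even logic is simple enough to sanity-check yourself. The sketch below uses rough us-east-1 list prices as assumptions, so treat the post's interactive charts as the source of truth for exact numbers.

```python
# Back-of-envelope break-even between S3 Standard and Standard-IA.
# Prices are illustrative assumptions (roughly us-east-1 list prices).
STANDARD_PER_GB = 0.023      # $/GB-month storage
IA_PER_GB = 0.0125           # $/GB-month storage
IA_RETRIEVAL_PER_GB = 0.01   # $/GB retrieved


def monthly_cost_per_gb(reads_per_month: float, storage: float, retrieval: float = 0.0) -> float:
    """Storage plus retrieval cost for reading the full object N times a month."""
    return storage + retrieval * reads_per_month


# Standard-IA stops being cheaper once retrieval fees eat the storage discount.
break_even = (STANDARD_PER_GB - IA_PER_GB) / IA_RETRIEVAL_PER_GB
print(f"Standard-IA wins below ~{break_even:.2f} full reads per object per month")

for reads in (0.1, 0.5, 1.0, 2.0):
    std = monthly_cost_per_gb(reads, STANDARD_PER_GB)
    ia = monthly_cost_per_gb(reads, IA_PER_GB, IA_RETRIEVAL_PER_GB)
    print(f"{reads:>4} reads/mo  Standard=${std:.4f}/GB  Standard-IA=${ia:.4f}/GB")
```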

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

A complete guide to HTTP caching

A fresh guide reframes HTTP caching as less of a tweak, more of an architectural move. It breaks caching into layers - browser memory, CDNs, reverse proxies, app stores - and shows how each one plays a part (or gets in the way). It gets granular with headers like Cache-Control, ETag, and Vary, calling .. read more
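The validator dance the guide gets into (an ETag answered later with If-None-Match and a 304) is easy to observe from a few lines of Python; the URL below is a placeholder, not one from the guide.

```python
# Conditional revalidation with ETag / If-None-Match: the second request
# should come back 304 Not Modified if the origin (or a cache in front of
# it) honors validators.
import requests

url = "https://example.com/styles.css"  # placeholder URL

first = requests.get(url)
print(first.status_code, first.headers.get("Cache-Control"), first.headers.get("ETag"))

etag = first.headers.get("ETag")
if etag:
    second = requests.get(url, headers={"If-None-Match": etag})
    # 304 means the cached body is still valid and no bytes were re-sent.
    print(second.status_code)
```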

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

Terraform Stacks: A Deep-Dive for Azure Practitioners in Europe

Terraform Stacks just hit GA on HCP Terraform, and they bring some real structure to the chaos. Think modular, declarative, and way less workspace spaghetti. Build reusable components (a.k.a. modules), bundle them into deployments, and wire up stacks using publish/consume patterns - complete with automated.. read more

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

WTF is ... - AI-Native SAST?

AI-native SAST is replacing the “LLM as magic scanner” myth. Instead, the smart play is combining language models with real static analysis. That’s how teams are catching the gnarlier stuff - like business logic bugs - that usually slip through. The trick? Use static analysis to grab clean, relevant .. read more
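One way to picture the "static analysis first, LLM second" flow: slice out only the functions that touch a risky sink and hand that focused context to the model. The sink list and prompt below are illustrative assumptions, not the article's actual tooling.

```python
# Sketch: use the ast module to pull only the functions that call a risky
# sink, then send that focused snippet to a model instead of the whole repo.
import ast
import textwrap

RISKY_SINKS = {"execute", "eval", "system", "subprocess"}


def functions_touching_sinks(source: str) -> list[str]:
    """Return source snippets of functions that call anything in RISKY_SINKS."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = {
                n.func.attr if isinstance(n.func, ast.Attribute) else getattr(n.func, "id", "")
                for n in ast.walk(node)
                if isinstance(n, ast.Call)
            }
            if calls & RISKY_SINKS:
                hits.append(ast.get_source_segment(source, node) or node.name)
    return hits


sample = textwrap.dedent("""
    def lookup(db, user_id):
        return db.execute(f"SELECT * FROM users WHERE id = {user_id}")

    def greet(name):
        return f"hello {name}"
""")

for snippet in functions_touching_sinks(sample):
    prompt = f"Review this function for injection and business-logic flaws:\n{snippet}"
    print(prompt)  # in practice, this focused context would go to the LLM
```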

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

Unlocking self-service LLM deployment with platform engineering

A new platform stack - Port + GitHub Actions + HCP Terraform - is turning LLM deployment into a clean self-service flow. The result: predictable, governed pipelines that ship faster. Infra gets standardized. Provisioning? Handled through GitHub Actions. Policies? Baked in via HCP Terraform. Port tie.. read more

@devopslinks shared a link, 3 weeks, 5 days ago
FAUN.dev()

Post-quantum (ML-DSA) code signing with AWS Private CA and AWS KMS

AWS Private CA now supports post-quantum ML-DSA X.509 certificates. That means quantum-resistant roots of trust - for code signing, mTLS, and device auth. It's wired up with AWS KMS, so you can handle signing workflows using ML-DSA keys and verify them with standard tools like OpenSSL using CMS detached.. read more
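A rough boto3 sketch of what a KMS-backed signing step could look like follows. The kms.create_key and kms.sign calls are the standard API, but the ML-DSA key spec and signing-algorithm identifiers are assumptions, so confirm the exact values against the AWS announcement before relying on them.

```python
# Sketch of signing a release digest with an asymmetric KMS key.
# The ML-DSA key spec and signing-algorithm strings are assumptions.
import hashlib

import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Created once and reused; the key spec name here is an assumed identifier.
key = kms.create_key(
    KeySpec="ML_DSA_65",            # assumption: ML-DSA key spec identifier
    KeyUsage="SIGN_VERIFY",
    Description="post-quantum code-signing key (illustrative)",
)
key_id = key["KeyMetadata"]["KeyId"]

# Sign a digest of the artifact (kms.sign caps raw messages at 4 KB).
digest = hashlib.sha256(open("release.tar.gz", "rb").read()).digest()
signature = kms.sign(
    KeyId=key_id,
    Message=digest,
    MessageType="RAW",
    SigningAlgorithm="ML_DSA_SHAKE_256",   # assumption: algorithm identifier
)["Signature"]

print(len(signature), "byte signature")
```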

Kata Containers is a Cloud Native Computing Foundation (CNCF) project designed to close the security gap between traditional Linux containers and virtual machines. Instead of sharing a single host kernel like standard containers, Kata Containers launches each pod or container inside its own lightweight virtual machine using hardware virtualization.

This approach dramatically reduces the attack surface and prevents container escape vulnerabilities, making Kata ideal for multi-tenant, untrusted, or sensitive workloads. Despite using VMs under the hood, Kata is optimized for fast startup times and integrates seamlessly with Kubernetes through the Container Runtime Interface (CRI), allowing it to be used alongside runtimes like containerd and CRI-O.
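In practice, opting a workload into Kata usually comes down to selecting a RuntimeClass on the pod. Below is a minimal sketch using the official Kubernetes Python client, assuming a RuntimeClass named "kata" has already been installed on the cluster (the real name depends on the deployment, e.g. kata-qemu or kata-fc).

```python
# Minimal sketch: schedule a pod onto the Kata runtime via RuntimeClass.
# Assumes the kubernetes Python client is installed and a RuntimeClass
# named "kata" exists on the cluster (name depends on how Kata was deployed).
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kata-demo"),
    spec=client.V1PodSpec(
        runtime_class_name="kata",  # assumption: RuntimeClass name
        containers=[
            client.V1Container(
                name="app",
                image="nginx:alpine",
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("pod 'kata-demo' scheduled with a VM-isolated runtime")
```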

Kata Containers is commonly used in scenarios such as multi-tenant Kubernetes clusters, confidential computing, sandboxed AI workloads, serverless platforms, and agent execution environments where strong isolation is mandatory. It supports multiple hypervisors, including QEMU, Firecracker, and Cloud Hypervisor, and continues to evolve toward faster boot times, lower memory overhead, and better hardware acceleration support.