Curated links by FAUN
Link
@faun shared a link, 2 weeks, 4 days ago

Publishing a Docker container for Microsoft Edit to the GitHub Container Registry

Edit hits GitHub's Container Registry like a buzzsaw, powered by Docker. Built for Apple Silicon, it rides Alpine like a speed demon. No fuss, just raw efficiency...

Link
@faun shared a link, 2 weeks, 4 days ago

F5, Inc. Announces New Capabilities for F5 BIG-IP Next for Kubernetes

F5, Inc. announced new capabilities for F5 BIG-IP Next for Kubernetes in collaboration with NVIDIA Corporation. The F5 BIG-IP Next for Kubernetes will be accelerated with NVIDIA’s BlueField-3 DPUs and the NVIDIA DOCA software framework...

Link
@faun shared a link, 2 weeks, 4 days ago

Kernel-level container insights: Utilizing eBPF with Cilium, Tetragon, and SBOMs for security

eBPF, Cilium's Tetragon, and SBOMs are the dream team for exposing real-time kernel-level drama inside containers. When these powers combine, they hunt down surprise breaches like Log4Shell with a sleuth's precision. Bonus: they shave 20% off CPU usage while they're at it...

Link
@faun shared a link, 2 weeks, 4 days ago

What Would a Kubernetes 2.0 Look Like

Kubernetes rewrites the rulebook on infrastructure. Suddenly, scaling isn't a headache—it's an art. But then there's YAML. With its peculiar quirks and knack for screwing up, it feels more like a punchline than a solution. Enter Helm and its template circus, juggling dependencies with all the grace of a...

Link
@faun shared a link, 2 weeks, 4 days ago

Why Choose OCI Artifacts for AI Model Packaging

Docker Model Runner injects LLMs into OCI artifacts, seamlessly marrying model delivery with container rituals. No need to invent custom toolchains. Think uncompressed "layers"—they're the secret sauce for faster, sharper, more efficient Model-Runner magic. It's not just a change; it's a quantum leap...

Link
@faun shared a link, 3 weeks, 5 days ago

Poison everywhere: No output from your MCP server is safe

Anthropic's MCP makes LLMs groove with real-world tools but leaves the backdoor wide open for mischief. Full-Schema Poisoning (FSP) waltzes across schema fields like it owns the place. ATPA sneaks in by twisting tool outputs, throwing off detection like a pro magician's misdirection. Keep your eye on t...

Link
@faun shared a link, 3 weeks, 5 days ago

Vibe coding web frontend tests — from mocked to actual tests

Cursor wrestled with flaky tests, tangled in its over-reliance on XPath. A shift to data-testid finally tamed the chaos. Though it tackled some UI tests, expired API tokens and timestamped transactions revealed its Achilles' heel...

Link
@faun shared a link, 3 weeks, 5 days ago

AI Runbooks for Google SecOps: Security Operations with Model Context Protocol

Google's MCP servers arm SecOps teams with direct control of security tools using LLMs. Now, analysts can skip the fluff and get straight to work—no middleman needed. The system ties runbooks to live data, offering automated, role-specific security measures. The result? A fusion of top-tier protocols...

Link
@faun shared a link, 3 weeks, 5 days ago

Why Go is a good fit for agents

Go rules the realm of long-lived, concurrent agent tasks. Its lightning-fast goroutines and petite memory use make Node.js and Python look like clunky dinosaurs trudging through thick mud. And don't get started on its cancellation mechanism—seamless cancellation, zero drama...

Link
@faun shared a link, 3 weeks, 5 days ago

Meta Introduces LlamaRL: A Scalable PyTorch-Based Reinforcement Learning RL Framework for Efficient LLM Training at Scale

Reinforcement Learning fine-tunes large language models for better performance by adapting outputs based on structured feedback. Scaling RL for LLMs faces resource challenges due to massive computation, model sizes, and engineering problems like GPU idle time. Meta's LlamaRL is a PyTorch-based asynchronous...
