Link
@kaptain shared a link, 1 day, 20 hours ago
FAUN.dev()

CNCF Project Antrea Compromised in Daring GitHub Attack

A throwaway GitHub account compromised CNCF project Antrea's Jenkins infrastructure on May 2 by opening a malicious PR and firing /test-* slash commands that detonated the workflow against PR-fork code with credentials in scope. The same operator ran parallel campaigns against at least seven other proj.. read more

Link
@kaptain shared a link, 1 day, 20 hours ago
FAUN.dev()

How Cloud Native Infrastructure Powers AI on Kubernetes

A vendor piece from Mirantis arguing that GPU multi-tenancy on Kubernetes is widely misrepresented, with most platforms shipping namespace-based isolation while production GPU clouds require hardware-enforced separation through MIG partitioning, cluster-per-tenant architecture, and DPU-based network.. read more  

Link
@kaptain shared a link, 1 day, 20 hours ago
FAUN.dev()

v1.36: Moving Volume Group Snapshots to GA

Volume group snapshots reached GA in Kubernetes v1.36, with the API promoted to groupsnapshot.storage.k8s.io/v1. The feature lets a VolumeGroupSnapshot object take crash-consistent snapshots across multiple PVCs selected by label, removing the need to quiesce applications that span separate data and log v.. read more
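A minimal manifest of the shape described above might look like the following sketch. Field names follow the groupsnapshot.storage.k8s.io API as summarized here; the class name and labels are assumptions for illustration, not authoritative values.

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1
kind: VolumeGroupSnapshot
metadata:
  name: db-group-snapshot          # hypothetical name
spec:
  volumeGroupSnapshotClassName: csi-group-snap-class   # assumed class name
  source:
    selector:
      matchLabels:
        app: my-database           # all PVCs with this label are snapshotted together
```

Because the selector captures every matching PVC at once, the data and log volumes of a single application get one crash-consistent point in time.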

Link
@kaptain shared a link, 1 day, 20 hours ago
FAUN.dev()

v1.36: Server-Side Sharded List and Watch

Alpha in v1.36, server-side sharded list and watch adds a shardSelector field to ListOptions so the API server uses an FNV-1a hash on metadata.uid or metadata.namespace to send each controller replica only its slice of the resource collection. This eliminates the cost of every replica deserializing the full .. read more
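The routing idea above can be sketched in a few lines. This is a conceptual illustration of hashing a key with FNV-1a and taking it modulo the replica count, not the actual API server code; the helper names are hypothetical.

```python
# Sketch of the sharded list/watch routing idea: FNV-1a hash of a key,
# modulo the number of shards, decides which replica sees the object.

FNV_OFFSET = 0xcbf29ce484222325  # 64-bit FNV-1a offset basis
FNV_PRIME = 0x100000001b3        # 64-bit FNV-1a prime

def fnv1a_64(data: bytes) -> int:
    h = FNV_OFFSET
    for b in data:
        h ^= b
        h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF  # keep it 64-bit
    return h

def shard_for(key: str, total_shards: int) -> int:
    """Map a metadata.uid or metadata.namespace to a shard index."""
    return fnv1a_64(key.encode()) % total_shards

# Each replica would request only its own slice instead of deserializing
# the full collection.
replicas = 3
objects = ["ns-a/pod-1", "ns-b/pod-2", "ns-c/pod-3", "ns-a/pod-4"]
by_shard = {i: [o for o in objects if shard_for(o, replicas) == i]
            for i in range(replicas)}
```

Every object lands in exactly one shard, so the slices partition the collection.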

Link
@kaptain shared a link, 1 day, 20 hours ago
FAUN.dev()

v1.36: Declarative Validation Graduates to GA

Declarative validation graduated to GA in Kubernetes v1.36, replacing handwritten Go validation with +k8s: marker tags on field definitions... read more

Link
@kala shared a link, 1 day, 21 hours ago
FAUN.dev()

How We Built an AI Second Brain for 60K Knowledge Workers

Meta built an AI agent system internally called the AI Second Brain that now has over 63,000 installs and ~10,000 daily active users across engineering, PM, design, legal, finance, comms, and sales, growing from zero in roughly three months after a non-technical PM's adoption post. The architecture .. read more  

Link
@kala shared a link, 1 day, 21 hours ago
FAUN.dev()

Orchestrating AI Code Review at scale

Cloudflare engineers built an AI code review platform on OpenCode. They split GitLab integration, model providers, prompts, and policy into separate plugins. A coordinator assigns up to seven domain reviewers across security, performance, code quality, documentation, release checks, and AGENTS.md co.. read more  
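The coordinator-plus-domain-reviewers pattern described above can be sketched as a simple fan-out. All names here are hypothetical; a real plugin would call a model provider where the stub sleeps.

```python
import asyncio

# Conceptual sketch of a coordinator fanning the same diff out to
# per-domain reviewers and gathering their findings.
DOMAINS = ["security", "performance", "code quality",
           "documentation", "release checks", "AGENTS.md compliance"]

async def review(domain: str, diff: str) -> str:
    # Stub: a real reviewer plugin would prompt a model here.
    await asyncio.sleep(0)
    return f"{domain}: reviewed {len(diff)}-char diff"

async def coordinate(diff: str, max_reviewers: int = 7) -> list:
    # Assign up to max_reviewers domain reviewers, run them concurrently.
    tasks = [review(d, diff) for d in DOMAINS[:max_reviewers]]
    return await asyncio.gather(*tasks)

findings = asyncio.run(coordinate("--- a/main.py\n+++ b/main.py\n"))
```

Keeping integration, providers, prompts, and policy in separate plugins means each reviewer stub can be swapped without touching the coordinator.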

Link
@kala shared a link, 1 day, 21 hours ago
FAUN.dev()

Democratizing Machine Learning at Netflix: Building the Model Lifecycle Graph

Netflix's Saish Sali, Nipun Kumar, and Sura Elamurugu describe the Metadata Service (MDS), a graph layer built to connect siloed ML tooling (model registry, pipeline orchestrator, experimentation platform, feature store, dataset platform, identity) across personalization, studio, payments, and ads. .. read more  

Link
@kala shared a link, 1 day, 21 hours ago
FAUN.dev()

The AWS MCP Server is now generally available

AWS now offers AWS MCP Server as a managed remote MCP server in US East (N. Virginia) and Europe (Frankfurt). MCP-compatible clients can use existing IAM credentials to access more than 15,000 AWS API operations. For GA, AWS added IAM context keys, documentation retrieval without authentication, low.. read more  

Link
@kala shared a link, 1 day, 21 hours ago
FAUN.dev()

Running local models on an M4 with 24GB memory

Local LLMs work best as supervised coding assistants. The author ran Qwen 3.5 9B (Q4) in LM Studio on a 24GB MacBook Pro and got about 40 tokens per second, with thinking mode, tool use, and a 128K context window. Results were mixed: Qwen helped with simple Elixir linter edits, then failed.. read more
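A quick back-of-the-envelope check makes the 24GB figure plausible. This is an approximation that counts weight memory only, ignoring KV cache and runtime overhead.

```python
# Rough weight-memory estimate for a 9B-parameter model at 4-bit (Q4)
# quantization. KV cache (which grows with the 128K context) and runtime
# overhead are deliberately excluded from this estimate.
params = 9e9
bits_per_weight = 4
weight_bytes = params * bits_per_weight / 8
weight_gib = weight_bytes / 2**30
# Roughly 4.2 GiB of weights, leaving headroom on a 24GB machine for the
# context's KV cache, the OS, and other applications.
```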
