
Content

Updates and recent posts about BigQuery.
Link
@kaptain shared a link, 3 days, 16 hours ago
FAUN.dev()

v1.36: Tiered Memory Protection with Memory QoS

Kubernetes v1.36 rolls out Memory QoS (alpha). Opt-in memory reservation. Tiered protection by QoS class. Kubelet observability metrics. Kernel-version warnings. It separates throttling from reservation. A feature gate enables throttling. A kubelet config field controls tiered cgroup v2 protection: Guarant.. read more

Link
@kaptain shared a link, 3 days, 16 hours ago
FAUN.dev()

v1.36: In-Place Vertical Scaling for Pod-Level Resources Graduates to Beta

Kubernetes v1.36 moves In-Place Pod-Level Resources Vertical Scaling to Beta and flips the feature gate on by default. Operators can patch a Pod's aggregate resources to resize running Pods in place. Often no container restart is needed. The kubelet breaks the Pod-level change into per-container resize events. It.. read more
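A rough sketch of what such an aggregate patch might look like. The field layout below (pod-level `spec.resources`) follows the summary's description; the helper and values are hypothetical, and in recent kubectl versions a resize is typically applied through the Pod's `resize` subresource rather than a plain update.

```python
import json

def pod_level_resize_patch(cpu: str, memory: str) -> dict:
    """Build a patch body that resizes a Pod's aggregate (pod-level)
    resources instead of editing each container individually."""
    return {
        "spec": {
            "resources": {  # pod-level resources, per the summary above
                "requests": {"cpu": cpu, "memory": memory},
                "limits": {"cpu": cpu, "memory": memory},
            }
        }
    }

# Serialize for use with, e.g., `kubectl patch ... --subresource resize`
print(json.dumps(pod_level_resize_patch("2", "4Gi")))
```

The kubelet, per the summary, would then fan this single Pod-level change out into per-container resize events.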

Link
@kaptain shared a link, 3 days, 16 hours ago
FAUN.dev()

Auto-Diagnosing Kubernetes Alerts with HolmesGPT and CNCF Tools

STCLab built an AI investigation pipeline with HolmesGPT, a 200-line Python playbook, and OpenTelemetry. It streamed Mimir, Loki, and Tempo into Slack threads. Metadata-driven markdown runbooks limited tools per namespace, cut wasted tool calls from 16 to 2, and let the same model resolve alerts faster... read more
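The "metadata-driven runbooks limit tools per namespace" idea can be sketched like this. Runbook names, namespaces, and tool identifiers below are all hypothetical, not from STCLab's setup; the point is only the filtering mechanism that keeps the model's tool menu small.

```python
# Each markdown runbook carries frontmatter-style metadata: which
# namespaces it applies to, and the only tools the model may call there.
RUNBOOKS = [
    {"name": "payments-oom", "namespaces": ["payments"],
     "tools": ["loki_logs", "mimir_query"]},
    {"name": "default-generic", "namespaces": ["*"],
     "tools": ["mimir_query"]},
]

def tools_for(namespace: str) -> list[str]:
    """Union of tools allowed by runbooks matching this namespace."""
    allowed: list[str] = []
    for rb in RUNBOOKS:
        if namespace in rb["namespaces"] or "*" in rb["namespaces"]:
            allowed.extend(t for t in rb["tools"] if t not in allowed)
    return allowed

print(tools_for("payments"))  # -> ['loki_logs', 'mimir_query']
print(tools_for("web"))       # -> ['mimir_query']
```

Offering the model two relevant tools instead of sixteen is what plausibly drives the drop in wasted tool calls the article reports.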

Link
@kaptain shared a link, 3 days, 16 hours ago
FAUN.dev()

v1.36: Staleness Mitigation and Observability for Controllers

Kubernetes v1.36 ships client-go atomic FIFO processing and cache-introspection APIs. Controllers detect stale informer state and skip acting on it. kube-controller-manager enables the capability by default for four high-contention pod controllers. It adds alpha metrics for skipped syncs and informer resou.. read more

Link
@kala shared a link, 3 days, 17 hours ago
FAUN.dev()

An open-weights Chinese model just beat Claude, GPT-5.5, and Gemini in a programming challenge

The AI Coding Contest Day 12 matched ten models on a sliding‑letter puzzle. Open‑weights Kimi K2.6 took first: 22 match points (7‑1‑0). MiMo V2‑Pro scored second by blasting claims for intact ≥7‑letter seeds (43 points). GPT‑5.5 and Claude Opus 4.7 landed third and fifth. Grids ran 10×10→30×30. Heavy scrambl.. read more

Link
@kala shared a link, 3 days, 17 hours ago
FAUN.dev()

Monitoring LLM behavior: Drift, retries, and refusal patterns

Traditional software is predictable because it is deterministic; generative AI is not. Engineers need a new infrastructure layer, the AI Evaluation Stack, to ship enterprise-ready AI products. The stack includes deterministic assertions and model-based assertions to ensure structural integrit.. read more
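A minimal sketch of the "deterministic assertion" layer the summary mentions: cheap structural checks that run on every LLM response before any model-based grading. The response schema here is invented for illustration.

```python
import json

def check_structure(raw: str) -> list[str]:
    """Return a list of violations; an empty list means the response
    passed the deterministic layer and can move on to model-based checks."""
    problems: list[str] = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["response is not valid JSON"]
    if "answer" not in data:
        problems.append("missing 'answer' field")
    if not isinstance(data.get("citations", []), list):
        problems.append("'citations' must be a list")
    return problems

assert check_structure('{"answer": "42", "citations": []}') == []
assert check_structure("not json") == ["response is not valid JSON"]
```

Because these checks are deterministic, failures here are unambiguous signals, which is exactly what makes them useful for tracking drift and retry rates over time.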

Link
@kala shared a link, 3 days, 17 hours ago
FAUN.dev()

Introducing the Agent Readiness score. Check to see if your site is agent-ready

Cloudflare launched IsItAgentReady. It scans 200k domains, scores agent readiness, publishes weekly adoption charts, and exposes results via an API. It checks robots.txt, llms.txt, content negotiation via Accept: text/markdown, API Catalog, .well-known/mcp.json, OAuth discovery, and x402 payments. Cloudflare ov.. read more
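The probes listed in the summary map naturally onto a handful of well-known URLs. This is only a sketch of where those checks would look, using the paths the article names; Cloudflare's actual scanner and scoring are its own.

```python
def readiness_probes(domain: str) -> dict[str, str]:
    """Map each agent-readiness signal to the URL a scanner would fetch."""
    base = f"https://{domain}"
    return {
        "robots":       f"{base}/robots.txt",
        "llms_txt":     f"{base}/llms.txt",
        "mcp_manifest": f"{base}/.well-known/mcp.json",
        # Content negotiation: fetch the homepage with the header
        # "Accept: text/markdown" and see whether markdown comes back.
        "markdown_home": base,
    }

print(readiness_probes("example.com")["llms_txt"])
```

A real checker would fetch each URL (and send the Accept header for the negotiation probe), then score the site on how many signals respond.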

Link
@kala shared a link, 3 days, 17 hours ago
FAUN.dev()

The AI engineering stack we built internally - on the platform we ship

Cloudflare wired AI into the engineering stack. LLM traffic funnels through a proxy Worker and AI Gateway. It shipped Workers AI and the Agents SDK. Daily users hit 3,683 (93% R&D). MR throughput climbed to ~10,952/week. Workers AI handled 51B input tokens and cut a security agent's inference spend by 77%... read more

Link
@kala shared a link, 3 days, 17 hours ago
FAUN.dev()

Multi-Agent System Reliability

LLMs are unreliable out of the box, but multi-agent systems can improve reliability by dividing work among specialized agents. Building robust systems involves leveraging human system patterns like hierarchy, consensus, adversarial debate, and knock-out in a multi-agent architecture to ensure correctness and re.. read more
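One of those patterns, consensus, reduces to something very concrete: run several independent agents on the same task and keep the majority answer. The sketch below stubs the agents out with fixed strings; any real system would replace them with model calls.

```python
from collections import Counter

def consensus(answers: list[str]) -> str:
    """Majority vote over agent answers; ties go to the earliest answer."""
    counts = Counter(answers)
    return max(counts, key=lambda a: (counts[a], -answers.index(a)))

# Three stubbed "agents" answered the same question independently:
print(consensus(["A", "B", "A"]))  # -> 'A'
```

The same skeleton generalizes to the other patterns the article names: hierarchy replaces the vote with a supervisor agent, and adversarial debate has agents critique each other before voting.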

Link
@devopslinks shared a link, 3 days, 19 hours ago
FAUN.dev()

How incidents can teach us about what’s already working well

A famous optical illusion developed by Edward H. Adelson shows that two squares, despite appearing different in shade, are actually the same gray. This illusion demonstrates how the brain processes light, shadow, and objects when interpreting visual signals from the optic nerve. Studying such illusi.. read more  

BigQuery is a cloud-native, serverless analytics platform designed to store, query, and analyze massive volumes of structured and semi-structured data using standard SQL. It separates storage from compute, automatically scales resources, and eliminates the need for infrastructure management, indexing, or capacity planning.

BigQuery is optimized for analytical workloads such as business intelligence, log analysis, data science, and machine learning. It supports real-time data ingestion via streaming, batch loading from cloud storage, and federated queries across external data sources like Cloud Storage, Bigtable, and Google Drive.
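For the streaming path, the official google-cloud-bigquery client accepts rows as plain dictionaries. The table name below is hypothetical and the network call is shown only as a comment, since it requires a live project and credentials.

```python
# Rows destined for a streaming insert are just JSON-serializable dicts
# whose keys match the destination table's column names.
rows = [
    {"event": "page_view", "user_id": "u123", "ts": "2025-01-01T00:00:00Z"},
    {"event": "signup",    "user_id": "u456", "ts": "2025-01-01T00:00:05Z"},
]

# With credentials configured, the insert would look roughly like:
#   from google.cloud import bigquery
#   errors = bigquery.Client().insert_rows_json("my_dataset.events", rows)
#   # an empty error list means every row was accepted
```

Batch loading from Cloud Storage and federated queries over external sources follow the same client but use load jobs and external table definitions instead.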

Query execution is distributed and highly parallel, enabling interactive performance even on petabyte-scale datasets. The platform integrates deeply with the Google Cloud ecosystem, including Looker for BI, Vertex AI for ML workflows, Dataflow for streaming pipelines, and BigQuery ML, which allows users to train and run machine learning models directly using SQL.
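To make the BigQuery ML point concrete: training really is a single SQL statement. The dataset, table, and columns below are hypothetical; `logistic_reg` is one of BigQuery ML's built-in model types.

```python
# A CREATE MODEL statement trains a model in place; it is submitted
# like any other query (here just held as a string for illustration).
TRAIN_SQL = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT plan, tenure_days, support_tickets, churned
FROM `my_dataset.customers`
"""

# With the google-cloud-bigquery client this would run as, roughly:
#   bigquery.Client().query(TRAIN_SQL).result()
```

Prediction then uses `ML.PREDICT` over the trained model in an ordinary SELECT, so the whole workflow stays inside SQL.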

Built-in security features include fine-grained IAM controls, column- and row-level security, encryption by default, and audit logging. BigQuery follows a consumption-based pricing model, charging for storage and queries (on-demand or reserved capacity).
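Under the on-demand model, query cost scales with bytes scanned, which makes the arithmetic easy to sketch. The per-TiB rate below is an assumed example, not a quoted price; check current BigQuery pricing before relying on it.

```python
ASSUMED_USD_PER_TIB = 6.25  # illustrative on-demand rate per TiB scanned

def query_cost_usd(bytes_scanned: int, rate: float = ASSUMED_USD_PER_TIB) -> float:
    """Estimate on-demand cost for a query from the bytes it scans."""
    tib = bytes_scanned / 2**40
    return round(tib * rate, 4)

# A query scanning 500 GiB at the assumed rate:
print(query_cost_usd(500 * 2**30))
```

This is also why partitioning and clustering matter so much on BigQuery: anything that shrinks the bytes scanned shrinks the bill linearly. Reserved (capacity) pricing bills by slots instead, so this formula applies only to on-demand queries.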