Updates and recent posts about Gemini 3.
@devopslinks shared a link, 12 hours ago

Failure is inevitable: Learning from a large outage, and building for reliability in depth at Datadog

Datadog ditched its “never fail” mindset after a March 2023 meltdown knocked out half its Kubernetes nodes and took major user features down with them. The fix? A full-stack rethink built around graceful degradation. The team added disk-based persistence at intake, live-data prioritization, QoS-aware re..
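
A rough sketch of what that shape of mitigation can look like in practice, assuming nothing about Datadog's actual implementation: an intake handler that spills events to a local disk spool when the downstream store is unreachable, then drains the spool newest-first so live data takes priority over backlog. The spool path and function names are invented for illustration.

```python
# Illustrative sketch only (not Datadog's code): graceful degradation at
# intake via a disk-backed spool, replayed newest-first.
import json
import time
from pathlib import Path

SPOOL = Path("spool")          # assumed local spool directory
SPOOL.mkdir(exist_ok=True)

def ingest(event: dict, downstream_write) -> None:
    """Keep accepting data during an outage instead of failing hard."""
    try:
        downstream_write(event)                    # normal path
    except ConnectionError:
        path = SPOOL / f"{time.time_ns()}.json"    # disk-based persistence
        path.write_text(json.dumps(event))

def drain_spool(downstream_write) -> None:
    """Replay spooled events newest-first (live-data prioritization)."""
    for path in sorted(SPOOL.glob("*.json"), reverse=True):
        downstream_write(json.loads(path.read_text()))
        path.unlink()
```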

@devopslinks shared a link, 12 hours ago

You’ll never see attrition referenced in an RCA

Lorin Hochstein argues that while high-profile engineer attrition is often speculated to contribute to major outages, it is universally absent from public Root Cause Analyses (RCAs). This exclusion occurs because public RCAs aim to reassure customers by focusing on technical fixes, whereas attrition..

@devopslinks shared a link, 12 hours ago

Declarative Action Architecture

The Declarative Action Architecture (DAA) is a scalable E2E testing pattern that separates concerns across three distinct layers. The Test Layer is 100% declarative, stating what is being tested without any procedural logic, making tests read like documentation. The core Action Layer implements the execut..
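
To make the layer split concrete, here is a minimal, hypothetical sketch of the pattern as summarized above; the class and step names are invented, and a fake in-memory driver stands in for a real browser automation tool.

```python
# Hypothetical sketch of the DAA split: a declarative test layer on top of an
# action layer that owns all procedural logic.

class FakeDriver:
    """Stand-in for a real page/driver object (e.g. Playwright, Selenium)."""
    def __init__(self):
        self.cart = 0
    def goto(self, url): pass
    def fill(self, selector, value): pass
    def click(self, selector):
        if selector == "#add-to-cart":
            self.cart += 1
    def text(self, selector):
        return str(self.cart)

# --- Action Layer: implements *how* each step is executed ----------------
class Actions:
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.goto("/login")
        self.driver.fill("#user", user)
        self.driver.fill("#password", password)
        self.driver.click("#submit")

    def add_to_cart(self, sku):
        self.driver.goto(f"/products/{sku}")
        self.driver.click("#add-to-cart")

    def assert_cart_count(self, expected):
        assert int(self.driver.text("#cart-count")) == expected

# --- Test Layer: 100% declarative, reads like documentation --------------
def test_adding_a_product_updates_the_cart():
    actions = Actions(FakeDriver())
    actions.login("demo@example.com", "secret")
    actions.add_to_cart("SKU-123")
    actions.assert_cart_count(1)

if __name__ == "__main__":
    test_adding_a_product_updates_the_cart()
    print("ok")
```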

@devopslinks shared a link, 12 hours ago

Comparing AWS Lambda Arm64 vs x86_64 Performance Across Multiple Runtimes in Late 2025

A new open-source benchmark looked at 183,000 AWS Lambda invocations, and arm64 beats x86_64 across the board in both cost and speed. Rust on arm64 with SHA-256 tuned in assembly? It clocks in 4–5× faster than x86 in CPU-heavy tasks. Cold starts are snappy too: 5–8× quicker than Node.js and Python...
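
For anyone wanting to run a comparison like this, the switch is a single setting. A minimal sketch using boto3 follows; the function name, role ARN, and zip path are placeholders, and the benchmark's Rust/assembly workloads are not reproduced here.

```python
# Minimal sketch: deploying the same handler on arm64 with boto3; change
# Architectures to ["x86_64"] to benchmark the other side.
import boto3

lambda_client = boto3.client("lambda")

with open("function.zip", "rb") as f:        # placeholder deployment package
    package = f.read()

lambda_client.create_function(
    FunctionName="arch-benchmark-demo",                     # placeholder
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-exec",      # placeholder
    Handler="app.handler",
    Code={"ZipFile": package},
    Architectures=["arm64"],
    MemorySize=512,
    Timeout=30,
)
```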

@devopslinks shared a link, 12 hours ago

The story of how we almost got hacked

Team Invictus caught a BEC attempt using WeTransfer to slip in a fake Microsoft 365 login page powered by EvilProxy. Classic Adversary-in-the-Middle move, but dressed up with a slick delivery package. Digging deeper, the team mapped the attacker’s setup and found something bigger: a credential grab c..

@kaptain shared an update, 13 hours ago

Agent Sandbox Brings Kernel-Level Guardrails to AI Agents on Kubernetes

Kubernetes, gVisor, Kata Containers, Google Kubernetes Engine (GKE)

Agent Sandbox, a new Kubernetes primitive introduced at KubeCon NA 2025, gives AI agents isolated, kernel-level sandboxes (built on gVisor and Kata Containers) on Kubernetes and Google Kubernetes Engine.
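
The Agent Sandbox API itself isn't quoted in the post, so as a rough illustration of the underlying idea (kernel-level isolation on Kubernetes), the sketch below launches a pod under a gVisor RuntimeClass with the official Python client. It assumes the cluster already defines a RuntimeClass named "gvisor" (as GKE Sandbox does); the image and names are placeholders.

```python
# Rough illustration (not the Agent Sandbox API): run a workload under a
# gVisor RuntimeClass for kernel-level isolation.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-agent-demo"),
    spec=client.V1PodSpec(
        runtime_class_name="gvisor",   # kernel-level guardrail for the pod
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="agent",
                image="python:3.12-slim",  # placeholder agent image
                command=["python", "-c", "print('hello from the sandbox')"],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```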

@devopslinks shared an update, 13 hours ago

AWS Unveils Graviton5: A 192-Core Leap in Cloud Performance and Efficiency

Amazon EC2, Amazon Web Services

AWS introduces Graviton5-based EC2 M9g instances, boosting performance by 25% and enhancing scalability while reducing costs.

@varbear shared an update, 16 hours ago

Tor Goes Rust: Introducing Arti, a New Foundation for the Future of Tor

Rust, Tor, Arti

Arti, a Rust-based Tor implementation whose development is funded by Zcash, aims to improve security and efficiency by addressing the limitations of the current C-based Tor.

@varbear added a new tool, Arti, 16 hours, 53 minutes ago.
@varbear added a new tool, Tor, 17 hours, 4 minutes ago.

Gemini 3 is Google’s third-generation large language model family, designed to power advanced reasoning, multimodal understanding, and long-running agent workflows across consumer and enterprise products. It represents a major step forward in factual reliability, long-context comprehension, and tool-driven autonomy.

At its core, Gemini 3 emphasizes low hallucination rates, deep synthesis across large information spaces, and multi-step reasoning. Models in the Gemini 3 family are trained with scaled reinforcement learning for search and planning, enabling them to autonomously formulate queries, evaluate results, identify gaps, and iterate toward higher-quality outputs.
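
Purely as an illustration of that query, evaluate, identify-gaps, iterate loop (this is not Gemini's internals; search, grade, and find_gaps are hypothetical callables the caller would supply):

```python
# Hypothetical sketch of an iterative research loop: formulate queries,
# evaluate results, identify gaps, and iterate until the gaps close.
def research(question: str, search, grade, find_gaps, max_rounds: int = 3) -> list[str]:
    notes: list[str] = []
    queries = [question]
    for _ in range(max_rounds):
        for q in queries:
            notes.extend(r for r in search(q) if grade(r) > 0.5)  # keep useful hits
        gaps = find_gaps(question, notes)   # what remains unanswered?
        if not gaps:
            break
        queries = gaps                      # next round targets the gaps
    return notes
```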

Gemini 3 powers advanced agents such as Gemini Deep Research, where it excels at producing well-structured, citation-rich reports by combining web data, uploaded documents, and proprietary sources. The model supports very large context windows, multimodal inputs (text, images, documents), and structured outputs like JSON, making it suitable for research, finance, science, and enterprise knowledge work.
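
As a small illustration of the structured-output capability, the sketch below asks for JSON that conforms to a schema via the google-genai Python SDK. The model id is an assumption (substitute whatever Gemini 3 model your project exposes), and an API key is expected in the environment.

```python
# Hedged example: structured JSON output with the google-genai SDK.
from google import genai
from pydantic import BaseModel

class Finding(BaseModel):
    claim: str
    source_url: str

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model id; check your model list
    contents="List three findings about serverless cost optimization.",
    config={
        "response_mime_type": "application/json",
        "response_schema": list[Finding],
    },
)
print(response.text)  # a JSON array matching the Finding schema
```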

Gemini 3 is available through Google’s AI platforms and APIs, including the Interactions API, and is being integrated across products such as Google Search, NotebookLM, Google Finance, and the Gemini app. It is positioned as Google’s most factual and research-capable model generation to date.