
Updates and recent posts about Tor.
@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

OpenAI Agent Builder: A Complete Guide to Building AI Workflows Without Code

OpenAI’s Agent Builder drops the guardrails. It’s a no-code, drag-and-drop playground for building, testing, and shipping AI workflows - logic flows straight from your brain to the screen. Tweak interfaces in Widget Studio. Plug into real systems with the Agents SDK. Just one catch: it’s locked behind P..

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

Going down the rabbit hole of Postgres 18 features by Tudor Golubenco

PostgreSQL 18 just hit stable. Big swing! Async IO infrastructure is in. That means lower overhead, tighter storage control, and less CPU getting chewed up by I/O. Add direct IO, and the database starts flexing beyond traditional bottlenecks. OAuth 2.0? Native now. No hacks needed. UUIDv7? Built-in su..
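Postgres 18's built-in `uuidv7()` generates time-ordered UUIDs. The bit layout (from RFC 9562) can be sketched in a few lines of Python - a toy generator for illustration, not Postgres's implementation:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Toy UUIDv7 per the RFC 9562 layout: 48-bit Unix-ms timestamp up front."""
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF           # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (ts_ms & ((1 << 48) - 1)) << 80   # bits 127..80: millisecond timestamp
    value |= 0x7 << 76                        # bits  79..76: version = 7
    value |= rand_a << 64                     # bits  75..64: random
    value |= 0b10 << 62                       # bits  63..62: RFC 4122 variant
    value |= rand_b                           # bits  61..0:  random
    return uuid.UUID(int=value)

u1 = uuid7()
time.sleep(0.002)   # cross a millisecond boundary so the timestamps differ
u2 = uuid7()
print(u1.version, u1 < u2)  # 7 True
```

Because the timestamp occupies the most significant bits, later UUIDs sort after earlier ones - which is why UUIDv7 primary keys index far better than random UUIDv4s.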

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

Technical Tuesday: 10 best practices for building reliable AI agents in 2025

UiPath just dropped Agent Builder in Studio - a legit development environment for AI agents that can actually handle enterprise chaos. Think production-grade: modular builds, traceable steps, and failure handling that doesn’t flake under pressure. It’s wired for schema-driven prompts, tool versioning, a..

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

Write Deep Learning Code Locally and Run on GPUs Instantly

Modal cuts the drama out of deep learning ops. Devs write Python like usual, then fire off training, eval, and serving scripts to serverless GPUs - zero cluster wrangling. It handles data blobs, image builds, and orchestration. You focus on tuning with libraries like Unsloth, or serving via vLLM..

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

Serverless RL: Faster, Cheaper and More Flexible RL Training

Serverless RL, a new product from a collaboration between CoreWeave, Weights & Biases, and OpenPipe, offers fast training, lower costs, and simple model deployment. It saves time with no infra setup, faster feedback loops, and an easier entry into RL training..

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

The RAG Obituary: Killed by Agents, Buried by Context Windows

Agent-based setups are starting to edge out old-school RAG. As LLMs snag multi-million-token context windows and better task chops, the need for chunking, embeddings, and reranking starts to fade. Claude Code, for example, skips all that - with direct file access and smart navigation instead. Retrie..
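The "direct file access" approach boils down to grep-style search over raw files instead of an embedding index. A toy sketch of that retrieval primitive - the helper and file names are made up, not any product's actual implementation:

```python
# Toy agentic retrieval: instead of chunking + embedding a corpus, grep
# the files directly and hand the agent only the matching lines plus a
# little surrounding context.
import re
import tempfile
from pathlib import Path

def grep_files(root: Path, pattern: str, context: int = 1) -> list[str]:
    """Return matching lines (with `context` lines around them) from files under root."""
    rx = re.compile(pattern)
    hits = []
    for path in sorted(root.rglob("*.txt")):
        lines = path.read_text().splitlines()
        for i, line in enumerate(lines):
            if rx.search(line):
                lo, hi = max(0, i - context), min(len(lines), i + context + 1)
                hits.append(f"{path.name}:{i+1}: " + " / ".join(lines[lo:hi]))
    return hits

root = Path(tempfile.mkdtemp())
(root / "db.txt").write_text("intro\nretries use backoff\noutro\n")
(root / "api.txt").write_text("auth via tokens\n")
hits = grep_files(root, "backoff")
print(hits)  # ['db.txt:2: intro / retries use backoff / outro']
```

No index to build or keep in sync - the trade-off is a linear scan, which giant context windows make affordable for repo-sized corpora.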

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

How LogSeam Searches 500 Million Logs per Second

LogSeam rips through 500M log searches/sec and pushes 1.5+ TB/s throughput using Tigris’ geo-distributed object storage. It slashes log volume by 100× with Parquet + Zstandard compression. Then it spins up compute on the fly, right where the data lives - no long-running infrastructure, no laggy reads..

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

Ansible Service Module: Start, Stop, & Manage Services

The Ansible service module handles Linux and Windows without choking on init system quirks. One playbook can start, stop, enable, or restart anything - no matter the OS. Idempotent, so you don’t have to babysit state. Clean and repeatable. Bonus: it’s great for wrangling fleets. Think: coordinating servi..
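The module's typical shape, as a minimal play (the `webservers` group and `nginx` service are placeholders):

```yaml
# Idempotent: re-running this play changes nothing if nginx is already
# running and enabled. The generic service module picks the right init
# backend (systemd, SysV, upstart) per Linux host; Windows hosts use
# the analogous ansible.windows.win_service module.
- name: Ensure nginx is running and enabled at boot
  hosts: webservers
  become: true
  tasks:
    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```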

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

How AWS S3 serves 1 petabyte per second on top of slow HDDs

AWS S3 doesn’t need fancy hardware. It wrings performance out of cheap HDDs, log-structured merge trees, and erasure coding. The trick? Shard everything. Hit it in parallel. Randomized placement dodges hotspots. Hedged requests race the slowest links. And when things get lopsided, S3 rebalances - constant..
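Hedged requests are simple to sketch: send the request, and if it hasn't returned within a hedge delay, fire a duplicate at another replica and take whichever answers first. A toy version with simulated replica latencies (the replica names and timings are made up):

```python
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def fetch(replica: str, latency: float) -> str:
    """Simulated replica read with a fixed latency."""
    time.sleep(latency)
    return f"data from {replica}"

def hedged_fetch(replicas: dict[str, float], hedge_after: float = 0.05) -> str:
    """Try the first replica; if it's slow, race a second copy of the request."""
    with ThreadPoolExecutor() as pool:
        names = list(replicas)
        futures = [pool.submit(fetch, names[0], replicas[names[0]])]
        done, _ = wait(futures, timeout=hedge_after, return_when=FIRST_COMPLETED)
        if not done:  # primary missed the deadline: hedge to the next replica
            futures.append(pool.submit(fetch, names[1], replicas[names[1]]))
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

# The primary replica is lagging, so the hedged duplicate wins the race.
result = hedged_fetch({"replica-a": 0.5, "replica-b": 0.01})
print(result)  # data from replica-b
```

The cost is a small amount of duplicate work on the tail; the payoff is that one slow disk no longer sets your p99.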

@faun shared a link, 6 months, 3 weeks ago
FAUN.dev()

Seven Years of Firecracker

AWS is putting Firecracker microVMs to work in two fresh stacks: AgentCore, the new base layer for AI agents, and Aurora DSQL, a serverless, PostgreSQL-compatible database it just rolled out. AgentCore gives each agent session its own microVM. More isolation, less cross-talk - solid for multistep LLM wo..

Tor (The Onion Router) is an open-source network and software suite designed to protect user privacy and enable anonymous communication on the internet. It works by routing network traffic through a distributed, volunteer-run network of relays, encrypting data in multiple layers so that no single relay knows both the source and destination of the traffic. Tor is widely used to defend against traffic analysis, surveillance, and censorship.

By obscuring IP addresses and routing paths, it helps users browse the web anonymously, publish information safely, and access services without revealing their location or identity. The network supports standard web traffic as well as specialized .onion services, which allow websites and services to operate anonymously without exposing their physical hosting location.

Beyond web browsing, Tor is used as a foundational privacy layer for secure messaging, whistleblowing platforms, journalism, activism, academic research, and secure system administration. It is also integrated into many privacy-focused operating systems and tools. While Tor can reduce traceability, it does not make users invulnerable and must be used with proper operational security to avoid deanonymization risks. Tor is developed and maintained by the Tor Project, a nonprofit organization dedicated to advancing digital privacy and freedom worldwide.
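The layered ("onion") encryption described above can be sketched in a few lines of Python. This is a toy illustration using a SHA-256-derived XOR keystream - not Tor's actual protocol, which negotiates per-hop keys via circuit extension - and the relay keys are made up:

```python
import hashlib

def keystream(key: str, n: int) -> bytes:
    """Deterministic pseudorandom keystream from a shared key (toy, not real crypto)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def add_layer(blob: bytes, key: str) -> bytes:
    """XOR is its own inverse, so the same call adds or peels one layer."""
    ks = keystream(key, len(blob))
    return bytes(a ^ b for a, b in zip(blob, ks))

relay_keys = ["guard-key", "middle-key", "exit-key"]  # one shared key per relay

# Sender wraps the message once per relay, innermost layer first.
onion = b"hello from the client"
for key in reversed(relay_keys):
    onion = add_layer(onion, key)

# Each relay peels exactly one layer with its own key; the payload only
# becomes readable after the final hop, so no single relay sees both the
# plaintext and the sender.
for key in relay_keys:
    assert onion != b"hello from the client"   # still ciphertext mid-path
    onion = add_layer(onion, key)
print(onion)  # b'hello from the client'
```

The structural point survives the toy crypto: a relay holding one key can remove exactly one layer and learns only its neighbors in the circuit, never the full path.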