
Updates and recent posts about GPT-5.3-Codex.
Link
@faun shared a link, 7 months ago
FAUN.dev()

Uncommon Uses of Common Python Standard Library Functions

A fresh guide gives old Python friends a second look—turns out, tools like **itertools.groupby**, **zip**, **bisect**, and **heapq** aren’t just standard; they’re slick solutions to real problems. Think run-length encoding, matrix transposes, or fast, sorted inserts without bringing in another depen.. read more  
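The tricks named above can be sketched in a few lines of standard-library Python (a minimal illustration, not the guide's own code):

```python
import bisect
from itertools import groupby

# Run-length encoding: groupby collapses consecutive repeats into runs.
def rle(s):
    return [(ch, len(list(run))) for ch, run in groupby(s)]

# Matrix transpose: zip(*rows) pairs up the i-th element of every row.
def transpose(matrix):
    return [list(col) for col in zip(*matrix)]

# Fast sorted insert: bisect.insort finds the slot in O(log n).
def sorted_insert(seq, value):
    bisect.insort(seq, value)
    return seq

print(rle("aaabccd"))                     # [('a', 3), ('b', 1), ('c', 2), ('d', 1)]
print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
print(sorted_insert([1, 3, 7], 5))        # [1, 3, 5, 7]
```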

Link
@faun shared a link, 7 months ago
FAUN.dev()

The productivity paradox of AI coding assistants

A July 2025 METR trial dropped a twist: seasoned devs using Cursor with Claude 3.5/3.7 moved **19% slower**, while believing they were **20% faster**. Chalk it up to AI-induced confidence inflation. Faros AI tracked over **10,000 developers**. More AI didn’t mean more done. It meant more juggling, .. read more  

Link
@faun shared a link, 7 months ago
FAUN.dev()

Implementing Vector Search from Scratch: A Step-by-Step Tutorial

Search is a fundamental problem in computing, and vector search aims to match meanings rather than exact words. By converting queries and documents into numerical vectors and calculating similarity, vector search retrieves contextually relevant results. In this tutorial, a vector search system is bu.. read more  
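The core loop of such a system fits in a few lines: embed, score by cosine similarity, rank. A brute-force sketch (illustrative, not the tutorial's code):

```python
import math

def cosine_similarity(a, b):
    # Similarity of two vectors: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec, doc_vecs, top_k=2):
    # Brute-force scan: score every document, return the top-k indices.
    scored = [(cosine_similarity(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:top_k]]

docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(search([1.0, 0.05], docs))  # [0, 1] — the two documents closest in direction
```

Real systems replace the linear scan with an approximate index (HNSW, IVF), but the scoring logic is the same.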

Link
@faun shared a link, 7 months ago
FAUN.dev()

5 Free AI Courses from Hugging Face

Hugging Face just rolled out a sharp set of free AI courses. Real topics, real tools—think **AI agents, LLMs, diffusion models, deep RL**, and more. It’s hands-on from the jump, packed with frameworks like LangGraph, Diffusers, and Stable Baselines3. You don’t just read about models—you build ‘em i.. read more  

Link
@faun shared a link, 7 months ago
FAUN.dev()

Inside NVIDIA GPUs: Anatomy of high performance matmul kernels

NVIDIA Hopper packs serious architectural tricks. At the core: **Tensor Memory Accelerator (TMA)**, **tensor cores**, and **swizzling**—the trio behind async, cache-friendly matmul kernels that flirt with peak throughput. But folks aren't stopping at cuBLAS. They're stacking new tactics: **warp-gro.. read more  
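The cache-tiling idea those kernels are built on can be shown in plain Python (a conceptual sketch only; real Hopper kernels use TMA, tensor cores, and swizzled layouts, none of which appear here):

```python
def matmul_tiled(A, B, tile=2):
    # Blocked matrix multiply: work on small tiles so each block of A and B
    # is reused many times while it is still "hot" in cache.
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        acc = C[i][j]
                        for kk in range(k0, min(k0 + tile, k)):
                            acc += A[i][kk] * B[kk][j]
                        C[i][j] = acc
    return C

print(matmul_tiled([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```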

Link
@faun shared a link, 7 months ago
FAUN.dev()

Becoming a Research Engineer at a Big LLM Lab - 18 Months of Strategic Career Development

To land a role at a big LLM lab like Mistral, mix efficient **tactical** moves (like LeetCode practice) with **strategic** investments, like building a strong portfolio and a solid network. Balance is key; aim to impress and prepare well without overlooking the power of strategy in shaping a successful career... read more  

Link
@faun shared a link, 7 months ago
FAUN.dev()

Building a Natural Language Interface for Apache Pinot with LLM Agents

MiQ plugged **Google’s Agent Development Kit** into their stack to spin up **LLM agents** that turn plain English into clean, validated SQL. These agents speak directly to **Apache Pinot**, firing off real-time queries without the usual parsing pain. Behind the scenes, it’s a slick handoff: NL2SQL .. read more  
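One piece of that validation step can be sketched as a simple guardrail that rejects anything but read-only queries on known tables before the SQL reaches the engine. (A hypothetical helper with a made-up allow-list, not MiQ's actual code.)

```python
import re

ALLOWED_TABLES = {"ad_events", "campaign_stats"}  # hypothetical allow-list

def validate_generated_sql(sql):
    """Guardrail sketch: accept only SELECT statements that reference
    allow-listed tables before handing LLM-generated SQL to the engine."""
    stripped = sql.strip().rstrip(";")
    if not re.match(r"(?i)^select\b", stripped):
        return False, "only SELECT statements are allowed"
    tables = re.findall(r"(?i)\bfrom\s+([A-Za-z_][A-Za-z0-9_]*)", stripped)
    unknown = [t for t in tables if t.lower() not in ALLOWED_TABLES]
    if unknown:
        return False, f"unknown tables: {unknown}"
    return True, "ok"

print(validate_generated_sql("SELECT country, COUNT(*) FROM ad_events GROUP BY country"))
print(validate_generated_sql("DROP TABLE ad_events"))  # rejected
```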

Link
@faun shared a link, 7 months ago
FAUN.dev()

Jupyter Agents: training LLMs to reason with notebooks

Hugging Face dropped an open pipeline and dataset for training small models—think **Qwen3-4B**—into sharp **Jupyter-native data science agents**. They pulled curated Kaggle notebooks, whipped up synthetic QA pairs, added lightweight **scaffolding**, and went full fine-tune. Net result? A **36% jump .. read more  

Link
@faun shared a link, 7 months ago
FAUN.dev()

Shai-Hulud npm Supply Chain Attack

Malicious npm packages just leveled up: this one dropped a self-spreading worm that hijacks repos and leaks secrets the moment it lands. It abuses `postinstall` scripts to run TruffleHog and swipe tokens straight from your codebase. Then it uses GitHub Actions to exfiltrate the loot and auto-publis.. read more  
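Because the attack's entry point is lifecycle hooks that run arbitrary code on `npm install`, one simple defensive check is to scan a `package.json` for them before installing. A small sketch (illustrative; real audits should also inspect transitive dependencies):

```python
import json

# Lifecycle hooks that execute automatically during `npm install`.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def risky_scripts(package_json_text):
    # Flag lifecycle hooks that run arbitrary code at install time --
    # the mechanism the worm described above abuses.
    pkg = json.loads(package_json_text)
    scripts = pkg.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}

example = '{"name": "demo", "scripts": {"postinstall": "node setup.js", "test": "jest"}}'
print(risky_scripts(example))  # {'postinstall': 'node setup.js'}
```

Setting `ignore-scripts=true` in npm's config blocks these hooks outright, at the cost of breaking packages that legitimately need them.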

Link
@faun shared a link, 7 months ago
FAUN.dev()

How FinOps Drives Value for Every Engineering Dollar

Duolingo’s FinOps crew didn’t just track cloud costs—they wired up sharp, automated observability across 100+ microservices. Real-time alerts now catch AI and infra spend spikes before they torch the budget. They sliced TTS costs by 40% with in-memory caching. Dumped pricey CloudWatch metrics for P.. read more  
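The caching win is the easiest to picture: if the same phrase is synthesized repeatedly, memoizing the result means paying the API cost once. A toy sketch of the idea (not Duolingo's implementation; `synthesize` is a stand-in for a real TTS call):

```python
from functools import lru_cache

CALLS = {"count": 0}  # track how many "expensive" calls actually happen

@lru_cache(maxsize=4096)
def synthesize(text, voice="en-US"):
    # Stand-in for an expensive TTS API call; lru_cache ensures each
    # (text, voice) pair is synthesized at most once per process.
    CALLS["count"] += 1
    return f"<audio for {text!r} in {voice}>"

synthesize("hello")
synthesize("hello")    # served from cache -- no second API call
print(CALLS["count"])  # 1
```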

GPT-5.3-Codex is OpenAI’s advanced agentic coding model, designed to go beyond writing code and operate as a general-purpose collaborator on a computer. It builds on GPT-5.2-Codex by combining stronger coding performance with improved reasoning and professional knowledge, while running about 25% faster. The model is optimized for long-running tasks that involve research, tool use, and complex execution, and it performs at the top of industry benchmarks such as SWE-Bench Pro and Terminal-Bench.

Unlike earlier Codex models that focused primarily on code generation and review, GPT-5.3-Codex can reason, plan, and act across the full software lifecycle. It supports activities such as debugging, deploying, monitoring, writing product requirement documents, creating tests, and analyzing metrics. It can also autonomously build and iterate on complex applications and better interpret underspecified prompts, producing more complete and production-ready results by default.

A defining feature of GPT-5.3-Codex is its interactive, agentic workflow. Users can steer the model while it is working, receive progress updates, and adjust direction without losing context, making it feel more like a teammate than a batch automation tool. The model was even used internally to help debug its own training and deployment processes. GPT-5.3-Codex is available through paid ChatGPT plans in the Codex app, CLI, IDE extension, and web, with API access planned for the future.