
Updates from BAO...
Link
@faun shared a link, 1 month, 1 week ago

Uncommon Uses of Common Python Standard Library Functions

A fresh guide gives old Python friends a second look—turns out, tools like **itertools.groupby**, **zip**, **bisect**, and **heapq** aren’t just standard; they’re slick solutions to real problems. Think run-length encoding, matrix transposes, or fast, sorted inserts without bringing in another depen..
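The tricks the guide calls out fit in a few lines of standard-library Python. A minimal illustration (not the guide's own code): `groupby` clusters consecutive equal items for run-length encoding, `zip(*rows)` collects columns for a transpose, and `bisect.insort` keeps a list sorted without re-sorting.

```python
import bisect
import heapq
from itertools import groupby

# Run-length encoding: groupby yields one (key, run) pair per run of equal items.
def run_length_encode(s):
    return [(ch, len(list(run))) for ch, run in groupby(s)]

# Matrix transpose: zip(*rows) pairs up the i-th element of every row.
def transpose(matrix):
    return [list(col) for col in zip(*matrix)]

# Sorted insert: O(log n) search plus one shift, no full re-sort.
scores = [10, 30, 50]
bisect.insort(scores, 42)

print(run_length_encode("aaabbc"))        # [('a', 3), ('b', 2), ('c', 1)]
print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
print(scores)                             # [10, 30, 42, 50]
print(heapq.nsmallest(2, [5, 1, 8, 3]))   # [1, 3] without sorting everything
```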

Link
@faun shared a link, 1 month, 1 week ago

Authentication Explained: When to Use Basic, Bearer, OAuth2, JWT & SSO

Modern apps don’t just check passwords—they rely on **API tokens**, **OAuth**, and **Single Sign-On (SSO)** to know who’s knocking before they open the door...
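On the wire, the difference between Basic and Bearer is just the shape of one header. A hedged stdlib sketch (illustrative helpers, not any particular framework's API): Basic sends base64-encoded credentials on every request, while Bearer sends a token issued after a separate login step, so the password itself never travels.

```python
import base64

# Basic auth: "user:password", base64-encoded. Only safe over HTTPS,
# and the credential rides along on every single request.
def basic_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Bearer auth: an opaque or signed token (e.g. an OAuth2 access token
# or a JWT) obtained from an auth server beforehand.
def bearer_header(token):
    return {"Authorization": f"Bearer {token}"}

print(basic_header("alice", "s3cret"))
print(bearer_header("demo-access-token"))
```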

Link
@faun shared a link, 1 month, 1 week ago

Users Only Care About 20% of Your Application

Modern apps burst with features most people never touch. Users stick to their favorite 20%. The rest? Frustration, bloat, ignored edge cases. Tools like **VS Code**, **Slack**, and **Notion** nail it by staying lean at the core and letting users stack what they need. Extensions, plug-ins, integrati..

Link
@faun shared a link, 1 month, 1 week ago

Privacy for subdomains: the solution

A two-container setup using **acme.sh** gets Let's Encrypt certs running on a Synology NAS—thanks, Docker. No built-in Certbot support? No problem. Cloudflare DNS API token handles auth. Scheduled tasks handle renewal...

Link
@faun shared a link, 1 month, 1 week ago

Organize your Slack channels by “How Often”, not “What” - Aggressively Paraphrasing Me

One dev rewired their Slack setup by **engagement frequency**—not subject. Channels got sorted into tiers like “Read Now” and “Read Hourly,” cutting through noise and saving brainpower. It riffs off the **Eisenhower Matrix**, letting priorities shift with projects, not burn people out...

Link
@faun shared a link, 1 month, 1 week ago

Building a Natural Language Interface for Apache Pinot with LLM Agents

MiQ plugged **Google’s Agent Development Kit** into their stack to spin up **LLM agents** that turn plain English into clean, validated SQL. These agents speak directly to **Apache Pinot**, firing off real-time queries without the usual parsing pain. Behind the scenes, it’s a slick handoff: NL2SQL ..
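One step in any NL2SQL handoff is validating the generated SQL before it reaches the database. The sketch below is hypothetical — MiQ's agents use Google's Agent Development Kit, not this code, and `nl_to_sql` stands in for the LLM call — but it shows the guard-rail idea: accept only a single read-only SELECT.

```python
import re

ALLOWED = re.compile(r"^\s*SELECT\b", re.IGNORECASE)
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE)\b", re.IGNORECASE)

def validate_sql(sql: str) -> bool:
    """Accept only a single read-only SELECT statement."""
    trailing = 1 if sql.rstrip().endswith(";") else 0
    if sql.count(";") > trailing:  # reject stacked statements
        return False
    return bool(ALLOWED.match(sql)) and not FORBIDDEN.search(sql)

def nl_to_sql(question: str) -> str:
    # Stand-in for the LLM translation step.
    return "SELECT campaign, SUM(impressions) FROM ads GROUP BY campaign"

sql = nl_to_sql("impressions per campaign?")
if validate_sql(sql):
    print("would send to Pinot:", sql)  # the query-execution step is omitted here
```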

Link
@faun shared a link, 1 month, 1 week ago

Jupyter Agents: training LLMs to reason with notebooks

Hugging Face dropped an open pipeline and dataset for training small models—think **Qwen3-4B**—into sharp **Jupyter-native data science agents**. They pulled curated Kaggle notebooks, whipped up synthetic QA pairs, added lightweight **scaffolding**, and went full fine-tune. Net result? A **36% jump ..

Link
@faun shared a link, 1 month, 1 week ago

Implementing Vector Search from Scratch: A Step-by-Step Tutorial

Search is a fundamental problem in computing, and vector search aims to match meanings rather than exact words. By converting queries and documents into numerical vectors and calculating similarity, vector search retrieves contextually relevant results. In this tutorial, a vector search system is bu..
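The core retrieval loop is small enough to sketch with the standard library. The tutorial presumably uses learned embeddings; this toy version swaps in bag-of-words vectors just to show the mechanics of embed, score by cosine similarity, and rank.

```python
import math
from collections import Counter

# Toy "embedding": a bag-of-words count vector.
def embed(text):
    return Counter(text.lower().split())

# Cosine similarity between two sparse count vectors.
def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Rank every document by similarity to the query, return the top k.
def search(query, docs, top_k=2):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

docs = [
    "vector search matches meaning",
    "cats sleep most of the day",
    "semantic search uses vector similarity",
]
print(search("vector similarity search", docs, top_k=1))
```

Real systems replace `embed` with a neural encoder and the linear scan with an approximate nearest-neighbor index, but the scoring logic is the same.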

Link
@faun shared a link, 1 month, 1 week ago

5 Free AI Courses from Hugging Face

Hugging Face just rolled out a sharp set of free AI courses. Real topics, real tools—think **AI agents, LLMs, diffusion models, deep RL**, and more. It’s hands-on from the jump, packed with frameworks like LangGraph, Diffusers, and Stable Baselines3. You don’t just read about models—you build ‘em i..

Link
@faun shared a link, 1 month, 1 week ago

Inside NVIDIA GPUs: Anatomy of high performance matmul kernels

NVIDIA Hopper packs serious architectural tricks. At the core: **Tensor Memory Accelerator (TMA)**, **tensor cores**, and **swizzling**—the trio behind async, cache-friendly matmul kernels that flirt with peak throughput. But folks aren't stopping at cuBLAS. They're stacking new tactics: **warp-gro..

BAO is a team of Tech and Product headhunters bringing transparency to recruitment. Based in Paris and Bordeaux, they have made the startup world their playground by choosing to work on only a handful of roles at a time. Why? Because they see headhunting as a sprint run hand in hand with their clients: they give their partner startups as much visibility and advice as possible.

When they founded BAO in 2019, Baptiste and Lucas decided to put listening at the heart of their work. Curiosity, smiles, and empathy are the one trait the whole team shares!

Word of mouth is central to how they recruit: every recruiter cultivates their own network, knowing that proximity leads to great encounters.

Working at BAO means wanting to meet people with fascinating backgrounds and build lasting relationships with them. It also means gaining autonomy while being part of a team whose members cheer each other on.

It's sales without having to be aggressive, fast career growth in an exciting ecosystem, and an ambitious work environment that doesn't take itself too seriously.