
Updates and recent posts about Slurm.
@varbear shared a link, 1 month, 1 week ago

Foreign hackers breached a US nuclear weapons plant via SharePoint flaws

Unpatched SharePoint flaws (CVE-2025-53770, CVE-2025-49704) cracked open the Kansas City National Security Campus in July. IT systems tied to 80% of U.S. non-nuclear weapons parts got compromised. Attackers—likely state-backed, Russian or Chinese—moved fast, hitting the zero-day RCE and spoofing bugs…

@varbear shared a link, 1 month, 1 week ago

Supply Chain Risk in VSCode Extension Marketplaces

Wiz dug up 550+ leaked secrets buried in 500+ public VSCode extensions—including 130+ live access tokens for VSCode Marketplace and OpenVSX. That’s a wide-open door to supply chain attacks through auto-updates. Microsoft reacted fast: dumped the breached tokens, rolled out pre-publish secret scanning…

@kala shared a link, 1 month, 1 week ago

Sora 2 in Azure AI Foundry: Create videos with responsible AI

OpenAI’s Sora 2 just dropped into public preview via the Azure AI Foundry API. It’s a multimodal video model aimed at serious use—enterprise safety, API-ready, built for scale. Azure didn’t stop there. It bundled in GPT-image-1, Flux 1.1, and Kontext Pro, pulling together a full-gen stack under one roof…

@kala shared a link, 1 month, 1 week ago

How Microsoft Evaluates LLMs in Azure AI Foundry: A Practical, End-to-End Playbook

Microsoft’s Azure AI Foundry just released a proper workflow for putting LLMs through their paces. Think offline/online tests, human-in-the-loop checks, automated scoring, and even custom evaluators—all wired into one system. At the heart of it: the new Azure AI Evaluation SDK. You can run it locally…

@kala shared a link, 1 month, 1 week ago

Claude Skills are awesome, maybe a bigger deal than MCP

Anthropic released Claude Skills—a lean way to snap specialized instructions and scripts into Claude without bloating the prompt. Each “skill” lives in a folder with Markdown and optional code. Frontmatter tags tell Claude when to load what. No need to cram everything into the context window—Claude…

@kala shared a link, 1 month, 1 week ago

OpenAI Needs $400 Billion In The Next 12 Months

OpenAI, Broadcom, NVIDIA, and AMD say they’ll deploy 10GW of AI compute by end of 2026. That includes custom chips and slews of 1GW data centers. What they didn’t say: where, when, or how. No sites named. No shovels in dirt. OpenAI alone aims for 250GW by 2033—a moonshot that needs $400B in the next 12 months…

@kala shared a link, 1 month, 1 week ago

Structured Vibe Coding: A Smarter Way to Build AI Agents with GitHub Copilot

A fresh approach called structured vibe coding blends human-style team habits with AI workflows. Specs, GitHub Issues, and Copilot now pull agents into the loop like actual teammates. Powered by GitHub Copilot Coding Agents and Azure AI Foundry, devs can run full AI-driven sprints—spec to PR—right inside…

@devopslinks shared a link, 1 month, 1 week ago

How AI can help your DevSecOps pipeline

AI is sliding into DevSecOps and turning security into less of a slog. Tools like Darktrace PREVENT, CrowdStrike Falcon, and Microsoft Security Copilot aren't just watching—they're flagging weird behavior, proposing fixes, and unclogging patch pipelines inside CI/CD. The shift: DevSecOps is on its way to…

@devopslinks shared a link, 1 month, 1 week ago

How Shopify Handles 30TB of Data Every Minute with a Monolithic Architecture

Shopify handles billions of Black Friday requests on a modular monolith, built with Ruby on Rails and kept in check by Packwerk. Domain boundaries are enforced. Chaos averted. Inside, it blends Hexagonal Architecture, isolated Pods, and real-time Kafka pipes. The system scales without fracturing into microservices…

@devopslinks shared a link, 1 month, 1 week ago

How I Block All 26 Million Of Your Curl Requests

A developer built a razor-sharp TLS fingerprinting and blocking tool—all in kernel space—with eBPF and XDP. It hooks into incoming packets, scrapes TLS Client Hello messages, and cranks out simplified JA4-style hashes from their cipher suite lists. The fun part? It's running under tight stack limits…
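
The kernel-space details stay with the author, but the core idea is easy to picture in user space: take the cipher suites a client offers, normalize the list, and hash it down to one short identifier. A rough Python sketch for illustration only; the hashing scheme here is a simplification, not the actual JA4 specification and not the post's eBPF/XDP implementation:

```python
"""Illustration: collapsing a TLS ClientHello's cipher-suite list into a
short fingerprint. A user-space simplification of the idea described in the
post, not the author's eBPF/XDP code and not the real JA4 algorithm."""
import hashlib


def cipher_suite_fingerprint(cipher_suites: list[int]) -> str:
    # Normalize the offered suites (sorted, hex-encoded, comma-joined),
    # then hash and truncate so one short string stands in for the client.
    normalized = ",".join(f"{c:04x}" for c in sorted(cipher_suites))
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]


# Hypothetical client offering the three standard TLS 1.3 suites.
print(cipher_suite_fingerprint([0x1301, 0x1302, 0x1303]))
```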

Slurm Workload Manager is an open-source, fault-tolerant, and highly scalable cluster management and scheduling system widely used in high-performance computing (HPC). Designed to operate without kernel modifications, Slurm coordinates thousands of compute nodes by allocating resources, launching and monitoring jobs, and managing contention through its flexible scheduling queue.

At its core, Slurm uses a centralized controller (slurmctld) to track cluster state and assign work, while lightweight daemons (slurmd) on each node execute tasks and communicate hierarchically for fault tolerance. Optional components like slurmdbd and slurmrestd extend Slurm with accounting and REST APIs. A rich set of commands—such as srun, squeue, scancel, and sinfo—gives users and administrators full visibility and control.
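
Those commands are usually run from a shell, but they script cleanly too. A minimal sketch, assuming a working Slurm installation with sbatch, squeue, and scancel on the PATH; the username and the submitted workload below are placeholders:

```python
"""Minimal sketch: driving Slurm's command-line tools from Python."""
import subprocess


def submit(command: str) -> str:
    # sbatch --wrap turns a one-line command into a batch job; stdout is
    # typically "Submitted batch job <id>", so take the last token as the id.
    out = subprocess.run(
        ["sbatch", "--wrap", command],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.strip().split()[-1]


def queue_for(user: str) -> str:
    # squeue -u filters the queue down to one user's pending/running jobs.
    return subprocess.run(
        ["squeue", "-u", user], capture_output=True, text=True, check=True,
    ).stdout


def cancel(job_id: str) -> None:
    subprocess.run(["scancel", job_id], check=True)


if __name__ == "__main__":
    job_id = submit("hostname")         # trivial placeholder workload
    print("submitted job", job_id)
    print(queue_for("alice"))           # "alice" is a placeholder username
```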

Slurm’s modular plugin architecture supports nearly every aspect of cluster operation, including authentication, MPI integration, container runtimes, resource limits, energy accounting, topology-aware scheduling, preemption, and GPU management via Generic Resources (GRES). Nodes are organized into partitions, enabling sophisticated policies for job size, priority, fairness, oversubscription, reservation, and resource exclusivity.
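
As a concrete illustration of how partitions and GRES surface to users, here is a minimal sketch of a GPU batch job; the partition name, GPU count, and limits are placeholders and depend entirely on how a given cluster's slurm.conf defines its partitions and generic resources:

```python
"""Minimal sketch: submitting a GPU job to a named partition via GRES."""
import subprocess

# A conventional batch script: #SBATCH directives select the partition and
# request GPUs through GRES, plus CPUs and a walltime; the body is the work.
job_script = """#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:2
#SBATCH --cpus-per-task=8
#SBATCH --time=01:00:00
srun nvidia-smi
"""

# sbatch reads the job script from standard input when no file is given.
result = subprocess.run(
    ["sbatch"], input=job_script, capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```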

Widely adopted across academia, research labs, and enterprise HPC environments, Slurm serves as the backbone for many of the world’s top supercomputers, offering a battle-tested, flexible, and highly configurable framework for large-scale distributed computing.