
Content: Updates and recent posts about AIStor.
@faun shared a link, 6 months, 2 weeks ago
FAUN.dev()

We built an MCP server so Claude can access your incidents

Incident.io dropped an open-source MCP server in Go that plugs Claude into their API using the Model Context Protocol. That means Claude can now ask questions, spin up incidents, and dig into timelines, just by talking. The server translates Claude's prompts into REST calls, turning AI babble into real…
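The prompt-to-REST translation at the heart of such a server can be sketched in a few lines. Tool names, endpoint paths, and the base URL below are hypothetical illustrations, not incident.io's actual API surface:

```python
# Sketch of the translation layer an MCP server performs: a tool call
# coming from the model becomes a plain REST request against the
# incident API. All routes and names here are placeholders.
import json
from urllib.parse import urlencode

BASE_URL = "https://api.example-incidents.com/v1"  # placeholder host

# Map MCP tool names to (HTTP method, path template).
TOOL_ROUTES = {
    "list_incidents": ("GET", "/incidents"),
    "create_incident": ("POST", "/incidents"),
    "get_timeline": ("GET", "/incidents/{id}/timeline"),
}

def translate_tool_call(name: str, arguments: dict):
    """Turn an MCP tool call into (method, url, body)."""
    method, path = TOOL_ROUTES[name]
    # Fill path parameters (e.g. {id}) from the arguments.
    path = path.format(**{k: arguments.pop(k) for k in list(arguments)
                          if "{" + k + "}" in path})
    if method == "GET":
        url = BASE_URL + path + ("?" + urlencode(arguments) if arguments else "")
        return method, url, None
    return method, BASE_URL + path, json.dumps(arguments)
```

The real server would execute the resulting request with the user's API token and hand the JSON response back to the model as the tool result.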


AWS Lambda now supports GitHub Actions to simplify function deployment

AWS Lambda just got a smoother ride to prod. There's now a native GitHub Actions integration: no more DIY scripts to ship your serverless. On commit, the new action packages your code, wires up IAM via OIDC, and deploys using either .zip bundles or containers. All from a tidy, declarative GitHub workflow.
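As a sketch of the moving parts, here is a minimal workflow built from long-standing primitives (OIDC via `aws-actions/configure-aws-credentials` plus the AWS CLI); the new native action rolls steps like these into a single declarative step. The role ARN, region, and function name are placeholders:

```yaml
# Hypothetical deploy workflow; role ARN, region, and function name
# are placeholders, not real resources.
name: deploy-lambda
on:
  push:
    branches: [main]
permissions:
  id-token: write   # required for OIDC federation with AWS
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/lambda-deploy
          aws-region: us-east-1
      - run: |
          zip -r function.zip .
          aws lambda update-function-code \
            --function-name my-function \
            --zip-file fileb://function.zip
```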


Pinterest Uncovers Rare Search Failure During Migration to Kubernetes

Pinterest hit a weird one-in-a-million query mismatch during its search infra move to Kubernetes. The culprit? A slippery timing bug. To catch it, engineers pulled out every trick: live traffic replays, their own diff tools, hybrid rollouts layered on both the legacy and K8s stacks. Painful, but it…


Who does the unsexy but essential work for open source?

Oracle led the line-count race in the Linux 6.1 kernel release, beating out flashier open source names. Most of its work isn't headline material. It's deep-core stuff: memory management tweaks, block device updates, the quiet machinery real systems run on.


Terraform Validate Disagrees with Terraform Docs

Terraform's CLI will throw errors on configs that match the docs, because your local provider schema might be stale or out of sync. Docs follow the latest release. Your machine might not. So even supported fields can break validation. Love that for us.
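One mitigation is to pin the provider version your config was written against, so `terraform validate` checks the same schema the docs describe. The provider and version constraint here are just examples:

```hcl
# Pin the provider so the schema validated locally matches the docs
# you are reading; provider and constraint are illustrative.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # match the docs version you're following
    }
  }
}
```

After changing the pin, `terraform init -upgrade` re-downloads the provider and refreshes the locally cached schema.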


Kubernetes 1.34 Debuts KYAML to Resolve YAML Challenges

Kubernetes 1.34 drops on August 27, 2025, and it's bringing KYAML, a smarter, stricter take on YAML. No more surprise type coercion or "why is this indented wrong?" bugs. Think of it as YAML that behaves. kubectl gets a new trick too: -o kyaml. Use it to spit out manifests in KYAML format, easier to debug…
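For a sense of what that looks like, here is roughly the shape of KYAML output per the KYAML proposal: flow-style braces instead of significant indentation, with string values always double-quoted. This is an illustrative sketch; check the Kubernetes 1.34 docs for the exact formatting:

```yaml
# Illustrative KYAML rendering of a ConfigMap. Still valid YAML:
# braces replace significant indentation, strings are always quoted.
{
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: {
    name: "example",
  },
  data: {
    "log-level": "debug",
  },
}
```

On a 1.34 cluster, `kubectl get configmap example -o kyaml` would emit output in this style.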


SUSE Adds Arm Support to HCI Platform for Running Monolithic Apps on Kubernetes

SUSE Virtualization 1.5 lands with 64-bit Arm and Intel support, CSI storage compatibility, and a tighter 4-month release loop synced with Kubernetes. Built on Harvester and KubeVirt, the update pushes harder on a clear trend: legacy VMs and cloud-native apps sharing the same Kubernetes real estate. Sys…


Scale AI/ML Workloads with Amazon EKS: Up to 100K Nodes

Amazon EKS just leveled up: clusters can now run with up to 100,000 nodes, with support for Kubernetes 1.30 and up. That's not just big, it's AI-and-ML-scale big. Cluster setup got a lot less manual, too. The AWS Console's "auto mode" auto-builds your VPC and IAM configs. eksctl plugs right into the flow.
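A sketch of what the eksctl side might look like; cluster name, region, and version are placeholders, and the `autoModeConfig` block should be verified against the current eksctl schema before use:

```yaml
# Hypothetical eksctl ClusterConfig sketch; names, region, and the
# Auto Mode block are illustrative — check the eksctl docs.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: big-ml-cluster
  region: us-east-1
  version: "1.30"
autoModeConfig:
  enabled: true
```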


Building a RAG chat-based assistant on Amazon EKS Auto Mode and NVIDIA NIMs

AWS and NVIDIA just dropped a full-stack recipe for running Retrieval-Augmented Generation (RAG) on Amazon EKS Auto Mode, built on top of NVIDIA NIM microservices. It's LLMs on Kubernetes, but without the hair-pulling. Inference? GPU-accelerated. Embeddings? Covered. Vector search? Handled by Amazon Op…
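Stripped of the managed services, the RAG loop itself is small. This sketch uses a toy bag-of-words "embedding" in place of the NIM embedding and inference endpoints, purely to show the retrieve-then-prompt flow; nothing here is the AWS/NVIDIA implementation:

```python
# Minimal RAG loop: embed the query, retrieve nearest documents,
# stuff them into the prompt. The Counter-based embedding is a toy
# stand-in for a real embedding service.
import math
from collections import Counter

DOCS = [
    "EKS Auto Mode provisions compute for Kubernetes workloads",
    "NIM microservices serve GPU-accelerated inference",
    "Vector search returns the most similar documents",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In the published recipe, the embedding and generation calls go to NIM endpoints and retrieval hits a managed vector store, but the control flow is the same.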


Estimate Your K8s Deployment Costs (Portainer Calculator)

A new TCO calculator breaks down what it really costs to run Kubernetes: DIY CNCF stacks, COSS platforms, and Portainer Business Edition. It crunches infra, labor, and software spend, then maps out staffing needs. It shows exactly where Portainer cuts Kubernetes bloat. It may be biased, but it's worth t…

AIStor is an enterprise-grade, high-performance object storage platform built for modern data workloads such as AI, machine learning, analytics, and large-scale data lakes. It is designed to handle massive datasets with predictable performance, operational simplicity, and hyperscale efficiency, while remaining fully compatible with the Amazon S3 API. AIStor is offered under a commercial license as a subscription-based product.

At its core, AIStor is a software-defined, distributed object store that runs on commodity hardware or in containerized environments like Kubernetes. Rather than being limited to traditional file or block interfaces, it exposes object storage semantics that scale from petabytes to exabytes within a single namespace, enabling consistent, flat addressing of vast datasets. It is engineered to sustain very high throughput and concurrency, with examples of multi-TiB/s read performance on optimized clusters.

AIStor is optimized specifically for AI and data-intensive workloads, where throughput, low latency, and horizontal scalability are critical. It integrates broadly with modern AI and analytics tools, including frameworks such as TensorFlow, PyTorch, Spark, and Iceberg-style table engines, making it suitable as the foundational storage layer for pipelines that demand both performance and consistency.

Security and enterprise readiness are central to AIStor’s design. It includes capabilities like encryption, replication, erasure coding, identity and access controls, immutability, lifecycle management, and operational observability, which are important for mission-critical deployments that must meet compliance and data protection requirements.

AIStor is positioned as a platform that unifies diverse data workloads — from unstructured storage for application data to structured table storage for analytics, as well as AI training and inference datasets — within a consistent object-native architecture. It supports multi-tenant environments and can be deployed across on-premises, cloud, and hybrid infrastructure.