
Updates and recent posts about Grafana Tempo.
Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

How to Create an Effective Prompt for Nano Banana Pro

The author details how to effectively prompt Google’s Nano Banana Pro, a visual reasoning model, emphasizing that success relies on structured design documents rather than vague requests. The method prioritizes four key steps: defining the Work Surface (e.g., dashboard or comic), specifying the prec…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

So you wanna build a local RAG?

Skald spun up a full local RAG stack, with pgvector, Sentence Transformers, Docling, and llama.cpp, in under 10 minutes. The thing hums on English point queries. Benchmarks show open-source models and rerankers can go toe-to-toe with SaaS tools in most tasks. They stumble, though, on multilingual prompt…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

Learning Collatz - The Mother of all Rabbit Holes

Researchers trained small transformer models to predict the "long Collatz step," an arithmetic rule for the infamous unsolved Collatz conjecture, achieving surprisingly high accuracy of up to 99.8%. The models did not learn the universal algorithm, but instead showed quantized learning, mastering speci…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

200k Tokens Is Plenty

Amp’s team isn’t chasing token limits. Even with ~200k available via Opus 4.5, they stick to short, modular threads, around 80k tokens each. Why? Smaller threads are cheaper, more stable, and just work better. Instead of stuffing everything into a single mega-context, they slice big tasks into focuse…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

Google tests new Gemini 3 models on LM Arena

Google’s been quietly field-testing two shadow models, Fierce Falcon and Ghost Falcon, on LM Arena. Early signs? They're probably warm-ups for the next Gemini 3 Flash or Pro drop. Classic Google move: float a checkpoint, stir up curiosity, then go GA…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

A trillion dollars is a terrible thing to waste

OpenAI co-founder Ilya Sutskever just said the quiet part out loud: scaling laws are breaking down. Bigger models aren’t getting better at thinking, they’re getting worse at generalizing and reasoning. Now he’s eyeing neurosymbolic AI and innate inductive constraints. Yep, the “just make it huge” era m…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

Prompts for Open Problems

The author, Ben Recht, proposes five research directions inspired by his graduate machine learning class, arguing for different research rather than just more. These prompts include adopting a design-based view for decision theory, explaining the robust scaling trends in competitive testing, and mov…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

Roses are red, violets are blue, if you phrase it as poem, any jailbreak will do

A new study just broke the safety game wide open: rhymed prompts slipped past filters in 25 major LLMs, including Gemini 2.5 Pro and Deepseek, with up to 100% success. No clever chaining, no jailbreak soup. Just single-shot rhyme. Turns out, poetic language isn’t just for bard-core Twitter. When it c…

Link
@kala shared a link, 2 months, 3 weeks ago
FAUN.dev()

Practical LLM Security Advice from the NVIDIA AI Red Team

NVIDIA’s AI Red Team nailed three security sinkholes in LLMs: reckless use of exec/eval, RAG pipelines that grab too much data, and markdown that doesn't get cleaned. These cracks open doors to remote code execution, sneaky prompt injection, and link-based data leaks. The fix-it trend: App security’s lea…

Link
@devopslinks shared a link, 2 months, 3 weeks ago
FAUN.dev()

Why we're leaving serverless

Every millisecond matters in the critical path of API authentication. After two years of battling serverless limitations, the team rebuilt its entire API stack to reduce end-to-end latency. The move from Cloudflare Workers to stateful Go servers resulted in a 6x performance improvement and simplified arc…

Grafana Tempo is a distributed tracing backend built for massive scale and low operational overhead. Unlike traditional tracing systems that depend on complex databases, Tempo uses object storage—such as S3, GCS, or Azure Blob Storage—to store trace data, making it highly cost-effective and resilient. Tempo is part of the Grafana observability stack and integrates natively with Grafana, Prometheus, and Loki, enabling unified visualization and correlation across metrics, logs, and traces.
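To illustrate the object-storage model, a minimal Tempo storage configuration pointing at S3 might look like the sketch below. The bucket name, endpoint, and paths are placeholders, and this omits the tuning a production deployment would need:

```yaml
storage:
  trace:
    backend: s3                            # alternatives: gcs, azure, local
    s3:
      bucket: tempo-traces                 # placeholder bucket name
      endpoint: s3.us-east-1.amazonaws.com # placeholder endpoint
    wal:
      path: /var/tempo/wal                 # write-ahead log on local disk
```

Because the durable store is just an object bucket, swapping S3 for GCS or Azure Blob Storage is largely a matter of changing the `backend` value and its credentials block.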

Technically, Tempo supports ingestion from major tracing protocols including Jaeger, Zipkin, OpenCensus, and OpenTelemetry, ensuring easy interoperability. It features TraceQL, a domain-specific query language for traces inspired by PromQL and LogQL, allowing developers to perform targeted searches and complex trace-based analytics. The newer TraceQL Metrics capability even lets users derive metrics directly from trace data, bridging the gap between tracing and performance analysis.
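For a flavor of TraceQL, the hypothetical queries below (the `checkout` service name and thresholds are made up for illustration) show a targeted trace search and a TraceQL Metrics aggregation. The first finds slow traces containing 5xx spans from one service; the second derives an error-span rate from the same trace data:

```traceql
{ resource.service.name = "checkout" && span.http.status_code >= 500 && duration > 500ms }

{ resource.service.name = "checkout" && status = error } | rate()
```

The PromQL and LogQL influence shows in the shape: a selector in braces filters spans, and a pipe feeds matches into an aggregation.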

Tempo’s Traces Drilldown UI further enhances usability by providing intuitive, queryless analysis of latency, errors, and performance bottlenecks. Combined with the tempo-cli and tempo-vulture tools, it delivers a full suite for trace collection, verification, and debugging.

Built in Go and following OpenTelemetry standards, Grafana Tempo is ideal for organizations seeking scalable, vendor-neutral distributed tracing to power observability at cloud scale.