
How Netflix optimized its petabyte-scale logging system with ClickHouse and Apache Iceberg

Netflix overhauled its logging pipeline to chew through 5 PB/day. The stack now leans on ClickHouse for speed and Apache Iceberg to keep storage costs sane.

Out went regex fingerprinting - slow and clumsy. In came a JFlex-generated lexer that actually keeps up. They also ditched generic serialization in favor of ClickHouse’s native protocol and sharded log tags to kill off unnecessary scans and trim query times.
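
The post stays at the summary level, so here's a rough, hypothetical sketch of what token-based fingerprinting means in practice. Netflix's version is a JFlex-generated lexer; the hand-rolled, single-pass tokenizer below (class name, placeholders, and sample log lines are all invented for illustration) just shows the core trick: mask variable fields like numbers and hex IDs so log lines that share a template collapse to the same fingerprint, with no regex backtracking.

```java
/**
 * Hypothetical sketch of token-based log fingerprinting (not Netflix's code).
 * A single left-to-right pass masks variable fields so that log lines which
 * share a template collapse to the same fingerprint string.
 */
public final class LogFingerprinter {

    public static String fingerprint(String line) {
        StringBuilder out = new StringBuilder(line.length());
        int i = 0;
        final int n = line.length();
        while (i < n) {
            if (isHexChar(line.charAt(i))) {
                int start = i;
                boolean allDigits = true;
                while (i < n && isHexChar(line.charAt(i))) {
                    allDigits &= Character.isDigit(line.charAt(i));
                    i++;
                }
                int len = i - start;
                if (allDigits) {
                    out.append("<NUM>");        // counters, sizes, latencies
                } else if (len >= 16) {
                    out.append("<ID>");         // long hex runs: request ids, hashes
                } else {
                    out.append(line, start, i); // short words like "beef" stay as-is
                }
            } else {
                out.append(line.charAt(i));
                i++;
            }
        }
        return out.toString();
    }

    private static boolean isHexChar(char c) {
        return Character.isDigit(c) || (c >= 'a' && c <= 'f');
    }

    public static void main(String[] args) {
        String a = "GET /api/v2/titles 200 37ms request=9f2c4b7a81de44c1";
        String b = "GET /api/v2/titles 200 412ms request=0a11beef22334455";
        System.out.println(fingerprint(a)); // GET /api/v<NUM>/titles <NUM> <NUM>ms request=<ID>
        System.out.println(fingerprint(b)); // same template
        System.out.println(fingerprint(a).equals(fingerprint(b))); // true
    }
}
```

A JFlex spec generates a DFA-based scanner that makes the same single pass, with the token rules declared up front instead of hand-coded, which is where the reported speed-up over stacked regexes comes from.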

Bigger picture: More teams are building bespoke ingest pipelines, tying high-throughput data flow to layered storage. Observability is getting the same kind of love as prod traffic. Finally.




DevOpsLinks (@devopslinks) on FAUN.dev() #DevOps
DevOps Weekly Newsletter: curated DevOps news, tutorials, tools and more!