#clickhouse

from Medium
1 month ago

Stop Paying for Expensive Logging: Self-Hosted ClickHouse on Kubernetes

Centralized logging is crucial for understanding application behavior in Kubernetes, but the cost of storing those logs can quickly spiral out of control. ClickHouse is changing that narrative: it is a high-performance, cost-effective alternative to traditional logging stacks, delivering lightning-fast queries while significantly reducing infrastructure overhead. In this blog post, we're going to build such a logging pipeline from the ground up.
DevOps
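
To make the storage side of such a pipeline concrete, here is a minimal sketch using the official clickhouse-connect Python client. The database, table, column names, and 30-day TTL are illustrative assumptions, not the article's actual schema; LowCardinality columns and ZSTD compression are the usual levers behind the cost savings the post describes.

```python
import clickhouse_connect  # official ClickHouse Python client (pip install clickhouse-connect)

# Assumption: a ClickHouse server reachable on localhost:8123 (default HTTP port).
client = clickhouse_connect.get_client(host="localhost", port=8123)

client.command("CREATE DATABASE IF NOT EXISTS logs")

# Hypothetical schema for Kubernetes container logs. The TTL caps retention
# so storage costs stay bounded; ORDER BY matches common query filters.
client.command("""
CREATE TABLE IF NOT EXISTS logs.kubernetes
(
    timestamp DateTime64(3),
    namespace LowCardinality(String),
    pod       String,
    container LowCardinality(String),
    level     LowCardinality(String),
    message   String CODEC(ZSTD(3))
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp)
ORDER BY (namespace, pod, timestamp)
TTL toDateTime(timestamp) + INTERVAL 30 DAY
""")
```
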
from Medium
1 month ago

Stop Paying for Expensive Logging: Self-Hosted ClickHouse on Kubernetes

ClickHouse combined with Fluent Bit and Grafana enables a high-performance, cost-effective Kubernetes logging pipeline with fast queries and strong compression.
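
On the ingestion side of that pipeline, Fluent Bit can ship newline-delimited JSON straight to ClickHouse's HTTP interface using an INSERT with the JSONEachRow format. The sketch below emulates that HTTP path in Python; the endpoint, table, and field names are assumptions carried over from the schema sketch above.

```python
import json
import requests  # pip install requests; any HTTP client works

CLICKHOUSE_URL = "http://localhost:8123"  # assumption: default HTTP port

rows = [
    {"timestamp": "2024-01-01 12:00:00.000", "namespace": "default",
     "pod": "web-7f9c", "container": "app", "level": "info",
     "message": "request served in 12ms"},
]

# Newline-delimited JSON matches ClickHouse's JSONEachRow input format --
# the same shape Fluent Bit's generic http output produces with json_lines.
payload = "\n".join(json.dumps(r) for r in rows)

resp = requests.post(
    CLICKHOUSE_URL,
    params={"query": "INSERT INTO logs.kubernetes FORMAT JSONEachRow"},
    data=payload.encode("utf-8"),
    timeout=10,
)
resp.raise_for_status()
```
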
from InfoWorld
1 week ago

ClickHouse buys Langfuse as data platforms race to own the AI feedback loop

By bringing Langfuse in-house, ClickHouse can offer customers a native way to collect, store, and analyze large volumes of LLM telemetry alongside operational and business data, helping teams debug models faster, control costs, and run more reliable AI workloads without relying on a separate observability tool, Tyagi added.
Artificial intelligence
Business intelligence
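
As a rough illustration of what "LLM telemetry alongside operational data" can look like in ClickHouse -- this is a hypothetical sketch, not Langfuse's actual schema -- a small example with the clickhouse-connect Python client:

```python
import clickhouse_connect  # official ClickHouse Python client

client = clickhouse_connect.get_client(host="localhost")  # assumption: local server

# Hypothetical telemetry table; all names and types here are assumptions.
client.command("""
CREATE TABLE IF NOT EXISTS llm_traces
(
    ts                DateTime,
    model             LowCardinality(String),
    prompt_tokens     UInt32,
    completion_tokens UInt32,
    latency_ms        UInt32,
    cost_usd          Float64
)
ENGINE = MergeTree
ORDER BY (model, ts)
""")

# Cost, latency, and token volume per model over the last day -- the kind of
# "debug faster, control costs" analysis the article alludes to.
result = client.query("""
SELECT model,
       sum(cost_usd)                          AS total_cost,
       quantile(0.95)(latency_ms)             AS p95_latency_ms,
       sum(prompt_tokens + completion_tokens) AS tokens
FROM llm_traces
WHERE ts > now() - INTERVAL 1 DAY
GROUP BY model
ORDER BY total_cost DESC
""")
for row in result.result_rows:
    print(row)
```
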
from Techzine Global
1 week ago

ClickHouse, the open-source challenger to Snowflake and Databricks

ClickHouse is a high-performance columnar OLAP database rapidly adopted by AI and enterprise users, now valued at $15B and acquiring Langfuse.
#kubernetes
from Medium
1 month ago
Software development

Stop Paying for Expensive Logging: Self-Hosted ClickHouse on Kubernetes

#cloudflare-outage
from InfoQ
6 months ago

Cloudflare Chooses PostgreSQL Extension over Specialized OLAP for 100K Row/Second Analytics

The default and most commonly used table engine in ClickHouse, MergeTree, is optimized for high-throughput batch inserts. It writes each insert as a separate part on disk, then runs background merges to keep the number of parts manageable. This makes writes very fast, but not when they arrive as lots of tiny batches, which was exactly our case: millions of individual devices each uploading one log event every 2 minutes.
Software development
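
The standard workarounds for this part-explosion problem are client-side batching or ClickHouse's server-side async_insert setting, which coalesces small inserts before writing a part. A minimal batching sketch in Python; the thresholds and table name are assumptions:

```python
import time
import clickhouse_connect  # official ClickHouse Python client

# Assumption: local server and the logs.kubernetes table sketched earlier.
client = clickhouse_connect.get_client(host="localhost")

BATCH_SIZE = 10_000   # flush thresholds are illustrative; tune for your load
FLUSH_SECONDS = 5.0

_buffer = []
_last_flush = time.monotonic()

def enqueue(event):
    """Buffer one log event (a row of column values); each flush becomes
    one MergeTree part instead of one part per event."""
    global _last_flush
    _buffer.append(event)
    if len(_buffer) >= BATCH_SIZE or time.monotonic() - _last_flush > FLUSH_SECONDS:
        client.insert(
            "logs.kubernetes",
            _buffer,
            column_names=["timestamp", "namespace", "pod",
                          "container", "level", "message"],
        )
        _buffer.clear()
        _last_flush = time.monotonic()

# Usage: enqueue(["2024-01-01 12:00:00.000", "default", "web-7f9c",
#                 "app", "info", "hello"])
# Server-side alternative: pass settings={"async_insert": 1,
# "wait_for_async_insert": 0} to insert and let ClickHouse do the batching.
```
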