Show HN: Telescope – an open-source web-based log viewer for logs in ClickHouse
Hey everyone! I’m working on Telescope - an open-source web-based log viewer designed to make working with logs stored in ClickHouse easier and more intuitive.
I wasn’t happy with existing log viewers - most of them force a specific log format, are tied to ingestion pipelines, or are just a small part of a larger platform. Others didn’t display logs the way I wanted.
So I decided to build my own lightweight, flexible log viewer - one that actually fits my needs.
Check it out:
Video demo: https://www.youtube.com/watch?v=5IItMOXwugY
GitHub: https://github.com/iamtelescope/telescope
Live demo: https://telescope.humanuser.net
Discord: https://discord.gg/rXpjDnEc
There's also Logdy (https://github.com/logdyhq/logdy-core), which can work with raw files and comes with a UI as well, all in a single precompiled binary, so there's no need for installs and setups. If you're looking for a simple solution for browsing log files with a web UI, this might be it! (I'm the author)
Heyo! I’ve noticed Logdy come up a few times on HN now, and was curious whether you explored making it a proper desktop application instead of a two-part UI and CLI application. Did you rule that out for some reason?
I'm not ruling that out; honestly, though, there hasn't been any user feedback pointing to that use case. So far users love that they can just drop a binary on a remote server and spin up a web UI. Similar with the local env. The nature of Logdy is that it's primarily designed to work in the CLI. What would be the use case for a desktop app?
It would be great if the docs could describe a bit what exactly one has to do to use this as an alternative to Grafana Loki.
How do I get my logs (e.g. local text files from disk like nginx logs, or files that need transformation like systemd journal logs) into ClickHouse in a way that's useful for Telescope?
What kind of indices do I have to configure so that queries are fast? Ideally with some examples.
How can I make full-text substring search queries (e.g. "unexpected error 123") fast? When I filter with regex, is that still fast / does it use indices?
From the docs it isn't quite clear to me how to configure the system so that I can just put a couple TB of logs into it and have queries be fast.
Thanks!
Telescope is primarily focused on log visualization, not on log collection or preparing ClickHouse for storage. The system does not currently provide (and I don't think it ever will) built-in mechanisms for ingesting logs from any source.
I will consider providing a how-to guide on setting up log storage in ClickHouse, but I’m afraid I won’t be able to cover all possible scenarios. This is a highly specific topic that depends on the infrastructure and needs of each organization.
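In the meantime, here's a rough sketch of the direction such a guide might take. Everything here is illustrative, not something Telescope requires: the table and column names are made up, and the bloom-filter parameters would need tuning to your data volume.

```sql
-- Illustrative only: a minimal logs table with a token bloom-filter
-- skip index, so word searches can prune granules instead of scanning.
CREATE TABLE logs
(
    timestamp DateTime64(3),
    service   LowCardinality(String),
    level     LowCardinality(String),
    message   String,
    INDEX message_tokens message TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY (service, timestamp)
TTL toDateTime(timestamp) + INTERVAL 30 DAY;

-- A whole-token search that can use the skip index:
SELECT timestamp, service, message
FROM logs
WHERE hasToken(message, 'unexpected')
  AND timestamp > now() - INTERVAL 1 DAY
ORDER BY timestamp DESC
LIMIT 100;
```

Note that `hasToken` matches whole tokens only; arbitrary substring search needs an `ngrambf_v1` index instead.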
If you’re looking for an all-in-one solution that can both collect and visualize logs, you might want to check out https://www.highlight.io or https://signoz.io or other similar projects.
And also, by the way, I’m not trying to create a "Grafana Loki killer" or a "killer" of any other tool. This is just an open source project - I simply want to build a great log viewer without worrying about how to attract users from Grafana Loki or Elastic or any other tool/product.
I think such a guide would be great.
My perspective:
A lot of people who operate servers (including me) just want to view and search their logs -- fast and conveniently. Your tool provides that. They don't care whether the backend is ClickHouse or Postgres or whatever; that's just a pesky detail. They understand they may have to deal with it to some extent, but they don't want to have to become experts at it, or figure everything out by themselves, just to read their logs.
Also, those things are general-purpose databases, so they don't tell the user how best to set them up so your tool can produce results fast and conveniently. So currently, neither side helps the user with that.
That's why it's best if your tool's docs give some basic tips on how to achieve the most commonly desired goals: some basic way to get logs into the backend DB (if there's a standard way to do that for text log files and journald, it's probably fine to just link it), and docs on what indices Telescope needs to be faster than grep for typical log search tasks (ideally with a quick snippet or link on how to set those up, for people who haven't used ClickHouse before).
So overall, it's fine if the tool doesn't do everything. But it should say what it needs to work well.
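For illustration, even an ad-hoc snippet like this in the docs would go a long way. Everything below is assumed, not something the tool prescribes: a `logs` table with timestamp/service/level/message columns, an `access.log` readable by the ClickHouse server, and a regexp that happens to match your nginx log format.

```sql
-- Ad-hoc example: parse a plain-text nginx access log with
-- ClickHouse's Regexp input format and insert it into a logs table.
-- (Real setups usually ship via Vector, Fluent Bit, or the OTel Collector.)
INSERT INTO logs (timestamp, service, level, message)
SELECT parseDateTimeBestEffort(time_local),
       'nginx',
       status,
       request
FROM file('access.log', 'Regexp',
          'remote_addr String, time_local String, request String, status String')
SETTINGS format_regexp = '^(\\S+) \\S+ \\S+ \\[(.*?)\\] "(.*?)" (\\d+).*$';
```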
As someone who has never worked anywhere that tried it out: what do you not like about Loki? I've been stuck in the very expensive Splunk and OpenSearch/Kibana mines for many years and I find it an amazingly frustrating place to be. I honestly find that I can debug via logs better using grep than either of those tools.
Loki works fine for what it does; the problem is what it lacks.
It doesn't do full-text search indices. So if you just search for some word across all your logs (to find eg when a rare error happened), it is very slow (it runs the equivalent of grep, at 500 MB/s on my machine). If you have a couple TB, it takes half an hour!
As you say, even plain grep is usually faster for such plain linear search.
I want full-text indices so that such searches take milliseconds, or a couple seconds at most.
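(For comparison, on the ClickHouse side of this thread, that's roughly what a data-skipping index buys you. A hedged sketch, assuming a `logs` table with a `message` column; the ngram and bloom-filter parameters here are placeholders that need tuning:)

```sql
-- Sketch: an ngram bloom filter lets ClickHouse skip parts of the
-- table that cannot contain a given substring, so LIKE '%...%'
-- searches avoid a full linear scan.
ALTER TABLE logs
    ADD INDEX message_ngrams message TYPE ngrambf_v1(4, 65536, 3, 0) GRANULARITY 4;
ALTER TABLE logs MATERIALIZE INDEX message_ngrams;

SELECT count()
FROM logs
WHERE message LIKE '%unexpected error 123%';
```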
See, to me, having at one point been responsible for maintaining an ES instance for logs (and exporters and all the other bits), the price you pay in engineering hours and hardware costs to maintain all those indexes, while keeping ES from absolutely melting down, is way too high.
I think grep is amazing, but yes, if you unleash it on 'all the logs' without first narrowing down to a time frame or some other taxonomy, it is going to be slow. This seems like a skill issue, frankly.
Also, full-text indexes for all the things are generally FASTER of course, but seconds/milliseconds? How much hardware are you throwing at logs? Most people only go to logs in an emergency, during an incident and the like. How much are you paying just to index a bunch of shit that will probably never even be looked at, and how much are you paying for hardware to run queries on those indexes that will be largely idle?
The problem with ES/Splunk for logs is that they were not designed for logs, so they are, in my view, both overkill AND underkill for the task. Full fuzzy text search is probably overkill; the UI for the task of dealing with log data is underkill. (The cloud bills are certainly overkill.)
I'm currently doing platform engineering at a company in the top half of the fortune 500. Honestly, probably about 90-95% of the time when I'm helping a team troubleshoot their service on kubernetes I'm using the kubectl `stern` plugin (shows log streams from all pods that match a label query) and grep/sed/awk/jq if it's ongoing, it's just waaaaay more responsive. If it's a 'weird thing happened last night, investigate' task and I have to go to Kibana it's just a much worse experience overall.
On the naming front, Telescope is already used for a log viewer: https://laravel.com/docs/11.x/telescope
If I search "telescope logs" on Google, that's the top result for me.
Is there any comprehensive guide to building an observability stack using OTel, ClickHouse, and Grafana? I think this is a solid stack for logging & tracing, but I've been looking into it and haven't found any authoritative reference for this particular stack (unlike the ELK & LGTM stacks).
Looks cool!
If you're looking for this kind of UI, also check out Coroot (https://github.com/coroot/coroot), which has an awesome UI for logs and OpenTelemetry traces and also stores data in ClickHouse.
Looks cool I might try it out!
I need a central place, something simple where I can actually read the contents of the logs generated by the dozens of services that I run for clients, etc., instead of stupidly SSH’ing to every server.
Does this fit the use case?
I tried Loki once but it was painful to set up and more geared toward aggregating events and stats.
Thanks! Telescope is more focused on displaying logs and providing access to them rather than handling log ingestion. In the future, I plan to support various sources like Docker, k8s, and files to improve the local development workflow. However, it's unlikely that Telescope will support fetching logs from remote servers via SSH, as that's not its primary use case.
If all you want is the plaintext logs, there's no need to bother with special products. Just point syslog in the right direction as if it was 1995. Everything can log to syslog already. Things like Splunk, Graylog and Kibana are mostly for visualization and query interfaces.
I'd recommend VictoriaLogs and shipping to it via Vector.
I also recommend not hesitating to use other log shippers, as VictoriaLogs supports ingestion not only from Vector - see https://docs.victoriametrics.com/victorialogs/data-ingestion...
I'm author of Logdy: https://logdy.dev/ https://github.com/logdyhq/logdy-core It comes as a precompiled binary you can download/deploy on the server and use to browse larger log files. I suggest you take a look!
Graylog is a pretty standard solution to your problems (I believe), although they've been closing down their licensing more and more as time goes on.
I’m curious to know what makes the Loki installation process so painful.
I’m interested in learning more about the software installation experience.
The only problematic thing might be the relatively frequent storage changes (they like to deprecate the primary storage driver, for instance); otherwise it's IMHO easy to set up. I'm running it on several projects because it doesn't need a beefy machine like Elastic or even ClickHouse does.
genuinely wondering if https://multiplayer.app would work for you.
note: I'm part of the Multiplayer team.
This looks pretty cool, I love seeing more clickhouse-native logging platforms springing up! It's a surprisingly underrated platform to build on when I talk to other engineers.
I'm one of the authors of an existing log viewer (HyperDX) and was curious whether we were one of those platforms that didn't fit your needs? Always love learning what use cases inspire different approaches.
How is it different from Signoz, a complete observability stack (including Logs) built on top of Clickhouse?
Telescope is focused purely on viewing logs for existing data. It doesn’t enforce any specific ingestion setup or schema and doesn’t support traces or session storage.
You can think of it as just one part of a logging platform, where a full platform might consist of multiple components like a UI, ingestion engine, logging agent, and storage. In this setup, Telescope is only the UI.
Would this also work with something like Plausible (https://github.com/plausible/analytics) which uses ClickHouse to store web analytics data, or is it primarily for log data?
Although Telescope is focused on application log data, it can be used for any type of data, as long as it's stored in ClickHouse and has some time field.
At the moment, I have no plans to support arbitrary data visualization in Telescope, as I believe there are better BI-like tools for that scenario.
Yeah that's fair, thank you.
I like how this is mostly based on the Kibana UI. Makes it easier to convince other people to move to it.
To be honest, I was more inspired by DataDog :)
I've used graylog the most so that's what it looks like to me :P. I like how you can do a bunch of extraction stuff right there in the query interface though, that's awesome. It seems like a very thoughtful UI.
Honestly that pushes me away from it. I find kibana to be a very frustrating experience.
(not op) curious what you find frustrating about it?
At enterprise scale, on the backend you end up paying for a bunch of indexing you will likely never use. On top of that, you spend a LOT of money in engineering hours setting up indexes for many teams, all with different log formats, so the whole thing doesn't just melt down.
On the Kibana side, their query language is shared by no other tool, at least none that I use, meaning that in the middle of an outage I end up chasing my tail reading docs on how to query what I want. The returns are often slow, and it's very hard to just export the logs you do find to text files so you can ingest them into other tools.
I mean, I came up on cat/grep/awk/sed/less/tail (and more recently jq for JSON logs)... it wasn't perfect, but it was RESPONSIVE and portable.
I just think that tools like ES/Splunk weren't conceived for dealing with logs (especially if your logs come in many formats) and are both overkill and at the same time underkill for the task. It's like using a ball peen hammer to drive nails, you can certainly DO it, but a claw hammer is cheaper and a more ergonomic experience.
Very cool! Just digging in. Does it work with the new JSON format ClickHouse introduced recently?
Also, what service did you use to make the video, if you don't mind my asking?
Thanks!
I haven't tested the new JSON format in ClickHouse yet, but even if something doesn't work at the moment, fixing it should be trivial.
As for the video service, it wasn’t actually a service but rather a set of local tools:
- Video capture/screenshots - macOS default tools
- Screenshot editing - GIMP
- Voice generation - https://elevenlabs.io/app/speech-synthesis/text-to-speech
- Montage - DaVinci Resolve 19
Cool! I'm currently playing with the Grafana ClickHouse connector to do something broadly similar - are these compatible? Can Telescope read an OTel logs table in ClickHouse?
Yes, this is exactly where Telescope can be useful (and actually, the way Grafana displays logs was my motivation for writing my own viewer).
Telescope can work with any table in ClickHouse. Of course, not every single ClickHouse type has been tested, but there shouldn’t be any issues with the most common ones.
If you want, you can check how it works with the OTEL schema in the live demo here: https://telescope.humanuser.net/sources/otel-demo/explore
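For reference, a typical query against an OTel logs table looks something like this. The column names follow what I understand to be the default schema of the OpenTelemetry ClickHouse exporter - double-check them against your own exporter config:

```sql
-- Recent error logs from an OTel-exported table
-- (column names assumed from the exporter's default schema).
SELECT Timestamp, ServiceName, SeverityText, Body
FROM otel_logs
WHERE Timestamp > now() - INTERVAL 1 HOUR
  AND SeverityText = 'ERROR'
ORDER BY Timestamp DESC
LIMIT 50;
```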
Very cool! Tested it with the demo, very smooth!
Thanks!
Awesome stuff! Just published something similar today
Just curious, what is the most challenging thing, in your opinion, when building such a log viewer?
That sounds great! Do you have a link? I'd love to check it out.
For me, the most challenging parts are still ahead - live tailing and a plugin system to support different storage backends beyond just ClickHouse. Those will be interesting problems to solve! What was the biggest challenge for you?
Look out, Kibana, they're gunning for you!
Just curious, as I'm in the market - why should I use this instead of the ELK stack?
Well, if you're happy with ELK, you should definitely use it! As I mentioned earlier, I’m not trying to sell anything or convince people to switch from their current solutions - just offering an alternative perspective on how things can be done.
From my perspective, a ClickHouse-based setup can be cheaper and possibly faster under certain conditions - here’s a comparison made by ClickHouse, Inc.: https://clickhouse.com/blog/clickhouse_vs_elasticsearch_the_...
My motto is "Know your data". I’m not a big fan of schemaless setups - I believe in designing a proper database schema rather than just pushing data into a black hole and hoping the system will handle it.
It's not the same as OP's, but according to SigNoz, a similar o11y stack on top of ClickHouse, ClickHouse-based logging costs less than ELK for storage and performs better:
https://github.com/SigNoz/logs-benchmark
We've found that ClickHouse is extremely fast for write-once/read-many, so it's great for recording logs. If Telescope provides the search/index features that Elastic provides, this could be a nice performance bump. FWIW, I haven't tested Telescope, so this is all just my musing.
Unfortunate name choice, as @csh602 mentioned
Viewer looks pretty good though. Reminds me of DataDog UI, but not as slow. Will play around more, thanks!
As we all know, naming is an unsolvable problem in IT :)
Regarding performance - 95% of Telescope's speed depends on how fast your ClickHouse responds. If you have a well-optimized schema and use the right indexes, Telescope's overhead will be minimal.
It can display logs in-context. Awesome!
Very cool! Would be nice to have a library for the frontend components for the log viewer, to be able to reuse them in other projects :)
Nice idea! However, I’m not experienced enough with Vue (and frontend) development to properly design an exportable component. So, at least for now, I don’t think I’ll be able to make it happen myself.
Rollbar has a feature to upload JavaScript sourcemap files. When I view logs from minified JS files, it automatically applies the sourcemaps and shows the correct line numbers.
Is there any open source tool that does the same?
Looks simple and clean! Big ups for starting off with good screenshots, docs, and quickstart (Docker) instructions.
Regarding the name, "Telescope" is also the name of a Neovim fuzzy finder[0] that dominates the ecosystem there. Other results appear by searching "telescope github".
[0]: https://github.com/nvim-telescope/telescope.nvim
Clearly we need an extension to search this new service with telescope.nvim. telescope-telescope.nvim.
Also, a bit more directly related, a log viewer / monitoring solution from Laravel: https://laravel.com/docs/12.x/telescope
Well, every single name I came up with was already taken and present in GitHub. So...
This one seems to be optimized for log viewing at the moment; are there any DataDog alternatives built on top of ClickHouse that support the full range of OpenTelemetry features?
There are these guys which are based on OTEL: https://github.com/hyperdxio/hyperdx
+1 for hyperdx, I'm testing it and love it.
author here, just wanted to chime in that I really appreciate the kind words about what we've been building :)
Check out signoz: https://github.com/SigNoz/signoz
OSS o11y platform built on clickhouse & otel.
There is lumigo, also based on Clickhouse https://lumigo.io/
I think https://multiplayer.app would also fit your description.