These are public posts tagged with #grafana. You can interact with them if you have an account anywhere in the fediverse.
Can it run Doom?
The inaugural Science Fair at #GrafanaCON lets you get hands-on with #Grafana dashboards monitoring a 3D printer, a wind tunnel, and drones. We'll also feature hackathon projects like Doomfana, Dash Dash Revolution, and more!
Join the community and be the first to see all the new features we're announcing at GrafanaCON 2025 in Seattle (May 6-8): https://grafana.com/events/grafanacon/science-fair/?camp=gcon2025&mdm=social
The case of the timeout that never fired
Hi! My name is Oleg Strekalovsky, and I'm a senior developer on the marketplace cart team. The Ozon cart service is responsible for storing customers' carts and for rendering the corresponding screen in the app and on the website. Keeping the service stable is an important task. In this article I'll cover the nuances of interpreting the data that the Prometheus monitoring system provides. If you, too, often stare at graphs trying to understand how your service is doing, this article is for you.
This is how I will render images from #Grafana, to insert #metrology graphs in Web pages, as of today. That will prevent me from doing duplicate jobs using RRDtool. Still a bit sad that this is implemented via a headless Chrome browser but hey, it’s 2025.
https://www.tumfatig.net/2025/rendering-static-images-from-grafana/
Anyone out there on the Grafana Cloud stack? I’m trying to do a cost/benefit analysis of moving something over there but haven’t had luck in my personal circle finding a group using it.
I’m looking for any surprises, feedback, loves/hates, etc. Thanks!
#grafana #monitoring #grafanacloud #devops
Is there an on-premise, open source option for a monitoring/telemetry platform comparable* to say New Relic or Dynatrace with an approachable query language?
Is Prometheus+Grafana pretty much it?
Simpler the better, approachable for a casual user (i.e. curious non-technical tenants) would be great.
* Caveats for "comparable" apply!
#Linux #RunBSD #telemetry #OpenTelemetry #NewRelic #Dynatrace #Datadog #PlatformMonitoring #Monitoring #DevOps #HomeLab #soho #infrastructure #Network #CloudNative #CNI #FOSS #FLOSS #prometheus #grafana #zabbix #icinga #nagios
So, I've been using Thanos to receive and store my Prometheus metrics long term in a self-hosted S3 bucket. Thanos also acts as a datasource for my dashboards in Grafana, and provides a Ruler, which evaluates alerting rules against my metrics and forwards them to my Alertmanager. It's ok. It's certainly got its downsides, which I can go into later, but I've been thinking... what about Mimir?
How do you all feel about Grafana's Mimir (source on GitHub)? It's AGPL and seems to literally be a replacement of Thanos, which is Apache 2.0.
Thanos description from their website:
Open source, highly available Prometheus setup with long term storage capabilities.
Mimir description from their website:
...open source software project that provides horizontally scalable, highly available, multi-tenant, long-term storage for Prometheus and OpenTelemetry metrics.
Both work with Alloy and Prometheus alike. Both require you to configure initially confusing hashrings and replication parameters. Both have a bunch of large companies adopting them, so... now I feel conflicted. Should I try Mimir? Poll in reply.
#thanos #prometheus #alloy #grafana #observability #monitoring #kubernetes #k8s #foss #sre
We're having a blast at #KubeCon!
Stop by our booth (S462) to collect swag, learn about the LGTM Stack and Kubernetes Monitoring, and see featured #Grafana dashboards from the community.
And don't miss the sessions presented by our Grafanistas: https://grafana.com/events/kubecon/?mdm=social
Very happy to discover how easy it is to extend `node_exporter` with custom metrics:
Add `--collector.textfile.directory=/some/place` to the daemon's arguments (e.g. via ARGS in `/etc/default/prometheus-node-exporter`), write `*.prom` files into that directory with your favorite tool, and **boom!**
Text format doc with a very good example: https://prometheus.io/docs/instrumenting/exposition_formats/
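As a minimal sketch of the textfile-collector trick: write the metric to a temp file, then rename it into place. The metric name and directory here are invented for illustration; in real use, point `TEXTFILE_DIR` at whatever you passed to `--collector.textfile.directory` (the fallback to a temp dir is just so the snippet runs standalone).

```shell
#!/bin/sh
# Expose a custom gauge via node_exporter's textfile collector.
# TEXTFILE_DIR should be the --collector.textfile.directory path;
# falls back to a throwaway temp dir for demonstration.
DIR="${TEXTFILE_DIR:-$(mktemp -d)}"
TMP="$DIR/logged_in_users.prom.$$"
{
  echo '# HELP node_logged_in_users Users currently logged in.'
  echo '# TYPE node_logged_in_users gauge'
  echo "node_logged_in_users $(who | wc -l)"
} > "$TMP"
# rename is atomic, so node_exporter never scrapes a half-written file
mv "$TMP" "$DIR/logged_in_users.prom"
```

The write-then-rename dance matters: node_exporter reads these files on every scrape, and an in-place write can be caught half-finished.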
Creating Grafana alerts is something I'm less comfortable with, but I'm starting to get the hang of it…
Windows metrics with Prometheus and Grafana #blog #monitorización #exporter #grafana #metricas #prometheus #windows
https://www.bujarra.com/metricas-de-windows-con-prometheus-y-grafana/
This morning's accomplishment: finally setting up an otel collector at home, pointing it at #clickhouse, getting #caddy tracing via otel, and throwing a #grafana dashboard on it.
I didn't expect the dashboard to be the hard part.
Interested in #containers, #docker, #OCI and #guix #declarative #configuration?
Watch Giacomo Leidi's talk about self-hosting @forgejo using Guix's container-backed configuration. Check it out!
His Gocix project shows how to bring together container-based software while benefiting from the resilience of declarative configuration! He has services for #prometheus #grafana #traefik #bonfire and more!
@ron maybe take a look at #influxdb. There you can define, per bucket, how long data is retained. It also plays well with #grafana for visualizations.
We use #timescaledb though. With it we can, e.g. for consumption (i.e. running meter counters), keep minute-level values for 14 days, then only 15-minute aggregates, and after half a year only 1-hour values. That way you can still compare with last year.
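A downsampling scheme like the one described could be sketched in TimescaleDB roughly as follows. This is a hypothetical config fragment, not a tested setup: the table and view names are invented, only the intervals follow the post, and it assumes `meter_raw` is already a hypertable.

```shell
# Sketch: 15-minute rollup of raw minute-level readings, plus a retention
# policy that drops the raw data after 14 days (names are hypothetical).
psql metrics <<'SQL'
CREATE MATERIALIZED VIEW meter_15min
  WITH (timescaledb.continuous) AS
  SELECT time_bucket('15 minutes', ts) AS bucket,
         meter_id,
         max(reading) AS reading   -- running counters: max is the latest value
  FROM meter_raw
  GROUP BY bucket, meter_id;

-- keep minute-level raw data for 14 days, as in the post
SELECT add_retention_policy('meter_raw', INTERVAL '14 days');
SQL
```

An analogous hourly aggregate plus a retention policy on `meter_15min` would cover the "1-hour values after half a year" tier.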
#Grafana 11.6 is here!
This release delivers a number of new dashboarding features, including one-click data links and actions, along with other notable updates related to security, alerting, and more.
@stefan Definitely do it, I'd say. It really is nice to be able to compare, say, consumption with last year, or to look at longer-term weather trends. Install Grafana right along with it! #HomeAssistant #InfluxDB #Grafana
Red-teaming monitoring: Grafana recon
Quite recently, while taking part in the Cyber Trials on the Standoff365 platform, the CyberOK team gained access to the customer's Grafana monitoring system during initial recon without much difficulty. A quick analysis showed that the Grafana host was used to collect metrics from production and stood with one foot on the Internet and the other in the internal network. Such nodes are a tasty morsel, so naturally we went on point and started digging.
I've been struggling with hostile actors hosing our website (tiny, media non-profit) for a while. Today, I finally got one up on the threat.
Thanks to #AI.
I know that many in the community are aggressively hostile towards LLMs.
And I am familiar with the contempt many in the #infosec community have towards AI used for Infosec work. But here is my use case and rationale.
First, the solution: Rate limiting plugin for WP and Cloudflare "under attack" switch. With my killer PID termination script as the backstop.
Second: the cause. A non-DDoS attack on the WordPress site, hitting templates and known files, saturating resources with multiple sessions and database queries; it sneaks under DDoS protection until the (WP) server overloads.
Method: Using commercial (pay for AI, #Claude), investigate possible causes, provide data, expand on the data using AI generated scripts, iteration loop until cause identified, hierarchy of fixes offered.
Points to make:
The criticism that "AI is too inexpert and too inaccurate to be useful" (especially in infosec) is patently false. I do not have the domain expertise, time, or money to do it myself. We certainly do not have the money to hire an expert.
So working with the AI, I was able to come up with a short term solution and a long term plan.
It's not magic. It was an iterative process and some knowledge was necessary. Like any other tool, you must learn how to use it to get to your objectives.
To folks who hear "it's useless and inaccurate" and do not try: do not be put off. Learn and develop your skills.
AI can advance your cause.
Attached #Grafana panel
(Also developed with AI)
Want #cloudnative but with the power of #declarative configuration? The recoverability of #transactions for system configuration?
Wednesday it's the online #guix meet-up! With a great talk by @paulbutgold
about running docker / oci containers using the Guix configuration system.
His Gocix project has #prometheus, #grafana, #forgejo, #conduit and #traefik examples.
Meet-up details:
I'm working on a #Grafana hack (bash scripts) that would feed MTR wrapper data into a time series for each hop, showing hi/lo/mean latency per hop, thus showing realtime "weather" for the link.
(Only the last few hops; further up, it's not going to be meaningful.)
You don't need historical data; you can display only the last value on a gauge, showing current data only.
What other data were you thinking of displaying on your panel?
(IMG my early panel with shell (top) output)
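A hack along those lines might start by parsing `mtr --report` output into per-hop samples. This is only a sketch under assumptions: the metric names are invented, and the demo runs on a captured report rather than a live `mtr` call (in real use you'd pipe `mtr --report --report-cycles 10 <host>` through the same parser).

```shell
#!/bin/sh
# Parse `mtr --report` hop lines (e.g. "  1.|-- gateway  0.0%  10  1.2 ...")
# into Prometheus-style per-hop samples. Column order in the report is:
# Loss% Snt Last Avg Best Wrst StDev, so Avg is field 6 and Wrst is field 8.
parse_mtr() {
  awk '$1 ~ /^[0-9]+[.][|]--$/ {
    hop = $1; sub(/[.].*/, "", hop)     # "1.|--" -> "1"
    printf "mtr_hop_avg_ms{hop=\"%s\",host=\"%s\"} %s\n", hop, $2, $6
    printf "mtr_hop_worst_ms{hop=\"%s\",host=\"%s\"} %s\n", hop, $2, $8
  }'
}

# Demo on a captured report; hostnames and numbers here are made up.
out=$(parse_mtr <<'EOF'
HOST: laptop            Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- gateway.local    0.0%    10    1.2   1.4   1.1   2.0   0.3
  2.|-- isp-edge.net     0.0%    10    8.7   9.1   8.2  12.4   1.1
EOF
)
printf '%s\n' "$out"
```

From there, the samples can land in a textfile-collector `.prom` file or be pushed wherever the time-series backend expects them.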
Here's how to turn web access logs in #clickhouse into nicely formatted maps in #grafana, like the picture that I posted yesterday.
https://scottstuff.net/posts/2025/03/21/geocoding-ip-addresses-with-clickhouse/
The hard part with this was getting IPv4 and IPv6 handling without needing special cases for either. Now that that's done, it should just be cut-and-paste to use it yourself.
Infrastructure monitoring: how to avoid simple but wrong decisions
Monitoring is not just collecting state information; it's a helper for everyone. And that's exactly why it comes in so many forms. To help users, developers, and providers, monitoring has to solve very different tasks at different levels. For example, users care that the service is available at exactly the moment they need it. The provider cares that resources work as efficiently as possible. At first glance it seems the main thing in monitoring is to choose key metrics, account for the specifics of the infrastructure, and set up data collection, triggers, and alerts. That is undoubtedly very important for an observability tool. But the main thing in monitoring is still to make it a source of information for development and optimization. Hi, Habr! I'm Andrey Kamardin, an SRE engineer at one of the Russian cloud companies, a senior lecturer at MAI, and a Skillbox DevOps expert. I run the channel «Записки про IT». For the closed Skillbox IT Experts community, I talked about how we set up infrastructure monitoring to support decision-making.
https://habr.com/ru/articles/893142/
#логирование #мониторинг #инфраструктура #devops #grafana #облачные_хранилища #метрики #данные #облачная_инфраструктура #облачное_хранилище