Compare commits

...

238 Commits

Author SHA1 Message Date
Prashant Shahi
82d53fa45c chore: 📌 pin versions: SigNoz 0.12.0, SigNoz OtelCollector 0.66.0
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-12-10 20:17:49 +05:30
Zsombor
c38d1c150d Fix case sensitivity in query parsing (#1670)
* Fix case sensitivity in query parsing - the parser now correctly recognizes fields which contain uppercase letters

* fix: logs parser respects the case of fields

Co-authored-by: nityanandagohain <nityanandagohain@gmail.com>
Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-10 19:27:57 +05:30
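
The fix above is easiest to picture with a toy parser: field tokens must be matched and returned without case normalization, so a field such as TraceId survives parsing. A minimal illustrative sketch in Go (the filter grammar here is a simplified assumption, not the actual SigNoz logs parser):

package main

import (
	"fmt"
	"regexp"
)

// fieldPattern keeps both letter cases in the field-name class instead of
// lowercasing the input first, so fields with uppercase letters match.
var fieldPattern = regexp.MustCompile(`^([a-zA-Z][a-zA-Z0-9_]*)\s*(=|!=)\s*'([^']*)'$`)

func parseFilter(expr string) (field, op, value string, err error) {
	m := fieldPattern.FindStringSubmatch(expr)
	if m == nil {
		return "", "", "", fmt.Errorf("cannot parse filter: %q", expr)
	}
	// The field name is returned verbatim, preserving its original casing.
	return m[1], m[2], m[3], nil
}

func main() {
	f, op, v, _ := parseFilter("TraceId = 'abc123'")
	fmt.Println(f, op, v) // TraceId = abc123
}
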
Srikanth Chekuri
16170eacc0 Revert "chore: use local table for inner query (#1815)" (#1847)
* Revert "chore: use local table for inner query (#1815)"

This reverts commit 1b52edb056.

* chore: use localhost
2022-12-10 19:25:44 +05:30
Amol Umbark
66ddbfc085 fix: solves issue legend update causing null ch query (#1845)
Co-authored-by: mindhash <mindhash@mindhashs-MacBook-Pro.local>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-10 12:21:20 +05:30
Vishal Sharma
2715ab61a4 chore: introduce docker_multi_node_cluster and by default set to false (#1839)
* chore: introduce docker_multi_node_cluster and by default set to false

* chore(query-service): 🔧 include docker_multi_node_cluster for tests

Co-authored-by: Prashant Shahi <prashant@signoz.io>
Co-authored-by: Prashant Shahi <me@prashantshahi.dev>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-09 21:57:25 +05:30
Amol Umbark
4d291e92b9 fix: changed table names in default alert queries (#1843)
Co-authored-by: mindhash <mindhash@mindhashs-MacBook-Pro.local>
2022-12-09 21:54:51 +05:30
Nityananda Gohain
1b73649f8e fix: add default value for materialized column in distributed logs table (#1835)
Co-authored-by: Vishal Sharma <makeavish786@gmail.com>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-09 20:18:58 +05:30
Amol Umbark
0abae1c09c feat: show release note in alerts dashboards and services pages (#1840)
* feat: show release note in alerts dashboards and services pages

* fix: made code changes as per review and changed message in release note

* fix: solved build pipeline issue

* fix: solved lint issue

Co-authored-by: mindhash <mindhash@mindhashs-MacBook-Pro.local>
Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-09 20:16:09 +05:30
Pranay Prateek
4d02603aed Update README.md 2022-12-09 14:01:05 +05:30
Pranay Prateek
c58e43a678 Update README.md 2022-12-09 12:54:34 +05:30
Pranay Prateek
b77bbe1e4f Update README.md 2022-12-09 12:50:41 +05:30
Pranay Prateek
d4eb241c04 Update README.md 2022-12-09 12:48:57 +05:30
Pranay Prateek
98e1a77a43 Update README.md 2022-12-09 12:48:30 +05:30
Pranay Prateek
498b04491b updated logs image 2022-12-09 12:42:25 +05:30
Pranay Prateek
4e58414cc2 Update README.md 2022-12-09 12:36:05 +05:30
Pranay Prateek
67943cfec0 Update README.md 2022-12-09 12:32:03 +05:30
Palash Gupta
f170eb1b23 fix: scroll is added in case of extra space (#1838) 2022-12-09 10:00:55 +05:30
Vishal Sharma
6931b18382 fix: remove shared variable in TTL and async TTL queries (#1821)
* fix: remove shared variable in TTL

* fix: set distributed_ddl_task_timeout to 0 for async TTL

* chore: updated distributed_ddl_task_timeout to sync TTL queries
2022-12-07 18:23:01 +05:30
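
For context on the distributed_ddl_task_timeout change above: setting it to 0 makes an ON CLUSTER statement return as soon as the DDL task is queued instead of waiting for every replica, which is what makes the TTL change asynchronous. A minimal sketch using clickhouse-go's database/sql driver; the table, cluster name, and TTL expression are placeholders, not SigNoz's exact schema:

package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/ClickHouse/clickhouse-go/v2" // registers the "clickhouse" driver
)

// setTTLAsync issues the DDL with distributed_ddl_task_timeout = 0 so the
// call returns once the task is queued on the cluster rather than blocking
// until every replica has applied it.
func setTTLAsync(ctx context.Context, db *sql.DB) error {
	const q = `ALTER TABLE signoz_logs.logs ON CLUSTER cluster
	MODIFY TTL toDateTime(timestamp / 1000000000) + INTERVAL 7 DAY
	SETTINGS distributed_ddl_task_timeout = 0`
	_, err := db.ExecContext(ctx, q)
	return err
}

func main() {
	db, err := sql.Open("clickhouse", "clickhouse://localhost:9000")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := setTTLAsync(context.Background(), db); err != nil {
		log.Fatal(err)
	}
}
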
Prashant Shahi
8a9d6f664a fix: 🐛 log parsing issue (#1824)
Signed-off-by: Prashant Shahi <prashant@signoz.io>

Signed-off-by: Prashant Shahi <prashant@signoz.io>
Co-authored-by: Vishal Sharma <makeavish786@gmail.com>
2022-12-07 17:57:55 +05:30
Amol Umbark
8affe8df31 fix: solved issue with google help link (#1826) 2022-12-07 16:10:17 +05:30
Nityananda Gohain
1c8626e933 feat: usage collection updated for ee (#1654)
* feat: usage collection updated with new schema and logic

* fix: added exporter id and common collector id

* fix: upload usage only when license is present

* fix: handle if db doesn't exists

* fix: select query updated for usage collection to support distributed table

Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Vishal Sharma <makeavish786@gmail.com>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-06 22:52:39 +05:30
Amol Umbark
87932de668 [feat] ee/google auth implementation (#1775)
* [feat] initial version for google oauth

* chore: arranged the sso packages and added prepare request for google auth

* feat: added google auth config page and backend to handle the request

* chore: code cleanup for domain SSO parsing

* Update constants.go

* chore: moved redirect sso error

* chore: lint issue fixed with domain

* chore: added tooltip for enforce sso and few changes to auth domain

* chore: moved question mark in enforce sso

* fix: resolved pr review comments

* chore: fixed type check for saml config

* fix: fixed saml config form

* chore: added util for transformed form values to samlconfig

Co-authored-by: mindhash <mindhash@mindhashs-MacBook-Pro.local>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-06 22:32:59 +05:30
Srikanth Chekuri
1b52edb056 chore: use local table for inner query (#1816)
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-06 22:31:38 +05:30
Priyanka Chakraborty
5a81557df7 1374 dbcalls querybuilder (#1608)
* refactor: dbcalls-fromPromql-querybuilder


Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-06 16:52:20 +05:30
Prashant Shahi
8bb3eefeb5 chore(clickhouse): 🔧 include cluster.xml for distributed setup (#1810)
* chore(clickhouse): 🔧 include cluster.xml for distributed setup

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-12-05 17:26:13 +05:30
Amol Umbark
a46f074e22 fix: resolves empty variables issue for imported dashboards (#1808) 2022-12-05 16:48:11 +05:30
Prashant Shahi
88fa3b7699 feat(distributed): create single docker-compose.yaml and CH configuration (#1803)
* feat: setup for distributed clickhouse

Signed-off-by: Prashant Shahi <prashant@signoz.io>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-12-05 16:24:01 +05:30
Ankit Nayan
7f77bcca2b Feat/distributed ch (#1701)
* feat: support for distributed table

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-12-02 12:30:28 +05:30
Palash Gupta
ab5311caac feat: events is updated by adding the timestamp (#1802)
* feat: events is updated
* chore: title is updated
* feat: trace detail event timestamp is updated
2022-12-02 10:34:06 +05:30
Palash Gupta
8aae9f53a9 feat: search in tags is updated (#1788)
* feat: search in tags is updated

* chore: placeholder is updated
2022-12-01 14:19:12 +05:30
Ankit Nayan
18d80d47e5 Merge pull request #1776 from SigNoz/release/v0.11.4
Release/v0.11.4
2022-11-29 17:36:31 +05:30
Prashant Shahi
8e5522820c chore: 📌 pin versions: SigNoz 0.11.4
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-11-29 17:12:17 +05:30
Palash Gupta
5ae9557293 fix: logs time is fixed (#1772)
* fix: logs parsing is fixed

* fix: start and end time is updated
2022-11-29 14:41:36 +05:30
Palash Gupta
7e590f4bfb feat: meta description and image is updated (#1764)
* feat: meta description is updated

* chore: image is updated

Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-11-28 18:50:17 +05:30
Palash Gupta
ce072bdc3f fix: trace event is now not decoding the events (#1766)
Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-11-28 18:27:09 +05:30
Nityananda Gohain
67c0c9032f fix: logs aggregate endpoint updated to differentiate between params and query string (#1768) 2022-11-28 18:16:21 +05:30
Palash Gupta
6c9036fbf4 fix[logs][FE]: live tail is fixed (#1759)
* fix: live tail is fixed

* fix: graph state is updated

* chore: step size is updated

* chore: xaxis config is updated

* chore: isDisabled state is updated for top navigation

* chore: selected interval is updated in the reducer

* fix: build is fixed

* chore: xAxis config is updated

Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-11-28 15:44:33 +05:30
Nityananda Gohain
d06d41af87 fix: parser updated to differentiate between params and query string (#1763) 2022-11-28 14:18:43 +05:30
Amol Umbark
2771d2e774 fix: [alerts] [ch-query] added aliases in metric query result (#1760)
* fix: [alerts] [ch-query] added aliases in metric query result

* fix: added more column type support for target in ch query

* fix: added error handling when data type is unexpected in metric result

Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-11-27 14:29:09 +05:30
Amol Umbark
0cbba071ea fix: [alerts] fixed selected interval for chart preview in ch use case (#1761) 2022-11-25 16:04:09 +05:30
Amol Umbark
7cec2db503 fix: [alerts] solved legend not updating issue in ch query editor (#1757)
* fix: [alerts] solved legend not updating issue in ch query editor

* fix: [alerts]removed console.log

* fix: added jsdoc description tag
2022-11-25 12:16:47 +05:30
Amol Umbark
4b3829fd5b fix: fixed date condition (start and end) while preparing ch query (#1751)
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-11-24 18:19:07 +05:30
Vishal Sharma
983ca1ec6a feat: introduce getSubTreeSpans function in clickhouse query builder & introduce smart trace detail algorithm (#1648)
* perf: introduce smart trace detail algorithm
* fix: remove hardcoded levels and handle null levels
* feat: add support for broken trees
* feat: use spanLimit env variable
* fix: handle missing root span
* add root spanId and root name
* use permanent table
* add kind, events and tagmap support
* fix query formation
* increase context timeout to 600s
* perf improvement
* handle error
* return tableName as response to query
* support multiple queries tableName
* perf: improve memory and latency
* feat: add getSubTree custom func and smart trace detail algo to ee
* fix: create new functions for ee
* chore: refactor codebase
* chore: refactor frontend code


Co-authored-by: Ankit Nayan <ankit@signoz.io>
Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-11-24 18:18:19 +05:30
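
The getSubTreeSpans idea above reduces a huge trace to one span plus all of its descendants, stopping at the spanLimit cutoff the commit notes mention. A hedged sketch of that traversal in Go; the field names are assumptions, and this is not the actual implementation:

package main

import "fmt"

type Span struct {
	SpanID   string
	ParentID string
	Name     string
}

// subTreeSpans returns rootID's span and all of its descendants, giving up
// once spanLimit spans have been collected; spans with missing or broken
// parents simply never get visited.
func subTreeSpans(spans []Span, rootID string, spanLimit int) []Span {
	children := make(map[string][]Span, len(spans))
	byID := make(map[string]Span, len(spans))
	for _, s := range spans {
		children[s.ParentID] = append(children[s.ParentID], s)
		byID[s.SpanID] = s
	}
	var out []Span
	queue := []string{rootID}
	for len(queue) > 0 && len(out) < spanLimit {
		id := queue[0]
		queue = queue[1:]
		if s, ok := byID[id]; ok {
			out = append(out, s)
		}
		for _, c := range children[id] {
			queue = append(queue, c.SpanID)
		}
	}
	return out
}

func main() {
	spans := []Span{
		{SpanID: "a", ParentID: "", Name: "root"},
		{SpanID: "b", ParentID: "a", Name: "db-call"},
		{SpanID: "c", ParentID: "b", Name: "serialize"},
	}
	fmt.Println(subTreeSpans(spans, "b", 1000)) // [{b a db-call} {c b serialize}]
}
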
Amol Umbark
33d34af2a6 feat: added exception based alerts (#1752) 2022-11-24 18:00:02 +05:30
Vishal Sharma
b0ec619881 fix: trace table pagination (#1749)
* fix: trace table pagination

* chore: refactor

* chore: refactor

Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-11-24 16:25:26 +05:30
Amol Umbark
220f848b04 feat: [UI] clickhouse queries in alert builder (#1706)
* feat: added ui changes to support clickhouse queries in alert builder

* chore: minor fix to alert rules ui

* feat: alert form changes: ch query support, alert type selection

* chore: resolved review comments

* chore: added list for alert type selection instead

* chore: removed hard coded color and added antd/colors

* fix: resolved some issues found during testing alerts

* fix: moved alert defaults and added default queries for logs and traces

* feat: added default queries for logs and traces to reflect ts vars

* chore: fixed px to rem

Co-authored-by: Palash Gupta <palashgdev@gmail.com>
Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-11-24 13:21:46 +05:30
Palash Gupta
4727dbc9f0 fix: if invalid switch is disabled (#1656)
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-11-24 00:08:56 +05:30
Amol Umbark
00863e54de feat: added ch query support (#1735)
* feat: added ch query support
* fix: added new vars to resolve alert query format issue
* fix: replaced timestamp vars in metric query range

Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-11-23 18:49:03 +05:30
Ankit Nayan
e9c47a6a73 Merge branch 'develop' of https://github.com/SigNoz/signoz into develop 2022-11-23 16:58:05 +05:30
Ankit Nayan
88af456915 chore: detect first registration 2022-11-23 16:57:49 +05:30
Ankit Nayan
7ebc94c273 display message updated (#1744)
* display message updated

* chore: display message changed
2022-11-23 16:44:47 +05:30
Palash Gupta
d5bd991417 fix: onApply data is updated (#1655) 2022-11-23 16:25:02 +05:30
Palash Gupta
4c0d573760 fix: Logs issues are fixed (#1727)
* feat: logs is updated
* chore: width:100% is removed
* chore: position of filter is updated
* chore: min time and max time are now tracked from global state


Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-11-23 13:42:36 +05:30
Vishal Sharma
1273bb5865 fix: getNanoTimestamp function and cache fix (#1737) 2022-11-22 13:13:10 +05:30
Palash Gupta
87502baabf feat: filter is added in exceptions page (#1731)
* feat: filter is added

Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Vishal Sharma <makeavish786@gmail.com>
2022-11-22 12:08:51 +05:30
Palash Gupta
90a6313423 feat: value graph is updated (#1733) 2022-11-21 21:03:33 +05:30
Palash Gupta
4a244ad7b2 feat: onClick is updated (#1732)
Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-11-21 16:44:41 +05:30
Palash Gupta
db105af89f refactor: some of the styles are removed and used native antd components (#1730) 2022-11-21 13:39:54 +05:30
Palash Gupta
b8c58a9812 chore: removed unnecessary eslint check (#1668)
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-11-18 19:04:40 +05:30
Ankit Nayan
78d2377520 Merge pull request #1722 from SigNoz/release/v0.11.3
Release/v0.11.3
2022-11-16 19:50:55 +05:30
Ankit Nayan
549535d09e Update README.md
Added Ruby

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-11-16 19:12:36 +05:30
Palash Gupta
ac4d35c6c0 chore: alignment is fixed in header (#1723)
* chore: alignment is fixed
2022-11-16 19:08:09 +05:30
Prashant Shahi
ad34c6e25f Merge branch 'develop' into release/v0.11.3 2022-11-16 17:37:56 +05:30
Prashant Shahi
c306701bab chore: 📌 pin versions: SigNoz 0.11.3, SigNoz OtelCollector 0.63.0
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-11-16 17:33:27 +05:30
Pranay Prateek
fcc725c6e6 Update README.md 2022-11-16 17:17:08 +05:30
Prashant Shahi
d615d7a9e3 Updating collection interval in otelcol configuration files (#1720)
* chore: 🔧 set collection interval of hostmetrics to 30s while others to 60s
2022-11-15 20:33:56 +05:30
Prashant Shahi
622943645f Bump version of clickhouse to 22.8.8 LTS and deploy file changes (#1711)
* chore: 🔥 remove docker-compose-prod.yaml as redundant and update Makefile
* chore: 🔧 scrape otel-collector internal metrics in same container and related changes
* chore: 📌 Bump version of clickhouse to 22.8.8 LTS

Signed-off-by: Prashant Shahi <prashant@signoz.io>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-11-15 20:07:09 +05:30
Srikanth Chekuri
355264a43e chore: bump SigNoz/prometheus to v1.9.76 (#1719) 2022-11-15 18:45:47 +05:30
Srikanth Chekuri
2c7deca2ec fix: include inner panels support and map job,instance correctly (#1718)
* fix: include inner panels support and map job,instance correctly

* chore: remove debug and tidy up bit
2022-11-15 18:23:20 +05:30
Vishal Sharma
e558dcae3a fix: update trace URI when coming from metrics (#1715) 2022-11-15 13:08:48 +05:30
Srikanth Chekuri
4cf3dc2ec3 fix: remove usage of labels object (#1710)
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-11-14 22:51:23 +05:30
Palash Gupta
2e124da366 feat: refresh interval is added (#1712)
* feat: refresh interval is added
2022-11-14 22:32:19 +05:30
Vishal Sharma
a50d7f227c Feat: dynamic tooltip (#1705)
* feat: integrate config service with query service
* feat: add tooltip checkpoint
* feat: add support for dark and light mode icons

Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-11-14 14:29:13 +05:30
Ankit Nayan
73706d872f Update telemetry.go 2022-11-12 17:19:34 +05:30
Palash Gupta
0480197914 fix Logs contains issue (#1708)
* chore: logs is updated
* chore: contains is updated
2022-11-12 11:37:52 +05:30
Palash Gupta
65af8c1b98 801 dropdown is added in the dashboard page (#1669)
* chore: update the import from constant rather than static string

* chore: removed redundant div

* feat: added auto refresh component

* refactor: top nav is refactored
2022-11-10 20:48:40 +05:30
Nityananda Gohain
a3b03ef0ca fix: parser updated to support escaped quotes in search (#1704)
Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-11-10 18:24:20 +05:30
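
Supporting escaped quotes typically means the string-literal token must accept \" inside a quoted value instead of ending the string there. An illustrative Go regular expression for such a token (an assumed grammar, not the actual parser):

package main

import (
	"fmt"
	"regexp"
)

// quoted matches a double-quoted literal in which \" and \\ are escapes
// rather than string terminators.
var quoted = regexp.MustCompile(`"(?:\\.|[^"\\])*"`)

func main() {
	q := `fulltext contains "user said \"hello\"" AND level = "error"`
	fmt.Println(quoted.FindAllString(q, -1))
}
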
Srikanth Chekuri
9735a6e5ce feat: add ability to import Grafana dashboards (#1700)
* feat: add ability to import Grafana dashboards

* chore: remove unnecessary file

* chore: more 9XX support

* chore: some more hacks

* chore: update deps

* chore: arrange equal spaced widgets instead of inheriting from grafana
2022-11-10 16:49:54 +05:30
Vishal Sharma
674883cd18 Feature flagging (#1674)
* feat: introduce feature flagging via env variables
* refactor: enable sorting by default for users
2022-11-09 08:30:00 +05:30
Pang
36315fcf9c fix: make README.zh-cn.md readable (#1647)
Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-11-03 06:09:15 +05:30
Palash Gupta
46050a217c feat: all trace now open in new tab (#1662) 2022-10-26 12:53:47 +05:30
Ankit Nayan
c9363586e1 Merge branch 'main' into develop 2022-10-17 14:36:38 +05:30
Ankit Nayan
5eed384ffe Merge pull request #1637 from SigNoz/release/v0.11.2
Release/v0.11.2
2022-10-13 16:39:48 +05:30
Prashant Shahi
1b152c19ec ci(e2e): 👷 enable DEV_BUILD flag for query-service (#1636)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-10-13 16:10:36 +05:30
Prashant Shahi
6a3c1c10fb chore(release): 📌 pin versions: SigNoz 0.11.2, OtelCollector 0.55.3
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-10-13 15:28:03 +05:45
Palash Gupta
f580bedb1c 1627 login: onsubmit is added (#1635)
* feat: onsubmit is updated
* chore: precheckComplete handler is updated
2022-10-13 14:20:25 +05:30
Prashant Shahi
acd15af823 ci(e2e): 👷 ee build for query-service (#1633)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-10-13 08:58:06 +05:30
Nityananda Gohain
134c5dc1d2 fix: disable usage collection (#1631) 2022-10-12 12:04:36 +05:30
Palash Gupta
57f4f098f7 feat: onsubmit is updated (#1628) 2022-10-11 19:38:22 +05:30
Ankit Nayan
fce4496214 chore: rateLimit added 2022-10-11 18:35:05 +05:30
Palash Gupta
4e38f1dcc0 chore: free plan config is updated (#1625)
* chore: free plan config is updated


* fix: solved empty state issue with no auth domains

Co-authored-by: Amol <amolumbarkar@gmail.com>
2022-10-11 15:48:58 +05:30
Ankit Nayan
fe0f305ea7 Merge branch 'develop' of https://github.com/SigNoz/signoz into develop 2022-10-11 00:44:44 +05:30
Ankit Nayan
1374444f36 chore: analytics 2022-10-11 00:43:54 +05:30
Nityananda Gohain
fe0a4ab0cb Fix/delete old snapshot (#1621)
* fix: remove old snapshots
2022-10-07 20:06:01 +05:30
Srikanth Chekuri
f2f2069835 chore: bump SigNoz/prometheus to v1.9.74 (#1620) 2022-10-07 19:00:27 +05:30
Prashant Shahi
90d0c72aa2 ci(push): 👷 make ee query-service build default (#1616)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-10-07 14:23:01 +05:30
Nityananda Gohain
90d1a87027 fix: usage collection frequency updated (#1617) 2022-10-07 11:48:22 +05:30
Amol Umbark
9c4521b34a feat: enterprise edition (#1575)
* feat: added license manager and feature flags
* feat: completed org domain api
* chore: checking in saml auth handler code
* feat: added signup with sso
* feat: added login support for admins
* feat: added pem support for certificate
* ci(build-workflow): 👷 include EE query-service
* fix: 🐛 update package name
* chore(ee): 🔧 LD_FLAGS related changes

Signed-off-by: Prashant Shahi <prashant@signoz.io>
Co-authored-by: Prashant Shahi <prashant@signoz.io>
Co-authored-by: nityanandagohain <nityanandagohain@gmail.com>
2022-10-06 20:13:30 +05:30
Amol Umbark
106033c296 Feature: SSO Login and Feature gating in UI (#1605)
* feat: added usefeatureflags hook and relevant code
* chore: resolved lint issues
* chore: applied translations
* feat: added signup for sso
2022-10-04 13:43:58 +05:30
Palash Gupta
9372f763c8 feat: SAML settings is updated (#1556)
* chore: getFeatureFlag is implemented
* feat: authDomain are added
2022-10-03 21:27:42 +05:30
Priyanka Chakraborty
3bbe2f4f58 1363 externalcall querybuilder (#1550)
* externaltab-promql-to-querybuilder
* refactored the queries into separate file
* added logic for resourceattribute to tagFilter items conversion
* refactor: use useMemo

Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-10-03 08:51:08 +05:30
Priyanka Chakraborty
1b1fb2f13b feat: route and breadcrumbs renamed to services (#1566)
* feat: route and breadcrumbs renamed to services
2022-10-03 07:02:03 +05:30
Prashant Shahi
a94bd9b99b introduce env for dashboards path in query-service (#1593)
* chore: 🔧 fetch dashboards path from DASHBOARDS_PATH env
* chore: 🚀 update docker files to include DASHBOARDS_PATH env
2022-10-03 05:48:54 +05:30
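
The DASHBOARDS_PATH pattern above is a standard env-with-default lookup; a tiny sketch (the fallback path is an assumption):

package main

import (
	"fmt"
	"os"
)

// dashboardsPath prefers the DASHBOARDS_PATH env var and falls back to a
// baked-in default when it is unset or empty.
func dashboardsPath() string {
	if p := os.Getenv("DASHBOARDS_PATH"); p != "" {
		return p
	}
	return "./config/dashboards"
}

func main() {
	fmt.Println(dashboardsPath())
}
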
Nityananda Gohain
dcf2ac15b0 feat: add compression for materialized columns (#1585) 2022-10-03 05:45:59 +05:30
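
In ClickHouse, compression is set per column with a CODEC clause, so compressing a materialized column is a single ALTER. An illustrative example issued from Go; the column, table, and ZSTD(1) codec choice are assumptions, not SigNoz's exact DDL:

package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/ClickHouse/clickhouse-go/v2"
)

// Illustrative DDL: materialize a log attribute into its own column with an
// explicit DEFAULT and a per-column compression codec.
const addMaterializedColumn = `ALTER TABLE signoz_logs.logs
	ADD COLUMN IF NOT EXISTS attribute_http_url String
	DEFAULT attributes_string_value[indexOf(attributes_string_key, 'http.url')]
	CODEC(ZSTD(1))`

func main() {
	db, err := sql.Open("clickhouse", "clickhouse://localhost:9000")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := db.ExecContext(context.Background(), addMaterializedColumn); err != nil {
		log.Fatal(err)
	}
}
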
Vishal Gupta
9e9924943e feat: #1524 refresh button bug fix (#1582)
* feat: #1524 refresh button bug fix

* lint fixes

Co-authored-by: palashgdev <palashgdev@gmail.com>
2022-09-29 11:18:37 +05:30
Srikanth Chekuri
2c9794a6c6 fix: filter items can be empty (#1586) 2022-09-27 12:44:49 +05:30
Prashant Shahi
6b6f494574 fix: 🐛 update OTEL image (#1595)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-09-26 13:16:14 +05:30
Prashant Shahi
a3f11184e4 chore: 🔧 414 issue fix for large request URI (#1594)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-09-26 13:14:28 +05:30
Priyanka Chakraborty
cc3f36b62b 1587 update readme signoz vs jaeger (#1588)
* chore: update readme signoz vs jaeger
2022-09-22 22:03:50 +05:30
Koladele Olaitan
450602cd72 Removed Pranshu Chittora as a Frontend Maintainer (#1573) 2022-09-16 11:44:08 +05:30
Bryan Johnson
7088c22318 Fix minor grammar error in stale_version message for en and en-GB locales (#1570) 2022-09-14 21:26:58 +05:30
Pranay Prateek
b9af7e7ff3 update READMEs 2022-09-14 12:52:02 +05:30
Pranay Prateek
00389271cf chore: Introduce enterprise edition license (#1567)
* chore: Introduce enterprise edition license
2022-09-14 12:36:33 +05:30
Ankit Nayan
adda2e8a11 Merge pull request #1564 from SigNoz/release/v0.11.1
Release/v0.11.1
2022-09-14 10:21:54 +05:30
Prashant Shahi
ed2bbc5035 chore(release): 📌 pin versions: SigNoz-Otel-Collector 0.55.1
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-09-14 00:03:07 +05:30
Prashant Shahi
f1fdf78dc5 chore(release): 📌 pin versions: SigNoz 0.11.1
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-09-13 22:46:00 +05:30
Prashant Shahi
745fd07bd8 fix(lint): 🚨 format prometheus config YAML and remove trailing spaces
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-09-13 22:45:30 +05:30
Prashant Shahi
05de0ccba5 chore: 🔧 Add 414 issue fix for Frontend default.conf and Swarm
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-09-13 22:42:14 +05:30
palashgdev
c9139c5236 feat: height is updated (#1563) 2022-09-13 21:19:01 +05:30
palashgdev
0806397816 feat: webpack chunk name is updated (#1562) 2022-09-13 12:00:09 +05:30
Pranshu Chittora
c43dabdb0b fix: dashboard variable getting deleted on edit instances (#1561) 2022-09-12 23:24:45 +05:30
Vishal Sharma
eaadc3bb95 feat: introduce search trace ID component (#1551)
* feat: searchTraceID checkpoint
* feat: filter spans using TraceID from trace filter page

Co-authored-by: palashgdev <palashgdev@gmail.com>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-09-12 19:35:31 +05:30
Pranshu Chittora
1ec9248975 feat: getting started page (#1560)
Co-authored-by: palashgdev <palashgdev@gmail.com>
2022-09-12 16:26:01 +05:30
Srikanth Chekuri
0ccd7777bf fix: make widget plot work with missing data points (#1559) 2022-09-12 16:11:55 +05:30
Srikanth Chekuri
ac86d840f9 fix: reuse the query engine and storage for alerts pqlEngine (#1558) 2022-09-12 12:30:36 +05:30
Srikanth Chekuri
8556c87d46 feat: add support for dashboard variables (#1557) 2022-09-11 03:34:02 +05:30
Pranshu Chittora
461a15d52d feat: dashboard variables (#1552)
* feat: dashboard variables
* fix: variable wipe on few instances
* feat: error handling states
* feat: eslint and tsc fixes
2022-09-09 17:43:25 +05:30
Ankit Nayan
9e6d9019f7 chore: added group analytics 2022-09-08 21:05:54 +05:30
Ankit Nayan
578dafd1ff chore: fixed random number generation to match to maxRandInt 2022-09-07 15:20:56 +05:30
Ankit Nayan
99c0c97c1e chore: added sampling in analytics 2022-09-06 19:55:01 +05:30
Ankit Nayan
4875652ecb chore: added group analytics 2022-09-06 19:29:07 +05:30
Ankit Nayan
d170515d4d Merge pull request #1532 from SigNoz/release/v0.11.0
Release/v0.11.0
2022-08-24 20:11:36 +05:30
Prashant Shahi
73b00f405b Merge branch 'develop' into release/v0.11.0 2022-08-24 19:01:33 +05:30
Nityananda Gohain
ea8bd7047f Fix case mismatch in static fields. (#1537)
* case mismatch fix
* fix: undefined handled in flattened data
2022-08-24 18:59:44 +05:30
Prashant Shahi
596daefa7e compose changes: add default env and remove otel memory limit (#1536)
* chore(docker-compose): 🔥 remove memory limit

* chore: 🔧 include envs: ALERTMANAGER_API_PREFIX and SIGNOZ_LOCAL_DB_PATH
2022-08-24 18:58:44 +05:30
Pranshu Chittora
7081e4ffce fix: recursive url reloading (#1535) 2022-08-24 17:46:31 +05:30
Prashant Shahi
07dcdb51f7 chore: 🔧 enable logs capturing by default (#1534)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-08-24 15:11:38 +05:30
Prashant Shahi
d2e990ebf4 chore: 📌 pin versions: SigNoz 0.11.0, OtelCollector distribution 0.55.0
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-08-24 00:00:19 +05:30
Prashant Shahi
c2f95dc727 Merge branch 'develop' into release/v0.11.0 2022-08-23 22:22:49 +05:30
Aditya Kumar Praharaj
37dedc8b87 feat(devbox): splitting docker-compose.yaml into core and prod / local for no-edit local setup (#1528) 2022-08-23 15:23:37 +05:30
Srikanth Chekuri
9cd1be6553 Allow search by service name in services list page (#1520)
* feat: add search by service name
* chore: allow clear
* chore: table search icon and review comments
* chore: fix lint
* chore: address review comments
* chore: fix types
* chore: tweak user experience
* chore: antd color enum
2022-08-23 13:05:19 +05:30
Pranshu Chittora
5e0eb05a9c feat: support for legend in query builder formulas (#1530) 2022-08-23 11:17:49 +05:30
Pranshu Chittora
f48a884f90 fix: eslint and tsc fixes for logs (#1527)
* fix: eslint and tsc fixes for logs

* chore: remove package-lock file
2022-08-19 17:16:04 +05:30
Srikanth Chekuri
32fba00aa8 fix: do not show services without any data for select interval (#1521) 2022-08-17 15:11:08 +05:30
Pranshu Chittora
166e5612eb feat: restrict timestamp from adding it to query (#1517)
Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-08-16 18:53:34 +05:30
Ankit Nayan
b23d63cb2b Merge pull request #1515 from pranshuchittora/pranshuchittora/fix/sse-event-polyfill
fix: live tail sse prod issue
2022-08-16 13:33:45 +05:30
Pranshu Chittora
5e0ed6f5f5 chore: removed unused SSE libs 2022-08-16 13:07:05 +05:30
Pranshu Chittora
74f947a028 fix: live tail sse prod issue 2022-08-16 12:59:07 +05:30
Ankit Nayan
cca74e5926 Merge branch 'main' into develop 2022-08-11 18:53:38 +05:30
Ankit Nayan
1865d75df6 Merge pull request #1512 from SigNoz/release/v0.10.2
Release/v0.10.2
2022-08-11 18:52:34 +05:30
Prashant Shahi
d4b0013900 chore(prerelease): 📌 pin versions: SigNoz 0.11.0-rc.1
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-08-11 15:08:12 +05:30
Prashant Shahi
55c9eb733d chore(release): 📌 pin versions: SigNoz 0.10.2
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-08-11 14:58:36 +05:30
Amol Umbark
54cc363752 Alerts/edit rule issue 676 (#1505)
* fix: resolved issue with editing of rules

(cherry picked from commit a3015d1077)
2022-08-11 14:43:50 +05:30
Ankit Nayan
07d013a716 chore: added analytics for logs 2022-08-11 14:27:19 +05:30
Amol Umbark
a3015d1077 Alerts/edit rule issue 676 (#1505)
* fix: resolved issue with editing of rules
2022-08-11 13:54:17 +05:30
Nityananda Gohain
66b67a08a0 alias for timestamp interval changed in sql query (#1509) 2022-08-11 13:53:33 +05:30
Pranshu Chittora
7a4750a882 Logs UI (#1436)
* feat: logs routing
* feat: add redux for logs
* feat: logs filter ui
* feat: logsql parser integration
* feat: logs table initial version
* feat: logs aggregated view
* feat: add log detail
* feat: log live tail
* feat: Logs TTL UI
2022-08-11 11:45:28 +05:30
Nityananda Gohain
6d623c5d45 single otlp receiver (#1506) 2022-08-11 11:43:10 +05:30
Srikanth Chekuri
6e899175a0 fix: escape and encode operations regex for overview details (#1502)
* fix: escape and encode operations regex for overview details
* chore: use back tick with escaping
2022-08-11 11:41:32 +05:30
Amol Umbark
aecf3ef93e fix: added cache busting for translations using file hash (#1478)
* fix: added cache busting for translations using file hash


Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-08-10 23:09:34 +05:30
Nityananda Gohain
b6afc9315b clickhouse logs exporter added to deployment file (#1500)
* clickhouse logs exporter added to deployment file
* updated to latest otel collector
2022-08-10 23:00:05 +05:30
Srikanth Chekuri
dda82474ae feat: add more options in service map time dropdown (#1501) 2022-08-10 21:09:02 +05:30
Srikanth Chekuri
998e72374f fix: escape and encode operations regex for overview details (#1499)
* fix: interval should be 1d=24h (#1482) (#1483)

* fix: escape and encode operations regex for overview details

Co-authored-by: Ankit Nayan <ankit@signoz.io>
Co-authored-by: zedongh <248348907@qq.com>
2022-08-10 21:04:12 +05:30
Ankit Nayan
a1f6f09ae1 Merge pull request #1379 from nityanandagohain/feat/logs
Support for Logs
2022-08-10 15:29:03 +05:30
nityanandagohain
7a1cbdb0bb Merge remote-tracking branch 'origin/feat/logs' into feat/logs 2022-08-10 14:28:11 +05:30
nityanandagohain
eb28ece680 parser updated for pagination 2022-08-10 14:27:46 +05:30
Vishal Sharma
fda6e4472a Merge branch 'develop' into feat/logs 2022-08-09 10:45:23 +05:30
zedongh
9de99d1872 fix: interval should be 1d=24h (#1482) (#1483) 2022-08-09 09:52:55 +05:30
Ankit Nayan
8f9d0f2403 Merge pull request #1480 from SigNoz/release/v0.10.1
Release/v0.10.1
2022-08-07 15:35:29 +05:30
Prashant Shahi
04cf1b2697 Merge branch 'develop' into release/v0.10.1 2022-08-07 15:27:33 +05:30
Amol Umbark
8bdc41bef0 fix: resolves issue for migrated promql (#1481) 2022-08-06 13:40:41 +05:30
Prashant Shahi
616da88790 chore(release): 📌 pin versions: SigNoz 0.10.1, OtelCollector 0.45.1-1.3, Alertmanager 0.23.0-0.2
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-08-05 22:30:55 +05:30
Srikanth Chekuri
143a5b65f9 Merge branch 'develop' into feat/logs 2022-08-04 20:08:58 +05:30
nityanandagohain
0807a0ae26 Merge remote-tracking branch 'upstream/develop' into feat/logs 2022-08-04 17:32:45 +05:30
Amol Umbark
1ebf64589f Alerts: Test Notifications in Rules 2022-08-04 17:24:15 +05:30
Amol Umbark
80c96af5a4 feat: added user selected filtering of channels in alerts (#1459) 2022-08-04 15:31:21 +05:30
nityanandagohain
61ebd3aded logs ttl support added in ttl api 2022-08-04 14:28:10 +05:30
Vishal Sharma
425b732370 fix: add defaultDependencyGraphTable to getTTL status check (#1474) 2022-08-04 13:41:25 +05:30
Vishal Sharma
a742c9aee1 feat: use materialized view for usage explorer API (#1466)
* feat: use materialized view for usage explorer API

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-08-04 12:55:21 +05:30
Srikanth Chekuri
3968f11b3d feat: improve service map (#1467)
* feat: improve service map
2022-08-04 12:38:53 +05:30
Srikanth Chekuri
5bfc2af51b feat: show messaging/cron/browser services in listing page (#1455)
* feat: show messaging/cron/browser services in listing page

* chore: issue maximum of ten queries to clickhouse

Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-08-04 11:57:05 +05:30
Amol Umbark
8146da52af feat: Disable Alerts Feature (Backend) (#1443)
* feat: added patch rule api

* feat: added backend api for patching rule status

* fix: improved patchRule and also editRule


Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-08-04 11:55:54 +05:30
Amol Umbark
5dc6d28f2e feat: disable alerts feature (UI) (#1445)
* feat: added enable disable feature for rules

* fix: resolved type issue in getTriggered

Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-08-04 11:55:09 +05:30
Amol Umbark
a6ed6c03c1 Alerts/607 test notifications UI (#1469)
* feat: added test alert feature

* fix: solved the lint issues

Co-authored-by: Palash Gupta <palashgdev@gmail.com>
2022-08-03 19:40:20 +05:30
Vishal Sharma
cca4db602c Remove query-service code owners (#1461)
* Remove query-service code owners
2022-08-03 16:55:51 +05:30
Amol Umbark
7ff49ba47c feat: added rule url to the title link in slack message (#1421)
* feat: added rule url to the title link in slack message

* fix: corrected duplication of code for generator url in rules engine

* fix: removed unnecessary import in rules engine
2022-08-03 15:08:14 +05:30
nityanandagohain
9dcf913a74 severity_number type changed to int8 2022-08-03 12:23:00 +05:30
Prashant Shahi
68194d7e07 fix(query-service): 🚀 embed copy of timezone data (#1462)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-08-02 10:05:16 +05:30
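
Embedding timezone data is a stock Go facility (time/tzdata, Go 1.15+): a blank import compiles a fallback copy of the IANA database into the binary, so time.LoadLocation works even in containers with no /usr/share/zoneinfo. A minimal example:

package main

import (
	"fmt"
	"log"
	"time"

	_ "time/tzdata" // embeds a copy of the tz database into the binary
)

func main() {
	// Works even in a FROM scratch image without system zoneinfo files.
	loc, err := time.LoadLocation("Asia/Kolkata")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(time.Now().In(loc))
}
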
Amol Umbark
7881aee350 feat: added user preferred channel filters in alerts (#1458)
* feat: added user preferred channel filters in alerts

* fix: resolved minor translation issue
2022-08-02 09:54:24 +05:30
nityanandagohain
594bfc256c fulltext validation updated 2022-08-01 13:02:00 +05:30
nityanandagohain
5894acdb2d OR support added with contains 2022-08-01 12:30:11 +05:30
nityanandagohain
6eb9389e81 parser updated to include OR as well 2022-08-01 12:17:15 +05:30
Srikanth Chekuri
39be8201aa Merge pull request #1450 from SigNoz/bump-json-iterator
chore: bump json-iterator version to v1.1.12
2022-08-01 12:10:07 +05:30
Srikanth Chekuri
6778163a07 Merge branch 'develop' into bump-json-iterator 2022-08-01 09:13:04 +05:30
Amol Umbark
56a2047560 fix: remove 'default channel' note from channels page (#1446) 2022-07-31 09:54:58 +05:30
Srikanth Chekuri
023ef66035 Merge branch 'develop' into bump-json-iterator 2022-07-29 08:37:46 +05:30
Prashant Shahi
22b8572495 feat(swarm): 🚀 scraping multiple otel-collector (#1438)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-07-28 14:05:35 +05:30
jshiwam
e39d2f799d Used Prepared Statements for GetChannel in clickhousereader (#1414)
* feat: used db.Preparex
2022-07-28 10:14:27 +05:30
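
db.Preparex is sqlx's prepared-statement helper: the SQL is parsed once and arguments are bound per call rather than interpolated into the query string. A self-contained sketch; SQLite stands in for ClickHouse here, and the table and struct are assumptions, not the actual clickhousereader code:

package main

import (
	"log"

	"github.com/jmoiron/sqlx"
	_ "github.com/mattn/go-sqlite3" // SQLite stands in for ClickHouse in this sketch
)

type Channel struct {
	ID   int    `db:"id"`
	Name string `db:"name"`
}

// getChannel prepares the statement once and binds the id per call,
// instead of formatting it into the SQL string.
func getChannel(db *sqlx.DB, id int) (*Channel, error) {
	stmt, err := db.Preparex(`SELECT id, name FROM notification_channels WHERE id = ?`)
	if err != nil {
		return nil, err
	}
	defer stmt.Close()
	var ch Channel
	if err := stmt.Get(&ch, id); err != nil {
		return nil, err
	}
	return &ch, nil
}

func main() {
	db := sqlx.MustOpen("sqlite3", ":memory:")
	db.MustExec(`CREATE TABLE notification_channels (id INTEGER, name TEXT)`)
	db.MustExec(`INSERT INTO notification_channels VALUES (1, 'slack')`)
	ch, err := getChannel(db, 1)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%+v", ch) // &{ID:1 Name:slack}
}
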
nityanandagohain
5b28fe1c9d Merge remote-tracking branch 'origin/feat/logs' into feat/logs 2022-07-27 15:59:24 +05:30
nityanandagohain
d15f9a1709 log statement corrected 2022-07-27 15:58:58 +05:30
Srikanth Chekuri
2c383528bc Merge branch 'develop' into bump-json-iterator 2022-07-27 14:41:50 +05:30
Srikanth Chekuri
0378dfd12f chore: bump json-iterator version to v1.1.12 2022-07-27 14:40:45 +05:30
Srikanth Chekuri
002ccc3975 Merge branch 'develop' into feat/logs 2022-07-27 12:53:32 +05:30
Prashant Shahi
f8f903848e fix: 🚀 disables TTL moves on insert and only run in background (#1448)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-07-27 11:53:10 +05:30
nityanandagohain
047227ad18 use ticker for polling db in live tail. 2022-07-27 11:47:35 +05:30
nityanandagohain
7b6a086b37 consistent query formatting 2022-07-27 10:46:33 +05:30
Nityananda Gohain
baf72610d6 Update pkg/query-service/app/clickhouseReader/reader.go
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2022-07-27 10:39:08 +05:30
nityanandagohain
a5388d357c Merge remote-tracking branch 'origin/feat/logs' into feat/logs 2022-07-26 14:50:55 +05:30
nityanandagohain
294d527a0e parser updated to support more than one contains 2022-07-26 14:45:20 +05:30
Srikanth Chekuri
a23788852a Merge branch 'develop' into feat/logs 2022-07-26 11:55:18 +05:30
jshiwam
a771c3b9a6 docs: added documentation for query-service local setup (#1426)
* docs: added documentation for query-service local setup

* fix: updated clickhouse setup link

* Use Env Var for the alertmanager endpoint
2022-07-26 09:59:31 +05:30
nityanandagohain
0fe4327877 live tail fetch only recent 100 logs every 10s 2022-07-25 14:42:58 +05:30
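
The two live-tail commits above describe a polling loop: a time.Ticker fires every 10 seconds, and each tick fetches at most the 100 newest rows after the last seen timestamp. A condensed sketch; the row shape, query, and channel wiring are assumptions:

package main

import (
	"context"
	"fmt"
	"time"
)

type LogRow struct {
	Timestamp uint64 // nanoseconds
	Body      string
}

// fetchSince stands in for the ClickHouse call; the real query would be
// SELECT ... WHERE timestamp > ? ORDER BY timestamp DESC LIMIT 100.
func fetchSince(ctx context.Context, since uint64) ([]LogRow, error) {
	return nil, nil
}

// liveTail polls every 10s and forwards only rows newer than lastSeen.
func liveTail(ctx context.Context, out chan<- LogRow) {
	defer close(out)
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	var lastSeen uint64
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			rows, err := fetchSince(ctx, lastSeen)
			if err != nil {
				continue // a transient DB error just skips one tick
			}
			for _, r := range rows {
				if r.Timestamp > lastSeen {
					lastSeen = r.Timestamp
				}
				out <- r
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	out := make(chan LogRow)
	go liveTail(ctx, out)
	for r := range out {
		fmt.Println(r.Body)
	}
}
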
nityanandagohain
4825ed6e5f dataType constant strings 2022-07-22 17:19:55 +05:30
nityanandagohain
2f17898390 primitive type pointers removed 2022-07-22 16:49:40 +05:30
nityanandagohain
373cbbc375 logs select statement converted to a const 2022-07-22 16:07:19 +05:30
nityanandagohain
f8be4a6d5b livetail timestamp correction 2022-07-22 15:49:50 +05:30
nityanandagohain
420d46ab01 tail function updated to use values instead of pointers 2022-07-22 15:44:07 +05:30
nityanandagohain
bdb6901c74 generateSql returns value instead of pointer 2022-07-22 15:39:43 +05:30
nityanandagohain
94cde11164 consistent response value instead of pointer 2022-07-22 15:27:52 +05:30
nityanandagohain
448e14b32f parser updated to support contains operator for other fields 2022-07-22 15:17:46 +05:30
nityanandagohain
6ac7cb1022 parser updated 2022-07-21 18:32:11 +05:30
nityanandagohain
2132d1059c live tail api excluded from timeout middleware 2022-07-21 17:55:08 +05:30
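
Excluding a streaming endpoint from a blanket timeout middleware usually comes down to a path check before the timeout wrapper runs, since a live tail must outlive any request deadline. A sketch with net/http; the path and timeout values are assumptions:

package main

import (
	"log"
	"net/http"
	"time"
)

// withTimeout applies a global request timeout but lets the live-tail
// endpoint through untouched.
func withTimeout(next http.Handler) http.Handler {
	timedOut := http.TimeoutHandler(next, 60*time.Second, "request timed out")
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/api/v1/logs/tail" {
			next.ServeHTTP(w, r) // no deadline for the stream
			return
		}
		timedOut.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/v1/logs/tail", func(w http.ResponseWriter, r *http.Request) {
		// the streaming handler would flush log lines here
	})
	log.Fatal(http.ListenAndServe(":8080", withTimeout(mux)))
}
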
Palash Gupta
ff9c41464b test: error and error details case is added (#1420) 2022-07-21 11:25:54 +05:30
Akshay Awate
acb3721815 refactor: start_docker() (#1410)
* refactor: start_docker()

Co-authored-by: akshay <akshay.awate@infracloud.io>
2022-07-20 23:30:21 +05:30
nityanandagohain
5912d3a4a0 observed timestamp removed 2022-07-20 14:52:16 +05:30
nityanandagohain
a527c33c7d timestamp in ns from ms 2022-07-20 13:05:24 +05:30
nityanandagohain
c24bdfc8cf aggregate function added 2022-07-20 12:11:03 +05:30
nityanandagohain
051f640100 correct var names in live tail 2022-07-19 16:38:28 +05:30
nityanandagohain
b5c8764605 changes added for live tail api 2022-07-19 16:34:33 +05:30
Amol Umbark
475c44a000 fix: increased debounce to 1000 (from 500). also reduced network calls in get query range (#1423) 2022-07-19 11:30:56 +05:30
Amol Umbark
1b6597b974 fix: resolved issue with promql rule creation (#1422) 2022-07-19 11:29:32 +05:30
nityanandagohain
8e4fbbe770 parsing logic and test updated 2022-07-19 10:40:19 +05:30
nityanandagohain
2450fff34d live tail v1 2022-07-18 18:55:52 +05:30
nityanandagohain
df17d4ca54 Merge remote-tracking branch 'upstream/develop' into feat/logs 2022-07-18 16:49:04 +05:30
nityanandagohain
33e7252645 Merge remote-tracking branch 'origin/develop' into feat/logs 2022-07-18 16:44:27 +05:30
nityanandagohain
2e9affa80c Filtering logic updated 2022-07-18 16:37:46 +05:30
nityanandagohain
ed5d217c76 API for filtering and paginating logs added 2022-07-13 15:42:13 +05:30
nityanandagohain
ef141d2cee API for fields added 2022-07-12 16:38:26 +05:30
553 changed files with 25009 additions and 5008 deletions

.dockerignore (new file, 6 lines)

@@ -0,0 +1,6 @@
.git
.github
.vscode
README.md
deploy
sample-apps

.github/CODEOWNERS (2 lines changed)

@@ -4,4 +4,4 @@
* @ankitnayan
/frontend/ @palashgdev @pranshuchittora
/deploy/ @prashant-shahi
/pkg/query-service/ @srikanthccv @makeavish @nityanandagohain
/pkg/query-service/ @srikanthccv

(GitHub Actions workflow)

@@ -32,7 +32,17 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Build query-service image
shell: bash
run: |
make build-query-service-amd64
build-ee-query-service:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Build EE query-service image
shell: bash
run: |
make build-ee-query-service-amd64

(GitHub Actions workflow)

@@ -16,7 +16,9 @@ jobs:
uses: actions/checkout@v2
- name: Build query-service image
run: make build-query-service-amd64
env:
DEV_BUILD: 1
run: make build-ee-query-service-amd64
- name: Build frontend image
run: make build-frontend-amd64

(GitHub Actions workflow)

@@ -11,6 +11,41 @@ on:
jobs:
image-build-and-push-query-service:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- uses: benjlevesque/short-sha@v1.2
id: short-sha
- name: Get branch name
id: branch-name
uses: tj-actions/branch-names@v5.1
- name: Set docker tag environment
run: |
if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
tag="${{ steps.branch-name.outputs.tag }}"
tag="${tag:1}"
echo "DOCKER_TAG=${tag}-oss" >> $GITHUB_ENV
elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
echo "DOCKER_TAG=latest-oss" >> $GITHUB_ENV
else
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}-oss" >> $GITHUB_ENV
fi
- name: Build and push docker image
run: make build-push-query-service
image-build-and-push-ee-query-service:
runs-on: ubuntu-latest
steps:
- name: Checkout code
@@ -43,7 +78,7 @@ jobs:
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
fi
- name: Build and push docker image
run: make build-push-query-service
run: make build-push-ee-query-service
image-build-and-push-frontend:
runs-on: ubuntu-latest

.gitignore (8 lines changed)

@@ -1,3 +1,4 @@
node_modules
yarn.lock
package.json
@@ -5,6 +6,7 @@ package.json
deploy/docker/environment_tiny/common_test
frontend/node_modules
frontend/.pnp
frontend/i18n-translations-hash.json
*.pnp.js
# testing
@@ -42,8 +44,12 @@ pkg/query-service/signoz.db
pkg/query-service/tests/test-deploy/data/
ee/query-service/signoz.db
ee/query-service/tests/test-deploy/data/
# local data
*.db
/deploy/docker/clickhouse-setup/data/
/deploy/docker-swarm/clickhouse-setup/data/
bin/

CONTRIBUTING.md

@@ -207,7 +207,7 @@ If you don't want to install the SigNoz backend just for doing frontend developm
Please ping us in the [`#contributing`](https://signoz-community.slack.com/archives/C01LWQ8KS7M) channel or ask `@Prashant Shahi` in our [Slack Community](https://signoz.io/slack) and we will DM you with `<test environment URL>`.
**Frontend should now be accessible at** [`http://localhost:3301/application`](http://localhost:3301/application)
**Frontend should now be accessible at** [`http://localhost:3301/services`](http://localhost:3301/services)
**[`^top^`](#)**
@@ -215,7 +215,7 @@ Please ping us in the [`#contributing`](https://signoz-community.slack.com/archi
# 4. Contribute to Backend (Query-Service) 🌑
**Need to Update:** [**https://github.com/SigNoz/signoz/tree/develop/pkg/query-service**](https://github.com/SigNoz/signoz/tree/develop/pkg/query-service)
[**https://github.com/SigNoz/signoz/tree/develop/pkg/query-service**](https://github.com/SigNoz/signoz/tree/develop/pkg/query-service)
## 4.1 To run ClickHouse setup (recommended for local development)
@@ -363,10 +363,6 @@ There are many other ways to get involved with the community and to participate
- Tell others about the project on Twitter, your blog, etc.
## License
By contributing to SigNoz, you agree that your contributions will be licensed under its MIT license.
Again, Feel free to ping us on [`#contributing`](https://signoz-community.slack.com/archives/C01LWQ8KS7M) or [`#contributing-frontend`](https://signoz-community.slack.com/archives/C027134DM8B) on our slack community if you need any help on this :)
Thank You!

LICENSE

@@ -1,6 +1,10 @@
MIT License
Copyright (c) 2020-present SigNoz Inc.
Copyright (c) 2021 SigNoz
Portions of this software are licensed as follows:
* All content that resides under the "ee/" directory of this repository, if that directory exists, is licensed under the license defined in "ee/LICENSE".
* All third party components incorporated into the SigNoz Software are licensed under the original license provided by the owner of the applicable component.
* Content outside of the above mentioned directories or restrictions above is available under the "MIT Expat" license as defined below.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

Makefile

@@ -7,27 +7,34 @@ BUILD_VERSION ?= $(shell git describe --always --tags)
BUILD_HASH ?= $(shell git rev-parse --short HEAD)
BUILD_TIME ?= $(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
BUILD_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
DEV_LICENSE_SIGNOZ_IO ?= https://staging-license.signoz.io/api/v1
# Internal variables or constants.
FRONTEND_DIRECTORY ?= frontend
QUERY_SERVICE_DIRECTORY ?= pkg/query-service
EE_QUERY_SERVICE_DIRECTORY ?= ee/query-service
STANDALONE_DIRECTORY ?= deploy/docker/clickhouse-setup
SWARM_DIRECTORY ?= deploy/docker-swarm/clickhouse-setup
LOCAL_GOOS ?= $(shell go env GOOS)
LOCAL_GOARCH ?= $(shell go env GOARCH)
REPONAME ?= signoz
DOCKER_TAG ?= latest
FRONTEND_DOCKER_IMAGE ?= frontend
QUERY_SERVICE_DOCKER_IMAGE ?= query-service
DEV_BUILD ?= ""
# Build-time Go variables
PACKAGE?=go.signoz.io/query-service
buildVersion=${PACKAGE}/version.buildVersion
buildHash=${PACKAGE}/version.buildHash
buildTime=${PACKAGE}/version.buildTime
gitBranch=${PACKAGE}/version.gitBranch
PACKAGE?=go.signoz.io/signoz
buildVersion=${PACKAGE}/pkg/query-service/version.buildVersion
buildHash=${PACKAGE}/pkg/query-service/version.buildHash
buildTime=${PACKAGE}/pkg/query-service/version.buildTime
gitBranch=${PACKAGE}/pkg/query-service/version.gitBranch
licenseSignozIo=${PACKAGE}/ee/query-service/constants.LicenseSignozIo
LD_FLAGS="-X ${buildHash}=${BUILD_HASH} -X ${buildTime}=${BUILD_TIME} -X ${buildVersion}=${BUILD_VERSION} -X ${gitBranch}=${BUILD_BRANCH}"
LD_FLAGS=-X ${buildHash}=${BUILD_HASH} -X ${buildTime}=${BUILD_TIME} -X ${buildVersion}=${BUILD_VERSION} -X ${gitBranch}=${BUILD_BRANCH}
DEV_LD_FLAGS=-X ${licenseSignozIo}=${DEV_LICENSE_SIGNOZ_IO}
all: build-push-frontend build-push-query-service
# Steps to build and push docker image of frontend
@@ -38,7 +45,7 @@ build-frontend-amd64:
@echo "--> Building frontend docker image for amd64"
@echo "------------------"
@cd $(FRONTEND_DIRECTORY) && \
docker build -f Dockerfile --no-cache -t $(REPONAME)/$(FRONTEND_DOCKER_IMAGE):$(DOCKER_TAG) \
docker build --file Dockerfile --no-cache -t $(REPONAME)/$(FRONTEND_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" .
# Step to build and push docker image of frontend(used in push pipeline)
@@ -57,20 +64,43 @@ build-query-service-amd64:
@echo "------------------"
@echo "--> Building query-service docker image for amd64"
@echo "------------------"
@cd $(QUERY_SERVICE_DIRECTORY) && \
docker build -f Dockerfile --no-cache -t $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" --build-arg LD_FLAGS=$(LD_FLAGS) .
@docker build --file $(QUERY_SERVICE_DIRECTORY)/Dockerfile \
--no-cache -t $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" --build-arg LD_FLAGS="$(LD_FLAGS)" .
# Step to build and push docker image of query in amd64 and arm64 (used in push pipeline)
build-push-query-service:
@echo "------------------"
@echo "--> Building and pushing query-service docker image"
@echo "------------------"
@cd $(QUERY_SERVICE_DIRECTORY) && \
docker buildx build --file Dockerfile --progress plane --no-cache \
--push --platform linux/arm64,linux/amd64 --build-arg LD_FLAGS=$(LD_FLAGS) \
@docker buildx build --file $(QUERY_SERVICE_DIRECTORY)/Dockerfile --progress plane --no-cache \
--push --platform linux/arm64,linux/amd64 --build-arg LD_FLAGS="$(LD_FLAGS)" \
--tag $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) .
# Step to build EE docker image of query service in amd64 (used in build pipeline)
build-ee-query-service-amd64:
@echo "------------------"
@echo "--> Building query-service docker image for amd64"
@echo "------------------"
@if [ $(DEV_BUILD) != "" ]; then \
docker build --file $(EE_QUERY_SERVICE_DIRECTORY)/Dockerfile \
--no-cache -t $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" --build-arg LD_FLAGS="${LD_FLAGS} ${DEV_LD_FLAGS}" .; \
else \
docker build --file $(EE_QUERY_SERVICE_DIRECTORY)/Dockerfile \
--no-cache -t $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" --build-arg LD_FLAGS="$(LD_FLAGS)" .; \
fi
# Step to build and push EE docker image of query in amd64 and arm64 (used in push pipeline)
build-push-ee-query-service:
@echo "------------------"
@echo "--> Building and pushing query-service docker image"
@echo "------------------"
@docker buildx build --file $(EE_QUERY_SERVICE_DIRECTORY)/Dockerfile \
--progress plane --no-cache --push --platform linux/arm64,linux/amd64 \
--build-arg LD_FLAGS="$(LD_FLAGS)" --tag $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) .
dev-setup:
mkdir -p /var/lib/signoz
sqlite3 /var/lib/signoz/signoz.db "VACUUM";
@@ -79,16 +109,26 @@ dev-setup:
@echo "--> Local Setup completed"
@echo "------------------"
run-local:
@LOCAL_GOOS=$(LOCAL_GOOS) LOCAL_GOARCH=$(LOCAL_GOARCH) docker-compose -f \
$(STANDALONE_DIRECTORY)/docker-compose-core.yaml -f $(STANDALONE_DIRECTORY)/docker-compose-local.yaml \
up --build -d
down-local:
@docker-compose -f \
$(STANDALONE_DIRECTORY)/docker-compose-core.yaml -f $(STANDALONE_DIRECTORY)/docker-compose-local.yaml \
down -v
run-x86:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml up -d
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml up --build -d
down-x86:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml down -v
clear-standalone-data:
@docker run --rm -v "$(PWD)/$(STANDALONE_DIRECTORY)/data:/pwd" busybox \
sh -c "cd /pwd && rm -rf alertmanager/* clickhouse/* signoz/*"
sh -c "cd /pwd && rm -rf alertmanager/* clickhous*/* signoz/* zookeeper-*/*"
clear-swarm-data:
@docker run --rm -v "$(PWD)/$(SWARM_DIRECTORY)/data:/pwd" busybox \
sh -c "cd /pwd && rm -rf alertmanager/* clickhouse/* signoz/*"
sh -c "cd /pwd && rm -rf alertmanager/* clickhous*/* signoz/* zookeeper-*/*"

README.de-de.md

@@ -5,7 +5,6 @@
</p>
<p align="center">
<img alt="Lizenz" src="https://img.shields.io/badge/license-MIT-brightgreen"> </a>
<img alt="Downloads" src="https://img.shields.io/docker/pulls/signoz/frontend?label=Downloads"> </a>
<img alt="GitHub issues" src="https://img.shields.io/github/issues/signoz/signoz"> </a>
<a href="https://twitter.com/intent/tweet?text=Monitor%20your%20applications%20and%20troubleshoot%20problems%20with%20SigNoz,%20an%20open-source%20alternative%20to%20DataDog,%20NewRelic.&url=https://signoz.io/&via=SigNozHQ&hashtags=opensource,signoz,observability">
@@ -15,10 +14,10 @@
<h3 align="center">
<a href="https://signoz.io/docs"><b>Dokumentation</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.zh-cn.md"><b>ReadMe auf Chinesisch</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.pt-br.md"><b>ReadMe auf Portugiesisch</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.zh-cn.md"><b>ReadMe auf Chinesisch</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.pt-br.md"><b>ReadMe auf Portugiesisch</b></a> &bull;
<a href="https://signoz.io/slack"><b>Slack Community</b></a> &bull;
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
<a href="https://twitter.com/SigNozHQ"><b>Twitter</b></a>
</h3>
##

README.md

@@ -5,7 +5,6 @@
</p>
<p align="center">
<img alt="License" src="https://img.shields.io/badge/license-MIT-brightgreen"> </a>
<img alt="Downloads" src="https://img.shields.io/docker/pulls/signoz/query-service?label=Downloads"> </a>
<img alt="GitHub issues" src="https://img.shields.io/github/issues/signoz/signoz"> </a>
<a href="https://twitter.com/intent/tweet?text=Monitor%20your%20applications%20and%20troubleshoot%20problems%20with%20SigNoz,%20an%20open-source%20alternative%20to%20DataDog,%20NewRelic.&url=https://signoz.io/&via=SigNozHQ&hashtags=opensource,signoz,observability">
@@ -15,9 +14,9 @@
<h3 align="center">
<a href="https://signoz.io/docs"><b>Documentation</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.zh-cn.md"><b>ReadMe in Chinese</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.de-de.md"><b>ReadMe in German</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.pt-br.md"><b>ReadMe in Portuguese</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.zh-cn.md"><b>ReadMe in Chinese</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.de-de.md"><b>ReadMe in German</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.pt-br.md"><b>ReadMe in Portuguese</b></a> &bull;
<a href="https://signoz.io/slack"><b>Slack Community</b></a> &bull;
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
</h3>
@@ -26,17 +25,25 @@
SigNoz helps developers monitor applications and troubleshoot problems in their deployed applications. SigNoz uses distributed tracing to gain visibility into your software stack.
👉 Visualise Metrics, Traces and Logs in a single pane of glass
👉 You can see metrics like p99 latency, error rates for your services, external API calls and individual end points.
👉 You can find the root cause of the problem by going to the exact traces which are causing the problem and see detailed flamegraphs of individual request traces.
👉 Run aggregates on trace data to get business relevant metrics
![screenzy-1644432902955](https://user-images.githubusercontent.com/504541/153270713-1b2156e6-ec03-42de-975b-3c02b8ec1836.png)
👉 Filter and query logs, build dashboards and alerts based on attributes in logs
![screenzy-1670570187181](https://user-images.githubusercontent.com/504541/206646629-829fdafe-70e2-4503-a9c4-1301b7918586.png)
<br />
![screenzy-1644432986784](https://user-images.githubusercontent.com/504541/153270725-0efb73b3-06ed-4207-bf13-9b7e2e17c4b8.png)
![screenzy-1670570193901](https://user-images.githubusercontent.com/504541/206646676-a676fdeb-331c-4847-aea9-d1cabf7c47e1.png)
<br />
![screenzy-1647005040573](https://user-images.githubusercontent.com/504541/157875938-a3d57904-ea6d-4278-b929-bd1408d7f94c.png)
![screenzy-1670570199026](https://user-images.githubusercontent.com/504541/206646754-28c5534f-0377-428c-9c6e-5c7c0d9dd22d.png)
<br />
![screenzy-1670569888865](https://user-images.githubusercontent.com/504541/206645819-1e865a56-71b4-4fde-80cc-fbdb137a4da5.png)
<br /><br />
@@ -52,12 +59,12 @@ Come say Hi to us on [Slack](https://signoz.io/slack) 👋
## Features:
- Unified UI for metrics, traces and logs. No need to switch from Prometheus to Jaeger to debug issues, or use a logs tool like Elastic separate from your metrics and traces stack.
- Application overview metrics like RPS, 50th/90th/99th Percentile latencies, and Error Rate
- Slowest endpoints in your application
- See exact request trace to figure out issues in downstream services, slow DB queries, call to 3rd party services like payment gateways, etc
- Filter traces by service name, operation, latency, error, tags/annotations.
- Run aggregates on trace data (events/spans) to get business relevant metrics. e.g. You can get error rate and 99th percentile latency of `customer_type: gold` or `deployment_version: v2` or `external_call: paypal`
- Unified UI for metrics and traces. No need to switch from Prometheus to Jaeger to debug issues.
<br /><br />
@@ -79,6 +86,12 @@ We support [OpenTelemetry](https://opentelemetry.io) as the library which you ca
- Python
- NodeJS
- Go
- PHP
- .NET
- Ruby
- Elixir
- Rust
You can find the complete list of languages here - https://opentelemetry.io/docs/
@@ -117,13 +130,28 @@ Our goal is to provide an integrated UI between metrics & traces - similar to wh
### SigNoz vs Jaeger
Jaeger only does distributed tracing. SigNoz does both metrics and traces, and we also have log management in our roadmap.
Jaeger only does distributed tracing. SigNoz supports metrics, traces and logs - all the 3 pillars of observability.
Moreover, SigNoz has a few more advanced features compared to Jaeger:
- Jaeger UI doesn't show any metrics on traces or on filtered traces
- Jaeger can't get aggregates on filtered traces. For example, p99 latency of requests which have tag - customer_type='premium'. This can be done easily on SigNoz
<p>&nbsp </p>
### SigNoz vs Elastic
- SigNoz logs management is based on ClickHouse, a columnar OLAP datastore which makes aggregate log analytics queries much more efficient
- 50% lower resource requirement compared to Elastic during ingestion
<p>&nbsp </p>
### SigNoz vs Loki
- SigNoz supports aggregations on high-cardinality data over a huge volume, while Loki doesn't.
- SigNoz supports indexes over high cardinality data and has no limitations on the number of indexes, while Loki reaches max streams with a few indexes added to it.
- Searching over a huge volume of data is difficult and slow in Loki compared to SigNoz
<br /><br />
<img align="left" src="https://signoz-public.s3.us-east-2.amazonaws.com/Contributors.svg" width="50px" />
@@ -146,7 +174,6 @@ Not sure how to get started? Just ping us on `#contributing` in our [slack commu
#### Frontend
- [Palash Gupta](https://github.com/palashgdev)
- [Pranshu Chittora](https://github.com/pranshuchittora)
#### DevOps

View File

@@ -5,7 +5,6 @@
</p>
<p align="center">
<img alt="License" src="https://img.shields.io/badge/license-MIT-brightgreen"> </a>
<img alt="Downloads" src="https://img.shields.io/docker/pulls/signoz/frontend?label=Downloads"> </a>
<img alt="GitHub issues" src="https://img.shields.io/github/issues/signoz/signoz"> </a>
<a href="https://twitter.com/intent/tweet?text=Monitor%20your%20applications%20and%20troubleshoot%20problems%20with%20SigNoz,%20an%20open-source%20alternative%20to%20DataDog,%20NewRelic.&url=https://signoz.io/&via=SigNozHQ&hashtags=opensource,signoz,observability">

View File

@@ -5,7 +5,6 @@
</p>
<p align="center">
<img alt="License" src="https://img.shields.io/badge/license-MIT-brightgreen"> </a>
<img alt="Downloads" src="https://img.shields.io/docker/pulls/signoz/frontend?label=Downloads"> </a>
<img alt="GitHub issues" src="https://img.shields.io/github/issues/signoz/signoz"> </a>
<a href="https://twitter.com/intent/tweet?text=Monitor%20your%20applications%20and%20troubleshoot%20problems%20with%20SigNoz,%20an%20open-source%20alternative%20to%20DataDog,%20NewRelic.&url=https://signoz.io/&via=SigNozHQ&hashtags=opensource,signoz,observability">
@@ -14,14 +13,19 @@
##
SigNoz helps developers monitor applications and troubleshoot problems in their deployed applications. SigNoz uses distributed tracing to gain visibility into your software stack.
👉 See metrics such as service performance, external API calls, and the p99 latency and error rate of every endpoint.
👉 Identify the exact trace causing the problem and see flame graphs of individual request traces to find the root cause.
👉 Aggregate trace data to get business-relevant metrics.
![SigNoz Feature](https://signoz-public.s3.us-east-2.amazonaws.com/signoz_hero_github.png)
![screenzy-1644432902955](https://user-images.githubusercontent.com/504541/153270713-1b2156e6-ec03-42de-975b-3c02b8ec1836.png)
<br />
![screenzy-1644432986784](https://user-images.githubusercontent.com/504541/153270725-0efb73b3-06ed-4207-bf13-9b7e2e17c4b8.png)
<br />
![screenzy-1647005040573](https://user-images.githubusercontent.com/504541/157875938-a3d57904-ea6d-4278-b929-bd1408d7f94c.png)
<br /><br />
@@ -37,12 +41,12 @@ SigNoz helps developers monitor applications and troubleshoot problems in their deployed applications.
## Features:
- Application overview metrics such as RPS, p50/p90/p99 latency percentiles, and error rate
- Slowest endpoints in your application
- View the trace data of specific requests to analyze issues in downstream services, slow database queries, and calls to third-party services such as payment gateways
- Filter traces by service name, operation, latency, error, and tags
- Aggregate trace data (events/spans) to get business-relevant metrics, e.g. get the error rate and p99 latency for `customer_type: gold` or `deployment_version: v2` or `external_call: paypal`
- Unified UI for metrics and traces. No need to switch between Prometheus and Jaeger to debug issues.
<br /><br />
@@ -54,7 +58,7 @@ SigNoz helps developers monitor applications and troubleshoot problems in their deployed applications.
We want to build a self-serve, open-source version of tools like DataDog and NewRelic for companies that have privacy and security concerns about sending customer data to third-party vendors.
Being open source also gives you full control over configuration, sampling, and uptime. You can build modules on top of SigNoz to meet specific business needs.
### Language support
@@ -72,8 +76,8 @@ SigNoz helps developers monitor applications and troubleshoot problems in their deployed applications.
<img align="left" src="https://signoz-public.s3.us-east-2.amazonaws.com/Philosophy.svg" width="50px" />
## Getting started
### Deploy with Docker
Follow the steps listed [here](https://signoz.io/docs/deployment/docker/) to install using Docker
@@ -81,35 +85,34 @@ SigNoz helps developers monitor applications and troubleshoot problems in their deployed applications.
If you run into any issues, this [troubleshooting guide](https://signoz.io/docs/deployment/troubleshooting) should help.
<p>&nbsp;</p>
### Deploy on Kubernetes with Helm
Follow the steps listed [here](https://signoz.io/docs/deployment/helm_chart) to install using Helm charts
<br /><br />
<img align="left" src="https://signoz-public.s3.us-east-2.amazonaws.com/UseSigNoz.svg" width="50px" />
## Comparisons to Familiar Tools
### SigNoz vs Prometheus
Prometheus is good if you only need metrics, but if you want to move seamlessly between metrics and traces, the current experience of stitching Prometheus & Jaeger together is not great.
Our goal is to provide a unified UI for metrics and traces - similar to what SaaS vendors like Datadog offer - with the ability to filter and aggregate traces, which Jaeger currently lacks.
<p>&nbsp;</p>
### SigNoz vs Jaeger
Jaeger only does distributed tracing. SigNoz supports metrics, traces and logs - all 3 pillars of observability.
Moreover, SigNoz has some advanced features that Jaeger doesn't:
- The Jaeger UI doesn't show metrics on traces or on filtered traces
- Jaeger can't run aggregates on filtered traces, e.g. the p99 latency of all requests with the tag customer_type='premium'. This is easy to do in SigNoz.
<br /><br />
@@ -122,6 +125,23 @@ Jaeger only does distributed tracing; SigNoz does both metrics and traces, and we
Not sure how to get started? Just ping us on `#contributing` in our [Slack community](https://signoz.io/slack).
### Project maintainers
#### Backend
- [Ankit Nayan](https://github.com/ankitnayan)
- [Nityananda Gohain](https://github.com/nityanandagohain)
- [Srikanth Chekuri](https://github.com/srikanthccv)
- [Vishal Sharma](https://github.com/makeavish)
#### Frontend
- [Palash Gupta](https://github.com/palashgdev)
#### DevOps
- [Prashant Shahi](https://github.com/prashant-shahi)
<br /><br />
<img align="left" src="https://signoz-public.s3.us-east-2.amazonaws.com/DevelopingLocally.svg" width="50px" />

View File

@@ -0,0 +1,75 @@
<?xml version="1.0"?>
<clickhouse>
<!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
Optional. If you don't use replicated tables, you could omit that.
See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/
-->
<zookeeper>
<node index="1">
<host>zookeeper-1</host>
<port>2181</port>
</node>
<!-- <node index="2">
<host>zookeeper-2</host>
<port>2181</port>
</node>
<node index="3">
<host>zookeeper-3</host>
<port>2181</port>
</node> -->
</zookeeper>
<!-- Configuration of clusters that could be used in Distributed tables.
https://clickhouse.com/docs/en/operations/table_engines/distributed/
-->
<remote_servers>
<cluster>
<!-- Inter-server per-cluster secret for Distributed queries
default: no secret (no authentication will be performed)
If set, then Distributed queries will be validated on shards, so at least:
- such cluster should exist on the shard,
- such cluster should have the same secret.
And also (and which is more important), the initial_user will
be used as current user for the query.
Right now the protocol is pretty simple and it only takes into account:
- cluster name
- query
Also it will be nice if the following will be implemented:
- source hostname (see interserver_http_host), but then it will depends from DNS,
it can use IP address instead, but then the you need to get correct on the initiator node.
- target hostname / ip address (same notes as for source hostname)
- time-based security tokens
-->
<!-- <secret></secret> -->
<shard>
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
<!-- <internal_replication>false</internal_replication> -->
<!-- Optional. Shard weight when writing data. Default: 1. -->
<!-- <weight>1</weight> -->
<replica>
<host>clickhouse</host>
<port>9000</port>
<!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
<!-- <priority>1</priority> -->
</replica>
</shard>
<!-- <shard>
<replica>
<host>clickhouse-2</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>clickhouse-3</host>
<port>9000</port>
</replica>
</shard> -->
</cluster>
</remote_servers>
</clickhouse>
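For context, the `<cluster>` name defined above is what DDL statements reference when creating Distributed tables. A hedged Go sketch follows, reusing the `*sql.DB` setup from the trace-query sketch earlier in this diff; the database and table names are illustrative, not SigNoz's exact migrations:

```go
// createDistributed fans a table out over the "cluster" defined in
// clickhouse-cluster.xml; database/table names are examples only.
func createDistributed(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE IF NOT EXISTS signoz_logs.distributed_logs
		ON CLUSTER cluster
		AS signoz_logs.logs
		ENGINE = Distributed(cluster, signoz_logs, logs, rand())`)
	return err
}
```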

View File

@@ -236,8 +236,8 @@
<openSSL>
<server> <!-- Used for https server AND secure tcp port -->
<!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
<certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
<privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
<!-- <certificateFile>/etc/clickhouse-server/server.crt</certificateFile> -->
<!-- <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile> -->
<!-- dhparams are optional. You can delete the <dhParamsFile> element.
To generate dhparams, use the following command:
openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096
@@ -618,148 +618,6 @@
</jdbc_bridge>
-->
<!-- Configuration of clusters that could be used in Distributed tables.
https://clickhouse.com/docs/en/operations/table_engines/distributed/
-->
<remote_servers>
<!-- Test only shard config for testing distributed storage -->
<test_shard_localhost>
<!-- Inter-server per-cluster secret for Distributed queries
default: no secret (no authentication will be performed)
If set, then Distributed queries will be validated on shards, so at least:
- such cluster should exist on the shard,
- such cluster should have the same secret.
And also (and which is more important), the initial_user will
be used as current user for the query.
Right now the protocol is pretty simple and it only takes into account:
- cluster name
- query
Also it will be nice if the following will be implemented:
- source hostname (see interserver_http_host), but then it will depends from DNS,
it can use IP address instead, but then the you need to get correct on the initiator node.
- target hostname / ip address (same notes as for source hostname)
- time-based security tokens
-->
<!-- <secret></secret> -->
<shard>
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
<!-- <internal_replication>false</internal_replication> -->
<!-- Optional. Shard weight when writing data. Default: 1. -->
<!-- <weight>1</weight> -->
<replica>
<host>localhost</host>
<port>9000</port>
<!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
<!-- <priority>1</priority> -->
</replica>
</shard>
</test_shard_localhost>
<test_cluster_one_shard_three_replicas_localhost>
<shard>
<internal_replication>false</internal_replication>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.3</host>
<port>9000</port>
</replica>
</shard>
<!--shard>
<internal_replication>false</internal_replication>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.3</host>
<port>9000</port>
</replica>
</shard-->
</test_cluster_one_shard_three_replicas_localhost>
<test_cluster_two_shards_localhost>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards_localhost>
<test_cluster_two_shards>
<shard>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards>
<test_cluster_two_shards_internal_replication>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards_internal_replication>
<test_shard_localhost_secure>
<shard>
<replica>
<host>localhost</host>
<port>9440</port>
<secure>1</secure>
</replica>
</shard>
</test_shard_localhost_secure>
<test_unavailable_shard>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>localhost</host>
<port>1</port>
</replica>
</shard>
</test_unavailable_shard>
</remote_servers>
<!-- The list of hosts allowed to use in URL-related storage engines and table functions.
If this section is not present in configuration, all hosts are allowed.
-->
@@ -786,29 +644,6 @@
Values for substitutions are specified in /clickhouse/name_of_substitution elements in that file.
-->
<!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
Optional. If you don't use replicated tables, you could omit that.
See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/
-->
<!--
<zookeeper>
<node>
<host>example1</host>
<port>2181</port>
</node>
<node>
<host>example2</host>
<port>2181</port>
</node>
<node>
<host>example3</host>
<port>2181</port>
</node>
</zookeeper>
-->
<!-- Substitutions for parameters of replicated tables.
Optional. If you don't use replicated tables, you could omit that.

View File

@@ -20,6 +20,7 @@
</default>
<s3>
<disk>s3</disk>
<perform_ttl_move_on_insert>0</perform_ttl_move_on_insert>
</s3>
</volumes>
</tiered>

View File

@@ -1,33 +1,136 @@
version: "3.9"
x-clickhouse-defaults: &clickhouse-defaults
image: clickhouse/clickhouse-server:22.8.8-alpine
tty: true
deploy:
restart_policy:
condition: on-failure
depends_on:
- zookeeper-1
# - zookeeper-2
# - zookeeper-3
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-clickhouse-depend: &clickhouse-depend
depends_on:
- clickhouse
# - clickhouse-2
# - clickhouse-3
services:
zookeeper-1:
image: bitnami/zookeeper:3.7.0
container_name: zookeeper-1
hostname: zookeeper-1
user: root
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- ./data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
# - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-2:
# image: bitnami/zookeeper:3.7.0
# container_name: zookeeper-2
# hostname: zookeeper-2
# user: root
# ports:
# - "2182:2181"
# - "2889:2888"
# - "3889:3888"
# volumes:
# - ./data/zookeeper-2:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=2
# - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-3:
# image: bitnami/zookeeper:3.7.0
# container_name: zookeeper-3
# hostname: zookeeper-3
# user: root
# ports:
# - "2183:2181"
# - "2890:2888"
# - "3890:3888"
# volumes:
# - ./data/zookeeper-3:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=3
# - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
clickhouse:
image: clickhouse/clickhouse-server:22.4.5-alpine
<<: *clickhouse-defaults
container_name: clickhouse
hostname: clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
tty: true
# - "9181:9181"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
- ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
- ./data/clickhouse/:/var/lib/clickhouse/
deploy:
restart_policy:
condition: on-failure
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
# clickhouse-2:
# <<: *clickhouse-defaults
# container_name: clickhouse-2
# hostname: clickhouse-2
# ports:
# - "9001:9000"
# - "8124:8123"
# - "9182:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-2/:/var/lib/clickhouse/
# clickhouse-3:
# <<: *clickhouse-defaults
# container_name: clickhouse-3
# hostname: clickhouse-3
# ports:
# - "9002:9000"
# - "8125:8123"
# - "9183:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-3/:/var/lib/clickhouse/
alertmanager:
image: signoz/alertmanager:0.23.0-0.1
image: signoz/alertmanager:0.23.0-0.2
volumes:
- ./data/alertmanager:/data
command:
@@ -40,7 +143,7 @@ services:
condition: on-failure
query-service:
image: signoz/query-service:0.10.0
image: signoz/query-service:0.12.0
command: ["-config=/root/config/prometheus.yml"]
# ports:
# - "6060:6060" # pprof port
@@ -51,11 +154,13 @@ services:
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000/?database=signoz_traces
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-swarm
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/version"]
interval: 30s
@@ -64,11 +169,10 @@ services:
deploy:
restart_policy:
condition: on-failure
depends_on:
- clickhouse
<<: *clickhouse-depend
frontend:
image: signoz/frontend:0.10.0
image: signoz/frontend:0.12.0
deploy:
restart_policy:
condition: on-failure
@@ -81,10 +185,15 @@ services:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
image: signoz/otelcontribcol:0.45.1-1.1
image: signoz/signoz-otel-collector:0.66.0
command: ["--config=/etc/otel-collector-config.yaml"]
user: root # required for reading docker container logs
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- /var/lib/docker/containers:/var/lib/docker/containers:ro
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}},dockerswarm.service.name={{.Service.Name}},dockerswarm.task.name={{.Task.Name}}
- DOCKER_MULTI_NODE_CLUSTER=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
@@ -97,21 +206,14 @@ services:
# - "14268:14268" # Jaeger thrift HTTP
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zPages extension
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}},dockerswarm.service.name={{.Service.Name}},dockerswarm.task.name={{.Task.Name}}
deploy:
mode: replicated
replicas: 3
mode: global
restart_policy:
condition: on-failure
resources:
limits:
memory: 2000m
depends_on:
- clickhouse
<<: *clickhouse-depend
otel-collector-metrics:
image: signoz/otelcontribcol:0.45.1-1.1
image: signoz/signoz-otel-collector:0.66.0
command: ["--config=/etc/otel-collector-metrics-config.yaml"]
volumes:
- ./otel-collector-metrics-config.yaml:/etc/otel-collector-metrics-config.yaml
@@ -123,8 +225,7 @@ services:
deploy:
restart_policy:
condition: on-failure
depends_on:
- clickhouse
<<: *clickhouse-depend
hotrod:
image: jaegertracing/example-hotrod:1.30

View File

@@ -1,4 +1,29 @@
receivers:
filelog/dockercontainers:
include: [ "/var/lib/docker/containers/*/*.log" ]
start_at: end
include_file_path: true
include_file_name: false
operators:
- type: json_parser
id: parser-docker
output: extract_metadata_from_filepath
timestamp:
parse_from: attributes.time
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: regex_parser
id: extract_metadata_from_filepath
regex: '^.*containers/(?P<container_id>[^_]+)/.*log$'
parse_from: attributes["log.file.path"]
output: parse_body
- type: move
id: parse_body
from: attributes.log
to: body
output: time
- type: remove
id: time
field: attributes.time
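# Recap of the operator chain above: json_parser decodes each Docker JSON log
# line, regex_parser extracts container_id from the log file path, move
# promotes attributes.log to the log body, and remove drops the raw time field.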
opencensus:
endpoint: 0.0.0.0:55678
otlp/spanmetrics:
@@ -22,7 +47,7 @@ receivers:
# thrift_binary:
# endpoint: 0.0.0.0:6832
hostmetrics:
collection_interval: 60s
collection_interval: 30s
scrapers:
cpu: {}
load: {}
@@ -30,6 +55,16 @@ receivers:
disk: {}
filesystem: {}
network: {}
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
# otel-collector internal metrics
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
processors:
batch:
@@ -40,7 +75,6 @@ processors:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors: [env, system] # include ec2 for AWS, gce for GCP and azure for Azure.
timeout: 2s
override: false
signozspanmetrics/prometheus:
metrics_exporter: prometheus
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
@@ -69,6 +103,8 @@ processors:
exporters:
clickhousetraces:
datasource: tcp://clickhouse:9000/?database=signoz_traces
docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/?database=signoz_metrics
resource_to_telemetry_conversion:
@@ -76,6 +112,17 @@ exporters:
prometheus:
endpoint: 0.0.0.0:8889
# logging: {}
clickhouselogsexporter:
dsn: tcp://clickhouse:9000/
docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
timeout: 5s
sending_queue:
queue_size: 100
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
extensions:
health_check:
@@ -99,10 +146,14 @@ service:
receivers: [otlp]
processors: [batch]
exporters: [clickhousemetricswrite]
metrics/hostmetrics:
receivers: [hostmetrics]
metrics/generic:
receivers: [hostmetrics, prometheus]
processors: [resourcedetection, batch]
exporters: [clickhousemetricswrite]
metrics/spanmetrics:
receivers: [otlp/spanmetrics]
exporters: [prometheus]
logs:
receivers: [otlp, filelog/dockercontainers]
processors: [batch]
exporters: [clickhouselogsexporter]

View File

@@ -2,24 +2,20 @@ receivers:
prometheus:
config:
scrape_configs:
# otel-collector internal metrics
- job_name: "otel-collector"
scrape_interval: 60s
static_configs:
- targets:
- otel-collector:8888
# otel-collector-metrics internal metrics
- job_name: "otel-collector-metrics"
- job_name: otel-collector-metrics
scrape_interval: 60s
static_configs:
- targets:
- localhost:8888
# SigNoz span metrics
- job_name: "signozspanmetrics-collector"
- job_name: signozspanmetrics-collector
scrape_interval: 60s
static_configs:
- targets:
- otel-collector:8889
dns_sd_configs:
- names:
- tasks.otel-collector
type: A
port: 8889
processors:
batch:

View File

@@ -19,8 +19,7 @@ rule_files:
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
scrape_configs: []
remote_read:
- url: tcp://clickhouse:9000/?database=signoz_metrics

View File

@@ -3,13 +3,17 @@ server {
server_name _;
gzip on;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_proxied any;
gzip_vary on;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
# to handle uri issue 414 from nginx
client_max_body_size 24M;
large_client_header_buffers 8 128k;
location / {
if ( $uri = '/index.html' ) {

View File

@@ -0,0 +1,75 @@
<?xml version="1.0"?>
<clickhouse>
<!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
Optional. If you don't use replicated tables, you could omit that.
See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/
-->
<zookeeper>
<node index="1">
<host>zookeeper-1</host>
<port>2181</port>
</node>
<!-- <node index="2">
<host>zookeeper-2</host>
<port>2181</port>
</node>
<node index="3">
<host>zookeeper-3</host>
<port>2181</port>
</node> -->
</zookeeper>
<!-- Configuration of clusters that could be used in Distributed tables.
https://clickhouse.com/docs/en/operations/table_engines/distributed/
-->
<remote_servers>
<cluster>
<!-- Inter-server per-cluster secret for Distributed queries
default: no secret (no authentication will be performed)
If set, then Distributed queries will be validated on shards, so at least:
- such cluster should exist on the shard,
- such cluster should have the same secret.
And also (and which is more important), the initial_user will
be used as current user for the query.
Right now the protocol is pretty simple and it only takes into account:
- cluster name
- query
Also it will be nice if the following will be implemented:
- source hostname (see interserver_http_host), but then it will depends from DNS,
it can use IP address instead, but then the you need to get correct on the initiator node.
- target hostname / ip address (same notes as for source hostname)
- time-based security tokens
-->
<!-- <secret></secret> -->
<shard>
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
<!-- <internal_replication>false</internal_replication> -->
<!-- Optional. Shard weight when writing data. Default: 1. -->
<!-- <weight>1</weight> -->
<replica>
<host>clickhouse</host>
<port>9000</port>
<!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
<!-- <priority>1</priority> -->
</replica>
</shard>
<!-- <shard>
<replica>
<host>clickhouse-2</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>clickhouse-3</host>
<port>9000</port>
</replica>
</shard> -->
</cluster>
</remote_servers>
</clickhouse>

View File

@@ -236,8 +236,8 @@
<openSSL>
<server> <!-- Used for https server AND secure tcp port -->
<!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
<certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
<privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
<!-- <certificateFile>/etc/clickhouse-server/server.crt</certificateFile> -->
<!-- <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile> -->
<!-- dhparams are optional. You can delete the <dhParamsFile> element.
To generate dhparams, use the following command:
openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096
@@ -618,148 +618,6 @@
</jdbc_bridge>
-->
<!-- Configuration of clusters that could be used in Distributed tables.
https://clickhouse.com/docs/en/operations/table_engines/distributed/
-->
<remote_servers>
<!-- Test only shard config for testing distributed storage -->
<test_shard_localhost>
<!-- Inter-server per-cluster secret for Distributed queries
default: no secret (no authentication will be performed)
If set, then Distributed queries will be validated on shards, so at least:
- such cluster should exist on the shard,
- such cluster should have the same secret.
And also (and which is more important), the initial_user will
be used as current user for the query.
Right now the protocol is pretty simple and it only takes into account:
- cluster name
- query
Also it will be nice if the following will be implemented:
- source hostname (see interserver_http_host), but then it will depends from DNS,
it can use IP address instead, but then the you need to get correct on the initiator node.
- target hostname / ip address (same notes as for source hostname)
- time-based security tokens
-->
<!-- <secret></secret> -->
<shard>
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
<!-- <internal_replication>false</internal_replication> -->
<!-- Optional. Shard weight when writing data. Default: 1. -->
<!-- <weight>1</weight> -->
<replica>
<host>localhost</host>
<port>9000</port>
<!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
<!-- <priority>1</priority> -->
</replica>
</shard>
</test_shard_localhost>
<test_cluster_one_shard_three_replicas_localhost>
<shard>
<internal_replication>false</internal_replication>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.3</host>
<port>9000</port>
</replica>
</shard>
<!--shard>
<internal_replication>false</internal_replication>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
<replica>
<host>127.0.0.3</host>
<port>9000</port>
</replica>
</shard-->
</test_cluster_one_shard_three_replicas_localhost>
<test_cluster_two_shards_localhost>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards_localhost>
<test_cluster_two_shards>
<shard>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards>
<test_cluster_two_shards_internal_replication>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
</shard>
</test_cluster_two_shards_internal_replication>
<test_shard_localhost_secure>
<shard>
<replica>
<host>localhost</host>
<port>9440</port>
<secure>1</secure>
</replica>
</shard>
</test_shard_localhost_secure>
<test_unavailable_shard>
<shard>
<replica>
<host>localhost</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>localhost</host>
<port>1</port>
</replica>
</shard>
</test_unavailable_shard>
</remote_servers>
<!-- The list of hosts allowed to use in URL-related storage engines and table functions.
If this section is not present in configuration, all hosts are allowed.
-->
@@ -786,29 +644,6 @@
Values for substitutions are specified in /clickhouse/name_of_substitution elements in that file.
-->
<!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
Optional. If you don't use replicated tables, you could omit that.
See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/
-->
<!--
<zookeeper>
<node>
<host>example1</host>
<port>2181</port>
</node>
<node>
<host>example2</host>
<port>2181</port>
</node>
<node>
<host>example3</host>
<port>2181</port>
</node>
</zookeeper>
-->
<!-- Substitutions for parameters of replicated tables.
Optional. If you don't use replicated tables, you could omit that.

View File

@@ -20,6 +20,7 @@
</default>
<s3>
<disk>s3</disk>
<perform_ttl_move_on_insert>0</perform_ttl_move_on_insert>
</s3>
</volumes>
</tiered>

View File

@@ -0,0 +1,108 @@
version: "2.4"
services:
clickhouse:
image: clickhouse/clickhouse-server:22.8.8-alpine
container_name: clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
tty: true
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
- ./data/clickhouse/:/var/lib/clickhouse/
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
alertmanager:
container_name: alertmanager
image: signoz/alertmanager:0.23.0-0.2
volumes:
- ./data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
restart: on-failure
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
# Notes for Maintainers/Contributors who will change Line Numbers of Frontend & Query-Section. Please Update Line Numbers in `./scripts/commentLinesForSetup.sh` & `./CONTRIBUTING.md`
otel-collector:
container_name: otel-collector
image: signoz/signoz-otel-collector:0.66.0
command: ["--config=/etc/otel-collector-config.yaml"]
# user: root # required for reading docker container logs
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8888:8888" # OtelCollector internal metrics
# - "8889:8889" # signoz spanmetrics exposed by the agent
# - "9411:9411" # Zipkin port
# - "13133:13133" # health check extension
# - "14250:14250" # Jaeger gRPC
# - "14268:14268" # Jaeger thrift HTTP
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zPages extension
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
otel-collector-metrics:
container_name: otel-collector-metrics
image: signoz/signoz-otel-collector:0.66.0
command: ["--config=/etc/otel-collector-metrics-config.yaml"]
volumes:
- ./otel-collector-metrics-config.yaml:/etc/otel-collector-metrics-config.yaml
# ports:
# - "1777:1777" # pprof extension
# - "8888:8888" # OtelCollector internal metrics
# - "13133:13133" # Health check extension
# - "55679:55679" # zPages extension
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
hotrod:
image: jaegertracing/example-hotrod:1.30
container_name: hotrod
logging:
options:
max-size: 50m
max-file: "3"
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
load-hotrod:
image: "grubykarol/locust:1.2.3-python3.9-alpine3.12"
container_name: load-hotrod
hostname: load-hotrod
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../common/locust-scripts:/locust

View File

@@ -0,0 +1,56 @@
version: "2.4"
services:
query-service:
hostname: query-service
build:
context: "../../../pkg/query-service"
dockerfile: "./Dockerfile"
args:
LDFLAGS: ""
TARGETPLATFORM: "${LOCAL_GOOS}/${LOCAL_GOARCH}"
container_name: query-service
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
command: ["-config=/root/config/prometheus.yml"]
ports:
- "6060:6060"
- "8080:8080"
restart: on-failure
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/version"]
interval: 30s
timeout: 5s
retries: 3
depends_on:
clickhouse:
condition: service_healthy
frontend:
build:
context: "../../../frontend"
dockerfile: "./Dockerfile"
args:
TARGETOS: "${LOCAL_GOOS}"
TARGETPLATFORM: "${LOCAL_GOARCH}"
container_name: frontend
environment:
- FRONTEND_API_ENDPOINT=http://query-service:8080
restart: on-failure
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf

View File

@@ -1,31 +1,138 @@
version: "2.4"
x-clickhouse-defaults: &clickhouse-defaults
restart: on-failure
image: clickhouse/clickhouse-server:22.8.8-alpine
tty: true
depends_on:
- zookeeper-1
# - zookeeper-2
# - zookeeper-3
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-clickhouse-depend: &clickhouse-depend
depends_on:
clickhouse:
condition: service_healthy
# clickhouse-2:
# condition: service_healthy
# clickhouse-3:
# condition: service_healthy
services:
zookeeper-1:
image: bitnami/zookeeper:3.7.0
container_name: zookeeper-1
hostname: zookeeper-1
user: root
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- ./data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
# - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-2:
# image: bitnami/zookeeper:3.7.0
# container_name: zookeeper-2
# hostname: zookeeper-2
# user: root
# ports:
# - "2182:2181"
# - "2889:2888"
# - "3889:3888"
# volumes:
# - ./data/zookeeper-2:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=2
# - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-3:
# image: bitnami/zookeeper:3.7.0
# container_name: zookeeper-3
# hostname: zookeeper-3
# user: root
# ports:
# - "2183:2181"
# - "2890:2888"
# - "3890:3888"
# volumes:
# - ./data/zookeeper-3:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=3
# - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
clickhouse:
image: clickhouse/clickhouse-server:22.4.5-alpine
# ports:
# - "9000:9000"
# - "8123:8123"
tty: true
<<: *clickhouse-defaults
container_name: clickhouse
hostname: clickhouse
ports:
- "9000:9000"
- "8123:8123"
- "9181:9181"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
- ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
- ./data/clickhouse/:/var/lib/clickhouse/
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
# clickhouse-2:
# <<: *clickhouse-defaults
# container_name: clickhouse-2
# hostname: clickhouse-2
# ports:
# - "9001:9000"
# - "8124:8123"
# - "9182:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-2/:/var/lib/clickhouse/
# clickhouse-3:
# <<: *clickhouse-defaults
# container_name: clickhouse-3
# hostname: clickhouse-3
# ports:
# - "9002:9000"
# - "8125:8123"
# - "9183:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-3/:/var/lib/clickhouse/
alertmanager:
image: signoz/alertmanager:0.23.0-0.1
image: signoz/alertmanager:0.23.0-0.2
volumes:
- ./data/alertmanager:/data
depends_on:
@@ -39,7 +146,7 @@ services:
# Notes for Maintainers/Contributors who will change Line Numbers of Frontend & Query-Section. Please Update Line Numbers in `./scripts/commentLinesForSetup.sh` & `./CONTRIBUTING.md`
query-service:
image: signoz/query-service:0.10.0
image: signoz/query-service:0.12.0
container_name: query-service
command: ["-config=/root/config/prometheus.yml"]
# ports:
@@ -51,6 +158,9 @@ services:
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000/?database=signoz_traces
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
@@ -61,12 +171,10 @@ services:
interval: 30s
timeout: 5s
retries: 3
depends_on:
clickhouse:
condition: service_healthy
<<: *clickhouse-depend
frontend:
image: signoz/frontend:0.10.0
image: signoz/frontend:0.12.0
container_name: frontend
restart: on-failure
depends_on:
@@ -78,12 +186,15 @@ services:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
image: signoz/otelcontribcol:0.45.1-1.1
image: signoz/signoz-otel-collector:0.66.0
command: ["--config=/etc/otel-collector-config.yaml"]
user: root # required for reading docker container logs
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- /var/lib/docker/containers:/var/lib/docker/containers:ro
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- DOCKER_MULTI_NODE_CLUSTER=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
@@ -96,14 +207,11 @@ services:
# - "14268:14268" # Jaeger thrift HTTP
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zPages extension
mem_limit: 2000m
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
<<: *clickhouse-depend
otel-collector-metrics:
image: signoz/otelcontribcol:0.45.1-1.1
image: signoz/signoz-otel-collector:0.66.0
command: ["--config=/etc/otel-collector-metrics-config.yaml"]
volumes:
- ./otel-collector-metrics-config.yaml:/etc/otel-collector-metrics-config.yaml
@@ -113,9 +221,7 @@ services:
# - "13133:13133" # Health check extension
# - "55679:55679" # zPages extension
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
<<: *clickhouse-depend
hotrod:
image: jaegertracing/example-hotrod:1.30

View File

@@ -1,4 +1,29 @@
receivers:
filelog/dockercontainers:
include: [ "/var/lib/docker/containers/*/*.log" ]
start_at: end
include_file_path: true
include_file_name: false
operators:
- type: json_parser
id: parser-docker
output: extract_metadata_from_filepath
timestamp:
parse_from: attributes.time
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: regex_parser
id: extract_metadata_from_filepath
regex: '^.*containers/(?P<container_id>[^_]+)/.*log$'
parse_from: attributes["log.file.path"]
output: parse_body
- type: move
id: parse_body
from: attributes.log
to: body
output: time
- type: remove
id: time
field: attributes.time
opencensus:
endpoint: 0.0.0.0:55678
otlp/spanmetrics:
@@ -22,7 +47,7 @@ receivers:
# thrift_binary:
# endpoint: 0.0.0.0:6832
hostmetrics:
collection_interval: 60s
collection_interval: 30s
scrapers:
cpu: {}
load: {}
@@ -30,6 +55,16 @@ receivers:
disk: {}
filesystem: {}
network: {}
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
# otel-collector internal metrics
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
processors:
batch:
@@ -64,7 +99,6 @@ processors:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors: [env, system] # include ec2 for AWS, gce for GCP and azure for Azure.
timeout: 2s
override: false
extensions:
health_check:
@@ -77,6 +111,8 @@ extensions:
exporters:
clickhousetraces:
datasource: tcp://clickhouse:9000/?database=signoz_traces
docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/?database=signoz_metrics
resource_to_telemetry_conversion:
@@ -85,6 +121,18 @@ exporters:
endpoint: 0.0.0.0:8889
# logging: {}
clickhouselogsexporter:
dsn: tcp://clickhouse:9000/
docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
timeout: 5s
sending_queue:
queue_size: 100
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
service:
telemetry:
metrics:
@@ -102,10 +150,14 @@ service:
receivers: [otlp]
processors: [batch]
exporters: [clickhousemetricswrite]
metrics/hostmetrics:
receivers: [hostmetrics]
metrics/generic:
receivers: [hostmetrics, prometheus]
processors: [resourcedetection, batch]
exporters: [clickhousemetricswrite]
metrics/spanmetrics:
receivers: [otlp/spanmetrics]
exporters: [prometheus]
logs:
receivers: [otlp, filelog/dockercontainers]
processors: [batch]
exporters: [clickhouselogsexporter]

View File

@@ -6,20 +6,14 @@ receivers:
prometheus:
config:
scrape_configs:
# otel-collector internal metrics
- job_name: "otel-collector"
scrape_interval: 60s
static_configs:
- targets:
- otel-collector:8888
# otel-collector-metrics internal metrics
- job_name: "otel-collector-metrics"
- job_name: otel-collector-metrics
scrape_interval: 60s
static_configs:
- targets:
- localhost:8888
# SigNoz span metrics
- job_name: "signozspanmetrics-collector"
- job_name: signozspanmetrics-collector
scrape_interval: 60s
static_configs:
- targets:

View File

@@ -19,8 +19,7 @@ rule_files:
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
scrape_configs: []
remote_read:
- url: tcp://clickhouse:9000/?database=signoz_metrics

View File

@@ -3,7 +3,7 @@ server {
server_name _;
gzip on;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_proxied any;
gzip_vary on;
@@ -13,8 +13,7 @@ server {
# to handle uri issue 414 from nginx
client_max_body_size 24M;
large_client_header_buffers 8 16k;
large_client_header_buffers 8 128k;
location / {
if ( $uri = '/index.html' ) {

View File

@@ -204,9 +204,14 @@ start_docker() {
echo "Starting docker service"
$sudo_cmd systemctl start docker.service
fi
# if [[ -z $sudo_cmd ]]; then
# docker ps > /dev/null && true
# if [[ $? -ne 0 ]]; then
# request_sudo
# fi
# fi
if [[ -z $sudo_cmd ]]; then
docker ps > /dev/null && true
if [[ $? -ne 0 ]]; then
if ! docker ps > /dev/null && true; then
request_sudo
fi
fi
@@ -268,8 +273,12 @@ request_sudo() {
if (( $EUID != 0 )); then
sudo_cmd="sudo"
echo -e "Please enter your sudo password, if prompt."
$sudo_cmd -l | grep -e "NOPASSWD: ALL" > /dev/null
if [[ $? -ne 0 ]] && ! $sudo_cmd -v; then
# $sudo_cmd -l | grep -e "NOPASSWD: ALL" > /dev/null
# if [[ $? -ne 0 ]] && ! $sudo_cmd -v; then
# echo "Need sudo privileges to proceed with the installation."
# exit 1;
# fi
if ! $sudo_cmd -l | grep -e "NOPASSWD: ALL" > /dev/null && ! $sudo_cmd -v; then
echo "Need sudo privileges to proceed with the installation."
exit 1;
fi
@@ -303,8 +312,13 @@ echo -e "🌏 Detecting your OS ...\n"
check_os
# Obtain unique installation id
sysinfo="$(uname -a)"
if [[ $? -ne 0 ]]; then
# sysinfo="$(uname -a)"
# if [[ $? -ne 0 ]]; then
# uuid="$(uuidgen)"
# uuid="${uuid:-$(cat /proc/sys/kernel/random/uuid)}"
# sysinfo="${uuid:-$(cat /proc/sys/kernel/random/uuid)}"
# fi
if ! sysinfo="$(uname -a)"; then
uuid="$(uuidgen)"
uuid="${uuid:-$(cat /proc/sys/kernel/random/uuid)}"
sysinfo="${uuid:-$(cat /proc/sys/kernel/random/uuid)}"

ee/LICENSE Normal file
View File

@@ -0,0 +1,37 @@
The SigNoz Enterprise license (the "Enterprise License")
Copyright (c) 2020 - present SigNoz Inc.
With regard to the SigNoz Software:
This software and associated documentation files (the "Software") may only be
used in production, if you (and any entity that you represent) have agreed to,
and are in compliance with, the SigNoz Subscription Terms of Service, available
via email (hello@signoz.io) (the "Enterprise Terms"), or other
agreement governing the use of the Software, as agreed by you and SigNoz,
and otherwise have a valid SigNoz Enterprise license for the
correct number of user seats. Subject to the foregoing sentence, you are free to
modify this Software and publish patches to the Software. You agree that SigNoz
and/or its licensors (as applicable) retain all right, title and interest in and
to all such modifications and/or patches, and all such modifications and/or
patches may only be used, copied, modified, displayed, distributed, or otherwise
exploited with a valid SigNoz Enterprise license for the correct
number of user seats. Notwithstanding the foregoing, you may copy and modify
the Software for development and testing purposes, without requiring a
subscription. You agree that SigNoz and/or its licensors (as applicable) retain
all right, title and interest in and to all such modifications. You are not
granted any other rights beyond what is expressly stated herein. Subject to the
foregoing, it is forbidden to copy, merge, publish, distribute, sublicense,
and/or sell the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
For all third party components incorporated into the SigNoz Software, those
components are licensed under the original license provided by the owner of the
applicable component.

View File

@@ -0,0 +1,4 @@
.vscode
README.md
signoz.db
bin

View File

@@ -0,0 +1,48 @@
FROM golang:1.17-buster AS builder
# LD_FLAGS is passed as an argument from the Makefile. It will be empty if no argument is passed
ARG LD_FLAGS
ARG TARGETPLATFORM
ENV CGO_ENABLED=1
ENV GOPATH=/go
RUN export GOOS=$(echo ${TARGETPLATFORM} | cut -d / -f1) && \
export GOARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2)
# Prepare and enter src directory
WORKDIR /go/src/github.com/signoz/signoz
# Add the sources and proceed with build
ADD . .
RUN cd ee/query-service \
&& go build -tags timetzdata -a -o ./bin/query-service \
-ldflags "-linkmode external -extldflags '-static' -s -w $LD_FLAGS" \
&& chmod +x ./bin/query-service
# use a minimal alpine image
FROM alpine:3.7
# Add Maintainer Info
LABEL maintainer="signoz"
# add ca-certificates in case you need them
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
# set working directory
WORKDIR /root
# copy the binary from builder
COPY --from=builder /go/src/github.com/signoz/signoz/ee/query-service/bin/query-service .
# copy prometheus YAML config
COPY pkg/query-service/config/prometheus.yml /root/config/prometheus.yml
# run the binary
ENTRYPOINT ["./query-service"]
CMD ["-config", "../config/prometheus.yml"]
# CMD ["./query-service -config /root/config/prometheus.yml"]
EXPOSE 8080

View File

@@ -0,0 +1,131 @@
package api
import (
"net/http"
"github.com/gorilla/mux"
"go.signoz.io/signoz/ee/query-service/dao"
"go.signoz.io/signoz/ee/query-service/interfaces"
"go.signoz.io/signoz/ee/query-service/license"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
rules "go.signoz.io/signoz/pkg/query-service/rules"
"go.signoz.io/signoz/pkg/query-service/version"
)
type APIHandlerOptions struct {
DataConnector interfaces.DataConnector
AppDao dao.ModelDao
RulesManager *rules.Manager
FeatureFlags baseint.FeatureLookup
LicenseManager *license.Manager
}
type APIHandler struct {
opts APIHandlerOptions
baseapp.APIHandler
}
// NewAPIHandler returns an APIHandler
func NewAPIHandler(opts APIHandlerOptions) (*APIHandler, error) {
baseHandler, err := baseapp.NewAPIHandler(baseapp.APIHandlerOpts{
Reader: opts.DataConnector,
AppDao: opts.AppDao,
RuleManager: opts.RulesManager,
FeatureFlags: opts.FeatureFlags})
if err != nil {
return nil, err
}
ah := &APIHandler{
opts: opts,
APIHandler: *baseHandler,
}
return ah, nil
}
func (ah *APIHandler) FF() baseint.FeatureLookup {
return ah.opts.FeatureFlags
}
func (ah *APIHandler) RM() *rules.Manager {
return ah.opts.RulesManager
}
func (ah *APIHandler) LM() *license.Manager {
return ah.opts.LicenseManager
}
func (ah *APIHandler) AppDao() dao.ModelDao {
return ah.opts.AppDao
}
func (ah *APIHandler) CheckFeature(f string) bool {
err := ah.FF().CheckFeature(f)
return err == nil
}
// RegisterRoutes registers routes for this handler on the given router
func (ah *APIHandler) RegisterRoutes(router *mux.Router) {
// note: add ee override methods first
// routes available only in ee version
router.HandleFunc("/api/v1/licenses",
baseapp.AdminAccess(ah.listLicenses)).
Methods(http.MethodGet)
router.HandleFunc("/api/v1/licenses",
baseapp.AdminAccess(ah.applyLicense)).
Methods(http.MethodPost)
router.HandleFunc("/api/v1/featureFlags",
baseapp.OpenAccess(ah.getFeatureFlags)).
Methods(http.MethodGet)
router.HandleFunc("/api/v1/loginPrecheck",
baseapp.OpenAccess(ah.precheckLogin)).
Methods(http.MethodGet)
// paid plans specific routes
router.HandleFunc("/api/v1/complete/saml",
baseapp.OpenAccess(ah.receiveSAML)).
Methods(http.MethodPost)
router.HandleFunc("/api/v1/complete/google",
baseapp.OpenAccess(ah.receiveGoogleAuth)).
Methods(http.MethodGet)
router.HandleFunc("/api/v1/orgs/{orgId}/domains",
baseapp.AdminAccess(ah.listDomainsByOrg)).
Methods(http.MethodGet)
router.HandleFunc("/api/v1/domains",
baseapp.AdminAccess(ah.postDomain)).
Methods(http.MethodPost)
router.HandleFunc("/api/v1/domains/{id}",
baseapp.AdminAccess(ah.putDomain)).
Methods(http.MethodPut)
router.HandleFunc("/api/v1/domains/{id}",
baseapp.AdminAccess(ah.deleteDomain)).
Methods(http.MethodDelete)
// base overrides
router.HandleFunc("/api/v1/version", baseapp.OpenAccess(ah.getVersion)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/invite/{token}", baseapp.OpenAccess(ah.getInvite)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/register", baseapp.OpenAccess(ah.registerUser)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/login", baseapp.OpenAccess(ah.loginUser)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/traces/{traceId}", baseapp.ViewAccess(ah.searchTraces)).Methods(http.MethodGet)
router.HandleFunc("/api/v2/metrics/query_range", baseapp.ViewAccess(ah.queryRangeMetricsV2)).Methods(http.MethodPost)
ah.APIHandler.RegisterRoutes(router)
}
func (ah *APIHandler) getVersion(w http.ResponseWriter, r *http.Request) {
version := version.GetVersion()
ah.WriteJSON(w, r, map[string]string{"version": version, "ee": "Y"})
}
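For a quick check that these ee routes are wired up, one can hit the overridden version endpoint once query-service is running (port 8080, as published in the compose files earlier in this diff; the URL assumes a local setup):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:8080/api/v1/version")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expect something like {"ee":"Y","version":"v0.12.0"}
}
```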

View File

@@ -0,0 +1,332 @@
package api
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"github.com/gorilla/mux"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
"go.signoz.io/signoz/pkg/query-service/auth"
baseauth "go.signoz.io/signoz/pkg/query-service/auth"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
func parseRequest(r *http.Request, req interface{}) error {
defer r.Body.Close()
requestBody, err := ioutil.ReadAll(r.Body)
if err != nil {
return err
}
err = json.Unmarshal(requestBody, &req)
return err
}
// loginUser overrides base handler and considers SSO case.
func (ah *APIHandler) loginUser(w http.ResponseWriter, r *http.Request) {
req := basemodel.LoginRequest{}
err := parseRequest(r, &req)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
ctx := context.Background()
if req.Email != "" && ah.CheckFeature(model.SSO) {
var apierr basemodel.BaseApiError
_, apierr = ah.AppDao().CanUsePassword(ctx, req.Email)
if apierr != nil && !apierr.IsNil() {
RespondError(w, apierr, nil)
return
}
}
// if all looks good, call auth
resp, err := auth.Login(ctx, &req)
if ah.HandleError(w, err, http.StatusUnauthorized) {
return
}
ah.WriteJSON(w, r, resp)
}
// registerUser registers a user and responds with a precheck
// so the front-end can decide the login method
func (ah *APIHandler) registerUser(w http.ResponseWriter, r *http.Request) {
if !ah.CheckFeature(model.SSO) {
ah.APIHandler.Register(w, r)
return
}
ctx := context.Background()
var req *baseauth.RegisterRequest
defer r.Body.Close()
requestBody, err := ioutil.ReadAll(r.Body)
if err != nil {
zap.S().Errorf("received no input in api\n", err)
RespondError(w, model.BadRequest(err), nil)
return
}
err = json.Unmarshal(requestBody, &req)
if err != nil {
zap.S().Errorf("received invalid user registration request", zap.Error(err))
RespondError(w, model.BadRequest(fmt.Errorf("failed to register user")), nil)
return
}
// get invite object
invite, err := baseauth.ValidateInvite(ctx, req)
if err != nil || invite == nil {
zap.S().Errorf("failed to validate invite token", err)
RespondError(w, model.BadRequest(basemodel.ErrSignupFailed{}), nil)
return
}
// get auth domain from email domain
domain, apierr := ah.AppDao().GetDomainByEmail(ctx, invite.Email)
if apierr != nil {
zap.S().Errorf("failed to get domain from email", apierr)
RespondError(w, model.InternalError(basemodel.ErrSignupFailed{}), nil)
return
}
precheckResp := &model.PrecheckResponse{
SSO: false,
IsUser: false,
}
if domain != nil && domain.SsoEnabled {
// so is enabled, create user and respond precheck data
user, apierr := baseauth.RegisterInvitedUser(ctx, req, true)
if apierr != nil {
RespondError(w, apierr, nil)
return
}
var precheckError basemodel.BaseApiError
precheckResp, precheckError = ah.AppDao().PrecheckLogin(ctx, user.Email, req.SourceUrl)
if precheckError != nil {
RespondError(w, precheckError, precheckResp)
return
}
} else {
// no-sso, validate password
if err := auth.ValidatePassword(req.Password); err != nil {
RespondError(w, model.InternalError(fmt.Errorf("password is not in a valid format")), nil)
return
}
_, registerError := baseauth.Register(ctx, req)
if !registerError.IsNil() {
RespondError(w, registerError, nil)
return
}
precheckResp.IsUser = true
}
ah.Respond(w, precheckResp)
}
// getInvite returns the invite object details for the given invite token. We do not need to
// protect this API because the invite token itself is meant to be private.
func (ah *APIHandler) getInvite(w http.ResponseWriter, r *http.Request) {
token := mux.Vars(r)["token"]
sourceUrl := r.URL.Query().Get("ref")
ctx := context.Background()
inviteObject, err := baseauth.GetInvite(ctx, token)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
resp := model.GettableInvitation{
InvitationResponseObject: inviteObject,
}
precheck, apierr := ah.AppDao().PrecheckLogin(ctx, inviteObject.Email, sourceUrl)
resp.Precheck = precheck
if apierr != nil {
RespondError(w, apierr, resp)
return
}
ah.WriteJSON(w, r, resp)
}
// PrecheckLogin enables browser login page to display appropriate
// login methods
func (ah *APIHandler) precheckLogin(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
email := r.URL.Query().Get("email")
sourceUrl := r.URL.Query().Get("ref")
resp, apierr := ah.AppDao().PrecheckLogin(ctx, email, sourceUrl)
if apierr != nil {
RespondError(w, apierr, resp)
return
}
ah.Respond(w, resp)
}
func handleSsoError(w http.ResponseWriter, r *http.Request, redirectURL string) {
ssoError := []byte("Login failed. Please contact your system administrator")
dst := make([]byte, base64.StdEncoding.EncodedLen(len(ssoError)))
base64.StdEncoding.Encode(dst, ssoError)
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectURL, string(dst)), http.StatusSeeOther)
}
// receiveGoogleAuth completes google OAuth response and forwards a request
// to front-end to sign user in
func (ah *APIHandler) receiveGoogleAuth(w http.ResponseWriter, r *http.Request) {
redirectUri := constants.GetDefaultSiteURL()
ctx := context.Background()
if !ah.CheckFeature(model.SSO) {
zap.S().Errorf("[receiveGoogleAuth] sso requested but feature unavailable %s in org domain %s", model.SSO)
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectUri, "feature unavailable, please upgrade your billing plan to access this feature"), http.StatusMovedPermanently)
return
}
q := r.URL.Query()
if errType := q.Get("error"); errType != "" {
zap.S().Errorf("[receiveGoogleAuth] failed to login with google auth", q.Get("error_description"))
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectUri, "failed to login through SSO "), http.StatusMovedPermanently)
return
}
relayState := q.Get("state")
zap.S().Debug("[receiveGoogleAuth] relay state received", zap.String("state", relayState))
parsedState, err := url.Parse(relayState)
if err != nil || relayState == "" {
zap.S().Errorf("[receiveGoogleAuth] failed to process response - invalid response from IDP", err, r)
handleSsoError(w, r, redirectUri)
return
}
// upgrade redirect url from the relay state for better accuracy
redirectUri = fmt.Sprintf("%s://%s%s", parsedState.Scheme, parsedState.Host, "/login")
// fetch domain by parsing relay state.
domain, err := ah.AppDao().GetDomainFromSsoResponse(ctx, parsedState)
if err != nil {
handleSsoError(w, r, redirectUri)
return
}
// now that we have domain, use domain to fetch sso settings.
// prepare google callback handler using parsedState -
// which contains redirect URL (front-end endpoint)
callbackHandler, err := domain.PrepareGoogleOAuthProvider(parsedState)
if err != nil {
zap.S().Errorf("[receiveGoogleAuth] failed to prepare google oauth provider for domain %s: %v", domain.String(), err)
handleSsoError(w, r, redirectUri)
return
}
identity, err := callbackHandler.HandleCallback(r)
if err != nil {
zap.S().Errorf("[receiveGoogleAuth] failed to process HandleCallback ", domain.String(), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
nextPage, err := ah.AppDao().PrepareSsoRedirect(ctx, redirectUri, identity.Email)
if err != nil {
zap.S().Errorf("[receiveGoogleAuth] failed to generate redirect URI after successful login ", domain.String(), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
http.Redirect(w, r, nextPage, http.StatusSeeOther)
}
// receiveSAML completes a SAML request and gets user logged in
func (ah *APIHandler) receiveSAML(w http.ResponseWriter, r *http.Request) {
// this is the source url that initiated the login request
redirectUri := constants.GetDefaultSiteURL()
ctx := context.Background()
if !ah.CheckFeature(model.SSO) {
zap.S().Errorf("[receiveSAML] sso requested but feature unavailable %s in org domain %s", model.SSO)
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectUri, "feature unavailable, please upgrade your billing plan to access this feature"), http.StatusMovedPermanently)
return
}
err := r.ParseForm()
if err != nil {
zap.S().Errorf("[receiveSAML] failed to process response - invalid response from IDP", err, r)
handleSsoError(w, r, redirectUri)
return
}
// the relay state is sent when a login request is submitted to
// Idp.
relayState := r.FormValue("RelayState")
zap.S().Debug("[receiveML] relay state", zap.String("relayState", relayState))
parsedState, err := url.Parse(relayState)
if err != nil || relayState == "" {
zap.S().Errorf("[receiveSAML] failed to process response - invalid response from IDP", err, r)
handleSsoError(w, r, redirectUri)
return
}
// upgrade redirect url from the relay state for better accuracy
redirectUri = fmt.Sprintf("%s://%s%s", parsedState.Scheme, parsedState.Host, "/login")
// fetch domain by parsing relay state.
domain, err := ah.AppDao().GetDomainFromSsoResponse(ctx, parsedState)
if err != nil {
handleSsoError(w, r, redirectUri)
return
}
sp, err := domain.PrepareSamlRequest(parsedState)
if err != nil {
zap.S().Errorf("[receiveSAML] failed to prepare saml request for domain (%s): %v", domain.String(), err)
handleSsoError(w, r, redirectUri)
return
}
assertionInfo, err := sp.RetrieveAssertionInfo(r.FormValue("SAMLResponse"))
if err != nil {
zap.S().Errorf("[receiveSAML] failed to retrieve assertion info from saml response for organization (%s): %v", domain.String(), err)
handleSsoError(w, r, redirectUri)
return
}
if assertionInfo.WarningInfo.InvalidTime {
zap.S().Errorf("[receiveSAML] expired saml response for organization (%s): %v", domain.String(), err)
handleSsoError(w, r, redirectUri)
return
}
email := assertionInfo.NameID
if email == "" {
zap.S().Errorf("[receiveSAML] invalid email in the SSO response (%s)", domain.String())
handleSsoError(w, r, redirectUri)
return
}
nextPage, err := ah.AppDao().PrepareSsoRedirect(ctx, redirectUri, email)
if err != nil {
zap.S().Errorf("[receiveSAML] failed to generate redirect URI after successful login ", domain.String(), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
http.Redirect(w, r, nextPage, http.StatusSeeOther)
}
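A standalone sketch of the ssoerror round-trip implemented by handleSsoError above: the message is base64-encoded into the redirect's query string and decoded on the receiving end. One caveat worth noting: the handler does not query-escape the base64 value, and characters such as '+' and '=' are significant in query strings, so this sketch escapes explicitly. Standard library only.

package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
)

func main() {
	msg := []byte("Login failed. Please contact your system administrator")

	// encode the way handleSsoError does, plus an explicit query escape
	encoded := base64.StdEncoding.EncodeToString(msg)
	redirect := fmt.Sprintf("https://example.com/login?ssoerror=%s", url.QueryEscape(encoded))

	// decode the way a front-end (or a test) would
	u, _ := url.Parse(redirect)
	decoded, _ := base64.StdEncoding.DecodeString(u.Query().Get("ssoerror"))
	fmt.Println(string(decoded))
}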

View File

@@ -0,0 +1,90 @@
package api
import (
"context"
"encoding/json"
"fmt"
"net/http"
"github.com/google/uuid"
"github.com/gorilla/mux"
"go.signoz.io/signoz/ee/query-service/model"
)
func (ah *APIHandler) listDomainsByOrg(w http.ResponseWriter, r *http.Request) {
orgId := mux.Vars(r)["orgId"]
domains, apierr := ah.AppDao().ListDomains(context.Background(), orgId)
if apierr != nil {
RespondError(w, apierr, domains)
return
}
ah.Respond(w, domains)
}
func (ah *APIHandler) postDomain(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
req := model.OrgDomain{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if err := req.ValidNew(); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if apierr := ah.AppDao().CreateDomain(ctx, &req); apierr != nil {
RespondError(w, apierr, nil)
return
}
ah.Respond(w, &req)
}
func (ah *APIHandler) putDomain(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
domainIdStr := mux.Vars(r)["id"]
domainId, err := uuid.Parse(domainIdStr)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
req := model.OrgDomain{Id: domainId}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
req.Id = domainId
if err := req.Valid(nil); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if apierr := ah.AppDao().UpdateDomain(ctx, &req); apierr != nil {
RespondError(w, apierr, nil)
return
}
ah.Respond(w, &req)
}
func (ah *APIHandler) deleteDomain(w http.ResponseWriter, r *http.Request) {
domainIdStr := mux.Vars(r)["id"]
domainId, err := uuid.Parse(domainIdStr)
if err != nil {
RespondError(w, model.BadRequest(fmt.Errorf("invalid domain id")), nil)
return
}
apierr := ah.AppDao().DeleteDomain(context.Background(), domainId)
if apierr != nil {
RespondError(w, apierr, nil)
return
}
ah.Respond(w, nil)
}
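putDomain and deleteDomain both gate on uuid.Parse before touching the store, responding 400 on malformed ids. A minimal sketch of that validation step (the UUID literal is illustrative):

package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	for _, s := range []string{"3f2e9c1a-8a4d-4b6f-9f2a-1c5d7e9b0a3c", "not-a-uuid"} {
		id, err := uuid.Parse(s)
		if err != nil {
			// the handlers above respond with model.BadRequest here
			fmt.Println("rejected:", s)
			continue
		}
		fmt.Println("accepted:", id)
	}
}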

View File

@@ -0,0 +1,10 @@
package api
import (
"net/http"
)
func (ah *APIHandler) getFeatureFlags(w http.ResponseWriter, r *http.Request) {
featureSet := ah.FF().GetFeatureFlags()
ah.Respond(w, featureSet)
}

View File

@@ -0,0 +1,40 @@
package api
import (
"context"
"encoding/json"
"fmt"
"go.signoz.io/signoz/ee/query-service/model"
"net/http"
)
func (ah *APIHandler) listLicenses(w http.ResponseWriter, r *http.Request) {
licenses, apiError := ah.LM().GetLicenses(context.Background())
if apiError != nil {
RespondError(w, apiError, nil)
return
}
ah.Respond(w, licenses)
}
func (ah *APIHandler) applyLicense(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
var l model.License
if err := json.NewDecoder(r.Body).Decode(&l); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if l.Key == "" {
RespondError(w, model.BadRequest(fmt.Errorf("license key is required")), nil)
return
}
license, apiError := ah.LM().Activate(ctx, l.Key)
if apiError != nil {
RespondError(w, apiError, nil)
return
}
ah.Respond(w, license)
}
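A hedged client-side sketch for exercising applyLicense; the host, path, and the key JSON field name are assumptions, since the route registration and the License model's tags sit outside this diff.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// assumed payload shape: applyLicense rejects an empty key with 400
	body, _ := json.Marshal(map[string]string{"key": "YOUR-LICENSE-KEY"})
	resp, err := http.Post("http://localhost:8080/api/v2/licenses", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}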

View File

@@ -0,0 +1,236 @@
package api
import (
"bytes"
"fmt"
"net/http"
"sync"
"text/template"
"time"
"go.signoz.io/signoz/pkg/query-service/app/metrics"
"go.signoz.io/signoz/pkg/query-service/app/parser"
"go.signoz.io/signoz/pkg/query-service/constants"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
querytemplate "go.signoz.io/signoz/pkg/query-service/utils/queryTemplate"
"go.uber.org/zap"
)
func (ah *APIHandler) queryRangeMetricsV2(w http.ResponseWriter, r *http.Request) {
if !ah.CheckFeature(basemodel.CustomMetricsFunction) {
zap.S().Info("CustomMetricsFunction feature is not enabled in this plan")
ah.APIHandler.QueryRangeMetricsV2(w, r)
return
}
metricsQueryRangeParams, apiErrorObj := parser.ParseMetricQueryRangeParams(r)
if apiErrorObj != nil {
zap.S().Error(apiErrorObj.Err.Error())
RespondError(w, apiErrorObj, nil)
return
}
// prometheus instant query needs same timestamp
if metricsQueryRangeParams.CompositeMetricQuery.PanelType == basemodel.QUERY_VALUE &&
metricsQueryRangeParams.CompositeMetricQuery.QueryType == basemodel.PROM {
metricsQueryRangeParams.Start = metricsQueryRangeParams.End
}
// round down the end to the nearest multiple of step
if metricsQueryRangeParams.CompositeMetricQuery.QueryType == basemodel.QUERY_BUILDER {
end := (metricsQueryRangeParams.End) / 1000
step := metricsQueryRangeParams.Step
metricsQueryRangeParams.End = (end / step * step) * 1000
}
type channelResult struct {
Series []*basemodel.Series
TableName string
Err error
Name string
Query string
}
execClickHouseQueries := func(queries map[string]string) ([]*basemodel.Series, []string, error, map[string]string) {
var seriesList []*basemodel.Series
var tableName []string
ch := make(chan channelResult, len(queries))
var wg sync.WaitGroup
for name, query := range queries {
wg.Add(1)
go func(name, query string) {
defer wg.Done()
seriesList, tableName, err := ah.opts.DataConnector.GetMetricResultEE(r.Context(), query)
if err != nil {
ch <- channelResult{Err: fmt.Errorf("error in query-%s: %v", name, err), Name: name, Query: query}
return
}
for _, series := range seriesList {
series.QueryName = name
}
ch <- channelResult{Series: seriesList, TableName: tableName}
}(name, query)
}
wg.Wait()
close(ch)
var errs []error
errQuriesByName := make(map[string]string)
// read values from the channel
for r := range ch {
if r.Err != nil {
errs = append(errs, r.Err)
errQuriesByName[r.Name] = r.Query
continue
}
seriesList = append(seriesList, r.Series...)
tableName = append(tableName, r.TableName)
}
if len(errs) != 0 {
return nil, nil, fmt.Errorf("encountered multiple errors: %s", metrics.FormatErrs(errs, "\n")), errQuriesByName
}
return seriesList, tableName, nil, nil
}
execPromQueries := func(metricsQueryRangeParams *basemodel.QueryRangeParamsV2) ([]*basemodel.Series, error, map[string]string) {
var seriesList []*basemodel.Series
ch := make(chan channelResult, len(metricsQueryRangeParams.CompositeMetricQuery.PromQueries))
var wg sync.WaitGroup
for name, query := range metricsQueryRangeParams.CompositeMetricQuery.PromQueries {
if query.Disabled {
continue
}
wg.Add(1)
go func(name string, query *basemodel.PromQuery) {
var seriesList []*basemodel.Series
defer wg.Done()
tmpl := template.New("promql-query")
tmpl, tmplErr := tmpl.Parse(query.Query)
if tmplErr != nil {
ch <- channelResult{Err: fmt.Errorf("error in parsing query-%s: %v", name, tmplErr), Name: name, Query: query.Query}
return
}
var queryBuf bytes.Buffer
tmplErr = tmpl.Execute(&queryBuf, metricsQueryRangeParams.Variables)
if tmplErr != nil {
ch <- channelResult{Err: fmt.Errorf("error in parsing query-%s: %v", name, tmplErr), Name: name, Query: query.Query}
return
}
query.Query = queryBuf.String()
queryModel := basemodel.QueryRangeParams{
Start: time.UnixMilli(metricsQueryRangeParams.Start),
End: time.UnixMilli(metricsQueryRangeParams.End),
Step: time.Duration(metricsQueryRangeParams.Step * int64(time.Second)),
Query: query.Query,
}
promResult, _, err := ah.opts.DataConnector.GetQueryRangeResult(r.Context(), &queryModel)
if err != nil {
ch <- channelResult{Err: fmt.Errorf("error in query-%s: %v", name, err), Name: name, Query: query.Query}
return
}
matrix, _ := promResult.Matrix()
for _, v := range matrix {
var s basemodel.Series
s.QueryName = name
s.Labels = v.Metric.Copy().Map()
for _, p := range v.Points {
s.Points = append(s.Points, basemodel.MetricPoint{Timestamp: p.T, Value: p.V})
}
seriesList = append(seriesList, &s)
}
ch <- channelResult{Series: seriesList}
}(name, query)
}
wg.Wait()
close(ch)
var errs []error
errQuriesByName := make(map[string]string)
// read values from the channel
for r := range ch {
if r.Err != nil {
errs = append(errs, r.Err)
errQuriesByName[r.Name] = r.Query
continue
}
seriesList = append(seriesList, r.Series...)
}
if len(errs) != 0 {
return nil, fmt.Errorf("encountered multiple errors: %s", metrics.FormatErrs(errs, "\n")), errQuriesByName
}
return seriesList, nil, nil
}
var seriesList []*basemodel.Series
var tableName []string
var err error
var errQuriesByName map[string]string
switch metricsQueryRangeParams.CompositeMetricQuery.QueryType {
case basemodel.QUERY_BUILDER:
runQueries := metrics.PrepareBuilderMetricQueries(metricsQueryRangeParams, constants.SIGNOZ_TIMESERIES_TABLENAME)
if runQueries.Err != nil {
RespondError(w, &basemodel.ApiError{Typ: basemodel.ErrorBadData, Err: runQueries.Err}, nil)
return
}
seriesList, tableName, err, errQuriesByName = execClickHouseQueries(runQueries.Queries)
case basemodel.CLICKHOUSE:
queries := make(map[string]string)
for name, chQuery := range metricsQueryRangeParams.CompositeMetricQuery.ClickHouseQueries {
if chQuery.Disabled {
continue
}
tmpl := template.New("clickhouse-query")
tmpl, err := tmpl.Parse(chQuery.Query)
if err != nil {
RespondError(w, &basemodel.ApiError{Typ: basemodel.ErrorBadData, Err: err}, nil)
return
}
var query bytes.Buffer
// replace go template variables
querytemplate.AssignReservedVars(metricsQueryRangeParams)
err = tmpl.Execute(&query, metricsQueryRangeParams.Variables)
if err != nil {
RespondError(w, &basemodel.ApiError{Typ: basemodel.ErrorBadData, Err: err}, nil)
return
}
queries[name] = query.String()
}
seriesList, tableName, err, errQuriesByName = execClickHouseQueries(queries)
case basemodel.PROM:
seriesList, err, errQuriesByName = execPromQueries(metricsQueryRangeParams)
default:
err = fmt.Errorf("invalid query type")
RespondError(w, &basemodel.ApiError{Typ: basemodel.ErrorBadData, Err: err}, errQuriesByName)
return
}
if err != nil {
apiErrObj := &basemodel.ApiError{Typ: basemodel.ErrorBadData, Err: err}
RespondError(w, apiErrObj, errQuriesByName)
return
}
if metricsQueryRangeParams.CompositeMetricQuery.PanelType == basemodel.QUERY_VALUE &&
len(seriesList) > 1 &&
(metricsQueryRangeParams.CompositeMetricQuery.QueryType == basemodel.QUERY_BUILDER ||
metricsQueryRangeParams.CompositeMetricQuery.QueryType == basemodel.CLICKHOUSE) {
RespondError(w, &basemodel.ApiError{Typ: basemodel.ErrorBadData, Err: fmt.Errorf("invalid: query resulted in more than one series for value type")}, nil)
return
}
type ResponseFormat struct {
ResultType string `json:"resultType"`
Result []*basemodel.Series `json:"result"`
TableName []string `json:"tableName"`
}
resp := ResponseFormat{ResultType: "matrix", Result: seriesList, TableName: tableName}
ah.Respond(w, resp)
}
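execClickHouseQueries and execPromQueries above share one concurrency shape: a goroutine per named query, a result channel buffered to len(queries) so senders never block, wg.Wait followed by close, then a drain loop that partitions successes from errors. A standalone sketch of just that shape:

package main

import (
	"fmt"
	"strings"
	"sync"
)

type result struct {
	Name string
	Out  string
	Err  error
}

func main() {
	queries := map[string]string{"A": "select 1", "B": "select 2"}
	ch := make(chan result, len(queries)) // buffered so goroutines never block on send
	var wg sync.WaitGroup
	for name, q := range queries {
		wg.Add(1)
		go func(name, q string) {
			defer wg.Done()
			ch <- result{Name: name, Out: strings.ToUpper(q)} // stand-in for running the query
		}(name, q)
	}
	wg.Wait()
	close(ch) // safe: all senders are done
	for r := range ch {
		fmt.Println(r.Name, "->", r.Out)
	}
}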

View File

@@ -0,0 +1,12 @@
package api
import (
"net/http"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
func RespondError(w http.ResponseWriter, apiErr basemodel.BaseApiError, data interface{}) {
baseapp.RespondError(w, apiErr, data)
}

View File

@@ -0,0 +1,39 @@
package api
import (
"net/http"
"strconv"
"go.signoz.io/signoz/ee/query-service/app/db"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
func (ah *APIHandler) searchTraces(w http.ResponseWriter, r *http.Request) {
if !ah.CheckFeature(basemodel.SmartTraceDetail) {
zap.S().Info("SmartTraceDetail feature is not enabled in this plan")
ah.APIHandler.SearchTraces(w, r)
return
}
traceId, spanId, levelUpInt, levelDownInt, err := baseapp.ParseSearchTracesParams(r)
if err != nil {
RespondError(w, &model.ApiError{Typ: model.ErrorBadData, Err: err}, "Error reading params")
return
}
spanLimit, err := strconv.Atoi(constants.SpanLimitStr)
if err != nil {
zap.S().Error("Error during strconv.Atoi() on SPAN_LIMIT env variable: ", err)
RespondError(w, model.InternalError(err), nil)
return
}
result, err := ah.opts.DataConnector.SearchTraces(r.Context(), traceId, spanId, levelUpInt, levelDownInt, spanLimit, db.SmartTraceAlgorithm)
if ah.HandleError(w, err, http.StatusBadRequest) {
return
}
ah.WriteJSON(w, r, result)
}

View File

@@ -0,0 +1,401 @@
package db
import (
"context"
"crypto/md5"
"encoding/json"
"fmt"
"reflect"
"regexp"
"sort"
"strings"
"time"
"go.signoz.io/signoz/ee/query-service/model"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/query-service/utils"
"go.uber.org/zap"
)
// GetMetricResultEE runs the query and returns list of time series
func (r *ClickhouseReader) GetMetricResultEE(ctx context.Context, query string) ([]*basemodel.Series, string, error) {
defer utils.Elapsed("GetMetricResult")()
zap.S().Infof("Executing metric result query: %s", query)
var hash string
// If getSubTreeSpans function is used in the clickhouse query
if strings.Index(query, "getSubTreeSpans(") != -1 {
var err error
query, hash, err = r.getSubTreeSpansCustomFunction(ctx, query, hash)
if err == fmt.Errorf("No spans found for the given query") {
return nil, "", nil
}
if err != nil {
return nil, "", err
}
}
rows, err := r.conn.Query(ctx, query)
zap.S().Debug(query)
if err != nil {
zap.S().Debug("Error in processing query: ", err)
return nil, "", fmt.Errorf("error in processing query")
}
var (
columnTypes = rows.ColumnTypes()
columnNames = rows.Columns()
vars = make([]interface{}, len(columnTypes))
)
for i := range columnTypes {
vars[i] = reflect.New(columnTypes[i].ScanType()).Interface()
}
// when group by is applied, each combination of cartesian product
// of attributes is separate series. each item in metricPointsMap
// represent a unique series.
metricPointsMap := make(map[string][]basemodel.MetricPoint)
// attribute key-value pairs for each group selection
attributesMap := make(map[string]map[string]string)
defer rows.Close()
for rows.Next() {
if err := rows.Scan(vars...); err != nil {
return nil, "", err
}
var groupBy []string
var metricPoint basemodel.MetricPoint
groupAttributes := make(map[string]string)
// Assuming that the end result row contains a timestamp, value and option labels
// Label key and value are both strings.
for idx, v := range vars {
colName := columnNames[idx]
switch v := v.(type) {
case *string:
// special case for returning all labels
if colName == "fullLabels" {
var metric map[string]string
err := json.Unmarshal([]byte(*v), &metric)
if err != nil {
return nil, "", err
}
for key, val := range metric {
groupBy = append(groupBy, val)
groupAttributes[key] = val
}
} else {
groupBy = append(groupBy, *v)
groupAttributes[colName] = *v
}
case *time.Time:
metricPoint.Timestamp = v.UnixMilli()
case *float64:
metricPoint.Value = *v
case **float64:
// ch seems to return this type when column is derived from
// SELECT count(*)/ SELECT count(*)
floatVal := *v
if floatVal != nil {
metricPoint.Value = *floatVal
}
case *float32:
metricPoint.Value = float64(*v)
case *uint8, *uint64, *uint16, *uint32:
if _, ok := baseconst.ReservedColumnTargetAliases[colName]; ok {
metricPoint.Value = float64(reflect.ValueOf(v).Elem().Uint())
} else {
groupBy = append(groupBy, fmt.Sprintf("%v", reflect.ValueOf(v).Elem().Uint()))
groupAttributes[colName] = fmt.Sprintf("%v", reflect.ValueOf(v).Elem().Uint())
}
case *int8, *int16, *int32, *int64:
if _, ok := baseconst.ReservedColumnTargetAliases[colName]; ok {
metricPoint.Value = float64(reflect.ValueOf(v).Elem().Int())
} else {
groupBy = append(groupBy, fmt.Sprintf("%v", reflect.ValueOf(v).Elem().Int()))
groupAttributes[colName] = fmt.Sprintf("%v", reflect.ValueOf(v).Elem().Int())
}
default:
zap.S().Errorf("invalid var found in metric builder query result", v, colName)
}
}
sort.Strings(groupBy)
key := strings.Join(groupBy, "")
attributesMap[key] = groupAttributes
metricPointsMap[key] = append(metricPointsMap[key], metricPoint)
}
var seriesList []*basemodel.Series
for key := range metricPointsMap {
points := metricPointsMap[key]
// first point in each series could be invalid since the
// aggregations are applied with a point from the previous series
if len(points) > 1 {
points = points[1:]
}
attributes := attributesMap[key]
series := basemodel.Series{Labels: attributes, Points: points}
seriesList = append(seriesList, &series)
}
// err = r.conn.Exec(ctx, "DROP TEMPORARY TABLE IF EXISTS getSubTreeSpans"+hash)
// if err != nil {
// zap.S().Error("Error in dropping temporary table: ", err)
// return nil, err
// }
if hash == "" {
return seriesList, hash, nil
} else {
return seriesList, "getSubTreeSpans" + hash, nil
}
}
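The scan loop above handles result sets of unknown shape by allocating one destination per column from the driver-reported scan type, then type-switching on the filled pointers. A standalone toy of that trick (the column types here are illustrative):

package main

import (
	"fmt"
	"reflect"
	"time"
)

func main() {
	// stand-in for rows.ColumnTypes()[i].ScanType()
	scanTypes := []reflect.Type{
		reflect.TypeOf(time.Time{}),
		reflect.TypeOf(""),
		reflect.TypeOf(float64(0)),
	}
	vars := make([]interface{}, len(scanTypes))
	for i, t := range scanTypes {
		vars[i] = reflect.New(t).Interface() // *time.Time, *string, *float64
	}
	// pretend rows.Scan(vars...) filled the targets; type-switch like the reader does
	for _, v := range vars {
		switch v := v.(type) {
		case *time.Time:
			fmt.Println("timestamp column ->", v.UnixMilli())
		case *string:
			fmt.Println("label column ->", *v)
		case *float64:
			fmt.Println("value column ->", *v)
		}
	}
}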
func (r *ClickhouseReader) getSubTreeSpansCustomFunction(ctx context.Context, query string, hash string) (string, string, error) {
zap.S().Debugf("Executing getSubTreeSpans function")
// str1 := `select fromUnixTimestamp64Milli(intDiv( toUnixTimestamp64Milli ( timestamp ), 100) * 100) AS interval, toFloat64(count()) as count from (select timestamp, spanId, parentSpanId, durationNano from getSubTreeSpans(select * from signoz_traces.signoz_index_v2 where serviceName='frontend' and name='/driver.DriverService/FindNearest' and traceID='00000000000000004b0a863cb5ed7681') where name='FindDriverIDs' group by interval order by interval asc;`
// process the query to fetch subTree query
var subtreeInput string
query, subtreeInput, hash = processQuery(query, hash)
err := r.conn.Exec(ctx, "DROP TABLE IF EXISTS getSubTreeSpans"+hash)
if err != nil {
zap.S().Error("Error in dropping temporary table: ", err)
return query, hash, err
}
// Create temporary table to store the getSubTreeSpans() results
zap.S().Debugf("Creating temporary table getSubTreeSpans%s", hash)
err = r.conn.Exec(ctx, "CREATE TABLE IF NOT EXISTS "+"getSubTreeSpans"+hash+" (timestamp DateTime64(9) CODEC(DoubleDelta, LZ4), traceID FixedString(32) CODEC(ZSTD(1)), spanID String CODEC(ZSTD(1)), parentSpanID String CODEC(ZSTD(1)), rootSpanID String CODEC(ZSTD(1)), serviceName LowCardinality(String) CODEC(ZSTD(1)), name LowCardinality(String) CODEC(ZSTD(1)), rootName LowCardinality(String) CODEC(ZSTD(1)), durationNano UInt64 CODEC(T64, ZSTD(1)), kind Int8 CODEC(T64, ZSTD(1)), tagMap Map(LowCardinality(String), String) CODEC(ZSTD(1)), events Array(String) CODEC(ZSTD(2))) ENGINE = MergeTree() ORDER BY (timestamp)")
if err != nil {
zap.S().Error("Error in creating temporary table: ", err)
return query, hash, err
}
var getSpansSubQueryDBResponses []model.GetSpansSubQueryDBResponse
getSpansSubQuery := subtreeInput
// Execute the subTree query
zap.S().Debugf("Executing subTree query: %s", getSpansSubQuery)
err = r.conn.Select(ctx, &getSpansSubQueryDBResponses, getSpansSubQuery)
// zap.S().Info(getSpansSubQuery)
if err != nil {
zap.S().Debug("Error in processing sql query: ", err)
return query, hash, fmt.Errorf("Error in processing sql query")
}
var searchScanResponses []basemodel.SearchSpanDBResponseItem
// TODO : @ankit: I think the algorithm does not need to assume that subtrees are from the same TraceID. We can take this as an improvement later.
// Fetch all the spans from of same TraceID so that we can build subtree
modelQuery := fmt.Sprintf("SELECT timestamp, traceID, model FROM %s.%s WHERE traceID=$1", r.TraceDB, r.SpansTable)
if len(getSpansSubQueryDBResponses) == 0 {
return query, hash, fmt.Errorf("No spans found for the given query")
}
zap.S().Debugf("Executing query to fetch all the spans from the same TraceID: %s", modelQuery)
err = r.conn.Select(ctx, &searchScanResponses, modelQuery, getSpansSubQueryDBResponses[0].TraceID)
if err != nil {
zap.S().Debug("Error in processing sql query: ", err)
return query, hash, fmt.Errorf("Error in processing sql query")
}
// Process model to fetch the spans
zap.S().Debugf("Processing model to fetch the spans")
searchSpanResponses := []basemodel.SearchSpanResponseItem{}
for _, item := range searchScanResponses {
var jsonItem basemodel.SearchSpanResponseItem
json.Unmarshal([]byte(item.Model), &jsonItem)
jsonItem.TimeUnixNano = uint64(item.Timestamp.UnixNano())
if jsonItem.Events == nil {
jsonItem.Events = []string{}
}
searchSpanResponses = append(searchSpanResponses, jsonItem)
}
// Build the subtree and store all the subtree spans in temporary table getSubTreeSpans+hash
// Use map to store pointer to the spans to avoid duplicates and save memory
zap.S().Debugf("Building the subtree to store all the subtree spans in temporary table getSubTreeSpans%s", hash)
treeSearchResponse, err := getSubTreeAlgorithm(searchSpanResponses, getSpansSubQueryDBResponses)
if err != nil {
zap.S().Error("Error in getSubTreeAlgorithm function: ", err)
return query, hash, err
}
zap.S().Debugf("Preparing batch to store subtree spans in temporary table getSubTreeSpans%s", hash)
statement, err := r.conn.PrepareBatch(context.Background(), "INSERT INTO getSubTreeSpans"+hash)
if err != nil {
zap.S().Error("Error in preparing batch statement: ", err)
return query, hash, err
}
for _, span := range treeSearchResponse {
var parentID string
if len(span.References) > 0 && span.References[0].RefType == "CHILD_OF" {
parentID = span.References[0].SpanId
}
err = statement.Append(
time.Unix(0, int64(span.TimeUnixNano)),
span.TraceID,
span.SpanID,
parentID,
span.RootSpanID,
span.ServiceName,
span.Name,
span.RootName,
uint64(span.DurationNano),
int8(span.Kind),
span.TagMap,
span.Events,
)
if err != nil {
zap.S().Debug("Error in processing sql query: ", err)
return query, hash, err
}
}
zap.S().Debugf("Inserting the subtree spans in temporary table getSubTreeSpans%s", hash)
err = statement.Send()
if err != nil {
zap.S().Error("Error in sending statement: ", err)
return query, hash, err
}
return query, hash, nil
}
func processQuery(query string, hash string) (string, string, string) {
re3 := regexp.MustCompile(`getSubTreeSpans`)
submatchall3 := re3.FindAllStringIndex(query, -1)
getSubtreeSpansMatchIndex := submatchall3[0][1]
query2countParenthesis := query[getSubtreeSpansMatchIndex:]
sqlCompleteIndex := 0
countParenthesisImbalance := 0
for i, char := range query2countParenthesis {
if char == '(' {
countParenthesisImbalance++
}
if char == ')' {
countParenthesisImbalance--
}
if countParenthesisImbalance == 0 {
sqlCompleteIndex = i
break
}
}
subtreeInput := query2countParenthesis[1:sqlCompleteIndex]
// hash the subtreeInput
hmd5 := md5.Sum([]byte(subtreeInput))
hash = fmt.Sprintf("%x", hmd5)
// Reformat the query to use the getSubTreeSpans function
query = query[:getSubtreeSpansMatchIndex] + hash + " " + query2countParenthesis[sqlCompleteIndex+1:]
return query, subtreeInput, hash
}
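A short usage sketch for processQuery, assuming it is called from the same package (fmt is already imported here): the inner SELECT is extracted by balancing parentheses, md5-hashed, and the call site is rewritten to reference the temp table getSubTreeSpans<hash>.

func sketchProcessQuery() {
	q := "select count() from getSubTreeSpans(select * from signoz_traces.signoz_index_v2 where serviceName='frontend') group by name"
	rewritten, inner, h := processQuery(q, "")
	fmt.Println(inner)     // select * from signoz_traces.signoz_index_v2 where serviceName='frontend'
	fmt.Println(h)         // hex md5 of the inner SELECT
	fmt.Println(rewritten) // select count() from getSubTreeSpans<hash>  group by name
}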
// getSubTreeAlgorithm is an algorithm to build the subtrees of the spans and return the list of spans
func getSubTreeAlgorithm(payload []basemodel.SearchSpanResponseItem, getSpansSubQueryDBResponses []model.GetSpansSubQueryDBResponse) (map[string]*basemodel.SearchSpanResponseItem, error) {
var spans []*model.SpanForTraceDetails
for _, spanItem := range payload {
var parentID string
if len(spanItem.References) > 0 && spanItem.References[0].RefType == "CHILD_OF" {
parentID = spanItem.References[0].SpanId
}
span := &model.SpanForTraceDetails{
TimeUnixNano: spanItem.TimeUnixNano,
SpanID: spanItem.SpanID,
TraceID: spanItem.TraceID,
ServiceName: spanItem.ServiceName,
Name: spanItem.Name,
Kind: spanItem.Kind,
DurationNano: spanItem.DurationNano,
TagMap: spanItem.TagMap,
ParentID: parentID,
Events: spanItem.Events,
HasError: spanItem.HasError,
}
spans = append(spans, span)
}
zap.S().Debug("Building Tree")
roots, err := buildSpanTrees(&spans)
if err != nil {
return nil, err
}
searchSpansResult := make(map[string]*basemodel.SearchSpanResponseItem)
// Every span which was fetched from getSubTree Input SQL query is considered root
// For each root, get the subtree spans
for _, getSpansSubQueryDBResponse := range getSpansSubQueryDBResponses {
targetSpan := &model.SpanForTraceDetails{}
// zap.S().Debug("Building tree for span id: " + getSpansSubQueryDBResponse.SpanID + " " + strconv.Itoa(i+1) + " of " + strconv.Itoa(len(getSpansSubQueryDBResponses)))
// Search target span object in the tree
for _, root := range roots {
targetSpan, err = breadthFirstSearch(root, getSpansSubQueryDBResponse.SpanID)
if targetSpan != nil {
break
}
if err != nil {
zap.S().Error("Error during BreadthFirstSearch(): ", err)
return nil, err
}
}
if targetSpan == nil {
return nil, nil
}
// Build subtree for the target span
// Mark the target span as root by setting parent ID as empty string
targetSpan.ParentID = ""
preParents := []*model.SpanForTraceDetails{targetSpan}
children := []*model.SpanForTraceDetails{}
// Get the subtree child spans
for i := 0; len(preParents) != 0; i++ {
parents := []*model.SpanForTraceDetails{}
for _, parent := range preParents {
children = append(children, parent.Children...)
parents = append(parents, parent.Children...)
}
preParents = parents
}
resultSpans := children
// Add the target span to the result spans
resultSpans = append(resultSpans, targetSpan)
for _, item := range resultSpans {
references := []basemodel.OtelSpanRef{
{
TraceId: item.TraceID,
SpanId: item.ParentID,
RefType: "CHILD_OF",
},
}
if item.Events == nil {
item.Events = []string{}
}
searchSpansResult[item.SpanID] = &basemodel.SearchSpanResponseItem{
TimeUnixNano: item.TimeUnixNano,
SpanID: item.SpanID,
TraceID: item.TraceID,
ServiceName: item.ServiceName,
Name: item.Name,
Kind: item.Kind,
References: references,
DurationNano: item.DurationNano,
TagMap: item.TagMap,
Events: item.Events,
HasError: item.HasError,
RootSpanID: getSpansSubQueryDBResponse.SpanID,
RootName: targetSpan.Name,
}
}
}
return searchSpansResult, nil
}

View File

@@ -0,0 +1,29 @@
package db
import (
"github.com/ClickHouse/clickhouse-go/v2"
"github.com/jmoiron/sqlx"
basechr "go.signoz.io/signoz/pkg/query-service/app/clickhouseReader"
"go.signoz.io/signoz/pkg/query-service/interfaces"
)
type ClickhouseReader struct {
conn clickhouse.Conn
appdb *sqlx.DB
*basechr.ClickHouseReader
}
func NewDataConnector(localDB *sqlx.DB, promConfigPath string, lm interfaces.FeatureLookup) *ClickhouseReader {
ch := basechr.NewReader(localDB, promConfigPath, lm)
return &ClickhouseReader{
conn: ch.GetConn(),
appdb: localDB,
ClickHouseReader: ch,
}
}
func (r *ClickhouseReader) Start(readerReady chan bool) {
r.ClickHouseReader.Start(readerReady)
}
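The EE reader above leans on Go struct embedding: it embeds the community ClickHouseReader and overrides only what it needs, while every other method is promoted from the base type. A standalone toy of the pattern (names are illustrative):

package main

import "fmt"

type BaseReader struct{}

func (BaseReader) Search() string  { return "base search" }
func (BaseReader) Version() string { return "base version" }

type EEReader struct {
	BaseReader // promoted methods: Search, Version
}

// Override just one method; Search stays the base implementation.
func (EEReader) Version() string { return "ee version" }

func main() {
	r := EEReader{}
	fmt.Println(r.Search())  // base search (promoted)
	fmt.Println(r.Version()) // ee version (override)
}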

View File

@@ -0,0 +1,222 @@
package db
import (
"errors"
"strconv"
"go.signoz.io/signoz/ee/query-service/model"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
// SmartTraceAlgorithm is an algorithm to find the target span and build a tree of spans around it with the given levelUp and levelDown parameters and the given spanLimit
func SmartTraceAlgorithm(payload []basemodel.SearchSpanResponseItem, targetSpanId string, levelUp int, levelDown int, spanLimit int) ([]basemodel.SearchSpansResult, error) {
var spans []*model.SpanForTraceDetails
// Build a slice of spans from the payload
for _, spanItem := range payload {
var parentID string
if len(spanItem.References) > 0 && spanItem.References[0].RefType == "CHILD_OF" {
parentID = spanItem.References[0].SpanId
}
span := &model.SpanForTraceDetails{
TimeUnixNano: spanItem.TimeUnixNano,
SpanID: spanItem.SpanID,
TraceID: spanItem.TraceID,
ServiceName: spanItem.ServiceName,
Name: spanItem.Name,
Kind: spanItem.Kind,
DurationNano: spanItem.DurationNano,
TagMap: spanItem.TagMap,
ParentID: parentID,
Events: spanItem.Events,
HasError: spanItem.HasError,
}
spans = append(spans, span)
}
// Build span trees from the spans
roots, err := buildSpanTrees(&spans)
if err != nil {
return nil, err
}
targetSpan := &model.SpanForTraceDetails{}
// Find the target span in the span trees
for _, root := range roots {
targetSpan, err = breadthFirstSearch(root, targetSpanId)
if targetSpan != nil {
break
}
if err != nil {
zap.S().Error("Error during BreadthFirstSearch(): ", err)
return nil, err
}
}
// If the target span is not found, return span not found error
if targetSpan == nil {
return nil, errors.New("Span not found")
}
// Build the final result
parents := []*model.SpanForTraceDetails{}
// Get the parent spans of the target span up to the given levelUp parameter and spanLimit
preParent := targetSpan
for i := 0; i < levelUp+1; i++ {
if i == levelUp {
preParent.ParentID = ""
}
if spanLimit-len(preParent.Children) <= 0 {
parents = append(parents, preParent)
parents = append(parents, preParent.Children[:spanLimit]...)
spanLimit -= (len(preParent.Children[:spanLimit]) + 1)
preParent.ParentID = ""
break
}
parents = append(parents, preParent)
parents = append(parents, preParent.Children...)
spanLimit -= (len(preParent.Children) + 1)
preParent = preParent.ParentSpan
if preParent == nil {
break
}
}
// Get the child spans of the target span until the given levelDown and spanLimit
preParents := []*model.SpanForTraceDetails{targetSpan}
children := []*model.SpanForTraceDetails{}
for i := 0; i < levelDown && len(preParents) != 0 && spanLimit > 0; i++ {
parents := []*model.SpanForTraceDetails{}
for _, parent := range preParents {
if spanLimit-len(parent.Children) <= 0 {
children = append(children, parent.Children[:spanLimit]...)
spanLimit -= len(parent.Children[:spanLimit])
break
}
children = append(children, parent.Children...)
parents = append(parents, parent.Children...)
}
preParents = parents
}
// Store the final list of spans in the resultSpanSet map to avoid duplicates
resultSpansSet := make(map[*model.SpanForTraceDetails]struct{})
resultSpansSet[targetSpan] = struct{}{}
for _, parent := range parents {
resultSpansSet[parent] = struct{}{}
}
for _, child := range children {
resultSpansSet[child] = struct{}{}
}
searchSpansResult := []basemodel.SearchSpansResult{{
Columns: []string{"__time", "SpanId", "TraceId", "ServiceName", "Name", "Kind", "DurationNano", "TagsKeys", "TagsValues", "References", "Events", "HasError"},
Events: make([][]interface{}, len(resultSpansSet)),
},
}
// Convert the resultSpansSet map to searchSpansResult
i := 0 // index for spans
for item := range resultSpansSet {
references := []basemodel.OtelSpanRef{
{
TraceId: item.TraceID,
SpanId: item.ParentID,
RefType: "CHILD_OF",
},
}
referencesStringArray := []string{}
for _, item := range references {
referencesStringArray = append(referencesStringArray, item.ToString())
}
keys := make([]string, 0, len(item.TagMap))
values := make([]string, 0, len(item.TagMap))
for k, v := range item.TagMap {
keys = append(keys, k)
values = append(values, v)
}
if item.Events == nil {
item.Events = []string{}
}
searchSpansResult[0].Events[i] = []interface{}{
item.TimeUnixNano,
item.SpanID,
item.TraceID,
item.ServiceName,
item.Name,
strconv.Itoa(int(item.Kind)),
strconv.FormatInt(item.DurationNano, 10),
keys,
values,
referencesStringArray,
item.Events,
item.HasError,
}
i++ // increment index
}
return searchSpansResult, nil
}
// buildSpanTrees builds trees of spans from a list of spans.
func buildSpanTrees(spansPtr *[]*model.SpanForTraceDetails) ([]*model.SpanForTraceDetails, error) {
// Build a map of spanID to span for fast lookup
var roots []*model.SpanForTraceDetails
spans := *spansPtr
mapOfSpans := make(map[string]*model.SpanForTraceDetails, len(spans))
for _, span := range spans {
if span.ParentID == "" {
roots = append(roots, span)
}
mapOfSpans[span.SpanID] = span
}
// Build the span tree by adding children to the parent spans
for _, span := range spans {
if span.ParentID == "" {
continue
}
parent := mapOfSpans[span.ParentID]
// If the parent span is not found, add current span to list of roots
if parent == nil {
// zap.S().Debug("Parent Span not found parent_id: ", span.ParentID)
roots = append(roots, span)
span.ParentID = ""
continue
}
span.ParentSpan = parent
parent.Children = append(parent.Children, span)
}
return roots, nil
}
// breadthFirstSearch performs a breadth-first search on the span tree to find the target span.
func breadthFirstSearch(spansPtr *model.SpanForTraceDetails, targetId string) (*model.SpanForTraceDetails, error) {
queue := []*model.SpanForTraceDetails{spansPtr}
visited := make(map[string]bool)
for len(queue) > 0 {
current := queue[0]
visited[current.SpanID] = true
queue = queue[1:]
if current.SpanID == targetId {
return current, nil
}
for _, child := range current.Children {
if !visited[child.SpanID] {
queue = append(queue, child)
}
}
}
return nil, nil
}

View File

@@ -0,0 +1,442 @@
package app
import (
"context"
"fmt"
"net"
"net/http"
_ "net/http/pprof" // http profiler
"os"
"time"
"github.com/gorilla/handlers"
"github.com/gorilla/mux"
"github.com/jmoiron/sqlx"
"github.com/rs/cors"
"github.com/soheilhy/cmux"
"go.signoz.io/signoz/ee/query-service/app/api"
"go.signoz.io/signoz/ee/query-service/app/db"
"go.signoz.io/signoz/ee/query-service/dao"
"go.signoz.io/signoz/ee/query-service/interfaces"
licensepkg "go.signoz.io/signoz/ee/query-service/license"
"go.signoz.io/signoz/ee/query-service/usage"
"go.signoz.io/signoz/pkg/query-service/app/dashboards"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
"go.signoz.io/signoz/pkg/query-service/healthcheck"
basealm "go.signoz.io/signoz/pkg/query-service/integrations/alertManager"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
pqle "go.signoz.io/signoz/pkg/query-service/pqlEngine"
rules "go.signoz.io/signoz/pkg/query-service/rules"
"go.signoz.io/signoz/pkg/query-service/telemetry"
"go.signoz.io/signoz/pkg/query-service/utils"
"go.uber.org/zap"
)
type ServerOptions struct {
PromConfigPath string
HTTPHostPort string
PrivateHostPort string
// alert specific params
DisableRules bool
RuleRepoURL string
}
// Server runs HTTP api service
type Server struct {
serverOptions *ServerOptions
conn net.Listener
ruleManager *rules.Manager
separatePorts bool
// public http router
httpConn net.Listener
httpServer *http.Server
// private http
privateConn net.Listener
privateHTTP *http.Server
// feature flags
featureLookup baseint.FeatureLookup
unavailableChannel chan healthcheck.Status
}
// HealthCheckStatus returns health check status channel a client can subscribe to
func (s Server) HealthCheckStatus() chan healthcheck.Status {
return s.unavailableChannel
}
// NewServer creates and initializes Server
func NewServer(serverOptions *ServerOptions) (*Server, error) {
modelDao, err := dao.InitDao("sqlite", baseconst.RELATIONAL_DATASOURCE_PATH)
if err != nil {
return nil, err
}
localDB, err := dashboards.InitDB(baseconst.RELATIONAL_DATASOURCE_PATH)
if err != nil {
return nil, err
}
localDB.SetMaxOpenConns(10)
// initiate license manager
lm, err := licensepkg.StartManager("sqlite", localDB)
if err != nil {
return nil, err
}
// set license manager as feature flag provider in dao
modelDao.SetFlagProvider(lm)
readerReady := make(chan bool)
var reader interfaces.DataConnector
storage := os.Getenv("STORAGE")
if storage == "clickhouse" {
zap.S().Info("Using ClickHouse as datastore ...")
qb := db.NewDataConnector(localDB, serverOptions.PromConfigPath, lm)
go qb.Start(readerReady)
reader = qb
} else {
return nil, fmt.Errorf("Storage type: %s is not supported in query service", storage)
}
<-readerReady
rm, err := makeRulesManager(serverOptions.PromConfigPath,
baseconst.GetAlertManagerApiPrefix(),
serverOptions.RuleRepoURL,
localDB,
reader,
serverOptions.DisableRules)
if err != nil {
return nil, err
}
// start the usagemanager
usageManager, err := usage.New("sqlite", localDB, lm.GetRepo(), reader.GetConn())
if err != nil {
return nil, err
}
err = usageManager.Start()
if err != nil {
return nil, err
}
telemetry.GetInstance().SetReader(reader)
apiOpts := api.APIHandlerOptions{
DataConnector: reader,
AppDao: modelDao,
RulesManager: rm,
FeatureFlags: lm,
LicenseManager: lm,
}
apiHandler, err := api.NewAPIHandler(apiOpts)
if err != nil {
return nil, err
}
s := &Server{
// logger: logger,
// tracer: tracer,
ruleManager: rm,
serverOptions: serverOptions,
unavailableChannel: make(chan healthcheck.Status),
}
httpServer, err := s.createPublicServer(apiHandler)
if err != nil {
return nil, err
}
s.httpServer = httpServer
privateServer, err := s.createPrivateServer(apiHandler)
if err != nil {
return nil, err
}
s.privateHTTP = privateServer
return s, nil
}
func (s *Server) createPrivateServer(apiHandler *api.APIHandler) (*http.Server, error) {
r := mux.NewRouter()
r.Use(setTimeoutMiddleware)
r.Use(s.analyticsMiddleware)
r.Use(loggingMiddlewarePrivate)
apiHandler.RegisterPrivateRoutes(r)
c := cors.New(cors.Options{
//todo(amol): find out a way to add exact domain or
// ip here for alert manager
AllowedOrigins: []string{"*"},
AllowedMethods: []string{"GET", "DELETE", "POST", "PUT", "PATCH"},
AllowedHeaders: []string{"Accept", "Authorization", "Content-Type"},
})
handler := c.Handler(r)
handler = handlers.CompressHandler(handler)
return &http.Server{
Handler: handler,
}, nil
}
func (s *Server) createPublicServer(apiHandler *api.APIHandler) (*http.Server, error) {
r := mux.NewRouter()
r.Use(setTimeoutMiddleware)
r.Use(s.analyticsMiddleware)
r.Use(loggingMiddleware)
apiHandler.RegisterRoutes(r)
apiHandler.RegisterMetricsRoutes(r)
apiHandler.RegisterLogsRoutes(r)
c := cors.New(cors.Options{
AllowedOrigins: []string{"*"},
AllowedMethods: []string{"GET", "DELETE", "POST", "PUT", "PATCH", "OPTIONS"},
AllowedHeaders: []string{"Accept", "Authorization", "Content-Type", "cache-control"},
})
handler := c.Handler(r)
handler = handlers.CompressHandler(handler)
return &http.Server{
Handler: handler,
}, nil
}
// loggingMiddleware is used for logging public api calls
func loggingMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
route := mux.CurrentRoute(r)
path, _ := route.GetPathTemplate()
startTime := time.Now()
next.ServeHTTP(w, r)
zap.S().Info(path, "\ttimeTaken: ", time.Now().Sub(startTime))
})
}
// loggingMiddlewarePrivate is used for logging private api calls
// from internal services like alert manager
func loggingMiddlewarePrivate(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
route := mux.CurrentRoute(r)
path, _ := route.GetPathTemplate()
startTime := time.Now()
next.ServeHTTP(w, r)
zap.S().Info(path, "\tprivatePort: true", "\ttimeTaken: ", time.Now().Sub(startTime))
})
}
type loggingResponseWriter struct {
http.ResponseWriter
statusCode int
}
func NewLoggingResponseWriter(w http.ResponseWriter) *loggingResponseWriter {
// WriteHeader(int) is not called if our response implicitly returns 200 OK, so
// we default to that status code.
return &loggingResponseWriter{w, http.StatusOK}
}
func (lrw *loggingResponseWriter) WriteHeader(code int) {
lrw.statusCode = code
lrw.ResponseWriter.WriteHeader(code)
}
// Flush implements the http.Flusher interface.
func (lrw *loggingResponseWriter) Flush() {
lrw.ResponseWriter.(http.Flusher).Flush()
}
func (s *Server) analyticsMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
route := mux.CurrentRoute(r)
path, _ := route.GetPathTemplate()
lrw := NewLoggingResponseWriter(w)
next.ServeHTTP(lrw, r)
data := map[string]interface{}{"path": path, "statusCode": lrw.statusCode}
if _, ok := telemetry.IgnoredPaths()[path]; !ok {
telemetry.GetInstance().SendEvent(telemetry.TELEMETRY_EVENT_PATH, data)
}
})
}
func setTimeoutMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var cancel context.CancelFunc
// check if route is not excluded
url := r.URL.Path
if _, ok := baseconst.TimeoutExcludedRoutes[url]; !ok {
ctx, cancel = context.WithTimeout(r.Context(), baseconst.ContextTimeout*time.Second)
defer cancel()
}
r = r.WithContext(ctx)
next.ServeHTTP(w, r)
})
}
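A standalone sketch of the timeout middleware above, exercised with httptest; the excluded path and the 60-second deadline are illustrative stand-ins for baseconst.TimeoutExcludedRoutes and baseconst.ContextTimeout.

package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

var timeoutExcluded = map[string]struct{}{"/api/v1/health": {}}

func withTimeout(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := r.Context()
		if _, ok := timeoutExcluded[r.URL.Path]; !ok {
			var cancel context.CancelFunc
			ctx, cancel = context.WithTimeout(ctx, 60*time.Second)
			defer cancel()
		}
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func main() {
	h := withTimeout(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		_, hasDeadline := r.Context().Deadline()
		fmt.Fprintf(w, "deadline=%v", hasDeadline)
	}))
	for _, path := range []string{"/api/v1/query", "/api/v1/health"} {
		rec := httptest.NewRecorder()
		h.ServeHTTP(rec, httptest.NewRequest(http.MethodGet, path, nil))
		fmt.Println(path, "->", rec.Body.String()) // query has a deadline, health does not
	}
}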
// initListeners initialises listeners of the server
func (s *Server) initListeners() error {
// listen on public port
var err error
publicHostPort := s.serverOptions.HTTPHostPort
if publicHostPort == "" {
return fmt.Errorf("baseconst.HTTPHostPort is required")
}
s.httpConn, err = net.Listen("tcp", publicHostPort)
if err != nil {
return err
}
zap.S().Info(fmt.Sprintf("Query server started listening on %s...", s.serverOptions.HTTPHostPort))
// listen on private port to support internal services
privateHostPort := s.serverOptions.PrivateHostPort
if privateHostPort == "" {
return fmt.Errorf("baseconst.PrivateHostPort is required")
}
s.privateConn, err = net.Listen("tcp", privateHostPort)
if err != nil {
return err
}
zap.S().Info(fmt.Sprintf("Query server started listening on private port %s...", s.serverOptions.PrivateHostPort))
return nil
}
// Start listening on http and private http port concurrently
func (s *Server) Start() error {
// initiate rule manager first
if !s.serverOptions.DisableRules {
s.ruleManager.Start()
} else {
zap.S().Info("msg: Rules disabled as rules.disable is set to TRUE")
}
err := s.initListeners()
if err != nil {
return err
}
var httpPort int
if port, err := utils.GetPort(s.httpConn.Addr()); err == nil {
httpPort = port
}
go func() {
zap.S().Info("Starting HTTP server", zap.Int("port", httpPort), zap.String("addr", s.serverOptions.HTTPHostPort))
switch err := s.httpServer.Serve(s.httpConn); err {
case nil, http.ErrServerClosed, cmux.ErrListenerClosed:
// normal exit, nothing to do
default:
zap.S().Error("Could not start HTTP server", zap.Error(err))
}
s.unavailableChannel <- healthcheck.Unavailable
}()
go func() {
zap.S().Info("Starting pprof server", zap.String("addr", baseconst.DebugHttpPort))
err = http.ListenAndServe(baseconst.DebugHttpPort, nil)
if err != nil {
zap.S().Error("Could not start pprof server", zap.Error(err))
}
}()
var privatePort int
if port, err := utils.GetPort(s.privateConn.Addr()); err == nil {
privatePort = port
}
fmt.Println("starting private http")
go func() {
zap.S().Info("Starting Private HTTP server", zap.Int("port", privatePort), zap.String("addr", s.serverOptions.PrivateHostPort))
switch err := s.privateHTTP.Serve(s.privateConn); err {
case nil, http.ErrServerClosed, cmux.ErrListenerClosed:
// normal exit, nothing to do
zap.S().Info("private http server closed")
default:
zap.S().Error("Could not start private HTTP server", zap.Error(err))
}
s.unavailableChannel <- healthcheck.Unavailable
}()
return nil
}
func makeRulesManager(
promConfigPath,
alertManagerURL string,
ruleRepoURL string,
db *sqlx.DB,
ch baseint.Reader,
disableRules bool) (*rules.Manager, error) {
// create engine
pqle, err := pqle.FromConfigPath(promConfigPath)
if err != nil {
return nil, fmt.Errorf("failed to create pql engine : %v", err)
}
// notifier opts
notifierOpts := basealm.NotifierOptions{
QueueCapacity: 10000,
Timeout: 1 * time.Second,
AlertManagerURLs: []string{alertManagerURL},
}
// create manager opts
managerOpts := &rules.ManagerOptions{
NotifierOpts: notifierOpts,
Queriers: &rules.Queriers{
PqlEngine: pqle,
Ch: ch.GetConn(),
},
RepoURL: ruleRepoURL,
DBConn: db,
Context: context.Background(),
Logger: nil,
DisableRules: disableRules,
}
// create Manager
manager, err := rules.NewManager(managerOpts)
if err != nil {
return nil, fmt.Errorf("rule manager error: %v", err)
}
zap.S().Info("rules manager is ready")
return manager, nil
}

View File

@@ -0,0 +1,30 @@
package constants
import (
"os"
)
const (
DefaultSiteURL = "https://localhost:3301"
)
var LicenseSignozIo = "https://license.signoz.io/api/v1"
var SpanLimitStr = GetOrDefaultEnv("SPAN_LIMIT", "5000")
func GetOrDefaultEnv(key string, fallback string) string {
v := os.Getenv(key)
if len(v) == 0 {
return fallback
}
return v
}
// helpers that read env vars with defaults
// GetDefaultSiteURL returns the default site url, primarily
// used to send saml requests and to let the backend
// handle the http redirect
func GetDefaultSiteURL() string {
return GetOrDefaultEnv("SIGNOZ_SITE_URL", DefaultSiteURL)
}
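A minimal usage sketch for GetOrDefaultEnv and GetDefaultSiteURL above; the local copy mirrors the function so the example is self-contained. The env var wins when set, the fallback applies otherwise.

package main

import (
	"fmt"
	"os"
)

func getOrDefaultEnv(key, fallback string) string {
	if v := os.Getenv(key); len(v) != 0 {
		return v
	}
	return fallback
}

func main() {
	fmt.Println(getOrDefaultEnv("SIGNOZ_SITE_URL", "https://localhost:3301")) // fallback when unset
	os.Setenv("SIGNOZ_SITE_URL", "https://signoz.example.com")
	fmt.Println(getOrDefaultEnv("SIGNOZ_SITE_URL", "https://localhost:3301")) // env var wins
}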

View File

@@ -0,0 +1,18 @@
package dao
import (
"fmt"
"go.signoz.io/signoz/ee/query-service/dao/sqlite"
)
func InitDao(engine, path string) (ModelDao, error) {
switch engine {
case "sqlite":
return sqlite.InitDB(path)
default:
return nil, fmt.Errorf("qsdb type: %s is not supported in query service", engine)
}
}

View File

@@ -0,0 +1,35 @@
package dao
import (
"context"
"net/url"
"github.com/google/uuid"
"github.com/jmoiron/sqlx"
"go.signoz.io/signoz/ee/query-service/model"
basedao "go.signoz.io/signoz/pkg/query-service/dao"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
type ModelDao interface {
basedao.ModelDao
// SetFlagProvider sets the feature lookup provider
SetFlagProvider(flags baseint.FeatureLookup)
DB() *sqlx.DB
// auth methods
PrecheckLogin(ctx context.Context, email, sourceUrl string) (*model.PrecheckResponse, basemodel.BaseApiError)
CanUsePassword(ctx context.Context, email string) (bool, basemodel.BaseApiError)
PrepareSsoRedirect(ctx context.Context, redirectUri, email string) (redirectURL string, apierr basemodel.BaseApiError)
GetDomainFromSsoResponse(ctx context.Context, relayState *url.URL) (*model.OrgDomain, error)
// org domain (auth domains) CRUD ops
ListDomains(ctx context.Context, orgId string) ([]model.OrgDomain, basemodel.BaseApiError)
GetDomain(ctx context.Context, id uuid.UUID) (*model.OrgDomain, basemodel.BaseApiError)
CreateDomain(ctx context.Context, d *model.OrgDomain) basemodel.BaseApiError
UpdateDomain(ctx context.Context, domain *model.OrgDomain) basemodel.BaseApiError
DeleteDomain(ctx context.Context, id uuid.UUID) basemodel.BaseApiError
GetDomainByEmail(ctx context.Context, email string) (*model.OrgDomain, basemodel.BaseApiError)
}

View File

@@ -0,0 +1,136 @@
package sqlite
import (
"context"
"fmt"
"net/url"
"strings"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
baseauth "go.signoz.io/signoz/pkg/query-service/auth"
"go.uber.org/zap"
)
// PrepareSsoRedirect prepares redirect page link after SSO response
// is successfully parsed (i.e. valid email is available)
func (m *modelDao) PrepareSsoRedirect(ctx context.Context, redirectUri, email string) (redirectURL string, apierr basemodel.BaseApiError) {
userPayload, apierr := m.GetUserByEmail(ctx, email)
if !apierr.IsNil() {
zap.S().Errorf(" failed to get user with email received from auth provider", apierr.Error())
return "", model.BadRequestStr("invalid user email received from the auth provider")
}
tokenStore, err := baseauth.GenerateJWTForUser(&userPayload.User)
if err != nil {
zap.S().Errorf("failed to generate token for SSO login user", err)
return "", model.InternalErrorStr("failed to generate token for the user")
}
return fmt.Sprintf("%s?jwt=%s&usr=%s&refreshjwt=%s",
redirectUri,
tokenStore.AccessJwt,
userPayload.User.Id,
tokenStore.RefreshJwt), nil
}
func (m *modelDao) CanUsePassword(ctx context.Context, email string) (bool, basemodel.BaseApiError) {
domain, apierr := m.GetDomainByEmail(ctx, email)
if apierr != nil {
return false, apierr
}
if domain != nil && domain.SsoEnabled {
// sso is enabled, check if the user has admin role
userPayload, baseapierr := m.GetUserByEmail(ctx, email)
if baseapierr != nil || userPayload == nil {
return false, baseapierr
}
if userPayload.Role != baseconst.AdminGroup {
return false, model.BadRequest(fmt.Errorf("auth method not supported"))
}
}
return true, nil
}
// PrecheckLogin is called when the login or signup page is loaded
// to check sso login is to be prompted
func (m *modelDao) PrecheckLogin(ctx context.Context, email, sourceUrl string) (*model.PrecheckResponse, basemodel.BaseApiError) {
// assume user is valid unless proven otherwise
resp := &model.PrecheckResponse{IsUser: true, CanSelfRegister: false}
// check if email is a valid user
userPayload, baseApiErr := m.GetUserByEmail(ctx, email)
if baseApiErr != nil {
return resp, baseApiErr
}
if userPayload == nil {
resp.IsUser = false
}
ssoAvailable := true
err := m.checkFeature(model.SSO)
if err != nil {
switch err.(type) {
case basemodel.ErrFeatureUnavailable:
// do nothing, just skip sso
ssoAvailable = false
default:
zap.S().Errorf("feature check failed", zap.String("featureKey", model.SSO), zap.Error(err))
return resp, model.BadRequest(err)
}
}
if ssoAvailable {
// find domain from email
orgDomain, apierr := m.GetDomainByEmail(ctx, email)
if apierr != nil {
var emailDomain string
emailComponents := strings.Split(email, "@")
if len(emailComponents) > 0 {
emailDomain = emailComponents[1]
}
zap.S().Errorf("failed to get org domain from email", zap.String("emailDomain", emailDomain), apierr.ToError())
return resp, apierr
}
if orgDomain != nil && orgDomain.SsoEnabled {
// saml is enabled for this domain, lets prepare sso url
if sourceUrl == "" {
sourceUrl = constants.GetDefaultSiteURL()
}
// parse source url that generated the login request
escapedUrl, _ := url.QueryUnescape(sourceUrl)
siteUrl, err := url.Parse(escapedUrl)
if err != nil {
zap.S().Errorf("failed to parse referer", err)
return resp, model.InternalError(fmt.Errorf("failed to generate login request"))
}
// build IdP URL that will authenticate the user
// the front-end will redirect user to this url
resp.SsoUrl, err = orgDomain.BuildSsoUrl(siteUrl)
if err != nil {
zap.S().Errorf("failed to prepare saml request for domain", zap.String("domain", orgDomain.Name), err)
return resp, model.InternalError(err)
}
// set SSO to true, as the url is generated correctly
resp.SSO = true
}
}
return resp, nil
}
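A standalone sketch of the sourceUrl handling step in PrecheckLogin above: the ref parameter typically arrives query-escaped, is unescaped, then parsed so BuildSsoUrl can work with scheme and host. The example URL is illustrative.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	sourceUrl := "https%3A%2F%2Fsignoz.example.com%2Flogin"
	escaped, err := url.QueryUnescape(sourceUrl)
	if err != nil {
		fmt.Println("unescape failed:", err)
		return
	}
	siteUrl, err := url.Parse(escaped)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println(siteUrl.Scheme, siteUrl.Host) // https signoz.example.com
}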

View File

@@ -0,0 +1,212 @@
package sqlite
import (
"context"
"database/sql"
"encoding/json"
"net/url"
"fmt"
"strings"
"time"
"github.com/google/uuid"
"go.signoz.io/signoz/ee/query-service/model"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
// StoredDomain represents stored database record for org domain
type StoredDomain struct {
Id uuid.UUID `db:"id"`
Name string `db:"name"`
OrgId string `db:"org_id"`
Data string `db:"data"`
CreatedAt int64 `db:"created_at"`
UpdatedAt int64 `db:"updated_at"`
}
// GetDomainFromSsoResponse uses relay state received from IdP to fetch
// user domain. The domain is further used to process validity of the response.
// when sending login request to IdP we send relay state as URL (site url)
// with domainId as query parameter.
func (m *modelDao) GetDomainFromSsoResponse(ctx context.Context, relayState *url.URL) (*model.OrgDomain, error) {
// derive domain id from relay state now
var domainIdStr string
for k, v := range relayState.Query() {
if k == "domainId" && len(v) > 0 {
domainIdStr = strings.Replace(v[0], ":", "-", -1)
}
}
domainId, err := uuid.Parse(domainIdStr)
if err != nil {
zap.S().Errorf("failed to parse domain id from relay state", err)
return nil, fmt.Errorf("failed to parse response from IdP response")
}
domain, err := m.GetDomain(ctx, domainId)
if err != nil {
zap.S().Errorf("failed to find domain received in IdP response: %v", err)
return nil, fmt.Errorf("invalid credentials")
}
if domain == nil {
zap.S().Errorf("no domain found for id received in IdP response")
return nil, fmt.Errorf("invalid credentials")
}
return domain, nil
}
// GetDomain returns org domain for a given domain id
func (m *modelDao) GetDomain(ctx context.Context, id uuid.UUID) (*model.OrgDomain, basemodel.BaseApiError) {
stored := StoredDomain{}
err := m.DB().Get(&stored, `SELECT * FROM org_domains WHERE id=$1 LIMIT 1`, id)
if err != nil {
if err == sql.ErrNoRows {
return nil, model.BadRequest(fmt.Errorf("invalid domain id"))
}
return nil, model.InternalError(err)
}
domain := &model.OrgDomain{Id: stored.Id, Name: stored.Name, OrgId: stored.OrgId}
if err := domain.LoadConfig(stored.Data); err != nil {
return domain, model.InternalError(err)
}
return domain, nil
}
// ListDomains gets the list of auth domains by org id
func (m *modelDao) ListDomains(ctx context.Context, orgId string) ([]model.OrgDomain, basemodel.BaseApiError) {
domains := []model.OrgDomain{}
stored := []StoredDomain{}
err := m.DB().SelectContext(ctx, &stored, `SELECT * FROM org_domains WHERE org_id=$1`, orgId)
if err != nil {
if err == sql.ErrNoRows {
return []model.OrgDomain{}, nil
}
return nil, model.InternalError(err)
}
for _, s := range stored {
domain := model.OrgDomain{Id: s.Id, Name: s.Name, OrgId: s.OrgId}
if err := domain.LoadConfig(s.Data); err != nil {
zap.S().Errorf("ListDomains() failed", zap.Error(err))
}
domains = append(domains, domain)
}
return domains, nil
}
// CreateDomain creates a new auth domain
func (m *modelDao) CreateDomain(ctx context.Context, domain *model.OrgDomain) basemodel.BaseApiError {
if domain.Id == uuid.Nil {
domain.Id = uuid.New()
}
if domain.OrgId == "" || domain.Name == "" {
return model.BadRequest(fmt.Errorf("domain creation failed, missing fields: OrgId, Name "))
}
configJson, err := json.Marshal(domain)
if err != nil {
zap.S().Errorf("failed to unmarshal domain config", zap.Error(err))
return model.InternalError(fmt.Errorf("domain creation failed"))
}
_, err = m.DB().ExecContext(ctx,
"INSERT INTO org_domains (id, name, org_id, data, created_at, updated_at) VALUES ($1, $2, $3, $4, $5, $6)",
domain.Id,
domain.Name,
domain.OrgId,
configJson,
time.Now().Unix(),
time.Now().Unix())
if err != nil {
zap.S().Errorf("failed to insert domain in db", zap.Error(err))
return model.InternalError(fmt.Errorf("domain creation failed"))
}
return nil
}
// UpdateDomain updates stored config params for a domain
func (m *modelDao) UpdateDomain(ctx context.Context, domain *model.OrgDomain) basemodel.BaseApiError {
if domain.Id == uuid.Nil {
zap.S().Errorf("domain update failed", zap.Error(fmt.Errorf("OrgDomain.Id is null")))
return model.InternalError(fmt.Errorf("domain update failed"))
}
configJson, err := json.Marshal(domain)
if err != nil {
zap.S().Errorf("domain update failed", zap.Error(err))
return model.InternalError(fmt.Errorf("domain update failed"))
}
_, err = m.DB().ExecContext(ctx,
"UPDATE org_domains SET data = $1, updated_at = $2 WHERE id = $3",
configJson,
time.Now().Unix(),
domain.Id)
if err != nil {
zap.S().Errorf("domain update failed", zap.Error(err))
return model.InternalError(fmt.Errorf("domain update failed"))
}
return nil
}
// DeleteDomain deletes an org domain
func (m *modelDao) DeleteDomain(ctx context.Context, id uuid.UUID) basemodel.BaseApiError {
if id == uuid.Nil {
zap.S().Errorf("domain delete failed", zap.Error(fmt.Errorf("OrgDomain.Id is null")))
return model.InternalError(fmt.Errorf("domain delete failed"))
}
_, err := m.DB().ExecContext(ctx,
"DELETE FROM org_domains WHERE id = $1",
id)
if err != nil {
zap.S().Errorf("domain delete failed", zap.Error(err))
return model.InternalError(fmt.Errorf("domain delete failed"))
}
return nil
}
func (m *modelDao) GetDomainByEmail(ctx context.Context, email string) (*model.OrgDomain, basemodel.BaseApiError) {
if email == "" {
return nil, model.BadRequest(fmt.Errorf("could not find auth domain, missing fields: email "))
}
components := strings.Split(email, "@")
if len(components) < 2 {
return nil, model.BadRequest(fmt.Errorf("invalid email address"))
}
parsedDomain := components[1]
stored := StoredDomain{}
err := m.DB().Get(&stored, `SELECT * FROM org_domains WHERE name=$1 LIMIT 1`, parsedDomain)
if err != nil {
if err == sql.ErrNoRows {
return nil, nil
}
return nil, model.InternalError(err)
}
domain := &model.OrgDomain{Id: stored.Id, Name: stored.Name, OrgId: stored.OrgId}
if err := domain.LoadConfig(stored.Data); err != nil {
return domain, model.InternalError(err)
}
return domain, nil
}
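For context on the ':' to '-' replacement above: BuildSsoUrl (in the OrgDomain model further down) swaps the domain UUID's dashes for colons before embedding it in the RelayState URL, and GetDomainFromSsoResponse swaps them back. A small, self-contained round-trip sketch:

package main

import (
	"fmt"
	"net/url"
	"strings"

	"github.com/google/uuid"
)

func main() {
	id := uuid.New()

	// sending side (see OrgDomain.BuildSsoUrl below): dashes become colons
	relayState := fmt.Sprintf("https://app.example.com/login?domainId=%s",
		strings.Replace(id.String(), "-", ":", -1))

	// receiving side (GetDomainFromSsoResponse above): colons become dashes
	u, err := url.Parse(relayState)
	if err != nil {
		panic(err)
	}
	raw := strings.Replace(u.Query().Get("domainId"), ":", "-", -1)
	parsed, err := uuid.Parse(raw)
	fmt.Println(parsed == id, err) // true <nil>
}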

View File

@@ -0,0 +1,63 @@
package sqlite
import (
"fmt"
"github.com/jmoiron/sqlx"
basedao "go.signoz.io/signoz/pkg/query-service/dao"
basedsql "go.signoz.io/signoz/pkg/query-service/dao/sqlite"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
)
type modelDao struct {
*basedsql.ModelDaoSqlite
flags baseint.FeatureLookup
}
// SetFlagProvider sets the feature lookup provider
func (m *modelDao) SetFlagProvider(flags baseint.FeatureLookup) {
m.flags = flags
}
// CheckFeature confirms if a feature is available
func (m *modelDao) checkFeature(key string) error {
if m.flags == nil {
return fmt.Errorf("flag provider not set")
}
return m.flags.CheckFeature(key)
}
// InitDB creates and extends base model DB repository
func InitDB(dataSourceName string) (*modelDao, error) {
dao, err := basedsql.InitDB(dataSourceName)
if err != nil {
return nil, err
}
// set package variable so dependent base methods (e.g. AuthCache) will work
basedao.SetDB(dao)
m := &modelDao{ModelDaoSqlite: dao}
table_schema := `
PRAGMA foreign_keys = ON;
CREATE TABLE IF NOT EXISTS org_domains(
id TEXT PRIMARY KEY,
org_id TEXT NOT NULL,
name VARCHAR(50) NOT NULL UNIQUE,
created_at INTEGER NOT NULL,
updated_at INTEGER,
data TEXT NOT NULL,
FOREIGN KEY(org_id) REFERENCES organizations(id)
);`
_, err = m.DB().Exec(table_schema)
if err != nil {
return nil, fmt.Errorf("error in creating tables: %v", err.Error())
}
return m, nil
}
func (m *modelDao) DB() *sqlx.DB {
return m.ModelDaoSqlite.DB()
}

View File

@@ -0,0 +1,20 @@
package signozio
type status string
const (
statusSuccess status = "success"
statusError status = "error"
)
type ActivationResult struct {
Status status `json:"status"`
Data *ActivationResponse `json:"data,omitempty"`
ErrorType string `json:"errorType,omitempty"`
Error string `json:"error,omitempty"`
}
type ActivationResponse struct {
ActivationId string `json:"ActivationId"`
PlanDetails string `json:"PlanDetails"`
}
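For reference, a success body from the license server decodes into the types above roughly as follows; the payload values here are made up for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

// local mirrors of ActivationResult/ActivationResponse above, for illustration only
type activationResponse struct {
	ActivationId string `json:"ActivationId"`
	PlanDetails  string `json:"PlanDetails"`
}

type activationResult struct {
	Status string              `json:"status"`
	Data   *activationResponse `json:"data,omitempty"`
	Error  string              `json:"error,omitempty"`
}

func main() {
	body := []byte(`{"status":"success","data":{"ActivationId":"act-123","PlanDetails":"<base64 blob>"}}`)
	var r activationResult
	if err := json.Unmarshal(body, &r); err != nil {
		panic(err)
	}
	fmt.Println(r.Status, r.Data.ActivationId) // success act-123
}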

View File

@@ -0,0 +1,159 @@
package signozio
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/pkg/errors"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
"go.uber.org/zap"
)
var C *Client
const (
POST = "POST"
APPLICATION_JSON = "application/json"
)
type Client struct {
Prefix string
}
func New() *Client {
return &Client{
Prefix: constants.LicenseSignozIo,
}
}
func init() {
C = New()
}
// ActivateLicense sends key to license.signoz.io and gets activation data
func ActivateLicense(key, siteId string) (*ActivationResponse, *model.ApiError) {
licenseReq := map[string]string{
"key": key,
"siteId": siteId,
}
reqString, _ := json.Marshal(licenseReq)
httpResponse, err := http.Post(C.Prefix+"/licenses/activate", APPLICATION_JSON, bytes.NewBuffer(reqString))
if err != nil {
zap.S().Errorf("failed to connect to license.signoz.io", err)
return nil, model.BadRequest(fmt.Errorf("unable to connect with license.signoz.io, please check your network connection"))
}
defer httpResponse.Body.Close()
httpBody, err := io.ReadAll(httpResponse.Body)
if err != nil {
zap.S().Errorf("failed to read activation response from license.signoz.io: %v", err)
return nil, model.BadRequest(fmt.Errorf("failed to read activation response from license.signoz.io"))
}
// read api request result
result := ActivationResult{}
err = json.Unmarshal(httpBody, &result)
if err != nil {
zap.S().Errorf("failed to marshal activation response from license.signoz.io", err)
return nil, model.InternalError(errors.Wrap(err, "failed to marshal license activation response"))
}
switch httpResponse.StatusCode {
case 200, 201:
return result.Data, nil
case 400, 401:
return nil, model.BadRequest(fmt.Errorf("failed to activate: %s", result.Error))
default:
return nil, model.InternalError(fmt.Errorf("failed to activate: %s", result.Error))
}
}
// ValidateLicense validates the license key
func ValidateLicense(activationId string) (*ActivationResponse, *model.ApiError) {
validReq := map[string]string{
"activationId": activationId,
}
reqString, _ := json.Marshal(validReq)
response, err := http.Post(C.Prefix+"/licenses/validate", APPLICATION_JSON, bytes.NewBuffer(reqString))
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, "unable to connect with license.signoz.io, please check your network connection"))
}
defer response.Body.Close()
body, err := io.ReadAll(response.Body)
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, "failed to read validation response from license.signoz.io"))
}
switch response.StatusCode {
case 200, 201:
a := ActivationResult{}
err = json.Unmarshal(body, &a)
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, "failed to marshal license validation response"))
}
return a.Data, nil
case 400, 401:
return nil, model.BadRequest(errors.Wrap(fmt.Errorf(string(body)),
"bad request error received from license.signoz.io"))
default:
return nil, model.InternalError(errors.Wrap(fmt.Errorf(string(body)),
"internal error received from license.signoz.io"))
}
}
func NewPostRequestWithCtx(ctx context.Context, url string, contentType string, body io.Reader) (*http.Request, error) {
req, err := http.NewRequestWithContext(ctx, POST, url, body)
if err != nil {
return nil, err
}
req.Header.Add("Content-Type", contentType)
return req, err
}
// SendUsage reports the usage of signoz to license server
func SendUsage(ctx context.Context, usage model.UsagePayload) *model.ApiError {
reqString, _ := json.Marshal(usage)
req, err := NewPostRequestWithCtx(ctx, C.Prefix+"/usage", APPLICATION_JSON, bytes.NewBuffer(reqString))
if err != nil {
return model.BadRequest(errors.Wrap(err, "unable to create http request"))
}
res, err := http.DefaultClient.Do(req)
if err != nil {
return model.BadRequest(errors.Wrap(err, "unable to connect with license.signoz.io, please check your network connection"))
}
defer res.Body.Close()
body, err := io.ReadAll(res.Body)
if err != nil {
return model.BadRequest(errors.Wrap(err, "failed to read usage response from license.signoz.io"))
}
switch res.StatusCode {
case 200, 201:
return nil
case 400, 401:
return model.BadRequest(errors.Wrap(fmt.Errorf(string(body)),
"bad request error received from license.signoz.io"))
default:
return model.InternalError(errors.Wrap(fmt.Errorf(string(body)),
"internal error received from license.signoz.io"))
}
}
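SendUsage honours the caller's context via NewPostRequestWithCtx, so a caller can bound the upload. A sketch, assuming the import paths shown in this diff:

package main

import (
	"context"
	"time"

	signozio "go.signoz.io/signoz/ee/query-service/integrations/signozio"
	"go.signoz.io/signoz/ee/query-service/model"
	"go.uber.org/zap"
)

func main() {
	// give the upload at most 30 seconds; the request is cancelled afterwards
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if apiErr := signozio.SendUsage(ctx, model.UsagePayload{}); apiErr != nil {
		zap.S().Errorf("usage upload failed: %v", apiErr.Err)
	}
}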

View File

@@ -0,0 +1,12 @@
package interfaces
import (
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
)
// DataConnector defines methods for interacting
// with o11y data stores, for example ClickHouse
type DataConnector interface {
Start(readerReady chan bool)
baseint.Reader
}

View File

@@ -0,0 +1,127 @@
package license
import (
"context"
"fmt"
"time"
"github.com/jmoiron/sqlx"
"go.signoz.io/signoz/ee/query-service/license/sqlite"
"go.signoz.io/signoz/ee/query-service/model"
"go.uber.org/zap"
)
// Repo is the license repository; it stores license keys in a secured DB
type Repo struct {
db *sqlx.DB
}
// NewLicenseRepo initiates a new license repo
func NewLicenseRepo(db *sqlx.DB) Repo {
return Repo{
db: db,
}
}
func (r *Repo) InitDB(engine string) error {
switch engine {
case "sqlite3", "sqlite":
return sqlite.InitDB(r.db)
default:
return fmt.Errorf("unsupported db")
}
}
func (r *Repo) GetLicenses(ctx context.Context) ([]model.License, error) {
licenses := []model.License{}
query := "SELECT key, activationId, planDetails, validationMessage FROM licenses"
err := r.db.Select(&licenses, query)
if err != nil {
return nil, fmt.Errorf("failed to get licenses from db: %v", err)
}
return licenses, nil
}
// GetActiveLicense fetches the latest active license from DB
func (r *Repo) GetActiveLicense(ctx context.Context) (*model.License, error) {
var err error
licenses := []model.License{}
query := "SELECT key, activationId, planDetails, validationMessage FROM licenses"
err = r.db.Select(&licenses, query)
if err != nil {
return nil, fmt.Errorf("failed to get active licenses from db: %v", err)
}
var active *model.License
for i := range licenses {
l := &licenses[i]
l.ParsePlan()
if active == nil &&
(l.ValidFrom != 0) &&
(l.ValidUntil == -1 || l.ValidUntil > time.Now().Unix()) {
active = l
}
if active != nil &&
l.ValidFrom > active.ValidFrom &&
(l.ValidUntil == -1 || l.ValidUntil > time.Now().Unix()) {
active = l
}
}
return active, nil
}
// InsertLicense inserts a new license in db
func (r *Repo) InsertLicense(ctx context.Context, l *model.License) error {
if l.Key == "" {
return fmt.Errorf("insert license failed: license key is required")
}
query := `INSERT INTO licenses
(key, planDetails, activationId, validationMessage)
VALUES ($1, $2, $3, $4)`
_, err := r.db.ExecContext(ctx,
query,
l.Key,
l.PlanDetails,
l.ActivationId,
l.ValidationMessage)
if err != nil {
zap.S().Errorf("error in inserting license data: ", zap.Error(err))
return fmt.Errorf("failed to insert license in db: %v", err)
}
return nil
}
// UpdatePlanDetails writes new plan details to the db
func (r *Repo) UpdatePlanDetails(ctx context.Context,
key,
planDetails string) error {
if key == "" {
return fmt.Errorf("Update Plan Details failed: license key is required")
}
query := `UPDATE licenses
SET planDetails = $1,
updatedAt = $2
WHERE key = $3`
_, err := r.db.ExecContext(ctx, query, planDetails, time.Now(), key)
if err != nil {
zap.S().Errorf("error in updating license: ", zap.Error(err))
return fmt.Errorf("failed to update license in db: %v", err)
}
return nil
}

View File

@@ -0,0 +1,311 @@
package license
import (
"context"
"fmt"
"sync/atomic"
"time"
"github.com/jmoiron/sqlx"
"sync"
baseconstants "go.signoz.io/signoz/pkg/query-service/constants"
validate "go.signoz.io/signoz/ee/query-service/integrations/signozio"
"go.signoz.io/signoz/ee/query-service/model"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/query-service/telemetry"
"go.uber.org/zap"
)
var LM *Manager
// validate and update license every 24 hours
var validationFrequency = 24 * time.Hour
type Manager struct {
repo *Repo
mutex sync.Mutex
validatorRunning bool
// done signals the validator to end; this is important for gracefully
// stopping validation and protecting against inconsistent updates
done chan struct{}
// terminated waits for the validate go routine to end
terminated chan struct{}
// last time the license was validated
lastValidated int64
// keep track of validation failure attempts
failedAttempts uint64
// keep track of active license and features
activeLicense *model.License
activeFeatures basemodel.FeatureSet
}
func StartManager(dbType string, db *sqlx.DB) (*Manager, error) {
if LM != nil {
return LM, nil
}
repo := NewLicenseRepo(db)
err := repo.InitDB(dbType)
if err != nil {
return nil, fmt.Errorf("failed to initiate license repo: %v", err)
}
m := &Manager{
repo: &repo,
}
if err := m.start(); err != nil {
return m, err
}
LM = m
return m, nil
}
// start loads active license in memory and initiates validator
func (lm *Manager) start() error {
err := lm.LoadActiveLicense()
return err
}
func (lm *Manager) Stop() {
close(lm.done)
<-lm.terminated
}
func (lm *Manager) SetActive(l *model.License) {
lm.mutex.Lock()
defer lm.mutex.Unlock()
if l == nil {
return
}
lm.activeLicense = l
lm.activeFeatures = l.FeatureSet
// set default features
setDefaultFeatures(lm)
if !lm.validatorRunning {
// we want to make sure only one validator runs,
// we already have lock() so good to go
lm.validatorRunning = true
go lm.Validator(context.Background())
}
}
func setDefaultFeatures(lm *Manager) {
for k, v := range baseconstants.DEFAULT_FEATURE_SET {
lm.activeFeatures[k] = v
}
}
// LoadActiveLicense loads the most recent active license
func (lm *Manager) LoadActiveLicense() error {
var err error
active, err := lm.repo.GetActiveLicense(context.Background())
if err != nil {
return err
}
if active != nil {
lm.SetActive(active)
} else {
zap.S().Info("No active license found, defaulting to basic plan")
// if no active license is found, we default to basic(free) plan with all default features
lm.activeFeatures = basemodel.BasicPlan
setDefaultFeatures(lm)
}
return nil
}
func (lm *Manager) GetLicenses(ctx context.Context) (response []model.License, apiError *model.ApiError) {
licenses, err := lm.repo.GetLicenses(ctx)
if err != nil {
return nil, model.InternalError(err)
}
for _, l := range licenses {
l.ParsePlan()
if lm.activeLicense != nil && l.Key == lm.activeLicense.Key {
l.IsCurrent = true
}
if l.ValidUntil == -1 {
// for subscriptions, there is no end-date as such
// but for showing user some validity we default one year timespan
l.ValidUntil = l.ValidFrom + 31556926
}
response = append(response, l)
}
return
}
// Validator validates license after an epoch of time
func (lm *Manager) Validator(ctx context.Context) {
defer close(lm.terminated)
tick := time.NewTicker(validationFrequency)
defer tick.Stop()
lm.Validate(ctx)
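// the nested select below gives lm.done priority: the outer select with a
// default branch checks for shutdown first, then the inner select blocks on
// whichever of done or the validation ticker fires next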
for {
select {
case <-lm.done:
return
default:
select {
case <-lm.done:
return
case <-tick.C:
lm.Validate(ctx)
}
}
}
}
// Validate validates the current active license
func (lm *Manager) Validate(ctx context.Context) (reterr error) {
zap.S().Info("License validation started")
if lm.activeLicense == nil {
return nil
}
defer func() {
lm.mutex.Lock()
lm.lastValidated = time.Now().Unix()
if reterr != nil {
zap.S().Errorf("License validation completed with error", reterr)
atomic.AddUint64(&lm.failedAttempts, 1)
telemetry.GetInstance().SendEvent(telemetry.TELEMETRY_LICENSE_CHECK_FAILED,
map[string]interface{}{"err": reterr.Error()})
} else {
zap.S().Info("License validation completed with no errors")
}
lm.mutex.Unlock()
}()
response, apiError := validate.ValidateLicense(lm.activeLicense.ActivationId)
if apiError != nil {
zap.S().Errorf("failed to validate license", apiError)
return apiError.Err
}
if response.PlanDetails == lm.activeLicense.PlanDetails {
// license plan hasn't changed, nothing to do
return nil
}
if response.PlanDetails != "" {
// copy and replace the active license record
l := model.License{
Key: lm.activeLicense.Key,
CreatedAt: lm.activeLicense.CreatedAt,
PlanDetails: response.PlanDetails,
ValidationMessage: lm.activeLicense.ValidationMessage,
ActivationId: lm.activeLicense.ActivationId,
}
if err := l.ParsePlan(); err != nil {
zap.S().Errorf("failed to parse updated license", zap.Error(err))
return err
}
// updated plan is parsable, check if plan has changed
if lm.activeLicense.PlanDetails != response.PlanDetails {
err := lm.repo.UpdatePlanDetails(ctx, lm.activeLicense.Key, response.PlanDetails)
if err != nil {
// unexpected db write issue but we can let the user continue
// and wait for update to work in next cycle.
zap.S().Errorf("failed to validate license", zap.Error(err))
}
}
// activate the updated license plan
lm.SetActive(&l)
}
return nil
}
// Activate activates a license key with signoz server
func (lm *Manager) Activate(ctx context.Context, key string) (licenseResponse *model.License, errResponse *model.ApiError) {
defer func() {
if errResponse != nil {
telemetry.GetInstance().SendEvent(telemetry.TELEMETRY_LICENSE_ACT_FAILED,
map[string]interface{}{"err": errResponse.Err.Error()})
}
}()
response, apiError := validate.ActivateLicense(key, "")
if apiError != nil {
zap.S().Errorf("failed to activate license", zap.Error(apiError.Err))
return nil, apiError
}
l := &model.License{
Key: key,
ActivationId: response.ActivationId,
PlanDetails: response.PlanDetails,
}
// parse validity and features from the plan details
err := l.ParsePlan()
if err != nil {
zap.S().Errorf("failed to activate license", zap.Error(err))
return nil, model.InternalError(err)
}
// store the license before activating it
err = lm.repo.InsertLicense(ctx, l)
if err != nil {
zap.S().Errorf("failed to activate license", zap.Error(err))
return nil, model.InternalError(err)
}
// license is valid, activate it
lm.SetActive(l)
return l, nil
}
// CheckFeature will be internally used by backend routines
// for feature gating
func (lm *Manager) CheckFeature(featureKey string) error {
if value, ok := lm.activeFeatures[featureKey]; ok {
if value {
return nil
}
return basemodel.ErrFeatureUnavailable{Key: featureKey}
}
return basemodel.ErrFeatureUnavailable{Key: featureKey}
}
// GetFeatureFlags returns current active features
func (lm *Manager) GetFeatureFlags() basemodel.FeatureSet {
return lm.activeFeatures
}
// GetRepo return the license repo
func (lm *Manager) GetRepo() *Repo {
return lm.repo
}

View File

@@ -0,0 +1,37 @@
package sqlite
import (
"fmt"
"github.com/jmoiron/sqlx"
)
func InitDB(db *sqlx.DB) error {
var err error
if db == nil {
return fmt.Errorf("invalid db connection")
}
table_schema := `CREATE TABLE IF NOT EXISTS licenses(
key TEXT PRIMARY KEY,
createdAt TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updatedAt TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
planDetails TEXT,
activationId TEXT,
validationMessage TEXT,
lastValidated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS sites(
uuid TEXT PRIMARY KEY,
alias VARCHAR(180) DEFAULT 'PROD',
url VARCHAR(300),
createdAt TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
`
_, err = db.Exec(table_schema)
if err != nil {
return fmt.Errorf("Error in creating licenses table: %s", err.Error())
}
return nil
}

90
ee/query-service/main.go Normal file
View File

@@ -0,0 +1,90 @@
package main
import (
"context"
"flag"
"os"
"os/signal"
"syscall"
"go.signoz.io/signoz/ee/query-service/app"
"go.signoz.io/signoz/pkg/query-service/auth"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
"go.signoz.io/signoz/pkg/query-service/version"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
func initZapLog() *zap.Logger {
config := zap.NewDevelopmentConfig()
config.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder
config.EncoderConfig.TimeKey = "timestamp"
config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
logger, _ := config.Build()
return logger
}
func main() {
var promConfigPath string
// disables rule execution but allows change to the rule definition
var disableRules bool
// the url used to build link in the alert messages in slack and other systems
var ruleRepoURL string
flag.StringVar(&promConfigPath, "config", "./config/prometheus.yml", "(prometheus config to read metrics)")
flag.BoolVar(&disableRules, "rules.disable", false, "(disable rule evaluation)")
flag.StringVar(&ruleRepoURL, "rules.repo-url", baseconst.AlertHelpPage, "(host address used to build rule link in alert messages)")
flag.Parse()
loggerMgr := initZapLog()
zap.ReplaceGlobals(loggerMgr)
defer loggerMgr.Sync() // flushes buffer, if any
logger := loggerMgr.Sugar()
version.PrintVersion()
serverOptions := &app.ServerOptions{
HTTPHostPort: baseconst.HTTPHostPort,
PromConfigPath: promConfigPath,
PrivateHostPort: baseconst.PrivateHostPort,
DisableRules: disableRules,
RuleRepoURL: ruleRepoURL,
}
// Read the jwt secret key
auth.JwtSecret = os.Getenv("SIGNOZ_JWT_SECRET")
if len(auth.JwtSecret) == 0 {
zap.S().Warn("No JWT secret key is specified.")
} else {
zap.S().Info("No JWT secret key set successfully.")
}
server, err := app.NewServer(serverOptions)
if err != nil {
logger.Fatal("Failed to create server", zap.Error(err))
}
if err := server.Start(); err != nil {
logger.Fatal("Could not start servers", zap.Error(err))
}
if err := auth.InitAuthCache(context.Background()); err != nil {
logger.Fatal("Failed to initialize auth cache", zap.Error(err))
}
signalsChannel := make(chan os.Signal, 1)
signal.Notify(signalsChannel, os.Interrupt, syscall.SIGTERM)
for {
select {
case status := <-server.HealthCheckStatus():
logger.Info("Received HealthCheck status: ", zap.Int("status", int(status)))
case <-signalsChannel:
logger.Fatal("Received OS Interrupt Signal ... ")
}
}
}

View File

@@ -0,0 +1,21 @@
package model
import (
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
// PrecheckResponse contains login precheck response
type PrecheckResponse struct {
SSO bool `json:"sso"`
SsoUrl string `json:"ssoUrl"`
CanSelfRegister bool `json:"canSelfRegister"`
IsUser bool `json:"isUser"`
SsoError string `json:"ssoError"`
}
// GettableInvitation overrides base object and adds precheck into
// response
type GettableInvitation struct {
*basemodel.InvitationResponseObject
Precheck *PrecheckResponse `json:"precheck"`
}
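For illustration, the JSON an SSO-enabled precheck produces (mirror struct and hypothetical values, not the actual handler):

package main

import (
	"encoding/json"
	"fmt"
)

// mirror of PrecheckResponse above, for illustration only
type precheckResponse struct {
	SSO             bool   `json:"sso"`
	SsoUrl          string `json:"ssoUrl"`
	CanSelfRegister bool   `json:"canSelfRegister"`
	IsUser          bool   `json:"isUser"`
	SsoError        string `json:"ssoError"`
}

func main() {
	b, _ := json.Marshal(precheckResponse{
		SSO:    true,
		SsoUrl: "https://idp.example.com/sso?SAMLRequest=<encoded request>",
		IsUser: true,
	})
	fmt.Println(string(b))
	// {"sso":true,"ssoUrl":"https://idp.example.com/sso?SAMLRequest=<encoded request>","canSelfRegister":false,"isUser":true,"ssoError":""}
}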

View File

@@ -0,0 +1,184 @@
package model
import (
"encoding/json"
"fmt"
"net/url"
"strings"
"github.com/google/uuid"
"github.com/pkg/errors"
saml2 "github.com/russellhaering/gosaml2"
"go.signoz.io/signoz/ee/query-service/sso/saml"
"go.signoz.io/signoz/ee/query-service/sso"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
type SSOType string
const (
SAML SSOType = "SAML"
GoogleAuth SSOType = "GOOGLE_AUTH"
)
// OrgDomain identify org owned web domains for auth and other purposes
type OrgDomain struct {
Id uuid.UUID `json:"id"`
Name string `json:"name"`
OrgId string `json:"orgId"`
SsoEnabled bool `json:"ssoEnabled"`
SsoType SSOType `json:"ssoType"`
SamlConfig *SamlConfig `json:"samlConfig"`
GoogleAuthConfig *GoogleOAuthConfig `json:"googleAuthConfig"`
Org *basemodel.Organization
}
func (od *OrgDomain) String() string {
return fmt.Sprintf("[%s]%s-%s ", od.Name, od.Id.String(), od.SsoType)
}
// Valid is used as a pipeline function to check if org domain
// loaded from db is valid
func (od *OrgDomain) Valid(err error) error {
if err != nil {
return err
}
if od.Id == uuid.Nil || od.OrgId == "" {
return fmt.Errorf("both id and orgId are required")
}
return nil
}
// ValidNew checks if the org domain is valid for insertion in db
func (od *OrgDomain) ValidNew() error {
if od.OrgId == "" {
return fmt.Errorf("orgId is required")
}
if od.Name == "" {
return fmt.Errorf("name is required")
}
return nil
}
// LoadConfig loads config params from json text
func (od *OrgDomain) LoadConfig(jsondata string) error {
d := *od
err := json.Unmarshal([]byte(jsondata), &d)
if err != nil {
return errors.Wrap(err, "failed to marshal json to OrgDomain{}")
}
*od = d
return nil
}
func (od *OrgDomain) GetSAMLEntityID() string {
if od.SamlConfig != nil {
return od.SamlConfig.SamlEntity
}
return ""
}
func (od *OrgDomain) GetSAMLIdpURL() string {
if od.SamlConfig != nil {
return od.SamlConfig.SamlIdp
}
return ""
}
func (od *OrgDomain) GetSAMLCert() string {
if od.SamlConfig != nil {
return od.SamlConfig.SamlCert
}
return ""
}
// PrepareGoogleOAuthProvider creates GoogleProvider that is used in
// requesting OAuth and also used in processing response from google
func (od *OrgDomain) PrepareGoogleOAuthProvider(siteUrl *url.URL) (sso.OAuthCallbackProvider, error) {
if od.GoogleAuthConfig == nil {
return nil, fmt.Errorf("Google auth is not setup correctly for this domain")
}
return od.GoogleAuthConfig.GetProvider(od.Name, siteUrl)
}
// PrepareSamlRequest creates a request accordingly gosaml2
func (od *OrgDomain) PrepareSamlRequest(siteUrl *url.URL) (*saml2.SAMLServiceProvider, error) {
// this is the url Idp will call after login completion
acs := fmt.Sprintf("%s://%s/%s",
siteUrl.Scheme,
siteUrl.Host,
"api/v1/complete/saml")
// this is the address of the calling url, useful to redirect user
sourceUrl := fmt.Sprintf("%s://%s%s",
siteUrl.Scheme,
siteUrl.Host,
siteUrl.Path)
// ideally this should be some unique ID for each installation
// but since we don't have UI to support it, we default it to
// host. this issuer is an identifier of service provider (signoz)
// on id provider (e.g. azure, okta). Azure requires this id to be configured
// in their system, while others seem to not care about it.
// currently we default it to host from window.location (received from browser)
issuer := siteUrl.Host
return saml.PrepareRequest(issuer, acs, sourceUrl, od.GetSAMLEntityID(), od.GetSAMLIdpURL(), od.GetSAMLCert())
}
func (od *OrgDomain) BuildSsoUrl(siteUrl *url.URL) (ssoUrl string, err error) {
fmtDomainId := strings.Replace(od.Id.String(), "-", ":", -1)
// build redirect url from window.location sent by frontend
redirectURL := fmt.Sprintf("%s://%s%s", siteUrl.Scheme, siteUrl.Host, siteUrl.Path)
// prepare state that gets relayed back when the auth provider
// calls back our url. here we pass the app url (where signoz runs)
// and the domain Id. The domain Id helps in identifying sso config
// when the call back occurs and the app url is useful in redirecting user
// back to the right path.
// why do we need to pass app url? the callback typically is handled by backend
// and sometimes backend might run at a different port or is unaware of frontend
// endpoint (unless SITE_URL param is set). hence, we receive this build sso request
// along with frontend window.location and use it to relay the information through
// auth provider to the backend (HandleCallback or HandleSSO method).
relayState := fmt.Sprintf("%s?domainId=%s", redirectURL, fmtDomainId)
switch od.SsoType {
case SAML:
sp, err := od.PrepareSamlRequest(siteUrl)
if err != nil {
return "", err
}
return sp.BuildAuthURL(relayState)
case GoogleAuth:
googleProvider, err := od.PrepareGoogleOAuthProvider(siteUrl)
if err != nil {
return "", err
}
return googleProvider.BuildAuthURL(relayState)
default:
zap.S().Errorf("found unsupported SSO config for the org domain", zap.String("orgDomain", od.Name))
return "", fmt.Errorf("unsupported SSO config for the domain")
}
}

View File

@@ -0,0 +1,108 @@
package model
import (
"fmt"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
type ApiError struct {
Typ basemodel.ErrorType
Err error
}
func (a *ApiError) Type() basemodel.ErrorType {
return a.Typ
}
func (a *ApiError) ToError() error {
if a != nil {
return a.Err
}
return nil
}
func (a *ApiError) Error() string {
return a.Err.Error()
}
func (a *ApiError) IsNil() bool {
return a == nil || a.Err == nil
}
// NewApiError returns an ApiError object of given type
func NewApiError(typ basemodel.ErrorType, err error) *ApiError {
return &ApiError{
Typ: typ,
Err: err,
}
}
// BadRequest returns an ApiError object of bad request
func BadRequest(err error) *ApiError {
return &ApiError{
Typ: basemodel.ErrorBadData,
Err: err,
}
}
// BadRequestStr returns an ApiError object of bad request for string input
func BadRequestStr(s string) *ApiError {
return &ApiError{
Typ: basemodel.ErrorBadData,
Err: fmt.Errorf(s),
}
}
// InternalError returns an ApiError object of internal type
func InternalError(err error) *ApiError {
return &ApiError{
Typ: basemodel.ErrorInternal,
Err: err,
}
}
// InternalErrorStr returns an ApiError object of internal type for string input
func InternalErrorStr(s string) *ApiError {
return &ApiError{
Typ: basemodel.ErrorInternal,
Err: fmt.Errorf(s),
}
}
var (
ErrorNone basemodel.ErrorType = ""
ErrorTimeout basemodel.ErrorType = "timeout"
ErrorCanceled basemodel.ErrorType = "canceled"
ErrorExec basemodel.ErrorType = "execution"
ErrorBadData basemodel.ErrorType = "bad_data"
ErrorInternal basemodel.ErrorType = "internal"
ErrorUnavailable basemodel.ErrorType = "unavailable"
ErrorNotFound basemodel.ErrorType = "not_found"
ErrorNotImplemented basemodel.ErrorType = "not_implemented"
ErrorUnauthorized basemodel.ErrorType = "unauthorized"
ErrorForbidden basemodel.ErrorType = "forbidden"
ErrorConflict basemodel.ErrorType = "conflict"
ErrorStreamingNotSupported basemodel.ErrorType = "streaming is not supported"
)
func init() {
ErrorNone = basemodel.ErrorNone
ErrorTimeout = basemodel.ErrorTimeout
ErrorCanceled = basemodel.ErrorCanceled
ErrorExec = basemodel.ErrorExec
ErrorBadData = basemodel.ErrorBadData
ErrorInternal = basemodel.ErrorInternal
ErrorUnavailable = basemodel.ErrorUnavailable
ErrorNotFound = basemodel.ErrorNotFound
ErrorNotImplemented = basemodel.ErrorNotImplemented
ErrorUnauthorized = basemodel.ErrorUnauthorized
ErrorForbidden = basemodel.ErrorForbidden
ErrorConflict = basemodel.ErrorConflict
ErrorStreamingNotSupported = basemodel.ErrorStreamingNotSupported
}
type ErrUnsupportedAuth struct{}
func (errUnsupportedAuth ErrUnsupportedAuth) Error() string {
return "this authentication method not supported"
}

View File

@@ -0,0 +1,91 @@
package model
import (
"encoding/base64"
"encoding/json"
"time"
"github.com/pkg/errors"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
type License struct {
Key string `json:"key" db:"key"`
ActivationId string `json:"activationId" db:"activationId"`
CreatedAt time.Time `db:"created_at"`
// PlanDetails contains the encrypted plan info
PlanDetails string `json:"planDetails" db:"planDetails"`
// stores parsed license details
LicensePlan
FeatureSet basemodel.FeatureSet
// populated in case license has any errors
ValidationMessage string `db:"validationMessage"`
// used only for sending details to front-end
IsCurrent bool `json:"isCurrent"`
}
func (l *License) MarshalJSON() ([]byte, error) {
return json.Marshal(&struct {
Key string `json:"key" db:"key"`
ActivationId string `json:"activationId" db:"activationId"`
ValidationMessage string `db:"validationMessage"`
IsCurrent bool `json:"isCurrent"`
PlanKey string `json:"planKey"`
ValidFrom time.Time `json:"ValidFrom"`
ValidUntil time.Time `json:"ValidUntil"`
Status string `json:"status"`
}{
Key: l.Key,
ActivationId: l.ActivationId,
IsCurrent: l.IsCurrent,
PlanKey: l.PlanKey,
ValidFrom: time.Unix(l.ValidFrom, 0),
ValidUntil: time.Unix(l.ValidUntil, 0),
Status: l.Status,
ValidationMessage: l.ValidationMessage,
})
}
type LicensePlan struct {
PlanKey string `json:"planKey"`
ValidFrom int64 `json:"validFrom"`
ValidUntil int64 `json:"validUntil"`
Status string `json:"status"`
}
func (l *License) ParsePlan() error {
l.LicensePlan = LicensePlan{}
planData, err := base64.StdEncoding.DecodeString(l.PlanDetails)
if err != nil {
return err
}
plan := LicensePlan{}
err = json.Unmarshal([]byte(planData), &plan)
if err != nil {
l.ValidationMessage = "failed to parse plan from license"
return errors.Wrap(err, "failed to parse plan from license")
}
l.LicensePlan = plan
l.ParseFeatures()
return nil
}
func (l *License) ParseFeatures() {
switch l.PlanKey {
case Pro:
l.FeatureSet = ProPlan
case Enterprise:
l.FeatureSet = EnterprisePlan
default:
l.FeatureSet = BasicPlan
}
}
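ParsePlan expects PlanDetails to be base64-encoded JSON of a LicensePlan. A self-contained sketch of producing and decoding such a blob (mirror type, made-up values; a ValidUntil of -1 denotes a subscription with no end date, as GetActiveLicense assumes):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// licensePlan mirrors LicensePlan above, for illustration only
type licensePlan struct {
	PlanKey    string `json:"planKey"`
	ValidFrom  int64  `json:"validFrom"`
	ValidUntil int64  `json:"validUntil"`
	Status     string `json:"status"`
}

func main() {
	// what ParsePlan expects: base64(JSON(LicensePlan)); values are made up
	raw, _ := json.Marshal(licensePlan{PlanKey: "PRO_PLAN", ValidFrom: 1670000000, ValidUntil: -1, Status: "VALID"})
	blob := base64.StdEncoding.EncodeToString(raw)

	// the decoding side, as ParsePlan does
	data, err := base64.StdEncoding.DecodeString(blob)
	if err != nil {
		panic(err)
	}
	var p licensePlan
	if err := json.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p)
}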

View File

@@ -0,0 +1,31 @@
package model
import (
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
const SSO = "SSO"
const Basic = "BASIC_PLAN"
const Pro = "PRO_PLAN"
const Enterprise = "ENTERPRISE_PLAN"
const DisableUpsell = "DISABLE_UPSELL"
var BasicPlan = basemodel.FeatureSet{
Basic: true,
SSO: false,
DisableUpsell: false,
}
var ProPlan = basemodel.FeatureSet{
Pro: true,
SSO: true,
basemodel.SmartTraceDetail: true,
basemodel.CustomMetricsFunction: true,
}
var EnterprisePlan = basemodel.FeatureSet{
Enterprise: true,
SSO: true,
basemodel.SmartTraceDetail: true,
basemodel.CustomMetricsFunction: true,
}

View File

@@ -0,0 +1,68 @@
package model
import (
"fmt"
"context"
"net/url"
"golang.org/x/oauth2"
"github.com/coreos/go-oidc/v3/oidc"
"go.signoz.io/signoz/ee/query-service/sso"
)
// SamlConfig contains SAML params to generate and respond to the requests
// from SAML provider
type SamlConfig struct {
SamlEntity string `json:"samlEntity"`
SamlIdp string `json:"samlIdp"`
SamlCert string `json:"samlCert"`
}
// GoogleOAuthConfig contains a generic config to support oauth
type GoogleOAuthConfig struct {
ClientID string `json:"clientId"`
ClientSecret string `json:"clientSecret"`
RedirectURI string `json:"redirectURI"`
}
const (
googleIssuerURL = "https://accounts.google.com"
)
func (g *GoogleOAuthConfig) GetProvider(domain string, siteUrl *url.URL) (sso.OAuthCallbackProvider, error) {
ctx, cancel := context.WithCancel(context.Background())
provider, err := oidc.NewProvider(ctx, googleIssuerURL)
if err != nil {
cancel()
return nil, fmt.Errorf("failed to get provider: %v", err)
}
// default to the email scope as we just use google auth
// to verify identity and start a session.
scopes := []string{"email"}
// this is the url google will call after login completion
redirectURL := fmt.Sprintf("%s://%s/%s",
siteUrl.Scheme,
siteUrl.Host,
"api/v1/complete/google")
return &sso.GoogleOAuthProvider{
RedirectURI: g.RedirectURI,
OAuth2Config: &oauth2.Config{
ClientID: g.ClientID,
ClientSecret: g.ClientSecret,
Endpoint: provider.Endpoint(),
Scopes: scopes,
RedirectURL: redirectURL,
},
Verifier: provider.Verifier(
&oidc.Config{ClientID: g.ClientID},
),
Cancel: cancel,
HostedDomain: domain,
}, nil
}

View File

@@ -0,0 +1,22 @@
package model
type SpanForTraceDetails struct {
TimeUnixNano uint64 `json:"timestamp"`
SpanID string `json:"spanID"`
TraceID string `json:"traceID"`
ParentID string `json:"parentID"`
ParentSpan *SpanForTraceDetails `json:"parentSpan"`
ServiceName string `json:"serviceName"`
Name string `json:"name"`
Kind int32 `json:"kind"`
DurationNano int64 `json:"durationNano"`
TagMap map[string]string `json:"tagMap"`
Events []string `json:"event"`
HasError bool `json:"hasError"`
Children []*SpanForTraceDetails `json:"children"`
}
type GetSpansSubQueryDBResponse struct {
SpanID string `ch:"spanID"`
TraceID string `ch:"traceID"`
}

View File

@@ -0,0 +1,32 @@
package model
import (
"time"
"github.com/google/uuid"
)
type UsagePayload struct {
InstallationId uuid.UUID `json:"installationId"`
LicenseKey uuid.UUID `json:"licenseKey"`
Usage []Usage `json:"usage"`
}
type Usage struct {
CollectorID string `json:"collectorId"`
ExporterID string `json:"exporterId"`
Type string `json:"type"`
Tenant string `json:"tenant"`
TimeStamp time.Time `json:"timestamp"`
Count int64 `json:"count"`
Size int64 `json:"size"`
}
type UsageDB struct {
CollectorID string `ch:"collector_id" json:"collectorId"`
ExporterID string `ch:"exporter_id" json:"exporterId"`
Type string `ch:"-" json:"type"`
TimeStamp time.Time `ch:"timestamp" json:"timestamp"`
Tenant string `ch:"tenant" json:"tenant"`
Data string `ch:"data" json:"data"`
}

View File

@@ -0,0 +1,92 @@
package sso
import (
"fmt"
"errors"
"context"
"net/http"
"github.com/coreos/go-oidc/v3/oidc"
"golang.org/x/oauth2"
)
type GoogleOAuthProvider struct {
RedirectURI string
OAuth2Config *oauth2.Config
Verifier *oidc.IDTokenVerifier
Cancel context.CancelFunc
HostedDomain string
}
func (g *GoogleOAuthProvider) BuildAuthURL(state string) (string, error) {
var opts []oauth2.AuthCodeOption
// set hosted domain. google supports multiple hosted domains but in our case
// we have one config per host domain.
opts = append(opts, oauth2.SetAuthURLParam("hd", g.HostedDomain))
return g.OAuth2Config.AuthCodeURL(state, opts...), nil
}
type oauth2Error struct {
error string
errorDescription string
}
func (e *oauth2Error) Error() string {
if e.errorDescription == "" {
return e.error
}
return e.error + ": " + e.errorDescription
}
func (g *GoogleOAuthProvider) HandleCallback(r *http.Request) (identity *SSOIdentity, err error) {
q := r.URL.Query()
if errType := q.Get("error"); errType != "" {
return identity, &oauth2Error{errType, q.Get("error_description")}
}
token, err := g.OAuth2Config.Exchange(r.Context(), q.Get("code"))
if err != nil {
return identity, fmt.Errorf("google: failed to get token: %v", err)
}
return g.createIdentity(r.Context(), token)
}
func (g *GoogleOAuthProvider) createIdentity(ctx context.Context, token *oauth2.Token) (identity *SSOIdentity, err error) {
rawIDToken, ok := token.Extra("id_token").(string)
if !ok {
return identity, errors.New("google: no id_token in token response")
}
idToken, err := g.Verifier.Verify(ctx, rawIDToken)
if err != nil {
return identity, fmt.Errorf("google: failed to verify ID Token: %v", err)
}
var claims struct {
Username string `json:"name"`
Email string `json:"email"`
EmailVerified bool `json:"email_verified"`
HostedDomain string `json:"hd"`
}
if err := idToken.Claims(&claims); err != nil {
return identity, fmt.Errorf("oidc: failed to decode claims: %v", err)
}
if claims.HostedDomain != g.HostedDomain {
return identity, fmt.Errorf("oidc: unexpected hd claim %v", claims.HostedDomain)
}
identity = &SSOIdentity{
UserID: idToken.Subject,
Username: claims.Username,
Email: claims.Email,
EmailVerified: claims.EmailVerified,
ConnectorData: []byte(token.RefreshToken),
}
return identity, nil
}

View File

@@ -0,0 +1,31 @@
package sso
import (
"net/http"
)
// SSOIdentity contains details of user received from SSO provider
type SSOIdentity struct {
UserID string
Username string
PreferredUsername string
Email string
EmailVerified bool
ConnectorData []byte
}
// OAuthCallbackProvider is an interface implemented by connectors which use an OAuth
// style redirect flow to determine user information.
type OAuthCallbackProvider interface {
// The initial URL user would be redirect to.
// OAuth2 implementations support various scopes but we only need profile and user as
// the roles are still being managed in SigNoz.
BuildAuthURL(state string) (string, error)
// Handle the callback to the server (after login at oauth provider site)
// and return an email identity.
// At the moment we don't support auto signup flow (based on domain), so
// the full identity (including name, group etc) is not required outside of the
// connector
HandleCallback(r *http.Request) (identity *SSOIdentity, err error)
}
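A sketch of how a server might wire an OAuthCallbackProvider into its routes; the mirror types and route registration here are illustrative, though /api/v1/complete/google is the callback path used elsewhere in this diff:

package sketch

import (
	"fmt"
	"net/http"
)

// minimal mirrors of SSOIdentity / OAuthCallbackProvider above
type ssoIdentity struct{ Email string }

type oauthCallbackProvider interface {
	BuildAuthURL(state string) (string, error)
	HandleCallback(r *http.Request) (*ssoIdentity, error)
}

func registerSSORoutes(mux *http.ServeMux, p oauthCallbackProvider) {
	// step 1: redirect the user to the provider's login page
	mux.HandleFunc("/login/sso", func(w http.ResponseWriter, r *http.Request) {
		u, err := p.BuildAuthURL("https://app.example.com/login")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		http.Redirect(w, r, u, http.StatusFound)
	})
	// step 2: the provider redirects back here with a code; exchange it
	mux.HandleFunc("/api/v1/complete/google", func(w http.ResponseWriter, r *http.Request) {
		id, err := p.HandleCallback(r)
		if err != nil {
			http.Error(w, "login failed", http.StatusUnauthorized)
			return
		}
		fmt.Fprintf(w, "welcome %s", id.Email)
	})
}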

View File

@@ -0,0 +1,107 @@
package saml
import (
"crypto/x509"
"encoding/base64"
"encoding/pem"
"fmt"
"strings"
saml2 "github.com/russellhaering/gosaml2"
dsig "github.com/russellhaering/goxmldsig"
"go.signoz.io/signoz/pkg/query-service/constants"
"go.uber.org/zap"
)
func LoadCertificateStore(certString string) (dsig.X509CertificateStore, error) {
certStore := &dsig.MemoryX509CertificateStore{
Roots: []*x509.Certificate{},
}
certData, err := base64.StdEncoding.DecodeString(certString)
if err != nil {
return certStore, fmt.Errorf(fmt.Sprintf("failed to read certificate: %v", err))
}
idpCert, err := x509.ParseCertificate(certData)
if err != nil {
return certStore, fmt.Errorf(fmt.Sprintf("failed to prepare saml request, invalid cert: %s", err.Error()))
}
certStore.Roots = append(certStore.Roots, idpCert)
return certStore, nil
}
func LoadCertFromPem(certString string) (dsig.X509CertificateStore, error) {
certStore := &dsig.MemoryX509CertificateStore{
Roots: []*x509.Certificate{},
}
block, _ := pem.Decode([]byte(certString))
if block == nil {
return certStore, fmt.Errorf("no valid pem cert found")
}
idpCert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
return certStore, fmt.Errorf(fmt.Sprintf("failed to parse pem cert: %s", err.Error()))
}
certStore.Roots = append(certStore.Roots, idpCert)
return certStore, nil
}
// PrepareRequest prepares authorization URL (Idp Provider URL)
func PrepareRequest(issuer, acsUrl, audience, entity, idp, certString string) (*saml2.SAMLServiceProvider, error) {
var certStore dsig.X509CertificateStore
if certString == "" {
return nil, fmt.Errorf("invalid certificate data")
}
var err error
if strings.Contains(certString, "-----BEGIN CERTIFICATE-----") {
certStore, err = LoadCertFromPem(certString)
} else {
certStore, err = LoadCertificateStore(certString)
}
// certificate store can not be created, throw error
if err != nil {
return nil, err
}
randomKeyStore := dsig.RandomKeyStoreForTest()
// SIGNOZ_SAML_RETURN_URL env var would support overriding window.location
// as return destination after saml request is complete from IdP side.
// this var is also useful for development, as it is easy to override with backend endpoint
// e.g. http://localhost:8080/api/v1/complete/saml
acsUrl = constants.GetOrDefaultEnv("SIGNOZ_SAML_RETURN_URL", acsUrl)
sp := &saml2.SAMLServiceProvider{
IdentityProviderSSOURL: idp,
IdentityProviderIssuer: entity,
ServiceProviderIssuer: issuer,
AssertionConsumerServiceURL: acsUrl,
SignAuthnRequests: true,
AllowMissingAttributes: true,
// about cert stores -sender(signoz app) and receiver (idp)
// The random key (random key store) is sender cert. The public cert store(IDPCertificateStore) that you see on org domain is receiver cert (idp provided).
// At the moment, the library we use doesn't bother about sender cert and IdP too. It just adds additional layer of security, which we can explore in future versions
// The receiver (Idp) cert will be different for each org domain. Imagine a cloud setup where each company sets up their domain that integrates with their Idp.
// @signoz.io
// @next.io
// Each of above will have their own Idp setup and hence separate public cert to decrypt the response.
// The way SAML request travels is -
// SigNoz Backend -> IdP Login Screen -> SigNoz Backend -> SigNoz Frontend
// ---------------- | -------------------| -------------------------------------
// The dotted lines indicate request boundaries. So if you notice, the response from Idp starts a new request, hence we need relay state to pass the context around.
IDPCertificateStore: certStore,
SPKeyStore: randomKeyStore,
}
zap.S().Debugf("SAML request:", sp)
return sp, nil
}
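A usage sketch for PrepareRequest (all values hypothetical; a real call needs the IdP's actual entity id, SSO url, and base64/PEM signing certificate):

package main

import (
	"fmt"

	"go.signoz.io/signoz/ee/query-service/sso/saml"
)

const idpCert = "<base64 DER or PEM certificate from the IdP>" // placeholder

func main() {
	sp, err := saml.PrepareRequest(
		"app.example.com", // issuer: identifies signoz at the IdP
		"https://app.example.com/api/v1/complete/saml", // ACS url the IdP calls back
		"https://app.example.com/login",                // source url of the login request
		"https://idp.example.com/metadata",             // IdP entity id
		"https://idp.example.com/sso",                  // IdP SSO url
		idpCert,
	)
	if err != nil {
		fmt.Println(err) // fails here with the placeholder cert
		return
	}
	loginURL, err := sp.BuildAuthURL("https://app.example.com/login?domainId=<id>")
	fmt.Println(loginURL, err)
}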

View File

@@ -0,0 +1,193 @@
package usage
import (
"context"
"encoding/json"
"fmt"
"strings"
"sync/atomic"
"time"
"github.com/ClickHouse/clickhouse-go/v2"
"github.com/google/uuid"
"github.com/jmoiron/sqlx"
"go.uber.org/zap"
licenseserver "go.signoz.io/signoz/ee/query-service/integrations/signozio"
"go.signoz.io/signoz/ee/query-service/license"
"go.signoz.io/signoz/ee/query-service/model"
"go.signoz.io/signoz/pkg/query-service/utils/encryption"
)
const (
MaxRetries = 3
RetryInterval = 5 * time.Second
stateUnlocked uint32 = 0
stateLocked uint32 = 1
)
var (
// send usage every 24 hours
uploadFrequency = 24 * time.Hour
locker = stateUnlocked
)
type Manager struct {
clickhouseConn clickhouse.Conn
licenseRepo *license.Repo
// done signals the usage routine to end; this is important for gracefully
// stopping usage reporting and protecting against inconsistent updates
done chan struct{}
// terminated waits for the UsageExporter go routine to end
terminated chan struct{}
}
func New(dbType string, db *sqlx.DB, licenseRepo *license.Repo, clickhouseConn clickhouse.Conn) (*Manager, error) {
m := &Manager{
// repository: repo,
clickhouseConn: clickhouseConn,
licenseRepo: licenseRepo,
}
return m, nil
}
// Start collects and exports any pending usage snapshots and starts the exporter routine
func (lm *Manager) Start() error {
// compare-and-swap: acquire the lock if currently unlocked, otherwise another exporter is already running
if !atomic.CompareAndSwapUint32(&locker, stateUnlocked, stateLocked) {
return fmt.Errorf("usage exporter is locked")
}
go lm.UsageExporter(context.Background())
return nil
}
func (lm *Manager) UsageExporter(ctx context.Context) {
defer close(lm.terminated)
uploadTicker := time.NewTicker(uploadFrequency)
defer uploadTicker.Stop()
for {
select {
case <-lm.done:
return
case <-uploadTicker.C:
lm.UploadUsage(ctx)
}
}
}
func (lm *Manager) UploadUsage(ctx context.Context) error {
// check if license is present or not
activeLicense, err := lm.licenseRepo.GetActiveLicense(ctx)
if err != nil {
return fmt.Errorf("failed to get active license: %v", err)
}
if activeLicense == nil {
// we will not start the usage reporting if license is not present.
zap.S().Info("no license present, skipping usage reporting")
return nil
}
usages := []model.UsageDB{}
// get usage from clickhouse
dbs := []string{"signoz_logs", "signoz_traces", "signoz_metrics"}
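// the query below keeps, per (tenant, collector, exporter), only the most
// recent usage row from the last 24h; GLOBAL makes the inner sub-select run
// once and be broadcast to all shards in a clustered ClickHouse setup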
query := `
SELECT tenant, collector_id, exporter_id, timestamp, data
FROM %s.distributed_usage as u1
GLOBAL INNER JOIN
(SELECT
tenant, collector_id, exporter_id, MAX(timestamp) as ts
FROM %s.distributed_usage as u2
where timestamp >= $1
GROUP BY tenant, collector_id, exporter_id
) as t1
ON
u1.tenant = t1.tenant AND u1.collector_id = t1.collector_id AND u1.exporter_id = t1.exporter_id and u1.timestamp = t1.ts
order by timestamp
`
for _, db := range dbs {
dbusages := []model.UsageDB{}
err := lm.clickhouseConn.Select(ctx, &dbusages, fmt.Sprintf(query, db, db), time.Now().Add(-(24 * time.Hour)))
if err != nil && !strings.Contains(err.Error(), "doesn't exist") {
return err
}
for _, u := range dbusages {
u.Type = db
usages = append(usages, u)
}
}
if len(usages) <= 0 {
zap.S().Info("no snapshots to upload, skipping.")
return nil
}
zap.S().Info("uploading usage data")
usagesPayload := []model.Usage{}
for _, usage := range usages {
usageDataBytes, err := encryption.Decrypt([]byte(usage.ExporterID[:32]), []byte(usage.Data))
if err != nil {
return err
}
usageData := model.Usage{}
err = json.Unmarshal(usageDataBytes, &usageData)
if err != nil {
return err
}
usageData.CollectorID = usage.CollectorID
usageData.ExporterID = usage.ExporterID
usageData.Type = usage.Type
usageData.Tenant = usage.Tenant
usagesPayload = append(usagesPayload, usageData)
}
key, _ := uuid.Parse(activeLicense.Key)
payload := model.UsagePayload{
LicenseKey: key,
Usage: usagesPayload,
}
err = lm.UploadUsageWithExponentialBackOff(ctx, payload)
if err != nil {
return err
}
return nil
}
func (lm *Manager) UploadUsageWithExponentialBackOff(ctx context.Context, payload model.UsagePayload) error {
for i := 1; i <= MaxRetries; i++ {
apiErr := licenseserver.SendUsage(ctx, payload)
if apiErr != nil && i == MaxRetries {
zap.S().Errorf("retries stopped: %v", apiErr.Err)
// not returning error here since it is captured in the failed count
return nil
} else if apiErr != nil {
// sleep with exponentially increasing (doubling) backoff before retrying
sleepDuration := RetryInterval * time.Duration(1<<uint(i-1))
zap.S().Errorf("failed to upload snapshot, retrying after %v secs: %v", sleepDuration.Seconds(), apiErr.Err)
time.Sleep(sleepDuration)
} else {
break
}
}
return nil
}
func (lm *Manager) Stop() {
close(lm.done)
atomic.StoreUint32(&locker, stateUnlocked)
<-lm.terminated
}
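With MaxRetries = 3 and RetryInterval = 5s, the doubling above sleeps 5s after the first failed attempt and 10s after the second; the third failure stops the retries. A quick check of the schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	retryInterval := 5 * time.Second
	for i := 1; i <= 3; i++ {
		fmt.Printf("attempt %d -> backoff %v\n", i, retryInterval*time.Duration(1<<uint(i-1)))
	}
	// attempt 1 -> backoff 5s, attempt 2 -> backoff 10s, attempt 3 -> retries stop before sleeping
}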

View File

@@ -1,3 +1,4 @@
node_modules
build
*.typegen.ts
i18-generate-hash.js

View File

@@ -13,7 +13,7 @@ WORKDIR /frontend
COPY package.json ./
# Install the dependencies and make the folder
RUN CI=1 yarn install
COPY . .

View File

@@ -1,15 +1,19 @@
server {
listen 3301;
server_name _;
gzip on;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_proxied any;
gzip_vary on;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
# to handle 414 (Request-URI Too Large) errors from nginx
client_max_body_size 24M;
large_client_header_buffers 8 128k;
location / {
root /usr/share/nginx/html;

View File

@@ -0,0 +1,20 @@
const crypto = require('crypto');
const fs = require('fs');
const glob = require('glob');
function generateChecksum(str, algorithm, encoding) {
return crypto
.createHash(algorithm || 'md5')
.update(str, 'utf8')
.digest(encoding || 'hex');
}
const result = {};
glob.sync(`public/locales/**/*.json`).forEach((path) => {
const [_, lang] = path.split('public/locales');
const content = fs.readFileSync(path, { encoding: 'utf-8' });
result[lang.replace('.json', '')] = generateChecksum(content);
});
fs.writeFileSync('./i18n-translations-hash.json', JSON.stringify(result));

View File

@@ -4,18 +4,20 @@
"description": "",
"main": "webpack.config.js",
"scripts": {
"dev": "cross-env NODE_ENV=development webpack serve --progress",
"build": "webpack --config=webpack.config.prod.js --progress",
"i18n:generate-hash": "node ./i18-generate-hash.js",
"dev": "npm run i18n:generate-hash && cross-env NODE_ENV=development webpack serve --progress",
"build": "npm run i18n:generate-hash && webpack --config=webpack.config.prod.js --progress",
"prettify": "prettier --write .",
"lint": "eslint ./src",
"lint:fix": "eslint ./src --fix",
"lint": "npm run i18n:generate-hash && eslint ./src",
"lint:fix": "npm run i18n:generate-hash && eslint ./src --fix",
"jest": "jest",
"jest:coverage": "jest --coverage",
"jest:watch": "jest --watch",
"postinstall": "is-ci || yarn husky:configure",
"playwright": "NODE_ENV=testing playwright test --config=./playwright.config.ts",
"playwright": "npm run i18n:generate-hash && NODE_ENV=testing playwright test --config=./playwright.config.ts",
"playwright:local:debug": "PWDEBUG=console yarn playwright --headed --browser=chromium",
"playwright:codegen:local":"playwright codegen http://localhost:3301",
"playwright:codegen:local": "playwright codegen http://localhost:3301",
"playwright:codegen:local:auth": "yarn playwright:codegen:local --load-storage=tests/auth.json",
"husky:configure": "cd .. && husky install frontend/.husky && cd frontend && chmod ug+x .husky/*",
"commitlint": "commitlint --edit $1"
},
@@ -54,7 +56,9 @@
"d3-tip": "^0.9.1",
"dayjs": "^1.10.7",
"dotenv": "8.2.0",
"event-source-polyfill": "1.0.31",
"file-loader": "6.1.1",
"flat": "^5.0.2",
"history": "4.10.1",
"html-webpack-plugin": "5.1.0",
"i18next": "^21.6.12",
@@ -122,6 +126,8 @@
"@types/copy-webpack-plugin": "^8.0.1",
"@types/d3": "^6.2.0",
"@types/d3-tip": "^3.5.5",
"@types/event-source-polyfill": "^1.0.0",
"@types/flat": "^5.0.2",
"@types/jest": "^27.5.1",
"@types/lodash-es": "^4.17.4",
"@types/mini-css-extract-plugin": "^2.5.1",

View File

@@ -14,8 +14,8 @@ const config: PlaywrightTestConfig = {
baseURL: process.env.PLAYWRIGHT_TEST_BASE_URL || 'http://localhost:3301',
},
updateSnapshots: 'all',
fullyParallel: false,
quiet: true,
fullyParallel: !!process.env.CI,
quiet: false,
testMatch: ['**/*.spec.ts'],
reporter: process.env.CI ? 'github' : 'list',
};

9 binary image files added (sizes 1.6 KiB – 4.4 KiB), not shown.

View File

@@ -1,4 +1,11 @@
{
"target_missing": "Please enter a threshold to proceed",
"rule_test_fired": "Test notification sent successfully",
"no_alerts_found": "No alerts found during the evaluation. This happens when rule condition is unsatisfied. You may adjust the rule threshold and retry.",
"button_testrule": "Test Notification",
"label_channel_select": "Notification Channels",
"placeholder_channel_select": "select one or more channels",
"channel_select_tooltip": "Leave empty to send this alert on all the configured channels",
"preview_chart_unexpected_error": "An unexpeced error occurred updating the chart, please check your query.",
"preview_chart_threshold_label": "Threshold",
"placeholder_label_key_pair": "Click here to enter a label (key value pairs)",
@@ -21,6 +28,7 @@
"condition_required": "at least one metric condition is required",
"alertname_required": "alert name is required",
"promql_required": "promql expression is required when query format is set to PromQL",
"chquery_required": "query is required when query format is set to ClickHouse",
"button_savechanges": "Save Rule",
"button_createrule": "Create Rule",
"button_returntorules": "Return to rules",
@@ -48,6 +56,7 @@
"button_formula": "Formula",
"tab_qb": "Query Builder",
"tab_promql": "PromQL",
"tab_chquery": "ClickHouse Query",
"title_confirm": "Confirm",
"button_ok": "Yes",
"button_cancel": "No",
@@ -81,5 +90,23 @@
"user_guide_pql_step3": "Step 3 -Alert Configuration",
"user_guide_pql_step3a": "Set alert severity, name and descriptions",
"user_guide_pql_step3b": "Add tags to the alert in the Label field if needed",
"user_tooltip_more_help": "More details on how to create alerts"
"user_guide_ch_step1": "Step 1 - Define the metric",
"user_guide_ch_step1a": "Write a Clickhouse query for alert evaluation. Follow <0>this tutorial</0> to learn about query format and supported vars.",
"user_guide_ch_step1b": "Format the legends based on labels you want to highlight in the preview chart",
"user_guide_ch_step2": "Step 2 - Define Alert Conditions",
"user_guide_ch_step2a": "Select the threshold type and whether you want to alert above/below a value",
"user_guide_ch_step2b": "Enter the Alert threshold",
"user_guide_ch_step3": "Step 3 -Alert Configuration",
"user_guide_ch_step3a": "Set alert severity, name and descriptions",
"user_guide_ch_step3b": "Add tags to the alert in the Label field if needed",
"user_tooltip_more_help": "More details on how to create alerts",
"choose_alert_type": "Choose a type for the alert:",
"metric_based_alert": "Metric based Alert",
"metric_based_alert_desc": "Send a notification when a condition occurs in the metric data",
"log_based_alert": "Log-based Alert",
"log_based_alert_desc": "Send a notification when a condition occurs in the logs data.",
"traces_based_alert": "Trace-based Alert",
"traces_based_alert_desc": "Send a notification when a condition occurs in the traces data.",
"exceptions_based_alert": "Exceptions-based Alert",
"exceptions_based_alert_desc": "Send a notification when a condition occurs in the exceptions data."
}

View File

@@ -1,4 +1,14 @@
{
"channel_delete_unexp_error": "Something went wrong",
"channel_delete_success": "Channel Deleted Successfully",
"column_channel_name": "Name",
"column_channel_type": "Type",
"column_channel_action": "Action",
"column_channel_edit": "Edit",
"button_new_channel": "New Alert Channel",
"tooltip_notification_channels": "More details on how to setting notification channels",
"sending_channels_note": "The alerts will be sent to all the configured channels.",
"loading_channels_message": "Loading Channels..",
"page_title_create": "New Notification Channels",
"page_title_edit": "Edit Notification Channels",
"button_save_channel": "Save",

View File

@@ -1,6 +1,7 @@
{
"create_dashboard": "Create Dashboard",
"import_json": "Import JSON",
"import_grafana_json": "Import Grafana JSON",
"copy_to_clipboard": "Copy To ClipBoard",
"download_json": "Download JSON",
"view_json": "View JSON",

View File

@@ -0,0 +1,13 @@
{
"column_license_key": "License Key",
"column_valid_from": "Valid From",
"column_valid_until": "Valid Until",
"column_license_status": "Status",
"button_apply": "Apply",
"placeholder_license_key": "Enter a License Key",
"tab_current_license": "Current License",
"tab_license_history": "History",
"loading_licenses": "Loading licenses...",
"enter_license_key": "Please enter a license key",
"license_applied": "License applied successfully"
}

View File

@@ -0,0 +1,22 @@
{
"label_email": "Email",
"placeholder_email": "name@yourcompany.com",
"label_password": "Password",
"button_initiate_login": "Next",
"button_login": "Login",
"login_page_title": "Login with SigNoz",
"login_with_sso": "Login with SSO",
"login_with_pwd": "Login with password",
"forgot_password": "Forgot password?",
"create_an_account": "Create an account",
"prompt_if_admin": "If you are admin,",
"prompt_create_account": "If you are setting up SigNoz for the first time,",
"prompt_no_account": "Don't have an account? Contact your admin to send you an invite link.",
"prompt_forgot_password": "Ask your admin to reset your password and send you a new invite link",
"prompt_on_sso_error": "Are you trying to resolve SSO configuration issue?",
"unexpected_error": "Sorry, something went wrong",
"failed_to_login": "sorry, failed to login",
"invalid_email": "Please enter a valid email address",
"invalid_account": "This account does not exist. To create a new account, contact your admin to get an invite link",
"invalid_config": "Invalid configuration detected, please contact your administrator"
}

View File

@@ -9,5 +9,10 @@
"add_another_team_member": "Add another team member",
"invite_team_members": "Invite team members",
"invite_members": "Invite Members",
"pending_invites": "Pending Invites"
"pending_invites": "Pending Invites",
"authenticated_domains": "Authenticated Domains",
"delete_domain_message": "Are you sure you want to delete this domain?",
"delete_domain": "Delete Domain",
"add_domain": "Add Domains",
"saml_settings":"Your SAML settings have been saved, please login from incognito window to confirm that it has been set up correctly"
}

View File

@@ -0,0 +1,18 @@
{
"label_email": "Email",
"placeholder_email": "name@yourcompany.com",
"label_password": "Password",
"label_confirm_password": "Confirm Password",
"label_firstname": "First Name",
"placeholder_firstname": "Your Name",
"label_orgname": "Organization Name",
"placeholder_orgname": "Your Company",
"prompt_keepme_posted": "Keep me updated on new SigNoz features",
"prompt_anonymise": "Anonymise my usage date. We collect data to measure product usage",
"failed_confirm_password": "Passwords dont match. Please try again",
"unexpected_error": "Something went wrong",
"failed_to_initiate_login": "Signup completed but failed to initiate login",
"token_required": "Invite token is required for signup, please request one from your admin",
"button_get_started": "Get Started",
"prompt_admin_warning": "This will create an admin account. If you are not an admin, please ask your admin for an invite link"
}

View File

@@ -0,0 +1,3 @@
{
"search_tags": "Search Tag Names"
}

View File

@@ -6,7 +6,7 @@
"release_notes": "Release Notes",
"read_how_to_upgrade": "Read instructions on how to upgrade",
"latest_version_signoz": "You are running the latest version of SigNoz.",
"stale_version": "You are on an older version and may be loosing on the latest features we have shipped. We recommend to upgrade to the latest version",
"stale_version": "You are on an older version and may be losing out on the latest features we have shipped. We recommend to upgrade to the latest version",
"oops_something_went_wrong_version": "Oops.. facing issues with fetching updated version information",
"n_a": "N/A",
"routes": {

Some files were not shown because too many files have changed in this diff.