Compare commits: v0.5.2 ... v0.8 (639 commits)
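The listing below can be reproduced with plain git. A minimal sketch, using a throwaway repository so the commands run anywhere; the tag names mirror the range above, and the compare view's "..." range corresponds to commits reachable from the newer tag but not the older one:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev

# Build a toy history with two tags standing in for v0.5.2 and v0.8
echo a > f; git add f; git commit -qm "first"; git tag v0.5.2
echo b >> f; git commit -qam "fix: something"
echo c >> f; git commit -qam "feat: something"; git tag v0.8

# The commit count shown in the header ("639 Commits" in the real repo)
git rev-list --count v0.5.2..v0.8

# A one-line-per-commit view with author, subject and date, like the table below
git log --pretty='%h %an %s %ad' --date=iso v0.5.2..v0.8
```

Against the actual SigNoz repository, `git rev-list --count v0.5.2..v0.8` would report the figure in the header, and `git log` with a `--pretty` format recreates the Author / SHA1 / Message / Date columns.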

Author SHA1 Message Date
Prashant Shahi
1f2ec0d728 chore(release): 📌 pin versions: SigNoz 0.8.2, OtelCollector 0.45.1-0.3
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-06-15 01:47:04 +05:30
Srikanth Chekuri
4f12f8c85c fix: incorrect 5xx rate calculation (#1229) 2022-06-14 01:09:44 +05:30
Ankit Nayan
7b315c6766 Merge pull request #1246 from SigNoz/release/v0.8.1
Release/v0.8.1
2022-06-09 21:17:59 +05:30
Prashant Shahi
676fe892a5 chore(release): 📌 pin versions: OtelCollectors 0.45.1-0.2 and config changes
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-06-09 20:58:10 +05:30
Prashant Shahi
15260e0e14 Merge branch 'main' into release/v0.8.1 2022-06-09 17:24:12 +05:30
Prashant Shahi
ce7be6e7cd chore(release): 📌 pin versions: SigNoz 0.8.1
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-06-09 17:20:11 +05:30
palash-signoz
99d38860cb Merge pull request #1243 from pranshuchittora/pranshuchittora/feat/dashboard-save-rbac
feat(FE): save dashboard with RBAC permissions
2022-06-09 12:23:18 +05:30
Pranshu Chittora
1f4f281965 feat(FE): save dashboard with RBAC permissions 2022-06-09 12:04:47 +05:30
palash-signoz
4aa4bf9ea2 Merge pull request #1242 from pranshuchittora/pranshuchittora/feat/dashboard-edit-permission
feat(FE): dashboard edit permission based on RBAC
2022-06-08 23:25:21 +05:30
Pranshu Chittora
052eb25cff chore(FE): sidebar red dot styling 2022-06-08 23:15:48 +05:30
Pranshu Chittora
ce14638a63 feat(FE): dashboard edit permission based on RBAC 2022-06-08 22:57:34 +05:30
Prashant Shahi
fa142707dc chore(alertmanager): 🔧 use query-service internal port (#1241)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-06-08 16:03:48 +05:30
Amol Umbark
5ae4e05c96 HTTP listener for internal services (#1238)
* feat: added private http server to handle internal service requests
* feat: added private port default to constants
2022-06-08 12:22:25 +05:30
palash-signoz
b7d52b8fba fix: dashboard is updated (#1240)
* fix: dashboard is updated

* fix: redux is made empty when creating dashboard

Co-authored-by: Palash gupta <palash@signoz.io>
2022-06-08 11:50:41 +05:30
palash-signoz
1c90e62189 feat: dashboard layout is updated (#1221)
* feat: dashboard layout is updated

* feat: onClick is made fixed

* feat: layout is updated

* feat: layout is updated

* feat: layout is updated

* fix: memo is removed and grid layout component is refactored to use use query

* fix: saveDashboard is updated

* feat: layout is fixed

* fix: tsc error are fixed

* fix: delete widgets is updated

* fix: useMount once is added

* fix: useMount once is removed

* chore: removed the commented code

Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-06-07 16:14:49 +05:30
palash-signoz
cfeb631a6e Merge pull request #1217 from palash-signoz/1215-signup
fix: button is disable until condition is met
2022-06-07 16:02:40 +05:30
palash-signoz
4fc4ab0611 Merge branch 'develop' into 1215-signup 2022-06-01 18:16:08 +05:30
palash-signoz
b107902c31 Merge pull request #1220 from palash-signoz/1219-video-update
chore: video link is updated
2022-06-01 18:15:52 +05:30
palash-signoz
2d83afd0c4 Merge branch 'develop' into 1215-signup 2022-05-31 11:20:29 +05:30
palash-signoz
e641577e1c Merge branch 'develop' into 1219-video-update 2022-05-31 11:19:49 +05:30
Palash gupta
3e4b56e012 chore: video link is updated 2022-05-31 11:18:52 +05:30
palash-signoz
697fd1d1bf Merge pull request #1209 from palash-signoz/dashboard-layout-fix
fix: layout is updated
2022-05-30 22:31:13 +05:30
palash-signoz
21dbdb57da Merge branch 'develop' into dashboard-layout-fix 2022-05-30 22:21:17 +05:30
palash-signoz
3406bcaa5f Merge branch 'develop' into 1215-signup 2022-05-30 22:06:05 +05:30
Palash gupta
de0fd64a5e fix: button is disable until condition is met 2022-05-30 22:04:45 +05:30
Pranay Prateek
c27c026e25 Update SECURITY.md 2022-05-30 17:22:47 +05:30
Pranay Prateek
0a4bc7e181 Update SECURITY.md 2022-05-30 17:22:20 +05:30
Pranay Prateek
b6cfe9d08e Update SECURITY.md 2022-05-30 17:14:01 +05:30
Pranay Prateek
b5b9f20b1f Update SECURITY.md 2022-05-30 17:13:34 +05:30
Pranay Prateek
25c6106bd6 Create SECURITY.md 2022-05-30 17:04:02 +05:30
Palash gupta
d5877337ec fix: layout is updated 2022-05-27 07:04:22 +05:30
Palash gupta
51e0972219 fix: layout is updated 2022-05-27 07:03:12 +05:30
palash-signoz
38c0bcf4ea fix: trace table is fixed (#1208) 2022-05-26 16:51:18 +05:30
palash-signoz
d863c2781a feat: dashboard layout is updated from widgets (#1207) 2022-05-26 15:09:59 +05:30
Prashant Shahi
642c6c5920 chore: TTL and S3 config related changes (#1201)
* fix: 🐛 convert TTL APIs to async

* chore: add archive support

* chore: update TTL async APIs according to new design

* chore: 🔥 clean removeTTL API

* fix: metrics s3 config

* feat: ttl async with polling (#1195)

* feat: ttl state message change and time unit language changes (#1197)

* test: update tests for async TTL api

* feat: ttl message info icon (#1202)

* feat: ttl pr review changes

* chore: refactoring

Co-authored-by: makeavish <makeavish786@gmail.com>
Co-authored-by: Pranshu Chittora <pranshu@signoz.io>
Co-authored-by: palash-signoz <palash@signoz.io>
Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-05-25 18:19:44 +05:30
Prashant Shahi
f92e4798ce refactor: ⚰️ Remove deprecated flattener and Druid leftover files (#1194)
* refactor: ⚰️ Remove flattener from Makefile

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* refactor: ⚰️ Remove deprecated Druid leftover files

Signed-off-by: Prashant Shahi <prashant@signoz.io>

Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-05-25 18:06:14 +05:30
Vishal Sharma
5d080f5564 fix: 🐛 convert TTL APIs to async #902 (#1173)
* fix: 🐛 convert TTL APIs to async

* chore: add archive support

* chore: update TTL async APIs according to new design

* chore: 🔥 clean removeTTL API

* fix: metrics s3 config

* test: update tests for async TTL api

* chore: refactoring

Co-authored-by: Pranay Prateek <pranay@signoz.io>
2022-05-25 16:55:30 +05:30
palash-signoz
eb9a8e3a97 feat: color is updated (#1198) 2022-05-25 10:48:53 +05:30
palash-signoz
4a13c524a3 chore: test result is added in the .gitignore (#1191)
* chore: test result is added in the .gitignore

* chore: cypress is removed from gitignore
2022-05-24 19:11:11 +05:30
palash-signoz
7c3edec3e6 Merge pull request #1190 from palash-signoz/react-version-resolution
fix: react version is made fixed
2022-05-24 18:37:07 +05:30
palash-signoz
199d6b6213 Merge branch 'develop' into react-version-resolution 2022-05-24 11:23:09 +05:30
palash-signoz
3d46abc1e9 Merge pull request #1189 from palash-signoz/1161-service-map
fix: handle the broken state in service map
2022-05-24 11:20:50 +05:30
palash-signoz
e6496ee67b Merge branch 'develop' into 1161-service-map 2022-05-23 16:05:09 +05:30
Pranay Prateek
fa6d5a7404 Merge branch 'develop' into react-version-resolution 2022-05-23 11:34:08 +05:30
Prashant Shahi
bd6153225f ci(build): 👷 Update build-pipeline workflow (#1187)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-23 11:33:33 +05:30
Palash gupta
bcceaf7937 fix: react version is made fixed 2022-05-22 21:25:12 +05:30
Palash gupta
4a287fd112 fix: handle the broken state 2022-05-22 20:59:35 +05:30
palash-signoz
8ec9cb2222 Merge pull request #1184 from pranshuchittora/pranshuchittora/fix/0.8.1/tsc
fix: ts typings and remove cypress types
2022-05-20 23:25:05 +05:30
palash-signoz
d3094e10bf Merge branch 'develop' into pranshuchittora/fix/0.8.1/tsc 2022-05-20 22:09:18 +05:30
Ankit Nayan
973ef56c09 Revert "feat: NODE_ENV is configured in the frontend" (#1186) 2022-05-20 18:08:20 +02:00
Prashant Shahi
26db6b5fcc Merge branch 'develop' into pranshuchittora/fix/0.8.1/tsc 2022-05-20 19:57:37 +05:30
Prashant Shahi
6e2afe1c78 fix(husky): 🚨 integrate is-ci and webpack-cli version bump (#1181)
* fix(husky): 🚨 integrate is-ci and webpack-cli version bump

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(frontend-Dockerfile): 🚀 remove NODE_ENV

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-20 19:14:58 +05:30
Pranshu Chittora
0bcd9d8d98 fix: ts typings and remove cypress types 2022-05-20 18:36:46 +05:30
Ankit Nayan
be01bc9b82 Revert "fix: frontend/package.json & frontend/yarn.lock to reduce vulnerabilities (#1147)" (#1182)
This reverts commit 5a2ad9492c.
2022-05-20 14:08:12 +02:00
palash-signoz
5a2ad9492c fix: frontend/package.json & frontend/yarn.lock to reduce vulnerabilities (#1147)
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-JS-ANSIREGEX-1583908

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2022-05-20 13:38:22 +02:00
Ankit Nayan
747677d4b0 Merge pull request #1152 from palash-signoz/feat/playwright
feat: playwright is configured
2022-05-20 13:33:37 +02:00
palash-signoz
e7f49cf360 Merge pull request #1178 from palash-signoz/tag-improvement
fix: tag style is updated
2022-05-20 15:46:04 +05:30
palash-signoz
3ba519457a Merge pull request #1179 from palash-signoz/metrics-application-active-key
fix: getActiveKey is refactoring into switch
2022-05-20 15:45:49 +05:30
palash-signoz
8d6646afed Merge pull request #1180 from SigNoz/prashant/frontend-docker
chore(docker): 🚀 Update Dockerfile and .dockerignore files
2022-05-20 15:38:24 +05:30
Prashant Shahi
a4cfb44953 chore(docker): 🚀 Update Dockerfile and .dockerignore files
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-20 15:07:47 +05:30
Palash gupta
c77ad88f90 fix: getActiveKey is refactoring into switch 2022-05-20 14:12:13 +05:30
palash-signoz
914be6e4cf Merge pull request #1177 from palash-signoz/set-retention
fix: set retention query is fixed
2022-05-20 14:04:21 +05:30
Palash gupta
2e9e29eb38 fix: tag style is updated 2022-05-20 14:02:46 +05:30
Palash gupta
bbed3fda22 fix: set retention query is fixed 2022-05-20 13:39:23 +05:30
Palash gupta
cbaf9b009c fix: merge conflict resolved 2022-05-20 13:29:08 +05:30
Ankit Nayan
8471dc0c1b Merge pull request #1151 from SigNoz/feat/gh-bot
feat: signoz gh-bot integration
2022-05-20 09:38:27 +02:00
Ankit Nayan
49175b3784 Merge pull request #1176 from SigNoz/feat/analytics-check-response
feat: added status code of api calls
2022-05-20 09:32:07 +02:00
Ankit Nayan
961dc7e814 feat: added status code of api calls 2022-05-20 09:31:23 +02:00
palash-signoz
1315b43aad Merge pull request #1174 from palash-signoz/compress
fix: version is made exact
2022-05-20 12:55:37 +05:30
Pranshu Chittora
9e6d918d6a fix: removed comment of PR workflow 2022-05-20 12:37:33 +05:30
palash-signoz
5b5b19dd99 Merge pull request #1170 from palash-signoz/full-view
feat: full view is updated to use query
2022-05-20 11:55:10 +05:30
palash-signoz
4b8bd2e335 feat: tags are added in the sidebar (#1153)
* feat: tags are added in the sidebar

* chore: styles is updated
2022-05-20 11:43:23 +05:30
palash-signoz
7d2883df11 Merge pull request #1169 from palash-signoz/392-tab-persist
feat: Tabs selection persist when we refresh
2022-05-20 11:41:40 +05:30
Palash gupta
cb4e465a10 Merge branch 'develop' into compress 2022-05-20 10:55:18 +05:30
palash-signoz
b1ee56b2f2 Merge pull request #1171 from palash-signoz/tsc-fix-2
fix: tsc is fixed
2022-05-20 10:53:08 +05:30
Palash gupta
98dfcead5b fix: version is made exact 2022-05-20 10:48:23 +05:30
Palash gupta
3cc4fb9c30 fix: tsc is fixed 2022-05-19 23:15:12 +05:30
Palash gupta
83cb099aa6 feat: full view is updated to use query 2022-05-19 23:08:27 +05:30
Srikanth Chekuri
c480b3c563 Add section outlining ideal workflow for significant features/changes (#1111) 2022-05-19 22:24:59 +05:30
Palash gupta
f084637f84 fix: merge conflict removed 2022-05-19 21:29:21 +05:30
Palash gupta
9fd8d12cc0 feat: Tabs selection persist when we refresh 2022-05-19 21:25:03 +05:30
palash-signoz
22f9069a29 Merge pull request #1130 from palash-signoz/trigger-alerts-error-handling
fix: error handling is updated for the trigger alerts
2022-05-19 19:34:06 +05:30
Ankit Nayan
42269a7c78 Merge pull request #962 from SigNoz/ttl-plus
Add remove TTL api, and do not allow zero or negative TTL
2022-05-19 16:01:17 +02:00
palash-signoz
2c62a1c0f0 Merge pull request #1163 from palash-signoz/389-trace-left-panel
feat: tooltip is added and max width is configured in the left panel to show text ellipsis
2022-05-19 16:43:39 +05:30
palash-signoz
b3729e0b6c Merge pull request #1167 from palash-signoz/1112-errors
fix: route is updated
2022-05-19 16:42:55 +05:30
Palash gupta
696a6adc32 fix: route is updated 2022-05-19 16:40:41 +05:30
palash-signoz
d964b66bcc Merge pull request #1145 from palash-signoz/bug-double-org
bug: double org is fixed
2022-05-19 16:09:28 +05:30
palash-signoz
4a4ad7a3da Merge pull request #1119 from palash-signoz/logout
fix: logout the user if api is not successful
2022-05-19 16:09:14 +05:30
palash-signoz
03ef3d3bcd Merge pull request #1146 from palash-signoz/379-json-data
feat: dashboard error and loading state is removed from dashboard object
2022-05-19 15:29:45 +05:30
palash-signoz
d2913a2831 Merge pull request #1107 from palash-signoz/app-actions
chore: type is updated for thunk
2022-05-19 15:20:57 +05:30
palash-signoz
4ca3f1f945 Merge pull request #1133 from palash-signoz/develop-tsc-fix
fix: tsc is fix in cypress
2022-05-19 15:20:36 +05:30
palash-signoz
f2074f01e8 Merge pull request #1164 from palash-signoz/393-error-expection-tooltip
feat: tooltip is added in the error message and error type
2022-05-19 15:19:45 +05:30
palash-signoz
ffd5621f09 Merge pull request #1118 from palash-signoz/application-metrics-error-handling
fix: error is now handled and displayed as antd notification message in /application
2022-05-19 15:19:25 +05:30
Palash gupta
429e3bbd0d fix: logout is fixed 2022-05-19 13:43:54 +05:30
palash-signoz
3f37fe4d60 Merge pull request #1158 from palash-signoz/391-top-end-points
feat: top end point table is fixed
2022-05-19 13:30:07 +05:30
palash-signoz
ec3fed05bb Merge pull request #1132 from palash-signoz/commitlint
feat: commit lint is added in the frontend
2022-05-19 13:29:24 +05:30
palash-signoz
31583b73d8 Merge pull request #1159 from palash-signoz/390-metrics-pagination
feat: pagination is added in the application table
2022-05-19 13:29:05 +05:30
palash-signoz
02ba0eda9a Merge pull request #1154 from palash-signoz/341-url-encoding
feat: url encoding is added in the new dashboard query
2022-05-19 13:28:32 +05:30
palash-signoz
7185f2fa24 Merge pull request #1155 from palash-signoz/dashboard-widget-hover-resize
feat: resize handler is visible on hover
2022-05-19 13:27:16 +05:30
Palash gupta
ceb59e8bb5 fix: yarn is turned into npm 2022-05-19 13:24:46 +05:30
Ankit Nayan
f063a82133 Merge pull request #1124 from palash-signoz/env
feat: NODE_ENV is configured in the frontend
2022-05-19 08:46:30 +02:00
Palash gupta
072c137f26 feat: tooltip is added in the error message and error type 2022-05-19 08:48:19 +05:30
Palash gupta
358fc3a217 feat: tooltip is added and max width is configured in the left panel to show text ellipsis 2022-05-19 08:28:50 +05:30
Palash gupta
60d869ddbe feat: pagination is added in the application table 2022-05-18 22:26:41 +05:30
Palash gupta
286d46edbe feat: top end point table is fixed 2022-05-18 22:22:12 +05:30
palash-signoz
b66ce81eb6 Merge pull request #1127 from palash-signoz/remove-unneccesary-file
fix: removed unnecessary file
2022-05-18 09:50:39 +05:30
Palash gupta
60bb82ea9d feat: resize handler is visible on hover 2022-05-18 08:55:08 +05:30
Palash gupta
e3987206de feat: url encoding is added in the new dashboard query 2022-05-18 07:37:15 +05:30
Palash gupta
b8f8d59d40 feat: baseurl is added and grabbed from the env 2022-05-18 07:12:53 +05:30
Palash gupta
b2fc4776b7 feat: playwright is configured 2022-05-18 00:08:36 +05:30
Palash gupta
dd0047da07 feat: playwright is configured 2022-05-17 19:28:06 +05:30
palash-signoz
d3c67bad5b Merge pull request #1138 from pranshuchittora/pranshuchittora/fix/service-map-color
fix: service map label readable
2022-05-17 17:40:36 +05:30
Pranshu Chittora
ff3b414645 feat: signoz gh-bot integration 2022-05-17 17:23:06 +05:30
Ankit Nayan
104256dcb5 Merge pull request #926 from prashant-shahi/prashant/pprof
feat(query-service): integrate pprof
2022-05-17 10:25:19 +02:00
Ankit Nayan
38d89fc34a Merge pull request #1136 from SigNoz/prashant/nginx-cache-improvement
chore: 🔧 improve nginx cache configuration
2022-05-17 10:22:20 +02:00
Palash gupta
a2d67f1222 fix: isProduction is removed 2022-05-17 11:55:30 +05:30
palash-signoz
8e360e001f Merge pull request #1126 from palash-signoz/alerts-rules-error-handling
fix: list alerts rules is handled
2022-05-17 11:52:06 +05:30
palash-signoz
de3928c51f Merge pull request #1128 from palash-signoz/error-handling-error-detail
fix: error details error is handled
2022-05-17 11:49:50 +05:30
Palash gupta
228fb66251 feat: dashboard error and loading state is removed from dashboard object 2022-05-13 12:59:35 +05:30
Palash gupta
12c14f71ba bug: bug double org is fixed 2022-05-13 11:15:10 +05:30
Prashant Shahi
80de9efa0e refactor(query-service): 🔊 update pprof server error log
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-13 10:53:43 +05:30
palash-signoz
3890e06d29 Merge pull request #1123 from palash-signoz/trace-handling
fix: error handling is updated in trace
2022-05-13 10:44:15 +05:30
Palash gupta
a34dbc4942 chore: error checking condition !=200 is moved to >=400 2022-05-13 10:43:26 +05:30
Palash gupta
4b591fabf7 chore: error detail is updated 2022-05-13 10:38:32 +05:30
Palash gupta
cc978153f9 chore: added the new line 2022-05-13 10:32:36 +05:30
Prashant Shahi
9ba0b84a91 refactor(query-service): ♻️ move pprof to server.go
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-13 03:38:00 +05:30
Prashant Shahi
ac06b02d52 Merge branch 'develop' of github.com:signoz/signoz into prashant/pprof 2022-05-13 03:06:09 +05:30
Prashant Shahi
9c173c8eb3 docs(contributing): 📝 Update CONTRIBUTING.md docs (#877)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-12 22:46:43 +05:30
Srikanth Chekuri
d0b21fce01 update exec to clickhouse v2 api; update the queries 2022-05-12 16:22:59 +05:30
Pranshu Chittora
07ffd13159 fix: service map label readable 2022-05-12 15:47:47 +05:30
palash-signoz
1926998e3c fix: error is now handled in the login screen (#1120) 2022-05-12 13:43:20 +05:30
Srikanth Chekuri
eb397babcd resolve merge conflicts 2022-05-12 11:13:44 +05:30
Prashant Shahi
a0643aaf4e chore: 🔧 improve nginx cache configuration
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-11 10:53:51 +05:30
Palash gupta
169185ff89 chore: type is updated 2022-05-11 01:47:12 +05:30
Palash gupta
7feee26f85 fix: tsc is fix in cypress 2022-05-11 01:34:53 +05:30
Palash gupta
ce72b1e7a0 feat: commit lint is added in the frontend 2022-05-11 01:12:29 +05:30
Palash gupta
e06f020162 fix: error handling is updated for the trigger alerts 2022-05-10 18:30:22 +05:30
Palash gupta
574088ad54 fix: error state is updated 2022-05-10 18:23:47 +05:30
Palash gupta
6f48030ab9 fix: error details error is handled 2022-05-10 17:54:26 +05:30
Palash gupta
ea3a5e20d9 fix: removed unnecessary import 2022-05-10 17:32:22 +05:30
Palash gupta
b4833eeb0e fix: list alerts rules is handled 2022-05-10 17:27:04 +05:30
Palash gupta
ce67005d66 feat: NODE_ENV is configured in the frontend 2022-05-10 17:18:20 +05:30
Palash gupta
80c0b5621d fix: error handling is updated in trace 2022-05-10 17:09:39 +05:30
Palash gupta
b21a2707d3 fix: logout the user if api is not successful 2022-05-10 14:11:16 +05:30
Palash gupta
4fa5ff9319 fix: error is now handled and displayed as antd notification message 2022-05-10 14:02:05 +05:30
palash-signoz
53528f1045 fix: name is updated to the tag_name in the version (#1116) 2022-05-10 12:59:15 +05:30
Ankit Nayan
9522bbf33b Merge pull request #1106 from SigNoz/release/v0.8.0
Release/v0.8.0
2022-05-06 12:16:51 +05:30
Prashant Shahi
3995de16f0 chore(release): 📌 pin versions: SigNoz 0.8.0, Alertmanager 0.23.0-0.1, OtelCollector 0.43.0-0.1
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-06 11:49:49 +05:30
Palash gupta
f149258de2 chore: type is updated for thunk 2022-05-06 11:27:23 +05:30
Prashant Shahi
da386b0e8e chore(nginx-config): 🔧 add cache control headers for UI (#1104)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-06 09:12:01 +05:30
palash-signoz
75cdac376f feat: cache headers is added (#1103) 2022-05-06 00:33:41 +05:30
palash-signoz
0ef13a89ed feat: updated password is updated (#1102) 2022-05-06 00:33:04 +05:30
palash-signoz
084d8ecccd chore: logout is updated (#1100) 2022-05-05 21:21:39 +05:30
Vishal Sharma
b9f3663b6c fix: exceptionStacktrace typo (#1098) 2022-05-05 20:04:59 +05:30
palash-signoz
4067aa5025 chore: handled the null case (#1096) 2022-05-05 16:16:13 +05:30
palash-signoz
ebf9316714 feat: sorting is updated (#1095) 2022-05-05 14:06:22 +05:30
palash-signoz
f5009abca6 chore: error sorting is updated (#1094) 2022-05-05 14:00:37 +05:30
palash-signoz
b16a793cbc Error is renamed to exceptions (#1093)
* chore: error is renamed to exceptions
2022-05-05 13:59:25 +05:30
Vishal Sharma
374a2415d9 fix: fix typo and return empty struct instead of null (#1092) 2022-05-05 13:58:39 +05:30
palash-signoz
3789e25a1e feat: stack trace is show in monaco editor (#1091) 2022-05-05 13:57:52 +05:30
palash-signoz
10ab057e29 chore: error details is updated (#1090)
* chore: error details is updated
2022-05-05 13:57:26 +05:30
Ankit Nayan
41b9129145 viewAccess to usage explorer 2022-05-05 13:19:42 +05:30
Pranshu Chittora
f5d10b72f0 fix: y-axis generic time unit conversion (#1088)
* fix: y-axis generic time unit conversion
2022-05-05 12:22:56 +05:30
Vishal Sharma
6fb6a576aa fix: lagInFrame and leadInFrame nullable (#1089)
https://github.com/ClickHouse/ClickHouse/pull/26521
2022-05-05 12:22:19 +05:30
palash-signoz
7cf567792a chore: error message is displayed to user in the error page (#1085) 2022-05-05 11:12:42 +05:30
Vishal Sharma
fe18e85e36 fix: error and exception type error (#1086) 2022-05-05 11:11:35 +05:30
palash-signoz
147476d802 bug: confirm password bug is fixed (#1084) 2022-05-05 10:16:32 +05:30
palash-signoz
c94f23a710 chore: reset password is updated for the user id not the logged in user id (#1082) 2022-05-04 23:19:49 +05:30
Pranshu Chittora
1a2ef4fde6 fix: y-axis time unit (#1081) 2022-05-04 22:46:33 +05:30
Ahsan Barkati
6c505f9e86 Fix error message (#1080) 2022-05-04 21:45:20 +05:30
palash-signoz
fb97540c7c chore: tsc is fixed (#1078) 2022-05-04 20:52:33 +05:30
palash-signoz
9cf5c7ef74 chore: new dashboard permission is added for editor and admin (#1077)
* chore: new dashboard permission is added for editor and admin
2022-05-04 20:40:49 +05:30
palash-signoz
6223e89d4c chore: logout is fixed (#1076) 2022-05-04 20:38:03 +05:30
Pranshu Chittora
62f8cddc27 fix: fixed graph axis and tooltip (#1075) 2022-05-04 19:30:57 +05:30
palash-signoz
ffae767fab chore: edit widget is allowed for admin and editor (#1074) 2022-05-04 19:16:30 +05:30
palash-signoz
c23f97c3d0 chore: placeholder is updated for signup name (#1069) 2022-05-04 18:06:25 +05:30
palash-signoz
11eb1e4f72 chore: error message is updated for signup (#1072) 2022-05-04 18:06:02 +05:30
palash-signoz
0554ed7ecb bug: members is fixed (#1073) 2022-05-04 18:05:37 +05:30
palash-signoz
ca4ce0d380 Merge pull request #1071 from pranshuchittora/pranshuchittora/fix/chart-y-axis-zero-values
fix: chart y-axis values not showing for values < 1
2022-05-04 17:55:59 +05:30
Ankit Nayan
65f50bb70d chore: changed permission for edit and delete alert channels (#1070) 2022-05-04 17:53:53 +05:30
Pranshu Chittora
dbe9f3a034 fix: chart y-axis values not showing for values < 1 2022-05-04 17:52:14 +05:30
palash-signoz
7cdd136f61 chore: redirection is added (#1063) 2022-05-04 17:37:11 +05:30
palash-signoz
21d5e0b71c text is made optional (#1064) 2022-05-04 17:36:53 +05:30
palash-signoz
fe53aa412b copy to clipboard is updated (#1065) 2022-05-04 17:36:11 +05:30
palash-signoz
6c5a48082b delete widget is added for admin and editor (#1066) 2022-05-04 17:35:48 +05:30
palash-signoz
b7adc27f02 chore: button is not made disable for firstName (#1067) 2022-05-04 17:34:50 +05:30
Ankit Nayan
67b4290846 chore: change in error message on register (#1068) 2022-05-04 17:33:54 +05:30
Ankit Nayan
8a7cbc8ad3 Update http_handler.go 2022-05-04 17:00:49 +05:30
Vishal Sharma
c74d87a21a fix: handle empty data scenarios (#1062) 2022-05-04 15:03:48 +05:30
Ahsan Barkati
6486425f46 fix(auth): Return 403 for forbidden requests due to rbac (#1060)
* Return error json for user not found
* Return 403 for rbac error
* Return not_found instead of internal_error for getInvite
2022-05-04 14:50:15 +05:30
palash-signoz
5b316afa12 chore: sorting key is updated (#1059) 2022-05-04 14:32:38 +05:30
Ankit Nayan
2dcc6fda77 chore: analytics changed after rbac (#1058) 2022-05-04 14:27:32 +05:30
palash-signoz
a2f570d78c Merge pull request #1057 from pranshuchittora/pranshuchittora/fix/resource-attribute-error-message 2022-05-04 14:11:57 +05:30
Pranshu Chittora
ef209e11d5 fix: resource attribute error message 2022-05-04 13:52:44 +05:30
palash-signoz
1851e76bca Merge pull request #1056 from pranshuchittora/pranshuchittora/fix/dashboard-widget-uuid
fix: dashboard uuid issue
2022-05-04 12:57:20 +05:30
Pranshu Chittora
fa23050916 fix: dashboard uuid issue 2022-05-04 12:56:19 +05:30
palash-signoz
79475bde71 Merge pull request #1054 from palash-signoz/interceptor-is-updated
chore: interceptor is updated
2022-05-04 12:35:50 +05:30
Palash gupta
039201acae chore: interceptor is updated 2022-05-04 12:31:37 +05:30
palash-signoz
22454abc4a chore: api is updated (#1053) 2022-05-04 12:02:14 +05:30
palash-signoz
4c8b7af0eb chore: routesToSkip is updated (#1052) 2022-05-04 12:01:36 +05:30
palash-signoz
5caf94f024 feat: permission is added in the dashboard button (#1051) 2022-05-04 01:31:44 +05:30
Ankit Nayan
ce0b37ca2e Update jwt.go 2022-05-04 01:06:45 +05:30
palash-signoz
5f529e1c10 feat: refresh token is fixed (#1049) 2022-05-04 01:05:38 +05:30
Prashant Shahi
05c923df9b chore: 📌 pin alertmanager and otelcollector version and changes (#1048)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-05-03 23:59:29 +05:30
palash-signoz
90637212bc feat: total count is generated from the filter (#1047)
* feat: total count is generated from the filter
2022-05-03 23:33:12 +05:30
palash-signoz
b58f45c268 chore: scroll is made auto (#1044) 2022-05-03 23:04:16 +05:30
Vishal Sharma
6a6fd44719 fix: convert serviceMap API to POST (#1046) 2022-05-03 23:03:47 +05:30
Vishal Sharma
81cc120539 fix: aggregate of avg and percentile (#1045) 2022-05-03 23:02:49 +05:30
Pranshu Chittora
831381a1ff fix: trace detail styling issue (#1043) 2022-05-03 21:27:44 +05:30
palash-signoz
fd0656e0fc chore: redirect the user to application if user is navigated to non logged in page (#1042) 2022-05-03 21:27:17 +05:30
palash-signoz
e217ea0c9c chore: default value is added when on unmount (#1041) 2022-05-03 21:24:55 +05:30
palash-signoz
bdf9333dcf chore: version is added in the src of Logo (#1039) 2022-05-03 21:24:02 +05:30
Pranshu Chittora
eae97d6ffc fix: migrate service map APIs from GET to POST (#1038) 2022-05-03 21:21:40 +05:30
palash-signoz
9f5241e82c chore: error message is updated (#1037) 2022-05-03 21:21:10 +05:30
palash-signoz
284eda4072 chore: placeholder is updated for the signup page (#1035)
* chore: placeholder is updated for the signup page
2022-05-03 21:20:36 +05:30
palash-signoz
63693a4185 chore: name field is hidden when empty name is received from invite details api (#1036) 2022-05-03 21:19:35 +05:30
palash-signoz
d9cf9071d3 chore: text for forgot password is updated (#1034) 2022-05-03 21:15:58 +05:30
palash-signoz
5e41c7f62b chore: header style is fixed in the light mode (#1033) 2022-05-03 21:15:23 +05:30
palash-signoz
e903277143 feat: sorting duration is added (#1032)
* feat: sorting duration is added in trace filter page
2022-05-03 21:14:40 +05:30
palash-signoz
f2def38df8 Merge pull request #1040 from pranshuchittora/pranshuchittora/fix/empty-graph-on-zero-values
fix: empty graph showing no data on zero values
2022-05-03 21:10:55 +05:30
Ankit Nayan
71e742fb2b Update otel-collector-config.yaml 2022-05-03 21:01:49 +05:30
Pranshu Chittora
571e087f31 fix: empty graph showing no data on zero values 2022-05-03 20:14:35 +05:30
Srikanth Chekuri
3e5f9f3b25 bugfix: missing destination name for DiskItem (#1031) 2022-05-03 19:13:32 +05:30
palash-signoz
c969b5f329 chore: Private Wrapper is updated (#1029)
* chore: Private Wrapper is updated
2022-05-03 17:11:07 +05:30
palash-signoz
5bcf42d398 chore: isPasswordNotValidMessage is updated (#1030) 2022-05-03 17:10:39 +05:30
palash-signoz
c81b0b2a8b Auth Wrapper is updated (#1028) 2022-05-03 16:01:41 +05:30
palash-signoz
d52308c9b5 feat: some routes are removed from the date picker to display (#1023) 2022-05-03 15:59:12 +05:30
Pranshu Chittora
7948bca710 feat: resource attributes based filter for metrics (#1022)
* feat: resource attributes based filtering enabled
2022-05-03 15:41:40 +05:30
palash-signoz
29c0b43481 bug: Rows per page setting is now working (#1026) 2022-05-03 15:30:08 +05:30
palash-signoz
9351fd09c2 bug: exclude param is added in the getTagFilters (#1025) 2022-05-03 15:29:54 +05:30
palash-signoz
59f32884d2 Feat(UI): Auth (#1018)
* auth and rbac frontend changes
2022-05-03 15:27:09 +05:30
Ahsan Barkati
6ccdc5296e feat(auth): Add auth and access-controls support (#874)
* auth and rbac support enabled
2022-05-03 15:26:32 +05:30
Amol Umbark
3ef9d96678 Pagerduty - Create, Edit and Test Features (#1016)
* enabled sending alerts to pagerduty
2022-05-03 11:28:00 +05:30
Vishal Sharma
642ece288e perf: Query-service Performance Improvements (traces) (#838)
*feat: Update query-service Go version to 1.17 #911
*chore: Upgrade to clickhouse versions v2 #751
*feat: Duration sorting in events table of Trace-filter page #826
*feat: Add grpc status code to traces view #975
*feat: added filtering by resource attributes #881
2022-05-03 11:20:57 +05:30
Pranshu Chittora
3ab4f71aa1 enhancement(FE): span time unit normalisation (#1021)
* feat: relevant span time unit on trace detail page

* fix: remove time unit on hover on flame graph
2022-04-29 14:33:57 +05:30
palash-signoz
b5be770a03 Merge pull request #1019 from palash-signoz/error-details-fix
bug: editor is updated in error details page

> Not sure about the product requirements though

tsc is giving error at develop
2022-04-29 10:27:06 +05:30
Palash gupta
08e3428744 feat: editor is updated in error details page 2022-04-27 22:21:57 +05:30
palash-signoz
b335d440cf Feat: import export dashboard (#980)
* feat: added import & export functionality in dashboards
2022-04-25 22:41:46 +05:30
cui fliter
1293378c5c fix some typos (#976)
Signed-off-by: cuishuang <imcusg@gmail.com>
2022-04-22 20:04:37 +05:30
palash-signoz
5424c7714f feat: Error exception (#979)
* feat: error exception page is made
2022-04-22 20:03:08 +05:30
palash-signoz
95311db543 feat: tagkey length check is added (#999) 2022-04-22 19:45:14 +05:30
palash-signoz
bf52722689 bug: list of rules is fixed when created and come back to all rules (#998) 2022-04-22 19:39:01 +05:30
Vishal Sharma
6064840dd1 feat: support gRPC status, method in trace table (#987) 2022-04-22 19:38:08 +05:30
Pranshu Chittora
182adc551c feat: dashboard search and filter (#1005)
* feat: enable search and filter in dashboards
2022-04-22 18:57:05 +05:30
Amol Umbark
2b5b79e34a (feature): UI for Test alert channels (#994)
* (feature): Implemented test channel function for webhook and slack
2022-04-22 16:56:18 +05:30
Amol Umbark
508c6ced80 (feature): API - Implement receiver/channel test functionality (#993)
* (feature): Added test receiver/channel functionality
2022-04-22 12:11:19 +05:30
Pranshu Chittora
3c2173de9e feat: new dashboard widget's option selection (#982)
* feat: new dashboard widget's option selection

* fix: overflowing legend

* feat: delete menu item is of type danger

* feat: added keyboard events onFocus and onBlur
2022-04-19 10:57:56 +05:30
palash-signoz
9a6bcaadf8 Merge pull request #1000 from palash-signoz/984-updated-response
feat: httpCode and httpStatus is updated to statusCode and method
2022-04-19 10:52:31 +05:30
Palash gupta
08bbb0259d feat: httpCode and httpStatus is updated to code and method 2022-04-19 00:21:30 +05:30
palash-signoz
93638d5615 Use fetch fix (#995)
* feat: useFetch in tag value is removed and moved to use query

* feat: useFetch in all channels is removed and moved to use query

* feat: useFetch in edit rule is removed and moved to use query

* feat: useFetch in general settings is removed and moved to use query

* feat: useFetch in all alerts is changed into use query
2022-04-18 15:24:51 +05:30
Ankit Nayan
844ca57686 Merge pull request #997 from SigNoz/dashboard-bug-fix
(bugfix): remove validation on post data id
2022-04-18 14:37:46 +05:30
Srikanth Chekuri
b2eec25f33 (bugfix): remove validation on post data id 2022-04-18 14:12:49 +05:30
Ankit Nayan
61d01fa2d5 feat: added action to verify that every pr has a linked issue 2022-04-15 11:32:19 +05:30
Ankit Nayan
a6bf6e4e07 Merge pull request #923 from SigNoz/uuid-server
chore: generate uuid on server side
2022-04-13 11:45:30 +05:30
palash-signoz
d454482f43 wdyr is updated (#981) 2022-04-11 17:17:31 +05:30
Pranay Prateek
f6aece6349 Update CONTRIBUTING.md 2022-04-08 10:13:36 -07:00
Pranay Prateek
dc9508269d Update commentLinesForSetup.sh
Updating frontend service line number
2022-04-08 10:12:58 -07:00
Pranay Prateek
a6c41f312d Update CONTRIBUTING.md 2022-04-08 10:09:24 -07:00
palash-signoz
f487f7420b Merge pull request #972 from palash-signoz/refetch-onwindow-focus
chore: refetchOnWindowFocus is made false to global level
2022-04-08 15:13:30 +05:30
palash-signoz
da8f3a6e81 Merge pull request #973 from palash-signoz/eslint-used-var-error
chore(eslint): @typescript-eslint/no-unused-vars is made to error
2022-04-08 15:13:20 +05:30
palash-signoz
d102c94670 bug: unused import is removed and two unwanted eslint rules are removed (#968) 2022-04-08 14:05:16 +05:30
palash-signoz
60288f7ba0 chore(refactor): Signup app layout (#969)
* feat: useFetch is upgraded to useFetchQueries

* chore: en-gb and common.json is updated over public locale
2022-04-08 13:49:33 +05:30
Palash gupta
0cbe17a315 chore(eslint): @typescript-eslint/no-unused-vars is made to error 2022-04-08 02:15:50 +05:30
Palash gupta
dce9f36a8e chore: refetchOnWindowFocus is made false to global level 2022-04-08 02:12:54 +05:30
Ankit Nayan
aa5100261d Merge pull request #956 from SigNoz/release/v0.7.5
Release/v0.7.5
2022-04-07 17:12:16 +05:30
Prashant Shahi
f4cc2a3a05 Merge branch 'develop' into release/v0.7.5 2022-04-07 15:32:44 +05:30
palash-signoz
041a5249b3 feat: onClick is added in the row (#966) 2022-04-07 13:11:27 +05:30
palash-signoz
a767697a86 Merge pull request #965 from palash-signoz/trace-filter-fix-incoming-from-metrics-page
Bug: Trace filter fix incoming from metrics page
2022-04-07 12:59:37 +05:30
palash-signoz
71cb70c62c Merge pull request #958 from palash-signoz/style-trace-search
bug: style-trace-search is fixed
2022-04-07 11:03:40 +05:30
Pranshu Chittora
647cabc4f4 feat: migrated trace detail to use query (#963)
* feat: migrated trace detail to use query

* fix: remove unused imports

* chore: useQuery config is updated

Co-authored-by: Palash gupta <palash@signoz.io>
2022-04-07 10:48:09 +05:30
Palash gupta
e864e33ad3 bug: value is updated on selection 2022-04-06 22:01:42 +05:30
Palash gupta
5bdbe792f5 chore: selected filter is not removed when user close panel 2022-04-06 21:28:17 +05:30
Palash gupta
399efb0fb2 bug: trace filter bug are collapsed 2022-04-06 21:19:12 +05:30
palash-signoz
4b72de6884 Merge pull request #959 from pranshuchittora/pranshuchittora/fix/ttl-enhancements
fix: general setting TTL enhancements
2022-04-06 16:34:07 +05:30
palash-signoz
9f1473e7de Merge pull request #957 from palash-signoz/trace-table-current-page
bug: current page is updated in redux to see the updated values
2022-04-06 16:33:11 +05:30
Srikanth Chekuri
d6c4df8b4b Add remove TTL api, and do not allow zero or negative TTL 2022-04-06 16:29:10 +05:30
Palash gupta
7150971dc0 chore: style is updated 2022-04-06 16:28:39 +05:30
Pranshu Chittora
d0846b8dd2 fix: general setting TTL enhancements 2022-04-06 12:40:14 +05:30
Palash gupta
ead6885b29 bug: style-trace-search is fixed 2022-04-06 11:33:56 +05:30
Palash gupta
d72dacdc1f bug: current page is updated in redux to see the updated values 2022-04-06 10:52:46 +05:30
Prashant Shahi
1d6ddd4890 chore(install-script): 🩹 fix email condition
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-04-06 01:37:07 +05:30
Prashant Shahi
58daca1579 chore(docker-compose): 🔧 add restart policy and frontend dependency
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-04-06 00:52:48 +05:30
Prashant Shahi
1e522ad8f1 chore(release): 📌 pin 0.7.5 SigNoz version and 0.6.1 Alertmanager version
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-04-06 00:19:13 +05:30
Prashant Shahi
8809105a8d Prashant/deploy changes (#955)
- set information log level in clickhouse logger config
- maximum logs size 150m (3 files each of 50m)

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-04-06 00:05:05 +05:30
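The log-size limit described above (three files of 50m each, 150m total) is typically expressed as a `json-file` logging policy in docker-compose; a minimal sketch, assuming a hypothetical `clickhouse` service name (the actual SigNoz compose file may differ):

```yaml
services:
  clickhouse:                # hypothetical service name, for illustration only
    image: clickhouse/clickhouse-server
    logging:
      driver: json-file
      options:
        max-size: "50m"      # cap each log file at 50 MB
        max-file: "3"        # rotate across 3 files => 150 MB maximum
```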
palash-signoz
064c3e0449 chore: default text is updated (#954)
* text and title is updated
2022-04-05 23:13:12 +05:30
palash-signoz
2a348e916c feat: version page is added (#924)
* feat👔 : getLatestVersion api is added

* chore: VERSION page is added

* feat:  version page is added

* chore: all string is grabbed from locale

* chore: warning is removed

* chore: translation json is added

* chore: feedback about version is added

* chore: made two different functions

* unused import is removed

* feat: version changes are updated

* chore: if current version is present then it is displayed
2022-04-05 18:21:25 +05:30
palash-signoz
5744193f50 Merge pull request #953 from pranshuchittora/pranshuchittora/fix/trace-detail-error-span-color
fix: error color for spans having error on trace detail page
2022-04-05 18:00:08 +05:30
Pranshu Chittora
ccf352f2db fix: error color for spans having error on trace detail page 2022-04-05 17:02:39 +05:30
palash-signoz
6e446dc0ab Merge pull request #949 from pranshuchittora/pranshuchittora/feat/ttl-s3
feat(FE): TTL/s3 integration
2022-04-05 16:28:41 +05:30
Pranshu Chittora
566c2becdf feat: dynamic step size for the data for graphs (#929)
* feat: dynamic step size for the data for graphs

* fix: remove console.log

* chore: add jest globals

* feat: add step size for dashboard

* chore: undo .eslintignore
2022-04-05 16:09:57 +05:30
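A "dynamic step size" for graph queries generally means choosing the sampling interval from the selected time range so the chart never requests more than a fixed number of points. This is an illustrative sketch of that idea, not SigNoz's actual implementation; `maxPoints` and `minStepSec` are assumed parameters:

```typescript
// Pick a query step (in seconds) so that a graph over [startSec, endSec]
// returns at most `maxPoints` samples, never dropping below `minStepSec`.
function getStepSize(
	startSec: number,
	endSec: number,
	maxPoints = 60,
	minStepSec = 60,
): number {
	const spanSec = Math.max(endSec - startSec, 0);
	// Round up so the point count never exceeds maxPoints.
	return Math.max(minStepSec, Math.ceil(spanSec / maxPoints));
}
```

With these defaults, a one-hour range keeps the 60 s minimum step, while a 24-hour range widens the step to 1440 s so the point count stays constant.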
Pranshu Chittora
3b3fd2b3a9 chore: update type 2022-04-05 16:09:04 +05:30
Pranshu Chittora
eae53d9eff feat: condition changes 2022-04-05 16:05:46 +05:30
Pranshu Chittora
42842b6b17 feat: i18n support for setting route names 2022-04-05 15:51:38 +05:30
Pranshu Chittora
95f8dfb4bc feat: i18n support for settings page 2022-04-05 15:44:01 +05:30
palash-signoz
a8c5934fc5 fix: Fix jest (#945)
* bug: jest is now fixed

* chore: files are included for the eslint

* chore: build is fixed

* test: jest test are fixed
2022-04-05 14:47:37 +05:30
palash-signoz
3f2a4d6eac bug: Trace filter page fixes (#846)
* order is added in the url
* local min max duration is kept in memory to show min and max even after filtering by duration
* checkbox ordering does not change when the user selects or un-selects a checkbox
2022-04-05 13:23:08 +05:30
Pranshu Chittora
170609a81f chore: type changes 2022-04-05 12:26:43 +05:30
Pranshu Chittora
76fccbbba4 fix: styling changes 2022-04-05 12:19:54 +05:30
palash-signoz
147ed9f24b chore: editor config is added (#818) 2022-04-05 11:19:06 +05:30
palash-signoz
a69bc321a9 Merge pull request #935 from pranshuchittora/pranshuchittora/feat/graph-memory-issue
feat: FE memory fixes and UX enhancements
2022-04-05 10:35:08 +05:30
Pranshu Chittora
c9e02a8b25 feat: final touches to the ttl 2022-04-05 00:06:13 +05:30
Pranshu Chittora
24d6a1e7b2 feat: s3 ttl validation 2022-04-04 19:38:23 +05:30
Nishidh Jain
a0efa63185 Fix(FE) : Ask for confirmation before deleting any dashboard from dashboard list (#534)
* A confirmation dialog will pop up before deleting any dashboard

Co-authored-by: Palash gupta <palash@signoz.io>
2022-04-04 17:35:44 +05:30
Ankit Nayan
fd83cea9a0 chore: removing version api from being tracked (#950) 2022-04-04 17:07:21 +05:30
Pranshu Chittora
5be1eb58b2 feat: ttl api integration 2022-04-04 15:26:29 +05:30
Pranshu Chittora
8367c106bc Merge branch 'develop' of github.com:SigNoz/signoz into pranshuchittora/feat/ttl-s3 2022-04-04 15:06:33 +05:30
Pranshu Chittora
8064ae1f37 feat: ttl api integration 2022-04-04 15:06:06 +05:30
palash-signoz
ab4d9af442 Merge pull request #928 from palash-signoz/tag-value-suggestion
feat: Tag value suggestion
2022-04-04 12:52:34 +05:30
Palash gupta
eb0d3374d5 Merge branch 'develop' into tag-value-suggestion 2022-04-04 12:35:50 +05:30
palash-signoz
6c4c814b3f bug: pathname check is added (#948) 2022-04-04 10:25:15 +05:30
Palash gupta
32e8e48928 chore: behaviour for dropdown is updated 2022-04-04 08:24:28 +05:30
Naman Jain
53e7037f48 fix: run go vet to fix some issues with json tag (#936)
Co-authored-by: Naman Jain <jain_n@apple.com>
2022-04-02 16:15:03 +05:30
palash-signoz
a566b5dc97 bug: no service and loading check are added (#934) 2022-04-01 17:59:44 +05:30
Pranshu Chittora
4dc668fd13 fix: remove unused props 2022-04-01 17:43:56 +05:30
palash-signoz
d085506d3e bug: logged in check is added in the useEffect (#921) 2022-04-01 15:47:39 +05:30
palash-signoz
1b28a4e6f5 chore: links are updated for all dashboard and promql (#908) 2022-04-01 15:43:58 +05:30
Pranshu Chittora
20e924b116 feat: S3 TTL support 2022-04-01 15:12:30 +05:30
Ahsan Barkati
1d28ceb3d7 feat(query-service): Add cold storage support in getTTL API (#922)
* Add cold storage support in getTTL API
2022-04-01 11:22:25 +05:30
Ankit Nayan
0ff4c040bf Merge pull request #933 from SigNoz/release/v0.7.4
Release/v0.7.4
2022-03-29 23:32:49 +05:30
Prashant Shahi
1002ab553e chore(release): 📌 pin 0.7.4 SigNoz version and deployment changes
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-29 22:59:32 +05:30
Amol Umbark
3dc94c8da7 (fix): Duplicate alerts in triggered alerts (#932)
* (fix): Duplicate alerts in triggered alerts fixed by changing source api from /alert/groups to /alerts

* (fix): added comments for removed lines of group api call

* (fix): restored all getGroup
2022-03-29 19:59:40 +05:30
Pranshu Chittora
5a5aca2113 chore: remove unused code 2022-03-29 16:06:27 +05:30
Pranshu Chittora
cb22117a0f Merge branch 'develop' of github.com:SigNoz/signoz into pranshuchittora/feat/graph-memory-issue 2022-03-29 16:05:49 +05:30
Pranshu Chittora
739946fa47 fix: over memory allocation on Graph on big time range 2022-03-29 16:05:08 +05:30
Pranshu Chittora
7939902f03 fix: dashboard table element overflow (#930) 2022-03-29 12:24:03 +05:30
palash-signoz
d34e08fa3d chore: build.yml file is updated for more strict frontend checks (#906)
2022-03-29 11:13:26 +05:30
Palash gupta
5556d1d6fc feat: tag value suggestion is updated 2022-03-29 09:59:50 +05:30
Palash gupta
d4d1104a53 WIP: value suggestion is added 2022-03-29 00:02:56 +05:30
Palash gupta
225a345baa chore: getTagValue api is added 2022-03-29 00:02:16 +05:30
Prashant Shahi
31443dabe7 feat(query-service): integrate pprof
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-28 21:44:40 +05:30
Amol Umbark
0efb901863 feat: Amol/webhook (#868)
webhook receiver enabled for alerts

Co-authored-by: Palash gupta <palash@signoz.io>
2022-03-28 21:01:57 +05:30
Srikanth Chekuri
eb4abe900c chore: generate uuid on server side 2022-03-28 09:14:40 +05:30
palash-signoz
e7ba5f9f33 bug 🐛 : on click tag filter is now fixed (#916) 2022-03-25 19:16:07 +05:30
palash-signoz
995232e057 Merge pull request #914 from palash-signoz/tsc-fix
chore: tsc fix are updated over frontend
2022-03-25 15:39:27 +05:30
Palash gupta
cc5d47e3ee chore: updated the type any 2022-03-25 12:39:26 +05:30
Palash gupta
b1de6c1d7d chore: link is reverted 2022-03-25 12:35:38 +05:30
Palash gupta
84bfe11285 chore: tsc error are fixed 2022-03-25 12:33:52 +05:30
Pranshu Chittora
ca78947a55 fix: save unit on dashboard without hitting apply (#912) 2022-03-25 12:29:40 +05:30
palash-signoz
ac49f84982 Merge pull request #3 from pranshuchittora/pranshuchittora/fix/tsc-fixes
fix: tsc fixes
2022-03-25 12:12:08 +05:30
Pranshu Chittora
cc47f02ebf fix: tsc fixes 2022-03-25 12:03:57 +05:30
Palash gupta
ac70240b72 chore: some tsc fix 2022-03-24 15:39:33 +05:30
palash-signoz
78b1a750fa husky: pre-commit hook is added (#904) 2022-03-24 15:06:57 +05:30
palash-signoz
d5a6336239 Merge pull request #903 from pranshuchittora/pranshuchittora/feat/transformed-labels-on-tooltips
feat(FE): unit label on graph tooltip
2022-03-24 14:23:47 +05:30
palash-signoz
01bad0f18a chore: eslint fix (#884)
* chore: eslint is updated

* chore: some eslint fixes are made

* chore: some more eslint fix are updated

* chore: some eslint fix is made

* chore: styled components type is added

* chore: some more eslint fix are made

* chore: some more eslint fix are updated

* chore: some more eslint fix are updated

* fix: eslint fixes

Co-authored-by: Pranshu Chittora <pranshu@signoz.io>
2022-03-24 12:06:57 +05:30
Pranshu Chittora
1b79a9bf35 feat: unit label on graph tooltip 2022-03-24 11:44:38 +05:30
Ankit Nayan
0426bf06eb Merge pull request #901 from SigNoz/release/v0.7.3
Release/v0.7.3
2022-03-24 01:22:42 +05:30
Prashant Shahi
3d8354fb99 chore(release): 📌 pin 0.7.3 SigNoz version
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-24 00:52:32 +05:30
Prashant Shahi
696241b962 chore(install-script): 🔧 amazon-linux improvements and fixes (#900)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-24 00:46:43 +05:30
Pranshu Chittora
8a883f1b5e feat: unit selection for value graph on dashboard (#898) 2022-03-23 22:03:57 +05:30
palash-signoz
7765cee610 feat: onClick feature is updated (#895) 2022-03-23 19:44:26 +05:30
Ankit Nayan
b958bad81f Merge pull request #897 from palash-signoz/896-monaco-editor-change
chore: onChange event is added
2022-03-23 19:43:59 +05:30
Palash gupta
deff5d5e17 chore: onChange event is added 2022-03-23 19:03:05 +05:30
Ankit Nayan
44d3f35a5f Merge branch 'release/v0.7.2' into develop 2022-03-23 12:41:13 +05:30
Ankit Nayan
36d8bc7bc6 Merge pull request #891 from SigNoz/release/v0.7.2
Release/v0.7.2
2022-03-23 12:38:18 +05:30
Prashant Shahi
565dfd5b52 chore(release): 📌 pin 0.7.2 SigNoz version
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-23 12:06:14 +05:30
palash-signoz
897c5d2371 Merge pull request #890 from pranshuchittora/pranshuchittora/fix/832
fix: top endpoints table overflow
2022-03-23 11:48:09 +05:30
Pranshu Chittora
f22d5f0fbd fix: top endpoints table overflow 2022-03-23 11:44:37 +05:30
Prashant Shahi
8c56d04988 chore(test-framework): 🔧 expose query service port 2022-03-23 10:31:35 +05:30
Prashant Shahi
18cfc40982 chore(Makefile): 🔧 include missed slash and formatting
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-23 01:25:13 +05:30
Prashant Shahi
3c66f9d2dd chore(test-framework): 🔧 remove frontend service and use latest tag from arm env
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-23 01:04:31 +05:30
Prashant Shahi
4f3bb95a77 chore(makefile): 🔧 add down-x86 and down-arm targets (#887)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-23 00:51:24 +05:30
Prashant Shahi
8aa5eb78b2 chore: 🔧 set dimensions_cache_size in signozspanmetrics processor (#885)
* chore: 🔧  set dimensions_cache_size in signozspanmetrics processor
- add example usage of limit_percentage and spike_limit_percentage

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-23 00:03:47 +05:30
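For context: `dimensions_cache_size` bounds the dimension cache of a spanmetrics-style processor, while `limit_percentage` and `spike_limit_percentage` belong to the collector's `memory_limiter` processor. A hedged sketch of how such settings are commonly laid out; the exact SigNoz collector config and values may differ:

```yaml
processors:
  signozspanmetrics:
    dimensions_cache_size: 10000   # cap the number of cached dimension combinations
  memory_limiter:
    check_interval: 1s
    limit_percentage: 50           # soft ceiling as a percentage of total memory
    spike_limit_percentage: 20     # extra headroom reserved for usage spikes
```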
Prashant Shahi
79a1f79b7c chore: 🔧 Add targets to clear docker standalone and swarm data (#886)
- remove sudo from run-arm and run-x86 targets

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-23 00:02:50 +05:30
palash-signoz
1c5c65ddf7 bug: useHistory is removed and dashboard loading component is removed (#802)
* bug: useHistory is removed and dashboard loading component is removed

* chore: new dashboard is updated

* chore: new dashboard is updated

* chore: sidenav is updated

* chore: getX console is removed

* chore: sidenav is updated with correct pathname
2022-03-22 21:56:12 +05:30
palash-signoz
b2e78b9358 Merge pull request #883 from pranshuchittora/pranshuchittora/fix/trace-filter-group-by
fix: trace filter groupby selection is breaking the FE
2022-03-22 17:15:17 +05:30
Pranshu Chittora
5e02bfe2e4 fix: trace filter groupby selection is breaking the FE 2022-03-22 17:12:03 +05:30
palash-signoz
02d89a3a04 feat(FE): react-i18next is added (#789)
* chore: packages are added

* feat: i18next is added

* feat: translation for the signup is updated

* chore: package.json is updated
2022-03-22 16:22:41 +05:30
Pranshu Chittora
3ab0e1395a feat: data time, UI and graph label consistency across FE (#878)
* feat: data time and graph label consistency across FE

* feat: saved state of sidebar and horizontal scroll fix for trace filter page

* feat: add Y-Axis unit for missing metrics graphs

* chore: update node version from 12.18 to 12.22

* fix: 24hr time unit on graph
2022-03-22 16:22:02 +05:30
palash-signoz
f1f606844a chore: Eslint fix config (#882)
* chore: eslint config is updated

* chore: eslint auto fix is added
2022-03-22 12:10:31 +05:30
Ahsan Barkati
4e335054fb chore(tests): Add end-to-end testing system for query service (#867)
* Initial work on s3

* some more work

* Add policy api

* Cleanup

* Add multi-tier TTL and remove storagePolicy API

* Cleanup

* Typo fix

* Revert constants

* Cleanup

* Add API to get disks

* Add more validations

* Initial work on e2e tests

* Basic ttl test

* Add test which checks for objects in Minio

* Address comments

Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-03-22 00:03:20 +05:30
Ahsan Barkati
c902a6bac8 feat(query-service): Add cold storage support (#837)
* Initial work on s3

* some more work

* Add policy api

* Cleanup

* Add multi-tier TTL and remove storagePolicy API

* Cleanup

* Typo fix

* Revert constants

* Cleanup

* Add API to get disks

* Add more validations

* Cleanup
2022-03-21 23:58:56 +05:30
palash-signoz
c00f0f159b fix: save layout bug is resolved (#840)
* fix: save layout bug is resolved

* chore: onClick is also added in the component slider

* chore: dashboard Id is added

* chore: non dashboard widget is filtered out

* chore: panel layout stack issue is resolved
2022-03-21 21:04:32 +05:30
Prashant Shahi
86bdb9a5ad chore: deployment config changes (#869)
* chore(install-script): 🔧 include missed sudo_cmd variable

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore: 🔧 add .gitkeep in folders to mount

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(docker-swarm): 🔧 Update deploy configurations

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(compose-yaml): 🔧 expose otlp ports and restart on failure policy

Signed-off-by: Prashant Shahi <prashant@signoz.io>

Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-03-21 20:43:43 +05:30
Ankit Nayan
044f02c7c7 chore: version bumped for forked prometheus with clickhouse storage (#870) 2022-03-21 20:40:43 +05:30
Prashant Shahi
561d18efec chore(telemetry): add deployment type (#875)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-21 20:39:53 +05:30
palash-signoz
ab10a699b1 feat: timestamp is updated for selected start time (#852)
* feat: timestamp is updated for selected start time

* feat: startTime for the tree is updated
2022-03-17 11:56:25 +05:30
palash-signoz
e28733d246 feat: onClick in new tab is added (#842) 2022-03-16 23:26:45 +05:30
palash-signoz
a238123eb2 feat: monaco editor is updated (#851) 2022-03-16 23:24:27 +05:30
Prashant Shahi
4337ab5cd0 chore(install-script): remove mandatory sudo and digest improvement (#836)
* chore(install-script):  add fallback sha1sum command

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(install-script):  remove mandatory sudo and digest improvement

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(install-script): 🔧 use sudo_cmd variable

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(install-script): 🔧 remove deprecated druid section

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(install-script): 🔧 sudo prompt changes

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-16 22:59:20 +05:30
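The "fallback sha1sum command" change above reflects a common portability pattern: `sha1sum` is absent on BSD/macOS, where `shasum -a 1` serves the same purpose. A minimal sketch of the pattern (not the install script's exact code):

```shell
#!/bin/sh
# Prefer sha1sum; fall back to `shasum -a 1` where it is missing (e.g. macOS).
if command -v sha1sum >/dev/null 2>&1; then
    digest_cmd="sha1sum"
else
    digest_cmd="shasum -a 1"
fi

# Print the SHA-1 digest of stdin.
printf 'hello\n' | $digest_cmd | cut -d' ' -f1
```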
palash-signoz
7a3a3b8d89 bug: edit channel is fixed (#855) 2022-03-16 22:35:13 +05:30
palash-signoz
a9cbd12330 Merge pull request #861 from palash-signoz/trace-filter-fix-selected
chore: styled tab is updated to Tab from antd
2022-03-16 17:32:40 +05:30
Palash gupta
c320c20280 chore: styled tab is updated to Tab from antd 2022-03-16 17:09:27 +05:30
palash-signoz
b3c2fe75d3 Merge pull request #858 from palash-signoz/857-channels-dropdown
bug: global time selection dropdown is removed in the all channels page
2022-03-16 16:51:15 +05:30
Pranshu Chittora
95d3a27769 Merge pull request #845 from pranshuchittora/pranshuchittora/fix/trace-detail/events-error-handling
fix(FE): trace detail events error handling
2022-03-16 16:44:29 +05:30
palash-signoz
67f09c6def Merge pull request #860 from pranshuchittora/pranshuchittora/feat/y-axis-unit-selection
feat: y-axis units for pre-defined and dashboard graphs
2022-03-16 16:36:09 +05:30
Pranshu Chittora
eaeba43179 Merge pull request #835 from pranshuchittora/pc/feat/shared-styles-for-styled-components
feat(FE): new trace detail page code cleanup and enhancements
2022-03-16 16:29:50 +05:30
Pranshu Chittora
daadc584ea feat: update ts type 2022-03-16 16:28:11 +05:30
Pranshu Chittora
da8b16f588 feat: Updates TS type 2022-03-16 16:27:06 +05:30
Pranshu Chittora
17738a58a2 Merge branch 'develop' of github.com:SigNoz/signoz into pranshuchittora/feat/y-axis-unit-selection 2022-03-16 16:24:40 +05:30
Pranshu Chittora
3ebffae1c6 chore: revert eslint --debug flag 2022-03-16 16:16:20 +05:30
Pranshu Chittora
2ca67f1017 Merge pull request #844 from pranshuchittora/pranshuchittora/feat/x-axis-adaptive-lables
feat(FE): adaptive x axis time labels
2022-03-16 16:14:49 +05:30
Pranshu Chittora
00c7eccb0c fix: remove any type 2022-03-16 16:14:27 +05:30
Pranshu Chittora
7f3d9e2e35 feat: PR review changes 2022-03-16 16:05:21 +05:30
Pranshu Chittora
a95656b3a0 fix: x axis label when the time stamp is not parsed 2022-03-16 15:29:23 +05:30
Pranshu Chittora
9404768f9d Merge branch 'develop' of github.com:SigNoz/signoz into pc/feat/shared-styles-for-styled-components 2022-03-16 13:28:55 +05:30
Pranshu Chittora
7559445ebe Merge branch 'develop' of github.com:SigNoz/signoz into pranshuchittora/fix/trace-detail/events-error-handling 2022-03-16 13:10:26 +05:30
Pranshu Chittora
112766b265 Merge branch 'pranshuchittora/fix/trace-detail/events-error-handling' of github.com:pranshuchittora/signoz into pranshuchittora/fix/trace-detail/events-error-handling 2022-03-16 13:08:43 +05:30
Pranshu Chittora
ccf5af089d Merge pull request #856 from palash-signoz/eslint-fix
BUG(UI): eslint fixes are updated
2022-03-16 11:52:46 +05:30
Pranshu Chittora
0b6f31420b feat: y-axis units for pre-defined and dashboard graphs 2022-03-15 15:18:33 +05:30
Palash gupta
b4ce805c6f bug: global time selection dropdown is removed in the all channels page 2022-03-15 10:09:32 +05:30
Palash gupta
08f24fbdff chore: function is made async 2022-03-14 21:34:22 +05:30
Palash gupta
191925b418 Merge branch 'develop' into eslint-fix 2022-03-14 20:15:21 +05:30
Palash gupta
84b70c970f chore: eslint fixes are updated 2022-03-14 20:12:42 +05:30
Pranay Prateek
988ce36047 Update README.md 2022-03-11 18:54:28 +05:30
Pranay Prateek
24a4177a73 Update README.md 2022-03-11 18:53:37 +05:30
Pranshu Chittora
08ca3b7849 bug: timeline interval is updated
fix: add if condition for timeline interval

chore: remove mock response
2022-03-11 16:40:58 +05:30
Pranshu Chittora
d0723207c3 chore: remove mock response 2022-03-11 16:38:13 +05:30
Pranshu Chittora
16cf829ec3 Merge branch 'develop' of github.com:SigNoz/signoz into pranshuchittora/fix/trace-detail/events-error-handling 2022-03-11 16:37:15 +05:30
Pranshu Chittora
23f9949fad fix: trace detail events error handling 2022-03-11 16:32:11 +05:30
Pranshu Chittora
fafdd4b87f Merge pull request #828 from palash-signoz/timeline-fix
bug(FE): timeline interval is updated
2022-03-11 16:30:40 +05:30
Pranshu Chittora
f2ace729fd fix: linting fixes 2022-03-11 13:42:05 +05:30
Pranshu Chittora
f0c627eebe chore: add unit tests 2022-03-11 13:39:04 +05:30
Pranshu Chittora
1bf8e6bef6 feat: better x-axis labels 2022-03-11 11:20:52 +05:30
Ankit Nayan
f37e6ef1d1 Merge pull request #831 from SigNoz/prashant/e2e-k3s-changes
ci: 💚 fix e2e-k3s workflow as needed with the chart changes
2022-03-09 20:00:28 +05:30
Pranshu Chittora
c3ebbfa8ca feat: new trace detail page code enhancements and cleanup 2022-03-09 10:43:02 +05:30
Pranay Prateek
12970d6975 Update README.md 2022-03-08 20:56:59 +05:30
Vishal Sharma
1112ff7e7a Remove gitpod support temporarily (#834)
Gitpod environment has issues which have to be resolved before adding it back to the contributing docs.
2022-03-08 19:55:19 +05:30
palash-signoz
09fd877b2a Merge pull request #1 from pranshuchittora/pc/fix/pr-828
fix: add if condition for timeline interval
2022-03-07 16:32:42 +05:30
Pranshu Chittora
c04c0284dc fix: add if condition for timeline interval 2022-03-07 16:08:37 +05:30
Prashant Shahi
239cdad57b ci: 💚 fix e2e-k3s workflow with chart changes
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-07 16:07:38 +05:30
Pranshu Chittora
314f95a914 feat(FE): shared styled for custom styled components 2022-03-07 14:32:17 +05:30
Pranay Prateek
e070ba61cd Update repo-stats.yml 2022-03-06 13:23:49 +05:30
Pranay Prateek
79576b476f Create repo-stats.yml (#829) 2022-03-06 13:22:37 +05:30
Palash gupta
8e4f987cf6 bug: timeline interval is updated 2022-03-06 12:02:21 +05:30
Axay sagathiya
3fe3bde0c7 Fix: Update Documentation to configure front-end and run back-end. (#815)
* fix: add all the steps to run query-service

* fix: add step to add configuration to run frontend
2022-03-05 23:51:48 +05:30
Ankit Nayan
efe57ff15d Merge pull request #825 from SigNoz/release/v0.7.1
Release/v0.7.1
2022-03-04 14:11:41 +05:30
Ankit Nayan
b8a6a27fad release: v0.7.1 2022-03-04 13:23:30 +05:30
Ankit Nayan
fb3dbcf662 Merge pull request #819 from pranshuchittora/pc/feat/trace-detail/eclear
fix: expand and unexpand on active span path
2022-03-04 13:16:46 +05:30
Ankit Nayan
cb04979bb7 Merge pull request #823 from pranshuchittora/pc/fix/yarn-install-node-gyp-error
fix: install deps node-gyp error
2022-03-04 13:16:20 +05:30
Ankit Nayan
967b83a5d0 Merge pull request #824 from palash-signoz/dashboard-fixes
bug(dashboard): useCallback is removed
2022-03-04 13:16:02 +05:30
Palash gupta
3fd086db4d dashboard: useCallback is removed 2022-03-04 13:10:40 +05:30
Pranshu Chittora
9aedcc1777 fix: install deps node-gyp error 2022-03-04 12:49:13 +05:30
Ankit Nayan
0fb5b90e4e Merge pull request #822 from SigNoz/release/v0.7.0
Release/v0.7.0
2022-03-04 12:12:12 +05:30
Ankit Nayan
91bdb77a0e Revert "Release/v0.7.0 (#814)" (#820)
This reverts commit eb63b6da2a.
2022-03-04 12:08:54 +05:30
Pranshu Chittora
e7d2bb13dc fix: expand and unexpand on active span path 2022-03-04 11:34:33 +05:30
palash-signoz
39c3f67d86 Merge pull request #816 from palash-signoz/eslint
feat(eslint): eslint-plugin-react-hooks is added
2022-03-04 11:33:00 +05:30
palash-signoz
49d1015a72 Merge pull request #817 from palash-signoz/sonar-eslint-plugin
feat(eslint): sonar js plugin for eslint is added
2022-03-04 11:32:45 +05:30
Palash gupta
f3b2f30c82 feat(eslint): sonar js plugin for eslint is added 2022-03-04 10:07:52 +05:30
Palash gupta
ff9d81aefc feat: eslint-plugin-react-hooks is added 2022-03-04 10:04:06 +05:30
Prashant Shahi
eb63b6da2a Release/v0.7.0 (#814)
* feat(FE): dynamic step size of metrics page

* chore(tests): migrate to dayjs for generating timestamp

* bug: sorting of date is fixed

* feat: sorting filter is added

* chore: typo is fixed

* feat(backend): support custom events in span

* fix: encode event string to fix parsing at frontend

* chore: styles is updated for the not found button

* chore: update otel-collector to 0.43.0

* fix: remove encoding

* fix: set userId as distinctId if failed to fetch IP

* Fe: Feat/trace detail (#764)

* feat: new trace detail page flame graph

* feat: new trace detail page layout

* test: trace detail is wip

* chore: trace details in wip

* feat: trace detail page timeline component

* chore: spantoTree is updated

* chore: gantchart is updated

* chore: onClick is added

* chore: isSpanPresentInSearchString util is added

* chore: trace graph is updated

* chore: added the hack to work

* feat: is span present util is added

* chore: in span ms is added

* chore: tooltip is updated

* WIP: chore: trace details changes are updated

* feat: getTraceItem is added

* feat: trace detail page is updated

* feat: trace detail styling changes

* feat: trace detail page is updated

* feat: implement span hover, select, focus and reset

* feat: reset focus

* feat: spanId as query table and unfurling

* feat: trace details is updated

* chore: remove lodash

* chore: remove lodash

* feat: trace details is updated

* feat: new trace detail page styling changes

* feat: new trace detail page styling changes

* feat: improved styling

* feat: remove horizontal scrolling

* feat: new trace detail page modify caret icon

* chore styles are updated

* Revert "chore: Trace styles"

* chore styles are updated

* feat: timeline normalisation

* chore: remove mock data

* chore: sort tree data util is added and selected span component is updated

* chore: trace changes are updated

* chore: trace changes are updated

* chore: trace changes are updated

* feat: refactored time units for new trace detail page

* chore: remove mockdata

* feat: new trace detail page theming and interval loop fix

* chore: error tag is updated

* chore: error tag is updated

* chore: error tag is updated

* chore: error tag is updated

* chore: console is removed

* fix: error tag expand button

* chore: expanded panel is updated

* feat: scroll span from gantt chart intoview

* chore: trace detail is removed

Co-authored-by: Pranshu Chittora <pranshu@signoz.io>

* bug: Trace search bug is resolved (#741)

* bug: Trace search bug is resolved

* bug: Trace search bug is resolved

* chore: parseTagsToQuery is updated

* chore: parseTagsToQuery is updated

* chore: parseTagsToQuery is updated

* chore: parseTagsToQuery is updated

* chore: ️ add hotrod template and install/delete scripts (#801)

* chore: ️ add hotrod template and scripts

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* refactor:  conditionally compute image

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* fix: 🩹 add signoz namespace

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore: 🔨  fix namespace template in scripts

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* docs(hotrod): 📝 Add README for hotrod k8s

Signed-off-by: Prashant Shahi <prashant@signoz.io>

Co-authored-by: Ankit Nayan <ankit@signoz.io>

* chore(release): 📌 pin SigNoz and OtelCollector versions

Signed-off-by: Prashant Shahi <prashant@signoz.io>

Co-authored-by: Pranshu Chittora <pranshu@signoz.io>
Co-authored-by: Palash gupta <palash@signoz.io>
Co-authored-by: makeavish <makeavish786@gmail.com>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-03-04 01:41:49 +05:30
Prashant Shahi
c9b07ee5dd chore(release): 📌 pin SigNoz and OtelCollector versions
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-03-03 19:59:55 +05:30
Prashant Shahi
bda8ddc0e4 chore: ️ add hotrod template and install/delete scripts (#801)
* chore: ️ add hotrod template and scripts

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* refactor:  conditionally compute image

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* fix: 🩹 add signoz namespace

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore: 🔨  fix namespace template in scripts

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* docs(hotrod): 📝 Add README for hotrod k8s

Signed-off-by: Prashant Shahi <prashant@signoz.io>

Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-03-03 19:48:50 +05:30
palash-signoz
540a682bf0 bug: Trace search bug is resolved (#741)
* bug: Trace search bug is resolved

* bug: Trace search bug is resolved

* chore: parseTagsToQuery is updated

* chore: parseTagsToQuery is updated

* chore: parseTagsToQuery is updated

* chore: parseTagsToQuery is updated
2022-03-03 19:10:51 +05:30
palash-signoz
9f7c60d2c1 Fe: Feat/trace detail (#764)
* feat: new trace detail page flame graph

* feat: new trace detail page layout

* test: trace detail is wip

* chore: trace details in wip

* feat: trace detail page timeline component

* chore: spantoTree is updated

* chore: gantchart is updated

* chore: onClick is added

* chore: isSpanPresentInSearchString util is added

* chore: trace graph is updated

* chore: added the hack to work

* feat: is span present util is added

* chore: in span ms is added

* chore: tooltip is updated

* WIP: chore: trace details changes are updated

* feat: getTraceItem is added

* feat: trace detail page is updated

* feat: trace detail styling changes

* feat: trace detail page is updated

* feat: implement span hover, select, focus and reset

* feat: reset focus

* feat: spanId as query table and unfurling

* feat: trace details is updated

* chore: remove lodash

* chore: remove lodash

* feat: trace details is updated

* feat: new trace detail page styling changes

* feat: new trace detail page styling changes

* feat: improved styling

* feat: remove horizontal scrolling

* feat: new trace detail page modify caret icon

* chore: styles are updated

* Revert "chore: Trace styles"

* chore: styles are updated

* feat: timeline normalisation

* chore: remove mock data

* chore: sort tree data util is added and selected span component is updated

* chore: trace changes are updated

* chore: trace changes are updated

* chore: trace changes are updated

* feat: refactored time units for new trace detail page

* chore: remove mockdata

* feat: new trace detail page theming and interval loop fix

* chore: error tag is updated

* chore: error tag is updated

* chore: error tag is updated

* chore: error tag is updated

* chore: console is removed

* fix: error tag expand button

* chore: expanded panel is updated

* feat: scroll span from gantt chart into view

* chore: trace detail is removed

Co-authored-by: Pranshu Chittora <pranshu@signoz.io>
2022-03-03 19:04:23 +05:30
palash-signoz
80a06300ff Merge pull request #703 from SigNoz/pranshuchittora/feat/dynamic-step-size
feat(FE): Dynamic step size of metrics page
2022-03-03 18:52:15 +05:30
Ankit Nayan
7fae5207d1 Merge pull request #718 from palash-signoz/716-sorting
bug(FE): sorting of date is fixed
2022-03-03 18:28:50 +05:30
Ankit Nayan
b4f781ad47 Merge pull request #800 from palash-signoz/not-found-button-style
chore: styles are updated for the not found button
2022-03-03 18:25:13 +05:30
Ankit Nayan
d9468d438d Merge pull request #791 from palash-signoz/782-typo
chore: typo is fixed
2022-03-03 18:24:35 +05:30
Ankit Nayan
25559c8781 Merge pull request #796 from SigNoz/feat/support-custom-events
feat(backend): support custom events in span
2022-03-03 18:23:44 +05:30
Ankit Nayan
aa760336d0 Merge pull request #803 from SigNoz/chore/update-otelcollector-0.43.0
chore: update otel-collector to 0.43.0
2022-03-03 18:23:03 +05:30
Ankit Nayan
46aaa8199d Merge pull request #811 from ankitnayan/fix/na-telemetry
fix: set userId as distinctId if failed to fetch IP
2022-03-03 18:14:58 +05:30
Ankit Nayan
10faa6f42b fix: set userId as distinctId if failed to fetch IP 2022-03-03 18:11:23 +05:30
makeavish
2d5ad346a6 fix: remove encoding 2022-03-03 14:45:43 +05:30
makeavish
207360e074 chore: update otel-collector to 0.43.0 2022-03-03 13:23:49 +05:30
Palash gupta
a4b954e304 chore: styles are updated for the not found button 2022-03-02 18:32:24 +05:30
makeavish
3e24e371f4 fix: encode event string to fix parsing at frontend 2022-03-01 16:16:45 +05:30
makeavish
bdeadaeff6 feat(backend): support custom events in span 2022-02-28 16:51:58 +05:30
Palash gupta
43b39f1a7c chore: typo is fixed 2022-02-28 09:39:30 +05:30
Palash gupta
6ab7fea8de feat: sorting filter is added 2022-02-27 16:56:18 +05:30
Palash gupta
05371085f9 Merge branch 'develop' into 716-sorting 2022-02-27 16:30:44 +05:30
Ankit Nayan
abe4940e74 Merge pull request #777 from SigNoz/release/v0.6.2
Release/v0.6.2
2022-02-25 22:15:05 +05:30
Ankit Nayan
938f42bd4f release: v0.6.2 2022-02-25 22:05:30 +05:30
Ankit Nayan
1652f35f1a release: v0.6.2 2022-02-25 21:28:12 +05:30
Ankit Nayan
3a67d0237c Merge pull request #769 from ankitnayan/feat/opt-out
feat: adding org name and adding opt-out
2022-02-25 20:42:27 +05:30
Ankit Nayan
7888865ddc feat: adding org name and adding opt-out 2022-02-25 20:40:38 +05:30
Ankit Nayan
25fb0deb8d Merge pull request #572 from palash-signoz/remove-duplicate-condition-bundle-analyser
fix: webpack config is updated
2022-02-25 20:10:05 +05:30
Ankit Nayan
b067db633a Merge pull request #568 from palash-signoz/toggle-setting-tabs
Feat(UI): toggle tabs is updated in the setting tab
2022-02-25 20:09:01 +05:30
Ankit Nayan
9094f20070 Merge pull request #567 from palash-signoz/fix-external-link
fix: code vulnerability is fixed
2022-02-25 20:08:03 +05:30
Ankit Nayan
9129704a93 Merge pull request #731 from palash-signoz/clear-all-with-user-filter
bug: clear all now moves with selected filter rather than user selected filter
2022-02-25 18:44:37 +05:30
Ankit Nayan
fc244edc50 Update CODEOWNERS 2022-02-25 18:07:07 +05:30
Ankit Nayan
cbe635bc94 Merge pull request #733 from palash-signoz/trace-test
test(Trace): Trace page test are added
2022-02-25 18:04:38 +05:30
Ankit Nayan
bb0a7a956a Merge pull request #705 from palash-signoz/bug-useDebouncedFunction
bug: useDebounce function is fixed
2022-02-25 17:59:48 +05:30
Ankit Nayan
2800b021e6 Merge pull request #689 from SigNoz/feat/tagValueSuggestion
feat: tag value suggestion API
2022-02-25 17:44:49 +05:30
Ankit Nayan
74170ffb4a Merge pull request #750 from SigNoz/fix/telemetry-bug
fix: avoid panic by handling getOutboundIP() error
2022-02-25 17:16:29 +05:30
Vishal Sharma
ab72e92fc6 return string(ip) instead of empty string on error 2022-02-25 17:11:45 +05:30
Ankit Nayan
8b224dd59c Merge pull request #753 from udasitharani/749-feedback-fab
Removing Feedback FAB
2022-02-25 17:06:46 +05:30
Ankit Nayan
ce77a3cd80 Merge pull request #758 from SigNoz/makeavish-patch-1
Add section for CPU architecture
2022-02-25 17:06:08 +05:30
Ankit Nayan
79fa4e7b74 Merge pull request #762 from axaysagathiya/issue-740
Fix: update documentation for query-service.
2022-02-25 17:05:45 +05:30
Ankit Nayan
90e97856a9 Merge pull request #755 from palash-signoz/bug-trace-table
bug: onClick cursor is added and display date is converted to minutes and 12hr format
2022-02-25 14:10:13 +05:30
Ankit Nayan
61fb11a232 Merge pull request #756 from SigNoz/feat/addHasErrorColumn
feat: add hasError tag in searchTraces response
2022-02-25 14:07:47 +05:30
axaysagathiya
729ea586a8 fix the spelling mistakes. 2022-02-24 17:27:22 +05:30
axaysagathiya
eb28459847 Fix: update documentation for query-service.
fixes #740
2022-02-24 15:10:52 +05:30
Vishal Sharma
10fc3bf456 Add section for CPU architecture 2022-02-24 09:57:03 +05:30
makeavish
244a07aef8 feat: add hasError tag in searchTraces response 2022-02-23 12:13:13 +05:30
Palash gupta
92cf617a53 bug: onClick cursor is added and display date is converted to minutes and 12hr format 2022-02-23 08:22:23 +05:30
Udasi Tharani
1eb333a890 fixing collapse button background 2022-02-22 22:52:12 +05:30
Udasi Tharani
ebc280db0e Merge branch '678-slack-community' of https://github.com/palash-signoz/signoz into 749-feedback-fab 2022-02-22 22:35:57 +05:30
Udasi Tharani
3beb7f1843 removing feedback fab button 2022-02-22 22:35:29 +05:30
makeavish
56c9ea5430 fix: avoid panic by handling getOutboundIP() error 2022-02-22 17:49:02 +05:30
Pranay Prateek
05c79b7119 Update .gitpod.yml 2022-02-20 18:03:54 +05:30
Palash gupta
a6da00f801 test: trace test is updated 2022-02-16 15:36:20 +05:30
Palash gupta
0514c5035e test: trace test is updated 2022-02-16 14:20:48 +05:30
Palash gupta
2df058825a Merge branch 'develop' into toggle-setting-tabs 2022-02-16 09:30:57 +05:30
Palash gupta
a07b8999c0 test: initial test is updated 2022-02-16 09:27:06 +05:30
Palash gupta
5405f0d9ed chore: initial response for the trace api is added 2022-02-16 09:00:41 +05:30
Palash gupta
00cefe2306 bug: clear all now moves with selected filter rather than user selected filter 2022-02-16 08:13:54 +05:30
Ankit Nayan
8e621b6a70 Merge pull request #727 from SigNoz/prashant/hotrod-yaml
chore: 🔧 update default jaeger endpoint in hotrod manifest
2022-02-15 20:49:08 +05:30
Ankit Nayan
5aa46c7e96 Merge pull request #723 from SigNoz/prashant/hotrod-yaml
chore: 🔧 update default jaeger endpoint in hotrod manifest
2022-02-15 17:25:39 +05:30
Ankit Nayan
c367f928c5 Merge pull request #581 from siddhantparekh/issue-561
fix(FE) Removed refresh time filter from trace-detail page
2022-02-14 12:13:13 +05:30
Prashant Shahi
4c2f287e06 chore: 🔧 update default jaeger endpoint in hotrod manifest
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-14 05:00:48 +05:30
Palash gupta
567c81c0f9 chore: button text is updated 2022-02-13 12:05:20 +05:30
Palash gupta
31f0a9f0b6 feat: slack community link is updated 2022-02-12 15:17:40 +05:30
Palash gupta
bad14a7d09 feat: slack community link is updated 2022-02-12 15:01:38 +05:30
Palash gupta
28e8cffeaa bug: sorting of date is fixed 2022-02-12 14:26:03 +05:30
Ankit Nayan
3302a84ab5 Merge pull request #714 from SigNoz/release/v0.6.1
Release: v0.6.1
2022-02-11 16:57:48 +05:30
Ankit Nayan
5510c67dbf release: v0.6.1 2022-02-11 16:22:51 +05:30
Ankit Nayan
347d73022a Merge branch 'main' into develop 2022-02-11 16:17:00 +05:30
Ankit Nayan
79e43f594d Merge pull request #702 from palash-signoz/signup-unmount-state
bug: signup state is now not toggled when component is not mounted
2022-02-11 16:14:41 +05:30
Ankit Nayan
0df3052e13 Merge pull request #677 from SoniaisMad/fix/dashboard-api-call-apply-button
fix: Dashboard page does not call API on clicking on apply button
2022-02-11 16:09:19 +05:30
Pranshu Chittora
baba0c389c chore(tests): migrate to dayjs for generating timestamp 2022-02-11 16:08:54 +05:30
Ankit Nayan
43983bc643 Merge pull request #701 from palash-signoz/filename-hashed
feat: now webpack filename are hashed
2022-02-11 16:05:18 +05:30
Ankit Nayan
4b9ef95f7a Merge pull request #706 from palash-signoz/700-widget-error
bug(FE): error state in the bar panel is added
2022-02-11 16:03:44 +05:30
Palash gupta
3db790c3c7 bug: merge conflict is resolved 2022-02-11 15:52:04 +05:30
Palash gupta
7d68e9cebc Merge branch 'develop' into 700-widget-error 2022-02-11 15:49:04 +05:30
Ankit Nayan
1e6df307a0 Merge pull request #712 from palash-signoz/full-view-graph-legend-fix
bug: full view legend is now fixed
2022-02-11 15:42:38 +05:30
Ankit Nayan
e6a53d6c06 Merge pull request #711 from palash-signoz/dashboard-graphs
bug: dashboard graph is now fixed
2022-02-11 15:38:35 +05:30
Palash gupta
db9052ea6e bug: full view legend is now fixed 2022-02-11 15:00:00 +05:30
Palash gupta
45eb201efd bug: dashboard graph is now fixed 2022-02-11 14:09:05 +05:30
Palash gupta
744dfd010a chore: modal is updated in the error state 2022-02-11 12:00:46 +05:30
Palash gupta
dc737f385a bug: error state in the bar panel is added 2022-02-10 22:20:31 +05:30
Palash gupta
0d98a4fd0c bug: useDebounce function is fixed 2022-02-10 22:00:35 +05:30
Pranshu Chittora
09344cfb44 feat(FE): dynamic step size of metrics page 2022-02-10 17:46:33 +05:30
Palash gupta
0ae5b824d9 bug: signup state is now not toggled when component is not mounted 2022-02-10 16:44:38 +05:30
Palash gupta
828bd3bac6 feat: now webpack filename are hashed 2022-02-10 16:37:14 +05:30
Ankit Nayan
10f4fb53ac Merge pull request #699 from Tazer/fix-constants-alertmanager
fix: added support for custom alertmanager url
2022-02-10 11:46:12 +05:30
Patrik
fdc8670fab fix: added support for custom alertmanager url 2022-02-09 22:05:27 +01:00
Pranay Prateek
d7fa503f04 Update README.md 2022-02-10 00:28:00 +05:30
Ankit Nayan
b37bc0620d Merge pull request #696 from SigNoz/release/v0.6.0
Release/v0.6.0
2022-02-09 23:32:52 +05:30
Ankit Nayan
9bf37b391e release: v0.6.0 2022-02-09 21:52:59 +05:30
Prashant Shahi
a5bf4c1a61 ci(push): 👷 push workflow update (#695)
* ci(push): 👷 remove prefix v for docker images

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* ci(push): 👷 remove path trigger and update tags regex

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-09 21:46:07 +05:30
Prashant Shahi
420f601a68 ci(push): 👷 add develop branch and remove second tag (#694)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-09 20:35:25 +05:30
palash-signoz
03eac8963f chore: Env fix (#693)
* bug(UI): frontend build is fixed

* chore; build is fixed

* chore: build is fixed
2022-02-09 17:22:21 +05:30
palash-signoz
ffd2c9b466 bug(UI): frontend build is fixed (#692) 2022-02-09 16:47:34 +05:30
Prashant Shahi
51b11d0119 fix(makefile): 🩹 buildx fix for pushing images 2022-02-09 16:00:53 +05:30
Ankit Nayan
c5ee8cd586 chore: merging main to develop 2022-02-09 14:59:40 +05:30
palash-signoz
acbe7f91cb bug(UI): optimisation config is updated (#650) 2022-02-09 11:50:29 +05:30
Pranshu Chittora
07f4fcb216 fix(FE): Sidebar navigation when collapsed (#686)
* Update README.md

* ci(k3s): 💚 fix correct raw github URL for hotrod (#661)

Signed-off-by: Prashant Shahi <prashant@signoz.io>
(cherry picked from commit d92a3e64f5)

* chore: 🚚 rename config .yaml to yml for behaviorbot (#673)

Signed-off-by: Prashant Shahi <prashant@signoz.io>
(cherry picked from commit cd04a39d3d)

* fix(FE): sidebar navigation when collapsed

Co-authored-by: Pranay Prateek <pranay@signoz.io>
Co-authored-by: Prashant Shahi <prashant@signoz.io>
Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-02-09 11:48:54 +05:30
palash-signoz
1ee2e302e2 Feature(FE): signup page (#642)
* chore: icon is updated

* feat: signup page design is updated

* chore: set get user pref is added

* chore: svg is added

* feat: signup page is updated

* feat: signup page is updated
2022-02-09 11:44:08 +05:30
palash-signoz
be8ec756c6 Feat (UI) :Trace Filter page is updated (#684)
* dayjs and less loader is added

* webpack config is added

* moment is removed

* useDebounceFunction hook is made

* old components and reducer is removed

* search is updated

* changes are updated for the trace page as skeleton is ready

* chore: method is changed from dayjs

* convertObject into params is updated

* initial filters are updated

* initial and final filter issue is fixed

* selection of the filter is updated

* filters are now able to selected

* checkbox disable when loading is in progress

* chore: getFilter filename is updated

* feat: clear all and expanded filter is updated

* chore: clearAll and expand panel is updated

* feat: useClickOutSide hook is added

* chore: get filter url becomes encoded

* chore: get tag filters is added

* feat: search tags is wip

* bug: global max,min on change bug is resolved

* chore: getInitial filter is updated

* chore: expand panel is updated

* chore: get filter is updated

* chore: code smells is updated

* feat: loader is added in the panel header to show the loading

* chore: search tags in wip

* chore: button style is updated

* chore: search in wip

* chore: search ui is updated from the global state

* chore: search in wip

* chore: search is updated

* chore: getSpansAggregate section is updated

* useOutside click is updated

* useclickoutside hook is updated

* useclickoutside hook is updated

* parsing is updated

* initial filter is updated

* feat: trace table is updated

* chore: trace table is updated

* chore: useClickout side is updated for the search panel

* feat: unnecessary re-render and code is removed

* chore: trace table is updated

* custom component is removed and used antd search component

* error state is updated over search component

* chore: search bar is updated

* chore: left panel search and table component connection is updated

* chore: trace filter config is updated

* chore: for graph reducer is updated

* chore: graph is updated

* chore: table is updated

* chore: spans is updated

* chore: reducer is updated

* chore: graph component is updated

* chore: number of graph condition is updated

* chore: input and range slider is now sync

* chore: duration is updated

* chore: clearAllFilter is updated

* chore: duration slider is updated

* chore: duration is updated and panel body loading is updated

* chore: slider container is added to add padding from left to right

* chore: Select filter is updated

* chore: duration filter is updated

* chore: Divider is added

* chore: none option is added in both the dropdown

* chore: icon are updated

* chore: added padding in the pages component

* chore: none is updated

* chore: antd notification is added in the redux action

* chore: some of the changes are updated

* chore: display value is updated for the filter panel heading

* chore: calculation is memoised

* chore: utils function are updated in trace reducer

* chore: getFilters are updated

* tracetable is updated

* chore: actions is updated

* chore: metrics application is updated

* chore: search on clear action is updated

* chore: serviceName panel position is updated

* chore: added the label in the duration

* bug: edge case is fixed

* chore: some more changes are updated

* chore: some more changes are updated

* chore: clear all is fixed

* chore: panel heading caret is updated

* chore: checkbox is updated

* chore: isError handler is updated over initial render

* chore: traces is updated

* fix: tag search is updated

* chore: loading is added in the trace table and sorting is introduced in the trace table

* bug: multiple render for the key is fixed

* Bug(UI): new suggestion is updated

* feat: isTraceFilterEnum function is made

* bug: new changes are updated

* chore: get Filter is updated

* chore: application metrics params is updated

* chore: error is added in the application metrics

* chore: filters is updated

* chore: expand panel edge case is updated

* chore: expand panel is updated and utils: updateUrl function is updated

* chore: reset trace state when unmounted

* chore: getFilter action is updated

* chore: api duration is updated

* chore: useEffect dependency is updated

* chore: filter is updated with the new arch

* bug: trace table issue is resolved

* chore: application rps url is updated for trace

* chore: duration filter is updated

* chore: search key is updated

* chore: filter is added in the search url

* bug: filter is fixed

* bug: filter is fixed

* bug: filter is fixed

* chore: reset trace data when unmounted

* chore: TopEnd point is added

* chore: getInitialSpanAggregate action is updated

* chore: application url is updated

* chore: no tags placeholder is updated

* chore: flow from customer is now fixed

* chore: search is updated

* chore: select all button is removed

* chore: prev filter is removed to show the result

* chore: config is updated

* chore: checkbox component is updated

* chore: span filter is updated

* chore: graph issue is resolved

* chore: selected is updated

* chore: all filter are selected

* feat: new trace page is updated

* chore: utils is updated

* feat: trace filter page is updated

* chore: duration is now fixed

* chore: duration clear filter is added

* chore: onClickCheck is updated

* chore: trace filter page is updated

* bug: some of bugs are resolved

* chore: duration body is updated

* chore: topEndPoint and application query is updated

* chore: user selection is updated in the duration filter

* chore: panel duration is updated

* chore: panel duration is updated

* chore: duration bug is solved

* chore: function display value is updated
2022-02-09 11:31:13 +05:30
Prashant Shahi
2de6574835 chore(k3s): 🩹 set up hotrod at start 2022-02-08 23:29:26 +05:30
Prashant Shahi
1acf009e62 docs: 📝 use 3301 for frontend port in README 2022-02-08 23:14:36 +05:30
Vishal Sharma
9a00dca749 feat: tag value suggestion API 2022-02-08 23:05:50 +05:30
Prashant Shahi
d22d1d1c3b refactor(ports): 💥 avoid exposing unnecessary ports and update frontend port to 3301 (#679)
* refactor(compose-yaml): ♻️ avoid unused and unnecessary ports mapping from compose files

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* refactor(frontend): 💥 change frontend port to 3301
BREAKING CHANGE:

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-08 22:47:06 +05:30
Prashant Shahi
6342e1cebc docs(deploy): 📝 Add README docs for deploy (#669)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-08 17:50:02 +05:30
Vishal Sharma
f74467e33c fix: exclude operation in trace APIs (#682) 2022-02-08 17:45:40 +05:30
palash-signoz
821b80acde chore: external address query is updated (#685) 2022-02-08 16:37:06 +05:30
Prashant Shahi
d41502df98 chore(helm-charts): 🚚 migrate helm charts to SigNoz/charts repository (#667)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-08 15:35:40 +05:30
Ankit Nayan
c1d4dc2ad6 fix: exclude added for status field (#681)
* fix: exclude added for status field

* chore: extracted status filtering to a function
2022-02-08 13:28:56 +05:30
Siddhant Khare
07183d5189 Gitpodify the Signoz (#634)
* chore(docs): updated lines of frontend & query sec

* fix: update baseURL for local & gitpod

* chore: allow all for dev to run on https

* chore(docs): add maintainer note at docker-compose

* chore: update gitignore to ignore .db & logs

* chore: upd lines of fe & query-service & notes

* feat: gitpodify the signoz with all envs. & ports

* fix: relative path of .scripts dir

* chore(ci): distribute tasks in gitpod.yml

* fix: run docker image while init

* fix: add empty url option for `baseURL`
2022-02-08 10:27:52 +05:30
Prashant Shahi
57992134bc chore: 🚚 rename config .yaml to yml for behaviorbot (#673)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
(cherry picked from commit cd04a39d3d)
2022-02-07 12:52:10 +05:30
Sonia Manoubi
e0a7002a29 fix: api call on apply button 2022-02-04 15:55:36 +01:00
Prashant Shahi
cd04a39d3d chore: 🚚 rename config .yaml to yml for behaviorbot
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-02 16:41:13 +05:30
Prashant Shahi
c372eac3e3 docs(contributing): 📝 Add Helm Chart contribute instructions (#668)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-02 15:22:35 +05:30
Vishal Sharma
fdd9287847 Fix 414 errors trace filter API (#660)
* fix: change trace filter APIs method from GET to POST

* fix: error filter param

* fix: json of aggregate params
2022-02-02 11:40:30 +05:30
Prashant Shahi
24f1404741 fix(compose-yaml): 🩹 infer max-file logging option as string
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-02-02 03:41:56 +05:30
Prashant Shahi
48ac20885f refactor(query-service): ♻️ Update ldflags and Makefile for dynamic versioning (#655)
* refactor(query-service): ♻️  Update ldflags and Makefile for dynamic versioning

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore: 🎨 Use blank spaces indentation in build details

* chore(query-service): 🎨  small build details format changes

* refactor(query-service): ♻️ refactor ldflags for go build
2022-02-01 23:03:16 +05:30
Mustaque Ahmed
ebb1c2ac79 fix: remove table default.signoz_spans from the codebase (#656)
* fix: remove table `default.signoz_spans` from the codebase

* fix: remove spanTable and archiveSpanTable from clickHouseReader
2022-02-01 10:26:31 +05:30
Prashant Shahi
e3c4bfce52 ci(k3s): 💚 fix correct raw github URL for hotrod (#661)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
(cherry picked from commit d92a3e64f5)
2022-01-31 19:07:45 +05:30
Prashant Shahi
d92a3e64f5 ci(k3s): 💚 fix correct raw github URL for hotrod
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-31 18:24:05 +05:30
Prashant Shahi
24162f8f96 chore(log-option): 🔧 set hotrod log options for hotrod app (#659)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-30 23:39:06 +05:30
Pranay Prateek
c7ffac46f5 Update README.md 2022-01-30 22:46:01 +05:30
Prashant Shahi
0d1526f6af docs: 📝 reverting minor docs changes
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-29 01:33:34 +05:30
Prashant Shahi
b0d68ac00f chore: install script improvements (#652)
* chore(install): 🔨  install script improvement

- remove ipify
- migrate from PostHog to Segment
- single function for sending event

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore: ⚰️ remove commented code

* chore(install): 🛂 update the auth token

* chore(install): 🔧 set context.default config true

* Revert "chore(install): 🔧 set context.default config true"

This reverts commit 0704013ac7.

* chore(install): 🔨 use uname sha for installation id

* refactor(slack): 🚚 use signoz.io/slack URL

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-29 01:20:25 +05:30
Prashant Shahi
8f0df5e1e3 ci(k3s): 🩹 simple fix as per the helm chart changes (#651) 2022-01-28 22:59:07 +05:30
Vishal Sharma
16fbbf8a0e exclude filter support and fix for not sending null string in groupby for aggregates API (#654)
* feat: add support to exclude filter params

* fix: null string in group by
2022-01-28 22:56:54 +05:30
Prashant Shahi
e823987eb0 build(docker): Two compose files for arm and amd (#638)
* build(docker): 🔨 Two compose files for arm and amd

* refactor(docker): ⚰️ remove env file from install script

* refactor: ⚰️ remove .gitkeep files from data folder

* chore(build): ⚰️ remove env files and update contributing docs

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* build: ♻️ use two compose files in Makefile

Signed-off-by: Prashant Shahi <prashant@signoz.io>

* chore(docker): 🚚 revert back to using same dir and pin image tag

* Revert "chore: Add migration file path in otel collector config (#628)"

This reverts commit 8467d6a00c.

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-27 22:34:26 +05:30
Devesh Kumar
f5abab6766 Fixed svg color mismatch in light mode and dark mode (#504)
* fixed svg color mismatch in light mode and dark mode

Added props in parent file

fixed and added fillColor as props to the highest order of parent

* set React.CSSProperties

props renamed and code reused
2022-01-26 21:55:48 +05:30
Devesh Kumar
b55c362bbb Fixed toggle Button contrast in Light Theme (#505)
* fixed toggle Button contrast in Light Theme

refactored to styled props and fixed theme

set defaultChecked to isDarkMode value

* Refactored boolean logic
2022-01-26 21:55:11 +05:30
Anik Das
6e6fd9b44b closes #569: critical css using critters (#570)
* feat(ui): critical css inline using critters

Signed-off-by: Anik Das <anikdas0811@gmail.com>

* fix: remove duplicate preload key

Signed-off-by: Anik Das <anikdas0811@gmail.com>
2022-01-26 21:53:03 +05:30
palash-signoz
50a88a8726 BUG: refresh button is now fixed (#590)
* chore: issue is fixed

* chore: unused import is removed
2022-01-26 21:46:59 +05:30
Vishal Sharma
0f4e5c9ef0 change migration file path (#630)
* chore: Add migration file path in otel collector config

* Update otel-collector-config.yaml
2022-01-26 21:43:15 +05:30
Ankit Nayan
be5d1f0090 feat: adding disable and anonymous functionality to telemetry collected (#637)
* chore: changed lib

* chore: changed lib

* chore: changed lib

* chore: changed lib

* chore: changes in params

* chore: changes in params

* chore: moving telemetry to a separate package

* feat: enabling telemetry via env var

* chore: removing posthog api_key

* feat: send heartbeat every 6hr

* feat: enabled version in application

* feat: added getter and setter apis and struct for user preferences

* feat: added version to properties to event

* feat: added apis to set and get user preferences and get version

* chore: refactored get and set userPreferences apis to dao pattern

* chore: added checks for telemetry enabled and anonymous during initialization

* chore: changed anonymous user functionality

* chore: sanitization

* chore: added uuid for userPreferences to send when user is anonymous
2022-01-26 21:40:44 +05:30
Vishal Sharma
0ab91707e9 New Trace Filter Page API changes (Backend) (#646)
* build: integrate sql migrations for clickhouse

* feat: support error/exception attributes for trace

* chore: fixing dependencies for docker go client libs

* feat: get trace filter api checkpoint

* chore: fixing dependencies for go-migrate

* feat: add new columns

* feat: move migrate run from docker to code

* fix: migration file 404 issue

* feat: getSpanFilter API

* fix: migrate version naming bug

* chore: change url param format to array

* feat: add getTagFilter API

* feat: add getFilteredSpans API

* fix: using OFFSET in sqlx driver

* feat: aggregates API on getFilteredSpan, use IN and NOT IN for tag filtering

* feat: add more function support to span aggregate API

* fix: null component edge case

* feat: groupBy support for filteredSpanAggregate

* feat: add function param to span aggregate API

* feat: add support to return totalSpans in getFilteredSpans API

* fix: don't return null string as keys in span filters

* chore: remove SQL migrations(moved to otel collector)

* fix: null string issue in aggregate API

* Merge main

* fix: trace API db query param

* fix: signoz sql db path

* fix: case when both error and ok status are selected

Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-01-26 20:41:59 +05:30
Prashant Shahi
dcb17fb33a ci(k3s): k3s CI workflow enhancements (#643)
* ci(k3s): k3s CI workflow enhancements

* ci(k3s): 💚 Fix the names of deployment and statefulset

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-26 12:48:29 +05:30
Kartik Verma
9c07ac376d fix: update Contributing and Makefile for dev-setup (#599) 2022-01-25 16:31:19 +05:30
Prashant Shahi
40f9a4a5aa chore: ♻️ single manifest file for the hotrod (#639)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-25 12:29:55 +05:30
Prashant Shahi
8059fe14da chore: 🔧 Add behaviorbot config YAML (#640)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-25 11:33:08 +05:30
Ankit Nayan
2f665fcc63 chore: added version in clickhouse tag 2022-01-23 14:57:18 +05:30
Ankit Nayan
0e6a1082dc fix: init-db.sql restored 2022-01-23 14:53:44 +05:30
Vishal Sharma
1568075769 chore: update clean signoz command in install script (#631) 2022-01-22 23:16:09 +05:30
Prashant Shahi
cbbd3ce6ad chore: add codeowners for automatic review request (#633)
Signed-off-by: Prashant Shahi <prashant@signoz.io>
2022-01-22 16:08:19 +05:30
Vishal Sharma
af2399e627 Feat/support error tab page (#626)
* build: integrate sql migrations for clickhouse

* feat: support error/exception attributes for trace

* chore: fixing dependencies for docker go client libs

* chore: fixing dependencies for go-migrate

* feat: move migrate run from docker to code

* fix: migration file 404 issue

* feat: error tab APIs

* chore: move migrations file

* chore: remove SQL migration (shifted to otel collector)

* chore: remove sql migration configs from dockerfile

Co-authored-by: Ankit Nayan <ankit@signoz.io>
2022-01-21 00:31:58 +05:30
Ankit Nayan
50e8f32291 Revert "chore: Add migration file path in otel collector config (#628)" (#629)
This reverts commit 8467d6a00c.
2022-01-20 00:22:21 +05:30
Vishal Sharma
8467d6a00c chore: Add migration file path in otel collector config (#628)
* chore: Add migration file path in otel collector config

* Update otel-collector-config.yaml
2022-01-20 00:16:46 +05:30
Prashant Shahi
cac31072a9 ci: use pull_request_target for remove label permission (#618) 2022-01-17 21:35:44 +05:30
Yoni Bettan
8b47f4af21 ci: adding a dummy push to check if the image push workflow works (#609)
Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-17 16:12:18 +05:30
Yoni Bettan
b0b235cbc5 ci: making some improvements to e2e-k3s workflow (#615)
Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-17 15:59:27 +05:30
Yoni Bettan
e0e4c7afe6 ci: filtering 'push' workflow to main and release branches (#614)
Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-17 15:17:31 +05:30
Yoni Bettan
1eb0013352 ci: removing file filtering from some workflows (#610)
Files other than the src files, such as the deployment YAMLs and the Makefile,
can also affect the correctness of the code.

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-17 11:40:17 +05:30
Prashant Shahi
68d68c2b57 fix(frontend): 📌 pin mini-css-extract-plugin version to 2.4.5 to fix breaking builds (#612) 2022-01-16 14:55:14 +05:30
Yoni Bettan
274f1fe07f Merge pull request #605 from ybettan/local-builds
ci: removing the timeout from the rollout command
2022-01-12 12:23:50 +02:00
Yoni Bettan
51dc54bcb9 ci: removing the timeout from the rollout command
It makes the flow fail for some reason.

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-12 11:45:09 +02:00
Yoni Bettan
0bc82237fc ci: making sure the sample-application is up before running the job (#603)

* tmp - timeout

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-12 14:57:14 +05:30
Yoni Bettan
53045fc58e ci: inject local images to k3d instead of publishing them (#600) 2022-01-11 14:42:56 +05:30
Yoni Bettan
9808c03d6d Merge pull request #593 from ybettan/push-workflow-pr
ci: adding 'push' workflow
2022-01-10 12:38:50 +02:00
Yoni Bettan
cf5036bc31 Merge pull request #597 from ybettan/helm-wait
ci: using --wait helm install flag instead of waiting manually
2022-01-10 12:38:43 +02:00
Yoni Bettan
e08bf85edf ci: using --wait helm install flag instead of waiting manually
Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-10 12:28:05 +02:00
Yoni Bettan
e555e05f58 ci: adding 'push' workflow
This workflow will push up to 2 images with 4 tags, depending on
whether they changed since the last push.

* query-service:<git sha>
* query-service:master

* frontend:<git sha>
* frontend:master

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-09 13:30:33 +02:00
Yoni Bettan
88c3f50cb1 Merge pull request #596 from ybettan/remove-label
ci: requiring the 'ok-to-test' label for running some workflows
2022-01-09 13:27:54 +02:00
Yoni Bettan
e4ef059d19 ci: requiring the 'ok-to-test' label for running some workflows
As of now, the 'e2e-k3s' workflow will require the 'ok-to-test' label in
order to get triggered.

In addition, on each change to the relevant files in the PR, GitHub will
remove the label, and it will need to be added again.

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-08 21:19:04 +02:00
Yoni Bettan
b433d4ad4a Revert "ci: requiring the 'ok-to-test' label for running some workflows (#592)" (#595)
This reverts commit b3d5d6c281.
2022-01-08 16:37:38 +05:30
Yoni Bettan
b3d5d6c281 ci: requiring the 'ok-to-test' label for running some workflows (#592)
* ci: adding 'e2e' GH workflows

The flow consists of multiple steps:

    * build 'query-service' and 'frontend' images and push them to the image registry
    * deploy a disposable k3s cluster
    * deploy the app on the cluster
    * set a tunnel to allow accessing the UI from the web browser

Signed-off-by: Yoni Bettan <ybettan@redhat.com>

* ci: requiring the 'ok-to-test' label for running some workflows

As of now, the 'e2e' workflow will require the 'ok-to-test' label in
order to get triggered.

In addition, on each change to the PR, GitHub will remove the label,
and it will need to be added again.

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-08 12:51:46 +05:30
Yoni Bettan
9a2aa7bcbd ci: adding 'e2e' GH workflows (#579)
The flow consists of multiple steps:

    * build 'query-service' and 'frontend' images and push them to the image registry
    * deploy a disposable k3s cluster
    * deploy the app on the cluster
    * set a tunnel to allow accessing the UI from the web browser

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-08 12:44:14 +05:30
Pranay Prateek
63c2e67cfc Update CONTRIBUTING.md 2022-01-06 21:35:31 +05:30
Yoni Bettan
1b398039e3 Swapping images on the "build" GH workflow. (#578)
query-service job is currently building flattener and flattener job is
currently building query-service.

This PR should fix that mix.

Signed-off-by: Yoni Bettan <ybettan@redhat.com>
2022-01-04 16:17:16 +05:30
Siddhant Parekh
e7253b7ca6 fix(FE): Removed refresh time filter from trace-detail page 2022-01-04 13:40:03 +05:30
Palash gupta
dddba68bfd fix: webpack config is updated 2021-12-28 20:51:36 +05:30
Palash gupta
40c287028b toggle tabs is updated 2021-12-27 23:55:38 +05:30
Palash gupta
30124de409 fix: code vulnerability is fixed 2021-12-27 13:07:10 +05:30
Ankit Nayan
f9c214bd53 release: v0.5.4 2021-12-24 13:25:14 +05:30
pal-sig
7d50895464 chore(query): query is updated for application and external graphs (#562) 2021-12-24 13:16:05 +05:30
Anurag Gupta
2031377dcb Feat(UI): Added webpack bundle analyser for dev and prod (#503)
* Added webpack bundle analyser for dev and prod

* Changes updated
2021-12-24 13:14:50 +05:30
pal-sig
d7eb9f7d0d fix(UI): cross env now will work fine (#485) 2021-12-24 13:14:41 +05:30
pal-sig
7f800c94ae fix(UI): latest timestamp bug is resolved (#494)
2021-12-24 12:00:26 +05:30
Aryan Shridhar
0f39643a56 fix(UI): Restore theme preference after reloading (#469) (#473)
* Added helper functions for theme configs under lib/theme.
* Modified the id of theme stylesheet element to 'appMode'.
2021-12-24 11:57:29 +05:30
Hendy Irawan
3b687201a6 build(kubernetes): Support hcloud (Hetzner) (#537)
feat: using `hcloud` because it's Hetzner's own preferred short name.
2021-12-24 11:55:55 +05:30
Aryan Shridhar
f449775cd6 fix(BUG): Allow users to enter application if no sample data is provided (#478). (#538)
fix(BUG): Allow users to enter the application if no sample data is provided (#478). (#538)
2021-12-24 11:53:53 +05:30
pal-sig
ff2e9ae084 feat: tooltip is added (#501)
* tooltip is added

* fix: tooltip component is updated

* Tooltip component is updated

* settings tooltip is updated

* tooltip size is updated
2021-12-24 11:51:19 +05:30
pal-sig
7f5b0c15c7 Bug(UI): Signup onclick loading (#541)
* bug(UI): on click loader is added

* bug(UI): on click loader is added
2021-12-24 11:42:25 +05:30
pal-sig
d825fc2f30 fix: Antd tab issue (#507)
* fix(CSS): antd css tab issue is resolved

* removing redundant css
2021-12-24 11:41:45 +05:30
Vishal Sharma
55feec34ea fix: db/logger security bugs (#558) 2021-12-24 11:40:39 +05:30
Ankit Nayan
6f8b78bd97 chore: small change in api 2021-12-16 13:31:56 +05:30
Ankit Nayan
c4fa86bc95 feat: added memory limit to otel-collector service (#514) 2021-12-15 00:04:55 +05:30
Ankit Nayan
14ea7bd86a release: v0.5.3 2021-12-11 11:42:12 +05:30
Ankit Nayan
8bf0123370 chore: changed default sample alert to High RPS (#497) 2021-12-11 11:26:02 +05:30
Ankit Nayan
dc9ffcdd45 feat: helm chart for clickhouse setup (#479)
* minor change for volume permission spec

Signed-off-by: Yash Sharma <yashrsharma44@gmail.com>

* added basic files of clickhouse chart

Signed-off-by: Yash Sharma <yashrsharma44@gmail.com>

* added a simple deployment yaml

Signed-off-by: Yash Sharma <yashrsharma44@gmail.com>

* added clickhouse support in signoz

Signed-off-by: Yash Sharma <yashrsharma44@gmail.com>

* clickhouse working

Signed-off-by: Yash Sharma <yashrsharma44@gmail.com>

* chore: helm charts wip

* chore: fixing path of otel-path in hotrod

* chore: wip running clickhouse in templates

* chore: clickhouse working in templates

* chore: clickhouse helm chart upgraded to latest query-service and frontend images

* chore: cleanup and upgrading signoz chart version

* chore: adding alertmanager and minor fixes

* chore: persistence enabled for query-service and clickhouse

* chore: scrape interval reduced to 30s

* chore: changed crd api version from v1beta1 to v1

* chore: removed druid parts in values.yaml

* chore: log container removed from clickhouse

* chore: removed *.tgz from gitignore to add charts

* chore: added dependency charts

* chore: added clickhouse-operator templates

Co-authored-by: Yash Sharma <yashrsharma44@gmail.com>
2021-12-11 11:08:25 +05:30
pal-sig
c79223742f fix(UI): graph legends is fixed (#461)
* fix(UI): graph legends is fixed

* chore(UI): some changes regarding the color of the chart is updated

* full view css is fixed

* usage explorer graph is fixed

* default query is removed

* fix: scroll is removed
2021-12-10 14:28:31 +05:30
Aryan Shridhar
d9a99827c0 fix(BUG): Allow webpack to link CSS with bundled HTML file. (#468)
While porting the webpack config file from typescript to
javascript, commit 3e0f5a866d accidentally removed
style-loader from config setting. Due to this, the CSS file
wasn't being attached with the bundled html file.

Fixed this by adding the style-loader back to config settings
in webpack.
2021-12-10 13:30:57 +05:30
Anurag Gupta
1bf6faff8b feedback btn fixed (#458) 2021-12-10 13:28:58 +05:30
Ankit Nayan
95fb068bb0 Revert "Feat(UI): Storybook is added (#456)" (#482)
This reverts commit b27e30db58.
2021-12-10 13:28:01 +05:30
Rishit Pandey
0907ed280b Fix crlf line break (#455)
* change line-break rule according to OS

* strict comparison
2021-12-10 13:27:14 +05:30
Aryan Shridhar
fc7a0a8354 fix(UI): Allow empty input values in settings retention page. (#459) 2021-12-10 13:27:07 +05:30
pal-sig
b27e30db58 Feat(UI): Storybook is added (#456)
* feat(storybook): storybook is added

* spinner story is added

* package.json is updated
2021-12-10 11:29:14 +05:30
pal-sig
1ab291f3e8 fix: height for the top nav bar is fixed (#462) 2021-12-10 11:28:11 +05:30
pal-sig
552d193cef fix: cluttering issue is fixed (#471)
* fix: cluttering issue is fixed

* fix: cluttering issue is fixed
2021-12-10 11:27:45 +05:30
pal-sig
ba0f06f381 fix(BUG): localstorage permission is updated (#477) 2021-12-10 11:26:20 +05:30
Vishal Sharma
bbbb1c1d60 docs(contributing.md): update contributing doc for latest SigNoz release (#466) 2021-12-07 12:17:39 +05:30
Vishal Sharma
32a09d4ca2 chore(contributing.md): update contributing doc for latest SigNoz release (#465) 2021-12-07 11:59:45 +05:30
pal-sig
7ae43cf511 fix(UI): portFinder sync is added (#439) 2021-12-05 17:08:27 +05:30
774 changed files with 36212 additions and 22120 deletions

33
.editorconfig Normal file

@@ -0,0 +1,33 @@
# EditorConfig is awesome: https://EditorConfig.org
# top-most EditorConfig file
root = true
# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
# Matches multiple files with brace expansion notation
# Set default charset
[*.{js,py}]
charset = utf-8
# 4 space indentation
[*.py]
indent_style = space
indent_size = 4
# Tab indentation (no size specified)
[Makefile]
indent_style = tab
# Indentation override for all JS under lib directory
[lib/**.js]
indent_style = space
indent_size = 2
# Matches the exact files either package.json or .travis.yml
[{package.json,.travis.yml}]
indent_style = space
indent_size = 2

6
.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,6 @@
# CODEOWNERS info: https://help.github.com/en/articles/about-code-owners
# Owners are automatically requested for review for PRs that changes code
# that they own.
* @ankitnayan
/frontend/ @palash-signoz @pranshuchittora
/deploy/ @prashant-shahi

View File

@@ -26,6 +26,7 @@ assignees: ''
* **Signoz version**:
* **Browser version**:
* **Your OS and version**:
* **Your CPU Architecture**(ARM/Intel):
## Additional context

31
.github/config.yml vendored Normal file

@@ -0,0 +1,31 @@
# Configuration for welcome - https://github.com/behaviorbot/welcome
# Configuration for new-issue-welcome - https://github.com/behaviorbot/new-issue-welcome
# Comment to be posted to on first time issues
newIssueWelcomeComment: >
Thanks for opening this issue. A team member should give feedback soon.
In the meantime, feel free to check out the [contributing guidelines](https://github.com/signoz/signoz/blob/main/CONTRIBUTING.md).
# Configuration for new-pr-welcome - https://github.com/behaviorbot/new-pr-welcome
# Comment to be posted to on PRs from first time contributors in your repository
newPRWelcomeComment: >
Welcome to the SigNoz community! Thank you for your first pull request and making this project better. 🤗
# Configuration for first-pr-merge - https://github.com/behaviorbot/first-pr-merge
# Comment to be posted to on pull requests merged by a first time user
firstPRMergeComment: >
Congrats on merging your first pull request!
![minion-party](https://i.imgur.com/Xlg59lP.gif)
We here at SigNoz are proud of you! 🥳
# Configuration for request-info - https://github.com/behaviorbot/request-info
# Comment to be posted in issues or pull requests, when no description is provided.
requestInfoReplyComment: >
We would appreciate it if you could provide us with more info about this issue/pr!
requestInfoLabelToAdd: request-more-info

View File

@@ -1,6 +1,18 @@
To run GitHub workflow, a few environment variables needs to add in GitHub secrets
# Github actions
#### Environment Variables
## Testing the UI manually on each PR
First, we need to make sure the UI is ready:
* Check the `Start tunnel` step in the `e2e-k8s/deploy-on-k3s-cluster` job and make sure you see `your url is: https://pull-<number>-signoz.loca.lt`
* This job will keep running until the PR is merged or closed, to keep the local tunnelling alive
- GitHub will cancel this job if the PR wasn't merged after 6h
- if the job was cancelled, go to the action and press `Re-run all jobs`
Now you can open your browser at https://pull-<number>-signoz.loca.lt and check the UI.
## Environment Variables
To run the GitHub workflows, a few environment variables need to be added to GitHub secrets
<table>
<tr>

View File

@@ -1,50 +1,25 @@
name: build-pipeline
on:
pull_request:
branches:
- develop
- main
- v*
paths:
- 'pkg/**'
- 'frontend/**'
- release/v*
jobs:
get_filters:
runs-on: ubuntu-latest
# Set job outputs to values from filter step
outputs:
frontend: ${{ steps.filter.outputs.frontend }}
query-service: ${{ steps.filter.outputs.query-service }}
flattener: ${{ steps.filter.outputs.flattener }}
steps:
# For pull requests it's not necessary to checkout the code
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
frontend:
- 'frontend/**'
query-service:
- 'pkg/query-service/**'
flattener:
- 'pkg/processors/flattener/**'
build-frontend:
runs-on: ubuntu-latest
needs:
- get_filters
if: ${{ needs.get_filters.outputs.frontend == 'true' }}
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Install dependencies
run: cd frontend && yarn install
- name: Run Prettier
run: cd frontend && npm run prettify
continue-on-error: true
- name: Run ESLint
run: cd frontend && npm run lint
continue-on-error: true
- name: TSC
run: yarn tsc
working-directory: ./frontend
- name: Build frontend docker image
shell: bash
run: |
@@ -52,26 +27,10 @@ jobs:
build-query-service:
runs-on: ubuntu-latest
needs:
- get_filters
if: ${{ needs.get_filters.outputs.query-service == 'true' }}
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Build query-service image
shell: bash
run: |
make build-flattener-amd64
build-flattener:
runs-on: ubuntu-latest
needs:
- get_filters
if: ${{ needs.get_filters.outputs.flattener == 'true' }}
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Build flattener docker image
shell: bash
run: |
make build-query-service-amd64

View File

@@ -0,0 +1,27 @@
on:
pull_request_target:
types:
- closed
env:
GITHUB_ACCESS_TOKEN: ${{ secrets.CI_BOT_TOKEN }}
PR_NUMBER: ${{ github.event.number }}
jobs:
create_issue_on_merge:
if: github.event.pull_request.merged == true
runs-on: ubuntu-latest
steps:
- name: Checkout Codebase
uses: actions/checkout@v2
with:
repository: signoz/gh-bot
- name: Use Node v16
uses: actions/setup-node@v2
with:
node-version: 16
- name: Setup Cache & Install Dependencies
uses: bahmutov/npm-install@v1
with:
install-command: yarn --frozen-lockfile
- name: Comment on PR
run: node create-issue.js

84
.github/workflows/e2e-k3s.yaml vendored Normal file

@@ -0,0 +1,84 @@
name: e2e-k3s
on:
pull_request:
types: [labeled]
jobs:
e2e-k3s:
runs-on: ubuntu-latest
if: ${{ github.event.label.name == 'ok-to-test' }}
env:
DOCKER_TAG: pull-${{ github.event.number }}
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Build query-service image
run: make build-query-service-amd64
- name: Build frontend image
run: make build-frontend-amd64
- name: Create a k3s cluster
uses: AbsaOSS/k3d-action@v2
with:
cluster-name: "signoz"
- name: Inject the images to the cluster
run: k3d image import signoz/query-service:$DOCKER_TAG signoz/frontend:$DOCKER_TAG -c signoz
- name: Set up HotROD sample-app
run: |
# create sample-application namespace
kubectl create ns sample-application
# apply hotrod k8s manifest file
kubectl -n sample-application apply -f https://raw.githubusercontent.com/SigNoz/signoz/main/sample-apps/hotrod/hotrod.yaml
# wait for all deployments in sample-application namespace to be READY
kubectl -n sample-application get deploy --output name | xargs -r -n1 -t kubectl -n sample-application rollout status --timeout=300s
- name: Deploy the app
run: |
# add signoz helm repository
helm repo add signoz https://charts.signoz.io
# create platform namespace
kubectl create ns platform
# installing signoz using helm
helm install my-release signoz/signoz -n platform \
--wait \
--timeout 10m0s \
--set frontend.service.type=LoadBalancer \
--set queryService.image.tag=$DOCKER_TAG \
--set frontend.image.tag=$DOCKER_TAG
# get pods, services and the container images
kubectl get pods -n platform
kubectl get svc -n platform
- name: Kick off a sample-app workload
run: |
# start the locust swarm
kubectl -n sample-application run strzal --image=djbingham/curl \
--restart='OnFailure' -i --rm --command -- curl -X POST -F \
'locust_count=6' -F 'hatch_rate=2' http://locust-master:8089/swarm
- name: Get short commit SHA and display tunnel URL
id: get-subdomain
run: |
subdomain="pr-$(git rev-parse --short HEAD)"
echo "URL for tunnelling: https://$subdomain.loca.lt"
echo "::set-output name=subdomain::$subdomain"
- name: Start tunnel
env:
SUBDOMAIN: ${{ steps.get-subdomain.outputs.subdomain }}
run: |
npm install -g localtunnel
host=$(kubectl get svc -n platform | grep frontend | tr -s ' ' | cut -d" " -f4)
port=$(kubectl get svc -n platform | grep frontend | tr -s ' ' | cut -d" " -f5 | cut -d":" -f1)
lt -p $port -l $host -s $SUBDOMAIN
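The `Get short commit SHA` step above derives the localtunnel subdomain from the short commit SHA. A minimal sketch of that derivation outside of CI; the SHA value here is illustrative, in the workflow it comes from `git rev-parse --short HEAD`:

```shell
# Illustrative short SHA; in the workflow: sha=$(git rev-parse --short HEAD)
sha="1f2ec0d"
subdomain="pr-$sha"
# The tunnel URL the e2e job prints for manual UI testing.
echo "URL for tunnelling: https://$subdomain.loca.lt"
```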

22
.github/workflows/playwright.yaml vendored Normal file

@@ -0,0 +1,22 @@
name: Playwright Tests
on:
deployment_status:
jobs:
test:
timeout-minutes: 60
runs-on: ubuntu-latest
if: github.event.deployment_status.state == 'success'
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: "14.x"
- name: Install dependencies
run: npm ci
- name: Install Playwright
run: npx playwright install --with-deps
- name: Run Playwright tests
run: npm run test:e2e
env:
# This might depend on your test-runner/language binding
PLAYWRIGHT_TEST_BASE_URL: ${{ github.event.deployment_status.target_url }}

View File

@@ -0,0 +1,20 @@
# This workflow will inspect a pull request to ensure there is a linked issue or a
# valid issue is mentioned in the body. If neither is present it fails the check and adds
# a comment alerting users of this missing requirement.
name: VerifyIssue
on:
pull_request:
types: [edited, synchronize, opened, reopened]
check_run:
jobs:
verify_linked_issue:
runs-on: ubuntu-latest
name: Ensure Pull Request has a linked issue.
steps:
- name: Verify Linked Issue
uses: hattan/verify-linked-issue-action@v1.1.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,172 +1,90 @@
name: push-pipeline
name: push
on:
push:
branches:
- main
- ^v[0-9]*.[0-9]*.x$
- develop
tags:
- "*"
# pull_request:
# branches:
# - main
# - v*
# paths:
# - 'pkg/**'
# - 'frontend/**'
- v*
jobs:
get-envs:
image-build-and-push-query-service:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
- name: Checkout code
uses: actions/checkout@v2
- shell: bash
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- uses: benjlevesque/short-sha@v1.2
id: short-sha
- name: Get branch name
id: branch-name
uses: tj-actions/branch-names@v5.1
- name: Set docker tag environment
run: |
img_tag=""
array=(`echo ${GITHUB_REF} | sed 's/\//\n/g'`)
if [ ${array[1]} == "tags" ]
then
echo "tag build"
img_tag=${GITHUB_REF#refs/*/v}
elif [ ${array[1]} == "pull" ]
then
img_tag="pull-${{ github.event.number }}"
if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
tag="${{ steps.branch-name.outputs.tag }}"
tag="${tag:1}"
echo "DOCKER_TAG=$tag" >> $GITHUB_ENV
elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
echo "DOCKER_TAG=latest" >> $GITHUB_ENV
else
echo "non tag build"
img_tag="latest"
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
fi
# This is a condition where image tag looks like "pull/<pullrequest-name>" during pull request build
NEW_IMG_TAG=`echo $img_tag | sed "s/\//-/g"`
echo $NEW_IMG_TAG
echo export IMG_TAG=$NEW_IMG_TAG >> env-vars
echo export FRONTEND_IMAGE="frontend" >> env-vars
echo export QUERY_SERVICE="query-service" >> env-vars
echo export FLATTENER_PROCESSOR="flattener-processor" >> env-vars
- name: Build and push docker image
run: make build-push-query-service
- name: Uploading envs
uses: actions/upload-artifact@v2
with:
name: env_artifact
path: env-vars
build-and-push-frontend:
image-build-and-push-frontend:
runs-on: ubuntu-latest
needs:
- get-envs
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Downloading image artifact
uses: actions/download-artifact@v2
with:
name: env_artifact
- name: Install dependencies
working-directory: frontend
run: yarn install
- name: Run Prettier
working-directory: frontend
run: npm run prettify
continue-on-error: true
- name: Run ESLint
working-directory: frontend
run: npm run lint
continue-on-error: true
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Push Frontend Docker Image
shell: bash
env:
FRONTEND_DIRECTORY: "frontend"
REPONAME: ${{ secrets.REPONAME }}
FRONTEND_DOCKER_IMAGE: ${FRONTEND_IMAGE}
DOCKER_TAG: ${IMG_TAG}
- uses: benjlevesque/short-sha@v1.2
id: short-sha
- name: Get branch name
id: branch-name
uses: tj-actions/branch-names@v5.1
- name: Set docker tag environment
run: |
branch=${GITHUB_REF#refs/*/}
array=(`echo ${GITHUB_REF} | sed 's/\//\n/g'`)
if [ $branch == "main" ] || [ ${array[1]} == "tags" ] || [ ${array[1]} == "pull" ] || [[ $branch =~ ^v[0-9]*.[0-9]*.x$ ]]
then
source env-vars
make build-push-frontend
fi
build-and-push-query-service:
runs-on: ubuntu-latest
needs:
- get-envs
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Downloading image artifact
uses: actions/download-artifact@v2
with:
name: env_artifact
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Push Query Service Docker Image
shell: bash
env:
QUERY_SERVICE_DIRECTORY: "pkg/query-service"
REPONAME: ${{ secrets.REPONAME }}
QUERY_SERVICE_DOCKER_IMAGE: ${QUERY_SERVICE}
DOCKER_TAG: ${IMG_TAG}
run: |
branch=${GITHUB_REF#refs/*/}
array=(`echo ${GITHUB_REF} | sed 's/\//\n/g'`)
if [ $branch == "main" ] || [ ${array[1]} == "tags" ] || [ ${array[1]} == "pull" ] ||[[ $branch =~ ^v[0-9]*.[0-9]*.x$ ]]
then
source env-vars
make build-push-query-service
fi
build-and-push-flattener:
runs-on: ubuntu-latest
needs:
- get-envs
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Downloading image artifact
uses: actions/download-artifact@v2
with:
name: env_artifact
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Push Flattener Processor Docker Image
shell: bash
env:
FLATTENER_DIRECTORY: "pkg/processors/flattener"
REPONAME: ${{ secrets.REPONAME }}
FLATTERNER_DOCKER_IMAGE: ${FLATTENER_PROCESSOR}
DOCKER_TAG: ${IMG_TAG}
run: |
branch=${GITHUB_REF#refs/*/}
array=(`echo ${GITHUB_REF} | sed 's/\//\n/g'`)
if [ $branch == "main" ] || [ ${array[1]} == "tags" ] || [ ${array[1]} == "pull" ] || [[ $branch =~ ^v[0-9]*.[0-9]*.x$ ]]
then
source env-vars
make build-push-flattener
if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
tag="${{ steps.branch-name.outputs.tag }}"
tag="${tag:1}"
echo "DOCKER_TAG=$tag" >> $GITHUB_ENV
elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
echo "DOCKER_TAG=latest" >> $GITHUB_ENV
else
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
fi
- name: Build and push docker image
run: make build-push-frontend
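The tag-selection logic in the new workflow (strip the leading `v` from release tags, map `main` to `latest`, otherwise use the branch name) can be sketched as a plain shell function. `docker_tag` is an illustrative helper, not part of the workflow; it uses `${name#v}`, which is equivalent to the workflow's `${tag:1}` for `v`-prefixed tags:

```shell
# Derive the Docker tag the way the push workflow does.
# $1: "true" if the ref is a tag, $2: the tag or branch name.
docker_tag() {
  is_tag="$1"
  name="$2"
  if [ "$is_tag" = "true" ]; then
    # v0.8.2 -> 0.8.2 (strip the leading "v")
    echo "${name#v}"
  elif [ "$name" = "main" ]; then
    echo "latest"
  else
    echo "$name"
  fi
}

docker_tag true v0.8.2    # -> 0.8.2
docker_tag false main     # -> latest
docker_tag false develop  # -> develop
```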

16
.github/workflows/remove-label.yaml vendored Normal file

@@ -0,0 +1,16 @@
name: remove-label
on:
pull_request_target:
types: [synchronize]
jobs:
remove:
runs-on: ubuntu-latest
steps:
- name: Remove label
uses: buildsville/add-remove-label@v1
with:
label: ok-to-test
type: remove
token: ${{ secrets.GITHUB_TOKEN }}

25
.github/workflows/repo-stats.yml vendored Normal file

@@ -0,0 +1,25 @@
on:
schedule:
# Run this once per day, towards the end of the day for keeping the most
# recent data point most meaningful (hours are interpreted in UTC).
- cron: "0 8 * * *"
workflow_dispatch: # Allow for running this manually.
jobs:
j1:
name: repostats
runs-on: ubuntu-latest
steps:
- name: run-ghrs
uses: jgehrcke/github-repo-stats@v1.1.0
with:
# Define the stats repository (the repo to fetch
# stats for and to generate the report for).
# Remove the parameter when the stats repository
# and the data repository are the same.
repository: signoz/signoz
# Set a GitHub API token that can read the stats
# repository, and that can push to the data
# repository (which this workflow file lives in),
# to store data and the report files.
ghtoken: ${{ github.token }}

14
.gitignore vendored

@@ -15,6 +15,7 @@ frontend/build
frontend/.vscode
frontend/.yarnclean
frontend/.temp_cache
frontend/test-results
# misc
.DS_Store
@@ -27,15 +28,10 @@ frontend/npm-debug.log*
frontend/yarn-debug.log*
frontend/yarn-error.log*
frontend/src/constants/env.ts
frontend/cypress/**/*.mp4
# env file for cypress
frontend/cypress.env.json
.idea
**/.vscode
*.tgz
**/build
**/storage
**/locust-scripts/__pycache__/
@@ -43,3 +39,11 @@ frontend/cypress.env.json
frontend/*.env
pkg/query-service/signoz.db
pkg/query-service/tests/test-deploy/data/
# local data
/deploy/docker/clickhouse-setup/data/
/deploy/docker-swarm/clickhouse-setup/data/

36
.gitpod.yml Normal file

@@ -0,0 +1,36 @@
# Please adjust to your needs (see https://www.gitpod.io/docs/config-gitpod-file)
# and commit this file to your remote git repository to share the goodness with others.
tasks:
- name: Run Script to Comment out required lines
init: |
cd ./.scripts
sh commentLinesForSetup.sh
- name: Run Docker Images
init: |
cd ./deploy
sudo docker-compose -f docker/clickhouse-setup/docker-compose.yaml up -d
# command:
- name: Run Frontend
init: |
cd ./frontend
yarn install
command:
yarn dev
ports:
- port: 3301
onOpen: open-browser
- port: 8080
onOpen: ignore
- port: 9000
onOpen: ignore
- port: 8123
onOpen: ignore
- port: 8089
onOpen: ignore
- port: 9093
onOpen: ignore

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# It comments out the query-service & frontend sections of deploy/docker/clickhouse-setup/docker-compose.yaml
# Update the line numbers when deploy/docker/clickhouse-setup/docker-compose.yaml changes.
# Docs Ref.: https://github.com/SigNoz/signoz/blob/main/CONTRIBUTING.md#contribute-to-frontend-with-docker-installation-of-signoz
sed -i 38,62's/.*/# &/' .././deploy/docker/clickhouse-setup/docker-compose.yaml
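The one-liner above comments out a fixed line range with sed's `s/.*/# &/` substitution. A minimal sketch of the same range-comment technique on a throwaway file (the file content and the `2,3` range are illustrative; GNU sed is assumed, on macOS use `sed -i ''`):

```shell
# Create a small sample file (illustrative content).
cat > /tmp/sample-compose.yaml <<'EOF'
services:
frontend:
  image: signoz/frontend
clickhouse:
  image: clickhouse
EOF

# Comment out lines 2-3 in place, mirroring the script's `sed -i 38,62's/.*/# &/'`:
# "&" re-inserts the matched line after the "# " prefix.
sed -i '2,3s/.*/# &/' /tmp/sample-compose.yaml
cat /tmp/sample-compose.yaml
```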

View File

@@ -1,13 +1,16 @@
# How to Contribute
There are primarily 3 areas in which you can contribute in SigNoz
There are primarily 2 areas in which you can contribute in SigNoz
- Frontend ( written in Typescript, React)
- Query Service (written in Go)
- Flattener Processor (written in Go)
- Backend - ( Query Service - written in Go)
Depending upon your area of expertise & interest, you can choose one or more areas to contribute to. Below are detailed instructions for contributing in each area
> Please note: If you want to work on an issue, please ask the maintainers to assign the issue to you before starting work on it. This would help us understand who is working on an issue and prevent duplicate work. 🙏🏻
> If you just raise a PR, without the corresponding issue being assigned to you - it may not be accepted.
# Develop Frontend
Need to update [https://github.com/SigNoz/signoz/tree/main/frontend](https://github.com/SigNoz/signoz/tree/main/frontend)
@@ -15,22 +18,33 @@ Need to update [https://github.com/SigNoz/signoz/tree/main/frontend](https://git
### Contribute to Frontend with Docker installation of SigNoz
- `git clone https://github.com/SigNoz/signoz.git && cd signoz`
- comment out frontend service section at `deploy/docker/clickhouse-setup/docker-compose.yaml#L38`
- run `cd deploy && docker-compose -f docker/clickhouse-setup/docker-compose.yaml up -d` (this will install signoz locally without the frontend service)
- comment out frontend service section at `deploy/docker/clickhouse-setup/docker-compose.yaml#L62`
- run `cd deploy` to move to deploy directory
- Install signoz locally without the frontend
- Add below configuration to query-service section at `docker/clickhouse-setup/docker-compose.yaml#L38`
```docker
ports:
- "8080:8080"
```
- If you are using x86_64 processors (All Intel/AMD processors) run `sudo docker-compose -f docker/clickhouse-setup/docker-compose.yaml up -d`
- If you are on arm64 processors (Apple M1 Macbooks) run `sudo docker-compose -f docker/clickhouse-setup/docker-compose.arm.yaml up -d`
- `cd ../frontend` and change baseURL to `http://localhost:8080` in file `src/constants/env.ts`
- `yarn install`
- `yarn dev`
> Note for maintainers/contributors: if you change the line numbers of the frontend & query-service sections, please update them in `./scripts/commentLinesForSetup.sh`
### Contribute to Frontend without installing SigNoz backend
If you don't want to install SigNoz backend just for doing frontend development, we can provide you with test environments which you can use as the backend. Please ping us in #contributing channel in our [slack community](https://join.slack.com/t/signoz-community/shared_invite/zt-lrjknbbp-J_mI13rlw8pGF4EWBnorJA) and we will DM you with `<test environment URL>`
If you don't want to install SigNoz backend just for doing frontend development, we can provide you with test environments which you can use as the backend. Please ping us in #contributing channel in our [slack community](https://signoz.io/slack) and we will DM you with `<test environment URL>`
- `git clone https://github.com/SigNoz/signoz.git && cd signoz/frontend`
- Create a file `.env` with `FRONTEND_API_ENDPOINT=<test environment URL>`
- `yarn install`
- `yarn dev`
**_Frontend should now be accessible at `http://localhost:3000/application`_**
**_Frontend should now be accessible at `http://localhost:3301/application`_**
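The `.env` step above can be scripted; a minimal sketch, run from the `frontend` directory (the URL below is a placeholder — substitute the actual test environment URL shared with you over Slack):

```sh
# hypothetical endpoint - replace with the test environment URL you receive over Slack
echo "FRONTEND_API_ENDPOINT=https://testenv.example.signoz.io" > .env
cat .env
```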
# Contribute to Query-Service
@@ -38,25 +52,125 @@ Need to update [https://github.com/SigNoz/signoz/tree/main/pkg/query-service](ht
### To run ClickHouse setup (recommended for local development)
- run `git clone https://github.com/SigNoz/signoz.git`
- run `cd signoz` to move to signoz directory
- run `sudo make dev-setup` to configure local setup to run query-service
- comment out frontend service section at `docker/clickhouse-setup/docker-compose.yaml`
- comment out query-service section at `docker/clickhouse-setup/docker-compose.yaml`
- add below configuration to clickhouse section at `docker/clickhouse-setup/docker-compose.yaml`
```docker
expose:
- 9000
ports:
- 9001:9000
```
- run `cd pkg/query-service/` to move to query-service directory
- Open `./constants/constants.go`
  - Replace `const RELATIONAL_DATASOURCE_PATH = "/var/lib/signoz/signoz.db"` \
    with `const RELATIONAL_DATASOURCE_PATH = "./signoz.db"`
- Install SigNoz locally without the frontend and query-service:
  - If you are using x86_64 processors (all Intel/AMD processors), run `sudo make run-x86`
  - If you are on arm64 processors (Apple M1 MacBooks), run `sudo make run-arm`
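The constants change above can also be done with `sed`; here is a sketch demonstrated on a scratch copy (in the repo, the file to edit is `pkg/query-service/constants/constants.go`):

```sh
# demonstrate the replacement on a scratch copy rather than the real file
printf 'const RELATIONAL_DATASOURCE_PATH = "/var/lib/signoz/signoz.db"\n' > /tmp/constants_demo.go
# GNU sed shown; on macOS, sed -i needs an explicit backup suffix argument
sed -i 's|/var/lib/signoz/signoz.db|./signoz.db|' /tmp/constants_demo.go
cat /tmp/constants_demo.go
```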
#### Run locally
```console
ClickHouseUrl=tcp://localhost:9001 STORAGE=clickhouse go run main.go
```
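As a quick sanity check of the run command above: `STORAGE` selects the ClickHouse backend, and `ClickHouseUrl` is the native-protocol address the query-service dials — host port `9001` maps to `9000` inside the clickhouse container per the ports stanza added earlier. Stripping the scheme with shell parameter expansion shows the host:port pair being dialled:

```sh
export ClickHouseUrl=tcp://localhost:9001
export STORAGE=clickhouse
# ${ClickHouseUrl#tcp://} strips the scheme, leaving the host:port the service connects to
echo "connecting to ${ClickHouseUrl#tcp://}"
```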
> Note for maintainers/contributors: if you change line numbers referenced in the Frontend & Query-Service sections, please update them in `./scripts/commentLinesForSetup.sh` as well.
**_Query Service should now be available at `http://localhost:8080`_**
> If you want to see how the frontend plays with the query service, you can also run the frontend in your local environment with the baseURL changed to `http://localhost:8080` in `src/constants/env.ts`, as the query-service is now running on port `8080`
---
<!-- Instead of configuring a local setup, you can also use [Gitpod](https://www.gitpod.io/), a VSCode-based Web IDE.
Click the button below. A workspace with all required environments will be created.
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/SigNoz/signoz)
> To use it on your forked repo, edit the 'Open in Gitpod' button url to `https://gitpod.io/#https://github.com/<your-github-username>/signoz` -->
# Contribute to SigNoz Helm Chart
Need to update [https://github.com/SigNoz/charts](https://github.com/SigNoz/charts).
### To run helm chart for local development
- run `git clone https://github.com/SigNoz/charts.git` followed by `cd charts`
- it is recommended to use a lightweight Kubernetes (k8s) cluster for local development, such as:
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [k3d](https://k3d.io/#installation)
- [minikube](https://minikube.sigs.k8s.io/docs/start/)
- create a k8s cluster and make sure `kubectl` points to the locally created k8s cluster
- run `make dev-install` to install SigNoz chart with `my-release` release name in `platform` namespace.
- run `kubectl -n platform port-forward svc/my-release-signoz-frontend 3301:3301` to make SigNoz UI available at [localhost:3301](http://localhost:3301)
**To install HotROD sample app:**
```bash
curl -sL https://github.com/SigNoz/signoz/raw/main/sample-apps/hotrod/hotrod-install.sh \
| HELM_RELEASE=my-release SIGNOZ_NAMESPACE=platform bash
```
**To load data with HotROD sample app:**
```bash
kubectl -n sample-application run strzal --image=djbingham/curl \
--restart='OnFailure' -i --tty --rm --command -- curl -X POST -F \
'locust_count=6' -F 'hatch_rate=2' http://locust-master:8089/swarm
```
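In the command above, `locust_count` is the number of simulated users and `hatch_rate` is how many users are spawned per second (form fields of Locust 1.x's legacy `/swarm` endpoint). To generate heavier load, raise them; the values below are illustrative:

```sh
# illustrative values - substitute them into the -F flags of the kubectl one-liner above
LOCUST_COUNT=12
HATCH_RATE=4
echo "locust_count=${LOCUST_COUNT}&hatch_rate=${HATCH_RATE}"
```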
**To stop the load generation:**
```bash
kubectl -n sample-application run strzal --image=djbingham/curl \
--restart='OnFailure' -i --tty --rm --command -- curl \
http://locust-master:8089/stop
```
**To delete HotROD sample app:**
```bash
curl -sL https://github.com/SigNoz/signoz/raw/main/sample-apps/hotrod/hotrod-delete.sh \
| HOTROD_NAMESPACE=sample-application bash
```
---
## General Instructions
**Before making any significant changes, please open an issue**. Each issue
should describe the following:
* Requirement - what kind of use case are you trying to solve?
* Proposal - what do you suggest to solve the problem or improve the existing
situation?
* Any open questions to address
Discussing your proposed changes ahead of time will make the contribution
process smooth for everyone. Once the approach is agreed upon, make your changes
and open a pull request. Unless your change is small, please consider submitting multiple PRs:
* First PR should include the overall structure of the new component:
* Readme, configuration, interfaces or base classes etc...
* This PR is usually trivial to review, so the size limit does not apply to
it.
* Second PR should include the concrete implementation of the component. If the
size of this PR is larger than the recommended size consider splitting it in
multiple PRs.
* If there are multiple sub-components, then ideally each one should be implemented as
  a separate pull request.
* The last PR should include changes to any user-facing documentation, and end-to-end
  tests if applicable. The component must be enabled only after sufficient testing,
  when there is enough confidence in its stability and quality.
You can always reach out to `ankit@signoz.io` to understand more about the repo and product. We are very responsive over email and [slack](https://signoz.io/slack).
- If you find any bugs, please create an issue
- If you find anything missing in documentation, you can create an issue with label **documentation**


@@ -1,19 +1,35 @@
#
# Reference Guide - https://www.gnu.org/software/make/manual/make.html
#
# Build variables
BUILD_VERSION ?= $(shell git describe --always --tags)
BUILD_HASH ?= $(shell git rev-parse --short HEAD)
BUILD_TIME ?= $(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
BUILD_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
# Internal variables or constants.
#
FRONTEND_DIRECTORY ?= frontend
QUERY_SERVICE_DIRECTORY ?= pkg/query-service
STANDALONE_DIRECTORY ?= deploy/docker/clickhouse-setup
SWARM_DIRECTORY ?= deploy/docker-swarm/clickhouse-setup
REPONAME ?= signoz
DOCKER_TAG ?= latest
FRONTEND_DOCKER_IMAGE ?= frontend
QUERY_SERVICE_DOCKER_IMAGE ?= query-service
# Build-time Go variables
PACKAGE?=go.signoz.io/query-service
buildVersion=${PACKAGE}/version.buildVersion
buildHash=${PACKAGE}/version.buildHash
buildTime=${PACKAGE}/version.buildTime
gitBranch=${PACKAGE}/version.gitBranch
LD_FLAGS="-X ${buildHash}=${BUILD_HASH} -X ${buildTime}=${BUILD_TIME} -X ${buildVersion}=${BUILD_VERSION} -X ${gitBranch}=${BUILD_BRANCH}"
all: build-push-frontend build-push-query-service
# Steps to build and push docker image of frontend
.PHONY: build-frontend-amd64 build-push-frontend
# Step to build docker image of frontend in amd64 (used in build pipeline)
@@ -22,7 +38,8 @@ build-frontend-amd64:
@echo "--> Building frontend docker image for amd64"
@echo "------------------"
@cd $(FRONTEND_DIRECTORY) && \
docker build -f Dockerfile --no-cache -t $(REPONAME)/$(FRONTEND_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" .
# Step to build and push docker image of frontend(used in push pipeline)
build-push-frontend:
@@ -30,7 +47,8 @@ build-push-frontend:
@echo "--> Building and pushing frontend docker image"
@echo "------------------"
@cd $(FRONTEND_DIRECTORY) && \
docker buildx build --file Dockerfile --progress plain --no-cache --push --platform linux/amd64 \
--tag $(REPONAME)/$(FRONTEND_DOCKER_IMAGE):$(DOCKER_TAG) .
# Steps to build and push docker image of query service
.PHONY: build-query-service-amd64 build-push-query-service
@@ -40,7 +58,8 @@ build-query-service-amd64:
@echo "--> Building query-service docker image for amd64"
@echo "------------------"
@cd $(QUERY_SERVICE_DIRECTORY) && \
docker build -f Dockerfile --no-cache -t $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" --build-arg LD_FLAGS=$(LD_FLAGS) .
# Step to build and push docker image of query in amd64 and arm64 (used in push pipeline)
build-push-query-service:
@@ -48,22 +67,34 @@ build-push-query-service:
@echo "--> Building and pushing query-service docker image"
@echo "------------------"
@cd $(QUERY_SERVICE_DIRECTORY) && \
docker buildx build --file Dockerfile --progress plain --no-cache \
--push --platform linux/arm64,linux/amd64 --build-arg LD_FLAGS=$(LD_FLAGS) \
--tag $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) .
dev-setup:
mkdir -p /var/lib/signoz
sqlite3 /var/lib/signoz/signoz.db "VACUUM";
mkdir -p pkg/query-service/config/dashboards
@echo "------------------"
@echo "--> Local Setup completed"
@echo "------------------"
run-x86:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml up -d
run-arm:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.arm.yaml up -d
down-x86:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml down -v
down-arm:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.arm.yaml down -v
clear-standalone-data:
@docker run --rm -v "$(PWD)/$(STANDALONE_DIRECTORY)/data:/pwd" busybox \
sh -c "cd /pwd && rm -rf alertmanager/* clickhouse/* signoz/*"
clear-swarm-data:
@docker run --rm -v "$(PWD)/$(SWARM_DIRECTORY)/data:/pwd" busybox \
sh -c "cd /pwd && rm -rf alertmanager/* clickhouse/* signoz/*"


@@ -17,7 +17,7 @@
<a href="https://signoz.io/docs"><b>Dokumentation</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.zh-cn.md"><b>ReadMe auf Chinesisch</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.pt-br.md"><b>ReadMe auf Portugiesisch</b></a> &bull;
<a href="https://signoz.io/slack"><b>Slack Community</b></a> &bull;
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
</h3>
@@ -39,7 +39,7 @@ SigNoz hilft Entwicklern, Anwendungen zu überwachen und Probleme in ihren berei
## Werde Teil unserer Slack Community
Sag Hi zu uns auf [Slack](https://signoz.io/slack) 👋
<br /><br />
@@ -130,7 +130,7 @@ Außerdem hat SigNoz noch mehr spezielle Funktionen im Vergleich zu Jaeger:
Wir ❤️ Beiträge zum Projekt, egal ob große oder kleine. Bitte lies dir zuerst die [CONTRIBUTING.md](CONTRIBUTING.md) durch, bevor du anfängst, Beiträge zu SigNoz zu machen.
Du bist dir nicht sicher, wie du anfangen sollst? Schreib uns einfach auf dem `#contributing` Kanal in unserer [Slack Community](https://signoz.io/slack).
<br /><br />
@@ -146,7 +146,7 @@ Du findest unsere Dokumentation unter https://signoz.io/docs/. Falls etwas unver
## Community
Werde Teil der [Slack Community](https://signoz.io/slack) um mehr über verteilte Einzelschritt-Fehlersuche, Messung von Systemzuständen oder SigNoz zu erfahren und sich mit anderen Nutzern und Mitwirkenden in Verbindung zu setzen.
Falls du irgendwelche Ideen, Fragen oder Feedback hast, kannst du sie gerne über unsere [Github Discussions](https://github.com/SigNoz/signoz/discussions) mit uns teilen.


@@ -6,7 +6,7 @@
<p align="center">
<img alt="License" src="https://img.shields.io/badge/license-MIT-brightgreen"> </a>
<img alt="Downloads" src="https://img.shields.io/docker/pulls/signoz/query-service?label=Downloads"> </a>
<img alt="GitHub issues" src="https://img.shields.io/github/issues/signoz/signoz"> </a>
<a href="https://twitter.com/intent/tweet?text=Monitor%20your%20applications%20and%20troubleshoot%20problems%20with%20SigNoz,%20an%20open-source%20alternative%20to%20DataDog,%20NewRelic.&url=https://signoz.io/&via=SigNozHQ&hashtags=opensource,signoz,observability">
<img alt="tweet" src="https://img.shields.io/twitter/url/http/shields.io.svg?style=social"> </a>
@@ -18,7 +18,7 @@
<a href="https://github.com/SigNoz/signoz/blob/main/README.zh-cn.md"><b>ReadMe in Chinese</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.de-de.md"><b>ReadMe in German</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.pt-br.md"><b>ReadMe in Portuguese</b></a> &bull;
<a href="https://signoz.io/slack"><b>Slack Community</b></a> &bull;
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
</h3>
@@ -33,7 +33,11 @@ SigNoz helps developers monitor applications and troubleshoot problems in their
👉 Run aggregates on trace data to get business relevant metrics
![screenzy-1644432902955](https://user-images.githubusercontent.com/504541/153270713-1b2156e6-ec03-42de-975b-3c02b8ec1836.png)
<br />
![screenzy-1644432986784](https://user-images.githubusercontent.com/504541/153270725-0efb73b3-06ed-4207-bf13-9b7e2e17c4b8.png)
<br />
![screenzy-1647005040573](https://user-images.githubusercontent.com/504541/157875938-a3d57904-ea6d-4278-b929-bd1408d7f94c.png)
<br /><br />
@@ -41,7 +45,7 @@ SigNoz helps developers monitor applications and troubleshoot problems in their
## Join our Slack community
Come say Hi to us on [Slack](https://signoz.io/slack) 👋
<br /><br />
@@ -132,7 +136,7 @@ Moreover, SigNoz has few more advanced features wrt Jaeger:
We ❤️ contributions big or small. Please read [CONTRIBUTING.md](CONTRIBUTING.md) to get started with making contributions to SigNoz.
Not sure how to get started? Just ping us on `#contributing` in our [slack community](https://signoz.io/slack)
<br /><br />
@@ -148,7 +152,7 @@ You can find docs at https://signoz.io/docs/. If you need any clarification or f
## Community
Join the [slack community](https://signoz.io/slack) to know more about distributed tracing, observability, or SigNoz and to connect with other users and contributors.
If you have any ideas, questions, or any feedback, please share on our [Github Discussions](https://github.com/SigNoz/signoz/discussions)


@@ -15,7 +15,7 @@
<h3 align="center">
<a href="https://signoz.io/docs"><b>Documentação</b></a> &bull;
<a href="https://signoz.io/slack"><b>Comunidade no Slack</b></a> &bull;
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
</h3>
@@ -38,7 +38,7 @@ SigNoz auxilia os desenvolvedores a monitorarem aplicativos e solucionar problem
## Junte-se à nossa comunidade no Slack
Venha dizer oi para nós no [Slack](https://signoz.io/slack) 👋
<br /><br />
@@ -129,7 +129,7 @@ Além disso, SigNoz tem alguns recursos mais avançados do que Jaeger:
Nós ❤️ contribuições grandes ou pequenas. Leia [CONTRIBUTING.md](CONTRIBUTING.md) para começar a fazer contribuições para o SigNoz.
Não sabe como começar? Basta enviar um sinal para nós no canal `#contributing` em nossa [comunidade no Slack.](https://signoz.io/slack)
<br /><br />
@@ -145,7 +145,7 @@ Você pode encontrar a documentação em https://signoz.io/docs/. Se você tiver
## Comunidade
Junte-se a [comunidade no Slack](https://signoz.io/slack) para saber mais sobre rastreamento distribuído, observabilidade ou SigNoz e para se conectar com outros usuários e colaboradores.
Se você tiver alguma ideia, pergunta ou feedback, compartilhe em nosso [Github Discussões](https://github.com/SigNoz/signoz/discussions)

View File

@@ -29,7 +29,7 @@ SigNoz帮助开发人员监控应用并排查已部署应用中的问题。SigNo
## 加入我们的Slack社区
来[Slack](https://signoz.io/slack) 跟我们打声招呼👋
<br /><br />
@@ -120,7 +120,7 @@ Jaeger只做分布式跟踪SigNoz则是做了矩阵和跟踪两块我们
我们 ❤️ 任何贡献无论大小。 请阅读 [CONTRIBUTING.md](CONTRIBUTING.md) 然后开始给Signoz做贡献。
还不清楚怎么开始? 只需在[slack社区](https://signoz.io/slack)的`#contributing`频道里ping我们。
<br /><br />
@@ -136,7 +136,7 @@ Jaeger只做分布式跟踪SigNoz则是做了矩阵和跟踪两块我们
## 社区
加入[slack community](https://signoz.io/slack),了解更多关于分布式跟踪、可观察性(observability)以及SigNoz。同时与其他用户和贡献者一起交流。
如果你有任何想法、问题或者反馈,请在[Github Discussions](https://github.com/SigNoz/signoz/discussions)分享给我们。

SECURITY.md Normal file

@@ -0,0 +1,18 @@
# Security Policy
SigNoz is looking forward to working with security researchers across the world to keep SigNoz and our users safe. If you have found an issue in our systems/applications, please reach out to us.
## Supported Versions
We always recommend using the latest version of SigNoz to ensure you get all security updates.
## Reporting a Vulnerability
If you believe you have found a security vulnerability within SigNoz, please let us know right away. We'll try and fix the problem as soon as possible.
**Do not report vulnerabilities using public GitHub issues**. Instead, email <security@signoz.io> with a detailed account of the issue. Please submit one issue per email; this helps us triage vulnerabilities.
Once we've received your email we'll keep you updated as we fix the vulnerability.
## Thanks
Thank you for keeping SigNoz and our users safe. 🙇

deploy/README.md Normal file

@@ -0,0 +1,88 @@
# Deploy
Check that you have cloned [signoz/signoz](https://github.com/signoz/signoz)
and are currently in the `signoz/deploy` folder.
## Docker
If you don't have docker set up, please follow [this guide](https://docs.docker.com/engine/install/)
to set up docker before proceeding with the next steps.
### Using Install Script
Now run the following command to install:
```sh
./install.sh
```
### Using Docker Compose
If you don't have docker-compose set up, please follow [this guide](https://docs.docker.com/compose/install/)
to set up docker compose before proceeding with the next steps.
For x86 chip (amd):
```sh
docker-compose -f docker/clickhouse-setup/docker-compose.yaml up -d
```
For Mac with Apple chip (arm):
```sh
docker-compose -f docker/clickhouse-setup/docker-compose.arm.yaml up -d
```
Open http://localhost:3301 in your favourite browser. In a couple of minutes, you should see
the data generated by HotROD in the SigNoz UI.
## Kubernetes
### Using Helm
#### Bring up SigNoz cluster
```sh
helm repo add signoz https://charts.signoz.io
kubectl create ns platform
helm -n platform install my-release signoz/signoz
```
To access the UI, you can `port-forward` the frontend service:
```sh
kubectl -n platform port-forward svc/my-release-frontend 3301:3301
```
Open http://localhost:3301 in your favourite browser. A few minutes after you generate load
from the HotROD application, you should see its data in the SigNoz UI.
#### Test HotROD application with SigNoz
```sh
kubectl create ns sample-application
kubectl -n sample-application apply -f https://raw.githubusercontent.com/SigNoz/signoz/main/sample-apps/hotrod/hotrod.yaml
```
To generate load:
```sh
kubectl -n sample-application run strzal --image=djbingham/curl \
--restart='OnFailure' -i --tty --rm --command -- curl -X POST -F \
'locust_count=6' -F 'hatch_rate=2' http://locust-master:8089/swarm
```
To stop load:
```sh
kubectl -n sample-application run strzal --image=djbingham/curl \
--restart='OnFailure' -i --tty --rm --command -- curl \
http://locust-master:8089/stop
```
## Uninstall/Troubleshoot?
Go to our official documentation site [signoz.io/docs](https://signoz.io/docs) for more.


@@ -0,0 +1,35 @@
global:
resolve_timeout: 1m
slack_api_url: 'https://hooks.slack.com/services/xxx'
route:
receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
slack_configs:
- channel: '#alerts'
send_resolved: true
icon_url: https://avatars3.githubusercontent.com/u/3380462
title: |-
[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
{{- if gt (len .CommonLabels) (len .GroupLabels) -}}
{{" "}}(
{{- with .CommonLabels.Remove .GroupLabels.Names }}
{{- range $index, $label := .SortedPairs -}}
{{ if $index }}, {{ end }}
{{- $label.Name }}="{{ $label.Value -}}"
{{- end }}
{{- end -}}
)
{{- end }}
text: >-
{{ range .Alerts -}}
*Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}
*Description:* {{ .Annotations.description }}
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}


@@ -0,0 +1,11 @@
groups:
- name: ExampleCPULoadGroup
rules:
- alert: HighCpuLoad
expr: system_cpu_load_average_1m > 0.1
for: 0m
labels:
severity: warning
annotations:
summary: High CPU load
description: "CPU load is > 0.1\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"


@@ -1,11 +1,8 @@
<?xml version="1.0"?>
<yandex>
<logger>
<level>information</level>
<console>1</console>
</logger>
<http_port>8123</http_port>
@@ -45,6 +42,34 @@
</client>
</openSSL>
<!-- Example config for tiered storage -->
<!-- <storage_configuration>
<disks>
<default>
<keep_free_space_bytes>10485760</keep_free_space_bytes>
</default>
<s3>
<type>s3</type>
<endpoint>https://BUCKET-NAME.s3.amazonaws.com/data/</endpoint>
<access_key_id>ACCESS-KEY-ID</access_key_id>
<secret_access_key>SECRET-ACCESS-KEY</secret_access_key>
</s3>
</disks>
<policies>
<tiered>
<volumes>
<default>
<disk>default</disk>
</default>
<s3>
<disk>s3</disk>
</s3>
</volumes>
</tiered>
</policies>
</storage_configuration> -->
<!-- Default root page on http[s] server. For example load UI from https://tabix.io/ when opening http://localhost:8123 -->
<!--
<http_server_default_response><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></http_server_default_response>


@@ -1,106 +1,132 @@
version: "3.9"
services:
clickhouse:
image: yandex/clickhouse-server:21.12.3.32
# ports:
# - "9000:9000"
# - "8123:8123"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./data/clickhouse/:/var/lib/clickhouse/
deploy:
restart_policy:
condition: on-failure
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
alertmanager:
image: signoz/alertmanager:0.23.0-0.1
volumes:
- ./data/alertmanager:/data
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
depends_on:
- query-service
deploy:
restart_policy:
condition: on-failure
query-service:
image: signoz/query-service:0.8.2
command: ["-config=/root/config/prometheus.yml"]
# ports:
# - "6060:6060" # pprof port
# - "8080:8080" # query-service port
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000/?database=signoz_traces
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-swarm
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/version"]
interval: 30s
timeout: 5s
retries: 3
deploy:
restart_policy:
condition: on-failure
depends_on:
- clickhouse
frontend:
image: signoz/frontend:0.8.2
deploy:
restart_policy:
condition: on-failure
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
image: signoz/otelcontribcol:0.45.1-0.3
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8889:8889" # Prometheus metrics exposed by the agent
# - "13133:13133" # health_check
# - "14268:14268" # Jaeger receiver
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zpages extension
# - "55680:55680" # OTLP gRPC legacy receiver
# - "55681:55681" # OTLP HTTP legacy receiver
deploy:
mode: replicated
replicas: 3
restart_policy:
condition: on-failure
resources:
limits:
memory: 2000m
depends_on:
- clickhouse
otel-collector-metrics:
image: signoz/otelcontribcol:0.45.1-0.3
command: ["--config=/etc/otel-collector-metrics-config.yaml"]
volumes:
- ./otel-collector-metrics-config.yaml:/etc/otel-collector-metrics-config.yaml
deploy:
restart_policy:
condition: on-failure
depends_on:
- clickhouse
hotrod:
image: jaegertracing/example-hotrod:1.30
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
logging:
options:
max-size: 50m
max-file: "3"
load-hotrod:
image: "grubykarol/locust:1.2.3-python3.9-alpine3.12"
hostname: load-hotrod
ports:
- "8089:8089"
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
@@ -110,4 +136,4 @@ services:
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../common/locust-scripts:/locust


@@ -1,72 +0,0 @@
receivers:
otlp:
protocols:
grpc:
http:
jaeger:
protocols:
grpc:
thrift_http:
hostmetrics:
collection_interval: 60s
scrapers:
cpu:
load:
memory:
disk:
filesystem:
network:
# Data sources: metrics
prometheus:
config:
scrape_configs:
- job_name: "otel-collector"
dns_sd_configs:
- names:
- 'tasks.signoz_otel-collector'
type: 'A'
port: 8888
- job_name: "otel-collector-hostmetrics"
scrape_interval: 10s
static_configs:
- targets: ["otel-collector-hostmetrics:8888"]
processors:
batch:
send_batch_size: 1000
timeout: 10s
memory_limiter:
# Same as --mem-ballast-size-mib CLI argument
ballast_size_mib: 683
# 80% of maximum memory up to 2G
limit_mib: 1500
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
# queued_retry:
# num_workers: 4
# queue_size: 100
# retry_on_failure: true
extensions:
health_check: {}
zpages: {}
exporters:
clickhouse:
datasource: tcp://clickhouse:9000
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/?database=signoz_metrics
resource_to_telemetry_conversion:
enabled: true
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [jaeger, otlp]
processors: [batch]
exporters: [clickhouse]
metrics:
receivers: [otlp, prometheus, hostmetrics]
processors: [batch]
exporters: [clickhousemetricswrite]


@@ -1,4 +1,8 @@
receivers:
otlp/spanmetrics:
protocols:
grpc:
endpoint: "localhost:12345"
otlp:
protocols:
grpc:
@@ -7,18 +11,39 @@ receivers:
protocols:
grpc:
thrift_http:
hostmetrics:
collection_interval: 30s
scrapers:
cpu:
load:
memory:
disk:
filesystem:
network:
processors:
batch:
send_batch_size: 1000
timeout: 10s
memory_limiter:
# Same as --mem-ballast-size-mib CLI argument
ballast_size_mib: 683
# 80% of maximum memory up to 2G
limit_mib: 1500
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
signozspanmetrics/prometheus:
metrics_exporter: prometheus
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
dimensions_cache_size: 10000
dimensions:
- name: service.namespace
default: default
- name: deployment.environment
default: default
# memory_limiter:
# # 80% of maximum memory up to 2G
# limit_mib: 1500
# # 25% of limit up to 2G
# spike_limit_mib: 512
# check_interval: 5s
#
# # 50% of the maximum memory
# limit_percentage: 50
# # 20% of max memory usage spike expected
# spike_limit_percentage: 20
# queued_retry:
# num_workers: 4
# queue_size: 100
@@ -27,21 +52,25 @@ extensions:
health_check: {}
zpages: {}
exporters:
clickhouse:
datasource: tcp://clickhouse:9000
clickhousetraces:
datasource: tcp://clickhouse:9000/?database=signoz_traces
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/?database=signoz_metrics
resource_to_telemetry_conversion:
enabled: true
prometheus:
endpoint: "0.0.0.0:8889"
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [jaeger, otlp]
processors: [batch]
exporters: [clickhouse]
processors: [signozspanmetrics/prometheus, batch]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
receivers: [otlp, hostmetrics]
processors: [batch]
exporters: [clickhousemetricswrite]
metrics/spanmetrics:
receivers: [otlp/spanmetrics]
exporters: [prometheus]


@@ -0,0 +1,47 @@
receivers:
otlp:
protocols:
grpc:
http:
# Data sources: metrics
prometheus:
config:
scrape_configs:
- job_name: "otel-collector"
scrape_interval: 30s
static_configs:
- targets: ["otel-collector:8889"]
processors:
batch:
send_batch_size: 1000
timeout: 10s
# memory_limiter:
# # 80% of maximum memory up to 2G
# limit_mib: 1500
# # 25% of limit up to 2G
# spike_limit_mib: 512
# check_interval: 5s
#
# # 50% of the maximum memory
# limit_percentage: 50
# # 20% of max memory usage spike expected
# spike_limit_percentage: 20
# queued_retry:
# num_workers: 4
# queue_size: 100
# retry_on_failure: true
extensions:
health_check: {}
zpages: {}
exporters:
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/?database=signoz_metrics
service:
extensions: [health_check, zpages]
pipelines:
metrics:
receivers: [otlp, prometheus]
processors: [batch]
exporters: [clickhousemetricswrite]


@@ -9,12 +9,13 @@ alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
- alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
- 'alerts.yml'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.


@@ -1,7 +1,7 @@
server {
listen 3000;
listen 3301;
server_name _;
gzip on;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
@@ -12,19 +12,26 @@ server {
gzip_http_version 1.1;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
if ( $uri = '/index.html' ) {
add_header Cache-Control no-store always;
}
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /api/alertmanager {
proxy_pass http://alertmanager:9093/api/v2;
}
location /api {
proxy_pass http://query-service:8080/api;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}


@@ -1,11 +1,8 @@
<?xml version="1.0"?>
<yandex>
<logger>
<level>trace</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>1000M</size>
<count>10</count>
<level>information</level>
<console>1</console>
</logger>
<http_port>8123</http_port>
@@ -45,6 +42,34 @@
</client>
</openSSL>
<!-- Example config for tiered storage -->
<!-- <storage_configuration>
<disks>
<default>
<keep_free_space_bytes>10485760</keep_free_space_bytes>
</default>
<s3>
<type>s3</type>
<endpoint>https://BUCKET-NAME.s3.amazonaws.com/data/</endpoint>
<access_key_id>ACCESS-KEY-ID</access_key_id>
<secret_access_key>SECRET-ACCESS-KEY</secret_access_key>
</s3>
</disks>
<policies>
<tiered>
<volumes>
<default>
<disk>default</disk>
</default>
<s3>
<disk>s3</disk>
</s3>
</volumes>
</tiered>
</policies>
</storage_configuration> -->
<!-- Default root page on http[s] server. For example load UI from https://tabix.io/ when opening http://localhost:8123 -->
<!--
<http_server_default_response><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></http_server_default_response>


@@ -0,0 +1,133 @@
version: "2.4"
services:
clickhouse:
image: altinity/clickhouse-server:21.12.3.32.altinitydev.arm
# ports:
# - "9000:9000"
# - "8123:8123"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./data/clickhouse/:/var/lib/clickhouse/
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
alertmanager:
image: signoz/alertmanager:0.23.0-0.1
volumes:
- ./data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
restart: on-failure
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
# Notes for Maintainers/Contributors who will change Line Numbers of Frontend & Query-Section. Please Update Line Numbers in `./scripts/commentLinesForSetup.sh` & `./CONTRIBUTING.md`
query-service:
image: signoz/query-service:0.8.2
container_name: query-service
command: ["-config=/root/config/prometheus.yml"]
# ports:
# - "6060:6060" # pprof port
# - "8080:8080" # query-service port
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000/?database=signoz_traces
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-arm
restart: on-failure
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/version"]
interval: 30s
timeout: 5s
retries: 3
depends_on:
clickhouse:
condition: service_healthy
frontend:
image: signoz/frontend:0.8.2
container_name: frontend
restart: on-failure
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
image: signoz/otelcontribcol:0.45.1-0.3
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8889:8889" # Prometheus metrics exposed by the agent
# - "13133:13133" # health_check
# - "14268:14268" # Jaeger receiver
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zpages extension
# - "55680:55680" # OTLP gRPC legacy receiver
# - "55681:55681" # OTLP HTTP legacy receiver
mem_limit: 2000m
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
otel-collector-metrics:
image: signoz/otelcontribcol:0.45.1-0.3
command: ["--config=/etc/otel-collector-metrics-config.yaml"]
volumes:
- ./otel-collector-metrics-config.yaml:/etc/otel-collector-metrics-config.yaml
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
hotrod:
image: jaegertracing/example-hotrod:1.30
container_name: hotrod
logging:
options:
max-size: 50m
max-file: "3"
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
load-hotrod:
image: "grubykarol/locust:1.2.3-python3.9-alpine3.12"
container_name: load-hotrod
hostname: load-hotrod
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../common/locust-scripts:/locust


@@ -2,115 +2,124 @@ version: "2.4"
services:
clickhouse:
image: ${clickhouse_image}
expose:
- 8123
- 9000
ports:
- 9001:9000
- 8123:8123
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./docker-entrypoint-initdb.d/init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
- ./data/clickhouse/:/var/lib/clickhouse/
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
image: yandex/clickhouse-server:21.12.3.32
# ports:
# - "9000:9000"
# - "8123:8123"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./data/clickhouse/:/var/lib/clickhouse/
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
alertmanager:
image: signoz/alertmanager:0.5.0
image: signoz/alertmanager:0.23.0-0.1
volumes:
- ./alertmanager.yml:/prometheus/alertmanager.yml
- ./data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
restart: on-failure
command:
- '--config.file=/prometheus/alertmanager.yml'
- '--storage.path=/data'
ports:
- 9093:9093
- --queryService.url=http://query-service:8085
- --storage.path=/data
# Notes for Maintainers/Contributors who will change Line Numbers of Frontend & Query-Section. Please Update Line Numbers in `./scripts/commentLinesForSetup.sh` & `./CONTRIBUTING.md`
query-service:
image: signoz/query-service:0.5.1
image: signoz/query-service:0.8.2
container_name: query-service
command: ["-config=/root/config/prometheus.yml"]
ports:
- "8080:8080"
# ports:
# - "6060:6060" # pprof port
# - "8080:8080" # query-service port
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ClickHouseUrl=tcp://clickhouse:9000/?database=signoz_traces
- STORAGE=clickhouse
- POSTHOG_API_KEY=H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
restart: on-failure
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/version"]
interval: 30s
timeout: 5s
retries: 3
depends_on:
clickhouse:
condition: service_healthy
frontend:
image: signoz/frontend:0.5.2
container_name: frontend
frontend:
image: signoz/frontend:0.8.2
container_name: frontend
restart: on-failure
depends_on:
- alertmanager
- query-service
links:
- "query-service"
ports:
- "3000:3000"
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
image: signoz/otelcontribcol:0.4.2
command: ["--config=/etc/otel-collector-config.yaml", "--mem-ballast-size-mib=683"]
image: signoz/otelcontribcol:0.45.1-0.3
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "1777:1777" # pprof extension
- "8887:8888" # Prometheus metrics exposed by the agent
- "14268:14268" # Jaeger receiver
- "55678" # OpenCensus receiver
- "55680:55680" # OTLP HTTP/2.0 legacy port
- "55681:55681" # OTLP HTTP/1.0 receiver
- "4317:4317" # OTLP GRPC receiver
- "55679:55679" # zpages extension
- "13133" # health_check
- "8889:8889" # prometheus exporter
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8889:8889" # Prometheus metrics exposed by the agent
# - "13133:13133" # health_check
# - "14268:14268" # Jaeger receiver
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zpages extension
# - "55680:55680" # OTLP gRPC legacy receiver
# - "55681:55681" # OTLP HTTP legacy receiver
mem_limit: 2000m
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
otel-collector-metrics:
image: signoz/otelcontribcol:0.4.2
command: ["--config=/etc/otel-collector-metrics-config.yaml", "--mem-ballast-size-mib=683"]
image: signoz/otelcontribcol:0.45.1-0.3
command: ["--config=/etc/otel-collector-metrics-config.yaml"]
volumes:
- ./otel-collector-metrics-config.yaml:/etc/otel-collector-metrics-config.yaml
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
hotrod:
image: jaegertracing/example-hotrod:latest
container_name: hotrod
ports:
- "9000:8080"
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
hotrod:
image: jaegertracing/example-hotrod:1.30
container_name: hotrod
logging:
options:
max-size: 50m
max-file: "3"
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
load-hotrod:
image: "grubykarol/locust:1.2.3-python3.9-alpine3.12"
container_name: load-hotrod
hostname: load-hotrod
ports:
- "8089:8089"
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
@@ -120,4 +129,4 @@ services:
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../common/locust-scripts:/locust


@@ -1,31 +0,0 @@
CREATE TABLE IF NOT EXISTS signoz_index (
timestamp DateTime64(9) CODEC(Delta, ZSTD(1)),
traceID String CODEC(ZSTD(1)),
spanID String CODEC(ZSTD(1)),
parentSpanID String CODEC(ZSTD(1)),
serviceName LowCardinality(String) CODEC(ZSTD(1)),
name LowCardinality(String) CODEC(ZSTD(1)),
kind Int32 CODEC(ZSTD(1)),
durationNano UInt64 CODEC(ZSTD(1)),
tags Array(String) CODEC(ZSTD(1)),
tagsKeys Array(String) CODEC(ZSTD(1)),
tagsValues Array(String) CODEC(ZSTD(1)),
statusCode Int64 CODEC(ZSTD(1)),
references String CODEC(ZSTD(1)),
externalHttpMethod Nullable(String) CODEC(ZSTD(1)),
externalHttpUrl Nullable(String) CODEC(ZSTD(1)),
component Nullable(String) CODEC(ZSTD(1)),
dbSystem Nullable(String) CODEC(ZSTD(1)),
dbName Nullable(String) CODEC(ZSTD(1)),
dbOperation Nullable(String) CODEC(ZSTD(1)),
peerService Nullable(String) CODEC(ZSTD(1)),
INDEX idx_traceID traceID TYPE bloom_filter GRANULARITY 4,
INDEX idx_service serviceName TYPE bloom_filter GRANULARITY 4,
INDEX idx_name name TYPE bloom_filter GRANULARITY 4,
INDEX idx_kind kind TYPE minmax GRANULARITY 4,
INDEX idx_tagsKeys tagsKeys TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX idx_tagsValues tagsValues TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX idx_duration durationNano TYPE minmax GRANULARITY 1
) ENGINE MergeTree()
PARTITION BY toDate(timestamp)
ORDER BY (serviceName, -toUnixTimestamp(timestamp))


@@ -1 +0,0 @@
clickhouse_image=altinity/clickhouse-server:21.8.12.1.testingarm


@@ -1 +0,0 @@
clickhouse_image=yandex/clickhouse-server


@@ -27,14 +27,23 @@ processors:
signozspanmetrics/prometheus:
metrics_exporter: prometheus
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
memory_limiter:
# Same as --mem-ballast-size-mib CLI argument
ballast_size_mib: 683
# 80% of maximum memory up to 2G
limit_mib: 1500
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
dimensions_cache_size: 10000
dimensions:
- name: service.namespace
default: default
- name: deployment.environment
default: default
# memory_limiter:
# # 80% of maximum memory up to 2G
# limit_mib: 1500
# # 25% of limit up to 2G
# spike_limit_mib: 512
# check_interval: 5s
#
# # 50% of the maximum memory
# limit_percentage: 50
# # 20% of max memory usage spike expected
# spike_limit_percentage: 20
# queued_retry:
# num_workers: 4
# queue_size: 100
@@ -43,8 +52,8 @@ extensions:
health_check: {}
zpages: {}
exporters:
clickhouse:
datasource: tcp://clickhouse:9000
clickhousetraces:
datasource: tcp://clickhouse:9000/?database=signoz_traces
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/?database=signoz_metrics
resource_to_telemetry_conversion:
@@ -57,11 +66,11 @@ service:
traces:
receivers: [jaeger, otlp]
processors: [signozspanmetrics/prometheus, batch]
exporters: [clickhouse]
exporters: [clickhousetraces]
metrics:
receivers: [otlp, hostmetrics]
processors: [batch]
exporters: [clickhousemetricswrite]
metrics/spanmetrics:
receivers: [otlp/spanmetrics]
exporters: [prometheus]


@@ -16,14 +16,17 @@ processors:
batch:
send_batch_size: 1000
timeout: 10s
memory_limiter:
# Same as --mem-ballast-size-mib CLI argument
ballast_size_mib: 683
# 80% of maximum memory up to 2G
limit_mib: 1500
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
# memory_limiter:
# # 80% of maximum memory up to 2G
# limit_mib: 1500
# # 25% of limit up to 2G
# spike_limit_mib: 512
# check_interval: 5s
#
# # 50% of the maximum memory
# limit_percentage: 50
# # 20% of max memory usage spike expected
# spike_limit_percentage: 20
# queued_retry:
# num_workers: 4
# queue_size: 100
@@ -41,4 +44,4 @@ service:
metrics:
receivers: [otlp, prometheus]
processors: [batch]
exporters: [clickhousemetricswrite]


@@ -1,7 +1,7 @@
server {
listen 3000;
listen 3301;
server_name _;
gzip on;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
@@ -9,25 +9,29 @@ server {
gzip_vary on;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
if ( $uri = '/index.html' ) {
add_header Cache-Control no-store always;
}
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /api/alertmanager {
proxy_pass http://alertmanager:9093/api/v2;
}
location /api {
proxy_pass http://query-service:8080/api;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}


@@ -1,276 +0,0 @@
version: "2.4"
volumes:
metadata_data: {}
middle_var: {}
historical_var: {}
broker_var: {}
coordinator_var: {}
router_var: {}
# If able to connect to kafka but not able to write to topic otlp_spans look into below link
# https://github.com/wurstmeister/kafka-docker/issues/409#issuecomment-428346707
services:
zookeeper:
image: bitnami/zookeeper:3.6.2-debian-10-r100
ports:
- "2181:2181"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
# image: wurstmeister/kafka
image: bitnami/kafka:2.7.0-debian-10-r1
ports:
- "9092:9092"
hostname: kafka
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: 'true'
KAFKA_TOPICS: 'otlp_spans:1:1,flattened_spans:1:1'
healthcheck:
# test: ["CMD", "kafka-topics.sh", "--create", "--topic", "otlp_spans", "--zookeeper", "zookeeper:2181"]
test: ["CMD", "kafka-topics.sh", "--list", "--zookeeper", "zookeeper:2181"]
interval: 30s
timeout: 10s
retries: 10
depends_on:
- zookeeper
postgres:
container_name: postgres
image: postgres:latest
volumes:
- metadata_data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=FoolishPassword
- POSTGRES_USER=druid
- POSTGRES_DB=druid
coordinator:
image: apache/druid:0.20.0
container_name: coordinator
volumes:
- ./storage:/opt/data
- coordinator_var:/opt/druid/var
depends_on:
- zookeeper
- postgres
ports:
- "8081:8081"
command:
- coordinator
env_file:
- environment_tiny/coordinator
- environment_tiny/common
broker:
image: apache/druid:0.20.0
container_name: broker
volumes:
- broker_var:/opt/druid/var
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8082:8082"
command:
- broker
env_file:
- environment_tiny/broker
- environment_tiny/common
historical:
image: apache/druid:0.20.0
container_name: historical
volumes:
- ./storage:/opt/data
- historical_var:/opt/druid/var
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8083:8083"
command:
- historical
env_file:
- environment_tiny/historical
- environment_tiny/common
middlemanager:
image: apache/druid:0.20.0
container_name: middlemanager
volumes:
- ./storage:/opt/data
- middle_var:/opt/druid/var
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8091:8091"
command:
- middleManager
env_file:
- environment_tiny/middlemanager
- environment_tiny/common
router:
image: apache/druid:0.20.0
container_name: router
volumes:
- router_var:/opt/druid/var
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8888:8888"
command:
- router
env_file:
- environment_tiny/router
- environment_tiny/common
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://router:8888/druid/coordinator/v1/datasources/flattened_spans"]
interval: 30s
timeout: 5s
retries: 5
flatten-processor:
image: signoz/flattener-processor:0.4.0
container_name: flattener-processor
depends_on:
- kafka
- otel-collector
ports:
- "8000:8000"
environment:
- KAFKA_BROKER=kafka:9092
- KAFKA_INPUT_TOPIC=otlp_spans
- KAFKA_OUTPUT_TOPIC=flattened_spans
query-service:
image: signoz.docker.scarf.sh/signoz/query-service:0.4.1
container_name: query-service
depends_on:
- router
ports:
- "8080:8080"
volumes:
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- DruidClientUrl=http://router:8888
- DruidDatasource=flattened_spans
- STORAGE=druid
- POSTHOG_API_KEY=H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w
- GODEBUG=netdns=go
depends_on:
router:
condition: service_healthy
frontend:
image: signoz/frontend:0.4.1
container_name: frontend
depends_on:
- query-service
links:
- "query-service"
ports:
- "3000:3000"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
create-supervisor:
image: theithollow/hollowapp-blog:curl
container_name: create-supervisor
command:
- /bin/sh
- -c
- "curl -X POST -H 'Content-Type: application/json' -d @/app/supervisor-spec.json http://router:8888/druid/indexer/v1/supervisor"
depends_on:
- router
restart: on-failure:6
volumes:
- ./druid-jobs/supervisor-spec.json:/app/supervisor-spec.json
set-retention:
image: theithollow/hollowapp-blog:curl
container_name: set-retention
command:
- /bin/sh
- -c
- "curl -X POST -H 'Content-Type: application/json' -d @/app/retention-spec.json http://router:8888/druid/coordinator/v1/rules/flattened_spans"
depends_on:
- router
restart: on-failure:6
volumes:
- ./druid-jobs/retention-spec.json:/app/retention-spec.json
otel-collector:
image: otel/opentelemetry-collector:0.18.0
command: ["--config=/etc/otel-collector-config.yaml", "--mem-ballast-size-mib=683"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "1777:1777" # pprof extension
- "8887:8888" # Prometheus metrics exposed by the agent
- "14268:14268" # Jaeger receiver
- "55678" # OpenCensus receiver
- "55680:55680" # OTLP HTTP/2.0 legacy port
- "55681:55681" # OTLP HTTP/1.0 receiver
- "4317:4317" # OTLP GRPC receiver
- "55679:55679" # zpages extension
- "13133" # health_check
depends_on:
kafka:
condition: service_healthy
hotrod:
image: jaegertracing/example-hotrod:latest
container_name: hotrod
ports:
- "9000:8080"
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
load-hotrod:
image: "grubykarol/locust:1.2.3-python3.9-alpine3.12"
container_name: load-hotrod
hostname: load-hotrod
ports:
- "8089:8089"
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../common/locust-scripts:/locust


@@ -1,272 +0,0 @@
version: "2.4"
volumes:
metadata_data: {}
middle_var: {}
historical_var: {}
broker_var: {}
coordinator_var: {}
router_var: {}
# If able to connect to kafka but not able to write to topic otlp_spans look into below link
# https://github.com/wurstmeister/kafka-docker/issues/409#issuecomment-428346707
services:
zookeeper:
image: bitnami/zookeeper:3.6.2-debian-10-r100
ports:
- "2181:2181"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
# image: wurstmeister/kafka
image: bitnami/kafka:2.7.0-debian-10-r1
ports:
- "9092:9092"
hostname: kafka
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: 'true'
KAFKA_TOPICS: 'otlp_spans:1:1,flattened_spans:1:1'
healthcheck:
# test: ["CMD", "kafka-topics.sh", "--create", "--topic", "otlp_spans", "--zookeeper", "zookeeper:2181"]
test: ["CMD", "kafka-topics.sh", "--list", "--zookeeper", "zookeeper:2181"]
interval: 30s
timeout: 10s
retries: 10
depends_on:
- zookeeper
postgres:
container_name: postgres
image: postgres:latest
volumes:
- metadata_data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=FoolishPassword
- POSTGRES_USER=druid
- POSTGRES_DB=druid
coordinator:
image: apache/druid:0.20.0
container_name: coordinator
volumes:
- ./storage:/opt/druid/deepStorage
- coordinator_var:/opt/druid/data
depends_on:
- zookeeper
- postgres
ports:
- "8081:8081"
command:
- coordinator
env_file:
- environment_small/coordinator
broker:
image: apache/druid:0.20.0
container_name: broker
volumes:
- broker_var:/opt/druid/data
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8082:8082"
command:
- broker
env_file:
- environment_small/broker
historical:
image: apache/druid:0.20.0
container_name: historical
volumes:
- ./storage:/opt/druid/deepStorage
- historical_var:/opt/druid/data
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8083:8083"
command:
- historical
env_file:
- environment_small/historical
middlemanager:
image: apache/druid:0.20.0
container_name: middlemanager
volumes:
- ./storage:/opt/druid/deepStorage
- middle_var:/opt/druid/data
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8091:8091"
command:
- middleManager
env_file:
- environment_small/middlemanager
router:
image: apache/druid:0.20.0
container_name: router
volumes:
- router_var:/opt/druid/data
depends_on:
- zookeeper
- postgres
- coordinator
ports:
- "8888:8888"
command:
- router
env_file:
- environment_small/router
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://router:8888/druid/coordinator/v1/datasources/flattened_spans"]
interval: 30s
timeout: 5s
retries: 5
flatten-processor:
image: signoz/flattener-processor:0.4.0
container_name: flattener-processor
depends_on:
- kafka
- otel-collector
ports:
- "8000:8000"
environment:
- KAFKA_BROKER=kafka:9092
- KAFKA_INPUT_TOPIC=otlp_spans
- KAFKA_OUTPUT_TOPIC=flattened_spans
query-service:
image: signoz.docker.scarf.sh/signoz/query-service:0.4.1
container_name: query-service
depends_on:
- router
ports:
- "8080:8080"
volumes:
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- DruidClientUrl=http://router:8888
- DruidDatasource=flattened_spans
- STORAGE=druid
- POSTHOG_API_KEY=H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w
- GODEBUG=netdns=go
depends_on:
router:
condition: service_healthy
frontend:
image: signoz/frontend:0.4.1
container_name: frontend
depends_on:
- query-service
links:
- "query-service"
ports:
- "3000:3000"
volumes:
- ./nginx-config.conf:/etc/nginx/conf.d/default.conf
create-supervisor:
image: theithollow/hollowapp-blog:curl
container_name: create-supervisor
command:
- /bin/sh
- -c
- "curl -X POST -H 'Content-Type: application/json' -d @/app/supervisor-spec.json http://router:8888/druid/indexer/v1/supervisor"
depends_on:
- router
restart: on-failure:6
volumes:
- ./druid-jobs/supervisor-spec.json:/app/supervisor-spec.json
set-retention:
image: theithollow/hollowapp-blog:curl
container_name: set-retention
command:
- /bin/sh
- -c
- "curl -X POST -H 'Content-Type: application/json' -d @/app/retention-spec.json http://router:8888/druid/coordinator/v1/rules/flattened_spans"
depends_on:
- router
restart: on-failure:6
volumes:
- ./druid-jobs/retention-spec.json:/app/retention-spec.json
otel-collector:
image: otel/opentelemetry-collector:0.18.0
command: ["--config=/etc/otel-collector-config.yaml", "--mem-ballast-size-mib=683"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "1777:1777" # pprof extension
- "8887:8888" # Prometheus metrics exposed by the agent
- "14268:14268" # Jaeger receiver
- "55678" # OpenCensus receiver
- "55680:55680" # OTLP HTTP/2.0 legacy grpc receiver
- "55681:55681" # OTLP HTTP/1.0 receiver
- "4317:4317" # OTLP GRPC receiver
- "55679:55679" # zpages extension
- "13133" # health_check
depends_on:
kafka:
condition: service_healthy
hotrod:
image: jaegertracing/example-hotrod:latest
container_name: hotrod
ports:
- "9000:8080"
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
load-hotrod:
image: "grubykarol/locust:1.2.3-python3.9-alpine3.12"
container_name: load-hotrod
hostname: load-hotrod
ports:
- "8089:8089"
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ./locust-scripts:/locust


@@ -1 +0,0 @@
[{"period":"P3D","includeFuture":true,"tieredReplicants":{"_default_tier":1},"type":"loadByPeriod"},{"type":"dropForever"}]


@@ -1,69 +0,0 @@
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "flattened_spans",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": {
          "column": "StartTimeUnixNano",
          "format": "nano"
        },
        "dimensionsSpec": {
          "dimensions": [
            "TraceId",
            "SpanId",
            "ParentSpanId",
            "Name",
            "ServiceName",
            "References",
            "Tags",
            "ExternalHttpMethod",
            "ExternalHttpUrl",
            "Component",
            "DBSystem",
            "DBName",
            "DBOperation",
            "PeerService",
            {
              "type": "string",
              "name": "TagsKeys",
              "multiValueHandling": "ARRAY"
            },
            {
              "type": "string",
              "name": "TagsValues",
              "multiValueHandling": "ARRAY"
            },
            { "name": "DurationNano", "type": "Long" },
            { "name": "Kind", "type": "int" },
            { "name": "StatusCode", "type": "int" }
          ]
        }
      }
    },
    "metricsSpec": [
      { "type": "quantilesDoublesSketch", "name": "QuantileDuration", "fieldName": "DurationNano" }
    ],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "DAY",
      "queryGranularity": "NONE",
      "rollup": false
    }
  },
  "tuningConfig": {
    "type": "kafka",
    "reportParseExceptions": true
  },
  "ioConfig": {
    "topic": "flattened_spans",
    "replicas": 1,
    "taskDuration": "PT20M",
    "completionTimeout": "PT30M",
    "consumerProperties": {
      "bootstrap.servers": "kafka:9092"
    }
  }
}
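In the supervisor spec's `dimensionsSpec` above, Druid accepts a mix of bare strings (implicitly string-typed dimensions) and typed objects. A small illustrative sketch, using a trimmed copy of that dimension list, that normalizes both forms to `(name, type)` pairs:

```python
import json

# A trimmed copy of the "dimensions" list from the Kafka supervisor spec
# above; bare strings and typed objects are intentionally mixed.
DIMENSIONS = json.loads("""
[
  "TraceId", "ServiceName",
  {"type": "string", "name": "TagsKeys", "multiValueHandling": "ARRAY"},
  {"name": "DurationNano", "type": "Long"},
  {"name": "Kind", "type": "int"}
]
""")

def dimension_names(dims):
    """Normalize a Druid dimension list to (name, lowercased type) pairs."""
    pairs = []
    for d in dims:
        if isinstance(d, str):
            pairs.append((d, "string"))  # bare string => string dimension
        else:
            pairs.append((d["name"], d["type"].lower()))
    return pairs

print(dimension_names(DIMENSIONS))
# [('TraceId', 'string'), ('ServiceName', 'string'), ('TagsKeys', 'string'),
#  ('DurationNano', 'long'), ('Kind', 'int')]
```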


@@ -1,53 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=512m
DRUID_XMS=512m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=768m
druid_emitter_logging_logLevel=debug
druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms512m", "-Xmx512m", "-XX:MaxDirectMemorySize=768m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_buffer_sizeBytes=100MiB
druid_storage_type=local
druid_storage_storageDirectory=/opt/druid/deepStorage
druid_indexer_logs_type=file
druid_indexer_logs_directory=/opt/druid/data/indexing-logs
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>
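The JVM sizing knobs in the env file above bound the process footprint: at minimum the container needs heap (`DRUID_XMX`) plus `DRUID_MAXDIRECTMEMORYSIZE`, and in practice more for metaspace and thread stacks. A rough, illustrative calculator (not part of the repo):

```python
# JVM-style size suffixes: k/m/g for KiB/MiB/GiB.
UNITS = {"k": 1 << 10, "m": 1 << 20, "g": 1 << 30}

def parse_size(value: str) -> int:
    """Parse JVM-style sizes such as '512m' or '2g' into bytes."""
    value = value.strip().lower()
    if value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)

def min_footprint(xmx: str, direct: str) -> int:
    """Lower bound on process memory: heap + direct memory, in bytes."""
    return parse_size(xmx) + parse_size(direct)

# Values from the env file above: DRUID_XMX=512m, DRUID_MAXDIRECTMEMORYSIZE=768m
print(min_footprint("512m", "768m") // (1 << 20), "MiB")  # 1280 MiB
```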


@@ -1,52 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=64m
DRUID_XMS=64m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=400m
druid_emitter_logging_logLevel=debug
druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms64m", "-Xmx64m", "-XX:MaxDirectMemorySize=400m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_storage_type=local
druid_storage_storageDirectory=/opt/druid/deepStorage
druid_indexer_logs_type=file
druid_indexer_logs_directory=/opt/druid/data/indexing-logs
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,53 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=512m
DRUID_XMS=512m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=1280m
druid_emitter_logging_logLevel=debug
druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms512m", "-Xmx512m", "-XX:MaxDirectMemorySize=1280m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_buffer_sizeBytes=200MiB
druid_storage_type=local
druid_storage_storageDirectory=/opt/druid/deepStorage
druid_indexer_logs_type=file
druid_indexer_logs_directory=/opt/druid/data/indexing-logs
druid_processing_numThreads=2
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,53 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=1g
DRUID_XMS=1g
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=2g
druid_emitter_logging_logLevel=debug
druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms1g", "-Xmx1g", "-XX:MaxDirectMemorySize=2g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_buffer_sizeBytes=200MiB
druid_storage_type=local
druid_storage_storageDirectory=/opt/druid/deepStorage
druid_indexer_logs_type=file
druid_indexer_logs_directory=/opt/druid/data/indexing-logs
druid_processing_numThreads=2
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,52 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=128m
DRUID_XMS=128m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=128m
druid_emitter_logging_logLevel=debug
druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms128m", "-Xmx128m", "-XX:MaxDirectMemorySize=128m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_storage_type=local
druid_storage_storageDirectory=/opt/druid/deepStorage
druid_indexer_logs_type=file
druid_indexer_logs_directory=/opt/druid/data/indexing-logs
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,52 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=512m
DRUID_XMS=512m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=400m
druid_emitter_logging_logLevel=debug
# druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms512m", "-Xmx512m", "-XX:MaxDirectMemorySize=400m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_buffer_sizeBytes=50MiB
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,26 +0,0 @@
# For S3 storage
# druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service", "druid-s3-extensions"]
# druid_storage_type=s3
# druid_storage_bucket=<s3-bucket-name>
# druid_storage_baseKey=druid/segments
# AWS_ACCESS_KEY_ID=<s3-access-id>
# AWS_SECRET_ACCESS_KEY=<s3-access-key>
# AWS_REGION=<s3-aws-region>
# druid_indexer_logs_type=s3
# druid_indexer_logs_s3Bucket=<s3-bucket-name>
# druid_indexer_logs_s3Prefix=druid/indexing-logs
# -----------------------------------------------------------
# For local storage
druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_storage_type=local
druid_storage_storageDirectory=/opt/data/segments
druid_indexer_logs_type=file
druid_indexer_logs_directory=/opt/data/indexing-logs


@@ -1,49 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=64m
DRUID_XMS=64m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=400m
druid_emitter_logging_logLevel=debug
# druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms64m", "-Xmx64m", "-XX:MaxDirectMemorySize=400m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,49 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=512m
DRUID_XMS=512m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=400m
druid_emitter_logging_logLevel=debug
# druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms512m", "-Xmx512m", "-XX:MaxDirectMemorySize=400m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_buffer_sizeBytes=50MiB
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,50 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=64m
DRUID_XMS=64m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=400m
druid_emitter_logging_logLevel=debug
# druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms256m", "-Xmx256m", "-XX:MaxDirectMemorySize=400m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,49 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Java tuning
DRUID_XMX=64m
DRUID_XMS=64m
DRUID_MAXNEWSIZE=256m
DRUID_NEWSIZE=256m
DRUID_MAXDIRECTMEMORYSIZE=128m
druid_emitter_logging_logLevel=debug
# druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xms64m", "-Xmx64m", "-XX:MaxDirectMemorySize=128m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=25000000
druid_processing_numThreads=1
druid_processing_numMergeBuffers=2
DRUID_LOG4J=<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>


@@ -1,51 +0,0 @@
receivers:
  otlp:
    protocols:
      grpc:
      http:
  jaeger:
    protocols:
      grpc:
      thrift_http:
processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  memory_limiter:
    # Same as --mem-ballast-size-mib CLI argument
    ballast_size_mib: 683
    # 80% of maximum memory up to 2G
    limit_mib: 1500
    # 25% of limit up to 2G
    spike_limit_mib: 512
    check_interval: 5s
  queued_retry:
    num_workers: 4
    queue_size: 100
    retry_on_failure: true
extensions:
  health_check: {}
  zpages: {}
exporters:
  kafka/traces:
    brokers:
      - kafka:9092
    topic: 'otlp_spans'
    protocol_version: 2.0.0
  kafka/metrics:
    brokers:
      - kafka:9092
    topic: 'otlp_metrics'
    protocol_version: 2.0.0
service:
  extensions: [health_check, zpages]
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [memory_limiter, batch, queued_retry]
      exporters: [kafka/traces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [kafka/metrics]
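A collector config like the one above only works if every component a pipeline references is declared in the matching top-level section. A minimal consistency-check sketch (the config is inlined as a plain dict to avoid a YAML dependency; names mirror the file above):

```python
# Component declarations and pipeline wiring mirroring the collector
# config above, reduced to names only.
CONFIG = {
    "receivers": {"otlp": {}, "jaeger": {}},
    "processors": {"batch": {}, "memory_limiter": {}, "queued_retry": {}},
    "exporters": {"kafka/traces": {}, "kafka/metrics": {}},
    "service": {
        "pipelines": {
            "traces": {
                "receivers": ["jaeger", "otlp"],
                "processors": ["memory_limiter", "batch", "queued_retry"],
                "exporters": ["kafka/traces"],
            },
            "metrics": {
                "receivers": ["otlp"],
                "processors": ["batch"],
                "exporters": ["kafka/metrics"],
            },
        }
    },
}

def undeclared_components(cfg):
    """Return (pipeline, kind, name) triples referenced but never declared."""
    missing = []
    for pname, pipeline in cfg["service"]["pipelines"].items():
        for kind in ("receivers", "processors", "exporters"):
            for name in pipeline.get(kind, []):
                if name not in cfg.get(kind, {}):
                    missing.append((pname, kind, name))
    return missing

print(undeclared_components(CONFIG))  # [] -- all references resolve
```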


@@ -102,7 +102,7 @@ check_os() {
# The script should error out in case they aren't available
check_ports_occupied() {
    local port_check_output
    local ports_pattern="80|3000|8080"
    local ports_pattern="3301|4317"
    if is_mac; then
        port_check_output="$(netstat -anp tcp | awk '$6 == "LISTEN" && $4 ~ /^.*\.('"$ports_pattern"')$/')"
@@ -116,18 +116,10 @@ check_ports_occupied() {
    fi
    if [[ -n $port_check_output ]]; then
        DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Error", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "error": "port not available" } }'
        URL="https://app.posthog.com/capture"
        HEADER="Content-Type: application/json"
        if has_curl; then
            curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
        elif has_wget; then
            wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
        fi
        send_event "port_not_available"
        echo "+++++++++++ ERROR ++++++++++++++++++++++"
        echo "SigNoz requires ports 80 & 443 to be open. Please shut down any other service(s) that may be running on these ports."
        echo "SigNoz requires ports 3301 & 4317 to be open. Please shut down any other service(s) that may be running on these ports."
        echo "You can run SigNoz on another port following this guide https://signoz.io/docs/deployment/docker#troubleshooting"
        echo "++++++++++++++++++++++++++++++++++++++++"
        echo ""
@@ -141,58 +133,44 @@ install_docker() {
    if [[ $package_manager == apt-get ]]; then
        apt_cmd="sudo apt-get --yes --quiet"
        apt_cmd="$sudo_cmd apt-get --yes --quiet"
        $apt_cmd update
        $apt_cmd install software-properties-common gnupg-agent
        curl -fsSL "https://download.docker.com/linux/$os/gpg" | sudo apt-key add -
        sudo add-apt-repository \
        curl -fsSL "https://download.docker.com/linux/$os/gpg" | $sudo_cmd apt-key add -
        $sudo_cmd add-apt-repository \
            "deb [arch=amd64] https://download.docker.com/linux/$os $(lsb_release -cs) stable"
        $apt_cmd update
        echo "Installing docker"
        $apt_cmd install docker-ce docker-ce-cli containerd.io
    elif [[ $package_manager == zypper ]]; then
        zypper_cmd="sudo zypper --quiet --no-gpg-checks --non-interactive"
        zypper_cmd="$sudo_cmd zypper --quiet --no-gpg-checks --non-interactive"
        echo "Installing docker"
        if [[ $os == sles ]]; then
            os_sp="$(cat /etc/*-release | awk -F= '$1 == "VERSION_ID" { gsub(/"/, ""); print $2; exit }')"
            os_arch="$(uname -i)"
            sudo SUSEConnect -p sle-module-containers/$os_sp/$os_arch -r ''
            SUSEConnect -p sle-module-containers/$os_sp/$os_arch -r ''
        fi
        $zypper_cmd install docker docker-runc containerd
        sudo systemctl enable docker.service
        $sudo_cmd systemctl enable docker.service
    elif [[ $package_manager == yum && $os == 'amazon linux' ]]; then
        echo
        echo "Amazon Linux detected ... "
        echo
        # sudo yum install docker
        # sudo service docker start
        sudo amazon-linux-extras install docker
        # yum install docker
        # service docker start
        $sudo_cmd yum install -y amazon-linux-extras
        $sudo_cmd amazon-linux-extras enable docker
        $sudo_cmd yum install -y docker
    else
        yum_cmd="sudo yum --assumeyes --quiet"
        yum_cmd="$sudo_cmd yum --assumeyes --quiet"
        $yum_cmd install yum-utils
        sudo yum-config-manager --add-repo https://download.docker.com/linux/$os/docker-ce.repo
        $sudo_cmd yum-config-manager --add-repo https://download.docker.com/linux/$os/docker-ce.repo
        echo "Installing docker"
        $yum_cmd install docker-ce docker-ce-cli containerd.io
    fi
}
install_docker_machine() {
    echo "\nInstalling docker machine ..."
    if [[ $os == "Mac" ]];then
        curl -sL https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-`uname -s`-`uname -m` >/usr/local/bin/docker-machine
        chmod +x /usr/local/bin/docker-machine
    else
        curl -sL https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine
        chmod +x /tmp/docker-machine
        sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
    fi
}
install_docker_compose() {
@@ -200,22 +178,14 @@ install_docker_compose() {
        if [[ ! -f /usr/bin/docker-compose ]];then
            echo "++++++++++++++++++++++++"
            echo "Installing docker-compose"
            sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
            sudo chmod +x /usr/local/bin/docker-compose
            sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
            $sudo_cmd curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
            $sudo_cmd chmod +x /usr/local/bin/docker-compose
            $sudo_cmd ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
            echo "docker-compose installed!"
            echo ""
        fi
    else
        DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Error", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "error": "Docker Compose not found", "setup_type": "'"$setup_type"'" } }'
        URL="https://app.posthog.com/capture"
        HEADER="Content-Type: application/json"
        if has_curl; then
            curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
        elif has_wget; then
            wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
        fi
        send_event "docker_compose_not_found"
        echo "+++++++++++ IMPORTANT READ ++++++++++++++++++++++"
        echo "docker-compose not found! Please install docker-compose first and then continue with this installation."
@@ -226,35 +196,32 @@ install_docker_compose() {
}
start_docker() {
echo "Starting Docker ..."
if [ $os = "Mac" ]; then
echo -e "🐳 Starting Docker ...\n"
if [[ $os == "Mac" ]]; then
open --background -a Docker && while ! docker system info > /dev/null 2>&1; do sleep 1; done
else
if ! sudo systemctl is-active docker.service > /dev/null; then
if ! $sudo_cmd systemctl is-active docker.service > /dev/null; then
echo "Starting docker service"
sudo systemctl start docker.service
$sudo_cmd systemctl start docker.service
fi
if [[ -z $sudo_cmd ]]; then
docker ps > /dev/null && true
if [[ $? -ne 0 ]]; then
request_sudo
fi
fi
fi
}
wait_for_containers_start() {
local timeout=$1
# The while loop is important because for-loops don't work for dynamic values
while [[ $timeout -gt 0 ]]; do
status_code="$(curl -s -o /dev/null -w "%{http_code}" http://localhost:3000/api/v1/services/list || true)"
status_code="$(curl -s -o /dev/null -w "%{http_code}" http://localhost:3301/api/v1/services/list || true)"
if [[ status_code -eq 200 ]]; then
break
else
if [ $setup_type == 'druid' ]; then
SUPERVISORS="$(curl -so - http://localhost:8888/druid/indexer/v1/supervisor)"
LEN_SUPERVISORS="${#SUPERVISORS}"
if [[ LEN_SUPERVISORS -ne 19 && $timeout -eq 50 ]];then
echo -e "\n🟠 Supervisors taking time to start ⏳ ... let's wait for some more time ⏱️\n\n"
sudo docker-compose -f ./docker/druid-kafka-setup/docker-compose-tiny.yaml up -d
fi
fi
echo -ne "Waiting for all containers to start. This check will timeout in $timeout seconds ...\r\c"
fi
((timeout--))
@@ -265,43 +232,33 @@ wait_for_containers_start() {
}
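The readiness check above reduces to a generic HTTP poll: probe an endpoint once per second until it returns 200 or a timeout budget is spent. A minimal standalone sketch of that pattern (the URL and timeout below are illustrative, not the script's own values):

```shell
#!/usr/bin/env bash
# Generic form of the readiness poll used by wait_for_containers_start:
# repeat a cheap curl probe, once per second, until HTTP 200 or timeout.
poll_url() {
    local url=$1 timeout=$2 status_code
    while [[ $timeout -gt 0 ]]; do
        # -s silences progress, -o discards the body, -w prints only the status code
        status_code="$(curl -s -o /dev/null -w "%{http_code}" "$url" || true)"
        if [[ $status_code -eq 200 ]]; then
            return 0
        fi
        sleep 1
        ((timeout--))
    done
    return 1    # budget spent without a 200
}

# Example: a closed local port fails fast, so this returns non-zero.
poll_url "http://127.0.0.1:9/" 2 || echo "endpoint not ready"
```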
bye() { # Prints a friendly good bye message and exits the script.
if [ "$?" -ne 0 ]; then
if [[ "$?" -ne 0 ]]; then
set +o errexit
echo "🔴 The containers didn't seem to start correctly. Please run the following command to check containers that may have errored out:"
echo ""
if [ $setup_type == 'clickhouse' ]; then
if is_arm64; then
echo -e "sudo docker-compose --env-file ./docker/clickhouse-setup/env/arm64.env -f docker/clickhouse-setup/docker-compose.yaml ps -a"
else
echo -e "sudo docker-compose --env-file ./docker/clickhouse-setup/env/x86_64.env -f docker/clickhouse-setup/docker-compose.yaml ps -a"
fi
else
echo -e "sudo docker-compose -f docker/druid-kafka-setup/docker-compose-tiny.yaml ps -a"
if is_arm64; then
echo -e "$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.arm.yaml ps -a"
else
echo -e "$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml ps -a"
fi
# echo "Please read our troubleshooting guide https://signoz.io/docs/deployment/docker#troubleshooting"
echo "or reach us on SigNoz for support https://join.slack.com/t/signoz-community/shared_invite/zt-lrjknbbp-J_mI13rlw8pGF4EWBnorJA"
echo "or reach us for support in #help channel in our Slack Community https://signoz.io/slack"
echo "++++++++++++++++++++++++++++++++++++++++"
echo -e "\n📨 Please share your email to receive support with the installation"
read -rp 'Email: ' email
while [[ $email == "" ]]
do
if [[ $email == "" ]]; then
echo -e "\n📨 Please share your email to receive support with the installation"
read -rp 'Email: ' email
done
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Support", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "email": "'"$email"'", "setup_type": "'"$setup_type"'" } }'
URL="https://app.posthog.com/capture"
HEADER="Content-Type: application/json"
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
while [[ $email == "" ]]
do
read -rp 'Email: ' email
done
fi
send_event "installation_support"
echo ""
echo -e "\nWe will reach out to you at the email provided shortly. Exiting for now. Bye! 👋 \n"
@@ -309,24 +266,73 @@ bye() { # Prints a friendly good bye message and exits the script.
fi
}
request_sudo() {
if hash sudo 2>/dev/null; then
echo -e "\n\n🙇 We will need sudo access to complete the installation."
if (( $EUID != 0 )); then
sudo_cmd="sudo"
echo -e "Please enter your sudo password, if prompted."
$sudo_cmd -l | grep -e "NOPASSWD: ALL" > /dev/null
if [[ $? -ne 0 ]] && ! $sudo_cmd -v; then
echo "Need sudo privileges to proceed with the installation."
exit 1;
fi
echo -e "Got it! Thanks!! 🙏\n"
echo -e "Okay! We will bring up the SigNoz cluster from here 🚀\n"
fi
fi
}
echo ""
echo -e "👋 Thank you for trying out SigNoz! "
echo ""
sudo_cmd=""
# Check sudo permissions
if (( $EUID != 0 )); then
echo "🟡 Running installer with non-sudo permissions."
echo " In case of any failure or prompt, please consider running the script with sudo privileges."
echo ""
else
sudo_cmd="sudo"
fi
# Checking OS and assigning package manager
desired_os=0
os=""
echo -e "Detecting your OS ..."
email=""
echo -e "🌏 Detecting your OS ...\n"
check_os
SIGNOZ_INSTALLATION_ID=$(curl -s 'https://api64.ipify.org')
# Obtain unique installation id
sysinfo="$(uname -a)"
if [[ $? -ne 0 ]]; then
uuid="$(uuidgen)"
uuid="${uuid:-$(cat /proc/sys/kernel/random/uuid)}"
sysinfo="${uuid:-$(cat /proc/sys/kernel/random/uuid)}"
fi
digest_cmd=""
if hash shasum 2>/dev/null; then
digest_cmd="shasum -a 256"
elif hash sha256sum 2>/dev/null; then
digest_cmd="sha256sum"
elif hash openssl 2>/dev/null; then
digest_cmd="openssl dgst -sha256"
fi
if [[ -z $digest_cmd ]]; then
SIGNOZ_INSTALLATION_ID="$sysinfo"
else
SIGNOZ_INSTALLATION_ID=$(echo "$sysinfo" | $digest_cmd | grep -E -o '[a-zA-Z0-9]{64}')
fi
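The block above derives an anonymous installation ID by hashing the `uname -a` output with whichever digest tool is present, falling back to the raw string only when none exists. A condensed sketch of the same idea (assuming `sha256sum` is available):

```shell
#!/usr/bin/env bash
# Condensed sketch of the installation-ID derivation above: hash `uname -a`
# so only a fingerprint, never the raw system string, is reported.
sysinfo="$(uname -a)"
id="$(printf '%s' "$sysinfo" | sha256sum | grep -E -o '[a-zA-Z0-9]{64}')"
echo "ID length: ${#id}"   # a SHA-256 hex digest is always 64 characters
```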
# echo ""
# echo -e "👉 ${RED}Two ways to go forward\n"
# echo -e "${RED}1) ClickHouse as database (default)\n"
# echo -e "${RED}2) Kafka + Druid as datastore \n"
# read -p "⚙️ Enter your preference (1/2):" choice_setup
# while [[ $choice_setup != "1" && $choice_setup != "2" && $choice_setup != "" ]]
@@ -339,8 +345,6 @@ SIGNOZ_INSTALLATION_ID=$(curl -s 'https://api64.ipify.org')
# if [[ $choice_setup == "1" || $choice_setup == "" ]];then
# setup_type='clickhouse'
# else
# setup_type='druid'
# fi
setup_type='clickhouse'
@@ -350,95 +354,133 @@ setup_type='clickhouse'
# Run bye if failure happens
trap bye EXIT
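`trap bye EXIT` is the script's single failure hook: whenever the script exits, `bye` runs and inspects `$?` to decide whether to print the troubleshooting help. A toy version of the pattern:

```shell
#!/usr/bin/env bash
# Toy version of the trap-on-exit pattern: the handler reads $? to tell
# a clean exit apart from a failure, just as bye() does above.
cleanup() {
    if [[ "$?" -ne 0 ]]; then
        echo "exited with an error"
    else
        echo "exited cleanly"
    fi
}
trap cleanup EXIT
echo "doing work"
# On normal termination, cleanup runs with $? = 0.
```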
URL="https://api.segment.io/v1/track"
HEADER_1="Content-Type: application/json"
HEADER_2="Authorization: Basic NEdtb2E0aXhKQVVIeDJCcEp4c2p3QTFiRWZud0VlUno6"
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Started", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "setup_type": "'"$setup_type"'" } }'
URL="https://app.posthog.com/capture"
HEADER="Content-Type: application/json"
send_event() {
error=""
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
fi
case "$1" in
'install_started')
event="Installation Started"
;;
'os_not_supported')
event="Installation Error"
error="OS Not Supported"
;;
'docker_not_installed')
event="Installation Error"
error="Docker not installed"
;;
'docker_compose_not_found')
event="Installation Error"
error="Docker Compose not found"
;;
'port_not_available')
event="Installation Error"
error="port not available"
;;
'installation_error_checks')
event="Installation Error - Checks"
error="Containers not started"
others='"data": "some_checks",'
;;
'installation_support')
event="Installation Support"
others='"email": "'"$email"'",'
;;
'installation_success')
event="Installation Success"
;;
'identify_successful_installation')
event="Identify Successful Installation"
others='"email": "'"$email"'",'
;;
*)
print_error "unknown event type: $1"
exit 1
;;
esac
if [[ $desired_os -eq 0 ]];then
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Error", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "error": "OS Not Supported", "setup_type": "'"$setup_type"'" } }'
URL="https://app.posthog.com/capture"
HEADER="Content-Type: application/json"
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
if [[ "$error" != "" ]]; then
error='"error": "'"$error"'", '
fi
DATA='{ "anonymousId": "'"$SIGNOZ_INSTALLATION_ID"'", "event": "'"$event"'", "properties": { "os": "'"$os"'", '"$error $others"' "setup_type": "'"$setup_type"'" } }'
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER_1" --header "$HEADER_2" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header "$HEADER_1" --header "$HEADER_2" "$URL" > /dev/null 2>&1
fi
}
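To see what the refactored `send_event` actually ships, the payload assembly can be dry-run in isolation (the ID and property values below are placeholders, and the network call is skipped):

```shell
#!/usr/bin/env bash
# Dry-run of send_event's DATA assembly with placeholder values;
# mirrors the construction above without contacting the endpoint.
SIGNOZ_INSTALLATION_ID="demo-id"
os="Mac"
setup_type="clickhouse"
event="Installation Started"
error=""
others=""
DATA='{ "anonymousId": "'"$SIGNOZ_INSTALLATION_ID"'", "event": "'"$event"'", "properties": { "os": "'"$os"'", '"$error $others"' "setup_type": "'"$setup_type"'" } }'
echo "$DATA"
```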
send_event "install_started"
if [[ $desired_os -eq 0 ]]; then
send_event "os_not_supported"
fi
# check_ports_occupied
# Check if the Docker daemon is installed and available. If not, install and start Docker on Linux machines. Docker Desktop cannot be installed automatically on Mac OS.
if ! is_command_present docker; then
if [[ $package_manager == "apt-get" || $package_manager == "zypper" || $package_manager == "yum" ]]; then
request_sudo
install_docker
else
# enable docker without sudo from next reboot
sudo usermod -aG docker "${USER}"
elif is_mac; then
echo ""
echo "+++++++++++ IMPORTANT READ ++++++++++++++++++++++"
echo "Docker Desktop must be installed manually on Mac OS to proceed. Docker can only be installed automatically on Ubuntu / openSUSE / SLES / Red Hat / CentOS"
echo "https://docs.docker.com/docker-for-mac/install/"
echo "++++++++++++++++++++++++++++++++++++++++++++++++"
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Error", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "error": "Docker not installed", "setup_type": "'"$setup_type"'" } }'
URL="https://app.posthog.com/capture"
HEADER="Content-Type: application/json"
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
fi
send_event "docker_not_installed"
exit 1
else
echo ""
echo "+++++++++++ IMPORTANT READ ++++++++++++++++++++++"
echo "Docker must be installed manually on your machine to proceed. Docker can only be installed automatically on Ubuntu / openSUSE / SLES / Red Hat / CentOS"
echo "https://docs.docker.com/get-docker/"
echo "++++++++++++++++++++++++++++++++++++++++++++++++"
send_event "docker_not_installed"
exit 1
fi
fi
# Install docker-compose
if ! is_command_present docker-compose; then
request_sudo
install_docker_compose
fi
start_docker
# sudo docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml up -d --remove-orphans || true
# $sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml up -d --remove-orphans || true
echo ""
echo -e "\n🟡 Pulling the latest container images for SigNoz. If running as sudo, it may ask for your system password\n"
if [ $setup_type == 'clickhouse' ]; then
if is_arm64; then
sudo docker-compose --env-file ./docker/clickhouse-setup/env/arm64.env -f ./docker/clickhouse-setup/docker-compose.yaml pull
else
sudo docker-compose --env-file ./docker/clickhouse-setup/env/x86_64.env -f ./docker/clickhouse-setup/docker-compose.yaml pull
fi
echo -e "\n🟡 Pulling the latest container images for SigNoz.\n"
if is_arm64; then
$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.arm.yaml pull
else
sudo docker-compose -f ./docker/druid-kafka-setup/docker-compose-tiny.yaml pull
$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml pull
fi
echo ""
echo "🟡 Starting the SigNoz containers. It may take a few minutes ..."
echo
# The docker-compose command does some nasty stuff for the `--detach` functionality. So we add a `|| true` so that the
# script doesn't exit because this command looks like it failed to do its thing.
if [ $setup_type == 'clickhouse' ]; then
if is_arm64; then
sudo docker-compose --env-file ./docker/clickhouse-setup/env/arm64.env -f ./docker/clickhouse-setup/docker-compose.yaml up --detach --remove-orphans || true
else
sudo docker-compose --env-file ./docker/clickhouse-setup/env/x86_64.env -f ./docker/clickhouse-setup/docker-compose.yaml up --detach --remove-orphans || true
fi
if is_arm64; then
$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.arm.yaml up --detach --remove-orphans || true
else
sudo docker-compose -f ./docker/druid-kafka-setup/docker-compose-tiny.yaml up --detach --remove-orphans || true
$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml up --detach --remove-orphans || true
fi
wait_for_containers_start 60
@@ -448,64 +490,37 @@ if [[ $status_code -ne 200 ]]; then
echo "+++++++++++ ERROR ++++++++++++++++++++++"
echo "🔴 The containers didn't seem to start correctly. Please run the following command to check containers that may have errored out:"
echo ""
if [ $setup_type == 'clickhouse' ]; then
echo -e "sudo docker-compose -f docker/clickhouse-setup/docker-compose.yaml ps -a"
else
echo -e "sudo docker-compose -f docker/druid-kafka-setup/docker-compose-tiny.yaml ps -a"
fi
echo -e "$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml ps -a"
echo "Please read our troubleshooting guide https://signoz.io/docs/deployment/docker/#troubleshooting-of-common-issues"
echo "or reach us on SigNoz for support https://join.slack.com/t/signoz-community/shared_invite/zt-lrjknbbp-J_mI13rlw8pGF4EWBnorJA"
echo "or reach us on SigNoz for support https://signoz.io/slack"
echo "++++++++++++++++++++++++++++++++++++++++"
if [ $setup_type == 'clickhouse' ]; then
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Error - Checks", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "error": "Containers not started", "data": "some_checks", "setup_type": "'"$setup_type"'" } }'
else
SUPERVISORS="$(curl -so - http://localhost:8888/druid/indexer/v1/supervisor)"
DATASOURCES="$(curl -so - http://localhost:8888/druid/coordinator/v1/datasources)"
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Error - Checks", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "error": "Containers not started", "SUPERVISORS": '"$SUPERVISORS"', "DATASOURCES": '"$DATASOURCES"', "setup_type": "'"$setup_type"'" } }'
fi
URL="https://app.posthog.com/capture"
HEADER="Content-Type: application/json"
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
fi
send_event "installation_error_checks"
exit 1
else
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Installation Success", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'"}, "setup_type": "'"$setup_type"'" }'
URL="https://app.posthog.com/capture"
HEADER="Content-Type: application/json"
send_event "installation_success"
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
fi
echo "++++++++++++++++++ SUCCESS ++++++++++++++++++++++"
echo ""
echo "🟢 Your installation is complete!"
echo ""
echo -e "🟢 Your frontend is running on http://localhost:3000"
echo -e "🟢 Your frontend is running on http://localhost:3301"
echo ""
if [ $setup_type == 'clickhouse' ]; then
echo " To bring down SigNoz and clean volumes : sudo docker-compose --env-file ./docker/clickhouse-setup/env/arm64.env -f docker/clickhouse-setup/docker-compose.yaml down -v"
if is_arm64; then
echo " To bring down SigNoz and clean volumes : $sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.arm.yaml down -v"
else
echo " To bring down SigNoz and clean volumes : sudo docker-compose -f docker/druid-kafka-setup/docker-compose-tiny.yaml down -v"
echo " To bring down SigNoz and clean volumes : $sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml down -v"
fi
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++++"
echo ""
echo "👉 Need help Getting Started?"
echo -e "Join us on Slack https://join.slack.com/t/signoz-community/shared_invite/zt-lrjknbbp-J_mI13rlw8pGF4EWBnorJA"
echo -e "Join us on Slack https://signoz.io/slack"
echo ""
echo -e "\n📨 Please share your email to receive support & updates about SigNoz!"
read -rp 'Email: ' email
@@ -515,16 +530,7 @@ else
read -rp 'Email: ' email
done
DATA='{ "api_key": "H-htDCae7CR3RV57gUzmol6IAKtm5IMCvbcm_fwnL-w", "type": "capture", "event": "Identify Successful Installation", "distinct_id": "'"$SIGNOZ_INSTALLATION_ID"'", "properties": { "os": "'"$os"'", "email": "'"$email"'", "setup_type": "'"$setup_type"'" } }'
URL="https://app.posthog.com/capture"
HEADER="Content-Type: application/json"
if has_curl; then
curl -sfL -d "$DATA" --header "$HEADER" "$URL" > /dev/null 2>&1
elif has_wget; then
wget -q --post-data="$DATA" --header="$HEADER" "$URL" > /dev/null 2>&1
fi
send_event "identify_successful_installation"
fi
echo -e "\n🙏 Thank you!\n"


@@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: retention-config
data:
retention-spec.json: |
[{"period":"P3D","includeFuture":true,"tieredReplicants":{"_default_tier":1},"type":"loadByPeriod"},{"type":"dropForever"}]


@@ -1,29 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: set-retention
annotations:
"helm.sh/hook": post-install,post-upgrade
spec:
ttlSecondsAfterFinished: 100
template:
spec:
containers:
- name: set-retention
image: theithollow/hollowapp-blog:curl
volumeMounts:
- name: retention-config-volume
mountPath: /app/retention-spec.json
subPath: retention-spec.json
args:
- /bin/sh
- -c
- "curl -X POST -H 'Content-Type: application/json' -d @/app/retention-spec.json http://signoz-druid-router:8888/druid/coordinator/v1/rules/flattened_spans"
volumes:
- name: retention-config-volume
configMap:
name: retention-config
restartPolicy: Never
backoffLimit: 8


@@ -1,76 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: supervisor-config
data:
supervisor-spec.json: |
{
"type": "kafka",
"dataSchema": {
"dataSource": "flattened_spans",
"parser": {
"type": "string",
"parseSpec": {
"format": "json",
"timestampSpec": {
"column": "StartTimeUnixNano",
"format": "nano"
},
"dimensionsSpec": {
"dimensions": [
"TraceId",
"SpanId",
"ParentSpanId",
"Name",
"ServiceName",
"References",
"Tags",
"ExternalHttpMethod",
"ExternalHttpUrl",
"Component",
"DBSystem",
"DBName",
"DBOperation",
"PeerService",
{
"type": "string",
"name": "TagsKeys",
"multiValueHandling": "ARRAY"
},
{
"type": "string",
"name": "TagsValues",
"multiValueHandling": "ARRAY"
},
{ "name": "DurationNano", "type": "Long" },
{ "name": "Kind", "type": "int" },
{ "name": "StatusCode", "type": "int" }
]
}
}
},
"metricsSpec" : [
{ "type": "quantilesDoublesSketch", "name": "QuantileDuration", "fieldName": "DurationNano" }
],
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "DAY",
"queryGranularity": "NONE",
"rollup": false
}
},
"tuningConfig": {
"type": "kafka",
"reportParseExceptions": true
},
"ioConfig": {
"topic": "flattened_spans",
"replicas": 1,
"taskDuration": "PT20M",
"completionTimeout": "PT30M",
"consumerProperties": {
"bootstrap.servers": "signoz-kafka:9092"
}
}
}


@@ -1,27 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: create-supervisor
annotations:
"helm.sh/hook": post-install,post-upgrade
spec:
ttlSecondsAfterFinished: 100
template:
spec:
containers:
- name: create-supervisor
image: theithollow/hollowapp-blog:curl
volumeMounts:
- name: supervisor-config-volume
mountPath: /app/supervisor-spec.json
subPath: supervisor-spec.json
args:
- /bin/sh
- -c
- "curl -X POST -H 'Content-Type: application/json' -d @/app/supervisor-spec.json http://signoz-druid-router:8888/druid/indexer/v1/supervisor"
volumes:
- name: supervisor-config-volume
configMap:
name: supervisor-config
restartPolicy: Never
backoffLimit: 8


@@ -1,60 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-collector-conf
labels:
app: opentelemetry
component: otel-collector-conf
data:
otel-collector-config: |
receivers:
otlp:
protocols:
grpc:
http:
jaeger:
protocols:
grpc:
thrift_http:
processors:
batch:
send_batch_size: 1000
timeout: 10s
memory_limiter:
# Same as --mem-ballast-size-mib CLI argument
ballast_size_mib: 683
# 80% of maximum memory up to 2G
limit_mib: 1500
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
queued_retry:
num_workers: 4
queue_size: 100
retry_on_failure: true
extensions:
health_check: {}
zpages: {}
exporters:
kafka/traces:
brokers:
- signoz-kafka:9092
topic: 'otlp_spans'
protocol_version: 2.0.0
kafka/metrics:
brokers:
- signoz-kafka:9092
topic: 'otlp_metrics'
protocol_version: 2.0.0
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [jaeger, otlp]
processors: [memory_limiter, batch, queued_retry]
exporters: [kafka/traces]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [kafka/metrics]


@@ -1,72 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: otel-collector
labels:
app: opentelemetry
component: otel-collector
spec:
selector:
matchLabels:
app: opentelemetry
component: otel-collector
minReadySeconds: 5
progressDeadlineSeconds: 120
replicas: 1 #TODO - adjust this to your own requirements
template:
metadata:
labels:
app: opentelemetry
component: otel-collector
spec:
containers:
- command:
- "/otelcol"
- "--config=/conf/otel-collector-config.yaml"
# Memory Ballast size should be max 1/3 to 1/2 of memory.
- "--mem-ballast-size-mib=683"
image: otel/opentelemetry-collector:0.18.0
name: otel-collector
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 200m
memory: 400Mi
ports:
- containerPort: 55679 # Default endpoint for ZPages.
- containerPort: 55680 # Default endpoint for OpenTelemetry receiver.
- containerPort: 55681 # Default endpoint for OpenTelemetry HTTP/1.0 receiver.
- containerPort: 4317 # Default endpoint for OpenTelemetry GRPC receiver.
- containerPort: 14250 # Default endpoint for Jaeger GRPC receiver.
- containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
- containerPort: 9411 # Default endpoint for Zipkin receiver.
- containerPort: 8888 # Default endpoint for querying metrics.
volumeMounts:
- name: otel-collector-config-vol
mountPath: /conf
# - name: otel-collector-secrets
# mountPath: /secrets
livenessProbe:
httpGet:
path: /
port: 13133 # Health Check extension default port.
readinessProbe:
httpGet:
path: /
port: 13133 # Health Check extension default port.
volumes:
- configMap:
name: otel-collector-conf
items:
- key: otel-collector-config
path: otel-collector-config.yaml
name: otel-collector-config-vol
# - secret:
# name: otel-collector-secrets
# items:
# - key: cert.pem
# path: cert.pem
# - key: key.pem
# path: key.pem


@@ -1,31 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: otel-collector
labels:
app: opentelemetry
component: otel-collector
spec:
ports:
- name: otlp # Default endpoint for OpenTelemetry receiver.
port: 55680
protocol: TCP
targetPort: 55680
- name: otlp-http-legacy # Default endpoint for OpenTelemetry receiver.
port: 55681
protocol: TCP
targetPort: 55681
- name: otlp-grpc # Default endpoint for OpenTelemetry receiver.
port: 4317
protocol: TCP
targetPort: 4317
- name: jaeger-grpc # Default endpoint for Jaeger gRPC receiver.
port: 14250
- name: jaeger-thrift-http # Default endpoint for Jaeger HTTP receiver.
port: 14268
- name: zipkin # Default endpoint for Zipkin receiver.
port: 9411
- name: metrics # Default endpoint for querying metrics.
port: 8888
selector:
component: otel-collector


@@ -1,21 +0,0 @@
dependencies:
- name: zookeeper
repository: https://charts.bitnami.com/bitnami
version: 6.0.0
- name: kafka
repository: https://charts.bitnami.com/bitnami
version: 12.0.0
- name: druid
repository: https://charts.helm.sh/incubator
version: 0.2.18
- name: flattener-processor
repository: file://./signoz-charts/flattener-processor
version: 0.3.6
- name: query-service
repository: file://./signoz-charts/query-service
version: 0.3.6
- name: frontend
repository: file://./signoz-charts/frontend
version: 0.3.6
digest: sha256:b160e903c630a90644683c512eb8ba018e18d2c08051e255edd3749cb9cc7228
generated: "2021-08-23T12:06:37.231066+05:30"


@@ -1,43 +0,0 @@
apiVersion: v2
name: signoz-platform
description: SigNoz Observability Platform Helm Chart
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.3.2
dependencies:
- name: zookeeper
repository: "https://charts.bitnami.com/bitnami"
version: 6.0.0
- name: kafka
repository: "https://charts.bitnami.com/bitnami"
version: 12.0.0
- name: druid
repository: "https://charts.helm.sh/incubator"
version: 0.2.18
- name: flattener-processor
repository: "file://./signoz-charts/flattener-processor"
version: 0.3.6
- name: query-service
repository: "file://./signoz-charts/query-service"
version: 0.3.6
- name: frontend
repository: "file://./signoz-charts/frontend"
version: 0.3.6


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,21 +0,0 @@
apiVersion: v2
name: flattener-processor
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.3.6
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 0.3.6


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "flattener-processor.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "flattener-processor.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "flattener-processor.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "flattener-processor.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,63 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "flattener-processor.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "flattener-processor.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "flattener-processor.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "flattener-processor.labels" -}}
helm.sh/chart: {{ include "flattener-processor.chart" . }}
{{ include "flattener-processor.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "flattener-processor.selectorLabels" -}}
app.kubernetes.io/name: {{ include "flattener-processor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "flattener-processor.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "flattener-processor.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}


@@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "flattener-processor.fullname" . }}
labels:
{{- include "flattener-processor.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "flattener-processor.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "flattener-processor.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "flattener-processor.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- "/root/flattener"
ports:
- name: http
containerPort: 8080
protocol: TCP
env:
- name: KAFKA_BROKER
value: {{ .Values.configVars.KAFKA_BROKER }}
- name: KAFKA_INPUT_TOPIC
value: {{ .Values.configVars.KAFKA_INPUT_TOPIC }}
- name: KAFKA_OUTPUT_TOPIC
value: {{ .Values.configVars.KAFKA_OUTPUT_TOPIC }}
# livenessProbe:
# httpGet:
# path: /
# port: http
# readinessProbe:
# httpGet:
# path: /
# port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "flattener-processor.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "flattener-processor.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "flattener-processor.fullname" . }}
  labels:
    {{- include "flattener-processor.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "flattener-processor.selectorLabels" . | nindent 4 }}


@@ -1,12 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "flattener-processor.serviceAccountName" . }}
  labels:
    {{- include "flattener-processor.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end -}}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "flattener-processor.fullname" . }}-test-connection"
  labels:
    {{- include "flattener-processor.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "flattener-processor.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never


@@ -1,74 +0,0 @@
# Default values for flattener-processor.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: signoz/flattener-processor
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

configVars:
  KAFKA_BROKER: signoz-kafka:9092
  KAFKA_INPUT_TOPIC: otlp_spans
  KAFKA_OUTPUT_TOPIC: flattened_spans

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
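At install time, Helm overlays user-supplied values (from `-f` files or `--set`) on top of these defaults, merging nested maps key by key. A simplified sketch of that coalescing, using the chart's own `configVars` keys (the merge here is a naive approximation, not Helm's actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay user values on chart defaults
    (simplified version of Helm's value coalescing)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Chart defaults, abridged from the values.yaml above.
defaults = {
    "replicaCount": 1,
    "configVars": {
        "KAFKA_BROKER": "signoz-kafka:9092",
        "KAFKA_INPUT_TOPIC": "otlp_spans",
    },
}
# A hypothetical override file pointing at an external broker.
overrides = {"configVars": {"KAFKA_BROKER": "kafka.example.com:9092"}}

values = deep_merge(defaults, overrides)
```

Only the overridden key changes; sibling defaults such as `KAFKA_INPUT_TOPIC` and top-level keys like `replicaCount` survive the merge.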


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,21 +0,0 @@
apiVersion: v2
name: frontend
description: A Helm chart for SigNoz Frontend Service
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.3.6
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 0.3.6


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "frontend.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "frontend.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "frontend.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "frontend.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,63 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "frontend.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "frontend.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "frontend.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "frontend.labels" -}}
helm.sh/chart: {{ include "frontend.chart" . }}
{{ include "frontend.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "frontend.selectorLabels" -}}
app.kubernetes.io/name: {{ include "frontend.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "frontend.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "frontend.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
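The `frontend.fullname` helper above picks a resource name in three steps: an explicit `fullnameOverride` wins, then the bare release name if it already contains the chart name, else `<release>-<chart>`, truncated to 63 characters for DNS-label limits. A Python sketch of that logic (function name and the trailing-dash trim are simplifications of the template's `trunc 63 | trimSuffix "-"`):

```python
def fullname(release: str, chart: str,
             fullname_override: str = "", name_override: str = "") -> str:
    """Mirror the frontend.fullname template helper:
    override > release-containing-chart > "<release>-<chart>", max 63 chars."""
    name = name_override or chart
    if fullname_override:
        result = fullname_override
    elif name in release:
        result = release
    else:
        result = f"{release}-{name}"
    # Approximates `trunc 63 | trimSuffix "-"` from the template.
    return result[:63].rstrip("-")
```

For example, a release named `signoz` yields `signoz-frontend`, while a release already named `frontend-prod` is used as-is, avoiding names like `frontend-prod-frontend`.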


@@ -1,37 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.config.name }}
  labels:
    release: {{ .Release.Name }}
data:
  default.conf: |-
    server {
        listen {{ .Values.service.port }};
        server_name _;

        gzip on;
        gzip_static on;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_proxied any;
        gzip_vary on;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        location /api {
            proxy_pass http://{{ .Values.config.queryServiceUrl }}/api;
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
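The `try_files $uri $uri/ /index.html` directive in that nginx config is what makes the single-page frontend's client-side routes work: any path that is not a real file on disk falls back to `index.html`, which then routes in the browser. A small emulation of that resolution order (the function and sample file set are illustrative, not part of the chart):

```python
def resolve(uri: str, files: set) -> str:
    """Emulate nginx `try_files $uri $uri/ /index.html`:
    serve the exact file, then the directory index, else the SPA entry point."""
    if uri in files:
        return uri
    candidate = uri.rstrip("/") + "/index.html"
    if candidate in files:
        return candidate
    return "/index.html"

# Hypothetical contents of /usr/share/nginx/html.
static = {"/static/app.js", "/index.html"}
```

So `/static/app.js` is served directly, while an application route like `/dashboard/42` (no such file) falls through to `/index.html`.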


@@ -1,64 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "frontend.fullname" . }}
  labels:
    {{- include "frontend.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "frontend.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "frontend.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      volumes:
        - name: nginx-config
          configMap:
            name: {{ .Values.config.name }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          env:
            - name: REACT_APP_QUERY_SERVICE_URL
              value: {{ .Values.configVars.REACT_APP_QUERY_SERVICE_URL }}
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}


@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "frontend.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "frontend.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "frontend.fullname" . }}
  labels:
    {{- include "frontend.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "frontend.selectorLabels" . | nindent 4 }}


@@ -1,12 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "frontend.serviceAccountName" . }}
  labels:
    {{- include "frontend.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end -}}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "frontend.fullname" . }}-test-connection"
  labels:
    {{- include "frontend.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "frontend.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never


@@ -1,75 +0,0 @@
# Default values for frontend.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: signoz/frontend
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

configVars: {}

config:
  name: signoz-nginx-config
  queryServiceUrl: signoz-query-service:8080

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,21 +0,0 @@
apiVersion: v2
name: query-service
description: A Helm chart for running SigNoz Query Service in Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.3.6
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 0.3.6


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "query-service.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "query-service.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "query-service.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "query-service.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,63 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "query-service.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "query-service.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "query-service.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "query-service.labels" -}}
helm.sh/chart: {{ include "query-service.chart" . }}
{{ include "query-service.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "query-service.selectorLabels" -}}
app.kubernetes.io/name: {{ include "query-service.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "query-service.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "query-service.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}


@@ -1,63 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "query-service.fullname" . }}
  labels:
    {{- include "query-service.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "query-service.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "query-service.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "query-service.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: DruidClientUrl
              value: {{ .Values.configVars.DruidClientUrl }}
            - name: DruidDatasource
              value: {{ .Values.configVars.DruidDatasource }}
            - name: STORAGE
              value: {{ .Values.configVars.STORAGE }}
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}


@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "query-service.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "query-service.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "query-service.fullname" . }}
  labels:
    {{- include "query-service.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "query-service.selectorLabels" . | nindent 4 }}


@@ -1,12 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "query-service.serviceAccountName" . }}
  labels:
    {{- include "query-service.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end -}}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "query-service.fullname" . }}-test-connection"
  labels:
    {{- include "query-service.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "query-service.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never

Some files were not shown because too many files have changed in this diff.