Compare commits

...

47 Commits

Author SHA1 Message Date
Srikanth Chekuri
ae110c60e6 Merge branch 'main' into issue_6829 2025-01-29 13:20:20 +05:30
Shaheer Kochai
42a5d71d81 feat: add documentation link for logs explorer quick filters (#6854)
* feat: add documentation link for logs explorer quick filters

* refactor: simplify QuickFilters component rendering logic

* fix: properly implement the log fast filter empty state UI

* fix: don't display empty state while loading quick filters

* refactor: extract quick filter empty state into a separate component

* refactor: render quick filters empty state based on  source param

* refactor: update QuickFilters source to use QuickFiltersSource.INFRA_MONITORING for consistency

* fix: address review comments

* refactor: fix the failing test by moving QuickFilters types to types.tsx

* refactor: properly import use QuickFiltersSource

---------

Co-authored-by: Vikrant Gupta <vikrant.thomso@gmail.com>
2025-01-29 10:39:42 +04:30
nityanandagohain
00401690fd fix: add support for asc 2025-01-28 23:14:41 +05:30
Vikrant Gupta
08e1fd3ca5 feat(trace-details): frontend changes for trace details (#6905)
* feat(trace-details): frontend changes for trace details

* feat(trace-detail): address review comments from ellipsis

* feat(trace-detail): add the new drawer designs

* feat(trace-detail): handle the selected span hover

* feat(trace-detail): address theme colors and span selection

* feat(trace-detail): fix some more css

* feat(trace-detail): fix some more css

* feat(trace-detail): add hovered span and handle no-data components for the new drawer

* feat(trace-detail): handle light mode designs

* feat(trace-detail): remove the hover functionality in favor of performance

* feat(trace-detail): span lines connectors

* feat(trace-detail): span lines connectors

* feat(trace-detail): handle the line matching for flamegraph and waterfall

* feat(trace-waterfall): change the timeline color to make it less poky

* feat(trace-waterfall): added where clause support in trace details page

* feat(trace-waterfall): added where clause support in trace details page

* feat(trace-detail): handle light mode designs

* feat(trace-detail): handle light mode designs

* feat(trace-detail): fix build issues

* feat(trace-detail): handle loading error state for filters and flamegraph hovered state

* feat(trace-detail): fix the hardcoded traceID

* feat(trace-detail): remove unnecessary useEffects

* feat(trace-detail): handled the flamegraph update with ID

* feat(trace-detail): added timestamp bucketing and latency sampling

* feat(trace-detail): extract the buckets and span limit in constants

* feat(trace-detail): minor VQA comments

* feat(trace-detail): remove unnecessary useEffects

* feat(trace-detail): add go to related logs

* feat(trace-detail): address review comments

* feat(trace-detail): address review comments

* feat(trace-detail): address review comments

* feat(trace-detail): address review comments
2025-01-28 22:04:24 +05:30
Nityananda Gohain
42c614371d Merge branch 'main' into issue_6829 2025-01-28 14:19:52 +05:30
Shaheer Kochai
8cb8beb63f fix: adjust the buffer time for start and end time in trace to logs (#6953) 2025-01-28 12:02:28 +05:30
SagarRajput-7
35c61045c4 feat: added right panel graphs in overview page (#6944)
* feat: added right panel graphs in overview page

* feat: added right panel trace navigation

* feat: implement column-wise and global table-wise search

* feat: implemented filter on table with api filtering

* feat: added allow clear to table filters

* feat: fixed empty table issue

* feat: fixed celery state - count display
2025-01-28 10:04:06 +05:30
Amlan Kumar Nandy
7c85befc17 feat: implement page size configuration in infra monitoring (#6950) 2025-01-28 09:50:34 +05:30
Amlan Kumar Nandy
813ca8bc23 feat: statefulsets, daemonsets, jobs and volumes implementation in k8s infra monitoring (#6629) 2025-01-27 22:22:48 +05:30
Shaheer Kochai
98cdbcd711 feat: show/hide timestamp and body fields in logs explorer (raw, default, column views) (#6903)
* feat: show/hide timestamp and body fields in logs explorer (raw, default, column views)

* fix: add width to log indicator column to ensure that a single column doesn't take half the space

* fix: handle edge cases and fix issues for show/hide body and timestamp in logs explorer
2025-01-27 21:54:59 +05:30
Prashant Shahi
61a6c21edb feat: docker/swarm deployment restructuring and improvements (#6904)
This PR consists of improvements and fixes related to SigNoz in Docker Standalone and Docker Swarm.

- Docker/Swarm deploy directory restructuring and improvements
- Prometheus metrics scraping using labels in Docker Standalone and Docker Swarm
- Swarm: single schema-migrator container for both sync/async migrations
- Remove Kubernetes HOTRod files (will be migrated to charts)
- Fetch histogram-quantile from release instead of local binary mount

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-01-27 12:48:42 +00:00
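The Prometheus scraping-via-labels approach mentioned above shows up in the new compose files later in this compare; a minimal sketch of the label convention, with the service and values taken from those files and trimmed for brevity:

```yaml
services:
  clickhouse:
    image: clickhouse/clickhouse-server:24.1.2-alpine
    deploy:
      labels:
        signoz.io/scrape: "true"    # opt this service into metrics scraping
        signoz.io/port: "9363"      # port where the service exposes metrics
        signoz.io/path: "/metrics"  # HTTP path of the metrics endpoint
```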
Shaheer Kochai
8d6731c7ba feat: Centralize and Standardize Date/Time Format Handling (#6925)
* feat: change the format of uplot x axis to 24 hours

* feat: centralize date/time formats

* refactor: use the centralized 24 hour date/time formatting across components

* refactor: centralize uPlot x-axis values formatting

* feat: make the x axis of alert history in 24 hour format
2025-01-27 11:58:54 +00:00
Ekansh Gupta
6dc7a76853 feat: changed the traces tab query to incorporate span scopes (#6922) 2025-01-27 08:37:47 +00:00
Shaheer Kochai
cc3d78cd71 feat: add support for span scope (#6910)
* feat: add support for span scope

* refactor: remove inline style
2025-01-27 08:25:10 +00:00
SagarRajput-7
bd0c4beeee feat: celery overview page (#6918)
* feat: added celery task feature - with task graphs and details

* feat: added celery bar graph toggle states UI

* feat: added histogram charts and right panel

* feat: added task latency graph with different states

* feat: added right panel trace navigation

* feat: added dynamic stepinterval based on timerange

* feat: changed histogram occurrences to bar

* feat: onclick right panels for celery state bar graphs

* feat: pagesetup and tabs with kafka setup

* feat: custom series for bar for color generation

* feat: fixed test cases

* feat: added new celery overview page

* feat: added table feat and column details

* feat: improved table style and column configs

* feat: added service name filter and common filter logic

* feat: code fix

* feat: code fix
2025-01-27 10:56:01 +05:30
SagarRajput-7
e83e691ef5 feat: added celery task feature - with task graphs and details (#6840)
* feat: added celery task feature - with task graphs and details

* feat: added celery bar graph toggle states UI

* feat: added histogram charts and right panel

* feat: added task latency graph with different states

* feat: added right panel trace navigation

* feat: added navigateToTrace logic

* feat: added value graph and global filter logic

* feat: added dynamic stepinterval based on timerange

* feat: changed histogram occurrences to bar

* feat: onclick right panels for celery state bar graphs

* feat: pagesetup and tabs with kafka setup

* feat: custom series for bar for color generation

* feat: fixed test cases

* feat: update styles
2025-01-27 08:53:19 +05:30
Prashant Shahi
c30c882aae chore: integrate goreleaser for user_scripts and restructuring (#6911)
* chore: integrate goreleaser for user_scripts and restructuring
* ci(goreleaser): add goreleaser workflow
* chore(goreleaser): update Makefile and config

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-01-25 17:10:21 +05:30
Vibhu Pandey
001122db2c feat(instrumentation): adopt slog (#6907)
### Summary

feat(instrumentation): adopt slog
2025-01-24 09:23:02 +00:00
Nityananda Gohain
b544a54c40 fix: logging middleware add excluded routes (#6917)
* fix: logging middleware add excluded routes

* fix: consistent name and update example

* fix: consistent name
2025-01-24 14:43:28 +05:30
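The excluded routes this change adds are reflected in the new `conf/example.yaml` further down in this compare; a minimal sketch of the relevant block (values as they appear there, comment added here for context):

```yaml
apiserver:
  logging:
    excluded_routes:
      - /api/v1/health   # skip access logging for health-check traffic
```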
SagarRajput-7
af6b54aeb4 fix: added safety check on uplot-xaxis logic to solve sentry issue (#6869) 2025-01-24 03:48:47 +05:30
Vikrant Gupta
1e61e6c2f6 feat(trace-details): query service changes for trace details (#6906)
* feat(trace-details): query service changes for trace details

* feat(trace-detail): refactor query service trace apis

* feat(trace-detail): minor bug fixes

* feat(trace-detail): address review comments

* feat(trace-detail): address review comments
2025-01-24 00:16:38 +05:30
Shivanshu Raj Shrivastava
390b04c015 chore: add global to inner join (#6912)
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-01-23 14:05:22 +00:00
Srikanth Chekuri
df39c53f84 Merge branch 'main' into issue_6829 2025-01-23 18:13:24 +05:30
Nityananda Gohain
c56345e1db fix: move analytics middleware (#6901)
* fix: move analytics middleware

* fix: use logger from params

* fix: remove custom response writer

* fix: use correct names
2025-01-23 16:21:08 +05:30
Nityananda Gohain
7f25674640 fix: move log comment middleware (#6899)
* fix: move log comment middleware

* fix: move log comment to logger middleware

* fix: make logcomment a method of logging struct
2025-01-23 13:27:21 +05:30
Nityananda Gohain
df5ab64c83 fix: use common timeout middleware (#6866)
* fix: use common timeout middleware

* fix: use apiserver factory for config

* fix: add backward compatibility for old variables

* fix: remove apiserver provider and use config directly

* fix: remove apiserver interface

* fix: address comments

* fix: address minor comments

* fix: address minor comments
2025-01-22 17:48:47 +00:00
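The timeout settings this middleware reads are shown in the new `conf/example.yaml` later in this compare; a condensed sketch with values from that file (the comments are my reading of the keys):

```yaml
apiserver:
  timeout:
    default: 60s              # applied when a request sets no explicit timeout
    max: 600s                 # upper bound on request timeouts
    excluded_routes:          # streaming endpoints that must not be cut off
      - /api/v1/logs/tail
      - /api/v3/logs/livetail
```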
primus-bot[bot]
726f2b0fa2 chore(release): bump to v0.69.0 (#6898)
#### Summary
 - Release SigNoz v0.69.0
 - Bump SigNoz OTel Collector to v0.111.24

 Created by [Primus-Bot](https://github.com/apps/primus-bot)
2025-01-22 15:12:06 +05:30
Nityananda Gohain
4a0b0aafbd fix: use common logging middleware (#6865)
* feat: use common logging middleware

* fix: update comment

* fix: remove privatePort:true

* fix: remove staticfields from logging

* fix: remove staticfields from logging
2025-01-22 07:42:21 +00:00
Vibhu Pandey
837f434fe9 feat(sqlstore): open a connection to sql once (#6867) 2025-01-22 07:19:38 +00:00
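The SQLStore settings behind this change appear in the new `conf/example.yaml` later in this compare; a minimal sketch (values and comments as they appear there):

```yaml
sqlstore:
  provider: sqlite                    # the SQLStore provider to use
  max_open_conns: 100                 # maximum number of open connections to the database
  sqlite:
    path: /var/lib/signoz/signoz.db   # path to the SQLite database file
```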
Srikanth Chekuri
76bec046cd Merge branch 'main' into issue_6829 2025-01-22 10:01:03 +05:30
Srikanth Chekuri
0baf0e9453 chore: address some gaps in infra (#6871) 2025-01-21 23:25:15 +05:30
Amlan Kumar Nandy
403043e076 feat: implement deployments, clusters and namespaces in k8s infra monitoring (#6786) 2025-01-21 21:57:32 +05:30
Shaheer Kochai
7730f76128 Revert "feat: show/hide timestamp and body fields in logs explorer (raw, defa…" (#6868)
This reverts commit b465f74e4a.
2025-01-21 15:03:03 +00:00
Vishal Sharma
6e3ffd555d fix: update azure event hub receiver config (#6864) 2025-01-21 14:35:23 +05:30
Nityananda Gohain
c565c2b865 fix: remove deprecated message from ingestion dashboard (#6862) 2025-01-21 07:17:21 +00:00
Srikanth Chekuri
4ec1e66c7e chore: clickhouse sql support for v3 (#6778) 2025-01-21 04:09:40 +00:00
Srikanth Chekuri
89541862cc chore: correct order within page result (#6847) 2025-01-20 13:38:49 +00:00
Prashant Shahi
610f4d43e7 chore(install-script): install.sh script improvements (#6857)
### Summary

- support for docker compose plugin
- check for occupied port when installing SigNoz for the first time
- remove unused code

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-01-20 13:26:40 +00:00
Ekansh Gupta
044a124cc1 feat: add search span scope in the list view tab to add the scope at per Query level (#6810)
* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level

* feat: add search span scope in the list view tab to add the scope at per query level
2025-01-20 18:04:19 +05:30
Vibhu Pandey
0cf9003e3a feat(.): initialize all factories (#6844)
### Summary

feat(.): initialize all factories

#### Related Issues / PR's

Removed all redundant commits of https://github.com/SigNoz/signoz/pull/6843
Closes https://github.com/SigNoz/signoz/pull/6782
2025-01-20 17:45:33 +05:30
Nityananda Gohain
644135a933 fix: remove analytics schema migrations (#6856) 2025-01-20 17:09:55 +05:30
Shaheer Kochai
b465f74e4a feat: show/hide timestamp and body fields in logs explorer (raw, default, column views) (#6831)
* feat: show/hide timestamp and body fields in logs explorer (raw, default, column views)

* fix: add width to log indicator column to ensure that a single column doesn't take half the space

---------

Co-authored-by: Nityananda Gohain <nityanandagohain@gmail.com>
2025-01-20 05:43:48 +00:00
Shaheer Kochai
e00e365964 chore: new alert rule documentation redirection tests (#6822)
* chore: new alert rule documentation redirection tests

* fix: fix the failing test

* fix: add test cases for anomaly based alert and fix the failing tests
2025-01-20 05:18:41 +00:00
Srikanth Chekuri
6a2a670330 Merge branch 'main' into issue_6829 2025-01-19 00:42:43 +05:30
Amlan Kumar Nandy
5c45e1f7b3 chore: infra monitoring bug fixes (#6795) 2025-01-18 10:55:50 +00:00
Nityananda Gohain
16e61b45ac fix: support in operator for json search (#6689)
* fix: support in in json search

* fix: remove else condition and update test cases

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-01-18 01:44:25 +05:30
nityanandagohain
d0a130f7c9 fix: logs window based pagination to pageSize offset instead of using id filter 2025-01-16 15:03:05 +05:30
540 changed files with 40658 additions and 12018 deletions


@@ -3,4 +3,7 @@
.vscode
README.md
deploy
sample-apps
sample-apps
# frontend
node_modules

.github/workflows/goreleaser.yaml (new file, 35 lines)

@@ -0,0 +1,35 @@
name: goreleaser
on:
push:
tags:
- v*
- histogram-quantile/v*
permissions:
contents: write
jobs:
goreleaser:
runs-on: ubuntu-latest
strategy:
matrix:
workdirs:
- scripts/clickhouse/histogramquantile
steps:
- name: checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: set-up-go
uses: actions/setup-go@v5
- name: run-goreleaser
uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser-pro
version: '~> v2'
args: release --clean
workdir: ${{ matrix.workdirs }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GORELEASER_KEY: ${{ secrets.GORELEASER_KEY }}


@@ -7,6 +7,7 @@ on:
jobs:
detect:
if: ${{ !startsWith(github.event.release.tag_name, 'histogram-quantile/') }}
runs-on: ubuntu-latest
outputs:
release_type: ${{ steps.find.outputs.release_type }}
@@ -22,6 +23,7 @@ jobs:
fi
echo "release_type=${release_type}" >> "$GITHUB_OUTPUT"
charts:
if: ${{ !startsWith(github.event.release.tag_name, 'histogram-quantile/') }}
uses: signoz/primus.workflows/.github/workflows/github-trigger.yaml@main
secrets: inherit
needs: [detect]

.gitignore (8 changes)

@@ -52,7 +52,7 @@ ee/query-service/tests/test-deploy/data/
/deploy/docker/clickhouse-setup/data/
/deploy/docker-swarm/clickhouse-setup/data/
bin/
.local/
*/query-service/queries.active
# e2e
@@ -70,3 +70,9 @@ vendor/
# git-town
.git-branches.toml
# goreleaser
dist/
# ignore user_scripts that is fetched by init-clickhouse
deploy/common/clickhouse/user_scripts/


@@ -3,16 +3,10 @@
tasks:
- name: Run Script to Comment ut required lines
init: |
cd ./.scripts
sh commentLinesForSetup.sh
- name: Run Docker Images
init: |
cd ./deploy
sudo docker-compose -f docker/clickhouse-setup/docker-compose.yaml up -d
# command:
cd ./deploy/docker
sudo docker compose up -d
- name: Run Frontend
init: |


@@ -141,9 +141,9 @@ Depending upon your area of expertise & interest, you can choose one or more to
# 3. Develop Frontend 🌚
**Need to Update: [https://github.com/SigNoz/signoz/tree/develop/frontend](https://github.com/SigNoz/signoz/tree/develop/frontend)**
**Need to Update: [https://github.com/SigNoz/signoz/tree/main/frontend](https://github.com/SigNoz/signoz/tree/main/frontend)**
Also, have a look at [Frontend README.md](https://github.com/SigNoz/signoz/blob/develop/frontend/README.md) sections for more info on how to setup SigNoz frontend locally (with and without Docker).
Also, have a look at [Frontend README.md](https://github.com/SigNoz/signoz/blob/main/frontend/README.md) sections for more info on how to setup SigNoz frontend locally (with and without Docker).
## 3.1 Contribute to Frontend with Docker installation of SigNoz
@@ -151,14 +151,14 @@ Also, have a look at [Frontend README.md](https://github.com/SigNoz/signoz/blob/
```
git clone https://github.com/SigNoz/signoz.git && cd signoz
```
- Comment out `frontend` service section at [`deploy/docker/clickhouse-setup/docker-compose.yaml#L68`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/docker-compose.yaml#L68)
- Comment out `frontend` service section at [`deploy/docker/docker-compose.yaml#L68`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L68)
![develop-frontend](https://user-images.githubusercontent.com/52788043/179009217-6692616b-17dc-4d27-b587-9d007098d739.jpeg)
- run `cd deploy` to move to deploy directory,
- Install signoz locally **without** the frontend,
- Add / Uncomment the below configuration to query-service section at [`deploy/docker/clickhouse-setup/docker-compose.yaml#L47`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/docker-compose.yaml#L47)
- Add / Uncomment the below configuration to query-service section at [`deploy/docker/docker-compose.yaml#L47`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L47)
```
ports:
- "8080:8080"
@@ -167,9 +167,10 @@ Also, have a look at [Frontend README.md](https://github.com/SigNoz/signoz/blob/
- Next run,
```
sudo docker-compose -f docker/clickhouse-setup/docker-compose.yaml up -d
cd deploy/docker
sudo docker compose up -d
```
- `cd ../frontend` and change baseURL in file [`frontend/src/constants/env.ts#L2`](https://github.com/SigNoz/signoz/blob/develop/frontend/src/constants/env.ts#L2) and for that, you need to create a `.env` file in the `frontend` directory with the following environment variable (`FRONTEND_API_ENDPOINT`) matching your configuration.
- `cd ../frontend` and change baseURL in file [`frontend/src/constants/env.ts#L2`](https://github.com/SigNoz/signoz/blob/main/frontend/src/constants/env.ts#L2) and for that, you need to create a `.env` file in the `frontend` directory with the following environment variable (`FRONTEND_API_ENDPOINT`) matching your configuration.
If you have backend api exposed via frontend nginx:
```
@@ -186,11 +187,6 @@ Also, have a look at [Frontend README.md](https://github.com/SigNoz/signoz/blob/
yarn dev
```
### Important Notes:
The Maintainers / Contributors who will change Line Numbers of `Frontend` & `Query-Section`, please update line numbers in [`/.scripts/commentLinesForSetup.sh`](https://github.com/SigNoz/signoz/blob/develop/.scripts/commentLinesForSetup.sh)
**[`^top^`](#contributing-guidelines)**
## 3.2 Contribute to Frontend without installing SigNoz backend
If you don't want to install the SigNoz backend just for doing frontend development, we can provide you with test environments that you can use as the backend.
@@ -216,7 +212,7 @@ Please ping us in the [`#contributing`](https://signoz-community.slack.com/archi
# 4. Contribute to Backend (Query-Service) 🌑
**Need to Update: [https://github.com/SigNoz/signoz/tree/develop/pkg/query-service](https://github.com/SigNoz/signoz/tree/develop/pkg/query-service)**
**Need to Update: [https://github.com/SigNoz/signoz/tree/main/pkg/query-service](https://github.com/SigNoz/signoz/tree/main/pkg/query-service)**
## 4.1 Prerequisites
@@ -242,13 +238,13 @@ Please ping us in the [`#contributing`](https://signoz-community.slack.com/archi
git clone https://github.com/SigNoz/signoz.git && cd signoz
```
- run `sudo make dev-setup` to configure local setup to run query-service,
- Comment out `frontend` service section at [`deploy/docker/clickhouse-setup/docker-compose.yaml#L68`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/docker-compose.yaml#L68)
- Comment out `frontend` service section at [`deploy/docker/docker-compose.yaml#L68`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L68)
<img width="982" alt="develop-frontend" src="https://user-images.githubusercontent.com/52788043/179043977-012be8b0-a2ed-40d1-b2e6-2ab72d7989c0.png">
- Comment out `query-service` section at [`deploy/docker/clickhouse-setup/docker-compose.yaml#L41`,](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/docker-compose.yaml#L41)
- Comment out `query-service` section at [`deploy/docker/docker-compose.yaml#L41`,](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L41)
<img width="1068" alt="Screenshot 2022-07-14 at 22 48 07" src="https://user-images.githubusercontent.com/52788043/179044151-a65ba571-db0b-4a16-b64b-ca3fadcf3af0.png">
- add below configuration to `clickhouse` section at [`deploy/docker/clickhouse-setup/docker-compose.yaml`,](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/docker-compose.yaml)
- add below configuration to `clickhouse` section at [`deploy/docker/docker-compose.yaml`,](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml)
```
ports:
- 9001:9000
@@ -258,9 +254,9 @@ Please ping us in the [`#contributing`](https://signoz-community.slack.com/archi
- run `cd pkg/query-service/` to move to `query-service` directory,
- Then, you need to create a `.env` file with the following environment variable
```
SIGNOZ_LOCAL_DB_PATH="./signoz.db"
SIGNOZ_SQLSTORE_SQLITE_PATH="./signoz.db"
```
to set your local environment with the right `RELATIONAL_DATASOURCE_PATH` as mentioned in [`./constants/constants.go#L38`,](https://github.com/SigNoz/signoz/blob/develop/pkg/query-service/constants/constants.go#L38)
to set your local environment with the right `RELATIONAL_DATASOURCE_PATH` as mentioned in [`./constants/constants.go#L38`,](https://github.com/SigNoz/signoz/blob/main/pkg/query-service/constants/constants.go#L38)
- Now, install SigNoz locally **without** the `frontend` and `query-service`,
- If you are using `x86_64` processors (All Intel/AMD processors) run `sudo make run-x86`
@@ -294,13 +290,10 @@ docker pull signoz/query-service:develop
```
### Important Note:
The Maintainers / Contributors who will change Line Numbers of `Frontend` & `Query-Section`, please update line numbers in [`/.scripts/commentLinesForSetup.sh`](https://github.com/SigNoz/signoz/blob/develop/.scripts/commentLinesForSetup.sh)
**Query Service should now be available at** [`http://localhost:8080`](http://localhost:8080)
If you want to see how the frontend plays with query service, you can run the frontend also in your local env with the baseURL changed to `http://localhost:8080` in file [`frontend/src/constants/env.ts`](https://github.com/SigNoz/signoz/blob/develop/frontend/src/constants/env.ts) as the `query-service` is now running at port `8080`.
If you want to see how the frontend plays with query service, you can run the frontend also in your local env with the baseURL changed to `http://localhost:8080` in file [`frontend/src/constants/env.ts`](https://github.com/SigNoz/signoz/blob/main/frontend/src/constants/env.ts) as the `query-service` is now running at port `8080`.
<!-- Instead of configuring a local setup, you can also use [Gitpod](https://www.gitpod.io/), a VSCode-based Web IDE.


@@ -15,8 +15,9 @@ DEV_BUILD ?= "" # set to any non-empty value to enable dev build
FRONTEND_DIRECTORY ?= frontend
QUERY_SERVICE_DIRECTORY ?= pkg/query-service
EE_QUERY_SERVICE_DIRECTORY ?= ee/query-service
STANDALONE_DIRECTORY ?= deploy/docker/clickhouse-setup
SWARM_DIRECTORY ?= deploy/docker-swarm/clickhouse-setup
STANDALONE_DIRECTORY ?= deploy/docker
SWARM_DIRECTORY ?= deploy/docker-swarm
CH_HISTOGRAM_QUANTILE_DIRECTORY ?= scripts/clickhouse/histogramquantile
GOOS ?= $(shell go env GOOS)
GOARCH ?= $(shell go env GOARCH)
@@ -142,16 +143,6 @@ dev-setup:
@echo "--> Local Setup completed"
@echo "------------------"
run-local:
@docker-compose -f \
$(STANDALONE_DIRECTORY)/docker-compose-core.yaml -f $(STANDALONE_DIRECTORY)/docker-compose-local.yaml \
up --build -d
down-local:
@docker-compose -f \
$(STANDALONE_DIRECTORY)/docker-compose-core.yaml -f $(STANDALONE_DIRECTORY)/docker-compose-local.yaml \
down -v
pull-signoz:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml pull
@@ -191,3 +182,15 @@ check-no-ee-references:
test:
go test ./pkg/...
goreleaser-snapshot:
@if [[ ${GORELEASER_WORKDIR} ]]; then \
cd ${GORELEASER_WORKDIR} && \
goreleaser release --clean --snapshot; \
cd -; \
else \
goreleaser release --clean --snapshot; \
fi
goreleaser-snapshot-histogram-quantile:
make GORELEASER_WORKDIR=$(CH_HISTOGRAM_QUANTILE_DIRECTORY) goreleaser-snapshot


@@ -13,9 +13,9 @@
<h3 align="center">
<a href="https://signoz.io/docs"><b>Dokumentation</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.md"><b>Readme auf Englisch </b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.zh-cn.md"><b>ReadMe auf Chinesisch</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.pt-br.md"><b>ReadMe auf Portugiesisch</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.md"><b>Readme auf Englisch </b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.zh-cn.md"><b>ReadMe auf Chinesisch</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.pt-br.md"><b>ReadMe auf Portugiesisch</b></a> &bull;
<a href="https://signoz.io/slack"><b>Slack Community</b></a> &bull;
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
</h3>


@@ -17,9 +17,9 @@
<h3 align="center">
<a href="https://signoz.io/docs"><b>Documentation</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.zh-cn.md"><b>ReadMe in Chinese</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.de-de.md"><b>ReadMe in German</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/develop/README.pt-br.md"><b>ReadMe in Portuguese</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.zh-cn.md"><b>ReadMe in Chinese</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.de-de.md"><b>ReadMe in German</b></a> &bull;
<a href="https://github.com/SigNoz/signoz/blob/main/README.pt-br.md"><b>ReadMe in Portuguese</b></a> &bull;
<a href="https://signoz.io/slack"><b>Slack Community</b></a> &bull;
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
</h3>


@@ -12,9 +12,9 @@
<h3 align="center">
<a href="https://signoz.io/docs"><b>文档</b></a>
<a href="https://github.com/SigNoz/signoz/blob/develop/README.zh-cn.md"><b>中文ReadMe</b></a>
<a href="https://github.com/SigNoz/signoz/blob/develop/README.de-de.md"><b>德文ReadMe</b></a>
<a href="https://github.com/SigNoz/signoz/blob/develop/README.pt-br.md"><b>葡萄牙语ReadMe</b></a>
<a href="https://github.com/SigNoz/signoz/blob/main/README.zh-cn.md"><b>中文ReadMe</b></a>
<a href="https://github.com/SigNoz/signoz/blob/main/README.de-de.md"><b>德文ReadMe</b></a>
<a href="https://github.com/SigNoz/signoz/blob/main/README.pt-br.md"><b>葡萄牙语ReadMe</b></a>
<a href="https://signoz.io/slack"><b>Slack 社区</b></a>
<a href="https://twitter.com/SigNozHq"><b>Twitter</b></a>
</h3>


@@ -1,32 +0,0 @@
##################### SigNoz Configuration Defaults #####################
#
# Do not modify this file
#
##################### Web #####################
web:
# The prefix to serve web on
prefix: /
# The directory containing the static build files.
directory: /etc/signoz/web
##################### Cache #####################
cache:
# specifies the caching provider to use.
provider: memory
# memory: Uses in-memory caching.
memory:
# Time-to-live for cache entries in memory. Specify the duration in ns
ttl: 60000000000
# The interval at which the cache will be cleaned up
cleanupInterval:
# redis: Uses Redis as the caching backend.
redis:
# The hostname or IP address of the Redis server.
host: localhost
# The port on which the Redis server is running. Default is usually 6379.
port: 6379
# The password for authenticating with the Redis server, if required.
password:
# The Redis database number to use
db: 0

conf/example.yaml (new file, 80 lines)

@@ -0,0 +1,80 @@
##################### SigNoz Configuration Example #####################
#
# Do not modify this file
#
##################### Instrumentation #####################
instrumentation:
logs:
# The log level to use.
level: info
traces:
# Whether to enable tracing.
enabled: false
processors:
batch:
exporter:
otlp:
endpoint: localhost:4317
metrics:
# Whether to enable metrics.
enabled: true
readers:
pull:
exporter:
prometheus:
host: "0.0.0.0"
port: 9090
##################### Web #####################
web:
# Whether to enable the web frontend
enabled: true
# The prefix to serve web on
prefix: /
# The directory containing the static build files.
directory: /etc/signoz/web
##################### Cache #####################
cache:
# specifies the caching provider to use.
provider: memory
# memory: Uses in-memory caching.
memory:
# Time-to-live for cache entries in memory. Specify the duration in ns
ttl: 60000000000
# The interval at which the cache will be cleaned up
cleanupInterval: 1m
# redis: Uses Redis as the caching backend.
redis:
# The hostname or IP address of the Redis server.
host: localhost
# The port on which the Redis server is running. Default is usually 6379.
port: 6379
# The password for authenticating with the Redis server, if required.
password:
# The Redis database number to use
db: 0
##################### SQLStore #####################
sqlstore:
# specifies the SQLStore provider to use.
provider: sqlite
# The maximum number of open connections to the database.
max_open_conns: 100
sqlite:
# The path to the SQLite database file.
path: /var/lib/signoz/signoz.db
##################### APIServer #####################
apiserver:
timeout:
default: 60s
max: 600s
excluded_routes:
- /api/v1/logs/tail
- /api/v3/logs/livetail
logging:
excluded_routes:
- /api/v1/health


@@ -18,65 +18,64 @@ Now run the following command to install:
### Using Docker Compose
If you don't have docker-compose set up, please follow [this guide](https://docs.docker.com/compose/install/)
If you don't have docker compose set up, please follow [this guide](https://docs.docker.com/compose/install/)
to set up docker compose before proceeding with the next steps.
For x86 chip (amd):
```sh
docker-compose -f docker/clickhouse-setup/docker-compose.yaml up -d
cd deploy/docker
docker compose up -d
```
Open http://localhost:3301 in your favourite browser. In couple of minutes, you should see
the data generated from hotrod in SigNoz UI.
Open http://localhost:3301 in your favourite browser.
## Kubernetes
### Using Helm
#### Bring up SigNoz cluster
To start collecting logs and metrics from your infrastructure, run the following command:
```sh
helm repo add signoz https://charts.signoz.io
kubectl create ns platform
helm -n platform install my-release signoz/signoz
cd generator/infra
docker compose up -d
```
To access the UI, you can `port-forward` the frontend service:
To start generating sample traces, run the following command:
```sh
kubectl -n platform port-forward svc/my-release-frontend 3301:3301
cd generator/hotrod
docker compose up -d
```
Open http://localhost:3301 in your favourite browser. Few minutes after you generate load
from the HotROD application, you should see the data generated from hotrod in SigNoz UI.
In a couple of minutes, you should see the data generated from hotrod in SigNoz UI.
#### Test HotROD application with SigNoz
For more details, please refer to the [SigNoz documentation](https://signoz.io/docs/install/docker/).
## Docker Swarm
To install SigNoz using Docker Swarm, run the following command:
```sh
kubectl create ns sample-application
kubectl -n sample-application apply -f https://raw.githubusercontent.com/SigNoz/signoz/main/sample-apps/hotrod/hotrod.yaml
cd deploy/docker-swarm
docker stack deploy -c docker-compose.yaml signoz
```
To generate load:
Open http://localhost:3301 in your favourite browser.
To start collecting logs and metrics from your infrastructure, run the following command:
```sh
kubectl -n sample-application run strzal --image=djbingham/curl \
--restart='OnFailure' -i --tty --rm --command -- curl -X POST -F \
'user_count=6' -F 'spawn_rate=2' http://locust-master:8089/swarm
cd generator/infra
docker stack deploy -c docker-compose.yaml infra
```
To stop load:
To start generating sample traces, run the following command:
```sh
kubectl -n sample-application run strzal --image=djbingham/curl \
--restart='OnFailure' -i --tty --rm --command -- curl \
http://locust-master:8089/stop
cd generator/hotrod
docker stack deploy -c docker-compose.yaml hotrod
```
In a couple of minutes, you should see the data generated from hotrod in SigNoz UI.
For more details, please refer to the [SigNoz documentation](https://signoz.io/docs/install/docker-swarm/).
## Uninstall/Troubleshoot?
Go to our official documentation site [signoz.io/docs](https://signoz.io/docs) for more.


@@ -10,14 +10,14 @@
<host>zookeeper-1</host>
<port>2181</port>
</node>
<!-- <node index="2">
<node index="2">
<host>zookeeper-2</host>
<port>2181</port>
</node>
<node index="3">
<host>zookeeper-3</host>
<port>2181</port>
</node> -->
</node>
</zookeeper>
<!-- Configuration of clusters that could be used in Distributed tables.
@@ -58,7 +58,7 @@
<!-- <priority>1</priority> -->
</replica>
</shard>
<!-- <shard>
<shard>
<replica>
<host>clickhouse-2</host>
<port>9000</port>
@@ -69,7 +69,7 @@
<host>clickhouse-3</host>
<port>9000</port>
</replica>
</shard> -->
</shard>
</cluster>
</remote_servers>
</clickhouse>
</clickhouse>


@@ -72,4 +72,4 @@
</shard> -->
</cluster>
</remote_servers>
</clickhouse>
</clickhouse>


@@ -370,7 +370,7 @@
<!-- Path to temporary data for processing hard queries. -->
<tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
<!-- Disable AuthType plaintext_password and no_password for ACL. -->
<!-- <allow_plaintext_password>0</allow_plaintext_password> -->
<!-- <allow_no_password>0</allow_no_password> -->`
@@ -652,12 +652,12 @@
See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/#creating-replicated-tables
-->
<macros>
<shard>01</shard>
<replica>example01-01-1</replica>
</macros>
<!-- Reloading interval for embedded dictionaries, in seconds. Default: 3600. -->
@@ -716,7 +716,7 @@
asynchronous_metrics - send data from table system.asynchronous_metrics
status_info - send data from different component from CH, ex: Dictionaries status
-->
<!--
<prometheus>
<endpoint>/metrics</endpoint>
<port>9363</port>
@@ -726,7 +726,6 @@
<asynchronous_metrics>true</asynchronous_metrics>
<status_info>true</status_info>
</prometheus>
-->
<!-- Query log. Used only for queries with setting log_queries = 1. -->
<query_log>


@@ -44,7 +44,7 @@ server {
location /api {
proxy_pass http://query-service:8080/api;
# connection will be closed if no data is read for 600s between successive read operations
proxy_read_timeout 600s;
proxy_read_timeout 600s;
}
location /ws {


@@ -12,10 +12,10 @@ alerting:
- alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
rule_files: []
# - "first_rules.yml"
# - "second_rules.yml"
- 'alerts.yml'
# - 'alerts.yml'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.


@@ -1,35 +0,0 @@
global:
resolve_timeout: 1m
slack_api_url: 'https://hooks.slack.com/services/xxx'
route:
receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
slack_configs:
- channel: '#alerts'
send_resolved: true
icon_url: https://avatars3.githubusercontent.com/u/3380462
title: |-
[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
{{- if gt (len .CommonLabels) (len .GroupLabels) -}}
{{" "}}(
{{- with .CommonLabels.Remove .GroupLabels.Names }}
{{- range $index, $label := .SortedPairs -}}
{{ if $index }}, {{ end }}
{{- $label.Name }}="{{ $label.Value -}}"
{{- end }}
{{- end -}}
)
{{- end }}
text: >-
{{ range .Alerts -}}
*Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}
*Description:* {{ .Annotations.description }}
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}


@@ -1,11 +0,0 @@
groups:
- name: ExampleCPULoadGroup
rules:
- alert: HighCpuLoad
expr: system_cpu_load_average_1m > 0.1
for: 0m
labels:
severity: warning
annotations:
summary: High CPU load
description: "CPU load is > 0.1\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"

File diff suppressed because it is too large.


@@ -1,251 +0,0 @@
version: "3.9"
x-clickhouse-defaults: &clickhouse-defaults
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
deploy:
restart_policy:
condition: on-failure
depends_on:
- zookeeper-1
# - zookeeper-2
# - zookeeper-3
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "0.0.0.0:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-db-depend: &db-depend
depends_on:
- clickhouse
- otel-collector-migrator
# - clickhouse-2
# - clickhouse-3
services:
zookeeper-1:
image: bitnami/zookeeper:3.7.1
hostname: zookeeper-1
user: root
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- ./data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
# - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-2:
# image: bitnami/zookeeper:3.7.0
# hostname: zookeeper-2
# user: root
# ports:
# - "2182:2181"
# - "2889:2888"
# - "3889:3888"
# volumes:
# - ./data/zookeeper-2:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=2
# - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-3:
# image: bitnami/zookeeper:3.7.0
# hostname: zookeeper-3
# user: root
# ports:
# - "2183:2181"
# - "2890:2888"
# - "3890:3888"
# volumes:
# - ./data/zookeeper-3:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=3
# - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
clickhouse:
!!merge <<: *clickhouse-defaults
hostname: clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
# - "9181:9181"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
- ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
- ./data/clickhouse/:/var/lib/clickhouse/
# clickhouse-2:
# <<: *clickhouse-defaults
# hostname: clickhouse-2
# ports:
# - "9001:9000"
# - "8124:8123"
# - "9182:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-2/:/var/lib/clickhouse/
# clickhouse-3:
# <<: *clickhouse-defaults
# hostname: clickhouse-3
# ports:
# - "9002:9000"
# - "8125:8123"
# - "9183:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-3/:/var/lib/clickhouse/
alertmanager:
image: signoz/alertmanager:0.23.7
volumes:
- ./data/alertmanager:/data
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
depends_on:
- query-service
deploy:
restart_policy:
condition: on-failure
query-service:
image: signoz/query-service:0.68.0
command: ["-config=/root/config/prometheus.yml", "--use-logs-new-schema=true", "--use-trace-new-schema=true"]
# ports:
# - "6060:6060" # pprof port
# - "8080:8080" # query-service port
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-swarm
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/health"]
interval: 30s
timeout: 5s
retries: 3
deploy:
restart_policy:
condition: on-failure
!!merge <<: *db-depend
frontend:
image: signoz/frontend:0.68.0
deploy:
restart_policy:
condition: on-failure
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
image: signoz/signoz-otel-collector:0.111.23
command: ["--config=/etc/otel-collector-config.yaml", "--manager-config=/etc/manager-config.yaml", "--feature-gates=-pkg.translator.prometheus.NormalizeName"]
user: root # required for reading docker container logs
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ./otel-collector-opamp-config.yaml:/etc/manager-config.yaml
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- /:/hostfs:ro
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}},dockerswarm.service.name={{.Service.Name}},dockerswarm.task.name={{.Task.Name}}
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8888:8888" # OtelCollector internal metrics
# - "8889:8889" # signoz spanmetrics exposed by the agent
# - "9411:9411" # Zipkin port
# - "13133:13133" # Health check extension
# - "14250:14250" # Jaeger gRPC
# - "14268:14268" # Jaeger thrift HTTP
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zPages extension
deploy:
mode: global
restart_policy:
condition: on-failure
depends_on:
- clickhouse
- otel-collector-migrator
- query-service
otel-collector-migrator:
image: signoz/signoz-schema-migrator:0.111.23
deploy:
restart_policy:
condition: on-failure
delay: 5s
command:
- "sync"
- "--dsn=tcp://clickhouse:9000"
- "--up="
depends_on:
- clickhouse
# - clickhouse-2
# - clickhouse-3
logspout:
image: "gliderlabs/logspout:v3.2.14"
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
command: syslog+tcp://otel-collector:2255
depends_on:
- otel-collector
deploy:
mode: global
restart_policy:
condition: on-failure
hotrod:
image: jaegertracing/example-hotrod:1.30
command: ["all"]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
logging:
options:
max-size: 50m
max-file: "3"
load-hotrod:
image: "signoz/locust:1.2.3"
hostname: load-hotrod
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../common/locust-scripts:/locust


@@ -1,31 +0,0 @@
CREATE TABLE IF NOT EXISTS signoz_index (
timestamp DateTime64(9) CODEC(Delta, ZSTD(1)),
traceID String CODEC(ZSTD(1)),
spanID String CODEC(ZSTD(1)),
parentSpanID String CODEC(ZSTD(1)),
serviceName LowCardinality(String) CODEC(ZSTD(1)),
name LowCardinality(String) CODEC(ZSTD(1)),
kind Int32 CODEC(ZSTD(1)),
durationNano UInt64 CODEC(ZSTD(1)),
tags Array(String) CODEC(ZSTD(1)),
tagsKeys Array(String) CODEC(ZSTD(1)),
tagsValues Array(String) CODEC(ZSTD(1)),
statusCode Int64 CODEC(ZSTD(1)),
references String CODEC(ZSTD(1)),
externalHttpMethod Nullable(String) CODEC(ZSTD(1)),
externalHttpUrl Nullable(String) CODEC(ZSTD(1)),
component Nullable(String) CODEC(ZSTD(1)),
dbSystem Nullable(String) CODEC(ZSTD(1)),
dbName Nullable(String) CODEC(ZSTD(1)),
dbOperation Nullable(String) CODEC(ZSTD(1)),
peerService Nullable(String) CODEC(ZSTD(1)),
INDEX idx_traceID traceID TYPE bloom_filter GRANULARITY 4,
INDEX idx_service serviceName TYPE bloom_filter GRANULARITY 4,
INDEX idx_name name TYPE bloom_filter GRANULARITY 4,
INDEX idx_kind kind TYPE minmax GRANULARITY 4,
INDEX idx_tagsKeys tagsKeys TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX idx_tagsValues tagsValues TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX idx_duration durationNano TYPE minmax GRANULARITY 1
) ENGINE MergeTree()
PARTITION BY toDate(timestamp)
ORDER BY (serviceName, -toUnixTimestamp(timestamp))


@@ -1,25 +0,0 @@
# my global config
global:
scrape_interval: 5s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
- alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
- 'alerts.yml'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs: []
remote_read:
- url: tcp://clickhouse:9000/signoz_metrics


@@ -1,51 +0,0 @@
server {
listen 3301;
server_name _;
gzip on;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_proxied any;
gzip_vary on;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
# to handle uri issue 414 from nginx
client_max_body_size 24M;
large_client_header_buffers 8 128k;
location / {
if ( $uri = '/index.html' ) {
add_header Cache-Control no-store always;
}
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location ~ ^/api/(v1|v3)/logs/(tail|livetail){
proxy_pass http://query-service:8080;
proxy_http_version 1.1;
# connection will be closed if no data is read for 600s between successive read operations
proxy_read_timeout 600s;
# dont buffer the data send it directly to client.
proxy_buffering off;
proxy_cache off;
}
location /api {
proxy_pass http://query-service:8080/api;
# connection will be closed if no data is read for 600s between successive read operations
proxy_read_timeout 600s;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}


@@ -0,0 +1,265 @@
version: "3"
x-common: &common
networks:
- signoz-net
deploy:
restart_policy:
condition: on-failure
logging:
options:
max-size: 50m
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
<<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
deploy:
labels:
signoz.io/scrape: "true"
signoz.io/port: "9363"
signoz.io/path: "/metrics"
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- 0.0.0.0:8123/ping
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-zookeeper-defaults: &zookeeper-defaults
<<: *common
image: bitnami/zookeeper:3.7.1
user: root
deploy:
labels:
signoz.io/scrape: "true"
signoz.io/port: "9141"
signoz.io/path: "/metrics"
healthcheck:
test:
- CMD-SHELL
- curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
interval: 30s
timeout: 5s
retries: 3
x-db-depend: &db-depend
<<: *common
depends_on:
- clickhouse
- clickhouse-2
- clickhouse-3
- schema-migrator
services:
init-clickhouse:
<<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
command:
- bash
- -c
- |
version="v0.0.1"
node_os=$$(uname -s | tr '[:upper:]' '[:lower:]')
node_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
echo "Fetching histogram-binary for $${node_os}/$${node_arch}"
cd /tmp
wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
tar -xvzf histogram-quantile.tar.gz
mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
volumes:
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
zookeeper-1:
<<: *zookeeper-defaults
# ports:
# - "2181:2181"
# - "2888:2888"
# - "3888:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
- ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
zookeeper-2:
<<: *zookeeper-defaults
# ports:
# - "2182:2181"
# - "2889:2888"
# - "3889:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-2:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=2
- ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
zookeeper-3:
<<: *zookeeper-defaults
# ports:
# - "2183:2181"
# - "2890:2888"
# - "3890:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-3:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=3
- ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
clickhouse:
<<: *clickhouse-defaults
# TODO: needed for schema-migrator to work, remove this redundancy once we have a better solution
hostname: clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
# - "9181:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
clickhouse-2:
<<: *clickhouse-defaults
hostname: clickhouse-2
# ports:
# - "9001:9000"
# - "8124:8123"
# - "9182:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse-2/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
clickhouse-3:
<<: *clickhouse-defaults
hostname: clickhouse-3
# ports:
# - "9002:9000"
# - "8125:8123"
# - "9183:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse-3/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
alertmanager:
<<: *common
image: signoz/alertmanager:0.23.7
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
volumes:
- ./clickhouse-setup/data/alertmanager:/data
depends_on:
- query-service
query-service:
<<: *db-depend
image: signoz/query-service:0.69.0
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "8080:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
- ../common/dashboards:/root/config/dashboards
- ./clickhouse-setup/data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-swarm
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- localhost:8080/api/v1/health
interval: 30s
timeout: 5s
retries: 3
frontend:
<<: *common
image: signoz/frontend:0.69.0
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
<<: *db-depend
image: signoz/signoz-otel-collector:0.111.24
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
deploy:
replicas: 3
depends_on:
- clickhouse
- schema-migrator
- query-service
schema-migrator:
<<: *common
image: signoz/signoz-schema-migrator:0.111.24
deploy:
restart_policy:
condition: on-failure
delay: 5s
entrypoint: sh
command:
- -c
- "/signoz-schema-migrator sync --dsn=tcp://clickhouse:9000 --up= && /signoz-schema-migrator async --dsn=tcp://clickhouse:9000 --up="
depends_on:
- clickhouse
networks:
signoz-net:
name: signoz-net


@@ -0,0 +1,201 @@
version: "3"
x-common: &common
networks:
- signoz-net
deploy:
restart_policy:
condition: on-failure
logging:
options:
max-size: 50m
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
<<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
deploy:
labels:
signoz.io/scrape: "true"
signoz.io/port: "9363"
signoz.io/path: "/metrics"
depends_on:
- init-clickhouse
- zookeeper-1
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- 0.0.0.0:8123/ping
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-zookeeper-defaults: &zookeeper-defaults
<<: *common
image: bitnami/zookeeper:3.7.1
user: root
deploy:
labels:
signoz.io/scrape: "true"
signoz.io/port: "9141"
signoz.io/path: "/metrics"
healthcheck:
test:
- CMD-SHELL
- curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
interval: 30s
timeout: 5s
retries: 3
x-db-depend: &db-depend
<<: *common
depends_on:
- clickhouse
- schema-migrator
services:
init-clickhouse:
<<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
command:
- bash
- -c
- |
version="v0.0.1"
node_os=$$(uname -s | tr '[:upper:]' '[:lower:]')
node_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
echo "Fetching histogram-binary for $${node_os}/$${node_arch}"
cd /tmp
wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
tar -xvzf histogram-quantile.tar.gz
mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
volumes:
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
zookeeper-1:
<<: *zookeeper-defaults
# ports:
# - "2181:2181"
# - "2888:2888"
# - "3888:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
clickhouse:
<<: *clickhouse-defaults
# TODO: needed for ClickHouse TCP connection
hostname: clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
# - "9181:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
alertmanager:
<<: *common
image: signoz/alertmanager:0.23.7
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
volumes:
- ./clickhouse-setup/data/alertmanager:/data
depends_on:
- query-service
query-service:
<<: *db-depend
image: signoz/query-service:0.69.0
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "8080:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
- ../common/dashboards:/root/config/dashboards
- ./clickhouse-setup/data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-swarm
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- localhost:8080/api/v1/health
interval: 30s
timeout: 5s
retries: 3
frontend:
<<: *common
image: signoz/frontend:0.69.0
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
<<: *db-depend
image: signoz/signoz-otel-collector:0.111.24
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
deploy:
replicas: 3
depends_on:
- clickhouse
- schema-migrator
- query-service
schema-migrator:
<<: *common
image: signoz/signoz-schema-migrator:0.111.24
deploy:
restart_policy:
condition: on-failure
delay: 5s
entrypoint: sh
command:
- -c
- "/signoz-schema-migrator sync --dsn=tcp://clickhouse:9000 --up= && /signoz-schema-migrator async --dsn=tcp://clickhouse:9000 --up="
depends_on:
- clickhouse
networks:
signoz-net:
name: signoz-net

View File

@@ -0,0 +1,38 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
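# host-gateway maps host.docker.internal to the Docker host, so the demo apps can reach the collector's OTLP ports published on the host.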
logging:
options:
max-size: 50m
max-file: "3"
deploy:
restart_policy:
condition: on-failure
services:
hotrod:
<<: *common
image: jaegertracing/example-hotrod:1.61.0
command: [ "all" ]
environment:
- OTEL_EXPORTER_OTLP_ENDPOINT=http://host.docker.internal:4318 #
load-hotrod:
<<: *common
image: "signoz/locust:1.2.3"
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../../../common/locust-scripts:/locust
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -0,0 +1,69 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
logging:
options:
max-size: 50m
max-file: "3"
deploy:
mode: global
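# Global mode runs one task per Swarm node (otel-agent and logspout inherit this), so host metrics and container logs are collected on every node.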
restart_policy:
condition: on-failure
services:
otel-agent:
<<: *common
image: otel/opentelemetry-collector-contrib:0.111.0
command:
- --config=/etc/otel-collector-config.yaml
volumes:
- ./otel-agent-config.yaml:/etc/otel-collector-config.yaml
- /:/hostfs:ro
environment:
- SIGNOZ_COLLECTOR_ENDPOINT=http://host.docker.internal:4317 # In case of external SigNoz or cloud, update the endpoint and access token
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
# - SIGNOZ_ACCESS_TOKEN="<your-access-token>"
# Before exposing the ports, make sure the ports are not used by other services
# ports:
# - "4317:4317"
# - "4318:4318"
otel-metrics:
<<: *common
image: otel/opentelemetry-collector-contrib:0.111.0
user: 0:0 # If you have security concerns, you can replace this with your `UID:GID` that has necessary permissions to docker.sock
command:
- --config=/etc/otel-collector-config.yaml
volumes:
- ./otel-metrics-config.yaml:/etc/otel-collector-config.yaml
- /var/run/docker.sock:/var/run/docker.sock
environment:
- SIGNOZ_COLLECTOR_ENDPOINT=http://host.docker.internal:4317 # In case of external SigNoz or cloud, update the endpoint and access token
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
# - SIGNOZ_ACCESS_TOKEN="<your-access-token>"
# Before exposing the ports, make sure the ports are not used by other services
# ports:
# - "4317:4317"
# - "4318:4318"
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
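# Pinned to a single replica on a manager node because dockerswarm_sd in otel-metrics-config.yaml needs the Swarm API exposed through the manager's docker.sock.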
logspout:
<<: *common
image: "gliderlabs/logspout:v3.2.14"
command: syslog+tcp://otel-agent:2255
user: root
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- otel-agent
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -0,0 +1,102 @@
receivers:
hostmetrics:
collection_interval: 30s
root_path: /hostfs
scrapers:
cpu: {}
load: {}
memory: {}
disk: {}
filesystem: {}
network: {}
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-agent
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-agent
tcplog/docker:
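# Receives container logs forwarded by logspout as syslog over TCP (port 2255) and parses the syslog-style header into timestamp and container attributes.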
listen_address: "0.0.0.0:2255"
operators:
- type: regex_parser
regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
timestamp:
parse_from: attributes.timestamp
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: move
from: attributes["body"]
to: body
- type: remove
field: attributes.timestamp
# The filter below drops logs from the listed containers; remove a name to start collecting logs from that container
- type: filter
id: signoz_logs_filter
expr: 'attributes.container_name matches "^(signoz_(logspout|alertmanager|query-service|otel-collector|clickhouse|zookeeper))|(infra_(logspout|otel-agent|otel-metrics)).*"'
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
resourcedetection:
# The env detector adds custom labels from the OTEL_RESOURCE_ATTRIBUTES environment variable.
detectors:
# - ec2
# - gcp
# - azure
- env
- system
timeout: 2s
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
otlp:
endpoint: ${env:SIGNOZ_COLLECTOR_ENDPOINT}
tls:
insecure: true
headers:
signoz-access-token: ${env:SIGNOZ_ACCESS_TOKEN}
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
pipelines:
traces:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/hostmetrics:
receivers: [hostmetrics]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/prometheus:
receivers: [prometheus]
processors: [resourcedetection, batch]
exporters: [otlp]
logs:
receivers: [otlp, tcplog/docker]
processors: [resourcedetection, batch]
exporters: [otlp]

View File

@@ -0,0 +1,103 @@
receivers:
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-metrics
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-metrics
# For Docker daemon metrics to be scraped, it must be configured to expose
# Prometheus metrics, as documented here: https://docs.docker.com/config/daemon/prometheus/
# - job_name: docker-daemon
# dockerswarm_sd_configs:
# - host: unix:///var/run/docker.sock
# role: nodes
# relabel_configs:
# - source_labels: [__meta_dockerswarm_node_address]
# target_label: __address__
# replacement: $1:9323
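# The dockerswarm job below keeps only running tasks of services labelled signoz.io/scrape=true,
# rewrites __address__ to "<task IP>:<signoz.io/port>" and takes the metrics path from signoz.io/path.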
- job_name: "dockerswarm"
dockerswarm_sd_configs:
- host: unix:///var/run/docker.sock
role: tasks
relabel_configs:
- action: keep
regex: running
source_labels:
- __meta_dockerswarm_task_desired_state
- action: keep
regex: true
source_labels:
- __meta_dockerswarm_service_label_signoz_io_scrape
- regex: ([^:]+)(?::\d+)?
replacement: $1
source_labels:
- __address__
target_label: swarm_container_ip
- separator: .
source_labels:
- __meta_dockerswarm_service_name
- __meta_dockerswarm_task_slot
- __meta_dockerswarm_task_id
target_label: swarm_container_name
- target_label: __address__
source_labels:
- swarm_container_ip
- __meta_dockerswarm_service_label_signoz_io_port
separator: ":"
- source_labels:
- __meta_dockerswarm_service_label_signoz_io_path
target_label: __metrics_path__
- source_labels:
- __meta_dockerswarm_service_label_com_docker_stack_namespace
target_label: namespace
- source_labels:
- __meta_dockerswarm_service_name
target_label: service_name
- source_labels:
- __meta_dockerswarm_task_id
target_label: service_instance_id
- source_labels:
- __meta_dockerswarm_node_hostname
target_label: host_name
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
resourcedetection:
detectors:
- env
- system
timeout: 2s
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
otlp:
endpoint: ${env:SIGNOZ_COLLECTOR_ENDPOINT}
tls:
insecure: true
headers:
signoz-access-token: ${env:SIGNOZ_ACCESS_TOKEN}
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
pipelines:
metrics:
receivers: [prometheus]
processors: [resourcedetection, batch]
exporters: [otlp]

View File

@@ -1,85 +1,29 @@
receivers:
tcplog/docker:
listen_address: "0.0.0.0:2255"
operators:
- type: regex_parser
regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
timestamp:
parse_from: attributes.timestamp
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: move
from: attributes["body"]
to: body
- type: remove
field: attributes.timestamp
# The filter below drops logs from the listed containers; remove a name to start collecting logs from that container
- type: filter
id: signoz_logs_filter
expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|clickhouse|zookeeper)"'
opencensus:
endpoint: 0.0.0.0:55678
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_http:
endpoint: 0.0.0.0:14268
# thrift_compact:
# endpoint: 0.0.0.0:6831
# thrift_binary:
# endpoint: 0.0.0.0:6832
hostmetrics:
collection_interval: 30s
root_path: /hostfs
scrapers:
cpu: {}
load: {}
memory: {}
disk: {}
filesystem: {}
network: {}
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
# otel-collector internal metrics
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-collector
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
# memory_limiter:
# # 80% of maximum memory up to 2G
# limit_mib: 1500
# # 25% of limit up to 2G
# spike_limit_mib: 512
# check_interval: 5s
#
# # 50% of the maximum memory
# limit_percentage: 50
# # 20% of max memory usage spike expected
# spike_limit_percentage: 20
# queued_retry:
# num_workers: 4
# queue_size: 100
# retry_on_failure: true
resourcedetection:
# The env detector adds custom labels from the OTEL_RESOURCE_ATTRIBUTES environment variable.
detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
detectors: [env, system]
timeout: 2s
signozspanmetrics/delta:
metrics_exporter: clickhousemetricswrite
@@ -106,15 +50,11 @@ processors:
- name: host.name
- name: host.type
- name: container.name
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
pprof:
endpoint: 0.0.0.0:1777
exporters:
clickhousetraces:
datasource: tcp://clickhouse:9000/signoz_traces
@@ -132,8 +72,7 @@ exporters:
dsn: tcp://clickhouse:9000/signoz_logs
timeout: 10s
use_new_schema: true
# logging: {}
# debug: {}
service:
telemetry:
logs:
@@ -142,26 +81,21 @@ service:
address: 0.0.0.0:8888
extensions:
- health_check
- zpages
- pprof
pipelines:
traces:
receivers: [jaeger, otlp]
receivers: [otlp]
processors: [signozspanmetrics/delta, batch]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [clickhousemetricswrite, clickhousemetricswritev2]
metrics/hostmetrics:
receivers: [hostmetrics]
processors: [resourcedetection, batch]
exporters: [clickhousemetricswrite, clickhousemetricswritev2]
metrics/prometheus:
receivers: [prometheus]
processors: [batch]
exporters: [clickhousemetricswrite/prometheus, clickhousemetricswritev2]
logs:
receivers: [otlp, tcplog/docker]
receivers: [otlp]
processors: [batch]
exporters: [clickhouselogsexporter]

deploy/docker/.env Normal file
View File

@@ -0,0 +1 @@
COMPOSE_PROJECT_NAME=clickhouse-setup
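# Docker Compose uses this as the project-name prefix for containers, networks and volumes.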

View File

@@ -1,35 +0,0 @@
global:
resolve_timeout: 1m
slack_api_url: 'https://hooks.slack.com/services/xxx'
route:
receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
slack_configs:
- channel: '#alerts'
send_resolved: true
icon_url: https://avatars3.githubusercontent.com/u/3380462
title: |-
[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
{{- if gt (len .CommonLabels) (len .GroupLabels) -}}
{{" "}}(
{{- with .CommonLabels.Remove .GroupLabels.Names }}
{{- range $index, $label := .SortedPairs -}}
{{ if $index }}, {{ end }}
{{- $label.Name }}="{{ $label.Value -}}"
{{- end }}
{{- end -}}
)
{{- end }}
text: >-
{{ range .Alerts -}}
*Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}
*Description:* {{ .Annotations.description }}
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}

View File

@@ -1,11 +0,0 @@
groups:
- name: ExampleCPULoadGroup
rules:
- alert: HighCpuLoad
expr: system_cpu_load_average_1m > 0.1
for: 0m
labels:
severity: warning
annotations:
summary: High CPU load
description: "CPU load is > 0.1\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"

View File

@@ -1,41 +0,0 @@
<?xml version="1.0"?>
<clickhouse>
<storage_configuration>
<disks>
<default>
<keep_free_space_bytes>10485760</keep_free_space_bytes>
</default>
<s3>
<type>s3</type>
<!-- For S3 cold storage,
if region is us-east-1, endpoint can be https://<bucket-name>.s3.amazonaws.com
if region is not us-east-1, endpoint should be https://<bucket-name>.s3-<region>.amazonaws.com
For GCS cold storage,
endpoint should be https://storage.googleapis.com/<bucket-name>/data/
-->
<endpoint>https://BUCKET-NAME.s3-REGION-NAME.amazonaws.com/data/</endpoint>
<access_key_id>ACCESS-KEY-ID</access_key_id>
<secret_access_key>SECRET-ACCESS-KEY</secret_access_key>
<!-- In case of S3, uncomment the below configuration in case you want to read
AWS credentials from the Environment variables if they exist. -->
<!-- <use_environment_credentials>true</use_environment_credentials> -->
<!-- In case of GCS, uncomment the below configuration, since GCS does
not support batch deletion and result in error messages in logs. -->
<!-- <support_batch_delete>false</support_batch_delete> -->
</s3>
</disks>
<policies>
<tiered>
<volumes>
<default>
<disk>default</disk>
</default>
<s3>
<disk>s3</disk>
<perform_ttl_move_on_insert>0</perform_ttl_move_on_insert>
</s3>
</volumes>
</tiered>
</policies>
</storage_configuration>
</clickhouse>

View File

@@ -1,123 +0,0 @@
<?xml version="1.0"?>
<clickhouse>
<!-- See also the files in users.d directory where the settings can be overridden. -->
<!-- Profiles of settings. -->
<profiles>
<!-- Default settings. -->
<default>
<!-- Maximum memory usage for processing single query, in bytes. -->
<max_memory_usage>10000000000</max_memory_usage>
<!-- How to choose between replicas during distributed query processing.
random - choose random replica from set of replicas with minimum number of errors
nearest_hostname - from set of replicas with minimum number of errors, choose replica
with minimum number of different symbols between replica's hostname and local hostname
(Hamming distance).
in_order - first live replica is chosen in specified order.
first_or_random - if the first replica has a higher number of errors, pick a random one from the replicas with the minimum number of errors.
-->
<load_balancing>random</load_balancing>
</default>
<!-- Profile that allows only read queries. -->
<readonly>
<readonly>1</readonly>
</readonly>
</profiles>
<!-- Users and ACL. -->
<users>
<!-- If user name was not specified, 'default' user is used. -->
<default>
<!-- See also the files in users.d directory where the password can be overridden.
Password could be specified in plaintext or in SHA256 (in hex format).
If you want to specify password in plaintext (not recommended), place it in 'password' element.
Example: <password>qwerty</password>.
Password could be empty.
If you want to specify SHA256, place it in 'password_sha256_hex' element.
Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
Restrictions of SHA256: impossibility to connect to ClickHouse using MySQL JS client (as of July 2019).
If you want to specify double SHA1, place it in 'password_double_sha1_hex' element.
Example: <password_double_sha1_hex>e395796d6546b1b65db9d665cd43f0e858dd4303</password_double_sha1_hex>
If you want to specify a previously defined LDAP server (see 'ldap_servers' in the main config) for authentication,
place its name in 'server' element inside 'ldap' element.
Example: <ldap><server>my_ldap_server</server></ldap>
If you want to authenticate the user via Kerberos (assuming Kerberos is enabled, see 'kerberos' in the main config),
place 'kerberos' element instead of 'password' (and similar) elements.
The name part of the canonical principal name of the initiator must match the user name for authentication to succeed.
You can also place 'realm' element inside 'kerberos' element to further restrict authentication to only those requests
whose initiator's realm matches it.
Example: <kerberos />
Example: <kerberos><realm>EXAMPLE.COM</realm></kerberos>
How to generate decent password:
Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
In first line will be password and in second - corresponding SHA256.
How to generate double SHA1:
Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
In first line will be password and in second - corresponding double SHA1.
-->
<password></password>
<!-- List of networks with open access.
To open access from everywhere, specify:
<ip>::/0</ip>
To open access only from localhost, specify:
<ip>::1</ip>
<ip>127.0.0.1</ip>
Each element of list has one of the following forms:
<ip> IP-address or network mask. Examples: 213.180.204.3 or 10.0.0.1/8 or 10.0.0.1/255.255.255.0
2a02:6b8::3 or 2a02:6b8::3/64 or 2a02:6b8::3/ffff:ffff:ffff:ffff::.
<host> Hostname. Example: server01.clickhouse.com.
To check access, DNS query is performed, and all received addresses compared to peer address.
<host_regexp> Regular expression for host names. Example, ^server\d\d-\d\d-\d\.clickhouse\.com$
To check access, DNS PTR query is performed for peer address and then regexp is applied.
Then, for result of PTR query, another DNS query is performed and all received addresses compared to peer address.
It is strongly recommended that the regexp ends with $.
All results of DNS requests are cached till server restart.
-->
<networks>
<ip>::/0</ip>
</networks>
<!-- Settings profile for user. -->
<profile>default</profile>
<!-- Quota for user. -->
<quota>default</quota>
<!-- User can create other users and grant rights to them. -->
<!-- <access_management>1</access_management> -->
</default>
</users>
<!-- Quotas. -->
<quotas>
<!-- Name of quota. -->
<default>
<!-- Limits for time interval. You could specify many intervals with different limits. -->
<interval>
<!-- Length of interval. -->
<duration>3600</duration>
<!-- No limits. Just calculate resource usage for time interval. -->
<queries>0</queries>
<errors>0</errors>
<result_rows>0</result_rows>
<read_rows>0</read_rows>
<execution_time>0</execution_time>
</interval>
</default>
</quotas>
</clickhouse>

View File

@@ -1,115 +0,0 @@
version: "2.4"
include:
- test-app-docker-compose.yaml
services:
zookeeper-1:
image: bitnami/zookeeper:3.7.1
container_name: signoz-zookeeper-1
hostname: zookeeper-1
user: root
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- ./data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
# - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
clickhouse:
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: signoz-clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
tty: true
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
- ./custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
- ./data/clickhouse/:/var/lib/clickhouse/
- ./user_scripts:/var/lib/clickhouse/user_scripts/
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "0.0.0.0:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
alertmanager:
container_name: signoz-alertmanager
image: signoz/alertmanager:0.23.7
volumes:
- ./data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
restart: on-failure
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
otel-collector-migrator:
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.23}
container_name: otel-migrator
command:
- "sync"
- "--dsn=tcp://clickhouse:9000"
- "--up="
depends_on:
clickhouse:
condition: service_healthy
# clickhouse-2:
# condition: service_healthy
# clickhouse-3:
# condition: service_healthy
# Note for maintainers/contributors: if you change the line numbers of the frontend & query-service sections, please update them in `./scripts/commentLinesForSetup.sh` & `./CONTRIBUTING.md`
otel-collector:
container_name: signoz-otel-collector
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.23}
command: ["--config=/etc/otel-collector-config.yaml", "--manager-config=/etc/manager-config.yaml", "--copy-path=/var/tmp/collector-config.yaml", "--feature-gates=-pkg.translator.prometheus.NormalizeName"]
# user: root # required for reading docker container logs
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ./otel-collector-opamp-config.yaml:/etc/manager-config.yaml
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- /:/hostfs:ro
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8888:8888" # OtelCollector internal metrics
# - "8889:8889" # signoz spanmetrics exposed by the agent
# - "9411:9411" # Zipkin port
# - "13133:13133" # health check extension
# - "14250:14250" # Jaeger gRPC
# - "14268:14268" # Jaeger thrift HTTP
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zPages extension
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
otel-collector-migrator:
condition: service_completed_successfully
query-service:
condition: service_healthy
logspout:
image: "gliderlabs/logspout:v3.2.14"
container_name: signoz-logspout
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
command: syslog+tcp://otel-collector:2255
depends_on:
- otel-collector
restart: on-failure

View File

@@ -1,68 +0,0 @@
version: "2.4"
services:
query-service:
hostname: query-service
build:
context: "../../../"
dockerfile: "./pkg/query-service/Dockerfile"
args:
LDFLAGS: ""
TARGETPLATFORM: "${GOOS}/${GOARCH}"
container_name: signoz-query-service
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
command:
[
"-config=/root/config/prometheus.yml",
"--use-logs-new-schema=true",
"--use-trace-new-schema=true"
]
ports:
- "6060:6060"
- "8080:8080"
restart: on-failure
healthcheck:
test:
[
"CMD",
"wget",
"--spider",
"-q",
"localhost:8080/api/v1/health"
]
interval: 30s
timeout: 5s
retries: 3
depends_on:
clickhouse:
condition: service_healthy
frontend:
build:
context: "../../../frontend"
dockerfile: "./Dockerfile"
args:
TARGETOS: "${GOOS}"
TARGETPLATFORM: "${GOARCH}"
container_name: signoz-frontend
environment:
- FRONTEND_API_ENDPOINT=http://query-service:8080
restart: on-failure
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf

View File

@@ -1,257 +0,0 @@
x-clickhouse-defaults: &clickhouse-defaults
restart: on-failure
# adding a non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
depends_on:
- zookeeper-1
# - zookeeper-2
# - zookeeper-3
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "0.0.0.0:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-db-depend: &db-depend
depends_on:
clickhouse:
condition: service_healthy
otel-collector-migrator-sync:
condition: service_completed_successfully
# clickhouse-2:
# condition: service_healthy
# clickhouse-3:
# condition: service_healthy
services:
zookeeper-1:
image: bitnami/zookeeper:3.7.1
container_name: signoz-zookeeper-1
hostname: zookeeper-1
user: root
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- ./data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
# - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-2:
# image: bitnami/zookeeper:3.7.0
# container_name: signoz-zookeeper-2
# hostname: zookeeper-2
# user: root
# ports:
# - "2182:2181"
# - "2889:2888"
# - "3889:3888"
# volumes:
# - ./data/zookeeper-2:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=2
# - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-3:
# image: bitnami/zookeeper:3.7.0
# container_name: signoz-zookeeper-3
# hostname: zookeeper-3
# user: root
# ports:
# - "2183:2181"
# - "2890:2888"
# - "3890:3888"
# volumes:
# - ./data/zookeeper-3:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=3
# - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
clickhouse:
!!merge <<: *clickhouse-defaults
container_name: signoz-clickhouse
hostname: clickhouse
ports:
- "9000:9000"
- "8123:8123"
- "9181:9181"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
- ./custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
- ./data/clickhouse/:/var/lib/clickhouse/
- ./user_scripts:/var/lib/clickhouse/user_scripts/
# clickhouse-2:
# <<: *clickhouse-defaults
# container_name: signoz-clickhouse-2
# hostname: clickhouse-2
# ports:
# - "9001:9000"
# - "8124:8123"
# - "9182:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./custom-function.xml:/etc/clickhouse-server/custom-function.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-2/:/var/lib/clickhouse/
# - ./user_scripts:/var/lib/clickhouse/user_scripts/
# clickhouse-3:
# <<: *clickhouse-defaults
# container_name: signoz-clickhouse-3
# hostname: clickhouse-3
# ports:
# - "9002:9000"
# - "8125:8123"
# - "9183:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./custom-function.xml:/etc/clickhouse-server/custom-function.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-3/:/var/lib/clickhouse/
# - ./user_scripts:/var/lib/clickhouse/user_scripts/
alertmanager:
image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
container_name: signoz-alertmanager
volumes:
- ./data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
restart: on-failure
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
# Note for maintainers/contributors: if you change the line numbers of the frontend & query-service sections, please update them in `./scripts/commentLinesForSetup.sh` & `./CONTRIBUTING.md`
query-service:
image: signoz/query-service:${DOCKER_TAG:-0.68.0}
container_name: signoz-query-service
command: ["-config=/root/config/prometheus.yml", "--use-logs-new-schema=true", "--use-trace-new-schema=true"]
# ports:
# - "6060:6060" # pprof port
# - "8080:8080" # query-service port
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
restart: on-failure
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/health"]
interval: 30s
timeout: 5s
retries: 3
!!merge <<: *db-depend
frontend:
image: signoz/frontend:${DOCKER_TAG:-0.68.0}
container_name: signoz-frontend
restart: on-failure
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector-migrator-sync:
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.23}
container_name: otel-migrator-sync
command:
- "sync"
- "--dsn=tcp://clickhouse:9000"
- "--up="
depends_on:
clickhouse:
condition: service_healthy
# clickhouse-2:
# condition: service_healthy
# clickhouse-3:
# condition: service_healthy
otel-collector-migrator-async:
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.23}
container_name: otel-migrator-async
command:
- "async"
- "--dsn=tcp://clickhouse:9000"
- "--up="
depends_on:
clickhouse:
condition: service_healthy
otel-collector-migrator-sync:
condition: service_completed_successfully
# clickhouse-2:
# condition: service_healthy
# clickhouse-3:
# condition: service_healthy
otel-collector:
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.23}
container_name: signoz-otel-collector
command: ["--config=/etc/otel-collector-config.yaml", "--manager-config=/etc/manager-config.yaml", "--copy-path=/var/tmp/collector-config.yaml", "--feature-gates=-pkg.translator.prometheus.NormalizeName"]
user: root # required for reading docker container logs
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ./otel-collector-opamp-config.yaml:/etc/manager-config.yaml
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- /:/hostfs:ro
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8888:8888" # OtelCollector internal metrics
# - "8889:8889" # signoz spanmetrics exposed by the agent
# - "9411:9411" # Zipkin port
# - "13133:13133" # health check extension
# - "14250:14250" # Jaeger gRPC
# - "14268:14268" # Jaeger thrift HTTP
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zPages extension
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
otel-collector-migrator-sync:
condition: service_completed_successfully
query-service:
condition: service_healthy
logspout:
image: "gliderlabs/logspout:v3.2.14"
container_name: signoz-logspout
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
command: syslog+tcp://otel-collector:2255
depends_on:
- otel-collector
restart: on-failure

View File

@@ -1,243 +0,0 @@
version: "2.4"
include:
- test-app-docker-compose.yaml
x-clickhouse-defaults: &clickhouse-defaults
restart: on-failure
# adding a non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
depends_on:
- zookeeper-1
# - zookeeper-2
# - zookeeper-3
logging:
options:
max-size: 50m
max-file: "3"
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test: ["CMD", "wget", "--spider", "-q", "0.0.0.0:8123/ping"]
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-db-depend: &db-depend
depends_on:
clickhouse:
condition: service_healthy
otel-collector-migrator:
condition: service_completed_successfully
# clickhouse-2:
# condition: service_healthy
# clickhouse-3:
# condition: service_healthy
services:
zookeeper-1:
image: bitnami/zookeeper:3.7.1
container_name: signoz-zookeeper-1
hostname: zookeeper-1
user: root
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- ./data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
# - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-2:
# image: bitnami/zookeeper:3.7.0
# container_name: signoz-zookeeper-2
# hostname: zookeeper-2
# user: root
# ports:
# - "2182:2181"
# - "2889:2888"
# - "3889:3888"
# volumes:
# - ./data/zookeeper-2:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=2
# - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
# zookeeper-3:
# image: bitnami/zookeeper:3.7.0
# container_name: signoz-zookeeper-3
# hostname: zookeeper-3
# user: root
# ports:
# - "2183:2181"
# - "2890:2888"
# - "3890:3888"
# volumes:
# - ./data/zookeeper-3:/bitnami/zookeeper
# environment:
# - ZOO_SERVER_ID=3
# - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
# - ALLOW_ANONYMOUS_LOGIN=yes
# - ZOO_AUTOPURGE_INTERVAL=1
clickhouse:
!!merge <<: *clickhouse-defaults
container_name: signoz-clickhouse
hostname: clickhouse
ports:
- "9000:9000"
- "8123:8123"
- "9181:9181"
volumes:
- ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
- ./custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
- ./data/clickhouse/:/var/lib/clickhouse/
- ./user_scripts:/var/lib/clickhouse/user_scripts/
# clickhouse-2:
# <<: *clickhouse-defaults
# container_name: signoz-clickhouse-2
# hostname: clickhouse-2
# ports:
# - "9001:9000"
# - "8124:8123"
# - "9182:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./custom-function.xml:/etc/clickhouse-server/custom-function.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-2/:/var/lib/clickhouse/
# - ./user_scripts:/var/lib/clickhouse/user_scripts/
# clickhouse-3:
# <<: *clickhouse-defaults
# container_name: signoz-clickhouse-3
# hostname: clickhouse-3
# ports:
# - "9002:9000"
# - "8125:8123"
# - "9183:9181"
# volumes:
# - ./clickhouse-config.xml:/etc/clickhouse-server/config.xml
# - ./clickhouse-users.xml:/etc/clickhouse-server/users.xml
# - ./custom-function.xml:/etc/clickhouse-server/custom-function.xml
# - ./clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
# # - ./clickhouse-storage.xml:/etc/clickhouse-server/config.d/storage.xml
# - ./data/clickhouse-3/:/var/lib/clickhouse/
# - ./user_scripts:/var/lib/clickhouse/user_scripts/
alertmanager:
image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
container_name: signoz-alertmanager
volumes:
- ./data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
restart: on-failure
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
# Note for maintainers/contributors: if you change the line numbers of the frontend & query-service sections, please update them in `./scripts/commentLinesForSetup.sh` & `./CONTRIBUTING.md`
query-service:
image: signoz/query-service:${DOCKER_TAG:-0.68.0}
container_name: signoz-query-service
command: ["-config=/root/config/prometheus.yml", "-gateway-url=https://api.staging.signoz.cloud", "--use-logs-new-schema=true", "--use-trace-new-schema=true"]
# ports:
# - "6060:6060" # pprof port
# - "8080:8080" # query-service port
volumes:
- ./prometheus.yml:/root/config/prometheus.yml
- ../dashboards:/root/config/dashboards
- ./data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
- KAFKA_SPAN_EVAL=${KAFKA_SPAN_EVAL:-false}
restart: on-failure
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/health"]
interval: 30s
timeout: 5s
retries: 3
!!merge <<: *db-depend
frontend:
image: signoz/frontend:${DOCKER_TAG:-0.68.0}
container_name: signoz-frontend
restart: on-failure
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector-migrator:
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.23}
container_name: otel-migrator
command:
- "--dsn=tcp://clickhouse:9000"
depends_on:
clickhouse:
condition: service_healthy
# clickhouse-2:
# condition: service_healthy
# clickhouse-3:
# condition: service_healthy
otel-collector:
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.23}
container_name: signoz-otel-collector
command: ["--config=/etc/otel-collector-config.yaml", "--manager-config=/etc/manager-config.yaml", "--copy-path=/var/tmp/collector-config.yaml", "--feature-gates=-pkg.translator.prometheus.NormalizeName"]
user: root # required for reading docker container logs
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ./otel-collector-opamp-config.yaml:/etc/manager-config.yaml
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- /:/hostfs:ro
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
# - "8888:8888" # OtelCollector internal metrics
# - "8889:8889" # signoz spanmetrics exposed by the agent
# - "9411:9411" # Zipkin port
# - "13133:13133" # health check extension
# - "14250:14250" # Jaeger gRPC
# - "14268:14268" # Jaeger thrift HTTP
# - "55678:55678" # OpenCensus receiver
# - "55679:55679" # zPages extension
restart: on-failure
depends_on:
clickhouse:
condition: service_healthy
otel-collector-migrator:
condition: service_completed_successfully
query-service:
condition: service_healthy
logspout:
image: "gliderlabs/logspout:v3.2.14"
container_name: signoz-logspout
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
command: syslog+tcp://otel-collector:2255
depends_on:
- otel-collector
restart: on-failure

View File

@@ -1,3 +0,0 @@
include:
- test-app-docker-compose.yaml
- docker-compose-minimal.yaml

View File

@@ -1,64 +0,0 @@
<clickhouse>
<logger>
<!-- Possible levels [1]:
- none (turns off logging)
- fatal
- critical
- error
- warning
- notice
- information
- debug
- trace
[1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114
-->
<level>information</level>
<log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
<errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
<!-- Rotation policy
See https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/FileChannel.h#L54-L85
-->
<size>1000M</size>
<count>10</count>
<!-- <console>1</console> --> <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->
</logger>
<listen_host>0.0.0.0</listen_host>
<max_connections>4096</max_connections>
<keeper_server>
<tcp_port>9181</tcp_port>
<!-- Must be unique among all keeper servers -->
<server_id>1</server_id>
<log_storage_path>/var/lib/clickhouse/coordination/logs</log_storage_path>
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
<coordination_settings>
<operation_timeout_ms>10000</operation_timeout_ms>
<min_session_timeout_ms>10000</min_session_timeout_ms>
<session_timeout_ms>100000</session_timeout_ms>
<raft_logs_level>information</raft_logs_level>
<compress_logs>false</compress_logs>
<!-- All settings listed in https://github.com/ClickHouse/ClickHouse/blob/master/src/Coordination/CoordinationSettings.h -->
</coordination_settings>
<!-- enable sanity hostname checks for cluster configuration (e.g. if localhost is used with remote endpoints) -->
<hostname_checks_enabled>true</hostname_checks_enabled>
<raft_configuration>
<server>
<id>1</id>
<!-- Internal port and hostname -->
<hostname>clickhouses-keeper-1</hostname>
<port>9234</port>
</server>
<!-- Add more servers here -->
</raft_configuration>
</keeper_server>
</clickhouse>

View File

@@ -1 +0,0 @@
server_endpoint: ws://query-service:4320/v1/opamp

View File

@@ -1,26 +0,0 @@
services:
hotrod:
image: jaegertracing/example-hotrod:1.30
container_name: hotrod
logging:
options:
max-size: 50m
max-file: "3"
command: [ "all" ]
environment:
- JAEGER_ENDPOINT=http://otel-collector:14268/api/traces
load-hotrod:
image: "signoz/locust:1.2.3"
container_name: load-hotrod
hostname: load-hotrod
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../common/locust-scripts:/locust

View File

@@ -0,0 +1,2 @@
This directory is deprecated and will be removed in the future.
Please use the new directory for ClickHouse setup scripts, `scripts/clickhouse`, instead.

View File

@@ -1,16 +0,0 @@
from locust import HttpUser, task, between
class UserTasks(HttpUser):
wait_time = between(5, 15)
@task
def rachel(self):
self.client.get("/dispatch?customer=123&nonse=0.6308392664170006")
@task
def trom(self):
self.client.get("/dispatch?customer=392&nonse=0.015296363321630757")
@task
def japanese(self):
self.client.get("/dispatch?customer=731&nonse=0.8022286220408668")
@task
def coffee(self):
self.client.get("/dispatch?customer=567&nonse=0.0022220379420636593")

View File

@@ -0,0 +1,283 @@
version: "3"
x-common: &common
networks:
- signoz-net
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
<<: *common
# adding a non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
labels:
signoz.io/scrape: "true"
signoz.io/port: "9363"
signoz.io/path: "/metrics"
depends_on:
init-clickhouse:
condition: service_completed_successfully
zookeeper-1:
condition: service_healthy
zookeeper-2:
condition: service_healthy
zookeeper-3:
condition: service_healthy
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- 0.0.0.0:8123/ping
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-zookeeper-defaults: &zookeeper-defaults
<<: *common
image: bitnami/zookeeper:3.7.1
user: root
labels:
signoz.io/scrape: "true"
signoz.io/port: "9141"
signoz.io/path: "/metrics"
healthcheck:
test:
- CMD-SHELL
- curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
interval: 30s
timeout: 5s
retries: 3
x-db-depend: &db-depend
<<: *common
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
services:
init-clickhouse:
<<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: signoz-init-clickhouse
command:
- bash
- -c
- |
version="v0.0.1"
node_os=$$(uname -s | tr '[:upper:]' '[:lower:]')
node_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
echo "Fetching histogram-binary for $${node_os}/$${node_arch}"
cd /tmp
wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
tar -xvzf histogram-quantile.tar.gz
mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
volumes:
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
zookeeper-1:
<<: *zookeeper-defaults
container_name: signoz-zookeeper-1
# ports:
# - "2181:2181"
# - "2888:2888"
# - "3888:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
- ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
zookeeper-2:
<<: *zookeeper-defaults
container_name: signoz-zookeeper-2
# ports:
# - "2182:2181"
# - "2889:2888"
# - "3889:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-2:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=2
- ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
zookeeper-3:
<<: *zookeeper-defaults
container_name: signoz-zookeeper-3
# ports:
# - "2183:2181"
# - "2890:2888"
# - "3890:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-3:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=3
- ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
clickhouse:
<<: *clickhouse-defaults
container_name: signoz-clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
# - "9181:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
clickhouse-2:
<<: *clickhouse-defaults
container_name: signoz-clickhouse-2
# ports:
# - "9001:9000"
# - "8124:8123"
# - "9182:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse-2/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
clickhouse-3:
<<: *clickhouse-defaults
container_name: signoz-clickhouse-3
# ports:
# - "9002:9000"
# - "8125:8123"
# - "9183:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse-3/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
alertmanager:
<<: *common
image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
container_name: signoz-alertmanager
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
volumes:
- ./clickhouse-setup/data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
query-service:
<<: *db-depend
image: signoz/query-service:${DOCKER_TAG:-0.69.0}
container_name: signoz-query-service
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "3301:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
- ../common/dashboards:/root/config/dashboards
- ./clickhouse-setup/data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- localhost:8080/api/v1/health
interval: 30s
timeout: 5s
retries: 3
frontend:
<<: *common
image: signoz/frontend:${DOCKER_TAG:-0.69.0}
container_name: signoz-frontend
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
# TODO: support multiple otel-collector replicas. Nginx/Traefik for load balancing?
otel-collector:
<<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.24}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
query-service:
condition: service_healthy
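# The migrator is split in two: "sync" must complete before ClickHouse-dependent services start (see x-db-depend), while "async" applies the remaining migrations alongside the running stack.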
schema-migrator-sync:
<<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
container_name: schema-migrator-sync
command:
- sync
- --dsn=tcp://clickhouse:9000
- --up=
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-async:
<<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
container_name: schema-migrator-async
command:
- async
- --dsn=tcp://clickhouse:9000
- --up=
networks:
signoz-net:
name: signoz-net

View File

@@ -0,0 +1,213 @@
version: "3"
x-common: &common
networks:
- signoz-net
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
<<: *common
# adding a non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
labels:
signoz.io/scrape: "true"
signoz.io/port: "9363"
signoz.io/path: "/metrics"
depends_on:
init-clickhouse:
condition: service_completed_successfully
zookeeper-1:
condition: service_healthy
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- 0.0.0.0:8123/ping
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-zookeeper-defaults: &zookeeper-defaults
<<: *common
image: bitnami/zookeeper:3.7.1
user: root
labels:
signoz.io/scrape: "true"
signoz.io/port: "9141"
signoz.io/path: "/metrics"
healthcheck:
test:
- CMD-SHELL
- curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
interval: 30s
timeout: 5s
retries: 3
x-db-depend: &db-depend
<<: *common
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
services:
init-clickhouse:
<<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: signoz-init-clickhouse
command:
- bash
- -c
- |
version="v0.0.1"
node_os=$$(uname -s | tr '[:upper:]' '[:lower:]')
node_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
echo "Fetching histogram-binary for $${node_os}/$${node_arch}"
cd /tmp
wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
tar -xvzf histogram-quantile.tar.gz
mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
volumes:
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
zookeeper-1:
<<: *zookeeper-defaults
container_name: signoz-zookeeper-1
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
clickhouse:
<<: *clickhouse-defaults
container_name: signoz-clickhouse
ports:
- "9000:9000"
- "8123:8123"
- "9181:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
alertmanager:
<<: *common
image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
container_name: signoz-alertmanager
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
volumes:
- ./clickhouse-setup/data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
query-service:
<<: *db-depend
image: signoz/query-service:${DOCKER_TAG:-0.69.0}
container_name: signoz-query-service
command:
- --config=/root/config/prometheus.yml
- --gateway-url=https://api.staging.signoz.cloud
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "8080:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
- ../common/dashboards:/root/config/dashboards
- ./clickhouse-setup/data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
- KAFKA_SPAN_EVAL=${KAFKA_SPAN_EVAL:-false}
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- localhost:8080/api/v1/health
interval: 30s
timeout: 5s
retries: 3
frontend:
<<: *common
image: signoz/frontend:${DOCKER_TAG:-0.69.0}
container_name: signoz-frontend
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
<<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.24}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
depends_on:
query-service:
condition: service_healthy
schema-migrator-sync:
<<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
container_name: schema-migrator-sync
command:
- sync
- --dsn=tcp://clickhouse:9000
- --up=
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-async:
<<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
container_name: schema-migrator-async
command:
- async
- --dsn=tcp://clickhouse:9000
- --up=
networks:
signoz-net:
name: signoz-net

View File

@@ -0,0 +1,211 @@
version: "3"
x-common: &common
networks:
- signoz-net
restart: on-failure
logging:
options:
max-size: 50m
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
<<: *common
# adding non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
labels:
signoz.io/scrape: "true"
signoz.io/port: "9363"
signoz.io/path: "/metrics"
depends_on:
init-clickhouse:
condition: service_completed_successfully
zookeeper-1:
condition: service_healthy
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- 0.0.0.0:8123/ping
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-zookeeper-defaults: &zookeeper-defaults
<<: *common
image: bitnami/zookeeper:3.7.1
user: root
labels:
signoz.io/scrape: "true"
signoz.io/port: "9141"
signoz.io/path: "/metrics"
healthcheck:
test:
- CMD-SHELL
- curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
interval: 30s
timeout: 5s
retries: 3
x-db-depend: &db-depend
<<: *common
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
services:
init-clickhouse:
<<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: signoz-init-clickhouse
command:
- bash
- -c
- |
version="v0.0.1"
node_os=$$(uname -s | tr '[:upper:]' '[:lower:]')
node_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
echo "Fetching histogram-binary for $${node_os}/$${node_arch}"
cd /tmp
wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
tar -xvzf histogram-quantile.tar.gz
mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
volumes:
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
zookeeper-1:
<<: *zookeeper-defaults
container_name: signoz-zookeeper-1
# ports:
# - "2181:2181"
# - "2888:2888"
# - "3888:3888"
volumes:
- ./clickhouse-setup/data/zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
clickhouse:
<<: *clickhouse-defaults
container_name: signoz-clickhouse
# ports:
# - "9000:9000"
# - "8123:8123"
# - "9181:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
- ./clickhouse-setup/data/clickhouse/:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
alertmanager:
<<: *common
image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
container_name: signoz-alertmanager
command:
- --queryService.url=http://query-service:8085
- --storage.path=/data
volumes:
- ./clickhouse-setup/data/alertmanager:/data
depends_on:
query-service:
condition: service_healthy
query-service:
<<: *db-depend
image: signoz/query-service:${DOCKER_TAG:-0.69.0}
container_name: signoz-query-service
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "3301:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
- ../common/dashboards:/root/config/dashboards
- ./clickhouse-setup/data/signoz/:/var/lib/signoz/
environment:
- ClickHouseUrl=tcp://clickhouse:9000
- ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
- SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- localhost:8080/api/v1/health
interval: 30s
timeout: 5s
retries: 3
frontend:
<<: *common
image: signoz/frontend:${DOCKER_TAG:-0.69.0}
container_name: signoz-frontend
depends_on:
- alertmanager
- query-service
ports:
- "3301:3301"
volumes:
- ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
otel-collector:
<<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.24}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
depends_on:
query-service:
condition: service_healthy
schema-migrator-sync:
<<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
container_name: schema-migrator-sync
command:
- sync
- --dsn=tcp://clickhouse:9000
- --up=
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-async:
<<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
container_name: schema-migrator-async
command:
- async
- --dsn=tcp://clickhouse:9000
- --up=
networks:
signoz-net:
name: signoz-net

View File

@@ -0,0 +1,39 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
logging:
options:
max-size: 50m
max-file: "3"
restart: on-failure
services:
hotrod:
<<: *common
image: jaegertracing/example-hotrod:1.61.0
container_name: hotrod
command: [ "all" ]
environment:
- OTEL_EXPORTER_OTLP_ENDPOINT=http://host.docker.internal:4318 # In case of external SigNoz or cloud, update the endpoint and access token
# - OTEL_OTLP_HEADERS=signoz-access-token=<your-access-token>
load-hotrod:
<<: *common
image: "signoz/locust:1.2.3"
container_name: load-hotrod
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../../../common/locust-scripts:/locust
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -0,0 +1,43 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
logging:
options:
max-size: 50m
max-file: "3"
restart: on-failure
services:
otel-agent:
<<: *common
image: otel/opentelemetry-collector-contrib:0.111.0
command:
- --config=/etc/otel-collector-config.yaml
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- /:/hostfs:ro
- /var/run/docker.sock:/var/run/docker.sock
environment:
- SIGNOZ_COLLECTOR_ENDPOINT=http://host.docker.internal:4317 # In case of external SigNoz or cloud, update the endpoint and access token
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux # Replace signoz-host with the actual hostname
# - SIGNOZ_ACCESS_TOKEN="<your-access-token>"
# Before exposing the ports, make sure the ports are not used by other services
# ports:
# - "4317:4317"
# - "4318:4318"
logspout:
<<: *common
image: "gliderlabs/logspout:v3.2.14"
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
command: syslog+tcp://otel-agent:2255
depends_on:
- otel-agent
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -0,0 +1,139 @@
receivers:
hostmetrics:
collection_interval: 30s
root_path: /hostfs
scrapers:
cpu: {}
load: {}
memory: {}
disk: {}
filesystem: {}
network: {}
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-collector
# For Docker daemon metrics to be scraped, it must be configured to expose
# Prometheus metrics, as documented here: https://docs.docker.com/config/daemon/prometheus/
# - job_name: docker-daemon
# static_configs:
# - targets:
# - host.docker.internal:9323
# labels:
# job_name: docker-daemon
- job_name: docker-container
docker_sd_configs:
- host: unix:///var/run/docker.sock
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_docker_container_label_signoz_io_scrape
- regex: true
source_labels:
- __meta_docker_container_label_signoz_io_path
target_label: __metrics_path__
- regex: (.+)
source_labels:
- __meta_docker_container_label_signoz_io_path
target_label: __metrics_path__
- separator: ":"
source_labels:
- __meta_docker_network_ip
- __meta_docker_container_label_signoz_io_port
target_label: __address__
- regex: '/(.*)'
replacement: '$1'
source_labels:
- __meta_docker_container_name
target_label: container_name
- regex: __meta_docker_container_label_signoz_io_(.+)
action: labelmap
replacement: $1
tcplog/docker:
listen_address: "0.0.0.0:2255"
operators:
- type: regex_parser
regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
timestamp:
parse_from: attributes.timestamp
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: move
from: attributes["body"]
to: body
- type: remove
field: attributes.timestamp
# remove container names from the filter expression below if you want to collect logs from them
- type: filter
id: signoz_logs_filter
expr: 'attributes.container_name matches "^(signoz-(|alertmanager|query-service|otel-collector|clickhouse|zookeeper))|(infra-(logspout|otel-agent)-.*)"'
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
resourcedetection:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors:
# - ec2
# - gcp
# - azure
- env
- system
timeout: 2s
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
otlp:
endpoint: ${env:SIGNOZ_COLLECTOR_ENDPOINT}
tls:
insecure: true
headers:
signoz-access-token: ${env:SIGNOZ_ACCESS_TOKEN}
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
pipelines:
traces:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/hostmetrics:
receivers: [hostmetrics]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/prometheus:
receivers: [prometheus]
processors: [resourcedetection, batch]
exporters: [otlp]
logs:
receivers: [otlp, tcplog/docker]
processors: [resourcedetection, batch]
exporters: [otlp]

View File

@@ -1,62 +1,21 @@
receivers:
tcplog/docker:
listen_address: "0.0.0.0:2255"
operators:
- type: regex_parser
regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
timestamp:
parse_from: attributes.timestamp
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: move
from: attributes["body"]
to: body
- type: remove
field: attributes.timestamp
# remove container names from the filter expression below if you want to collect logs from them
- type: filter
id: signoz_logs_filter
expr: 'attributes.container_name matches "^signoz_(logspout|frontend|alertmanager|query-service|otel-collector|clickhouse|zookeeper)"'
opencensus:
endpoint: 0.0.0.0:55678
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_http:
endpoint: 0.0.0.0:14268
# thrift_compact:
# endpoint: 0.0.0.0:6831
# thrift_binary:
# endpoint: 0.0.0.0:6832
hostmetrics:
collection_interval: 30s
root_path: /hostfs
scrapers:
cpu: {}
load: {}
memory: {}
disk: {}
filesystem: {}
network: {}
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
# otel-collector internal metrics
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-collector
processors:
batch:
send_batch_size: 10000
@@ -64,25 +23,11 @@ processors:
timeout: 10s
resourcedetection:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
detectors: [env, system]
timeout: 2s
# memory_limiter:
# # 80% of maximum memory up to 2G
# limit_mib: 1500
# # 25% of limit up to 2G
# spike_limit_mib: 512
# check_interval: 5s
#
# # 50% of the maximum memory
# limit_percentage: 50
# # 20% of max memory usage spike expected
# spike_limit_percentage: 20
# queued_retry:
# num_workers: 4
# queue_size: 100
# retry_on_failure: true
signozspanmetrics/delta:
metrics_exporter: clickhousemetricswrite
metrics_flush_interval: 60s
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
dimensions_cache_size: 100000
aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
@@ -105,7 +50,11 @@ processors:
- name: host.name
- name: host.type
- name: container.name
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
clickhousetraces:
datasource: tcp://clickhouse:9000/signoz_traces
@@ -119,44 +68,34 @@ exporters:
endpoint: tcp://clickhouse:9000/signoz_metrics
clickhousemetricswritev2:
dsn: tcp://clickhouse:9000/signoz_metrics
# logging: {}
clickhouselogsexporter:
dsn: tcp://clickhouse:9000/signoz_logs
timeout: 10s
use_new_schema: true
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
pprof:
endpoint: 0.0.0.0:1777
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions: [health_check, zpages, pprof]
extensions:
- health_check
- pprof
pipelines:
traces:
receivers: [jaeger, otlp]
receivers: [otlp]
processors: [signozspanmetrics/delta, batch]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [clickhousemetricswrite, clickhousemetricswritev2]
metrics/hostmetrics:
receivers: [hostmetrics]
processors: [resourcedetection, batch]
exporters: [clickhousemetricswrite, clickhousemetricswritev2]
metrics/prometheus:
receivers: [prometheus]
processors: [batch]
exporters: [clickhousemetricswrite/prometheus, clickhousemetricswritev2]
logs:
receivers: [otlp, tcplog/docker]
receivers: [otlp]
processors: [batch]
exporters: [clickhouselogsexporter]

View File

@@ -2,6 +2,11 @@
set -o errexit
# Variables
BASE_DIR="$(dirname "$(readlink -f "$0")")"
DOCKER_STANDALONE_DIR="docker"
DOCKER_SWARM_DIR="docker-swarm" # TODO: Add docker swarm support
# Regular Colors
Black='\033[0;30m' # Black
Red='\[\e[0;31m\]' # Red
@@ -32,6 +37,11 @@ has_cmd() {
command -v "$1" > /dev/null 2>&1
}
# Check if docker compose plugin is present
has_docker_compose_plugin() {
docker compose version > /dev/null 2>&1
}
is_mac() {
[[ $OSTYPE == darwin* ]]
}
@@ -183,9 +193,7 @@ install_docker() {
$sudo_cmd yum-config-manager --add-repo https://download.docker.com/linux/$os/docker-ce.repo
echo "Installing docker"
$yum_cmd install docker-ce docker-ce-cli containerd.io
fi
}
compose_version () {
@@ -222,17 +230,11 @@ start_docker() {
echo -e "🐳 Starting Docker ...\n"
if [[ $os == "Mac" ]]; then
open --background -a Docker && while ! docker system info > /dev/null 2>&1; do sleep 1; done
else
else
if ! $sudo_cmd systemctl is-active docker.service > /dev/null; then
echo "Starting docker service"
$sudo_cmd systemctl start docker.service
fi
# if [[ -z $sudo_cmd ]]; then
# docker ps > /dev/null && true
# if [[ $? -ne 0 ]]; then
# request_sudo
# fi
# fi
if [[ -z $sudo_cmd ]]; then
if ! docker ps > /dev/null && true; then
request_sudo
@@ -260,12 +262,15 @@ wait_for_containers_start() {
}
bye() { # Prints a friendly good bye message and exits the script.
# Switch back to the original directory
popd > /dev/null 2>&1
if [[ "$?" -ne 0 ]]; then
set +o errexit
echo "🔴 The containers didn't seem to start correctly. Please run the following command to check containers that may have errored out:"
echo ""
echo -e "$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml ps -a"
echo -e "cd ${DOCKER_STANDALONE_DIR}"
echo -e "$sudo_cmd $docker_compose_cmd ps -a"
echo "Please read our troubleshooting guide https://signoz.io/docs/install/troubleshooting/"
echo "or reach us for support in #help channel in our Slack Community https://signoz.io/slack"
@@ -296,11 +301,6 @@ request_sudo() {
if (( $EUID != 0 )); then
sudo_cmd="sudo"
echo -e "Please enter your sudo password, if prompted."
# $sudo_cmd -l | grep -e "NOPASSWD: ALL" > /dev/null
# if [[ $? -ne 0 ]] && ! $sudo_cmd -v; then
# echo "Need sudo privileges to proceed with the installation."
# exit 1;
# fi
if ! $sudo_cmd -l | grep -e "NOPASSWD: ALL" > /dev/null && ! $sudo_cmd -v; then
echo "Need sudo privileges to proceed with the installation."
exit 1;
@@ -317,6 +317,7 @@ echo -e "👋 Thank you for trying out SigNoz! "
echo ""
sudo_cmd=""
docker_compose_cmd=""
# Check sudo permissions
if (( $EUID != 0 )); then
@@ -362,28 +363,8 @@ else
SIGNOZ_INSTALLATION_ID=$(echo "$sysinfo" | $digest_cmd | grep -E -o '[a-zA-Z0-9]{64}')
fi
# echo ""
# echo -e "👉 ${RED}Two ways to go forward\n"
# echo -e "${RED}1) ClickHouse as database (default)\n"
# read -p "⚙️ Enter your preference (1/2):" choice_setup
# while [[ $choice_setup != "1" && $choice_setup != "2" && $choice_setup != "" ]]
# do
# # echo $choice_setup
# echo -e "\n❌ ${CYAN}Please enter either 1 or 2"
# read -p "⚙️ Enter your preference (1/2): " choice_setup
# # echo $choice_setup
# done
# if [[ $choice_setup == "1" || $choice_setup == "" ]];then
# setup_type='clickhouse'
# fi
setup_type='clickhouse'
# echo -e "\n✅ ${CYAN}You have chosen: ${setup_type} setup\n"
# Run bye if failure happens
trap bye EXIT
@@ -455,8 +436,6 @@ if [[ $desired_os -eq 0 ]]; then
send_event "os_not_supported"
fi
# check_ports_occupied
# Check if the Docker daemon is installed and available. If not, install & start Docker for Linux machines. We cannot automatically install Docker Desktop on Mac OS
if ! is_command_present docker; then
@@ -486,27 +465,42 @@ if ! is_command_present docker; then
fi
fi
if has_docker_compose_plugin; then
echo "docker compose plugin is present, using it"
docker_compose_cmd="docker compose"
# Install docker-compose
if ! is_command_present docker-compose; then
request_sudo
install_docker_compose
else
docker_compose_cmd="docker-compose"
if ! is_command_present docker-compose; then
request_sudo
install_docker_compose
fi
fi
start_docker
# $sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml up -d --remove-orphans || true
# Switch to the Docker Standalone directory
pushd "${BASE_DIR}/${DOCKER_STANDALONE_DIR}" > /dev/null 2>&1
# check for open ports, if signoz is not installed
if is_command_present docker-compose; then
if $sudo_cmd $docker_compose_cmd ps | grep "signoz-query-service" | grep -q "healthy" > /dev/null 2>&1; then
echo "SigNoz already installed, skipping the occupied ports check"
else
check_ports_occupied
fi
fi
echo ""
echo -e "\n🟡 Pulling the latest container images for SigNoz.\n"
$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml pull
$sudo_cmd $docker_compose_cmd pull
echo ""
echo "🟡 Starting the SigNoz containers. It may take a few minutes ..."
echo
# The docker-compose command does some nasty stuff for the `--detach` functionality. So we add a `|| true` so that the
# The $docker_compose_cmd command does some nasty stuff for the `--detach` functionality. So we add a `|| true` so that the
# script doesn't exit because this command looks like it failed to do its thing.
$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml up --detach --remove-orphans || true
$sudo_cmd $docker_compose_cmd up --detach --remove-orphans || true
wait_for_containers_start 60
echo ""
@@ -516,7 +510,14 @@ if [[ $status_code -ne 200 ]]; then
echo "🔴 The containers didn't seem to start correctly. Please run the following command to check containers that may have errored out:"
echo ""
echo -e "$sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml ps -a"
echo "cd ${DOCKER_STANDALONE_DIR}"
echo "$sudo_cmd $docker_compose_cmd ps -a"
echo ""
echo "Try bringing down the containers and retrying the installation"
echo "cd ${DOCKER_STANDALONE_DIR}"
echo "$sudo_cmd $docker_compose_cmd down -v"
echo ""
echo "Please read our troubleshooting guide https://signoz.io/docs/install/troubleshooting/"
echo "or reach us on SigNoz for support https://signoz.io/slack"
@@ -534,10 +535,13 @@ else
echo ""
echo -e "🟢 Your frontend is running on http://localhost:3301"
echo ""
echo " By default, retention period is set to 15 days for logs and traces, and 30 days for metrics."
echo " By default, retention period is set to 15 days for logs and traces, and 30 days for metrics."
echo -e "To change this, navigate to the General tab on the Settings page of SigNoz UI. For more details, refer to https://signoz.io/docs/userguide/retention-period \n"
echo " To bring down SigNoz and clean volumes : $sudo_cmd docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml down -v"
echo " To bring down SigNoz and clean volumes:"
echo ""
echo "cd ${DOCKER_STANDALONE_DIR}"
echo "$sudo_cmd $docker_compose_cmd down -v"
echo ""
echo "+++++++++++++++++++++++++++++++++++++++++++++++++"
@@ -552,7 +556,7 @@ else
do
read -rp 'Email: ' email
done
send_event "identify_successful_installation"
fi

View File

@@ -7,6 +7,7 @@ import (
"github.com/jmoiron/sqlx"
"go.signoz.io/signoz/pkg/cache"
basechr "go.signoz.io/signoz/pkg/query-service/app/clickhouseReader"
"go.signoz.io/signoz/pkg/query-service/interfaces"
)
@@ -27,8 +28,10 @@ func NewDataConnector(
cluster string,
useLogsNewSchema bool,
useTraceNewSchema bool,
fluxIntervalForTraceDetail time.Duration,
cache cache.Cache,
) *ClickhouseReader {
ch := basechr.NewReader(localDB, promConfigPath, lm, maxIdleConns, maxOpenConns, dialTimeout, cluster, useLogsNewSchema, useTraceNewSchema)
ch := basechr.NewReader(localDB, promConfigPath, lm, maxIdleConns, maxOpenConns, dialTimeout, cluster, useLogsNewSchema, useTraceNewSchema, fluxIntervalForTraceDetail, cache)
return &ClickhouseReader{
conn: ch.GetConn(),
appdb: localDB,

View File

@@ -29,8 +29,8 @@ import (
"go.signoz.io/signoz/ee/query-service/integrations/gateway"
"go.signoz.io/signoz/ee/query-service/interfaces"
"go.signoz.io/signoz/ee/query-service/rules"
"go.signoz.io/signoz/pkg/http/middleware"
baseauth "go.signoz.io/signoz/pkg/query-service/auth"
"go.signoz.io/signoz/pkg/query-service/migrate"
v3 "go.signoz.io/signoz/pkg/query-service/model/v3"
"go.signoz.io/signoz/pkg/signoz"
"go.signoz.io/signoz/pkg/web"
@@ -64,25 +64,26 @@ import (
const AppDbEngine = "sqlite"
type ServerOptions struct {
Config signoz.Config
SigNoz *signoz.SigNoz
PromConfigPath string
SkipTopLvlOpsPath string
HTTPHostPort string
PrivateHostPort string
// alert specific params
DisableRules bool
RuleRepoURL string
PreferSpanMetrics bool
MaxIdleConns int
MaxOpenConns int
DialTimeout time.Duration
CacheConfigPath string
FluxInterval string
Cluster string
GatewayUrl string
UseLogsNewSchema bool
UseTraceNewSchema bool
SkipWebFrontend bool
DisableRules bool
RuleRepoURL string
PreferSpanMetrics bool
MaxIdleConns int
MaxOpenConns int
DialTimeout time.Duration
CacheConfigPath string
FluxInterval string
FluxIntervalForTraceDetail string
Cluster string
GatewayUrl string
UseLogsNewSchema bool
UseTraceNewSchema bool
}
// Server runs HTTP api service
@@ -113,25 +114,22 @@ func (s Server) HealthCheckStatus() chan healthcheck.Status {
// NewServer creates and initializes Server
func NewServer(serverOptions *ServerOptions) (*Server, error) {
modelDao, err := dao.InitDao("sqlite", baseconst.RELATIONAL_DATASOURCE_PATH)
modelDao, err := dao.InitDao(serverOptions.SigNoz.SQLStore.SQLxDB())
if err != nil {
return nil, err
}
baseexplorer.InitWithDSN(baseconst.RELATIONAL_DATASOURCE_PATH)
if err := preferences.InitDB(baseconst.RELATIONAL_DATASOURCE_PATH); err != nil {
if err := baseexplorer.InitWithDSN(serverOptions.SigNoz.SQLStore.SQLxDB()); err != nil {
return nil, err
}
localDB, err := dashboards.InitDB(baseconst.RELATIONAL_DATASOURCE_PATH)
if err != nil {
if err := preferences.InitDB(serverOptions.SigNoz.SQLStore.SQLxDB()); err != nil {
return nil, err
}
localDB.SetMaxOpenConns(10)
if err := dashboards.InitDB(serverOptions.SigNoz.SQLStore.SQLxDB()); err != nil {
return nil, err
}
gatewayProxy, err := gateway.NewProxy(serverOptions.GatewayUrl, gateway.RoutePrefix)
if err != nil {
@@ -139,7 +137,7 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
}
// initiate license manager
lm, err := licensepkg.StartManager("sqlite", localDB)
lm, err := licensepkg.StartManager(serverOptions.SigNoz.SQLStore.SQLxDB())
if err != nil {
return nil, err
}
@@ -148,12 +146,17 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
modelDao.SetFlagProvider(lm)
readerReady := make(chan bool)
fluxIntervalForTraceDetail, err := time.ParseDuration(serverOptions.FluxIntervalForTraceDetail)
if err != nil {
return nil, err
}
var reader interfaces.DataConnector
storage := os.Getenv("STORAGE")
if storage == "clickhouse" {
zap.L().Info("Using ClickHouse as datastore ...")
qb := db.NewDataConnector(
localDB,
serverOptions.SigNoz.SQLStore.SQLxDB(),
serverOptions.PromConfigPath,
lm,
serverOptions.MaxIdleConns,
@@ -162,6 +165,8 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
serverOptions.Cluster,
serverOptions.UseLogsNewSchema,
serverOptions.UseTraceNewSchema,
fluxIntervalForTraceDetail,
serverOptions.SigNoz.Cache,
)
go qb.Start(readerReady)
reader = qb
@@ -189,7 +194,7 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
rm, err := makeRulesManager(serverOptions.PromConfigPath,
baseconst.GetAlertManagerApiPrefix(),
serverOptions.RuleRepoURL,
localDB,
serverOptions.SigNoz.SQLStore.SQLxDB(),
reader,
c,
serverOptions.DisableRules,
@@ -202,27 +207,20 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
return nil, err
}
go func() {
err = migrate.ClickHouseMigrate(reader.GetConn(), serverOptions.Cluster)
if err != nil {
zap.L().Error("error while running clickhouse migrations", zap.Error(err))
}
}()
// initiate opamp
_, err = opAmpModel.InitDB(localDB)
_, err = opAmpModel.InitDB(serverOptions.SigNoz.SQLStore.SQLxDB())
if err != nil {
return nil, err
}
integrationsController, err := integrations.NewController(localDB)
integrationsController, err := integrations.NewController(serverOptions.SigNoz.SQLStore.SQLxDB())
if err != nil {
return nil, fmt.Errorf(
"couldn't create integrations controller: %w", err,
)
}
cloudIntegrationsController, err := cloudintegrations.NewController(localDB)
cloudIntegrationsController, err := cloudintegrations.NewController(serverOptions.SigNoz.SQLStore.SQLxDB())
if err != nil {
return nil, fmt.Errorf(
"couldn't create cloud provider integrations controller: %w", err,
@@ -231,7 +229,7 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
// ingestion pipelines manager
logParsingPipelineController, err := logparsingpipeline.NewLogParsingPipelinesController(
localDB, "sqlite", integrationsController.GetPipelinesForInstalledIntegrations,
serverOptions.SigNoz.SQLStore.SQLxDB(), integrationsController.GetPipelinesForInstalledIntegrations,
)
if err != nil {
return nil, err
@@ -239,8 +237,7 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
// initiate agent config handler
agentConfMgr, err := agentConf.Initiate(&agentConf.ManagerOptions{
DB: localDB,
DBEngine: AppDbEngine,
DB: serverOptions.SigNoz.SQLStore.SQLxDB(),
AgentFeatures: []agentConf.AgentFeature{logParsingPipelineController},
})
if err != nil {
@@ -248,7 +245,7 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
}
// start the usagemanager
usageManager, err := usage.New("sqlite", modelDao, lm.GetRepo(), reader.GetConn())
usageManager, err := usage.New(modelDao, lm.GetRepo(), reader.GetConn())
if err != nil {
return nil, err
}
@@ -261,7 +258,6 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
telemetry.GetInstance().SetSaasOperator(constants.SaasSegmentKey)
fluxInterval, err := time.ParseDuration(serverOptions.FluxInterval)
if err != nil {
return nil, err
}
@@ -328,10 +324,13 @@ func (s *Server) createPrivateServer(apiHandler *api.APIHandler) (*http.Server,
r := baseapp.NewRouter()
r.Use(setTimeoutMiddleware)
r.Use(s.analyticsMiddleware)
r.Use(loggingMiddlewarePrivate)
r.Use(baseapp.LogCommentEnricher)
r.Use(middleware.NewTimeout(zap.L(),
s.serverOptions.Config.APIServer.Timeout.ExcludedRoutes,
s.serverOptions.Config.APIServer.Timeout.Default,
s.serverOptions.Config.APIServer.Timeout.Max,
).Wrap)
r.Use(middleware.NewAnalytics(zap.L()).Wrap)
r.Use(middleware.NewLogging(zap.L(), s.serverOptions.Config.APIServer.Logging.ExcludedRoutes).Wrap)
apiHandler.RegisterPrivateRoutes(r)
@@ -371,10 +370,13 @@ func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*h
}
am := baseapp.NewAuthMiddleware(getUserFromRequest)
r.Use(setTimeoutMiddleware)
r.Use(s.analyticsMiddleware)
r.Use(loggingMiddleware)
r.Use(baseapp.LogCommentEnricher)
r.Use(middleware.NewTimeout(zap.L(),
s.serverOptions.Config.APIServer.Timeout.ExcludedRoutes,
s.serverOptions.Config.APIServer.Timeout.Default,
s.serverOptions.Config.APIServer.Timeout.Max,
).Wrap)
r.Use(middleware.NewAnalytics(zap.L()).Wrap)
r.Use(middleware.NewLogging(zap.L(), s.serverOptions.Config.APIServer.Logging.ExcludedRoutes).Wrap)
apiHandler.RegisterRoutes(r, am)
apiHandler.RegisterLogsRoutes(r, am)
@@ -396,11 +398,9 @@ func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*h
handler = handlers.CompressHandler(handler)
if !s.serverOptions.SkipWebFrontend {
err := web.AddToRouter(r)
if err != nil {
return nil, err
}
err := web.AddToRouter(r)
if err != nil {
return nil, err
}
return &http.Server{
@@ -408,31 +408,6 @@ func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*h
}, nil
}
// TODO(remove): Implemented at pkg/http/middleware/logging.go
// loggingMiddleware is used for logging public api calls
func loggingMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
route := mux.CurrentRoute(r)
path, _ := route.GetPathTemplate()
startTime := time.Now()
next.ServeHTTP(w, r)
zap.L().Info(path, zap.Duration("timeTaken", time.Since(startTime)), zap.String("path", path))
})
}
// TODO(remove): Implemented at pkg/http/middleware/logging.go
// loggingMiddlewarePrivate is used for logging private api calls
// from internal services like alert manager
func loggingMiddlewarePrivate(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
route := mux.CurrentRoute(r)
path, _ := route.GetPathTemplate()
startTime := time.Now()
next.ServeHTTP(w, r)
zap.L().Info(path, zap.Duration("timeTaken", time.Since(startTime)), zap.String("path", path), zap.Bool("tprivatePort", true))
})
}
// TODO(remove): Implemented at pkg/http/middleware/logging.go
type loggingResponseWriter struct {
http.ResponseWriter
@@ -601,23 +576,6 @@ func (s *Server) analyticsMiddleware(next http.Handler) http.Handler {
})
}
// TODO(remove): Implemented at pkg/http/middleware/timeout.go
func setTimeoutMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var cancel context.CancelFunc
// check if route is not excluded
url := r.URL.Path
if _, ok := baseconst.TimeoutExcludedRoutes[url]; !ok {
ctx, cancel = context.WithTimeout(r.Context(), baseconst.ContextTimeout)
defer cancel()
}
r = r.WithContext(ctx)
next.ServeHTTP(w, r)
})
}
// initListeners initialises listeners of the server
func (s *Server) initListeners() error {
// listen on public port

View File

@@ -1,18 +1,10 @@
package dao
import (
"fmt"
"github.com/jmoiron/sqlx"
"go.signoz.io/signoz/ee/query-service/dao/sqlite"
)
func InitDao(engine, path string) (ModelDao, error) {
switch engine {
case "sqlite":
return sqlite.InitDB(path)
default:
return nil, fmt.Errorf("qsdb type: %s is not supported in query service", engine)
}
func InitDao(inputDB *sqlx.DB) (ModelDao, error) {
return sqlite.InitDB(inputDB)
}

View File

@@ -65,8 +65,8 @@ func columnExists(db *sqlx.DB, tableName, columnName string) bool {
}
// InitDB creates and extends base model DB repository
func InitDB(dataSourceName string) (*modelDao, error) {
dao, err := basedsql.InitDB(dataSourceName)
func InitDB(inputDB *sqlx.DB) (*modelDao, error) {
dao, err := basedsql.InitDB(inputDB)
if err != nil {
return nil, err
}

View File

@@ -28,13 +28,8 @@ func NewLicenseRepo(db *sqlx.DB) Repo {
}
}
func (r *Repo) InitDB(engine string) error {
switch engine {
case "sqlite3", "sqlite":
return sqlite.InitDB(r.db)
default:
return fmt.Errorf("unsupported db")
}
func (r *Repo) InitDB(inputDB *sqlx.DB) error {
return sqlite.InitDB(inputDB)
}
func (r *Repo) GetLicenses(ctx context.Context) ([]model.License, error) {

View File

@@ -51,13 +51,13 @@ type Manager struct {
activeFeatures basemodel.FeatureSet
}
func StartManager(dbType string, db *sqlx.DB, features ...basemodel.Feature) (*Manager, error) {
func StartManager(db *sqlx.DB, features ...basemodel.Feature) (*Manager, error) {
if LM != nil {
return LM, nil
}
repo := NewLicenseRepo(db)
err := repo.InitDB(dbType)
err := repo.InitDB(db)
if err != nil {
return nil, fmt.Errorf("failed to initiate license repo: %v", err)

View File

@@ -10,12 +10,12 @@ import (
"syscall"
"time"
"go.opentelemetry.io/collector/confmap"
"go.opentelemetry.io/otel/sdk/resource"
semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
"go.signoz.io/signoz/ee/query-service/app"
signozconfig "go.signoz.io/signoz/pkg/config"
"go.signoz.io/signoz/pkg/confmap/provider/signozenvprovider"
"go.signoz.io/signoz/pkg/config"
"go.signoz.io/signoz/pkg/config/envprovider"
"go.signoz.io/signoz/pkg/config/fileprovider"
"go.signoz.io/signoz/pkg/query-service/auth"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
"go.signoz.io/signoz/pkg/query-service/migrate"
@@ -99,7 +99,7 @@ func main() {
var useLogsNewSchema bool
var useTraceNewSchema bool
var cacheConfigPath, fluxInterval string
var cacheConfigPath, fluxInterval, fluxIntervalForTraceDetail string
var enableQueryServiceLogOTLPExport bool
var preferSpanMetrics bool
@@ -108,7 +108,6 @@ func main() {
var dialTimeout time.Duration
var gatewayUrl string
var useLicensesV3 bool
var skipWebFrontend bool
flag.BoolVar(&useLogsNewSchema, "use-logs-new-schema", false, "use logs_v2 schema for logs")
flag.BoolVar(&useTraceNewSchema, "use-trace-new-schema", false, "use new schema for traces")
@@ -122,11 +121,11 @@ func main() {
flag.StringVar(&ruleRepoURL, "rules.repo-url", baseconst.AlertHelpPage, "(host address used to build rule link in alert messages)")
flag.StringVar(&cacheConfigPath, "experimental.cache-config", "", "(cache config to use)")
flag.StringVar(&fluxInterval, "flux-interval", "5m", "(the interval to exclude data from being cached to avoid incorrect cache for data in motion)")
flag.StringVar(&fluxIntervalForTraceDetail, "flux-interval-trace-detail", "2m", "(the interval to exclude data from being cached to avoid incorrect cache for trace data in motion)")
flag.BoolVar(&enableQueryServiceLogOTLPExport, "enable.query.service.log.otlp.export", false, "(enable query service log otlp export)")
flag.StringVar(&cluster, "cluster", "cluster", "(cluster name - defaults to 'cluster')")
flag.StringVar(&gatewayUrl, "gateway-url", "", "(url to the gateway)")
flag.BoolVar(&useLicensesV3, "use-licenses-v3", false, "use licenses_v3 schema for licenses")
flag.BoolVar(&skipWebFrontend, "skip-web-frontend", false, "skip web frontend")
flag.Parse()
loggerMgr := initZapLog(enableQueryServiceLogOTLPExport)
@@ -136,42 +135,42 @@ func main() {
version.PrintVersion()
config, err := signozconfig.New(context.Background(), signozconfig.ProviderSettings{
ResolverSettings: confmap.ResolverSettings{
URIs: []string{"signozenv:"},
ProviderFactories: []confmap.ProviderFactory{
signozenvprovider.NewFactory(),
},
config, err := signoz.NewConfig(context.Background(), config.ResolverConfig{
Uris: []string{"env:"},
ProviderFactories: []config.ProviderFactory{
envprovider.NewFactory(),
fileprovider.NewFactory(),
},
})
if err != nil {
zap.L().Fatal("Failed to create config", zap.Error(err))
}
signoz, err := signoz.New(config, skipWebFrontend)
signoz, err := signoz.New(context.Background(), config, signoz.NewProviderConfig())
if err != nil {
zap.L().Fatal("Failed to create signoz struct", zap.Error(err))
}
serverOptions := &app.ServerOptions{
SigNoz: signoz,
HTTPHostPort: baseconst.HTTPHostPort,
PromConfigPath: promConfigPath,
SkipTopLvlOpsPath: skipTopLvlOpsPath,
PreferSpanMetrics: preferSpanMetrics,
PrivateHostPort: baseconst.PrivateHostPort,
DisableRules: disableRules,
RuleRepoURL: ruleRepoURL,
MaxIdleConns: maxIdleConns,
MaxOpenConns: maxOpenConns,
DialTimeout: dialTimeout,
CacheConfigPath: cacheConfigPath,
FluxInterval: fluxInterval,
Cluster: cluster,
GatewayUrl: gatewayUrl,
UseLogsNewSchema: useLogsNewSchema,
UseTraceNewSchema: useTraceNewSchema,
SkipWebFrontend: skipWebFrontend,
Config: config,
SigNoz: signoz,
HTTPHostPort: baseconst.HTTPHostPort,
PromConfigPath: promConfigPath,
SkipTopLvlOpsPath: skipTopLvlOpsPath,
PreferSpanMetrics: preferSpanMetrics,
PrivateHostPort: baseconst.PrivateHostPort,
DisableRules: disableRules,
RuleRepoURL: ruleRepoURL,
MaxIdleConns: maxIdleConns,
MaxOpenConns: maxOpenConns,
DialTimeout: dialTimeout,
CacheConfigPath: cacheConfigPath,
FluxInterval: fluxInterval,
FluxIntervalForTraceDetail: fluxIntervalForTraceDetail,
Cluster: cluster,
GatewayUrl: gatewayUrl,
UseLogsNewSchema: useLogsNewSchema,
UseTraceNewSchema: useTraceNewSchema,
}
// Read the jwt secret key
@@ -183,7 +182,7 @@ func main() {
zap.L().Info("JWT secret key set successfully.")
}
if err := migrate.Migrate(baseconst.RELATIONAL_DATASOURCE_PATH); err != nil {
if err := migrate.Migrate(signoz.SQLStore.SQLxDB()); err != nil {
zap.L().Error("Failed to migrate", zap.Error(err))
} else {
zap.L().Info("Migration successful")

View File

@@ -46,7 +46,7 @@ type Manager struct {
tenantID string
}
func New(dbType string, modelDao dao.ModelDao, licenseRepo *license.Repo, clickhouseConn clickhouse.Conn) (*Manager, error) {
func New(modelDao dao.ModelDao, licenseRepo *license.Repo, clickhouseConn clickhouse.Conn) (*Manager, error) {
hostNameRegex := regexp.MustCompile(`tcp://(?P<hostname>.*):`)
hostNameRegexMatches := hostNameRegex.FindStringSubmatch(os.Getenv("ClickHouseUrl"))

View File

@@ -6,7 +6,7 @@
**Building image**
`docker-compose up`
`docker compose up`
/ This will also run
or
@@ -19,7 +19,7 @@ docker tag signoz/frontend:latest 7296823551/signoz:latest
```
```
docker-compose up
docker compose up
```
## Without Docker

View File

@@ -0,0 +1,7 @@
version: "3.9"
services:
web:
build: .
image: signoz/frontend:latest
ports:
- "3301:3301"

View File

@@ -1,7 +0,0 @@
version: "3.9"
services:
web:
build: .
image: signoz/frontend:latest
ports:
- "3301:3301"

View File

@@ -44,6 +44,7 @@
"@sentry/webpack-plugin": "2.22.6",
"@signozhq/design-tokens": "1.1.4",
"@tanstack/react-table": "8.20.6",
"@tanstack/react-virtual": "3.11.2",
"@uiw/react-md-editor": "3.23.5",
"@visx/group": "3.3.0",
"@visx/shape": "3.5.0",

View File

@@ -0,0 +1,22 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4.02102 14.418C4.02102 14.418 3.82659 14.6658 3.32106 14.6658C2.81553 14.6658 2.62109 14.418 2.62109 14.418V5.74512H4.02102V14.418Z" fill="#9E9E9E"/>
<path d="M4.02102 14.4179C4.02102 14.4179 3.82659 14.6657 3.32106 14.6657C2.81553 14.6657 2.62109 14.4179 2.62109 14.4179V11.077C2.62109 11.077 2.76331 10.8059 3.28328 10.8059C3.80325 10.8059 4.02102 11.077 4.02102 11.077V14.4179Z" fill="#BDBDBD"/>
<path d="M13.3765 14.418C13.3765 14.418 13.1821 14.6658 12.6765 14.6658C12.171 14.6658 11.9766 14.418 11.9766 14.418V5.74512H13.3765V14.418Z" fill="#9E9E9E"/>
<path d="M13.3765 14.4179C13.3765 14.4179 13.1821 14.6657 12.6765 14.6657C12.171 14.6657 11.9766 14.4179 11.9766 14.4179V11.077C11.9766 11.077 12.1188 10.8059 12.6387 10.8059C13.1587 10.8059 13.3765 11.077 13.3765 11.077V14.4179Z" fill="#BDBDBD"/>
<path d="M14.3623 9.98379H1.63633C1.46856 9.98379 1.33301 9.84825 1.33301 9.68048V5.79624C1.33301 5.62847 1.46856 5.49292 1.63633 5.49292H14.3623C14.5301 5.49292 14.6656 5.62847 14.6656 5.79624V9.68048C14.6656 9.84825 14.5301 9.98379 14.3623 9.98379Z" fill="#9E9E9E"/>
<path d="M14.3623 5.71509H1.63633C1.46856 5.71509 1.33301 5.85064 1.33301 6.01841V9.90264C1.33301 10.0704 1.46856 10.206 1.63633 10.206H14.3623C14.5301 10.206 14.6656 10.0704 14.6656 9.90264V6.01841C14.6656 5.85064 14.5301 5.71509 14.3623 5.71509Z" fill="#FFD600"/>
<path d="M2.82515 5.71509L1.33301 7.20723V9.40712L5.02504 5.71509H2.82515Z" fill="#212121"/>
<path d="M7.01027 5.71509L2.52051 10.206H4.72039L9.21016 5.71509H7.01027Z" fill="#212121"/>
<path d="M11.1969 5.71509L6.70605 10.206H8.90594L13.3957 5.71509H11.1969Z" fill="#212121"/>
<path d="M14.6658 6.43188L10.8916 10.2061H13.0915L14.6658 8.63177V6.43188Z" fill="#212121"/>
<path d="M13.5322 5.9095H11.8223C11.6623 5.9095 11.5445 5.80506 11.5823 5.6984L11.7012 4.95288H13.6511L13.77 5.6984C13.81 5.80617 13.6922 5.9095 13.5322 5.9095Z" fill="#E2A610"/>
<path d="M12.6768 4.93514C13.4567 4.93514 14.0889 4.3029 14.0889 3.52299C14.0889 2.74308 13.4567 2.11084 12.6768 2.11084C11.8969 2.11084 11.2646 2.74308 11.2646 3.52299C11.2646 4.3029 11.8969 4.93514 12.6768 4.93514Z" fill="#FFCA28"/>
<path d="M12.6761 4.56629C13.2523 4.56629 13.7194 4.0992 13.7194 3.52301C13.7194 2.94683 13.2523 2.47974 12.6761 2.47974C12.0999 2.47974 11.6328 2.94683 11.6328 3.52301C11.6328 4.0992 12.0999 4.56629 12.6761 4.56629Z" fill="#FF5722"/>
<path d="M13.652 4.96417H11.7021C11.7021 4.96417 11.7088 4.65308 12.2666 4.65308C12.8243 4.65308 13.0932 4.65308 13.0932 4.65308C13.6832 4.65308 13.652 4.96417 13.652 4.96417Z" fill="#FFCA28"/>
<path d="M12.7998 3.02315L13.0665 2.67872C13.0787 2.66317 13.1031 2.67539 13.0987 2.69428L12.9954 3.11759C12.9709 3.21647 13.0465 3.31202 13.1487 3.3098L13.5842 3.30313C13.6042 3.30313 13.6098 3.3298 13.592 3.33869L13.1965 3.52201C13.1042 3.56534 13.0776 3.68311 13.142 3.762L13.4187 4.09865C13.4309 4.1142 13.4142 4.13531 13.3965 4.12642L13.0065 3.93199C12.9154 3.88644 12.8065 3.93866 12.7843 4.03865L12.6932 4.46529C12.6887 4.48529 12.6609 4.48529 12.6576 4.46529L12.5665 4.03865C12.5454 3.93866 12.4354 3.88644 12.3443 3.93199L11.9543 4.12642C11.9365 4.13531 11.9188 4.11309 11.9321 4.09865L12.2087 3.762C12.2732 3.68311 12.2465 3.56534 12.1543 3.52201L11.7588 3.33869C11.741 3.3298 11.7465 3.30313 11.7665 3.30313L12.2021 3.3098C12.3043 3.31091 12.3798 3.21647 12.3554 3.11759L12.2521 2.69428C12.2476 2.67539 12.2721 2.66317 12.2843 2.67872L12.5509 3.02315C12.6165 3.10314 12.7376 3.10314 12.7998 3.02315Z" fill="#FFD5CA"/>
<path d="M4.17575 5.9095H2.46584C2.30584 5.9095 2.18807 5.80506 2.22585 5.6984L2.34473 4.95288H4.29463L4.41351 5.6984C4.45351 5.80617 4.33574 5.9095 4.17575 5.9095Z" fill="#E2A610"/>
<path d="M3.3223 4.93514C4.10221 4.93514 4.73445 4.3029 4.73445 3.52299C4.73445 2.74308 4.10221 2.11084 3.3223 2.11084C2.5424 2.11084 1.91016 2.74308 1.91016 3.52299C1.91016 4.3029 2.5424 4.93514 3.3223 4.93514Z" fill="#FFCA28"/>
<path d="M3.3216 4.56629C3.89779 4.56629 4.36488 4.0992 4.36488 3.52301C4.36488 2.94683 3.89779 2.47974 3.3216 2.47974C2.74541 2.47974 2.27832 2.94683 2.27832 3.52301C2.27832 4.0992 2.74541 4.56629 3.3216 4.56629Z" fill="#FF5722"/>
<path d="M4.29658 4.96417H2.34668C2.34668 4.96417 2.35335 4.65308 2.91109 4.65308C3.46884 4.65308 3.73772 4.65308 3.73772 4.65308C4.32769 4.65308 4.29658 4.96417 4.29658 4.96417Z" fill="#FFCA28"/>
<path d="M3.44337 3.02315L3.71002 2.67872C3.72224 2.66317 3.74669 2.67539 3.74224 2.69428L3.63892 3.11759C3.61447 3.21647 3.69002 3.31202 3.79224 3.3098L4.22777 3.30313C4.24777 3.30313 4.25333 3.3298 4.23555 3.33869L3.84002 3.52201C3.7478 3.56534 3.72113 3.68311 3.78557 3.762L4.06223 4.09865C4.07445 4.1142 4.05778 4.13531 4.04001 4.12642L3.65003 3.93199C3.55892 3.88644 3.45004 3.93866 3.42782 4.03865L3.33671 4.46529C3.33227 4.48529 3.30449 4.48529 3.30116 4.46529L3.21005 4.03865C3.18894 3.93866 3.07894 3.88644 2.98784 3.93199L2.59786 4.12642C2.58008 4.13531 2.56231 4.11309 2.57564 4.09865L2.85229 3.762C2.91673 3.68311 2.89007 3.56534 2.79785 3.52201L2.40231 3.33869C2.38454 3.3298 2.39009 3.30313 2.41009 3.30313L2.84562 3.3098C2.94784 3.31091 3.02339 3.21647 2.99895 3.11759L2.89562 2.69428C2.89118 2.67539 2.91562 2.66317 2.92784 2.67872L3.19449 3.02315C3.26005 3.10314 3.38115 3.10314 3.44337 3.02315Z" fill="#FFD5CA"/>
</svg>


View File

@@ -0,0 +1 @@
<svg width="32" height="32" fill="none" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M17.947 6.906h-6.62V6.24h6.62v.666zM17.947 10.293h-6.62v-.667h6.62v.667z" fill="#616161"/><path d="M16.174 29.403h-2.553V3.442s.238-.553 1.278-.553 1.278.553 1.278.553v25.96h-.003z" fill="#9E9E9E"/><path d="M15.519 2.971v20.974H13.62v1.91c.322.001.613.04.876.097a1.3 1.3 0 011.024 1.269v2.182h.655V3.442c-.002 0-.14-.318-.657-.471z" fill="#757575"/><path d="M22.362 2.738a5.382 5.382 0 00-5.381 5.401 5.365 5.365 0 005.381 5.382 5.378 5.378 0 005.382-5.382c0-2.975-2.406-5.401-5.382-5.401z" fill="#2196F3"/><path d="M24.713 4.749c-.338-.085-1.011-.174-2.349-.174-1.338 0-2.01.087-2.349.174-.2.05-.869.328-.869 1.077v4.618c0 .253.205.46.46.46h.17v.51c0 .15.12.27.268.27h.545c.149 0 .269-.12.269-.27v-.51h3.008v.51c0 .15.12.27.27.27h.544c.148 0 .268-.12.268-.27v-.51h.174a.46.46 0 00.46-.46V5.826c0-.717-.67-1.029-.87-1.077zm-3.802.366c0-.113.09-.204.204-.204h2.494c.113 0 .204.09.204.204v.362a.204.204 0 01-.204.205h-2.494a.204.204 0 01-.204-.205v-.362zm.042 4.864a.139.139 0 01-.138.138h-.549a.469.469 0 01-.468-.469v-.246c0-.076.062-.138.137-.138h.55c.257 0 .468.209.468.469v.246zm3.973-.33a.469.469 0 01-.469.468h-.549a.138.138 0 01-.137-.138v-.246c0-.258.209-.47.468-.47h.55c.075 0 .137.063.137.139v.246zm.125-2.007c0 .297-.645.817-2.69.817-2.046 0-2.688-.482-2.688-.817V6.277c0-.089.089-.31.311-.31h4.791c.222 0 .276.224.276.31v1.365z" fill="#fff"/><path d="M18.86 24.652h-7.926a.506.506 0 01-.506-.507V14.99c0-.28.226-.506.506-.506h7.924c.28 0 .507.226.507.506v9.155c0 .28-.227.507-.504.507z" fill="#F5F5F5"/><path opacity=".8" d="M18.005 23.139h-6.216l-.018-7.235h6.218l.015 7.235z" fill="#82AEC0"/><path fill-rule="evenodd" clip-rule="evenodd" d="M14.66 23.234v-7.33h.444v7.33h-.445z" fill="#F5F5F5"/><path d="M14.658 15.904h-2.886v.604h2.886v-.604z" fill="#616161"/><path fill-rule="evenodd" clip-rule="evenodd" d="M18.03 21.879h-6.263v-.223h6.264v.223zM18.03 20.412h-6.263v-.222h6.264v.222zM17.989 18.946H11.77v-.223h6.218v.223zM17.99 17.482h-6.223v-.223h6.224v.223z" fill="#F5F5F5"/><path d="M17.988 18.534h-2.886v.605h2.886v-.605zM14.672 19.999h-2.878v.604h2.878V20z" fill="#616161"/><path fill-rule="evenodd" clip-rule="evenodd" d="M10.935 14.817a.173.173 0 00-.174.173v9.155c0 .096.078.174.174.174h7.926a.173.173 0 00.171-.174V14.99a.173.173 0 00-.173-.173h-7.924zm-.84.173a.84.84 0 01.84-.84h7.924a.84.84 0 01.84.84v9.155a.84.84 0 01-.838.84h-7.926a.84.84 0 01-.84-.84V14.99z" fill="#9E9E9E"/><path d="M10.968 24.323c-.344.031-.344-.195-.344-.258V15.1c0-.25.204-.455.456-.455h7.693c.208 0 .304.127.262.384 0 0-.027-.164-.227-.164h-7.726a.237.237 0 00-.236.235v8.969c0 .209.122.255.122.255z" fill="#757575"/><path d="M12.268 4.933H4.695v6.577h7.573V4.933z" fill="#FFCA28"/><path d="M11.845 4.933c.233 0 .422.189.422.422v5.733a.422.422 0 01-.422.422H5.119a.422.422 0 01-.423-.422V5.355c0-.233.19-.422.423-.422h6.726zm0-.444H5.119a.868.868 0 00-.867.866v5.733c0 .478.389.867.867.867h6.726a.868.868 0 00.866-.867V5.355a.866.866 0 00-.866-.866z" fill="#9E9E9E"/><path fill-rule="evenodd" clip-rule="evenodd" d="M12.27 7.308H4.696v-.444h7.575v.444zM12.27 9.58H4.696v-.445h7.575v.444z" fill="#FFFDE7"/><path d="M7.066 5.664H5.162v.502h1.904v-.502zM6.598 10.27H5.162v.503h1.436v-.502zM11.648 10.27h-1.435v.503h1.435v-.502zM11.648 5.664h-1.435v.502h1.435v-.502zM10.301 7.968h-.826v.502h.826v-.502zM11.647 7.968h-.827v.502h.827v-.502zM7.802 7.968h-2.64v.502h2.64v-.502z" fill="#757575"/></svg>


View File

@@ -43,7 +43,10 @@ export const TraceFilter = Loadable(
);
export const TraceDetail = Loadable(
() => import(/* webpackChunkName: "TraceDetail Page" */ 'pages/TraceDetail'),
() =>
import(
/* webpackChunkName: "TraceDetail Page" */ 'pages/TraceDetailV2/index'
),
);
export const UsageExplorerPage = Loadable(
@@ -229,7 +232,7 @@ export const InstalledIntegrations = Loadable(
),
);
export const MessagingQueues = Loadable(
export const MessagingQueuesMainPage = Loadable(
() =>
import(/* webpackChunkName: "MessagingQueues" */ 'pages/MessagingQueues'),
);
@@ -247,3 +250,17 @@ export const InfrastructureMonitoring = Loadable(
/* webpackChunkName: "InfrastructureMonitoring" */ 'pages/InfrastructureMonitoring'
),
);
export const CeleryTask = Loadable(
() =>
import(
/* webpackChunkName: "CeleryTask" */ 'pages/Celery/CeleryTask/CeleryTask'
),
);
export const CeleryOverview = Loadable(
() =>
import(
/* webpackChunkName: "CeleryOverview" */ 'pages/Celery/CeleryOverview/CeleryOverview'
),
);

View File

@@ -1,4 +1,5 @@
import ROUTES from 'constants/routes';
import MessagingQueues from 'pages/MessagingQueues';
import { RouteProps } from 'react-router-dom';
import {
@@ -27,7 +28,6 @@ import {
LogsExplorer,
LogsIndexToFields,
LogsSaveViews,
MessagingQueues,
MQDetailPage,
MySettings,
NewDashboardPage,
@@ -401,6 +401,20 @@ const routes: AppRoutes[] = [
key: 'MESSAGING_QUEUES',
isPrivate: true,
},
{
path: ROUTES.MESSAGING_QUEUES_CELERY_TASK,
exact: true,
component: MessagingQueues,
key: 'MESSAGING_QUEUES_CELERY_TASK',
isPrivate: true,
},
{
path: ROUTES.MESSAGING_QUEUES_OVERVIEW,
exact: true,
component: MessagingQueues,
key: 'MESSAGING_QUEUES_OVERVIEW',
isPrivate: true,
},
{
path: ROUTES.MESSAGING_QUEUES_DETAIL,
exact: true,

View File

@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import createQueryParams from 'lib/createQueryParams';
@@ -19,7 +19,7 @@ export const getInfraAttributesValues = async ({
SuccessResponse<IAttributeValuesResponse> | ErrorResponse
> => {
try {
const response = await ApiBaseInstance.get(
const response = await axios.get(
`/hosts/attribute_values?${createQueryParams({
dataSource,
attributeKey,

View File

@@ -0,0 +1,64 @@
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { TagFilter } from 'types/api/queryBuilder/queryBuilderData';
export interface K8sClustersListPayload {
filters: TagFilter;
groupBy?: BaseAutocompleteData[];
offset?: number;
limit?: number;
orderBy?: {
columnName: string;
order: 'asc' | 'desc';
};
}
export interface K8sClustersData {
clusterUID: string;
cpuUsage: number;
cpuAllocatable: number;
memoryUsage: number;
memoryAllocatable: number;
meta: {
k8s_cluster_name: string;
k8s_cluster_uid: string;
};
}
export interface K8sClustersListResponse {
status: string;
data: {
type: string;
records: K8sClustersData[];
groups: null;
total: number;
sentAnyHostMetricsData: boolean;
isSendingK8SAgentMetrics: boolean;
};
}
export const getK8sClustersList = async (
props: K8sClustersListPayload,
signal?: AbortSignal,
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sClustersListResponse> | ErrorResponse> => {
try {
const response = await axios.post('/clusters/list', props, {
signal,
headers,
});
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
params: props,
};
} catch (error) {
return ErrorResponseHandler(error as AxiosError);
}
};
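
The clusters helper above, like the sibling list helpers that follow, resolves to either a typed SuccessResponse or an ErrorResponse rather than throwing. A minimal usage sketch in TypeScript (the import path, the empty TagFilter shape, and the orderBy column are illustrative assumptions):

import {
	getK8sClustersList,
	K8sClustersListPayload,
} from 'api/infraMonitoring/getK8sClustersList'; // import path assumed

async function logTopClustersByCpu(): Promise<void> {
	// Empty filter set; TagFilter is assumed to have the shape { items: [], op: 'AND' }.
	const payload: K8sClustersListPayload = {
		filters: { items: [], op: 'AND' },
		orderBy: { columnName: 'cpuUsage', order: 'desc' },
		limit: 10,
	};

	const response = await getK8sClustersList(payload);

	if (response.statusCode === 200 && response.payload) {
		response.payload.data.records.forEach((cluster) => {
			console.log(
				cluster.meta.k8s_cluster_name,
				cluster.cpuUsage,
				cluster.memoryUsage,
			);
		});
	} else {
		console.error('clusters/list failed:', response.error);
	}
}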

View File

@@ -0,0 +1,70 @@
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { TagFilter } from 'types/api/queryBuilder/queryBuilderData';
export interface K8sDaemonSetsListPayload {
filters: TagFilter;
groupBy?: BaseAutocompleteData[];
offset?: number;
limit?: number;
orderBy?: {
columnName: string;
order: 'asc' | 'desc';
};
}
export interface K8sDaemonSetsData {
daemonSetName: string;
cpuUsage: number;
memoryUsage: number;
cpuRequest: number;
memoryRequest: number;
cpuLimit: number;
memoryLimit: number;
restarts: number;
desiredNodes: number;
availableNodes: number;
meta: {
k8s_cluster_name: string;
k8s_daemonset_name: string;
k8s_namespace_name: string;
};
}
export interface K8sDaemonSetsListResponse {
status: string;
data: {
type: string;
records: K8sDaemonSetsData[];
groups: null;
total: number;
sentAnyHostMetricsData: boolean;
isSendingK8SAgentMetrics: boolean;
};
}
export const getK8sDaemonSetsList = async (
props: K8sDaemonSetsListPayload,
signal?: AbortSignal,
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sDaemonSetsListResponse> | ErrorResponse> => {
try {
const response = await axios.post('/daemonsets/list', props, {
signal,
headers,
});
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
params: props,
};
} catch (error) {
return ErrorResponseHandler(error as AxiosError);
}
};
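
Each of these list helpers also accepts an optional AbortSignal, so an in-flight request can be dropped when filters change or a component unmounts. A sketch under the same assumptions (import path and TagFilter shape):

import { getK8sDaemonSetsList } from 'api/infraMonitoring/getK8sDaemonSetsList'; // import path assumed

async function fetchDaemonSetsWithTimeout(): Promise<void> {
	// One controller per in-flight request; abort() cancels the underlying axios call.
	const controller = new AbortController();
	const timer = setTimeout(() => controller.abort(), 10_000); // illustrative 10s budget

	const response = await getK8sDaemonSetsList(
		{ filters: { items: [], op: 'AND' } }, // empty TagFilter assumed
		controller.signal,
	);
	clearTimeout(timer);

	if (response.statusCode === 200 && response.payload) {
		console.log('daemonsets reported:', response.payload.data.total);
	} else {
		// Aborted or failed calls are expected to surface as an ErrorResponse here.
		console.warn('daemonsets/list did not complete:', response.error);
	}
}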

View File

@@ -0,0 +1,70 @@
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { TagFilter } from 'types/api/queryBuilder/queryBuilderData';
export interface K8sDeploymentsListPayload {
filters: TagFilter;
groupBy?: BaseAutocompleteData[];
offset?: number;
limit?: number;
orderBy?: {
columnName: string;
order: 'asc' | 'desc';
};
}
export interface K8sDeploymentsData {
deploymentName: string;
cpuUsage: number;
memoryUsage: number;
desiredPods: number;
availablePods: number;
cpuRequest: number;
memoryRequest: number;
cpuLimit: number;
memoryLimit: number;
restarts: number;
meta: {
k8s_cluster_name: string;
k8s_deployment_name: string;
k8s_namespace_name: string;
};
}
export interface K8sDeploymentsListResponse {
status: string;
data: {
type: string;
records: K8sDeploymentsData[];
groups: null;
total: number;
sentAnyHostMetricsData: boolean;
isSendingK8SAgentMetrics: boolean;
};
}
export const getK8sDeploymentsList = async (
props: K8sDeploymentsListPayload,
signal?: AbortSignal,
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sDeploymentsListResponse> | ErrorResponse> => {
try {
const response = await axios.post('/deployments/list', props, {
signal,
headers,
});
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
params: props,
};
} catch (error) {
return ErrorResponseHandler(error as AxiosError);
}
};

View File

@@ -0,0 +1,72 @@
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { TagFilter } from 'types/api/queryBuilder/queryBuilderData';
export interface K8sJobsListPayload {
filters: TagFilter;
groupBy?: BaseAutocompleteData[];
offset?: number;
limit?: number;
orderBy?: {
columnName: string;
order: 'asc' | 'desc';
};
}
export interface K8sJobsData {
jobName: string;
cpuUsage: number;
memoryUsage: number;
cpuRequest: number;
memoryRequest: number;
cpuLimit: number;
memoryLimit: number;
restarts: number;
desiredSuccessfulPods: number;
activePods: number;
failedPods: number;
successfulPods: number;
meta: {
k8s_cluster_name: string;
k8s_job_name: string;
k8s_namespace_name: string;
};
}
export interface K8sJobsListResponse {
status: string;
data: {
type: string;
records: K8sJobsData[];
groups: null;
total: number;
sentAnyHostMetricsData: boolean;
isSendingK8SAgentMetrics: boolean;
};
}
export const getK8sJobsList = async (
props: K8sJobsListPayload,
signal?: AbortSignal,
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sJobsListResponse> | ErrorResponse> => {
try {
const response = await axios.post('/jobs/list', props, {
signal,
headers,
});
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
params: props,
};
} catch (error) {
return ErrorResponseHandler(error as AxiosError);
}
};

View File

@@ -0,0 +1,62 @@
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { TagFilter } from 'types/api/queryBuilder/queryBuilderData';
export interface K8sNamespacesListPayload {
filters: TagFilter;
groupBy?: BaseAutocompleteData[];
offset?: number;
limit?: number;
orderBy?: {
columnName: string;
order: 'asc' | 'desc';
};
}
export interface K8sNamespacesData {
namespaceName: string;
cpuUsage: number;
memoryUsage: number;
meta: {
k8s_cluster_name: string;
k8s_namespace_name: string;
};
}
export interface K8sNamespacesListResponse {
status: string;
data: {
type: string;
records: K8sNamespacesData[];
groups: null;
total: number;
sentAnyHostMetricsData: boolean;
isSendingK8SAgentMetrics: boolean;
};
}
export const getK8sNamespacesList = async (
props: K8sNamespacesListPayload,
signal?: AbortSignal,
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sNamespacesListResponse> | ErrorResponse> => {
try {
const response = await axios.post('/namespaces/list', props, {
signal,
headers,
});
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
params: props,
};
} catch (error) {
return ErrorResponseHandler(error as AxiosError);
}
};

View File

@@ -1,4 +1,4 @@
-import { ApiBaseInstance } from 'api';
+import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
@@ -47,7 +47,7 @@ export const getK8sNodesList = async (
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sNodesListResponse> | ErrorResponse> => {
try {
-const response = await ApiBaseInstance.post('/nodes/list', props, {
+const response = await axios.post('/nodes/list', props, {
signal,
headers,
});

View File

@@ -1,4 +1,4 @@
-import { ApiBaseInstance } from 'api';
+import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
@@ -75,7 +75,7 @@ export const getK8sPodsList = async (
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sPodsListResponse> | ErrorResponse> => {
try {
-const response = await ApiBaseInstance.post('/pods/list', props, {
+const response = await axios.post('/pods/list', props, {
signal,
headers,
});

View File

@@ -0,0 +1,71 @@
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { TagFilter } from 'types/api/queryBuilder/queryBuilderData';
export interface K8sVolumesListPayload {
filters: TagFilter;
groupBy?: BaseAutocompleteData[];
offset?: number;
limit?: number;
orderBy?: {
columnName: string;
order: 'asc' | 'desc';
};
}
export interface K8sVolumesData {
persistentVolumeClaimName: string;
volumeAvailable: number;
volumeCapacity: number;
volumeInodes: number;
volumeInodesFree: number;
volumeInodesUsed: number;
volumeUsage: number;
meta: {
k8s_cluster_name: string;
k8s_namespace_name: string;
k8s_node_name: string;
k8s_persistentvolumeclaim_name: string;
k8s_pod_name: string;
k8s_pod_uid: string;
k8s_statefulset_name: string;
};
}
export interface K8sVolumesListResponse {
status: string;
data: {
type: string;
records: K8sVolumesData[];
groups: null;
total: number;
sentAnyHostMetricsData: boolean;
isSendingK8SAgentMetrics: boolean;
};
}
export const getK8sVolumesList = async (
props: K8sVolumesListPayload,
signal?: AbortSignal,
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sVolumesListResponse> | ErrorResponse> => {
try {
const response = await axios.post('/pvcs/list', props, {
signal,
headers,
});
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
params: props,
};
} catch (error) {
return ErrorResponseHandler(error as AxiosError);
}
};

View File

@@ -0,0 +1,69 @@
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { TagFilter } from 'types/api/queryBuilder/queryBuilderData';
export interface K8sStatefulSetsListPayload {
filters: TagFilter;
groupBy?: BaseAutocompleteData[];
offset?: number;
limit?: number;
orderBy?: {
columnName: string;
order: 'asc' | 'desc';
};
}
export interface K8sStatefulSetsData {
statefulSetName: string;
cpuUsage: number;
memoryUsage: number;
desiredPods: number;
availablePods: number;
cpuRequest: number;
memoryRequest: number;
cpuLimit: number;
memoryLimit: number;
restarts: number;
meta: {
k8s_statefulset_name: string;
k8s_namespace_name: string;
};
}
export interface K8sStatefulSetsListResponse {
status: string;
data: {
type: string;
records: K8sStatefulSetsData[];
groups: null;
total: number;
sentAnyHostMetricsData: boolean;
isSendingK8SAgentMetrics: boolean;
};
}
export const getK8sStatefulSetsList = async (
props: K8sStatefulSetsListPayload,
signal?: AbortSignal,
headers?: Record<string, string>,
): Promise<SuccessResponse<K8sStatefulSetsListResponse> | ErrorResponse> => {
try {
const response = await axios.post('/statefulsets/list', props, {
signal,
headers,
});
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
params: props,
};
} catch (error) {
return ErrorResponseHandler(error as AxiosError);
}
};

View File

@@ -0,0 +1,53 @@
import axios from 'api';
import { ErrorResponse, SuccessResponse } from 'types/api';
export interface QueueOverviewPayload {
start: number;
end: number;
filters: {
items: {
key: {
key: string;
dataType: string;
};
op: string;
value: string[];
}[];
op: 'AND' | 'OR';
};
}
export interface QueueOverviewResponse {
status: string;
data: {
timestamp?: string;
data: {
destination?: string;
error_percentage?: number;
kind_string?: string;
messaging_system?: string;
p95_latency?: number;
service_name?: string;
span_name?: string;
throughput?: number;
}[];
}[];
}
export const getQueueOverview = async (
props: QueueOverviewPayload,
): Promise<SuccessResponse<QueueOverviewResponse['data']> | ErrorResponse> => {
const { start, end, filters } = props;
const response = await axios.post(`messaging-queues/queue-overview`, {
start,
end,
filters,
});
return {
statusCode: 200,
error: null,
message: response.data.status,
payload: response.data.data,
};
};
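A short, illustrative sketch of calling this endpoint directly follows (the Celery overview table later in this diff imports it from 'api/messagingQueues/celery/getQueueOverview' and drives it through react-query's useMutation instead). The time range assumes the nanosecond epoch values used by the app's global time state, and the single filter is an example only. Note that, unlike the infra-monitoring clients above, errors are not wrapped by ErrorResponseHandler, so a rejected promise propagates to the caller.

// Illustrative only: fetch a five-minute overview filtered to one messaging system.
import { getQueueOverview } from 'api/messagingQueues/celery/getQueueOverview';

export async function loadCeleryOverview(): Promise<void> {
	// start/end mirror globalTime minTime/maxTime; nanosecond precision assumed
	const end = Date.now() * 1e6;
	const start = end - 5 * 60 * 1e9;
	const overview = await getQueueOverview({
		start,
		end,
		filters: {
			items: [{ key: { key: 'queue', dataType: 'string' }, op: 'in', value: ['celery'] }],
			op: 'AND',
		},
	});
	// payload is the response's data array ({ timestamp, data: rows } entries)
	console.log(overview.payload);
}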

View File

@@ -0,0 +1,27 @@
import { ApiV2Instance as axios } from 'api';
import { omit } from 'lodash-es';
import { ErrorResponse, SuccessResponse } from 'types/api';
import {
GetTraceFlamegraphPayloadProps,
GetTraceFlamegraphSuccessResponse,
} from 'types/api/trace/getTraceFlamegraph';
const getTraceFlamegraph = async (
props: GetTraceFlamegraphPayloadProps,
): Promise<
SuccessResponse<GetTraceFlamegraphSuccessResponse> | ErrorResponse
> => {
const response = await axios.post<GetTraceFlamegraphSuccessResponse>(
`/traces/flamegraph/${props.traceId}`,
omit(props, 'traceId'),
);
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
};
};
export default getTraceFlamegraph;

View File

@@ -0,0 +1,35 @@
import { ApiV2Instance as axios } from 'api';
import { omit } from 'lodash-es';
import { ErrorResponse, SuccessResponse } from 'types/api';
import {
GetTraceV2PayloadProps,
GetTraceV2SuccessResponse,
} from 'types/api/trace/getTraceV2';
const getTraceV2 = async (
props: GetTraceV2PayloadProps,
): Promise<SuccessResponse<GetTraceV2SuccessResponse> | ErrorResponse> => {
let uncollapsedSpans = [...props.uncollapsedSpans];
if (!props.isSelectedSpanIDUnCollapsed) {
uncollapsedSpans = uncollapsedSpans.filter(
(node) => node !== props.selectedSpanId,
);
}
const postData: GetTraceV2PayloadProps = {
...props,
uncollapsedSpans,
};
const response = await axios.post<GetTraceV2SuccessResponse>(
`/traces/waterfall/${props.traceId}`,
omit(postData, 'traceId'),
);
return {
statusCode: 200,
error: null,
message: 'Success',
payload: response.data,
};
};
export default getTraceV2;
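For context, a minimal sketch of calling the waterfall client (illustrative values; GetTraceV2PayloadProps is not shown in this diff, so the cast below stands in for any additional fields it requires). The detail worth noting is the pre-processing above: when the selected span is flagged as collapsed, its ID is stripped from uncollapsedSpans before the request is posted.

// Illustrative only; the trace and span IDs are made up, and the cast covers
// any GetTraceV2PayloadProps fields not visible in this diff.
import { GetTraceV2PayloadProps } from 'types/api/trace/getTraceV2';
import getTraceV2 from 'api/trace/getTraceV2'; // import path assumed

async function loadWaterfall(): Promise<void> {
	const response = await getTraceV2({
		traceId: 'abc123',
		selectedSpanId: 'span-7',
		// 'span-7' is removed from this list before the POST because the
		// selected span is marked as collapsed below
		uncollapsedSpans: ['span-1', 'span-7'],
		isSelectedSpanIDUnCollapsed: false,
	} as GetTraceV2PayloadProps);
	console.log(response.payload);
}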

View File

@@ -0,0 +1,90 @@
interface EmptyQuickFilterIconProps {
width?: number;
height?: number;
className?: string;
}
function EmptyQuickFilterIcon({
width = 32,
height = 32,
className,
}: EmptyQuickFilterIconProps): JSX.Element {
return (
<svg
width={width}
height={height}
viewBox="0 0 32 32"
fill="none"
xmlns="http://www.w3.org/2000/svg"
className={className}
>
<path
d="M17.948 6.906h-6.62V6.24h6.62v.667ZM17.948 10.293h-6.62v-.667h6.62v.667Z"
fill="#616161"
/>
<path
d="M16.175 29.403H13.62V3.442s.238-.554 1.278-.554 1.278.554 1.278.554v25.96h-.002Z"
fill="#9E9E9E"
/>
<path
d="M15.52 2.971v20.974H13.62v1.91c.323.001.614.04.876.097a1.3 1.3 0 0 1 1.024 1.269v2.182h.656V3.442c-.002 0-.14-.318-.658-.471Z"
fill="#757575"
/>
<path
d="M22.363 2.737a5.382 5.382 0 0 0-5.382 5.402 5.365 5.365 0 0 0 5.382 5.382 5.378 5.378 0 0 0 5.382-5.382c0-2.975-2.407-5.402-5.382-5.402Z"
fill="#2196F3"
/>
<path
d="M24.714 4.749c-.338-.085-1.01-.174-2.349-.174-1.337 0-2.01.087-2.348.174-.2.05-.87.328-.87 1.077v4.618c0 .253.205.46.46.46h.17v.51c0 .15.12.27.268.27h.545c.149 0 .269-.12.269-.27v-.51h3.008v.51c0 .15.12.27.27.27h.544c.149 0 .269-.12.269-.27v-.51h.173a.46.46 0 0 0 .46-.46V5.826c0-.717-.669-1.029-.869-1.077Zm-3.802.366c0-.113.091-.204.205-.204h2.493c.113 0 .204.09.204.204v.362a.204.204 0 0 1-.204.205h-2.493a.204.204 0 0 1-.205-.205v-.362Zm.042 4.864a.138.138 0 0 1-.137.138h-.55a.469.469 0 0 1-.468-.469v-.246c0-.076.062-.138.138-.138h.548c.258 0 .47.209.47.469v.246Zm3.973-.33a.469.469 0 0 1-.468.468h-.55a.138.138 0 0 1-.137-.138v-.246c0-.258.209-.47.469-.47h.549c.075 0 .137.063.137.139v.246Zm.125-2.007c0 .297-.645.817-2.689.817-2.046 0-2.689-.482-2.689-.817V6.277c0-.089.09-.31.311-.31h4.791c.223 0 .276.224.276.31v1.365Z"
fill="#fff"
/>
<path
d="M18.861 24.652h-7.926a.506.506 0 0 1-.507-.507V14.99c0-.28.227-.507.507-.507h7.924c.28 0 .507.227.507.507v9.155c0 .28-.227.507-.505.507Z"
fill="#F5F5F5"
/>
<path
opacity=".8"
d="M18.006 23.139H11.79l-.017-7.236h6.217l.016 7.236Z"
fill="#82AEC0"
/>
<path d="M14.66 23.234v-7.33h.444v7.33h-.445Z" fill="#F5F5F5" />
<path d="M14.66 15.903h-2.887v.605h2.886v-.605Z" fill="#616161" />
<path
d="M18.03 21.878h-6.264v-.222h6.264v.223ZM18.03 20.412h-6.264v-.222h6.264v.222ZM17.99 18.945h-6.217v-.222h6.217v.222ZM17.99 17.481h-6.224v-.222h6.224v.222Z"
fill="#F5F5F5"
/>
<path
d="M17.988 18.534H15.1v.605h2.887v-.605ZM14.672 19.999h-2.877v.604h2.877V20Z"
fill="#616161"
/>
<path
d="M10.935 14.817a.173.173 0 0 0-.173.173v9.155c0 .096.077.174.173.174h7.926a.173.173 0 0 0 .171-.174V14.99a.173.173 0 0 0-.173-.173h-7.924Zm-.84.173a.84.84 0 0 1 .84-.84h7.924a.84.84 0 0 1 .84.84v9.155a.84.84 0 0 1-.838.84h-7.926a.84.84 0 0 1-.84-.84V14.99Z"
fill="#9E9E9E"
/>
<path
d="M10.968 24.323c-.344.031-.344-.195-.344-.258V15.1c0-.251.204-.455.455-.455h7.693c.21 0 .305.126.262.384 0 0-.026-.165-.226-.165h-7.726a.237.237 0 0 0-.236.236v8.968c0 .21.122.256.122.256Z"
fill="#757575"
/>
<path d="M12.268 4.933H4.695v6.577h7.573V4.933Z" fill="#FFCA28" />
<path
d="M11.846 4.933c.233 0 .422.189.422.422v5.733a.422.422 0 0 1-.422.422H5.12a.422.422 0 0 1-.423-.422V5.355c0-.233.19-.422.423-.422h6.726Zm0-.445H5.12a.868.868 0 0 0-.867.867v5.733c0 .478.389.867.867.867h6.726a.868.868 0 0 0 .867-.867V5.355a.866.866 0 0 0-.867-.867Z"
fill="#9E9E9E"
/>
<path
d="M12.27 7.308H4.696v-.444h7.575v.444ZM12.27 9.579H4.696v-.444h7.575v.444Z"
fill="#FFFDE7"
/>
<path
d="M7.066 5.664H5.162v.502h1.904v-.502ZM6.597 10.27H5.162v.503h1.435v-.502ZM11.648 10.27h-1.435v.503h1.435v-.502ZM11.648 5.664h-1.435v.502h1.435v-.502ZM10.302 7.968h-.827v.502h.827v-.502ZM11.648 7.968h-.826v.502h.826v-.502ZM7.802 7.968h-2.64v.502h2.64v-.502Z"
fill="#757575"
/>
</svg>
);
}
EmptyQuickFilterIcon.defaultProps = {
width: 32,
height: 32,
className: '',
};
export default EmptyQuickFilterIcon;
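A small sketch, purely for context, of where this icon might sit (the wrapper component, class names, and import path are assumptions, not from this diff):

// Illustrative only: the icon rendered inside a quick-filters empty state.
import EmptyQuickFilterIcon from 'assets/CustomIcons/EmptyQuickFilterIcon'; // path assumed

function QuickFiltersEmptyState(): JSX.Element {
	return (
		<div className="quick-filters-empty-state">
			<EmptyQuickFilterIcon />
			<span>No quick filters found for this source</span>
		</div>
	);
}

export default QuickFiltersEmptyState;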

View File

@@ -0,0 +1,25 @@
import { Color } from '@signozhq/design-tokens';
import { useIsDarkMode } from 'hooks/useDarkMode';
function FlamegraphImg(): JSX.Element {
const isDarkMode = useIsDarkMode();
return (
<svg
width="14"
height="14"
viewBox="0 0 24 24"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M8 3c1 3 2.5 3.5 3.5 4.5A5 5 0 0113 11a5 5 0 11-10 0c0-.3 0-.6.1-.9a2 2 0 103.3-2C4 5.5 7 3 8 3zM21 4h-8M20 14.5h-3M20 9.5h-3M21 20H4"
stroke={isDarkMode ? Color.BG_VANILLA_100 : Color.BG_INK_500}
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
/>
</svg>
);
}
export default FlamegraphImg;

View File

@@ -0,0 +1,46 @@
.celery-overview-filters {
display: flex;
justify-content: space-between;
width: 100%;
.celery-filters {
display: flex;
align-items: center;
gap: 8px;
.config-select-option {
width: 100%;
min-width: 160px;
max-width: fit-content;
.ant-select-selector {
display: flex;
min-height: 32px;
align-items: center;
gap: 16px;
min-width: 164px;
border-radius: 2px;
border: 1px solid var(--bg-slate-400);
background: var(--bg-ink-300);
}
}
}
.copy-url-btn {
flex-shrink: 0;
}
}
.lightMode {
.celery-overview-filters {
.celery-filters {
.config-select-option {
.ant-select-selector {
border: 1px solid var(--bg-vanilla-300);
background: var(--bg-vanilla-100);
}
}
}
}
}

View File

@@ -0,0 +1,138 @@
import './CeleryOverviewConfigOptions.styles.scss';
import { Color } from '@signozhq/design-tokens';
import { Button, Row, Select, Spin, Tooltip } from 'antd';
import {
getValuesFromQueryParams,
setQueryParamsFromOptions,
} from 'components/CeleryTask/CeleryUtils';
import { useCeleryFilterOptions } from 'components/CeleryTask/useCeleryFilterOptions';
import { SelectMaxTagPlaceholder } from 'components/MessagingQueues/MQCommon/MQCommon';
import { QueryParams } from 'constants/query';
import useUrlQuery from 'hooks/useUrlQuery';
import { Check, Share2 } from 'lucide-react';
import { useState } from 'react';
import { useHistory, useLocation } from 'react-router-dom';
import { useCopyToClipboard } from 'react-use';
interface SelectOptionConfig {
placeholder: string;
queryParam: QueryParams;
filterType: string | string[];
}
function FilterSelect({
placeholder,
queryParam,
filterType,
}: SelectOptionConfig): JSX.Element {
const { handleSearch, isFetching, options } = useCeleryFilterOptions(
filterType,
);
const urlQuery = useUrlQuery();
const history = useHistory();
const location = useLocation();
return (
<Select
key={filterType.toString()}
placeholder={placeholder}
showSearch
mode="multiple"
options={options}
loading={isFetching}
className="config-select-option"
onSearch={handleSearch}
maxTagCount={4}
allowClear
maxTagPlaceholder={SelectMaxTagPlaceholder}
value={getValuesFromQueryParams(queryParam, urlQuery) || []}
notFoundContent={
isFetching ? (
<span>
<Spin size="small" /> Loading...
</span>
) : (
<span>No {placeholder} found</span>
)
}
onChange={(value): void => {
handleSearch('');
setQueryParamsFromOptions(value, urlQuery, history, location, queryParam);
}}
/>
);
}
function CeleryOverviewConfigOptions(): JSX.Element {
const [isURLCopied, setIsURLCopied] = useState(false);
const [, handleCopyToClipboard] = useCopyToClipboard();
const selectConfigs: SelectOptionConfig[] = [
{
placeholder: 'Service Name',
queryParam: QueryParams.service,
filterType: 'serviceName',
},
{
placeholder: 'Span Name',
queryParam: QueryParams.spanName,
filterType: 'name',
},
{
placeholder: 'Msg System',
queryParam: QueryParams.msgSystem,
filterType: 'messaging.system',
},
{
placeholder: 'Destination',
queryParam: QueryParams.destination,
filterType: ['messaging.destination.name', 'messaging.destination'],
},
{
placeholder: 'Kind',
queryParam: QueryParams.kindString,
filterType: 'kind_string',
},
];
const handleShareURL = (): void => {
handleCopyToClipboard(window.location.href);
setIsURLCopied(true);
setTimeout(() => {
setIsURLCopied(false);
}, 1000);
};
return (
<div className="celery-overview-filters">
<Row className="celery-filters">
{selectConfigs.map((config) => (
<FilterSelect
key={config.filterType.toString()}
placeholder={config.placeholder}
queryParam={config.queryParam}
filterType={config.filterType}
/>
))}
</Row>
<Tooltip title="Share this" arrow={false}>
<Button
className="periscope-btn copy-url-btn"
onClick={handleShareURL}
icon={
isURLCopied ? (
<Check size={14} color={Color.BG_FOREST_500} />
) : (
<Share2 size={14} />
)
}
/>
</Tooltip>
</div>
);
}
export default CeleryOverviewConfigOptions;

View File

@@ -0,0 +1,106 @@
.progress-container {
display: flex;
align-items: center;
}
.progress-bar {
flex: 1;
margin-right: 8px;
}
.clickable-row {
cursor: pointer;
}
.celery-overview-table {
.ant-table {
.ant-table-thead > tr > th {
padding: 12px;
background: var(--bg-ink-500);
border-bottom: none;
color: var(--Vanilla-400, #c0c1c3);
font-family: Inter;
font-size: 11px;
font-style: normal;
font-weight: 600;
line-height: 18px; /* 163.636% */
letter-spacing: 0.44px;
text-transform: uppercase;
&::before {
background-color: transparent;
}
}
.ant-table-cell {
padding: 12px;
font-size: 13px;
line-height: 20px;
color: var(--bg-vanilla-100);
background: var(--bg-ink-500);
}
.progress-container {
.ant-progress-bg {
height: 8px !important;
border-radius: 4px;
}
}
.ant-table-tbody > tr:hover > td {
background: rgba(255, 255, 255, 0.04);
}
.ant-table-cell:first-child {
text-align: justify;
}
.ant-table-cell:nth-child(2) {
padding-left: 16px;
padding-right: 16px;
}
.ant-table-cell:nth-child(n + 3) {
padding-right: 24px;
}
.column-header-right {
text-align: right;
}
.column-header-left {
text-align: left;
}
.ant-table-tbody > tr > td {
border-bottom: none;
}
}
.ant-pagination {
position: relative;
bottom: 0;
width: 100%;
background: var(--bg-ink-500);
padding: 4px;
margin: 0;
// this is to offset intercom icon
padding-right: 72px;
.ant-pagination-item {
border-radius: 4px;
&-active {
background: var(--bg-robin-500);
border-color: var(--bg-robin-500);
a {
color: var(--bg-ink-500) !important;
}
}
}
}
}

View File

@@ -0,0 +1,494 @@
import './CeleryOverviewTable.styles.scss';
import { LoadingOutlined, SearchOutlined } from '@ant-design/icons';
import { Color } from '@signozhq/design-tokens';
import {
Button,
Input,
InputRef,
Progress,
Space,
Spin,
TableColumnsType,
TableColumnType,
Tooltip,
Typography,
} from 'antd';
import { FilterDropdownProps } from 'antd/lib/table/interface';
import {
getQueueOverview,
QueueOverviewResponse,
} from 'api/messagingQueues/celery/getQueueOverview';
import { isNumber } from 'chart.js/helpers';
import { ResizeTable } from 'components/ResizeTable';
import { LOCALSTORAGE } from 'constants/localStorage';
import { QueryParams } from 'constants/query';
import useDragColumns from 'hooks/useDragColumns';
import { getDraggedColumns } from 'hooks/useDragColumns/utils';
import useUrlQuery from 'hooks/useUrlQuery';
import { isEmpty } from 'lodash-es';
import { useCallback, useEffect, useMemo, useRef, useState } from 'react';
import { useMutation } from 'react-query';
import { useSelector } from 'react-redux';
import { AppState } from 'store/reducers';
import { GlobalReducer } from 'types/reducer/globalTime';
const INITIAL_PAGE_SIZE = 20;
const showPaginationItem = (total: number, range: number[]): JSX.Element => (
<>
<Typography.Text className="numbers">
{range[0]} &#8212; {range[1]}
</Typography.Text>
<Typography.Text className="total"> of {total}</Typography.Text>
</>
);
export type RowData = {
key: string | number;
[key: string]: string | number;
};
function ProgressRender(item: string | number): JSX.Element {
const percent = Number(Number(item).toFixed(1));
return (
<div className="progress-container">
<Progress
percent={percent}
strokeLinecap="butt"
size="small"
strokeColor={((): string => {
const cpuPercent = percent;
if (cpuPercent >= 90) return Color.BG_SAKURA_500;
if (cpuPercent >= 60) return Color.BG_AMBER_500;
return Color.BG_FOREST_500;
})()}
className="progress-bar"
/>
</div>
);
}
const getColumnSearchProps = (
searchInput: React.RefObject<InputRef>,
handleReset: (
clearFilters: () => void,
confirm: FilterDropdownProps['confirm'],
) => void,
handleSearch: (selectedKeys: string[], confirm: () => void) => void,
dataIndex?: string,
): TableColumnType<RowData> => ({
filterDropdown: ({
setSelectedKeys,
selectedKeys,
confirm,
clearFilters,
close,
}): JSX.Element => (
// eslint-disable-next-line jsx-a11y/no-static-element-interactions
<div style={{ padding: 8 }} onKeyDown={(e): void => e.stopPropagation()}>
<Input
ref={searchInput}
placeholder={`Search ${dataIndex}`}
value={selectedKeys[0]}
onChange={(e): void =>
setSelectedKeys(e.target.value ? [e.target.value] : [])
}
onPressEnter={(): void => handleSearch(selectedKeys as string[], confirm)}
style={{ marginBottom: 8, display: 'block' }}
/>
<Space>
<Button
type="primary"
size="small"
onClick={(): void => handleSearch(selectedKeys as string[], confirm)}
icon={<SearchOutlined />}
>
Search
</Button>
<Button
onClick={(): void => clearFilters && handleReset(clearFilters, confirm)}
size="small"
style={{ width: 90 }}
>
Reset
</Button>
<Button
type="link"
size="small"
onClick={(): void => {
close();
}}
>
close
</Button>
</Space>
</div>
),
filterIcon: (filtered: boolean): JSX.Element => (
<SearchOutlined
style={{ color: filtered ? Color.BG_ROBIN_500 : undefined }}
/>
),
onFilter: (value, record): boolean =>
String(record[dataIndex || ''] ?? '')
.toLowerCase()
.includes((value as string).toLowerCase()),
});
function getColumns(data: RowData[]): TableColumnsType<RowData> {
if (data?.length === 0) {
return [];
}
const tooltipRender = (item: string): JSX.Element => (
<Tooltip placement="topLeft" title={item}>
{item}
</Tooltip>
);
return [
{
title: 'SERVICE NAME',
dataIndex: 'service_name',
key: 'service_name',
ellipsis: {
showTitle: false,
},
width: 200,
sorter: (a: RowData, b: RowData): number =>
String(a.service_name).localeCompare(String(b.service_name)),
render: tooltipRender,
fixed: 'left',
},
{
title: 'SPAN NAME',
dataIndex: 'span_name',
key: 'span_name',
ellipsis: {
showTitle: false,
},
width: 200,
sorter: (a: RowData, b: RowData): number =>
String(a.span_name).localeCompare(String(b.span_name)),
render: tooltipRender,
},
{
title: 'MESSAGING SYSTEM',
dataIndex: 'messaging_system',
key: 'messaging_system',
ellipsis: {
showTitle: false,
},
width: 200,
sorter: (a: RowData, b: RowData): number =>
String(a.messaging_system).localeCompare(String(b.messaging_system)),
render: tooltipRender,
},
{
title: 'DESTINATION',
dataIndex: 'destination',
key: 'destination',
ellipsis: {
showTitle: false,
},
render: tooltipRender,
width: 200,
sorter: (a: RowData, b: RowData): number =>
String(a.destination).localeCompare(String(b.destination)),
},
{
title: 'KIND',
dataIndex: 'kind_string',
key: 'kind_string',
ellipsis: {
showTitle: false,
},
width: 100,
sorter: (a: RowData, b: RowData): number =>
String(a.kind_string).localeCompare(String(b.kind_string)),
render: tooltipRender,
},
{
title: 'ERROR %',
dataIndex: 'error_percentage',
key: 'error_percentage',
ellipsis: {
showTitle: false,
},
width: 200,
sorter: (a: RowData, b: RowData): number =>
Number(a.error_percentage) - Number(b.error_percentage),
render: ProgressRender,
},
{
title: 'LATENCY (P95)',
dataIndex: 'p95_latency',
key: 'p95_latency',
ellipsis: {
showTitle: false,
},
width: 100,
sorter: (a: RowData, b: RowData): number =>
Number(a.p95_latency) - Number(b.p95_latency),
render: (value: number | string): string => {
if (!isNumber(value)) return value.toString();
return (typeof value === 'string' ? parseFloat(value) : value).toFixed(3);
},
},
{
title: 'THROUGHPUT',
dataIndex: 'throughput',
key: 'throughput',
ellipsis: {
showTitle: false,
},
width: 100,
sorter: (a: RowData, b: RowData): number =>
Number(a.throughput) - Number(b.throughput),
render: (value: number | string): string => {
if (!isNumber(value)) return value.toString();
return (typeof value === 'string' ? parseFloat(value) : value).toFixed(3);
},
},
];
}
function getTableData(data: QueueOverviewResponse['data']): RowData[] {
if (data?.length === 0) {
return [];
}
const columnOrder = [
'service_name',
'span_name',
'messaging_system',
'destination',
'kind_string',
'error_percentage',
'p95_latency',
'throughput',
];
const tableData: RowData[] =
data?.map(
(row, index: number): RowData => {
const rowData: Record<string, string | number> = {};
columnOrder.forEach((key) => {
const value = row.data[key as keyof typeof row.data];
if (typeof value === 'string' || typeof value === 'number') {
rowData[key] = value;
}
});
Object.entries(row.data).forEach(([key, value]) => {
if (
!columnOrder.includes(key) &&
(typeof value === 'string' || typeof value === 'number')
) {
rowData[key] = value;
}
});
return {
...rowData,
key: index,
};
},
) || [];
return tableData;
}
type Filter = {
key: {
key: string;
dataType: string;
};
op: string;
value: string[];
};
type FilterConfig = {
paramName: string;
operator: string;
key: string;
};
function makeFilters(urlQuery: URLSearchParams): Filter[] {
const filterConfigs: FilterConfig[] = [
{ paramName: QueryParams.destination, key: 'destination', operator: 'in' },
{ paramName: QueryParams.msgSystem, key: 'queue', operator: 'in' },
{ paramName: QueryParams.kindString, key: 'kind_string', operator: 'in' },
{ paramName: QueryParams.service, key: 'service.name', operator: 'in' },
{ paramName: QueryParams.spanName, key: 'name', operator: 'in' },
];
return filterConfigs
.map(({ paramName, operator, key }) => {
const value = urlQuery.get(paramName);
if (!value) return null;
return {
key: {
key,
dataType: 'string',
},
op: operator,
value: value.split(','),
};
})
.filter((filter): filter is Filter => filter !== null);
}
export default function CeleryOverviewTable({
onRowClick,
}: {
onRowClick: (record: RowData) => void;
}): JSX.Element {
const [tableData, setTableData] = useState<RowData[]>([]);
const { minTime, maxTime } = useSelector<AppState, GlobalReducer>(
(state) => state.globalTime,
);
const { mutate: getOverviewData, isLoading } = useMutation(getQueueOverview, {
onSuccess: (data) => {
if (data?.payload) {
setTableData(getTableData(data?.payload));
} else if (isEmpty(data?.payload)) {
setTableData([]);
}
},
});
const urlQuery = useUrlQuery();
const filters = useMemo(() => makeFilters(urlQuery), [urlQuery]);
useEffect(() => {
getOverviewData({
start: minTime,
end: maxTime,
filters: {
items: filters,
op: 'AND',
},
});
}, [getOverviewData, minTime, maxTime, filters]);
const { draggedColumns, onDragColumns } = useDragColumns<RowData>(
LOCALSTORAGE.CELERY_OVERVIEW_COLUMNS,
);
const [searchText, setSearchText] = useState('');
const searchInput = useRef<InputRef>(null);
const handleSearch = (
selectedKeys: string[],
confirm: FilterDropdownProps['confirm'],
): void => {
confirm();
setSearchText(selectedKeys[0]);
};
const handleReset = (
clearFilters: () => void,
confirm: FilterDropdownProps['confirm'],
): void => {
clearFilters();
setSearchText('');
confirm();
};
const columns = useMemo(
() =>
getDraggedColumns<RowData>(
getColumns(tableData).map((item) => ({
...item,
...getColumnSearchProps(
searchInput,
handleReset,
handleSearch,
item.key?.toString(),
),
})),
draggedColumns,
),
[tableData, draggedColumns],
);
const handleDragColumn = useCallback(
(fromIndex: number, toIndex: number) =>
onDragColumns(columns, fromIndex, toIndex),
[columns, onDragColumns],
);
const paginationConfig = useMemo(
() =>
tableData?.length > INITIAL_PAGE_SIZE && {
pageSize: INITIAL_PAGE_SIZE,
showTotal: showPaginationItem,
showSizeChanger: false,
hideOnSinglePage: true,
},
[tableData],
);
const handleRowClick = (record: RowData): void => {
onRowClick(record);
};
const getFilteredData = useCallback(
(data: RowData[]): RowData[] => {
if (!searchText) return data;
const searchLower = searchText.toLowerCase();
return data.filter((record) =>
Object.values(record).some(
(value) =>
value !== undefined &&
value.toString().toLowerCase().includes(searchLower),
),
);
},
[searchText],
);
const filteredData = useMemo(() => getFilteredData(tableData), [
getFilteredData,
tableData,
]);
return (
<div style={{ width: '100%' }}>
<Input.Search
placeholder="Search across all columns"
onChange={(e): void => setSearchText(e.target.value)}
value={searchText}
allowClear
/>
<ResizeTable
className="celery-overview-table"
pagination={paginationConfig}
size="middle"
columns={columns}
dataSource={filteredData}
bordered={false}
loading={{
spinning: isLoading,
indicator: <Spin indicator={<LoadingOutlined size={14} spin />} />,
}}
locale={{
emptyText: isLoading ? null : <Typography.Text>No data</Typography.Text>,
}}
scroll={{ x: true }}
showSorterTooltip
onDragColumn={handleDragColumn}
onRow={(record): { onClick: () => void; className: string } => ({
onClick: (): void => handleRowClick(record),
className: 'clickable-row',
})}
tableLayout="fixed"
/>
</div>
);
}
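To show how the Celery overview pieces in this diff might fit together, here is an illustrative composition (the page component and import paths are assumptions): CeleryOverviewConfigOptions writes the selected Service/Span/Msg System/Destination/Kind values to URL query params, makeFilters in this table reads the same params back into the request filters, and onRowClick lets the parent react to a selected row.

// Illustrative composition only; import paths are assumptions.
import CeleryOverviewConfigOptions from 'components/CeleryOverview/CeleryOverviewConfigOptions';
import CeleryOverviewTable, { RowData } from 'components/CeleryOverview/CeleryOverviewTable';

function CeleryOverviewPage(): JSX.Element {
	const handleRowClick = (record: RowData): void => {
		// e.g. open a details view for the clicked service/span combination
		console.log(record.service_name, record.span_name);
	};

	return (
		<div className="celery-overview-page">
			{/* writes filter selections to URL query params */}
			<CeleryOverviewConfigOptions />
			{/* reads the same params via makeFilters and posts to queue-overview */}
			<CeleryOverviewTable onRowClick={handleRowClick} />
		</div>
	);
}

export default CeleryOverviewPage;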

View File

@@ -0,0 +1,39 @@
.celery-task-filters {
display: flex;
justify-content: space-between;
width: 100%;
.celery-filters {
display: flex;
align-items: center;
gap: 8px;
.config-select-option {
width: 100%;
.ant-select-selector {
display: flex;
min-height: 32px;
align-items: center;
gap: 16px;
min-width: 164px;
border-radius: 2px;
border: 1px solid var(--bg-slate-400);
background: var(--bg-ink-300);
}
}
}
}
.lightMode {
.celery-task-filters {
.celery-filters {
.config-select-option {
.ant-select-selector {
border: 1px solid var(--bg-vanilla-300);
background: var(--bg-vanilla-100);
}
}
}
}
}

Some files were not shown because too many files have changed in this diff