Compare commits

..

101 Commits

Author SHA1 Message Date
Abhishek Kumar Singh
75e67a7e35 feat: apply templates on Clickhouse query before parsing 2025-12-26 18:22:53 +05:30
Abhishek Kumar Singh
8c67f6ff7a chore: fixed tests for ValidateCompositeQuery 2025-12-26 13:21:05 +05:30
Abhishek Kumar Singh
d62ed6f003 feat: added validation for QueryBuilderFormula 2025-12-26 13:07:34 +05:30
Abhishek Kumar Singh
ef4ef47634 feat: added new error type for Query parsing, added validation for QueryBuilderJoin 2025-12-26 12:52:30 +05:30
Abhishek Kumar Singh
0a42c77ca7 test: added test for ValidateCompositeQuery 2025-12-24 19:16:20 +05:30
Abhishek Kumar Singh
a89bb71f2c feat: added ValidateCompositeQuery in queryparser 2025-12-24 19:14:39 +05:30
Abhishek Kumar Singh
521e5d92e7 test: fixed breaking tests post PostableRule validations 2025-12-24 17:27:09 +05:30
Abhishek Kumar Singh
09b7360513 feat: added struct-based validation for PostableRule and its child structs 2025-12-24 17:20:57 +05:30
Abhishek Kumar Singh
0fd926b8a1 feat: exclude filtering new series in Logs and Traces queries with corresponding tests 2025-12-23 13:21:30 +05:30
Abhishek Kumar Singh
e4214309f4 refactor: moved series filtration logic to new function for prom_rule 2025-12-22 14:30:50 +05:30
Abhishek Kumar Singh
297383ddca feat: don't skip series with missing metadata as we can't decide in this case if series is old/new 2025-12-22 13:51:08 +05:30
Abhishek Kumar Singh
6871eccd28 feat: return series early on no skipped index 2025-12-22 12:01:12 +05:30
Abhishek Kumar Singh
0a272b5b43 Merge branch 'main' into feat/exclude_new_series_from_alert 2025-12-22 11:47:41 +05:30
Abhishek Kumar Singh
4c4387b6d2 test: added test for FilterNewSeries 2025-12-19 23:32:26 +05:30
Abhi kumar
edcae53b64 fix: added fix for quick filter reset not working (#9824)
* fix: added fix for quick filter reset not working

* chore: added fix for quick filter reset issue

* chore: removed non-required code

* test: added tests for checkbox component

* test: added updated test for querysearch

* chore: updated querysearch test
2025-12-19 18:34:51 +05:30
Abhishek Kumar Singh
cb242e2d4c refactor: removed un-used check from filterNewSeries + fix CH metadata table name 2025-12-19 18:16:16 +05:30
Abhishek Kumar Singh
c98cdc174b refactor: code reuse 2025-12-19 17:00:29 +05:30
Abhishek Kumar Singh
6fc38bac79 refactor: updated FilterNewSeries to use v3.Series, a common collection, to filter new series 2025-12-19 16:42:03 +05:30
Vikrant Gupta
72fda90ec2 fix(apikey): batch last seen sql update for api-key middleware (#9833)
* fix(apikey): batch last seen sql update for api-key middleware

* fix(apikey): remove debug statement

* fix(apikey): remove debug statement
2025-12-19 14:56:33 +05:30
Abhishek Kumar Singh
ddba7e71b7 refactor: improve type assertion for filtered collections in anomaly, prom, and threshold rules 2025-12-19 14:18:21 +05:30
Abhishek Kumar Singh
23f9ff50a7 chore: added helpful comments for FilterNewSeries 2025-12-19 13:30:54 +05:30
Abhishek Kumar Singh
55e5c871fe refactor: use real QP for FilterNewSeries test 2025-12-19 12:53:22 +05:30
Abhishek Kumar Singh
511bb176dd Merge branch 'main' into feat/exclude_new_series_from_alert 2025-12-19 11:50:55 +05:30
Yunus M
8acfc3c9f7 Update CODEOWNERS (#9828)
Update codeowners for frontend repo from individuals to frontend-maintainers team
2025-12-18 23:51:36 +05:30
Shaheer Kochai
463ae443f9 feat: add support for truncating long status_message in trace details and showing expandable popover on hover (#9630)
* feat: add support for showing truncated status_message and showing expandable popover on hover

* test: add tests for status message truncation and expandable popover functionality

* test: update expand button interaction to use fireEvent for status message modal
2025-12-18 15:18:40 +00:00
Abhi kumar
f72535a15f fix: added fix for prefix units not rendering in value panel (#9750)
* fix: added fix for prefix units not rendering in value panel

* chore: updated snapshot for valuepanelwrapper

* chore: added changes to stop unit from recomputation

* chore: added support for scientific notation as well

* chore: pr comments fixes
2025-12-18 12:32:23 +00:00
Abhi kumar
e21e99ce64 fix: clear search term when closing widget header search (#9663)
* fix: clear search term when closing widget header search

* test: added test for widgetheader component

* fix: added fixes for pr comments

* chore: updated test to use mock antd

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-12-18 12:22:12 +00:00
Abhi kumar
d1559a3262 test: added test for testing codemirror state when switching tabs (#9766)
* test: added test for testing codemirror state when switching tabs

* chore: added test for verify query-add-on and query-aggregation being rendered
2025-12-18 12:11:57 +00:00
Abhi kumar
1ccb9bb4c2 fix: added fix for free text with quick filter issue (#9768)
* fix: added fix for free text with quick filter issue

* chore: removed debug console log

* chore: pr review changes
2025-12-18 12:00:08 +00:00
Vikrant Gupta
0c059df327 feat(global): add global config support (#9826)
* feat(global): add global config support

* feat(global): revert factory name changes

* feat(global): add global config support
2025-12-18 11:40:37 +00:00
Nikhil Mantri
8a5539679c chore(metrics-explorer): API for the alerts with metric_name (#9640) 2025-12-18 10:10:51 +00:00
Abhishek Kumar Singh
4e0c0319d0 Merge branch 'main' into feat/exclude_new_series_from_alert 2025-12-18 14:49:28 +05:30
Ashwin Bhatkal
89b188f73d fix: unable to edit panels when dashboard path has trailing slash (#9816)
* fix: unable to edit panels when dashboard path has trailing slash

* fix: name better

* fix: add tests

* fix: resolve comment

* fix: itch - variable rename

* fix: typo
2025-12-18 09:42:43 +05:30
Pandey
bb4d6117ac test: add integration tests for preferences and add --with-web flag (#9821)
* test: add integration test for preferences

* test: add flag --with-web
2025-12-18 00:05:27 +05:30
primus-bot[bot]
1110864549 chore(release): bump to v0.105.1 (#9819)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-12-17 14:17:54 +05:30
Vikrant Gupta
5cb515cade fix(apiserver): preference routes registration before user routes (#9818) 2025-12-17 14:01:28 +05:30
Srikanth Chekuri
9e5ea4de9c Merge branch 'main' into feat/exclude_new_series_from_alert 2025-12-15 08:27:00 +05:30
Abhishek Kumar Singh
81e0df09b8 refactor: merge conflict fix 2025-12-03 19:06:18 +05:30
Abhishek Kumar Singh
a522f39b9b Merge branch 'main' into feat/exclude_new_series_from_alert 2025-12-03 19:04:18 +05:30
Abhishek Kumar Singh
affb6eee05 feat: added function in query parser to parse composite query and get filter result 2025-12-02 18:56:19 +05:30
Abhishek Kumar Singh
13a5e9dd24 Merge branch 'feat/groups_in_ch_and_promql_queries' into feat/exclude_new_series_from_alert 2025-12-02 18:02:40 +05:30
Abhishek Kumar Singh
f620767876 refactor: used binding.JSON.BindBody to parse body 2025-12-02 17:58:39 +05:30
Srikanth Chekuri
9fb8b2bb1b Merge branch 'main' into feat/groups_in_ch_and_promql_queries 2025-12-02 17:26:54 +05:30
Abhishek Kumar Singh
30494c9196 chore: updated comments 2025-12-02 16:58:06 +05:30
Abhishek Kumar Singh
cae4cf0777 refactor: use query parser in baseRule 2025-12-02 16:55:58 +05:30
Abhishek Kumar Singh
c9538b0604 Merge branch 'feat/groups_in_ch_and_promql_queries' into feat/exclude_new_series_from_alert 2025-12-02 15:57:15 +05:30
Abhishek Kumar Singh
204cc4e5c5 Merge branch 'main' into feat/groups_in_ch_and_promql_queries 2025-12-02 15:47:30 +05:30
Abhishek Kumar Singh
6dd2ffcb64 refactor: update API handler to use new queryparser package for query parsing 2025-11-27 20:11:16 +05:30
Abhishek Kumar Singh
13c15249c5 Merge branch 'feat/groups_in_ch_and_promql_queries' into feat/exclude_new_series_from_alert 2025-11-27 19:51:27 +05:30
Abhishek Kumar Singh
8419ca7982 Merge branch 'main' into feat/exclude_new_series_from_alert 2025-11-27 19:49:57 +05:30
Abhishek Kumar Singh
6b189b14c6 chore: updated series collection to labelledcollection 2025-11-27 19:45:01 +05:30
Abhishek Kumar Singh
550c49fab0 feat: created queryparser package with APIs 2025-11-27 19:37:41 +05:30
Abhishek Kumar Singh
5b6ff92648 refactor: moved package queryfilterextractor in pkg/queryparser 2025-11-27 19:14:55 +05:30
Abhishek Kumar Singh
45954b38fa Merge branch 'main' into feat/exclude_new_series_from_alert 2025-11-25 12:27:00 +05:30
Abhishek Kumar Singh
ceade6c7d7 refactor: moved query parser APIs to queryfilterextractor package 2025-11-24 14:15:44 +05:30
Srikanth Chekuri
f15c88836c Merge branch 'main' into feat/groups_in_ch_and_promql_queries 2025-11-21 15:26:44 +05:30
Abhishek Kumar Singh
9af45643a9 refactor: created valuer enum for extractor types + exposed alias along with column name for groups 2025-11-21 15:16:03 +05:30
Srikanth Chekuri
d15e974e9f Merge branch 'main' into feat/groups_in_ch_and_promql_queries 2025-11-20 22:59:29 +05:30
Abhishek Kumar Singh
71e752a015 refactor: moved api request and response to types 2025-11-20 20:00:51 +05:30
Abhishek Kumar Singh
3407760585 Merge branch 'feat/queryfilterextractor' into feat/groups_in_ch_and_promql_queries 2025-11-20 19:02:11 +05:30
Abhishek Kumar Singh
58a0e36869 refactor: return relevant errors when parsing query 2025-11-20 19:01:40 +05:30
Abhishek Kumar Singh
5d688eb919 test: updated test case to include CH query error case 2025-11-20 18:55:45 +05:30
Abhishek Kumar Singh
c0f237a7c4 refactor: use render.Error instead of RespondError 2025-11-20 18:12:47 +05:30
Abhishek Kumar Singh
8ce8bc940a Merge branch 'feat/queryfilterextractor' into feat/groups_in_ch_and_promql_queries 2025-11-20 17:42:46 +05:30
Abhishek Kumar Singh
abce05b289 feat: exposed group name from column info 2025-11-20 17:38:00 +05:30
Abhishek Kumar Singh
ccd25c3b67 refactor: removed redundant checks 2025-11-20 17:24:36 +05:30
Abhishek Kumar Singh
ddb98da217 refactor: change function visibility for extractMetricAndGroupBys 2025-11-20 15:56:45 +05:30
Abhishek Kumar Singh
18d63d2e66 fix: close CH rows on reading each chunk 2025-11-20 15:44:07 +05:30
Abhishek Kumar Singh
67c108f021 feat: added new series filter in anomaly rule as well 2025-11-20 15:26:14 +05:30
Abhishek Kumar Singh
02939cafa4 refactor: added GetFirstSeenFromMetricMetadata in clickhouse reader, removed caching for query result 2025-11-20 15:24:14 +05:30
Abhishek Kumar Singh
e62b070c1e fix: created interface for standard series filtration from both prom and threshold rule 2025-11-20 14:35:25 +05:30
Srikanth Chekuri
be0a7d8fd4 Merge branch 'main' into feat/queryfilterextractor 2025-11-20 02:48:43 +05:30
Srikanth Chekuri
419044dc9e Merge branch 'main' into feat/queryfilterextractor 2025-11-19 22:42:12 +05:30
Abhishek Kumar Singh
223465d6d5 Merge branch 'feat/queryfilterextractor' into feat/exclude_new_series_from_alert 2025-11-19 20:56:01 +05:30
Abhishek Kumar Singh
cec99674fa feat: exclude new samples from alert evals 2025-11-19 20:52:59 +05:30
Abhishek Kumar Singh
0ccf58ac7a fix: common behaviour across CH and PromQL originField and originExpr 2025-11-19 20:36:00 +05:30
Abhishek Kumar Singh
b08d636d6a refactor: updated query type check with constants 2025-11-19 20:12:20 +05:30
Abhishek Kumar Singh
f6141bc6c5 feat: added API to extract metric names and groups from CH or PromQL query 2025-11-19 16:48:03 +05:30
Srikanth Chekuri
bfe49f0f1b Merge branch 'main' into feat/queryfilterextractor 2025-11-19 14:21:35 +05:30
Abhishek Kumar Singh
8e8064c5c1 fix: ci lint issues 2025-11-19 13:28:04 +05:30
Abhishek Kumar Singh
4392341467 improv: added comments for CH originparser + some code improv 2025-11-19 12:58:34 +05:30
Abhishek Kumar Singh
521d8e4f4d improv: added more tests for promql and added comments 2025-11-19 12:15:20 +05:30
Abhishek Kumar Singh
b6103f371f Merge branch 'main' into feat/queryfilterextractor 2025-11-18 21:09:45 +05:30
Abhishek Kumar Singh
43283506db Merge branch 'feat/queryfilterextractor_complex' into feat/queryfilterextractor 2025-11-18 15:22:24 +05:30
Abhishek Kumar Singh
694d9958db improv: integrated origin field extraction and updated tests to check for origin fields 2025-11-18 15:03:24 +05:30
Abhishek Kumar Singh
addee4c0a5 feat: added origin field extractor for ch query 2025-11-18 14:36:03 +05:30
Abhishek Kumar Singh
f10cf7ac04 refactor: code organisation 2025-11-17 16:27:17 +05:30
Abhishek Kumar Singh
b336678639 fix: CH test cases 2025-11-17 15:01:32 +05:30
Abhishek Kumar Singh
c438b3444e refactor: removed GroupBy from FilterResult 2025-11-17 14:34:46 +05:30
Abhishek Kumar Singh
b624414507 feat: extract column origin from subquery and join before searching directly 2025-11-17 13:42:47 +05:30
Abhishek Kumar Singh
bde7963444 feat: implemented extractOriginFromSelectItem which will find the given columnName till the very end to return the origin column with given name 2025-11-17 09:00:18 +05:30
Abhishek Kumar Singh
2df93ff217 feat: extract column origin from query and add in column info 2025-11-16 10:20:38 +05:30
Abhishek Kumar Singh
f496a6ecde improv: updated result for queryfilterextractor to return column with alias 2025-11-16 08:58:33 +05:30
Abhishek Kumar Singh
599e230a72 feat: added NewExtractor function for creating extractor 2025-11-13 13:52:32 +05:30
Abhishek Kumar Singh
9a0e32ff3b refactor: removed redundant non nil checks 2025-11-13 13:41:51 +05:30
Abhishek Kumar Singh
5fe2732698 refactor: removed unused extractFromAnyFunction 2025-11-13 13:20:59 +05:30
Abhishek Kumar Singh
4993a44ecc refactor: removed unused cases + added comments 2025-11-13 12:59:35 +05:30
Abhishek Kumar Singh
ebd575a16b chore: comments + remove usage of seen map in extractGroupFromGroupByClause 2025-11-12 19:26:44 +05:30
Abhishek Kumar Singh
666582337e feat: support for CTE in clickhouse queryfilterextractor 2025-11-12 18:58:30 +05:30
Abhishek Kumar Singh
23512ab05c feat: added support for promql in queryfilterextractor 2025-11-10 20:50:42 +05:30
Abhishek Kumar Singh
1423749529 feat: added filter extractor interface and clickhouse impl with tests 2025-11-10 20:05:39 +05:30
106 changed files with 5563 additions and 281 deletions

.github/CODEOWNERS

@@ -2,7 +2,7 @@
# Owners are automatically requested for review for PRs that changes code
# that they own.
/frontend/ @YounixM @aks07
/frontend/ @SigNoz/frontend-maintainers
# Onboarding
/frontend/src/container/OnboardingV2Container/onboarding-configs/onboarding-config-with-links.json @makeavish


@@ -9,6 +9,29 @@ on:
- labeled
jobs:
fmtlint:
if: |
((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-integrate')
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v4
- name: python
uses: actions/setup-python@v5
with:
python-version: 3.13
- name: poetry
run: |
python -m pip install poetry==2.1.2
python -m poetry config virtualenvs.in-project true
cd tests/integration && poetry install --no-root
- name: fmt
run: |
make py-fmt
- name: lint
run: |
make py-lint
test:
strategy:
fail-fast: false
@@ -21,6 +44,7 @@ jobs:
- dashboard
- querier
- ttl
- preference
sqlstore-provider:
- postgres
- sqlite


@@ -1,11 +1,3 @@
FROM node:18-bullseye AS build
WORKDIR /opt/
COPY ./frontend/ ./
ENV NODE_OPTIONS=--max-old-space-size=8192
RUN CI=1 yarn install
RUN CI=1 yarn build
FROM golang:1.24-bullseye
ARG OS="linux"
@@ -40,8 +32,6 @@ COPY Makefile Makefile
RUN TARGET_DIR=/root ARCHS=${TARGETARCH} ZEUS_URL=${ZEUSURL} LICENSE_URL=${ZEUSURL}/api/v1 make go-build-enterprise-race
RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz
COPY --from=build /opt/build ./web/
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["/root/signoz", "server"]


@@ -0,0 +1,47 @@
FROM node:18-bullseye AS build
WORKDIR /opt/
COPY ./frontend/ ./
ENV NODE_OPTIONS=--max-old-space-size=8192
RUN CI=1 yarn install
RUN CI=1 yarn build
FROM golang:1.24-bullseye
ARG OS="linux"
ARG TARGETARCH
ARG ZEUSURL
# This path is important for stacktraces
WORKDIR $GOPATH/src/github.com/signoz/signoz
WORKDIR /root
RUN set -eux; \
apt-get update; \
apt-get install -y --no-install-recommends \
g++ \
gcc \
libc6-dev \
make \
pkg-config \
; \
rm -rf /var/lib/apt/lists/*
COPY go.mod go.sum ./
RUN go mod download
COPY ./cmd/ ./cmd/
COPY ./ee/ ./ee/
COPY ./pkg/ ./pkg/
COPY ./templates/email /root/templates
COPY Makefile Makefile
RUN TARGET_DIR=/root ARCHS=${TARGETARCH} ZEUS_URL=${ZEUSURL} LICENSE_URL=${ZEUSURL}/api/v1 make go-build-enterprise-race
RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz
COPY --from=build /opt/build ./web/
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["/root/signoz", "server"]


@@ -3,6 +3,13 @@
# Do not modify this file
#
##################### Global #####################
global:
# the url under which the signoz apiserver is externally reachable.
external_url: <unset>
# the url where the SigNoz backend receives telemetry data (traces, metrics, logs) from instrumented applications.
ingestion_url: <unset>
##################### Version #####################
version:
banner:


@@ -176,7 +176,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.105.0
image: signoz/signoz:v0.105.1
command:
- --config=/root/config/prometheus.yml
ports:


@@ -117,7 +117,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.105.0
image: signoz/signoz:v0.105.1
command:
- --config=/root/config/prometheus.yml
ports:


@@ -179,7 +179,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.105.0}
image: signoz/signoz:${VERSION:-v0.105.1}
container_name: signoz
command:
- --config=/root/config/prometheus.yml


@@ -111,7 +111,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.105.0}
image: signoz/signoz:${VERSION:-v0.105.1}
container_name: signoz
command:
- --config=/root/config/prometheus.yml


@@ -473,6 +473,49 @@ paths:
summary: Get reset password token
tags:
- users
/api/v1/global/config:
get:
deprecated: false
description: This endpoint returns the global config
operationId: GetGlobalConfig
responses:
"200":
content:
application/json:
schema:
properties:
data:
$ref: '#/components/schemas/TypesGettableGlobalConfig'
status:
type: string
type: object
description: OK
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- EDITOR
- tokenizer:
- EDITOR
summary: Get global config
tags:
- global
/api/v1/invite:
get:
deprecated: false
@@ -2145,6 +2188,13 @@ components:
userId:
type: string
type: object
TypesGettableGlobalConfig:
properties:
external_url:
type: string
ingestion_url:
type: string
type: object
TypesInvite:
properties:
createdAt:
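Not part of the diff: a minimal Go sketch of calling the new endpoint, assuming a SigNoz instance at `localhost:8080` and the `SIGNOZ-API-KEY` header (both hypothetical values here); the response shape follows the `TypesGettableGlobalConfig` schema above.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical base URL; adjust for your deployment.
	req, err := http.NewRequest(http.MethodGet, "http://localhost:8080/api/v1/global/config", nil)
	if err != nil {
		panic(err)
	}
	// API keys are sent in a request header (header name assumed here).
	req.Header.Set("SIGNOZ-API-KEY", "<your-api-key>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Expected shape per the spec: {"status": "...", "data": {"external_url": "...", "ingestion_url": "..."}}
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```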


@@ -0,0 +1,179 @@
# Handler
Handlers in SigNoz are responsible for exposing module functionality over HTTP. They are thin adapters that:
- Decode incoming HTTP requests
- Call the appropriate module layer
- Return structured responses (or errors) in a consistent format
- Describe themselves for OpenAPI generation
They are **not** the place for complex business logic; that belongs in modules (for example, `pkg/modules/user`, `pkg/modules/session`, etc.).
## How are handlers structured?
At a high level, a typical flow looks like this:
1. A `Handler` interface is defined in the module (for example, `user.Handler`, `session.Handler`, `organization.Handler`).
2. The `apiserver` provider wires those handlers into HTTP routes using Gorilla `mux.Router`.
Each route wraps a module handler method with the following:
- Authorization middleware (from `pkg/http/middleware`)
- A generic HTTP `handler.Handler` (from `pkg/http/handler`)
- An `OpenAPIDef` that describes the operation for OpenAPI generation
For example, in `pkg/apiserver/signozapiserver`:
```go
if err := router.Handle("/api/v1/invite", handler.New(
provider.authZ.AdminAccess(provider.userHandler.CreateInvite),
handler.OpenAPIDef{
ID: "CreateInvite",
Tags: []string{"users"},
Summary: "Create invite",
Description: "This endpoint creates an invite for a user",
Request: new(types.PostableInvite),
RequestContentType: "application/json",
Response: new(types.Invite),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusCreated,
ErrorStatusCodes: []int{http.StatusBadRequest, http.StatusConflict},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
},
)).Methods(http.MethodPost).GetError(); err != nil {
return err
}
```
In this pattern:
- `provider.userHandler.CreateInvite` is a handler method.
- `provider.authZ.AdminAccess(...)` wraps that method with authorization checks and context setup.
- `handler.New` converts it into an HTTP handler and wires it to OpenAPI via the `OpenAPIDef`.
## How to write a new handler method?
When adding a new endpoint:
1. Add a method to the appropriate module `Handler` interface.
2. Implement that method in the module.
3. Register the method in `signozapiserver` with the correct route, HTTP method, auth, and `OpenAPIDef`.
### 1. Extend an existing `Handler` interface or create a new one
Find the module in `pkg/modules/<name>` and extend its `Handler` interface with a new method that receives an `http.ResponseWriter` and `*http.Request`. For example:
```go
type Handler interface {
// existing methods...
CreateThing(rw http.ResponseWriter, req *http.Request)
}
```
Keep the method focused on HTTP concerns and delegate business logic to the module.
### 2. Implement the handler method
In the module implementation, implement the new method. A typical implementation:
- Extracts authentication and organization context from `req.Context()`
- Decodes the request body into a `types.*` struct using the `binding` package
- Calls module functions
- Uses the `render` package to write responses or errors
```go
func (h *handler) CreateThing(rw http.ResponseWriter, req *http.Request) {
// Extract authentication and organization context from req.Context()
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
render.Error(rw, err)
return
}
// Decode the request body into a `types.*` struct using the `binding` package
var in types.PostableThing
if err := binding.JSON.BindBody(req.Body, &in); err != nil {
render.Error(rw, err)
return
}
// Call module functions
out, err := h.module.CreateThing(req.Context(), claims.OrgID, &in)
if err != nil {
render.Error(rw, err)
return
}
// Use the `render` package to write responses or errors
render.Success(rw, http.StatusCreated, out)
}
```
### 3. Register the handler in `signozapiserver`
In `pkg/apiserver/signozapiserver`, add a route in the appropriate `add*Routes` function (`addUserRoutes`, `addSessionRoutes`, `addOrgRoutes`, etc.). The pattern is:
```go
if err := router.Handle("/api/v1/things", handler.New(
provider.authZ.AdminAccess(provider.thingHandler.CreateThing),
handler.OpenAPIDef{
ID: "CreateThing",
Tags: []string{"things"},
Summary: "Create thing",
Description: "This endpoint creates a thing",
Request: new(types.PostableThing),
RequestContentType: "application/json",
Response: new(types.GettableThing),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusCreated,
ErrorStatusCodes: []int{http.StatusBadRequest, http.StatusConflict},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
},
)).Methods(http.MethodPost).GetError(); err != nil {
return err
}
```
### 4. Update the OpenAPI spec
Run the following command to update the OpenAPI spec:
```bash
go run cmd/enterprise/*.go generate openapi
```
This will update the OpenAPI spec in `docs/api/openapi.yml` to reflect the new endpoint.
## How does OpenAPI integration work?
The `handler.New` function ties the HTTP handler to OpenAPI metadata via `OpenAPIDef`. This drives the generated OpenAPI document.
- **ID**: A unique identifier for the operation (used as the `operationId`).
- **Tags**: Logical grouping for the operation (for example, `"users"`, `"sessions"`, `"orgs"`).
- **Summary / Description**: Human-friendly documentation.
- **Request / RequestContentType**:
- `Request` is a Go type that describes the request body or form.
- `RequestContentType` is usually `"application/json"` or `"application/x-www-form-urlencoded"` (for callbacks like SAML).
- **Response / ResponseContentType**:
- `Response` is the Go type for the successful response payload.
- `ResponseContentType` is usually `"application/json"`; use `""` for responses without a body.
- **SuccessStatusCode**: The HTTP status for successful responses (for example, `http.StatusOK`, `http.StatusCreated`, `http.StatusNoContent`).
- **ErrorStatusCodes**: Additional error status codes beyond the standard ones automatically added by `handler.New`.
- **SecuritySchemes**: Auth mechanisms and scopes required by the operation.
The generic handler:
- Automatically appends `401`, `403`, and `500` to `ErrorStatusCodes` when appropriate.
- Registers request and response schemas with the OpenAPI reflector so they appear in `docs/api/openapi.yml`.
See existing examples in:
- `addUserRoutes` (for typical JSON request/response)
- `addSessionRoutes` (for form-encoded and redirect flows)
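Putting these fields together, a read-only endpoint might look like the sketch below. It follows the registration pattern above, but the `thingHandler`, `ViewAccess`, `types.GettableThing`, and `types.RoleViewer` names are illustrative assumptions, not existing identifiers:
```go
if err := router.Handle("/api/v1/things", handler.New(
	provider.authZ.ViewAccess(provider.thingHandler.ListThings), // ViewAccess is assumed; use the accessor your authZ exposes
	handler.OpenAPIDef{
		ID:                  "ListThings",
		Tags:                []string{"things"},
		Summary:             "List things",
		Description:         "This endpoint lists all things",
		Request:             nil, // GET: no request body
		RequestContentType:  "",
		Response:            new(types.GettableThing),
		ResponseContentType: "application/json",
		SuccessStatusCode:   http.StatusOK,
		ErrorStatusCodes:    []int{http.StatusBadRequest},
		Deprecated:          false,
		SecuritySchemes:     newSecuritySchemes(types.RoleViewer),
	},
)).Methods(http.MethodGet).GetError(); err != nil {
	return err
}
```
Note that `401`, `403`, and `500` need not be listed in `ErrorStatusCodes`; as described above, the generic handler appends them automatically.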
## What should I remember?
- **Keep handlers thin**: focus on HTTP concerns and delegate logic to modules/services.
- **Always register routes through `signozapiserver`** using `handler.New` and a complete `OpenAPIDef`.
- **Choose accurate request/response types** from the `types` packages so OpenAPI schemas are correct.


@@ -3,13 +3,14 @@ package app
import (
"context"
"fmt"
"log/slog"
"net"
"net/http"
_ "net/http/pprof" // http profiler
"slices"
"github.com/SigNoz/signoz/pkg/cache/memorycache"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler/rulestore/sqlrulestore"
"go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux"
"go.opentelemetry.io/otel/propagation"
@@ -106,7 +107,8 @@ func NewServer(config signoz.Config, signoz *signoz.SigNoz) (*Server, error) {
signoz.Prometheus,
signoz.Modules.OrgGetter,
signoz.Querier,
signoz.Instrumentation.Logger(),
signoz.Instrumentation.ToProviderSettings(),
signoz.QueryParser,
)
if err != nil {
@@ -353,8 +355,8 @@ func (s *Server) Stop(ctx context.Context) error {
return nil
}
func makeRulesManager(ch baseint.Reader, cache cache.Cache, alertmanager alertmanager.Alertmanager, sqlstore sqlstore.SQLStore, telemetryStore telemetrystore.TelemetryStore, prometheus prometheus.Prometheus, orgGetter organization.Getter, querier querier.Querier, logger *slog.Logger) (*baserules.Manager, error) {
ruleStore := sqlrulestore.NewRuleStore(sqlstore)
func makeRulesManager(ch baseint.Reader, cache cache.Cache, alertmanager alertmanager.Alertmanager, sqlstore sqlstore.SQLStore, telemetryStore telemetrystore.TelemetryStore, prometheus prometheus.Prometheus, orgGetter organization.Getter, querier querier.Querier, providerSettings factory.ProviderSettings, queryParser queryparser.QueryParser) (*baserules.Manager, error) {
ruleStore := sqlrulestore.NewRuleStore(sqlstore, queryParser, providerSettings)
maintenanceStore := sqlrulestore.NewMaintenanceStore(sqlstore)
// create manager opts
managerOpts := &baserules.ManagerOptions{
@@ -364,7 +366,7 @@ func makeRulesManager(ch baseint.Reader, cache cache.Cache, alertmanager alertma
Logger: zap.L(),
Reader: ch,
Querier: querier,
SLogger: logger,
SLogger: providerSettings.Logger,
Cache: cache,
EvalDelay: baseconst.GetEvalDelay(),
PrepareTaskFunc: rules.PrepareTaskFunc,
@@ -374,6 +376,7 @@ func makeRulesManager(ch baseint.Reader, cache cache.Cache, alertmanager alertma
RuleStore: ruleStore,
MaintenanceStore: maintenanceStore,
SqlStore: sqlstore,
QueryParser: queryParser,
}
// create Manager


@@ -207,6 +207,42 @@ func (r *AnomalyRule) GetSelectedQuery() string {
return r.Condition().GetSelectedQueryName()
}
// filterNewSeries filters out new series based on the first_seen timestamp.
func (r *AnomalyRule) filterNewSeries(ctx context.Context, ts time.Time, series []*v3.Series) ([]*v3.Series, error) {
// Convert []*v3.Series to []v3.Series for filtering
v3Series := make([]v3.Series, 0, len(series))
for _, s := range series {
v3Series = append(v3Series, *s)
}
// Get indexes to skip
skipIndexes, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, v3Series)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
// if no series are skipped, return the original series
if len(skipIndexes) == 0 {
return series, nil
}
// Create a map of skip indexes for efficient lookup
skippedIdxMap := make(map[int]struct{}, len(skipIndexes))
for _, idx := range skipIndexes {
skippedIdxMap[idx] = struct{}{}
}
// Filter out skipped series
oldSeries := make([]*v3.Series, 0, len(series)-len(skipIndexes))
for i, s := range series {
if _, shouldSkip := skippedIdxMap[i]; !shouldSkip {
oldSeries = append(oldSeries, s)
}
}
return oldSeries, nil
}
func (r *AnomalyRule) buildAndRunQuery(ctx context.Context, orgID valuer.UUID, ts time.Time) (ruletypes.Vector, error) {
params, err := r.prepareQueryRange(ctx, ts)
@@ -239,7 +275,18 @@ func (r *AnomalyRule) buildAndRunQuery(ctx context.Context, orgID valuer.UUID, t
scoresJSON, _ := json.Marshal(queryResult.AnomalyScores)
r.logger.InfoContext(ctx, "anomaly scores", "scores", string(scoresJSON))
for _, series := range queryResult.AnomalyScores {
// Filter out new series if newGroupEvalDelay is configured
seriesToProcess := queryResult.AnomalyScores
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.filterNewSeries(ctx, ts, seriesToProcess)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
seriesToProcess = filteredSeries
}
for _, series := range seriesToProcess {
if r.Condition() != nil && r.Condition().RequireMinPoints {
if len(series.Points) < r.Condition().RequiredNumPoints {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)
@@ -291,7 +338,18 @@ func (r *AnomalyRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUID,
scoresJSON, _ := json.Marshal(queryResult.AnomalyScores)
r.logger.InfoContext(ctx, "anomaly scores", "scores", string(scoresJSON))
for _, series := range queryResult.AnomalyScores {
// Filter out new series if newGroupEvalDelay is configured
seriesToProcess := queryResult.AnomalyScores
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.filterNewSeries(ctx, ts, seriesToProcess)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
seriesToProcess = filteredSeries
}
for _, series := range seriesToProcess {
if r.Condition().RequireMinPoints {
if len(series.Points) < r.Condition().RequiredNumPoints {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)


@@ -37,6 +37,7 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
opts.SLogger,
baserules.WithEvalDelay(opts.ManagerOpts.EvalDelay),
baserules.WithSQLStore(opts.SQLStore),
baserules.WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
@@ -59,6 +60,7 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
opts.Reader,
opts.ManagerOpts.Prometheus,
baserules.WithSQLStore(opts.SQLStore),
baserules.WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
@@ -82,6 +84,7 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
opts.Cache,
baserules.WithEvalDelay(opts.ManagerOpts.EvalDelay),
baserules.WithSQLStore(opts.SQLStore),
baserules.WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
return task, err
@@ -140,6 +143,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
baserules.WithSendAlways(),
baserules.WithSendUnmatched(),
baserules.WithSQLStore(opts.SQLStore),
baserules.WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
@@ -160,6 +164,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
baserules.WithSendAlways(),
baserules.WithSendUnmatched(),
baserules.WithSQLStore(opts.SQLStore),
baserules.WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
@@ -179,6 +184,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
baserules.WithSendAlways(),
baserules.WithSendUnmatched(),
baserules.WithSQLStore(opts.SQLStore),
baserules.WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
zap.L().Error("failed to prepare a new anomaly rule for test", zap.String("name", alertname), zap.Error(err))


@@ -300,7 +300,7 @@ function QueryAddOns({
);
return (
<div className="query-add-ons">
<div className="query-add-ons" data-testid="query-add-ons">
{selectedViews.length > 0 && (
<div className="selected-add-ons-content">
{selectedViews.find((view) => view.key === 'group_by') && (


@@ -43,7 +43,10 @@ function QueryAggregationOptions({
};
return (
<div className="query-aggregation-container">
<div
className="query-aggregation-container"
data-testid="query-aggregation-container"
>
<div className="aggregation-container">
<QueryAggregationSelect
onChange={onChange}


@@ -114,9 +114,9 @@ function QuerySearch({
const [isFocused, setIsFocused] = useState(false);
const editorRef = useRef<EditorView | null>(null);
const handleQueryValidation = useCallback((newQuery: string): void => {
const handleQueryValidation = useCallback((newExpression: string): void => {
try {
const validationResponse = validateQuery(newQuery);
const validationResponse = validateQuery(newExpression);
setValidation(validationResponse);
} catch (error) {
setValidation({
@@ -127,7 +127,7 @@ function QuerySearch({
}
}, []);
const getCurrentQuery = useCallback(
const getCurrentExpression = useCallback(
(): string => editorRef.current?.state.doc.toString() || '',
[],
);
@@ -167,19 +167,14 @@ function QuerySearch({
() => {
if (!isEditorReady) return;
const newQuery = queryData.filter?.expression || '';
const currentQuery = getCurrentQuery();
const newExpression = queryData.filter?.expression || '';
const currentExpression = getCurrentExpression();
/* eslint-disable-next-line sonarjs/no-collapsible-if */
if (newQuery !== currentQuery && !isFocused) {
// Prevent clearing a non-empty editor when queryData becomes empty temporarily
// Only update if newQuery has a value, or if both are empty (initial state)
if (newQuery || !currentQuery) {
updateEditorValue(newQuery, { skipOnChange: true });
if (newQuery) {
handleQueryValidation(newQuery);
}
// Do not update codemirror editor if the expression is the same
if (newExpression !== currentExpression && !isFocused) {
updateEditorValue(newExpression, { skipOnChange: true });
if (newExpression) {
handleQueryValidation(newExpression);
}
}
},
@@ -613,8 +608,8 @@ function QuerySearch({
};
const handleBlur = (): void => {
const currentQuery = getCurrentQuery();
handleQueryValidation(currentQuery);
const currentExpression = getCurrentExpression();
handleQueryValidation(currentExpression);
setIsFocused(false);
};
@@ -633,11 +628,11 @@ function QuerySearch({
const handleExampleClick = (exampleQuery: string): void => {
// If there's an existing query, append the example with AND
const currentQuery = getCurrentQuery();
const newQuery = currentQuery
? `${currentQuery} AND ${exampleQuery}`
const currentExpression = getCurrentExpression();
const newExpression = currentExpression
? `${currentExpression} AND ${exampleQuery}`
: exampleQuery;
updateEditorValue(newQuery);
updateEditorValue(newExpression);
};
// Helper function to render a badge for the current context mode
@@ -673,9 +668,9 @@ function QuerySearch({
if (word?.from === word?.to && !context.explicit) return null;
// Get current query from editor
const currentQuery = editorRef.current?.state.doc.toString() || '';
const currentExpression = getCurrentExpression();
// Get the query context at the cursor position
const queryContext = getQueryContextAtCursor(currentQuery, cursorPos.ch);
const queryContext = getQueryContextAtCursor(currentExpression, cursorPos.ch);
// Define autocomplete options based on the context
let options: {
@@ -1171,8 +1166,8 @@ function QuerySearch({
if (queryContext.isInParenthesis) {
// Different suggestions based on the context within parenthesis or bracket
const currentQuery = editorRef.current?.state.doc.toString() || '';
const curChar = currentQuery.charAt(cursorPos.ch - 1) || '';
const currentExpression = getCurrentExpression();
const curChar = currentExpression.charAt(cursorPos.ch - 1) || '';
if (curChar === '(' || curChar === '[') {
// Right after opening parenthesis/bracket
@@ -1321,7 +1316,7 @@ function QuerySearch({
style={{
position: 'absolute',
top: 8,
right: validation.isValid === false && getCurrentQuery() ? 40 : 8, // Move left when error shown
right: validation.isValid === false && getCurrentExpression() ? 40 : 8, // Move left when error shown
cursor: 'help',
zIndex: 10,
transition: 'right 0.2s ease',
@@ -1383,7 +1378,7 @@ function QuerySearch({
// Mod-Enter is usually Ctrl-Enter or Cmd-Enter based on OS
run: (): boolean => {
if (onRun && typeof onRun === 'function') {
onRun(getCurrentQuery());
onRun(getCurrentExpression());
} else {
handleRunQuery();
}
@@ -1409,7 +1404,7 @@ function QuerySearch({
onBlur={handleBlur}
/>
{getCurrentQuery() && validation.isValid === false && !isFocused && (
{getCurrentExpression() && validation.isValid === false && !isFocused && (
<div
className={cx('query-status-container', {
hasErrors: validation.errors.length > 0,


@@ -1,6 +1,7 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable sonarjs/cognitive-complexity */
/* eslint-disable import/named */
import { EditorView } from '@uiw/react-codemirror';
import { getKeySuggestions } from 'api/querySuggestions/getKeySuggestions';
import { getValueSuggestions } from 'api/querySuggestions/getValueSuggestion';
import { initialQueriesMap } from 'constants/queryBuilder';
@@ -151,8 +152,6 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
>;
mockedGetKeys.mockClear();
const user = userEvent.setup({ pointerEventsCheck: 0 });
render(
<QuerySearch
onChange={jest.fn() as jest.MockedFunction<(v: string) => void>}
@@ -171,8 +170,8 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
const editor = document.querySelector(CM_EDITOR_SELECTOR) as HTMLElement;
// Focus and type into the editor
await user.click(editor);
await user.type(editor, SAMPLE_KEY_TYPING);
await userEvent.click(editor);
await userEvent.type(editor, SAMPLE_KEY_TYPING);
// Wait for debounced API call (300ms debounce + some buffer)
await waitFor(() => expect(mockedGetKeys).toHaveBeenCalled(), {
@@ -187,8 +186,6 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
>;
mockedGetValues.mockClear();
const user = userEvent.setup({ pointerEventsCheck: 0 });
render(
<QuerySearch
onChange={jest.fn() as jest.MockedFunction<(v: string) => void>}
@@ -204,8 +201,8 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
});
const editor = document.querySelector(CM_EDITOR_SELECTOR) as HTMLElement;
await user.click(editor);
await user.type(editor, SAMPLE_VALUE_TYPING_INCOMPLETE);
await userEvent.click(editor);
await userEvent.type(editor, SAMPLE_VALUE_TYPING_INCOMPLETE);
// Wait for debounced API call (300ms debounce + some buffer)
await waitFor(() => expect(mockedGetValues).toHaveBeenCalled(), {
@@ -241,7 +238,6 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
it('calls provided onRun on Mod-Enter', async () => {
const onRun = jest.fn() as jest.MockedFunction<(q: string) => void>;
const user = userEvent.setup({ pointerEventsCheck: 0 });
render(
<QuerySearch
@@ -259,8 +255,8 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
});
const editor = document.querySelector(CM_EDITOR_SELECTOR) as HTMLElement;
await user.click(editor);
await user.type(editor, SAMPLE_STATUS_QUERY);
await userEvent.click(editor);
await userEvent.type(editor, SAMPLE_STATUS_QUERY);
// Use fireEvent for keyboard shortcuts as userEvent might not work well with CodeMirror
const modKey = navigator.platform.includes('Mac') ? 'metaKey' : 'ctrlKey';
@@ -280,8 +276,6 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
>;
mockedHandleRunQuery.mockClear();
const user = userEvent.setup({ pointerEventsCheck: 0 });
render(
<QuerySearch
onChange={jest.fn() as jest.MockedFunction<(v: string) => void>}
@@ -297,8 +291,8 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
});
const editor = document.querySelector(CM_EDITOR_SELECTOR) as HTMLElement;
await user.click(editor);
await user.type(editor, SAMPLE_VALUE_TYPING_COMPLETE);
await userEvent.click(editor);
await userEvent.type(editor, SAMPLE_VALUE_TYPING_COMPLETE);
// Use fireEvent for keyboard shortcuts as userEvent might not work well with CodeMirror
const modKey = navigator.platform.includes('Mac') ? 'metaKey' : 'ctrlKey';
@@ -348,4 +342,73 @@ describe('QuerySearch (Integration with Real CodeMirror)', () => {
{ timeout: 3000 },
);
});
it('handles queryData.filter.expression changes without triggering onChange', async () => {
// Spy on CodeMirror's EditorView.dispatch, which is invoked when updateEditorValue
// applies a programmatic change to the editor.
const dispatchSpy = jest.spyOn(EditorView.prototype, 'dispatch');
const initialExpression = "service.name = 'frontend'";
const updatedExpression = "service.name = 'backend'";
const onChange = jest.fn() as jest.MockedFunction<(v: string) => void>;
const initialQueryData = {
...initialQueriesMap.logs.builder.queryData[0],
filter: {
expression: initialExpression,
},
};
const { rerender } = render(
<QuerySearch
onChange={onChange}
queryData={initialQueryData}
dataSource={DataSource.LOGS}
/>,
);
// Wait for CodeMirror to initialize with the initial expression
await waitFor(
() => {
const editorContent = document.querySelector(
CM_EDITOR_SELECTOR,
) as HTMLElement;
expect(editorContent).toBeInTheDocument();
const textContent = editorContent.textContent || '';
expect(textContent).toBe(initialExpression);
},
{ timeout: 3000 },
);
// Ensure the editor is explicitly blurred (not focused)
// Blur the actual CodeMirror editor container so that QuerySearch's onBlur handler runs.
// Note: In jsdom + CodeMirror we can't reliably assert the DOM text content changes when
// the expression is updated programmatically, but we can assert that:
// 1) The component continues to render, and
// 2) No onChange is fired for programmatic updates.
const updatedQueryData = {
...initialQueryData,
filter: {
expression: updatedExpression,
},
};
// Re-render with updated queryData.filter.expression
rerender(
<QuerySearch
onChange={onChange}
queryData={updatedQueryData}
dataSource={DataSource.LOGS}
/>,
);
// updateEditorValue should have resulted in a dispatch call + onChange should not have been called
await waitFor(() => {
expect(dispatchSpy).toHaveBeenCalled();
expect(onChange).not.toHaveBeenCalled();
});
dispatchSpy.mockRestore();
});
});


@@ -163,6 +163,10 @@ function formatSingleValueForFilter(
if (trimmed === 'true' || trimmed === 'false') {
return trimmed === 'true';
}
if (isQuoted(value)) {
return unquote(value);
}
}
// Return non-string values as-is, or string values that couldn't be converted


@@ -1,3 +1,4 @@
import userEvent from '@testing-library/user-event';
import { FiltersType, QuickFiltersSource } from 'components/QuickFilters/types';
import { useGetAggregateValues } from 'hooks/queryBuilder/useGetAggregateValues';
import { useQueryBuilder } from 'hooks/queryBuilder/useQueryBuilder';
@@ -5,7 +6,7 @@ import { useGetQueryKeyValueSuggestions } from 'hooks/querySuggestions/useGetQue
import { quickFiltersAttributeValuesResponse } from 'mocks-server/__mockdata__/customQuickFilters';
import { rest, server } from 'mocks-server/server';
import { UseQueryResult } from 'react-query';
import { render, screen, userEvent, waitFor } from 'tests/test-utils';
import { render, screen, waitFor } from 'tests/test-utils';
import { SuccessResponse } from 'types/api';
import { IAttributeValuesResponse } from 'types/api/queryBuilder/getAttributesValues';
import { DataTypes } from 'types/api/queryBuilder/queryAutocompleteResponse';
@@ -41,13 +42,15 @@ interface MockFilterConfig {
type: FiltersType;
}
const SERVICE_NAME_KEY = 'service.name';
const createMockFilter = (
overrides: Partial<MockFilterConfig> = {},
): MockFilterConfig => ({
// eslint-disable-next-line sonarjs/no-duplicate-string
title: 'Service Name',
attributeKey: {
key: 'service.name',
key: SERVICE_NAME_KEY,
dataType: DataTypes.String,
type: 'resource',
},
@@ -68,7 +71,7 @@ const createMockQueryBuilderData = (hasActiveFilters = false): any => ({
? [
{
key: {
key: 'service.name',
key: SERVICE_NAME_KEY,
dataType: DataTypes.String,
type: 'resource',
},
@@ -188,4 +191,222 @@ describe('CheckboxFilter - User Flows', () => {
expect(screen.getByPlaceholderText('Filter values')).toBeInTheDocument();
});
});
it('should update query filters when a checkbox is clicked', async () => {
const redirectWithQueryBuilderData = jest.fn();
// Start with no active filters so clicking a checkbox creates one
mockUseQueryBuilder.mockReturnValue({
...createMockQueryBuilderData(false),
redirectWithQueryBuilderData,
} as any);
const mockFilter = createMockFilter({ defaultOpen: true });
render(
<CheckboxFilter
filter={mockFilter}
source={QuickFiltersSource.LOGS_EXPLORER}
/>,
);
// Wait for checkboxes to render
await waitFor(() => {
expect(screen.getAllByRole('checkbox')).toHaveLength(4);
});
const checkboxes = screen.getAllByRole('checkbox');
// User unchecks the first value (`mq-kafka`)
await userEvent.click(checkboxes[0]);
// Composite query params (query builder data) should be updated via redirectWithQueryBuilderData
expect(redirectWithQueryBuilderData).toHaveBeenCalledTimes(1);
const [updatedQuery] = redirectWithQueryBuilderData.mock.calls[0];
const updatedFilters = updatedQuery.builder.queryData[0].filters;
expect(updatedFilters.items).toHaveLength(1);
expect(updatedFilters.items[0].key.key).toBe(SERVICE_NAME_KEY);
// When unchecking from an "all selected" state, we use a NOT_IN filter for that value
expect(updatedFilters.items[0].op).toBe('not in');
expect(updatedFilters.items[0].value).toBe('mq-kafka');
});
it('should set an IN filter with only the clicked value when using Only', async () => {
const redirectWithQueryBuilderData = jest.fn();
// Existing filter: service.name IN ['mq-kafka', 'otel-demo']
mockUseQueryBuilder.mockReturnValue({
lastUsedQuery: 0,
currentQuery: {
builder: {
queryData: [
{
filters: {
items: [
{
key: {
key: SERVICE_NAME_KEY,
dataType: DataTypes.String,
type: 'resource',
},
op: 'in',
value: ['mq-kafka', 'otel-demo'],
},
],
op: 'AND',
},
},
],
},
},
redirectWithQueryBuilderData,
} as any);
const mockFilter = createMockFilter({ defaultOpen: true });
render(
<CheckboxFilter
filter={mockFilter}
source={QuickFiltersSource.LOGS_EXPLORER}
/>,
);
// Wait for values to render
await waitFor(() => {
expect(screen.getByText('mq-kafka')).toBeInTheDocument();
});
// Click on the value label to trigger the "Only" behavior
await userEvent.click(screen.getByText('mq-kafka'));
expect(redirectWithQueryBuilderData).toHaveBeenCalledTimes(1);
const [updatedQuery] = redirectWithQueryBuilderData.mock.calls[0];
const updatedFilters = updatedQuery.builder.queryData[0].filters;
expect(updatedFilters.items).toHaveLength(1);
expect(updatedFilters.items[0].key.key).toBe(SERVICE_NAME_KEY);
expect(updatedFilters.items[0].op).toBe('in');
expect(updatedFilters.items[0].value).toBe('mq-kafka');
});
it('should clear filters for the attribute when using All', async () => {
const redirectWithQueryBuilderData = jest.fn();
// Existing filter: service.name IN ['mq-kafka']
mockUseQueryBuilder.mockReturnValue({
lastUsedQuery: 0,
currentQuery: {
builder: {
queryData: [
{
filters: {
items: [
{
key: {
key: SERVICE_NAME_KEY,
dataType: DataTypes.String,
type: 'resource',
},
op: 'in',
value: ['mq-kafka'],
},
],
op: 'AND',
},
},
],
},
},
redirectWithQueryBuilderData,
} as any);
const mockFilter = createMockFilter({ defaultOpen: true });
render(
<CheckboxFilter
filter={mockFilter}
source={QuickFiltersSource.LOGS_EXPLORER}
/>,
);
await waitFor(() => {
expect(screen.getByText('mq-kafka')).toBeInTheDocument();
});
// Only one value is selected, so clicking it should switch to "All" (no filter for this key)
await userEvent.click(screen.getByText('mq-kafka'));
expect(redirectWithQueryBuilderData).toHaveBeenCalledTimes(1);
const [updatedQuery] = redirectWithQueryBuilderData.mock.calls[0];
const updatedFilters = updatedQuery.builder.queryData[0].filters;
const filtersForServiceName = updatedFilters.items.filter(
(item: any) => item.key?.key === SERVICE_NAME_KEY,
);
expect(filtersForServiceName).toHaveLength(0);
});
it('should extend an existing IN filter when checking an additional value', async () => {
const redirectWithQueryBuilderData = jest.fn();
// Existing filter: service.name IN 'mq-kafka'
mockUseQueryBuilder.mockReturnValue({
lastUsedQuery: 0,
currentQuery: {
builder: {
queryData: [
{
filters: {
items: [
{
key: {
key: SERVICE_NAME_KEY,
dataType: DataTypes.String,
type: 'resource',
},
op: 'in',
value: 'mq-kafka',
},
],
op: 'AND',
},
},
],
},
},
redirectWithQueryBuilderData,
} as any);
const mockFilter = createMockFilter({ defaultOpen: true });
render(
<CheckboxFilter
filter={mockFilter}
source={QuickFiltersSource.LOGS_EXPLORER}
/>,
);
// Wait for checkboxes to render
await waitFor(() => {
expect(screen.getAllByRole('checkbox')).toHaveLength(4);
});
const checkboxes = screen.getAllByRole('checkbox');
// First checkbox corresponds to 'mq-kafka' (already selected),
// second will be 'otel-demo' which we now select additionally.
await userEvent.click(checkboxes[1]);
expect(redirectWithQueryBuilderData).toHaveBeenCalledTimes(1);
const [updatedQuery] = redirectWithQueryBuilderData.mock.calls[0];
const updatedFilters = updatedQuery.builder.queryData[0].filters;
const [filterForServiceName] = updatedFilters.items;
expect(filterForServiceName.key.key).toBe(SERVICE_NAME_KEY);
expect(filterForServiceName.op).toBe('in');
expect(filterForServiceName.value).toEqual(['mq-kafka', 'otel-demo']);
});
});


@@ -0,0 +1,205 @@
import { screen } from '@testing-library/react';
import { render } from 'tests/test-utils';
import ValueGraph from '../index';
import { getBackgroundColorAndThresholdCheck } from '../utils';
// Mock the utils module
jest.mock('../utils', () => ({
getBackgroundColorAndThresholdCheck: jest.fn(() => ({
threshold: {} as any,
isConflictingThresholds: false,
})),
}));
const mockGetBackgroundColorAndThresholdCheck = getBackgroundColorAndThresholdCheck as jest.MockedFunction<
typeof getBackgroundColorAndThresholdCheck
>;
const TEST_ID_VALUE_GRAPH_TEXT = 'value-graph-text';
const TEST_ID_VALUE_GRAPH_PREFIX_UNIT = 'value-graph-prefix-unit';
const TEST_ID_VALUE_GRAPH_SUFFIX_UNIT = 'value-graph-suffix-unit';
describe('ValueGraph', () => {
beforeEach(() => {
jest.clearAllMocks();
});
it('renders the numeric value correctly', () => {
const { getByTestId } = render(
<ValueGraph value="42" rawValue={42} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('42');
});
it('renders value with suffix unit', () => {
const { getByTestId } = render(
<ValueGraph value="42ms" rawValue={42} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('42');
expect(getByTestId(TEST_ID_VALUE_GRAPH_SUFFIX_UNIT)).toHaveTextContent('ms');
});
it('renders value with prefix unit', () => {
const { getByTestId } = render(
<ValueGraph value="$100" rawValue={100} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('100');
expect(getByTestId(TEST_ID_VALUE_GRAPH_PREFIX_UNIT)).toHaveTextContent('$');
});
it('renders value with both prefix and suffix units', () => {
const { getByTestId } = render(
<ValueGraph value="$100USD" rawValue={100} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('100');
expect(getByTestId(TEST_ID_VALUE_GRAPH_PREFIX_UNIT)).toHaveTextContent('$');
expect(getByTestId(TEST_ID_VALUE_GRAPH_SUFFIX_UNIT)).toHaveTextContent('USD');
});
it('renders value with K suffix', () => {
const { getByTestId } = render(
<ValueGraph value="1.5K" rawValue={1500} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('1.5K');
});
it('applies text color when threshold format is Text', () => {
mockGetBackgroundColorAndThresholdCheck.mockReturnValue({
threshold: {
thresholdFormat: 'Text',
thresholdColor: 'red',
} as any,
isConflictingThresholds: false,
});
const { getByTestId } = render(
<ValueGraph value="42" rawValue={42} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveStyle({ color: 'red' });
});
it('applies background color when threshold format is Background', () => {
mockGetBackgroundColorAndThresholdCheck.mockReturnValue({
threshold: {
thresholdFormat: 'Background',
thresholdColor: 'blue',
} as any,
isConflictingThresholds: false,
});
const { container } = render(
<ValueGraph value="42" rawValue={42} thresholds={[]} />,
);
const containerElement = container.querySelector('.value-graph-container');
expect(containerElement).toHaveStyle({ backgroundColor: 'blue' });
});
it('displays conflicting thresholds indicator when multiple thresholds match', () => {
mockGetBackgroundColorAndThresholdCheck.mockReturnValue({
threshold: {
thresholdFormat: 'Text',
thresholdColor: 'red',
} as any,
isConflictingThresholds: true,
});
const { getByTestId } = render(
<ValueGraph value="42" rawValue={42} thresholds={[]} />,
);
expect(getByTestId('conflicting-thresholds')).toBeInTheDocument();
});
it('does not display conflicting thresholds indicator when no conflict', () => {
mockGetBackgroundColorAndThresholdCheck.mockReturnValue({
threshold: {} as any,
isConflictingThresholds: false,
});
render(<ValueGraph value="42" rawValue={42} thresholds={[]} />);
expect(
screen.queryByTestId('conflicting-thresholds'),
).not.toBeInTheDocument();
});
it('applies text color to units when threshold format is Text', () => {
mockGetBackgroundColorAndThresholdCheck.mockReturnValue({
threshold: {
thresholdFormat: 'Text',
thresholdColor: 'green',
} as any,
isConflictingThresholds: false,
});
render(<ValueGraph value="42ms" rawValue={42} thresholds={[]} />);
const unitElement = screen.getByText('ms');
expect(unitElement).toHaveStyle({ color: 'green' });
});
it('renders decimal values correctly', () => {
const { getByTestId } = render(
<ValueGraph value="42.5" rawValue={42.5} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('42.5');
});
it('handles values with M suffix', () => {
const { getByTestId } = render(
<ValueGraph value="1.2M" rawValue={1200000} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('1.2M');
});
it('handles values with B suffix', () => {
const { getByTestId } = render(
<ValueGraph value="2.3B" rawValue={2300000000} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('2.3B');
});
it('handles scientific notation values', () => {
const { getByTestId } = render(
<ValueGraph value="1e-9" rawValue={1e-9} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('1e-9');
});
it('handles scientific notation with suffix unit', () => {
const { getByTestId } = render(
<ValueGraph value="1e-9%" rawValue={1e-9} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('1e-9');
expect(getByTestId(TEST_ID_VALUE_GRAPH_SUFFIX_UNIT)).toHaveTextContent('%');
});
it('handles scientific notation with uppercase E', () => {
const { getByTestId } = render(
<ValueGraph value="1E-9" rawValue={1e-9} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('1E-9');
});
it('handles scientific notation with positive exponent', () => {
const { getByTestId } = render(
<ValueGraph value="1e+9" rawValue={1e9} thresholds={[]} />,
);
expect(getByTestId(TEST_ID_VALUE_GRAPH_TEXT)).toHaveTextContent('1e+9');
});
});

View File

@@ -3,11 +3,39 @@ import './ValueGraph.styles.scss';
import { ExclamationCircleFilled } from '@ant-design/icons';
import { Tooltip, Typography } from 'antd';
import { ThresholdProps } from 'container/NewWidget/RightContainer/Threshold/types';
import { useEffect, useRef, useState } from 'react';
import { useEffect, useMemo, useRef, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { getBackgroundColorAndThresholdCheck } from './utils';
function Unit({
type,
unit,
threshold,
fontSize,
}: {
type: 'prefix' | 'suffix';
unit: string;
threshold: ThresholdProps;
fontSize: string;
}): JSX.Element {
return (
<Typography.Text
className="value-graph-unit"
data-testid={`value-graph-${type}-unit`}
style={{
color:
threshold.thresholdFormat === 'Text'
? threshold.thresholdColor
: undefined,
fontSize: `calc(${fontSize} * 0.7)`,
}}
>
{unit}
</Typography.Text>
);
}
function ValueGraph({
value,
rawValue,
@@ -17,10 +45,16 @@ function ValueGraph({
const containerRef = useRef<HTMLDivElement>(null);
const [fontSize, setFontSize] = useState('2.5vw');
// Parse value to separate number and unit (assuming unit is at the end)
const matches = value.match(/([\d.]+[KMB]?)(.*)$/);
const numericValue = matches?.[1] || value;
const unit = matches?.[2]?.trim() || '';
const { numericValue, prefixUnit, suffixUnit } = useMemo(() => {
const matches = value.match(
/^([^\d.]*)?([\d.]+(?:[eE][+-]?[\d]+)?[KMB]?)([^\d.]*)?$/,
);
return {
numericValue: matches?.[2] || value,
prefixUnit: matches?.[1]?.trim() || '',
suffixUnit: matches?.[3]?.trim() || '',
};
}, [value]);
// Adjust font size based on container size
useEffect(() => {
@@ -65,8 +99,17 @@ function ValueGraph({
}}
>
<div className="value-text-container">
{prefixUnit && (
<Unit
type="prefix"
unit={prefixUnit}
threshold={threshold}
fontSize={fontSize}
/>
)}
<Typography.Text
className="value-graph-text"
data-testid="value-graph-text"
style={{
color:
threshold.thresholdFormat === 'Text'
@@ -77,19 +120,13 @@ function ValueGraph({
>
{numericValue}
</Typography.Text>
{unit && (
<Typography.Text
className="value-graph-unit"
style={{
color:
threshold.thresholdFormat === 'Text'
? threshold.thresholdColor
: undefined,
fontSize: `calc(${fontSize} * 0.7)`,
}}
>
{unit}
</Typography.Text>
{suffixUnit && (
<Unit
type="suffix"
unit={suffixUnit}
threshold={threshold}
fontSize={fontSize}
/>
)}
</div>
{isConflictingThresholds && (
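The regex above splits a rendered value into an optional prefix unit, a numeric part (including scientific notation and K/M/B suffixes), and an optional suffix unit. A minimal standalone sketch of the same parsing, with behavior matching the tests earlier in this diff (parseDisplayValue is an illustrative name, not part of the change):

const VALUE_PATTERN = /^([^\d.]*)?([\d.]+(?:[eE][+-]?[\d]+)?[KMB]?)([^\d.]*)?$/;

function parseDisplayValue(
  value: string,
): { prefixUnit: string; numericValue: string; suffixUnit: string } {
  const matches = value.match(VALUE_PATTERN);
  return {
    prefixUnit: matches?.[1]?.trim() || '',
    numericValue: matches?.[2] || value,
    suffixUnit: matches?.[3]?.trim() || '',
  };
}

parseDisplayValue('$100 USD'); // { prefixUnit: '$', numericValue: '100', suffixUnit: 'USD' }
parseDisplayValue('1e-9%'); // { prefixUnit: '', numericValue: '1e-9', suffixUnit: '%' }
parseDisplayValue('1.5K'); // { prefixUnit: '', numericValue: '1.5K', suffixUnit: '' }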

View File

@@ -0,0 +1,469 @@
import { render as rtlRender, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { PANEL_TYPES } from 'constants/queryBuilder';
import { RowData } from 'lib/query/createTableColumnsFromQuery';
import { AppContext } from 'providers/App/App';
import { IAppContext } from 'providers/App/types';
import React, { MutableRefObject } from 'react';
import { QueryClient, QueryClientProvider, UseQueryResult } from 'react-query';
import { Provider } from 'react-redux';
import { MemoryRouter } from 'react-router-dom';
import configureStore from 'redux-mock-store';
import thunk from 'redux-thunk';
import { SuccessResponse, Warning } from 'types/api';
import { Widgets } from 'types/api/dashboard/getAll';
import { MetricRangePayloadProps } from 'types/api/metrics/getQueryRange';
import { EQueryType } from 'types/common/dashboard';
import { ROLES } from 'types/roles';
import { MenuItemKeys } from '../contants';
import WidgetHeader from '../index';
const TEST_WIDGET_TITLE = 'Test Widget';
const TABLE_WIDGET_TITLE = 'Table Widget';
const WIDGET_HEADER_SEARCH = 'widget-header-search';
const WIDGET_HEADER_SEARCH_INPUT = 'widget-header-search-input';
const TEST_WIDGET_TITLE_RESOLVED = 'Test Widget Title';
const mockStore = configureStore([thunk]);
const createMockStore = (): ReturnType<typeof mockStore> =>
mockStore({
app: {
role: 'ADMIN',
user: {
userId: 'test-user-id',
email: 'test@signoz.io',
name: 'TestUser',
},
isLoggedIn: true,
org: [],
},
globalTime: {
minTime: '2023-01-01T00:00:00Z',
maxTime: '2023-01-02T00:00:00Z',
},
});
const queryClient = new QueryClient({
defaultOptions: {
queries: {
refetchOnWindowFocus: false,
retry: false,
},
},
});
const createMockAppContext = (): Partial<IAppContext> => ({
user: {
accessJwt: '',
refreshJwt: '',
id: '',
email: '',
displayName: '',
createdAt: 0,
organization: '',
orgId: '',
role: 'ADMIN' as ROLES,
},
});
const render = (ui: React.ReactElement): ReturnType<typeof rtlRender> =>
rtlRender(
<MemoryRouter>
<QueryClientProvider client={queryClient}>
<Provider store={createMockStore()}>
<AppContext.Provider value={createMockAppContext() as IAppContext}>
{ui}
</AppContext.Provider>
</Provider>
</QueryClientProvider>
</MemoryRouter>,
);
jest.mock('hooks/queryBuilder/useCreateAlerts', () => ({
__esModule: true,
default: jest.fn(() => jest.fn()),
}));
jest.mock('hooks/dashboard/useGetResolvedText', () => {
// eslint-disable-next-line sonarjs/no-duplicate-string
const TEST_WIDGET_TITLE_RESOLVED = 'Test Widget Title';
return {
__esModule: true,
default: jest.fn(() => ({
truncatedText: TEST_WIDGET_TITLE_RESOLVED,
fullText: TEST_WIDGET_TITLE_RESOLVED,
})),
};
});
jest.mock('lucide-react', () => ({
CircleX: (): JSX.Element => <svg data-testid="lucide-circle-x" />,
TriangleAlert: (): JSX.Element => <svg data-testid="lucide-triangle-alert" />,
X: (): JSX.Element => <svg data-testid="lucide-x" />,
}));
jest.mock('antd', () => ({
...jest.requireActual('antd'),
Spin: (): JSX.Element => <div data-testid="antd-spin" />,
}));
const mockWidget: Widgets = {
id: 'test-widget-id',
title: TEST_WIDGET_TITLE,
description: 'Test Description',
panelTypes: PANEL_TYPES.TIME_SERIES,
query: {
builder: {
queryData: [],
queryFormulas: [],
queryTraceOperator: [],
},
promql: [],
clickhouse_sql: [],
id: 'query-id',
queryType: 'builder' as EQueryType,
},
timePreferance: 'GLOBAL_TIME',
opacity: '',
nullZeroValues: '',
yAxisUnit: '',
fillSpans: false,
softMin: null,
softMax: null,
selectedLogFields: [],
selectedTracesFields: [],
};
const mockQueryResponse = ({
data: {
payload: {
data: {
result: [],
resultType: '',
},
},
statusCode: 200,
message: 'success',
error: null,
},
isLoading: false,
isError: false,
error: null,
isFetching: false,
} as unknown) as UseQueryResult<
SuccessResponse<MetricRangePayloadProps, unknown> & {
warning?: Warning;
},
Error
>;
describe('WidgetHeader', () => {
const mockOnView = jest.fn();
const mockSetSearchTerm = jest.fn();
const tableProcessedDataRef: MutableRefObject<RowData[]> = {
current: [
{
timestamp: 1234567890,
key: 'key1',
col1: 'val1',
col2: 'val2',
},
],
};
beforeEach(() => {
jest.clearAllMocks();
});
it('renders widget header with title', () => {
render(
<WidgetHeader
title={TEST_WIDGET_TITLE}
widget={mockWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
expect(screen.getByText(TEST_WIDGET_TITLE_RESOLVED)).toBeInTheDocument();
});
it('returns null for empty widget', () => {
const emptyWidget = {
...mockWidget,
id: PANEL_TYPES.EMPTY_WIDGET,
};
const { container } = render(
<WidgetHeader
title="Empty Widget"
widget={emptyWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
expect(container.innerHTML).toBe('');
});
it('shows search input for table panels', async () => {
const tableWidget = {
...mockWidget,
panelTypes: PANEL_TYPES.TABLE,
};
render(
<WidgetHeader
title={TABLE_WIDGET_TITLE}
widget={tableWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
const searchIcon = screen.getByTestId(WIDGET_HEADER_SEARCH);
expect(searchIcon).toBeInTheDocument();
await userEvent.click(searchIcon);
expect(screen.getByTestId(WIDGET_HEADER_SEARCH_INPUT)).toBeInTheDocument();
});
it('handles search input changes and closing', async () => {
const tableWidget = {
...mockWidget,
panelTypes: PANEL_TYPES.TABLE,
};
render(
<WidgetHeader
title={TABLE_WIDGET_TITLE}
widget={tableWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
const searchIcon = screen.getByTestId(WIDGET_HEADER_SEARCH);
await userEvent.click(searchIcon);
const searchInput = screen.getByTestId(WIDGET_HEADER_SEARCH_INPUT);
await userEvent.type(searchInput, 'test search');
expect(mockSetSearchTerm).toHaveBeenCalledWith('test search');
const closeButton = screen
.getByTestId(WIDGET_HEADER_SEARCH_INPUT)
.parentElement?.querySelector('.search-header-icons');
if (closeButton) {
await userEvent.click(closeButton);
expect(mockSetSearchTerm).toHaveBeenCalledWith('');
}
});
it('shows error icon when query has error', () => {
const errorResponse = {
...mockQueryResponse,
isError: true as const,
error: { message: 'Test error' } as Error,
data: undefined,
} as UseQueryResult<
SuccessResponse<MetricRangePayloadProps, unknown> & {
warning?: Warning;
},
Error
>;
render(
<WidgetHeader
title={TEST_WIDGET_TITLE}
widget={mockWidget}
onView={mockOnView}
parentHover={false}
queryResponse={errorResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
// check if CircleX icon is rendered
const circleXIcon = screen.getByTestId('lucide-circle-x');
expect(circleXIcon).toBeInTheDocument();
});
it('shows warning icon when query has warning', () => {
const warningData = mockQueryResponse.data
? {
...mockQueryResponse.data,
warning: {
code: 'WARNING_CODE',
message: 'Test warning',
url: 'https://example.com',
warnings: [{ message: 'Test warning' }],
} as Warning,
}
: undefined;
const warningResponse = {
...mockQueryResponse,
data: warningData,
} as UseQueryResult<
SuccessResponse<MetricRangePayloadProps, unknown> & {
warning?: Warning;
},
Error
>;
render(
<WidgetHeader
title={TEST_WIDGET_TITLE}
widget={mockWidget}
onView={mockOnView}
parentHover={false}
queryResponse={warningResponse}
isWarning
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
const triangleAlertIcon = screen.getByTestId('lucide-triangle-alert');
expect(triangleAlertIcon).toBeInTheDocument();
});
it('shows spinner when fetching response', () => {
const fetchingResponse = {
...mockQueryResponse,
isFetching: true,
isLoading: true,
} as UseQueryResult<
SuccessResponse<MetricRangePayloadProps, unknown> & {
warning?: Warning;
},
Error
>;
render(
<WidgetHeader
title={TEST_WIDGET_TITLE}
widget={mockWidget}
onView={mockOnView}
parentHover={false}
queryResponse={fetchingResponse}
isWarning={false}
isFetchingResponse
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
const antSpin = screen.getByTestId('antd-spin');
expect(antSpin).toBeInTheDocument();
});
it('renders menu options icon', () => {
render(
<WidgetHeader
title={TEST_WIDGET_TITLE}
widget={mockWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
headerMenuList={[MenuItemKeys.View]}
/>,
);
const moreOptionsIcon = screen.getByTestId('widget-header-options');
expect(moreOptionsIcon).toBeInTheDocument();
});
it('shows search icon for table panels', () => {
const tableWidget = {
...mockWidget,
panelTypes: PANEL_TYPES.TABLE,
};
render(
<WidgetHeader
title={TABLE_WIDGET_TITLE}
widget={tableWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
const searchIcon = screen.getByTestId(WIDGET_HEADER_SEARCH);
expect(searchIcon).toBeInTheDocument();
});
it('does not show search icon for non-table panels', () => {
render(
<WidgetHeader
title={TEST_WIDGET_TITLE}
widget={mockWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
/>,
);
const searchIcon = screen.queryByTestId(WIDGET_HEADER_SEARCH);
expect(searchIcon).not.toBeInTheDocument();
});
it('renders threshold when provided', () => {
const threshold = <div data-testid="threshold">Threshold Component</div>;
render(
<WidgetHeader
title={TEST_WIDGET_TITLE}
widget={mockWidget}
onView={mockOnView}
parentHover={false}
queryResponse={mockQueryResponse}
isWarning={false}
isFetchingResponse={false}
tableProcessedDataRef={tableProcessedDataRef}
setSearchTerm={mockSetSearchTerm}
threshold={threshold}
/>,
);
expect(screen.getByTestId('threshold')).toBeInTheDocument();
});
});

View File

@@ -35,6 +35,7 @@ import { SuccessResponse, Warning } from 'types/api';
import { Widgets } from 'types/api/dashboard/getAll';
import APIError from 'types/api/error';
import { MetricRangePayloadProps } from 'types/api/metrics/getQueryRange';
import { buildAbsolutePath } from 'utils/app';
import { errorTooltipPosition } from './config';
import { MENUITEM_KEYS_VS_LABELS, MenuItemKeys } from './contants';
@@ -87,7 +88,10 @@ function WidgetHeader({
QueryParams.compositeQuery,
encodeURIComponent(JSON.stringify(widget.query)),
);
const generatedUrl = `${window.location.pathname}/new?${urlQuery}`;
const generatedUrl = buildAbsolutePath({
relativePath: 'new',
urlQueryString: urlQuery.toString(),
});
safeNavigate(generatedUrl);
}, [safeNavigate, urlQuery, widget.id, widget.panelTypes, widget.query]);
@@ -240,6 +244,7 @@ function WidgetHeader({
onClick={(e): void => {
e.stopPropagation();
e.preventDefault();
setSearchTerm('');
setShowGlobalSearch(false);
}}
className="search-header-icons"

View File

@@ -0,0 +1,366 @@
import { PANEL_TYPES } from 'constants/queryBuilder';
import { useGetPanelTypesQueryParam } from 'hooks/queryBuilder/useGetPanelTypesQueryParam';
import { useShareBuilderUrl } from 'hooks/queryBuilder/useShareBuilderUrl';
import { ExplorerViews } from 'pages/LogsExplorer/utils';
import { cleanup, render, screen, waitFor } from 'tests/test-utils';
import { DataTypes } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { Query, QueryState } from 'types/api/queryBuilder/queryBuilderData';
import { EQueryType } from 'types/common/dashboard';
import { DataSource, QueryBuilderContextType } from 'types/common/queryBuilder';
import { explorerViewToPanelType } from 'utils/explorerUtils';
import LogExplorerQuerySection from './index';
const CM_EDITOR_SELECTOR = '.cm-editor .cm-content';
const QUERY_AGGREGATION_TEST_ID = 'query-aggregation-container';
const QUERY_ADDON_TEST_ID = 'query-add-ons';
// Mock DOM APIs that CodeMirror needs
beforeAll(() => {
// Mock getClientRects and getBoundingClientRect for Range objects
const mockRect: DOMRect = {
width: 100,
height: 20,
top: 0,
left: 0,
right: 100,
bottom: 20,
x: 0,
y: 0,
toJSON: (): DOMRect => mockRect,
} as DOMRect;
// Create a minimal Range mock with only what CodeMirror actually uses
const createMockRange = (): Range => {
let startContainer: Node = document.createTextNode('');
let endContainer: Node = document.createTextNode('');
let startOffset = 0;
let endOffset = 0;
const rectList = {
length: 1,
item: (index: number): DOMRect | null => (index === 0 ? mockRect : null),
0: mockRect,
};
const mockRange = {
// CodeMirror uses these for text measurement
getClientRects: (): DOMRectList => (rectList as unknown) as DOMRectList,
getBoundingClientRect: (): DOMRect => mockRect,
// CodeMirror calls these to set up text ranges
setStart: (node: Node, offset: number): void => {
startContainer = node;
startOffset = offset;
},
setEnd: (node: Node, offset: number): void => {
endContainer = node;
endOffset = offset;
},
// Minimal Range properties (TypeScript requires these)
get startContainer(): Node {
return startContainer;
},
get endContainer(): Node {
return endContainer;
},
get startOffset(): number {
return startOffset;
},
get endOffset(): number {
return endOffset;
},
get collapsed(): boolean {
return startContainer === endContainer && startOffset === endOffset;
},
commonAncestorContainer: document.body,
};
return (mockRange as unknown) as Range;
};
// Mock document.createRange to return a new Range instance each time
document.createRange = (): Range => createMockRange();
// Mock getBoundingClientRect for elements
Element.prototype.getBoundingClientRect = (): DOMRect => mockRect;
});
jest.mock('hooks/useDarkMode', () => ({
useIsDarkMode: (): boolean => false,
}));
jest.mock('providers/Dashboard/Dashboard', () => ({
useDashboard: (): { selectedDashboard: undefined } => ({
selectedDashboard: undefined,
}),
}));
jest.mock('api/querySuggestions/getKeySuggestions', () => ({
getKeySuggestions: jest.fn().mockResolvedValue({
data: {
data: { keys: {} },
},
}),
}));
jest.mock('api/querySuggestions/getValueSuggestion', () => ({
getValueSuggestions: jest.fn().mockResolvedValue({
data: { data: { values: { stringValues: [], numberValues: [] } } },
}),
}));
// Mock the hooks
jest.mock('hooks/queryBuilder/useGetPanelTypesQueryParam');
jest.mock('hooks/queryBuilder/useShareBuilderUrl');
const mockUseGetPanelTypesQueryParam = jest.mocked(useGetPanelTypesQueryParam);
const mockUseShareBuilderUrl = jest.mocked(useShareBuilderUrl);
const mockUpdateAllQueriesOperators = jest.fn() as jest.MockedFunction<
(query: Query, panelType: PANEL_TYPES, dataSource: DataSource) => Query
>;
const mockResetQuery = jest.fn() as jest.MockedFunction<
(newCurrentQuery?: QueryState) => void
>;
const mockRedirectWithQueryBuilderData = jest.fn() as jest.MockedFunction<
(query: Query) => void
>;
// Create a mock query that we'll use to verify persistence
const createMockQuery = (filterExpression?: string): Query => ({
id: 'test-query-id',
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
{
aggregateAttribute: {
id: 'body--string----false',
dataType: DataTypes.String,
key: 'body',
type: '',
},
aggregateOperator: 'count',
dataSource: DataSource.LOGS,
disabled: false,
expression: 'A',
filters: {
items: [],
op: 'AND',
},
filter: filterExpression
? {
expression: filterExpression,
}
: undefined,
functions: [],
groupBy: [
{
key: 'cloud.account.id',
type: 'tag',
},
],
having: [],
legend: '',
limit: null,
orderBy: [{ columnName: 'timestamp', order: 'desc' }],
pageSize: 0,
queryName: 'A',
reduceTo: 'avg',
stepInterval: 60,
},
],
queryFormulas: [],
queryTraceOperator: [],
},
clickhouse_sql: [],
promql: [],
});
// Helper function to verify CodeMirror content
const verifyCodeMirrorContent = async (
expectedFilterExpression: string,
): Promise<void> => {
await waitFor(
() => {
const editorContent = document.querySelector(
CM_EDITOR_SELECTOR,
) as HTMLElement;
expect(editorContent).toBeInTheDocument();
const textContent = editorContent.textContent || '';
expect(textContent).toBe(expectedFilterExpression);
},
{ timeout: 3000 },
);
};
const VIEWS_TO_TEST = [
ExplorerViews.LIST,
ExplorerViews.TIMESERIES,
ExplorerViews.TABLE,
];
describe('LogExplorerQuerySection', () => {
let mockQuery: Query;
let mockQueryBuilderContext: Partial<QueryBuilderContextType>;
beforeEach(() => {
jest.clearAllMocks();
mockQuery = createMockQuery();
// Mock the return value of updateAllQueriesOperators to return the same query
mockUpdateAllQueriesOperators.mockReturnValue(mockQuery);
// Setup query builder context mock
mockQueryBuilderContext = {
currentQuery: mockQuery,
updateAllQueriesOperators: mockUpdateAllQueriesOperators,
resetQuery: mockResetQuery,
redirectWithQueryBuilderData: mockRedirectWithQueryBuilderData,
panelType: PANEL_TYPES.LIST,
initialDataSource: DataSource.LOGS,
addNewBuilderQuery: jest.fn() as jest.MockedFunction<() => void>,
addNewFormula: jest.fn() as jest.MockedFunction<() => void>,
handleSetConfig: jest.fn() as jest.MockedFunction<
(panelType: PANEL_TYPES, dataSource: DataSource | null) => void
>,
addTraceOperator: jest.fn() as jest.MockedFunction<() => void>,
};
// Mock useGetPanelTypesQueryParam
mockUseGetPanelTypesQueryParam.mockReturnValue(PANEL_TYPES.LIST);
// Mock useShareBuilderUrl
mockUseShareBuilderUrl.mockImplementation(() => {});
});
afterEach(() => {
jest.clearAllMocks();
});
it('should maintain query state across multiple view changes', () => {
const { rerender } = render(
<LogExplorerQuerySection selectedView={ExplorerViews.LIST} />,
undefined,
{
queryBuilderOverrides: mockQueryBuilderContext as QueryBuilderContextType,
},
);
const initialQuery = mockQueryBuilderContext.currentQuery;
VIEWS_TO_TEST.forEach((view) => {
rerender(<LogExplorerQuerySection selectedView={view} />);
expect(mockQueryBuilderContext.currentQuery).toEqual(initialQuery);
});
});
it('should persist filter expressions across view changes', async () => {
// Test with a more complex filter expression
const complexFilter =
"(service.name = 'api-gateway' OR service.name = 'backend') AND http.status_code IN [500, 502, 503] AND NOT error = 'timeout'";
const queryWithComplexFilter = createMockQuery(complexFilter);
const contextWithComplexFilter: Partial<QueryBuilderContextType> = {
...mockQueryBuilderContext,
currentQuery: queryWithComplexFilter,
};
const { rerender } = render(
<LogExplorerQuerySection selectedView={ExplorerViews.LIST} />,
undefined,
{
queryBuilderOverrides: contextWithComplexFilter as QueryBuilderContextType,
},
);
// forEach ignores async callbacks, so use for...of to actually await each assertion
for (const view of VIEWS_TO_TEST) {
rerender(<LogExplorerQuerySection selectedView={view} />);
// eslint-disable-next-line no-await-in-loop
await verifyCodeMirrorContent(complexFilter);
}
});
it('should render QueryAggregation and QueryAddOns when switching from LIST to TIMESERIES or TABLE view', async () => {
// Helper function to verify components are rendered
const verifyComponentsRendered = async (): Promise<void> => {
await waitFor(
() => {
expect(screen.getByTestId(QUERY_AGGREGATION_TEST_ID)).toBeInTheDocument();
},
{ timeout: 3000 },
);
await waitFor(
() => {
expect(screen.getByTestId(QUERY_ADDON_TEST_ID)).toBeInTheDocument();
},
{ timeout: 3000 },
);
};
// Start with LIST view - QueryAggregation and QueryAddOns should NOT be rendered
mockUseGetPanelTypesQueryParam.mockReturnValue(PANEL_TYPES.LIST);
const contextWithList: Partial<QueryBuilderContextType> = {
...mockQueryBuilderContext,
panelType: PANEL_TYPES.LIST,
};
render(
<LogExplorerQuerySection selectedView={ExplorerViews.LIST} />,
undefined,
{
queryBuilderOverrides: contextWithList as QueryBuilderContextType,
},
);
// Verify QueryAggregation is NOT rendered in LIST view
expect(
screen.queryByTestId(QUERY_AGGREGATION_TEST_ID),
).not.toBeInTheDocument();
// Verify QueryAddOns is NOT rendered in LIST view (check for one of the add-on tabs)
expect(screen.queryByTestId(QUERY_ADDON_TEST_ID)).not.toBeInTheDocument();
cleanup();
// Switch to TIMESERIES view
const timeseriesPanelType = explorerViewToPanelType[ExplorerViews.TIMESERIES];
mockUseGetPanelTypesQueryParam.mockReturnValue(timeseriesPanelType);
const contextWithTimeseries: Partial<QueryBuilderContextType> = {
...mockQueryBuilderContext,
panelType: timeseriesPanelType,
};
render(
<LogExplorerQuerySection selectedView={ExplorerViews.TIMESERIES} />,
undefined,
{
queryBuilderOverrides: contextWithTimeseries as QueryBuilderContextType,
},
);
// Verify QueryAggregation and QueryAddOns are rendered
await verifyComponentsRendered();
cleanup();
// Switch to TABLE view
const tablePanelType = explorerViewToPanelType[ExplorerViews.TABLE];
mockUseGetPanelTypesQueryParam.mockReturnValue(tablePanelType);
const contextWithTable: Partial<QueryBuilderContextType> = {
...mockQueryBuilderContext,
panelType: tablePanelType,
};
render(
<LogExplorerQuerySection selectedView={ExplorerViews.TABLE} />,
undefined,
{
queryBuilderOverrides: contextWithTable as QueryBuilderContextType,
},
);
// Verify QueryAggregation and QueryAddOns are still rendered in TABLE view
await verifyComponentsRendered();
});
});

View File

@@ -501,7 +501,7 @@ function NewWidget({
stackedBarChart: selectedWidget?.stackedBarChart || false,
yAxisUnit: selectedWidget?.yAxisUnit,
decimalPrecision:
selectedWidget?.decimalPrecision || PrecisionOptionsEnum.TWO,
selectedWidget?.decimalPrecision ?? PrecisionOptionsEnum.TWO,
panelTypes: graphType,
query: adjustedQueryForV5,
thresholds: selectedWidget?.thresholds,
@@ -532,7 +532,7 @@ function NewWidget({
stackedBarChart: selectedWidget?.stackedBarChart || false,
yAxisUnit: selectedWidget?.yAxisUnit,
decimalPrecision:
selectedWidget?.decimalPrecision || PrecisionOptionsEnum.TWO,
selectedWidget?.decimalPrecision ?? PrecisionOptionsEnum.TWO,
panelTypes: graphType,
query: adjustedQueryForV5,
thresholds: selectedWidget?.thresholds,
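One subtlety in this change: decimal precision can legitimately be 0, which is falsy, so || would silently replace a user-chosen zero with the default, while ?? only falls back on null or undefined. A small illustration (PRECISION_DEFAULT stands in for PrecisionOptionsEnum.TWO):

const PRECISION_DEFAULT = 2; // stand-in for PrecisionOptionsEnum.TWO
const saved = 0; // user explicitly chose zero decimal places

const withOr = saved || PRECISION_DEFAULT; // 2: the valid 0 is discarded
const withNullish = saved ?? PRECISION_DEFAULT; // 0: preserved as intended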

View File

@@ -49,12 +49,14 @@ exports[`Value panel wrappper tests should render tooltip when there are conflic
>
<span
class="ant-typography value-graph-text css-dev-only-do-not-override-2i2tap"
data-testid="value-graph-text"
style="color: Blue; font-size: 16px;"
>
295.43
</span>
<span
class="ant-typography value-graph-unit css-dev-only-do-not-override-2i2tap"
data-testid="value-graph-suffix-unit"
style="color: Blue; font-size: calc(16px * 0.7);"
>
ms

View File

@@ -3,6 +3,7 @@ import { Tooltip, Typography } from 'antd';
import AttributeWithExpandablePopover from './AttributeWithExpandablePopover';
const EXPANDABLE_ATTRIBUTE_KEYS = ['exception.stacktrace', 'exception.message'];
const ATTRIBUTE_LENGTH_THRESHOLD = 100;
interface EventAttributeProps {
attributeKey: string;
@@ -15,7 +16,11 @@ function EventAttribute({
attributeValue,
onExpand,
}: EventAttributeProps): JSX.Element {
if (EXPANDABLE_ATTRIBUTE_KEYS.includes(attributeKey)) {
const shouldExpand =
EXPANDABLE_ATTRIBUTE_KEYS.includes(attributeKey) ||
attributeValue.length > ATTRIBUTE_LENGTH_THRESHOLD;
if (shouldExpand) {
return (
<AttributeWithExpandablePopover
attributeKey={attributeKey}
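With this change, an attribute becomes expandable either because its key is in the always-expandable list or because its value exceeds 100 characters. A self-contained sketch of the same check, under the names used in the diff:

const EXPANDABLE_ATTRIBUTE_KEYS = ['exception.stacktrace', 'exception.message'];
const ATTRIBUTE_LENGTH_THRESHOLD = 100;

const shouldExpand = (attributeKey: string, attributeValue: string): boolean =>
  EXPANDABLE_ATTRIBUTE_KEYS.includes(attributeKey) ||
  attributeValue.length > ATTRIBUTE_LENGTH_THRESHOLD;

shouldExpand('exception.message', 'boom'); // true: key is always expandable
shouldExpand('status message', 'Connection successful'); // false: short value renders as plain text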

View File

@@ -51,9 +51,12 @@
padding: 10px 12px;
.item {
display: flex;
flex-direction: column;
gap: 8px;
&,
.attribute-container {
display: flex;
flex-direction: column;
gap: 8px;
}
.span-name-wrapper {
display: flex;
@@ -413,6 +416,7 @@
text-transform: uppercase;
}
.attribute-container .wrapper,
.value-wrapper {
display: flex;
align-items: center;

View File

@@ -4,6 +4,7 @@ import {
Button,
Checkbox,
Input,
Modal,
Select,
Skeleton,
Tabs,
@@ -52,6 +53,7 @@ import { formatEpochTimestamp } from 'utils/timeUtils';
import Attributes from './Attributes/Attributes';
import { RelatedSignalsViews } from './constants';
import EventAttribute from './Events/components/EventAttribute';
import Events from './Events/Events';
import LinkedSpans from './LinkedSpans/LinkedSpans';
import SpanRelatedSignals from './SpanRelatedSignals/SpanRelatedSignals';
@@ -166,11 +168,27 @@ function SpanDetailsDrawer(props: ISpanDetailsDrawerProps): JSX.Element {
setShouldUpdateUserPreference,
] = useState<boolean>(false);
const [statusMessageModalContent, setStatusMessageModalContent] = useState<{
title: string;
content: string;
} | null>(null);
const handleTimeRangeChange = useCallback((value: number): void => {
setShouldFetchSpanPercentilesData(true);
setSelectedTimeRange(value);
}, []);
const showStatusMessageModal = useCallback(
(title: string, content: string): void => {
setStatusMessageModalContent({ title, content });
},
[],
);
const handleStatusMessageModalCancel = useCallback((): void => {
setStatusMessageModalContent(null);
}, []);
const color = generateColor(
selectedSpan?.serviceName || '',
themeColors.traceDetailColors,
@@ -868,14 +886,11 @@ function SpanDetailsDrawer(props: ISpanDetailsDrawerProps): JSX.Element {
{selectedSpan.statusMessage && (
<div className="item">
<Typography.Text className="attribute-key">
status message
</Typography.Text>
<div className="value-wrapper">
<Typography.Text className="attribute-value">
{selectedSpan.statusMessage}
</Typography.Text>
</div>
<EventAttribute
attributeKey="status message"
attributeValue={selectedSpan.statusMessage}
onExpand={showStatusMessageModal}
/>
</div>
)}
<div className="item">
@@ -936,6 +951,19 @@ function SpanDetailsDrawer(props: ISpanDetailsDrawerProps): JSX.Element {
key={activeDrawerView}
/>
)}
<Modal
title={statusMessageModalContent?.title}
open={!!statusMessageModalContent}
onCancel={handleStatusMessageModalCancel}
footer={null}
width="80vw"
centered
>
<pre className="attribute-with-expandable-popover__full-view">
{statusMessageModalContent?.content}
</pre>
</Modal>
</>
);
}

View File

@@ -29,6 +29,8 @@ import {
mockEmptyLogsResponse,
mockSpan,
mockSpanLogsResponse,
mockSpanWithLongStatusMessage,
mockSpanWithShortStatusMessage,
} from './mockData';
// Get typed mocks
@@ -128,6 +130,39 @@ jest.mock('lib/uPlotLib/utils/generateColor', () => ({
generateColor: jest.fn().mockReturnValue('#1f77b4'),
}));
jest.mock(
'container/SpanDetailsDrawer/Events/components/AttributeWithExpandablePopover',
() =>
// eslint-disable-next-line func-names, @typescript-eslint/explicit-function-return-type, react/display-name
function ({
attributeKey,
attributeValue,
onExpand,
}: {
attributeKey: string;
attributeValue: string;
onExpand: (title: string, content: string) => void;
}) {
return (
<div className="attribute-container" key={attributeKey}>
<div className="attribute-key">{attributeKey}</div>
<div className="wrapper">
<div className="attribute-value">{attributeValue}</div>
<div data-testid="popover-content">
<pre>{attributeValue}</pre>
<button
type="button"
onClick={(): void => onExpand(attributeKey, attributeValue)}
>
Expand
</button>
</div>
</div>
</div>
);
},
);
// Mock getSpanPercentiles API
jest.mock('api/trace/getSpanPercentiles', () => ({
__esModule: true,
@@ -1153,3 +1188,112 @@ describe('SpanDetailsDrawer - Search Visibility User Flows', () => {
expect(searchInput).toHaveFocus();
});
});
describe('SpanDetailsDrawer - Status Message Truncation User Flows', () => {
beforeEach(() => {
jest.clearAllMocks();
mockSafeNavigate.mockClear();
mockWindowOpen.mockClear();
mockUpdateAllQueriesOperators.mockClear();
(GetMetricQueryRange as jest.Mock).mockImplementation(() =>
Promise.resolve(mockEmptyLogsResponse),
);
});
afterEach(() => {
server.resetHandlers();
});
it('should display expandable popover with Expand button for long status message', () => {
render(
<QueryBuilderContext.Provider value={mockQueryBuilderContextValue as any}>
<SpanDetailsDrawer
isSpanDetailsDocked={false}
setIsSpanDetailsDocked={jest.fn()}
selectedSpan={mockSpanWithLongStatusMessage}
traceStartTime={1640995200000}
traceEndTime={1640995260000}
/>
</QueryBuilderContext.Provider>,
);
// User sees status message label
expect(screen.getByText('status message')).toBeInTheDocument();
// User sees the status message value (appears in both original element and popover preview)
const statusMessageElements = screen.getAllByText(
mockSpanWithLongStatusMessage.statusMessage,
);
expect(statusMessageElements.length).toBeGreaterThan(0);
// User sees Expand button in popover (popover is mocked to render immediately)
const expandButton = screen.getByRole('button', { name: /expand/i });
expect(expandButton).toBeInTheDocument();
});
it('should open modal with full status message when user clicks Expand button', async () => {
render(
<QueryBuilderContext.Provider value={mockQueryBuilderContextValue as any}>
<SpanDetailsDrawer
isSpanDetailsDocked={false}
setIsSpanDetailsDocked={jest.fn()}
selectedSpan={mockSpanWithLongStatusMessage}
traceStartTime={1640995200000}
traceEndTime={1640995260000}
/>
</QueryBuilderContext.Provider>,
);
// User clicks the Expand button (popover is mocked to render immediately)
const expandButton = screen.getByRole('button', { name: /expand/i });
await fireEvent.click(expandButton);
// User sees modal with the full status message content
await waitFor(() => {
// Modal should be visible with the title
const modalTitle = document.querySelector('.ant-modal-title');
expect(modalTitle).toBeInTheDocument();
expect(modalTitle?.textContent).toBe('status message');
// Modal content should contain the full message in a pre tag
const preElement = document.querySelector(
'.attribute-with-expandable-popover__full-view',
);
expect(preElement).toBeInTheDocument();
expect(preElement?.textContent).toBe(
mockSpanWithLongStatusMessage.statusMessage,
);
});
});
it('should display short status message as simple text without popover', () => {
render(
<QueryBuilderContext.Provider value={mockQueryBuilderContextValue as any}>
<SpanDetailsDrawer
isSpanDetailsDocked={false}
setIsSpanDetailsDocked={jest.fn()}
selectedSpan={mockSpanWithShortStatusMessage}
traceStartTime={1640995200000}
traceEndTime={1640995260000}
/>
</QueryBuilderContext.Provider>,
);
// User sees status message label and value
expect(screen.getByText('status message')).toBeInTheDocument();
expect(
screen.getByText(mockSpanWithShortStatusMessage.statusMessage),
).toBeInTheDocument();
// User hovers over the status message value
const statusMessageValue = screen.getByText(
mockSpanWithShortStatusMessage.statusMessage,
);
fireEvent.mouseEnter(statusMessageValue);
// No Expand button should appear (no expandable popover for short messages)
expect(
screen.queryByRole('button', { name: /expand/i }),
).not.toBeInTheDocument();
});
});

View File

@@ -35,6 +35,19 @@ export const mockSpan: Span = {
level: 0,
};
// Mock span with long status message (> 100 characters) for testing truncation
export const mockSpanWithLongStatusMessage: Span = {
...mockSpan,
statusMessage:
'Error: Connection timeout occurred while trying to reach the database server. The connection pool was exhausted and all retry attempts failed after 30 seconds.',
};
// Mock span with short status message (<= 100 characters)
export const mockSpanWithShortStatusMessage: Span = {
...mockSpan,
statusMessage: 'Connection successful',
};
// Mock logs with proper relationships
export const mockSpanLogs: ILog[] = [
{

View File

@@ -29,6 +29,20 @@ import { QueryBuilderContextType } from 'types/common/queryBuilder';
import { ROLES, USER_ROLES } from 'types/roles';
// import { MemoryRouter as V5MemoryRouter } from 'react-router-dom-v5-compat';
// Mock ResizeObserver
class ResizeObserverMock {
// eslint-disable-next-line class-methods-use-this
observe(): void {}
// eslint-disable-next-line class-methods-use-this
unobserve(): void {}
// eslint-disable-next-line class-methods-use-this
disconnect(): void {}
}
global.ResizeObserver = (ResizeObserverMock as unknown) as typeof ResizeObserver;
const queryClient = new QueryClient({
defaultOptions: {
queries: {

View File

@@ -0,0 +1,126 @@
import { buildAbsolutePath } from '../app';
const BASE_PATH = '/some-base-path';
describe('buildAbsolutePath', () => {
const originalLocation = window.location;
afterEach(() => {
Object.defineProperty(window, 'location', {
writable: true,
value: originalLocation,
});
});
const mockLocation = (pathname: string): void => {
Object.defineProperty(window, 'location', {
writable: true,
value: {
pathname,
href: `http://localhost:8080${pathname}`,
origin: 'http://localhost:8080',
protocol: 'http:',
host: 'localhost',
hostname: 'localhost',
port: '',
search: '',
hash: '',
},
});
};
describe('when base path ends with a forward slash', () => {
beforeEach(() => {
mockLocation(`${BASE_PATH}/`);
});
it('should build absolute path without query string', () => {
const result = buildAbsolutePath({ relativePath: 'users' });
expect(result).toBe(`${BASE_PATH}/users`);
});
it('should build absolute path with query string', () => {
const result = buildAbsolutePath({
relativePath: 'users',
urlQueryString: 'id=123&sort=name',
});
expect(result).toBe(`${BASE_PATH}/users?id=123&sort=name`);
});
it('should handle nested relative paths', () => {
const result = buildAbsolutePath({ relativePath: 'users/profile/settings' });
expect(result).toBe(`${BASE_PATH}/users/profile/settings`);
});
});
describe('when base path does not end with a forward slash', () => {
beforeEach(() => {
mockLocation(`${BASE_PATH}`);
});
it('should append forward slash and build absolute path', () => {
const result = buildAbsolutePath({ relativePath: 'users' });
expect(result).toBe(`${BASE_PATH}/users`);
});
it('should append forward slash and build absolute path with query string', () => {
const result = buildAbsolutePath({
relativePath: 'users',
urlQueryString: 'filter=active',
});
expect(result).toBe(`${BASE_PATH}/users?filter=active`);
});
});
describe('edge cases', () => {
it('should handle empty relative path', () => {
mockLocation(`${BASE_PATH}/`);
const result = buildAbsolutePath({ relativePath: '' });
expect(result).toBe(`${BASE_PATH}/`);
});
it('should handle query string with empty relative path', () => {
mockLocation(`${BASE_PATH}/`);
const result = buildAbsolutePath({
relativePath: '',
urlQueryString: 'search=test',
});
expect(result).toBe(`${BASE_PATH}/?search=test`);
});
it('should handle relative path starting with forward slash', () => {
mockLocation(`${BASE_PATH}/`);
const result = buildAbsolutePath({ relativePath: '/users' });
expect(result).toBe(`${BASE_PATH}/users`);
});
it('should handle complex query strings', () => {
mockLocation(`${BASE_PATH}/dashboard`);
const result = buildAbsolutePath({
relativePath: 'reports',
urlQueryString: 'date=2024-01-01&type=summary&format=pdf',
});
expect(result).toBe(
`${BASE_PATH}/dashboard/reports?date=2024-01-01&type=summary&format=pdf`,
);
});
it('should handle undefined query string', () => {
mockLocation(`${BASE_PATH}/`);
const result = buildAbsolutePath({
relativePath: 'users',
urlQueryString: undefined,
});
expect(result).toBe(`${BASE_PATH}/users`);
});
it('should handle empty query string', () => {
mockLocation(`${BASE_PATH}/`);
const result = buildAbsolutePath({
relativePath: 'users',
urlQueryString: '',
});
expect(result).toBe(`${BASE_PATH}/users`);
});
});
});

View File

@@ -61,6 +61,58 @@ describe('extractQueryPairs', () => {
]);
});
test('should extract query pairs from a filter expression with freeText', () => {
const input = "disconnected deployment.env not in ['mq-kafka']";
const result = extractQueryPairs(input);
expect(result).toEqual([
{
key: 'disconnected',
operator: '',
valueList: [],
valuesPosition: [],
hasNegation: false,
isMultiValue: false,
value: undefined,
position: {
keyStart: 0,
keyEnd: 11,
operatorStart: 0,
operatorEnd: 0,
negationStart: 0,
negationEnd: 0,
valueStart: undefined,
valueEnd: undefined,
},
isComplete: false,
},
{
key: 'deployment.env',
operator: 'in',
value: "['mq-kafka']",
valueList: ["'mq-kafka'"],
valuesPosition: [
{
start: 36,
end: 45,
},
],
hasNegation: true,
isMultiValue: true,
position: {
keyStart: 13,
keyEnd: 26,
operatorStart: 32,
operatorEnd: 33,
valueStart: 35,
valueEnd: 46,
negationStart: 28,
negationEnd: 30,
},
isComplete: true,
},
]);
});
test('should extract IN with numeric list inside parentheses', () => {
const input = 'id IN (1, 2, 3)';
const result = extractQueryPairs(input);

View File

@@ -38,3 +38,33 @@ export function isIngestionActive(data: any): boolean {
return parseInt(value, 10) > 0;
}
/**
* Builds an absolute path by combining the current page's pathname with a relative path.
*
* @param {Object} params - The parameters for building the absolute path
* @param {string} params.relativePath - The relative path to append to the current pathname
* @param {string} [params.urlQueryString] - Optional query string to append to the final path (without leading '?')
*
* @returns {string} The constructed absolute path, optionally with query string
*/
export function buildAbsolutePath({
relativePath,
urlQueryString,
}: {
relativePath: string;
urlQueryString?: string;
}): string {
const { pathname } = window.location;
// ensure base path always ends with a forward slash
const basePath = pathname.endsWith('/') ? pathname : `${pathname}/`;
// handle relative path starting with a forward slash
const normalizedRelativePath = relativePath.startsWith('/')
? relativePath.slice(1)
: relativePath;
const absolutePath = basePath + normalizedRelativePath;
return urlQueryString ? `${absolutePath}?${urlQueryString}` : absolutePath;
}
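A quick usage sketch of the helper as called from WidgetHeader above (the pathname and query string are hypothetical values, not taken from the change):

// Assuming window.location.pathname === '/signoz/dashboard/abc':
buildAbsolutePath({ relativePath: 'new', urlQueryString: 'graphType=graph' });
// => '/signoz/dashboard/abc/new?graphType=graph'

buildAbsolutePath({ relativePath: '/new' }); // leading slash is normalized away
// => '/signoz/dashboard/abc/new'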

View File

@@ -1339,8 +1339,7 @@ export function extractQueryPairs(query: string): IQueryPair[] {
else if (
currentPair &&
currentPair.key &&
(isConjunctionToken(token.type) ||
(token.type === FilterQueryLexer.KEY && isQueryPairComplete(currentPair)))
(isConjunctionToken(token.type) || token.type === FilterQueryLexer.KEY)
) {
queryPairs.push({
key: currentPair.key,

View File

@@ -0,0 +1,30 @@
package signozapiserver
import (
"net/http"
"github.com/SigNoz/signoz/pkg/http/handler"
"github.com/SigNoz/signoz/pkg/types"
"github.com/gorilla/mux"
)
func (provider *provider) addGlobalRoutes(router *mux.Router) error {
if err := router.Handle("/api/v1/global/config", handler.New(provider.authZ.EditAccess(provider.globalHandler.GetConfig), handler.OpenAPIDef{
ID: "GetGlobalConfig",
Tags: []string{"global"},
Summary: "Get global config",
Description: "This endpoints returns global config",
Request: nil,
RequestContentType: "",
Response: new(types.GettableGlobalConfig),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleEditor),
})).Methods(http.MethodGet).GetError(); err != nil {
return err
}
return nil
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/SigNoz/signoz/pkg/apiserver"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/http/handler"
"github.com/SigNoz/signoz/pkg/http/middleware"
"github.com/SigNoz/signoz/pkg/modules/authdomain"
@@ -28,6 +29,7 @@ type provider struct {
sessionHandler session.Handler
authDomainHandler authdomain.Handler
preferenceHandler preference.Handler
globalHandler global.Handler
}
func NewFactory(
@@ -38,9 +40,10 @@ func NewFactory(
sessionHandler session.Handler,
authDomainHandler authdomain.Handler,
preferenceHandler preference.Handler,
globalHandler global.Handler,
) factory.ProviderFactory[apiserver.APIServer, apiserver.Config] {
return factory.NewProviderFactory(factory.MustNewName("signoz"), func(ctx context.Context, providerSettings factory.ProviderSettings, config apiserver.Config) (apiserver.APIServer, error) {
return newProvider(ctx, providerSettings, config, orgGetter, authz, orgHandler, userHandler, sessionHandler, authDomainHandler, preferenceHandler)
return newProvider(ctx, providerSettings, config, orgGetter, authz, orgHandler, userHandler, sessionHandler, authDomainHandler, preferenceHandler, globalHandler)
})
}
@@ -55,6 +58,7 @@ func newProvider(
sessionHandler session.Handler,
authDomainHandler authdomain.Handler,
preferenceHandler preference.Handler,
globalHandler global.Handler,
) (apiserver.APIServer, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/pkg/apiserver/signozapiserver")
router := mux.NewRouter().UseEncodedPath()
@@ -68,6 +72,7 @@ func newProvider(
sessionHandler: sessionHandler,
authDomainHandler: authDomainHandler,
preferenceHandler: preferenceHandler,
globalHandler: globalHandler,
}
provider.authZ = middleware.NewAuthZ(settings.Logger(), orgGetter, authz)
@@ -96,11 +101,15 @@ func (provider *provider) AddToRouter(router *mux.Router) error {
return err
}
if err := provider.addPreferenceRoutes(router); err != nil {
return err
}
if err := provider.addUserRoutes(router); err != nil {
return err
}
if err := provider.addPreferenceRoutes(router); err != nil {
if err := provider.addGlobalRoutes(router); err != nil {
return err
}

pkg/global/config.go
View File

@@ -0,0 +1,41 @@
package global
import (
"net/url"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
)
var (
ErrCodeInvalidGlobalConfig = errors.MustNewCode("invalid_global_config")
)
type Config struct {
ExternalURL *url.URL `mapstructure:"external_url"`
IngestionURL *url.URL `mapstructure:"ingestion_url"`
}
func NewConfigFactory() factory.ConfigFactory {
return factory.NewConfigFactory(factory.MustNewName("global"), newConfig)
}
func newConfig() factory.Config {
return &Config{
ExternalURL: &url.URL{
Scheme: "",
Host: "<unset>",
Path: "",
},
IngestionURL: &url.URL{
Scheme: "",
Host: "<unset>",
Path: "",
},
}
}
func (c Config) Validate() error {
return nil
}

pkg/global/global.go
View File

@@ -0,0 +1,11 @@
package global
import "net/http"
type Global interface {
GetConfig() Config
}
type Handler interface {
GetConfig(http.ResponseWriter, *http.Request)
}

View File

@@ -0,0 +1,23 @@
package signozglobal
import (
"net/http"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/types"
)
type handler struct {
global global.Global
}
func NewHandler(global global.Global) global.Handler {
return &handler{global: global}
}
func (h *handler) GetConfig(rw http.ResponseWriter, r *http.Request) {
cfg := h.global.GetConfig()
render.Success(rw, http.StatusOK, types.NewGettableGlobalConfig(cfg.ExternalURL, cfg.IngestionURL))
}

View File

@@ -0,0 +1,31 @@
package signozglobal
import (
"context"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/global"
)
type provider struct {
config global.Config
settings factory.ScopedProviderSettings
}
func NewFactory() factory.ProviderFactory[global.Global, global.Config] {
return factory.NewProviderFactory(factory.MustNewName("signoz"), func(ctx context.Context, providerSettings factory.ProviderSettings, config global.Config) (global.Global, error) {
return newProvider(ctx, providerSettings, config)
})
}
func newProvider(_ context.Context, providerSettings factory.ProviderSettings, config global.Config) (global.Global, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/pkg/global/signozglobal")
return &provider{
config: config,
settings: settings,
}, nil
}
func (provider *provider) GetConfig() global.Config {
return provider.config
}

View File

@@ -1,6 +1,7 @@
package middleware
import (
"context"
"log/slog"
"net/http"
"time"
@@ -11,6 +12,7 @@ import (
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/ctxtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"golang.org/x/sync/singleflight"
)
const (
@@ -23,10 +25,18 @@ type APIKey struct {
headers []string
logger *slog.Logger
sharder sharder.Sharder
sfGroup *singleflight.Group
}
func NewAPIKey(store sqlstore.SQLStore, headers []string, logger *slog.Logger, sharder sharder.Sharder) *APIKey {
return &APIKey{store: store, uuid: authtypes.NewUUID(), headers: headers, logger: logger, sharder: sharder}
return &APIKey{
store: store,
uuid: authtypes.NewUUID(),
headers: headers,
logger: logger,
sharder: sharder,
sfGroup: &singleflight.Group{},
}
}
func (a *APIKey) Wrap(next http.Handler) http.Handler {
@@ -109,11 +119,24 @@ func (a *APIKey) Wrap(next http.Handler) http.Handler {
next.ServeHTTP(w, r)
apiKey.LastUsed = time.Now()
_, err = a.store.BunDB().NewUpdate().Model(&apiKey).Column("last_used").Where("token = ?", apiKeyToken).Where("revoked = false").Exec(r.Context())
if err != nil {
a.logger.ErrorContext(r.Context(), "failed to update last used of api key", "error", err)
}
lastUsedCtx := context.WithoutCancel(r.Context())
_, _, _ = a.sfGroup.Do(apiKey.ID.StringValue(), func() (any, error) {
apiKey.LastUsed = time.Now()
_, err = a.
store.
BunDB().
NewUpdate().
Model(&apiKey).
Column("last_used").
Where("token = ?", apiKeyToken).
Where("revoked = false").
Exec(lastUsedCtx)
if err != nil {
a.logger.ErrorContext(lastUsedCtx, "failed to update last used of api key", "error", err)
}
return true, nil
})
})
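For context on the singleflight usage above: golang.org/x/sync/singleflight collapses concurrent calls that share a key into a single execution, so a burst of requests carrying the same API key results in one last_used UPDATE rather than one per request. A rough TypeScript analogue of the pattern, for illustration only (this is not the Go library's API):

const inFlight = new Map<string, Promise<unknown>>();

// Callers that share a key while a call is in progress reuse its promise.
async function doOnce<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) {
    return existing as Promise<T>;
  }
  const run = fn().finally(() => inFlight.delete(key));
  inFlight.set(key, run);
  return run;
}

// e.g. doOnce(apiKeyId, () => updateLastUsed(apiKeyId)) from many concurrent
// requests issues a single write (updateLastUsed is a hypothetical stand-in).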

View File

@@ -137,6 +137,28 @@ func (h *handler) GetMetricMetadata(rw http.ResponseWriter, req *http.Request) {
render.Success(rw, http.StatusOK, metadata)
}
func (h *handler) GetMetricAlerts(rw http.ResponseWriter, req *http.Request) {
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
render.Error(rw, err)
return
}
metricName := strings.TrimSpace(req.URL.Query().Get("metricName"))
if metricName == "" {
render.Error(rw, errors.NewInvalidInputf(errors.CodeInvalidInput, "metricName query parameter is required"))
return
}
orgID := valuer.MustNewUUID(claims.OrgID)
out, err := h.module.GetMetricAlerts(req.Context(), orgID, metricName)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, out)
}
func (h *handler) GetMetricDashboards(rw http.ResponseWriter, req *http.Request) {
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {

View File

@@ -20,6 +20,7 @@ import (
"github.com/SigNoz/signoz/pkg/types/metricsexplorertypes"
"github.com/SigNoz/signoz/pkg/types/metrictypes"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
sqlbuilder "github.com/huandu/go-sqlbuilder"
@@ -33,12 +34,13 @@ type module struct {
condBuilder qbtypes.ConditionBuilder
logger *slog.Logger
cache cache.Cache
ruleStore ruletypes.RuleStore
dashboardModule dashboard.Module
config metricsexplorer.Config
}
// NewModule constructs the metrics module with the provided dependencies.
func NewModule(ts telemetrystore.TelemetryStore, telemetryMetadataStore telemetrytypes.MetadataStore, cache cache.Cache, dashboardModule dashboard.Module, providerSettings factory.ProviderSettings, cfg metricsexplorer.Config) metricsexplorer.Module {
func NewModule(ts telemetrystore.TelemetryStore, telemetryMetadataStore telemetrytypes.MetadataStore, cache cache.Cache, ruleStore ruletypes.RuleStore, dashboardModule dashboard.Module, providerSettings factory.ProviderSettings, cfg metricsexplorer.Config) metricsexplorer.Module {
fieldMapper := telemetrymetrics.NewFieldMapper()
condBuilder := telemetrymetrics.NewConditionBuilder(fieldMapper)
return &module{
@@ -48,6 +50,7 @@ func NewModule(ts telemetrystore.TelemetryStore, telemetryMetadataStore telemetr
logger: providerSettings.Logger,
telemetryMetadataStore: telemetryMetadataStore,
cache: cache,
ruleStore: ruleStore,
dashboardModule: dashboardModule,
config: cfg,
}
@@ -197,11 +200,32 @@ func (m *module) UpdateMetricMetadata(ctx context.Context, orgID valuer.UUID, re
return nil
}
func (m *module) GetMetricAlerts(ctx context.Context, orgID valuer.UUID, metricName string) (*metricsexplorertypes.MetricAlertsResponse, error) {
if metricName == "" {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "metricName is required")
}
ruleAlerts, err := m.ruleStore.GetStoredRulesByMetricName(ctx, orgID.String(), metricName)
if err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "failed to get stored rules by metric name")
}
alerts := make([]metricsexplorertypes.MetricAlert, len(ruleAlerts))
for i, ruleAlert := range ruleAlerts {
alerts[i] = metricsexplorertypes.MetricAlert{
AlertName: ruleAlert.AlertName,
AlertID: ruleAlert.AlertID,
}
}
return &metricsexplorertypes.MetricAlertsResponse{
Alerts: alerts,
}, nil
}
func (m *module) GetMetricDashboards(ctx context.Context, orgID valuer.UUID, metricName string) (*metricsexplorertypes.MetricDashboardsResponse, error) {
if metricName == "" {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "metricName is required")
}
data, err := m.dashboardModule.GetByMetricNames(ctx, orgID, []string{metricName})
if err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "failed to get dashboards for metric")

View File

@@ -15,6 +15,7 @@ type Handler interface {
GetMetricMetadata(http.ResponseWriter, *http.Request)
GetMetricAttributes(http.ResponseWriter, *http.Request)
UpdateMetricMetadata(http.ResponseWriter, *http.Request)
GetMetricAlerts(http.ResponseWriter, *http.Request)
GetMetricDashboards(http.ResponseWriter, *http.Request)
GetMetricHighlights(http.ResponseWriter, *http.Request)
}
@@ -25,6 +26,7 @@ type Module interface {
GetTreemap(ctx context.Context, orgID valuer.UUID, req *metricsexplorertypes.TreemapRequest) (*metricsexplorertypes.TreemapResponse, error)
GetMetricMetadataMulti(ctx context.Context, orgID valuer.UUID, metricNames []string) (map[string]*metricsexplorertypes.MetricMetadata, error)
UpdateMetricMetadata(ctx context.Context, orgID valuer.UUID, req *metricsexplorertypes.UpdateMetricMetadataRequest) error
GetMetricAlerts(ctx context.Context, orgID valuer.UUID, metricName string) (*metricsexplorertypes.MetricAlertsResponse, error)
GetMetricDashboards(ctx context.Context, orgID valuer.UUID, metricName string) (*metricsexplorertypes.MetricDashboardsResponse, error)
GetMetricHighlights(ctx context.Context, orgID valuer.UUID, metricName string) (*metricsexplorertypes.MetricHighlightsResponse, error)
GetMetricAttributes(ctx context.Context, orgID valuer.UUID, req *metricsexplorertypes.MetricAttributesRequest) (*metricsexplorertypes.MetricAttributesResponse, error)

View File

@@ -95,6 +95,7 @@ const (
signozLocalTableAttributesMetadata = "attributes_metadata"
signozUpdatedMetricsMetadataLocalTable = "updated_metadata"
signozMetricsMetadataLocalTable = "metadata"
signozUpdatedMetricsMetadataTable = "distributed_updated_metadata"
minTimespanForProgressiveSearch = time.Hour
minTimespanForProgressiveSearchMargin = time.Minute
@@ -6439,6 +6440,73 @@ func (r *ClickHouseReader) GetUpdatedMetricsMetadata(ctx context.Context, orgID
return cachedMetadata, nil
}
// GetFirstSeenFromMetricMetadata queries the metadata table to get the first_seen timestamp
// for each metric-attribute-value combination.
// Returns a map where key is `model.MetricMetadataLookupKey` and value is first_seen in milliseconds.
func (r *ClickHouseReader) GetFirstSeenFromMetricMetadata(ctx context.Context, lookupKeys []model.MetricMetadataLookupKey) (map[model.MetricMetadataLookupKey]int64, error) {
// Chunk the lookup keys to avoid overly large queries (max 300 tuples per query)
const chunkSize = 300
result := make(map[model.MetricMetadataLookupKey]int64)
for i := 0; i < len(lookupKeys); i += chunkSize {
end := i + chunkSize
if end > len(lookupKeys) {
end = len(lookupKeys)
}
chunk := lookupKeys[i:end]
// Build the IN clause values - ClickHouse uses tuple syntax with placeholders
var valueStrings []string
var args []interface{}
for _, key := range chunk {
valueStrings = append(valueStrings, "(?, ?, ?)")
args = append(args, key.MetricName, key.AttributeName, key.AttributeValue)
}
query := fmt.Sprintf(`
SELECT
m.metric_name,
m.attr_name,
m.attr_string_value,
min(m.last_reported_unix_milli) AS first_seen
FROM %s.%s AS m
WHERE (m.metric_name, m.attr_name, m.attr_string_value) IN (%s)
GROUP BY m.metric_name, m.attr_name, m.attr_string_value
ORDER BY first_seen`,
signozMetricDBName, signozMetricsMetadataLocalTable, strings.Join(valueStrings, ", "))
valueCtx := context.WithValue(ctx, "clickhouse_max_threads", constants.MetricsExplorerClickhouseThreads)
rows, err := r.db.Query(valueCtx, query, args...)
if err != nil {
zap.L().Error("Error querying metadata for first_seen", zap.Error(err))
return nil, &model.ApiError{Typ: "ClickhouseErr", Err: fmt.Errorf("error querying metadata for first_seen: %v", err)}
}
for rows.Next() {
var metricName, attrName, attrValue string
var firstSeen uint64
if err := rows.Scan(&metricName, &attrName, &attrValue, &firstSeen); err != nil {
rows.Close()
return nil, &model.ApiError{Typ: "ClickhouseErr", Err: fmt.Errorf("error scanning metadata first_seen result: %v", err)}
}
result[model.MetricMetadataLookupKey{
MetricName: metricName,
AttributeName: attrName,
AttributeValue: attrValue,
}] = int64(firstSeen)
}
if err := rows.Err(); err != nil {
rows.Close()
return nil, &model.ApiError{Typ: "ClickhouseErr", Err: fmt.Errorf("error iterating metadata first_seen results: %v", err)}
}
rows.Close()
}
return result, nil
}
func (r *ClickHouseReader) SearchTraces(ctx context.Context, params *model.SearchTracesParams) (*[]model.SearchSpansResult, error) {
searchSpansResult := []model.SearchSpansResult{
{

View File

@@ -630,6 +630,7 @@ func (ah *APIHandler) MetricExplorerRoutes(router *mux.Router, am *middleware.Au
router.HandleFunc("/api/v2/metrics/metadata", am.ViewAccess(ah.Signoz.Handlers.MetricsExplorer.GetMetricMetadata)).Methods(http.MethodGet)
router.HandleFunc("/api/v2/metrics/{metric_name}/metadata", am.EditAccess(ah.Signoz.Handlers.MetricsExplorer.UpdateMetricMetadata)).Methods(http.MethodPost)
router.HandleFunc("/api/v2/metric/highlights", am.ViewAccess(ah.Signoz.Handlers.MetricsExplorer.GetMetricHighlights)).Methods(http.MethodGet)
router.HandleFunc("/api/v2/metric/alerts", am.ViewAccess(ah.Signoz.Handlers.MetricsExplorer.GetMetricAlerts)).Methods(http.MethodGet)
router.HandleFunc("/api/v2/metric/dashboards", am.ViewAccess(ah.Signoz.Handlers.MetricsExplorer.GetMetricDashboards)).Methods(http.MethodGet)
}

View File

@@ -3,13 +3,13 @@ package app
import (
"context"
"fmt"
"log/slog"
"net"
"net/http"
_ "net/http/pprof" // http profiler
"slices"
"github.com/SigNoz/signoz/pkg/cache/memorycache"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler/rulestore/sqlrulestore"
@@ -17,6 +17,7 @@ import (
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/apis/fields"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/http/middleware"
"github.com/SigNoz/signoz/pkg/licensing/nooplicensing"
"github.com/SigNoz/signoz/pkg/modules/organization"
@@ -30,6 +31,7 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/app/logparsingpipeline"
"github.com/SigNoz/signoz/pkg/query-service/app/opamp"
opAmpModel "github.com/SigNoz/signoz/pkg/query-service/app/opamp/model"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
@@ -37,10 +39,8 @@ import (
"github.com/rs/cors"
"github.com/soheilhy/cmux"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/query-service/healthcheck"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
"github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/utils"
"go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux"
@@ -107,7 +107,8 @@ func NewServer(config signoz.Config, signoz *signoz.SigNoz) (*Server, error) {
signoz.Prometheus,
signoz.Modules.OrgGetter,
signoz.Querier,
signoz.Instrumentation.Logger(),
signoz.Instrumentation.ToProviderSettings(),
signoz.QueryParser,
)
if err != nil {
return nil, err
@@ -339,9 +340,10 @@ func makeRulesManager(
prometheus prometheus.Prometheus,
orgGetter organization.Getter,
querier querier.Querier,
logger *slog.Logger,
providerSettings factory.ProviderSettings,
queryParser queryparser.QueryParser,
) (*rules.Manager, error) {
ruleStore := sqlrulestore.NewRuleStore(sqlstore)
ruleStore := sqlrulestore.NewRuleStore(sqlstore, queryParser, providerSettings)
maintenanceStore := sqlrulestore.NewMaintenanceStore(sqlstore)
// create manager opts
managerOpts := &rules.ManagerOptions{
@@ -351,7 +353,7 @@ func makeRulesManager(
Logger: zap.L(),
Reader: ch,
Querier: querier,
SLogger: logger,
SLogger: providerSettings.Logger,
Cache: cache,
EvalDelay: constants.GetEvalDelay(),
OrgGetter: orgGetter,
@@ -359,6 +361,7 @@ func makeRulesManager(
RuleStore: ruleStore,
MaintenanceStore: maintenanceStore,
SqlStore: sqlstore,
QueryParser: queryParser,
}
// create Manager

View File

@@ -1,8 +1,17 @@
package converter
import "github.com/SigNoz/signoz/pkg/errors"
// Unit represents a unit of measurement
type Unit string
func (u Unit) Validate() error {
if !IsValidUnit(u) {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid unit: %s", u)
}
return nil
}
// Value represents a value with a unit of measurement
type Value struct {
F float64
@@ -60,6 +69,27 @@ func FromUnit(u Unit) Converter {
}
}
// IsValidUnit returns true if the given unit is valid
func IsValidUnit(u Unit) bool {
switch u {
// Duration unit
case "ns", "us", "µs", "ms", "s", "m", "h", "d", "min",
// Data unit
"bytes", "decbytes", "bits", "decbits", "kbytes", "decKbytes", "deckbytes", "mbytes", "decMbytes", "decmbytes", "gbytes", "decGbytes", "decgbytes", "tbytes", "decTbytes", "dectbytes", "pbytes", "decPbytes", "decpbytes", "By", "kBy", "MBy", "GBy", "TBy", "PBy",
// Data rate unit
"binBps", "Bps", "binbps", "bps", "KiBs", "Kibits", "KBs", "Kbits", "MiBs", "Mibits", "MBs", "Mbits", "GiBs", "Gibits", "GBs", "Gbits", "TiBs", "Tibits", "TBs", "Tbits", "PiBs", "Pibits", "PBs", "Pbits", "By/s", "kBy/s", "MBy/s", "GBy/s", "TBy/s", "PBy/s", "bit/s", "kbit/s", "Mbit/s", "Gbit/s", "Tbit/s", "Pbit/s",
// Percent unit
"percent", "percentunit", "%",
// Bool unit
"bool", "bool_yes_no", "bool_true_false", "bool_1_0",
// Throughput unit
"cps", "ops", "reqps", "rps", "wps", "iops", "cpm", "opm", "rpm", "wpm", "{count}/s", "{ops}/s", "{req}/s", "{read}/s", "{write}/s", "{iops}/s", "{count}/min", "{ops}/min", "{read}/min", "{write}/min":
return true
default:
return false
}
}
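// Example (illustrative): Unit.Validate wraps IsValidUnit, so callers can validate
// user-supplied units directly:
// converter.Unit("reqps").Validate()      // nil: "reqps" is in the throughput list above
// converter.Unit("lightyears").Validate() // non-nil: unknown units yield an invalid-input error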
func UnitToName(u string) string {
switch u {
case "ns":

View File

@@ -81,6 +81,7 @@ type Reader interface {
CheckClickHouse(ctx context.Context) error
GetMetricMetadata(context.Context, valuer.UUID, string, string) (*v3.MetricMetadataResponse, error)
GetFirstSeenFromMetricMetadata(ctx context.Context, lookupKeys []model.MetricMetadataLookupKey) (map[model.MetricMetadataLookupKey]int64, error)
AddRuleStateHistory(ctx context.Context, ruleStateHistory []model.RuleStateHistory) error
GetOverallStateTransitions(ctx context.Context, ruleID string, params *model.QueryRuleStateHistory) ([]model.ReleStateItem, error)

View File

@@ -516,3 +516,9 @@ type LogsAggregateParams struct {
Function string `json:"function"`
StepSeconds int `json:"step"`
}
type MetricMetadataLookupKey struct {
MetricName string
AttributeName string
AttributeValue string
}

View File

@@ -9,6 +9,7 @@ import (
"strings"
"time"
"github.com/SigNoz/signoz/pkg/query-service/converter"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/pkg/errors"
"go.uber.org/zap"
@@ -640,6 +641,13 @@ func (c *CompositeQuery) Validate() error {
return fmt.Errorf("query type is invalid: %w", err)
}
// Validate Unit - if provided (non-empty), it should be a valid unit string
if c.Unit != "" {
if err := converter.Unit(c.Unit).Validate(); err != nil {
return err
}
}
return nil
}

View File

@@ -13,7 +13,9 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
qslabels "github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/sqlstore"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
ruletypes "github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/valuer"
"go.uber.org/zap"
@@ -88,6 +90,11 @@ type BaseRule struct {
sqlstore sqlstore.SQLStore
evaluation ruletypes.Evaluation
// newGroupEvalDelay is the grace period for new alert groups
newGroupEvalDelay *time.Duration
queryParser queryparser.QueryParser
}
type RuleOption func(*BaseRule)
@@ -122,6 +129,12 @@ func WithSQLStore(sqlstore sqlstore.SQLStore) RuleOption {
}
}
func WithQueryParser(queryParser queryparser.QueryParser) RuleOption {
return func(r *BaseRule) {
r.queryParser = queryParser
}
}
func NewBaseRule(id string, orgID valuer.UUID, p *ruletypes.PostableRule, reader interfaces.Reader, opts ...RuleOption) (*BaseRule, error) {
if p.RuleCondition == nil || !p.RuleCondition.IsValid() {
return nil, fmt.Errorf("invalid rule condition")
@@ -154,6 +167,12 @@ func NewBaseRule(id string, orgID valuer.UUID, p *ruletypes.PostableRule, reader
evaluation: evaluation,
}
// Store newGroupEvalDelay and groupBy keys from NotificationSettings
if p.NotificationSettings != nil && p.NotificationSettings.NewGroupEvalDelay != nil {
newGroupEvalDelay := time.Duration(*p.NotificationSettings.NewGroupEvalDelay)
baseRule.newGroupEvalDelay = &newGroupEvalDelay
}
if baseRule.evalWindow == 0 {
baseRule.evalWindow = 5 * time.Minute
}
@@ -528,3 +547,166 @@ func (r *BaseRule) PopulateTemporality(ctx context.Context, orgID valuer.UUID, q
}
return nil
}
// ShouldSkipNewGroups returns true if new group filtering should be applied
func (r *BaseRule) ShouldSkipNewGroups() bool {
return r.newGroupEvalDelay != nil && *r.newGroupEvalDelay > 0
}
// isFilterNewSeriesSupported checks if the query is supported for new series filtering
func (r *BaseRule) isFilterNewSeriesSupported() bool {
if r.ruleCondition.CompositeQuery.QueryType == v3.QueryTypeBuilder {
for _, query := range r.ruleCondition.CompositeQuery.Queries {
if query.Type != qbtypes.QueryTypeBuilder {
continue
}
switch query.Spec.(type) {
// query spec is for Logs or Traces; new-series filtering is not supported for these yet
case qbtypes.QueryBuilderQuery[qbtypes.LogAggregation], qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]:
return false
}
}
}
return true
}
// extractMetricAndGroupBys extracts metric names and groupBy keys from the rule's query.
// TODO: implement caching for query parsing results to avoid re-parsing the query + cache invalidation
func (r *BaseRule) extractMetricAndGroupBys(ctx context.Context) ([]string, []string, error) {
var metricNames []string
var groupedFields []string
// check to avoid processing the query for Logs and Traces
// as excluding new series is not supported for Logs and Traces for now
if !r.isFilterNewSeriesSupported() {
return metricNames, groupedFields, nil
}
result, err := r.queryParser.AnalyzeCompositeQuery(ctx, r.ruleCondition.CompositeQuery)
if err != nil {
return nil, nil, err
}
metricNames = result.MetricNames
for _, col := range result.GroupByColumns {
groupedFields = append(groupedFields, col.OriginField)
}
return metricNames, groupedFields, nil
}
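// Example (illustrative): for a PromQL rule query such as
// sum by (service_name, env) (rate(request_total[5m]))
// the analyzer returns metricNames = ["request_total"] and, since the extractors
// sort their results, groupedFields = ["env", "service_name"].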
// FilterNewSeries filters out series that are too new based on metadata first_seen timestamps.
// It returns the indexes of the series that should be skipped (excluded from evaluation).
func (r *BaseRule) FilterNewSeries(ctx context.Context, ts time.Time, series []v3.Series) ([]int, error) {
// Extract metric names and groupBy keys
metricNames, groupedFields, err := r.extractMetricAndGroupBys(ctx)
if err != nil {
return nil, err
}
if len(metricNames) == 0 || len(groupedFields) == 0 {
// No metrics or groupBy keys, nothing to filter (non-ideal case, return early)
return []int{}, nil
}
// Build lookup keys from series which will be used to query metadata from CH
lookupKeys := make([]model.MetricMetadataLookupKey, 0)
seriesIdxToLookupKeys := make(map[int][]model.MetricMetadataLookupKey) // series index -> lookup keys
for i := 0; i < len(series); i++ {
metricLabelMap := series[i].Labels
// Collect groupBy attribute-value pairs for this series
seriesKeys := make([]model.MetricMetadataLookupKey, 0)
for _, metricName := range metricNames {
for _, groupByKey := range groupedFields {
if attrValue, ok := metricLabelMap[groupByKey]; ok {
lookupKey := model.MetricMetadataLookupKey{
MetricName: metricName,
AttributeName: groupByKey,
AttributeValue: attrValue,
}
lookupKeys = append(lookupKeys, lookupKey)
seriesKeys = append(seriesKeys, lookupKey)
}
}
}
if len(seriesKeys) > 0 {
seriesIdxToLookupKeys[i] = seriesKeys
}
}
if len(lookupKeys) == 0 {
// No lookup keys to query, return an empty skip list.
// This can happen when the series have no labels at all;
// in that case we include all series since we can't tell whether they are new or old.
return []int{}, nil
}
// de-duplicate lookup keys before querying metadata
uniqueLookupKeysMap := make(map[model.MetricMetadataLookupKey]struct{})
uniqueLookupKeys := make([]model.MetricMetadataLookupKey, 0)
for _, key := range lookupKeys {
if _, ok := uniqueLookupKeysMap[key]; !ok {
uniqueLookupKeysMap[key] = struct{}{}
uniqueLookupKeys = append(uniqueLookupKeys, key)
}
}
// Query metadata for first_seen timestamps
firstSeenMap, err := r.reader.GetFirstSeenFromMetricMetadata(ctx, uniqueLookupKeys)
if err != nil {
return nil, err
}
// Filter series based on first_seen + delay
skipIndexes := make([]int, 0)
evalTimeMs := ts.UnixMilli()
newGroupEvalDelayMs := r.newGroupEvalDelay.Milliseconds()
for i := 0; i < len(series); i++ {
seriesKeys, ok := seriesIdxToLookupKeys[i]
if !ok {
// No labels from this series match the groupBy keys; don't exclude it
// since we can't decide whether it is a new or an old series
continue
}
// Find the maximum first_seen across all groupBy attributes for this series:
// if even the newest attribute is old enough the series passes; otherwise it is skipped
maxFirstSeen := int64(0)
// metadataFound tracks if we have metadata for any of the lookup keys
metadataFound := false
for _, lookupKey := range seriesKeys {
if firstSeen, exists := firstSeenMap[lookupKey]; exists {
metadataFound = true
if firstSeen > maxFirstSeen {
maxFirstSeen = firstSeen
}
}
}
// if we don't have metadata for any of the lookup keys, we can't decide whether
// the series is new or old; in that case, we don't add it to the skip indexes
if !metadataFound {
continue
}
// Check if first_seen + delay has passed
if maxFirstSeen+newGroupEvalDelayMs > evalTimeMs {
// Still within grace period, skip this series
skipIndexes = append(skipIndexes, i)
continue
}
// Old enough, don't skip this series
}
if r.logger != nil && len(skipIndexes) > 0 {
r.logger.InfoContext(ctx, "Filtered new series", "rule_name", r.Name(), "skipped_count", len(skipIndexes), "total_count", len(series), "delay_ms", newGroupEvalDelayMs)
}
return skipIndexes, nil
}
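// Worked example: with newGroupEvalDelay = 2m and evaluation time T, a series whose
// max first_seen is T-30s gives first_seen+delay = T+90s > T, so it is still inside
// the grace period and is skipped; a series first seen at T-10m gives
// first_seen+delay = T-8m <= T, so it is evaluated normally.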

View File

@@ -1,12 +1,31 @@
package rules
import (
"context"
"fmt"
"testing"
"time"
"github.com/stretchr/testify/require"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/cache/cachetest"
"github.com/SigNoz/signoz/pkg/instrumentation/instrumentationtest"
"github.com/SigNoz/signoz/pkg/prometheus"
"github.com/SigNoz/signoz/pkg/prometheus/prometheustest"
"github.com/SigNoz/signoz/pkg/query-service/app/clickhouseReader"
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/telemetrystore/telemetrystoretest"
"github.com/SigNoz/signoz/pkg/types/metrictypes"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
ruletypes "github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
cmock "github.com/srikanthccv/ClickHouse-go-mock"
)
func TestBaseRule_RequireMinPoints(t *testing.T) {
@@ -81,3 +100,704 @@ func TestBaseRule_RequireMinPoints(t *testing.T) {
})
}
}
// createTestSeries creates a v3.Series with the given labels and optional points.
// Points are not strictly needed here: only the labels are used to build the lookup
// keys whose first_seen timestamps are checked against the metadata table.
func createTestSeries(labels map[string]string, points []v3.Point) v3.Series {
if points == nil {
points = []v3.Point{}
}
return v3.Series{
Labels: labels,
Points: points,
}
}
// calculateFirstSeen calculates first_seen timestamp based on evalTime, delay, and isOld flag
func calculateFirstSeen(evalTime time.Time, delay time.Duration, isOld bool) int64 {
if isOld {
// Old: evalTime - (2 * delay)
return evalTime.Add(-2 * delay).UnixMilli()
}
// New: evalTime - (delay / 2)
return evalTime.Add(-delay / 2).UnixMilli()
}
// createFirstSeenMap creates a first_seen map for a series with given attributes
// metricName: the metric name
// groupByFields: list of groupBy field names
// evalTime: evaluation time
// delay: newGroupEvalDelay
// isOld: whether the series is old (true) or new (false)
// attributeValues: values for each groupBy field in order
func createFirstSeenMap(metricName string, groupByFields []string, evalTime time.Time, delay time.Duration, isOld bool, attributeValues ...string) map[model.MetricMetadataLookupKey]int64 {
result := make(map[model.MetricMetadataLookupKey]int64)
firstSeen := calculateFirstSeen(evalTime, delay, isOld)
for i, field := range groupByFields {
if i < len(attributeValues) {
key := model.MetricMetadataLookupKey{
MetricName: metricName,
AttributeName: field,
AttributeValue: attributeValues[i],
}
result[key] = firstSeen
}
}
return result
}
// mergeFirstSeenMaps merges multiple first_seen maps into one.
// When the same key exists in multiple maps, the lowest value wins,
// which simulates the ClickHouse query taking the minimum first_seen
// timestamp across all rows for a given metric-attribute-value key.
func mergeFirstSeenMaps(maps ...map[model.MetricMetadataLookupKey]int64) map[model.MetricMetadataLookupKey]int64 {
result := make(map[model.MetricMetadataLookupKey]int64)
for _, m := range maps {
for k, v := range m {
if existingValue, exists := result[k]; exists {
// Keep the lowest value
if v < existingValue {
result[k] = v
}
} else {
result[k] = v
}
}
}
return result
}
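// Example: merging {k: 100} with {k: 50, k2: 70} yields {k: 50, k2: 70},
// the lower first_seen winning for the shared key k.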
// createPostableRule creates a PostableRule with the given CompositeQuery
func createPostableRule(compositeQuery *v3.CompositeQuery) ruletypes.PostableRule {
return ruletypes.PostableRule{
AlertName: "Test Rule",
AlertType: ruletypes.AlertTypeMetric,
RuleType: ruletypes.RuleTypeThreshold,
Evaluation: &ruletypes.EvaluationEnvelope{
Kind: ruletypes.RollingEvaluation,
Spec: ruletypes.RollingWindow{
EvalWindow: ruletypes.Duration(5 * time.Minute),
Frequency: ruletypes.Duration(1 * time.Minute),
},
},
RuleCondition: &ruletypes.RuleCondition{
CompositeQuery: compositeQuery,
Thresholds: &ruletypes.RuleThresholdData{
Kind: ruletypes.BasicThresholdKind,
Spec: ruletypes.BasicRuleThresholds{
{
Name: "test-threshold",
TargetValue: func() *float64 { v := 1.0; return &v }(),
CompareOp: ruletypes.ValueIsAbove,
MatchType: ruletypes.AtleastOnce,
},
},
},
},
}
}
// setupMetadataQueryMock sets up the ClickHouse mock for GetFirstSeenFromMetricMetadata query
func setupMetadataQueryMock(telemetryStore *telemetrystoretest.Provider, metricNames []string, groupedFields []string, series []v3.Series, firstSeenMap map[model.MetricMetadataLookupKey]int64) {
if len(firstSeenMap) == 0 || len(series) == 0 {
return
}
// Build args from series the same way we build lookup keys in FilterNewSeries
var args []any
uniqueArgsMap := make(map[string]struct{})
for _, s := range series {
labelMap := s.Labels
for _, metricName := range metricNames {
for _, groupByKey := range groupedFields {
if attrValue, ok := labelMap[groupByKey]; ok {
argKey := fmt.Sprintf("%s,%s,%s", metricName, groupByKey, attrValue)
if _, ok := uniqueArgsMap[argKey]; ok {
continue
}
uniqueArgsMap[argKey] = struct{}{}
args = append(args, metricName, groupByKey, attrValue)
}
}
}
}
// The telemetry store mock is constructed with a match-anything query matcher,
// so the exact SQL text passed to ExpectQuery does not matter; only the args
// and the returned rows need to line up with the real query.
metadataCols := []cmock.ColumnType{
{Name: "metric_name", Type: "String"},
{Name: "attr_name", Type: "String"},
{Name: "attr_string_value", Type: "String"},
{Name: "first_seen", Type: "UInt64"},
}
var values [][]interface{}
for key, firstSeen := range firstSeenMap {
values = append(values, []interface{}{
key.MetricName,
key.AttributeName,
key.AttributeValue,
uint64(firstSeen),
})
}
rows := cmock.NewRows(metadataCols, values)
telemetryStore.Mock().
ExpectQuery("SELECT any").
WithArgs(args...).
WillReturnRows(rows)
}
// filterNewSeriesTestCase represents a test case for FilterNewSeries
type filterNewSeriesTestCase struct {
name string
compositeQuery *v3.CompositeQuery
series []v3.Series
firstSeenMap map[model.MetricMetadataLookupKey]int64
newGroupEvalDelay *time.Duration
evalTime time.Time
expectedSkipIndexes []int
expectError bool
}
func TestBaseRule_FilterNewSeries(t *testing.T) {
defaultEvalTime := time.Unix(1700000000, 0)
defaultDelay := 2 * time.Minute
defaultGroupByFields := []string{"service_name", "env"}
logger := instrumentationtest.New().Logger()
settings := instrumentationtest.New().ToProviderSettings()
tests := []filterNewSeriesTestCase{
{
name: "mixed old and new series - Builder query",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "request_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "env"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc-old", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-new", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-missing", "env": "stage"}, nil),
},
firstSeenMap: mergeFirstSeenMaps(
createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc-old", "prod"),
createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, false, "svc-new", "prod"),
// svc-missing has no metadata, so it is kept: we can't tell whether it is new or old
),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{1}, // only svc-new is skipped; svc-missing is kept as we can't decide if it is new or old
},
{
name: "all new series - PromQL query",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "P1",
Query: "sum by (service_name,env) (rate(request_total[5m]))",
Disabled: false,
Step: qbtypes.Step{Duration: 0},
Stats: false,
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc-new1", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-new2", "env": "stage"}, nil),
},
firstSeenMap: mergeFirstSeenMaps(
createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, false, "svc-new1", "prod"),
createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, false, "svc-new2", "stage"),
),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{0, 1}, // all should be skipped
},
{
name: "all old series - ClickHouse query",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeClickHouseSQL,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeClickHouseSQL,
Spec: qbtypes.ClickHouseQuery{
Name: "CH1",
Query: "SELECT service_name, env FROM metrics WHERE metric_name='request_total' GROUP BY service_name, env",
Disabled: false,
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc-old1", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-old2", "env": "stage"}, nil),
},
firstSeenMap: mergeFirstSeenMaps(
createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc-old1", "prod"),
createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc-old2", "stage"),
),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // none should be skipped
},
{
name: "no grouping in query - Builder",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "request_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
},
GroupBy: []qbtypes.GroupByKey{},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: make(map[model.MetricMetadataLookupKey]int64),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // early return, no filtering
},
{
name: "no metric names - Builder",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "env"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: make(map[model.MetricMetadataLookupKey]int64),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // early return, no filtering
},
{
name: "series with no matching labels - Builder",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "request_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "env"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"status": "200"}, nil), // no service_name or env
},
firstSeenMap: make(map[model.MetricMetadataLookupKey]int64),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // series included as we can't decide if it's new or old
},
{
name: "series with missing metadata - PromQL",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "P1",
Query: "sum by (service_name,env) (rate(request_total[5m]))",
Disabled: false,
Step: qbtypes.Step{Duration: 0},
Stats: false,
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc-old", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-no-metadata", "env": "prod"}, nil),
},
firstSeenMap: createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc-old", "prod"),
// svc-no-metadata has no entry in firstSeenMap
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // svc-no-metadata should not be skipped as we can't decide if it is new or old series
},
{
name: "series with partial metadata - ClickHouse",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeClickHouseSQL,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeClickHouseSQL,
Spec: qbtypes.ClickHouseQuery{
Name: "CH1",
Query: "SELECT service_name, env FROM metrics WHERE metric_name='request_total' GROUP BY service_name, env",
Disabled: false,
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc-partial", "env": "prod"}, nil),
},
// Only provide metadata for service_name, not env
firstSeenMap: map[model.MetricMetadataLookupKey]int64{
{MetricName: "request_total", AttributeName: "service_name", AttributeValue: "svc-partial"}: calculateFirstSeen(defaultEvalTime, defaultDelay, true),
// env metadata is missing
},
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // has some metadata, uses max first_seen which is old
},
{
name: "empty series array - Builder",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "request_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "env"}},
},
},
},
},
},
series: []v3.Series{},
firstSeenMap: make(map[model.MetricMetadataLookupKey]int64),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{},
},
{
name: "zero delay - Builder",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "request_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "env"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc1", "prod"),
newGroupEvalDelay: func() *time.Duration { d := time.Duration(0); return &d }(), // zero delay
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // with zero delay, all series pass
},
{
name: "multiple metrics with same groupBy keys - Builder",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "request_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
{
MetricName: "error_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "env"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: mergeFirstSeenMaps(
createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc1", "prod"),
createFirstSeenMap("error_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc1", "prod"),
),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{},
},
{
name: "series with multiple groupBy attributes where one is new and one is old - Builder",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "request_total",
TimeAggregation: metrictypes.TimeAggregationCount,
SpaceAggregation: metrictypes.SpaceAggregationSum,
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "env"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
// service_name is old, env is new - should use max (new)
firstSeenMap: mergeFirstSeenMaps(
createFirstSeenMap("request_total", []string{"service_name"}, defaultEvalTime, defaultDelay, true, "svc1"),
createFirstSeenMap("request_total", []string{"env"}, defaultEvalTime, defaultDelay, false, "prod"),
),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{0}, // max first_seen is new, so should skip
},
{
name: "Logs query - should skip filtering and return empty skip indexes",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.LogAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalLogs,
Aggregations: []qbtypes.LogAggregation{
{
Expression: "count()",
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc1"}, nil),
createTestSeries(map[string]string{"service_name": "svc2"}, nil),
},
firstSeenMap: make(map[model.MetricMetadataLookupKey]int64),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // Logs queries should return early, no filtering
},
{
name: "Traces query - should skip filtering and return empty skip indexes",
compositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypeBuilder,
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "A",
StepInterval: qbtypes.Step{Duration: 60 * time.Second},
Signal: telemetrytypes.SignalTraces,
Aggregations: []qbtypes.TraceAggregation{
{
Expression: "count()",
},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service_name"}},
},
},
},
},
},
series: []v3.Series{
createTestSeries(map[string]string{"service_name": "svc1"}, nil),
createTestSeries(map[string]string{"service_name": "svc2"}, nil),
},
firstSeenMap: make(map[model.MetricMetadataLookupKey]int64),
newGroupEvalDelay: &defaultDelay,
evalTime: defaultEvalTime,
expectedSkipIndexes: []int{}, // Traces queries should return early, no filtering
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create postableRule from compositeQuery
postableRule := createPostableRule(tt.compositeQuery)
// Setup telemetry store mock
telemetryStore := telemetrystoretest.New(telemetrystore.Config{}, &queryMatcherAny{})
// Create query parser
queryParser := queryparser.New(settings)
// Use query parser to extract metric names and groupBy fields
analyzeResult, err := queryParser.AnalyzeCompositeQuery(context.Background(), tt.compositeQuery)
require.NoError(t, err)
metricNames := analyzeResult.MetricNames
groupedFields := []string{}
for _, col := range analyzeResult.GroupByColumns {
groupedFields = append(groupedFields, col.OriginField)
}
// Setup metadata query mock
setupMetadataQueryMock(telemetryStore, metricNames, groupedFields, tt.series, tt.firstSeenMap)
// Create reader with mocked telemetry store
readerCache, err := cachetest.New(
cache.Config{
Provider: "memory",
Memory: cache.Memory{
NumCounters: 10 * 1000,
MaxCost: 1 << 26,
},
},
)
require.NoError(t, err)
options := clickhouseReader.NewOptions("", "", "archiveNamespace")
reader := clickhouseReader.NewReader(
nil,
telemetryStore,
prometheustest.New(context.Background(), settings, prometheus.Config{}, telemetryStore),
"",
time.Duration(time.Second),
nil,
readerCache,
options,
)
// Set newGroupEvalDelay in NotificationSettings if provided
if tt.newGroupEvalDelay != nil {
postableRule.NotificationSettings = &ruletypes.NotificationSettings{
NewGroupEvalDelay: func() *ruletypes.Duration {
d := ruletypes.Duration(*tt.newGroupEvalDelay)
return &d
}(),
}
}
// Create BaseRule using NewBaseRule
rule, err := NewBaseRule("test-rule", valuer.GenerateUUID(), &postableRule, reader, WithQueryParser(queryParser), WithLogger(logger))
require.NoError(t, err)
skipIndexes, err := rule.FilterNewSeries(context.Background(), tt.evalTime, tt.series)
if tt.expectError {
require.Error(t, err)
return
}
require.NoError(t, err)
require.ElementsMatch(t, tt.expectedSkipIndexes, skipIndexes, "skip indexes should match")
})
}
}

View File

@@ -11,6 +11,7 @@ import (
"time"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/queryparser"
"go.uber.org/zap"
@@ -103,6 +104,7 @@ type ManagerOptions struct {
RuleStore ruletypes.RuleStore
MaintenanceStore ruletypes.MaintenanceStore
SqlStore sqlstore.SQLStore
QueryParser queryparser.QueryParser
}
// The Manager manages recording and alerting rules.
@@ -125,6 +127,8 @@ type Manager struct {
alertmanager alertmanager.Alertmanager
sqlstore sqlstore.SQLStore
orgGetter organization.Getter
// queryParser is used for parsing queries for rules
queryParser queryparser.QueryParser
}
func defaultOptions(o *ManagerOptions) *ManagerOptions {
@@ -166,6 +170,7 @@ func defaultPrepareTaskFunc(opts PrepareTaskOptions) (Task, error) {
opts.SLogger,
WithEvalDelay(opts.ManagerOpts.EvalDelay),
WithSQLStore(opts.SQLStore),
WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
@@ -188,6 +193,7 @@ func defaultPrepareTaskFunc(opts PrepareTaskOptions) (Task, error) {
opts.Reader,
opts.ManagerOpts.Prometheus,
WithSQLStore(opts.SQLStore),
WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
@@ -226,6 +232,7 @@ func NewManager(o *ManagerOptions) (*Manager, error) {
alertmanager: o.Alertmanager,
orgGetter: o.OrgGetter,
sqlstore: o.SqlStore,
queryParser: o.QueryParser,
}
zap.L().Debug("Manager created successfully with NotificationGroup")

View File

@@ -119,6 +119,42 @@ func (r *PromRule) getPqlQuery() (string, error) {
return "", fmt.Errorf("invalid promql rule query")
}
// filterNewSeries filters out new series based on the first_seen timestamp.
func (r *PromRule) filterNewSeries(ctx context.Context, ts time.Time, res promql.Matrix) (promql.Matrix, error) {
// Convert promql.Matrix to []v3.Series
v3Series := make([]v3.Series, 0, len(res))
for _, series := range res {
v3Series = append(v3Series, toCommonSeries(series))
}
// Get indexes to skip
skipIndexes, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, v3Series)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
// if no series are skipped, return the original matrix
if len(skipIndexes) == 0 {
return res, nil
}
// Create a map of skip indexes for efficient lookup
skippedIdxMap := make(map[int]struct{}, len(skipIndexes))
for _, idx := range skipIndexes {
skippedIdxMap[idx] = struct{}{}
}
// Filter out skipped series from promql.Matrix
filteredMatrix := make(promql.Matrix, 0, len(res)-len(skipIndexes))
for i, series := range res {
if _, shouldSkip := skippedIdxMap[i]; !shouldSkip {
filteredMatrix = append(filteredMatrix, series)
}
}
return filteredMatrix, nil
}
func (r *PromRule) buildAndRunQuery(ctx context.Context, ts time.Time) (ruletypes.Vector, error) {
start, end := r.Timestamps(ts)
interval := 60 * time.Second // TODO(srikanthccv): this should be configurable
@@ -135,8 +171,19 @@ func (r *PromRule) buildAndRunQuery(ctx context.Context, ts time.Time) (ruletype
return nil, err
}
matrixToProcess := res
// Filter out new series if newGroupEvalDelay is configured
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.filterNewSeries(ctx, ts, matrixToProcess)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
matrixToProcess = filteredSeries
}
var resultVector ruletypes.Vector
for _, series := range res {
for _, series := range matrixToProcess {
resultSeries, err := r.Threshold.Eval(toCommonSeries(series), r.Unit(), ruletypes.EvalData{
ActiveAlerts: r.ActiveAlertsLabelFP(),
})

View File

@@ -52,6 +52,7 @@ func defaultTestNotification(opts PrepareTestRuleOptions) (int, *model.ApiError)
WithSendAlways(),
WithSendUnmatched(),
WithSQLStore(opts.SQLStore),
WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {
@@ -72,6 +73,7 @@ func defaultTestNotification(opts PrepareTestRuleOptions) (int, *model.ApiError)
WithSendAlways(),
WithSendUnmatched(),
WithSQLStore(opts.SQLStore),
WithQueryParser(opts.ManagerOpts.QueryParser),
)
if err != nil {

View File

@@ -378,6 +378,42 @@ func (r *ThresholdRule) GetSelectedQuery() string {
return r.ruleCondition.GetSelectedQueryName()
}
// filterNewSeries filters out new series based on the first_seen timestamp.
func (r *ThresholdRule) filterNewSeries(ctx context.Context, ts time.Time, series []*v3.Series) ([]*v3.Series, error) {
// Convert []*v3.Series to []v3.Series for filtering
v3Series := make([]v3.Series, 0, len(series))
for _, s := range series {
v3Series = append(v3Series, *s)
}
// Get indexes to skip
skipIndexes, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, v3Series)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
// if no series are skipped, return the original series
if len(skipIndexes) == 0 {
return series, nil
}
// Create a map of skip indexes for efficient lookup
skippedIdxMap := make(map[int]struct{}, len(skipIndexes))
for _, idx := range skipIndexes {
skippedIdxMap[idx] = struct{}{}
}
// Filter out skipped series
oldSeries := make([]*v3.Series, 0, len(series)-len(skipIndexes))
for i, s := range series {
if _, shouldSkip := skippedIdxMap[i]; !shouldSkip {
oldSeries = append(oldSeries, s)
}
}
return oldSeries, nil
}
func (r *ThresholdRule) buildAndRunQuery(ctx context.Context, orgID valuer.UUID, ts time.Time) (ruletypes.Vector, error) {
params, err := r.prepareQueryRange(ctx, ts)
@@ -481,7 +517,18 @@ func (r *ThresholdRule) buildAndRunQuery(ctx context.Context, orgID valuer.UUID,
return resultVector, nil
}
for _, series := range queryResult.Series {
// Filter out new series if newGroupEvalDelay is configured
seriesToProcess := queryResult.Series
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.filterNewSeries(ctx, ts, seriesToProcess)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
seriesToProcess = filteredSeries
}
for _, series := range seriesToProcess {
if r.Condition() != nil && r.Condition().RequireMinPoints {
if len(series.Points) < r.ruleCondition.RequiredNumPoints {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)
@@ -560,7 +607,17 @@ func (r *ThresholdRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUI
return resultVector, nil
}
for _, series := range queryResult.Series {
// Filter out new series if newGroupEvalDelay is configured
seriesToProcess := queryResult.Series
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.filterNewSeries(ctx, ts, seriesToProcess)
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
return nil, filterErr
}
seriesToProcess = filteredSeries
}
for _, series := range seriesToProcess {
if r.Condition() != nil && r.Condition().RequireMinPoints {
if len(series.Points) < r.Condition().RequiredNumPoints {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)

View File

@@ -9,19 +9,22 @@ import (
// AssignReservedVarsV3 assigns values for go template vars. It assumes that
// model.QueryRangeParamsV3.Start and End are Unix millisecond timestamps
func AssignReservedVarsV3(queryRangeParams *v3.QueryRangeParamsV3) {
queryRangeParams.Variables["start_timestamp"] = queryRangeParams.Start / 1000
queryRangeParams.Variables["end_timestamp"] = queryRangeParams.End / 1000
queryRangeParams.Variables["start_timestamp_ms"] = queryRangeParams.Start
queryRangeParams.Variables["end_timestamp_ms"] = queryRangeParams.End
queryRangeParams.Variables["SIGNOZ_START_TIME"] = queryRangeParams.Start
queryRangeParams.Variables["SIGNOZ_END_TIME"] = queryRangeParams.End
queryRangeParams.Variables["start_timestamp_nano"] = queryRangeParams.Start * 1e6
queryRangeParams.Variables["end_timestamp_nano"] = queryRangeParams.End * 1e6
queryRangeParams.Variables["start_datetime"] = fmt.Sprintf("toDateTime(%d)", queryRangeParams.Start/1000)
queryRangeParams.Variables["end_datetime"] = fmt.Sprintf("toDateTime(%d)", queryRangeParams.End/1000)
AssignReservedVars(queryRangeParams.Variables, queryRangeParams.Start, queryRangeParams.End)
}
func AssignReservedVars(variables map[string]interface{}, start int64, end int64) {
variables["start_timestamp"] = start / 1000
variables["end_timestamp"] = end / 1000
variables["start_timestamp_ms"] = start
variables["end_timestamp_ms"] = end
variables["SIGNOZ_START_TIME"] = start
variables["SIGNOZ_END_TIME"] = end
variables["start_timestamp_nano"] = start * 1e6
variables["end_timestamp_nano"] = end * 1e6
variables["start_datetime"] = fmt.Sprintf("toDateTime(%d)", start/1000)
variables["end_datetime"] = fmt.Sprintf("toDateTime(%d)", end/1000)
}
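// Example: for start = 1700000000000 (ms) this assigns
// start_timestamp = 1700000000 (seconds), start_timestamp_ms = 1700000000000,
// start_timestamp_nano = 1700000000000000000, and start_datetime = "toDateTime(1700000000)".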

View File

@@ -2,6 +2,7 @@ package queryfilterextractor
import (
"fmt"
"sort"
"strings"
clickhouse "github.com/AfterShip/clickhouse-sql-parser/parser"
@@ -87,6 +88,12 @@ func (e *ClickHouseFilterExtractor) Extract(query string) (*FilterResult, error)
result.GroupByColumns = append(result.GroupByColumns, colInfo)
}
// Sort the metric names and group by columns to return deterministic results
sort.Strings(result.MetricNames)
sort.Slice(result.GroupByColumns, func(i, j int) bool {
return result.GroupByColumns[i].Name < result.GroupByColumns[j].Name
})
return result, nil
}

View File

@@ -1,6 +1,8 @@
package queryfilterextractor
import (
"sort"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/promql/parser"
@@ -45,6 +47,12 @@ func (e *PromQLFilterExtractor) Extract(query string) (*FilterResult, error) {
result.GroupByColumns = append(result.GroupByColumns, ColumnInfo{Name: groupKey, OriginExpr: groupKey, OriginField: groupKey})
}
// Sort the metric names and group by columns to return deterministic results
sort.Strings(result.MetricNames)
sort.Slice(result.GroupByColumns, func(i, j int) bool {
return result.GroupByColumns[i].Name < result.GroupByColumns[j].Name
})
return result, nil
}

View File

@@ -2,7 +2,9 @@ package queryparser
import (
"context"
"fmt"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/queryparser/queryfilterextractor"
"github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
)
@@ -11,4 +13,22 @@ import (
type QueryParser interface {
// AnalyzeQueryFilter extracts filter conditions from a given query string.
AnalyzeQueryFilter(ctx context.Context, queryType querybuildertypesv5.QueryType, query string) (*queryfilterextractor.FilterResult, error)
// AnalyzeCompositeQuery extracts filter conditions from a composite query.
AnalyzeCompositeQuery(ctx context.Context, compositeQuery *v3.CompositeQuery) (*queryfilterextractor.FilterResult, error)
// ValidateCompositeQuery validates a composite query and returns an error if validation fails.
ValidateCompositeQuery(ctx context.Context, compositeQuery *v3.CompositeQuery) error
}
type QueryParseError struct {
StartPosition *int
EndPosition *int
ErrorMessage string
Query string
}
func (e *QueryParseError) Error() string {
if e.StartPosition != nil && e.EndPosition != nil {
return fmt.Sprintf("query parse error: %s at position %d:%d", e.ErrorMessage, *e.StartPosition, *e.EndPosition)
}
return fmt.Sprintf("query parse error: %s", e.ErrorMessage)
}
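// Example (the message text is hypothetical): with StartPosition=10 and EndPosition=18,
// Error() renders "query parse error: unexpected token at position 10:18";
// with nil positions it renders "query parse error: unexpected token".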

View File

@@ -1,40 +0,0 @@
package queryparser
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/queryparser/queryfilterextractor"
"github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
)
type queryParserImpl struct {
settings factory.ProviderSettings
}
// New creates a new implementation of the QueryParser service.
func New(settings factory.ProviderSettings) QueryParser {
return &queryParserImpl{
settings: settings,
}
}
func (p *queryParserImpl) AnalyzeQueryFilter(ctx context.Context, queryType querybuildertypesv5.QueryType, query string) (*queryfilterextractor.FilterResult, error) {
var extractorType queryfilterextractor.ExtractorType
switch queryType {
case querybuildertypesv5.QueryTypePromQL:
extractorType = queryfilterextractor.ExtractorTypePromQL
case querybuildertypesv5.QueryTypeClickHouseSQL:
extractorType = queryfilterextractor.ExtractorTypeClickHouseSQL
default:
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "unsupported queryType: %s. Supported values are '%s' and '%s'", queryType, querybuildertypesv5.QueryTypePromQL, querybuildertypesv5.QueryTypeClickHouseSQL)
}
// Create extractor
extractor, err := queryfilterextractor.NewExtractor(extractorType)
if err != nil {
return nil, err
}
return extractor.Extract(query)
}

View File

@@ -0,0 +1,256 @@
package queryparser
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/queryparser/queryfilterextractor"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
)
type queryParserImpl struct {
settings factory.ProviderSettings
}
// New creates a new implementation of the QueryParser service.
func New(settings factory.ProviderSettings) QueryParser {
return &queryParserImpl{
settings: settings,
}
}
func (p *queryParserImpl) AnalyzeQueryFilter(ctx context.Context, queryType qbtypes.QueryType, query string) (*queryfilterextractor.FilterResult, error) {
var extractorType queryfilterextractor.ExtractorType
switch queryType {
case qbtypes.QueryTypePromQL:
extractorType = queryfilterextractor.ExtractorTypePromQL
case qbtypes.QueryTypeClickHouseSQL:
extractorType = queryfilterextractor.ExtractorTypeClickHouseSQL
default:
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "unsupported queryType: %s. Supported values are '%s' and '%s'", queryType, qbtypes.QueryTypePromQL, qbtypes.QueryTypeClickHouseSQL)
}
// Create extractor
extractor, err := queryfilterextractor.NewExtractor(extractorType)
if err != nil {
return nil, err
}
return extractor.Extract(query)
}
func (p *queryParserImpl) AnalyzeCompositeQuery(ctx context.Context, compositeQuery *v3.CompositeQuery) (*queryfilterextractor.FilterResult, error) {
var result = &queryfilterextractor.FilterResult{
MetricNames: []string{},
GroupByColumns: []queryfilterextractor.ColumnInfo{},
}
for _, query := range compositeQuery.Queries {
switch query.Type {
case qbtypes.QueryTypeBuilder:
switch spec := query.Spec.(type) {
case qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]:
// extract group by fields
for _, groupBy := range spec.GroupBy {
if groupBy.Name != "" {
result.GroupByColumns = append(result.GroupByColumns, queryfilterextractor.ColumnInfo{Name: groupBy.Name, OriginExpr: groupBy.Name, OriginField: groupBy.Name})
}
}
// extract metric names
for _, aggregation := range spec.Aggregations {
if aggregation.MetricName != "" {
result.MetricNames = append(result.MetricNames, aggregation.MetricName)
}
}
default:
// TODO: add support for Traces and Logs Aggregation types
if p.settings.Logger != nil {
p.settings.Logger.WarnContext(ctx, "unsupported QueryBuilderQuery type", "type", fmt.Sprintf("%T", spec))
}
continue
}
case qbtypes.QueryTypePromQL:
spec, ok := query.Spec.(qbtypes.PromQuery)
if !ok || spec.Query == "" {
continue
}
res, err := p.AnalyzeQueryFilter(ctx, qbtypes.QueryTypePromQL, spec.Query)
if err != nil {
return nil, err
}
result.MetricNames = append(result.MetricNames, res.MetricNames...)
result.GroupByColumns = append(result.GroupByColumns, res.GroupByColumns...)
case qbtypes.QueryTypeClickHouseSQL:
spec, ok := query.Spec.(qbtypes.ClickHouseQuery)
if !ok || spec.Query == "" {
continue
}
res, err := p.AnalyzeQueryFilter(ctx, qbtypes.QueryTypeClickHouseSQL, spec.Query)
if err != nil {
return nil, err
}
result.MetricNames = append(result.MetricNames, res.MetricNames...)
result.GroupByColumns = append(result.GroupByColumns, res.GroupByColumns...)
default:
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "unsupported query type: %s", query.Type)
}
}
return result, nil
}
// ValidateCompositeQuery validates a composite query by checking all queries in the queries array
func (p *queryParserImpl) ValidateCompositeQuery(ctx context.Context, compositeQuery *v3.CompositeQuery) error {
if compositeQuery == nil {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"composite query is required",
)
}
if len(compositeQuery.Queries) == 0 {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"at least one query is required",
)
}
// Validate each query
for i, envelope := range compositeQuery.Queries {
queryId := qbtypes.GetQueryIdentifier(envelope, i)
switch envelope.Type {
case qbtypes.QueryTypeBuilder, qbtypes.QueryTypeSubQuery:
switch spec := envelope.Spec.(type) {
case qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]:
if err := spec.Validate(qbtypes.RequestTypeTimeSeries); err != nil {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid %s: %s",
queryId,
err.Error(),
)
}
case qbtypes.QueryBuilderQuery[qbtypes.LogAggregation]:
if err := spec.Validate(qbtypes.RequestTypeTimeSeries); err != nil {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid %s: %s",
queryId,
err.Error(),
)
}
case qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]:
if err := spec.Validate(qbtypes.RequestTypeTimeSeries); err != nil {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid %s: %s",
queryId,
err.Error(),
)
}
default:
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"unknown query spec type for %s",
queryId,
)
}
case qbtypes.QueryTypePromQL:
spec, ok := envelope.Spec.(qbtypes.PromQuery)
if !ok {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
queryId,
)
}
if spec.Query == "" {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"query expression is required for %s",
queryId,
)
}
if err := validatePromQLQuery(spec.Query); err != nil {
return err
}
case qbtypes.QueryTypeClickHouseSQL:
spec, ok := envelope.Spec.(qbtypes.ClickHouseQuery)
if !ok {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
queryId,
)
}
if spec.Query == "" {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"query expression is required for %s",
queryId,
)
}
if err := validateClickHouseQuery(spec.Query); err != nil {
return err
}
case qbtypes.QueryTypeFormula:
spec, ok := envelope.Spec.(qbtypes.QueryBuilderFormula)
if !ok {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
queryId,
)
}
if err := spec.Validate(); err != nil {
return err
}
case qbtypes.QueryTypeJoin:
spec, ok := envelope.Spec.(qbtypes.QueryBuilderJoin)
if !ok {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
queryId,
)
}
if err := spec.Validate(); err != nil {
return err
}
case qbtypes.QueryTypeTraceOperator:
spec, ok := envelope.Spec.(qbtypes.QueryBuilderTraceOperator)
if !ok {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
queryId,
)
}
err := spec.ValidateTraceOperator(compositeQuery.Queries)
if err != nil {
return err
}
default:
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"unknown query type '%s' for %s",
envelope.Type,
queryId,
).WithAdditional(
"Valid query types are: builder_query, builder_sub_query, builder_formula, builder_join, promql, clickhouse_sql, trace_operator",
)
}
}
// Check if all queries are disabled
if allDisabled := checkQueriesDisabled(compositeQuery); allDisabled {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"all queries are disabled - at least one query must be enabled",
)
}
return nil
}
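// Usage sketch (illustrative; parser and cq are assumed to be in scope):
// if err := parser.ValidateCompositeQuery(ctx, cq); err != nil {
//     return err // surfaces the first invalid query, e.g. a missing PromQL expression
// }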

View File

@@ -0,0 +1,112 @@
package queryparser
import (
"context"
"encoding/json"
"testing"
"github.com/SigNoz/signoz/pkg/instrumentation/instrumentationtest"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/queryparser/queryfilterextractor"
"github.com/stretchr/testify/require"
)
func TestBaseRule_ExtractMetricAndGroupBys(t *testing.T) {
ctx := context.Background()
tests := []struct {
name string
payload string
wantMetrics []string
wantGroupBy []queryfilterextractor.ColumnInfo
}{
{
name: "builder multiple grouping",
payload: builderQueryWithGrouping,
wantMetrics: []string{"test_metric_cardinality", "cpu_usage_total"},
wantGroupBy: []queryfilterextractor.ColumnInfo{
{Name: "service_name", Alias: "", OriginExpr: "service_name", OriginField: "service_name"},
{Name: "env", Alias: "", OriginExpr: "env", OriginField: "env"},
},
},
{
name: "builder single grouping",
payload: builderQuerySingleGrouping,
wantMetrics: []string{"latency_p50"},
wantGroupBy: []queryfilterextractor.ColumnInfo{
{Name: "namespace", Alias: "", OriginExpr: "namespace", OriginField: "namespace"},
},
},
{
name: "builder no grouping",
payload: builderQueryNoGrouping,
wantMetrics: []string{"disk_usage_total"},
wantGroupBy: []queryfilterextractor.ColumnInfo{},
},
{
name: "promql multiple grouping",
payload: promQueryWithGrouping,
wantMetrics: []string{"http_requests_total"},
wantGroupBy: []queryfilterextractor.ColumnInfo{
{Name: "pod", Alias: "", OriginExpr: "pod", OriginField: "pod"},
{Name: "region", Alias: "", OriginExpr: "region", OriginField: "region"},
},
},
{
name: "promql single grouping",
payload: promQuerySingleGrouping,
wantMetrics: []string{"cpu_usage_seconds_total"},
wantGroupBy: []queryfilterextractor.ColumnInfo{
{Name: "env", Alias: "", OriginExpr: "env", OriginField: "env"},
},
},
{
name: "promql no grouping",
payload: promQueryNoGrouping,
wantMetrics: []string{"node_cpu_seconds_total"},
wantGroupBy: []queryfilterextractor.ColumnInfo{},
},
{
name: "clickhouse multiple grouping",
payload: clickHouseQueryWithGrouping,
wantMetrics: []string{"cpu"},
wantGroupBy: []queryfilterextractor.ColumnInfo{
{Name: "region", Alias: "r", OriginExpr: "region", OriginField: "region"},
{Name: "zone", Alias: "", OriginExpr: "zone", OriginField: "zone"},
},
},
{
name: "clickhouse single grouping",
payload: clickHouseQuerySingleGrouping,
wantMetrics: []string{"cpu_usage"},
wantGroupBy: []queryfilterextractor.ColumnInfo{
{Name: "region", Alias: "r", OriginExpr: "region", OriginField: "region"},
},
},
{
name: "clickhouse no grouping",
payload: clickHouseQueryNoGrouping,
wantMetrics: []string{"memory_usage"},
wantGroupBy: []queryfilterextractor.ColumnInfo{},
},
}
queryParser := New(instrumentationtest.New().ToProviderSettings())
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cq := mustCompositeQuery(t, tt.payload)
res, err := queryParser.AnalyzeCompositeQuery(ctx, cq)
require.NoError(t, err)
require.ElementsMatch(t, tt.wantMetrics, res.MetricNames)
require.ElementsMatch(t, tt.wantGroupBy, res.GroupByColumns)
})
}
}
func mustCompositeQuery(t *testing.T, payload string) *v3.CompositeQuery {
t.Helper()
var compositeQuery v3.CompositeQuery
require.NoError(t, json.Unmarshal([]byte(payload), &compositeQuery))
return &compositeQuery
}

View File

@@ -0,0 +1,184 @@
package queryparser
var (
builderQueryWithGrouping = `
{
"queryType":"builder",
"panelType":"graph",
"queries":[
{
"type":"builder_query",
"spec":{
"name":"A",
"signal":"metrics",
"stepInterval":null,
"disabled":false,
"filter":{"expression":""},
"groupBy":[
{"name":"service_name","fieldDataType":"","fieldContext":""},
{"name":"env","fieldDataType":"","fieldContext":""}
],
"aggregations":[
{"metricName":"test_metric_cardinality","timeAggregation":"count","spaceAggregation":"sum"},
{"metricName":"cpu_usage_total","timeAggregation":"avg","spaceAggregation":"avg"}
]
}
}
]
}
`
builderQuerySingleGrouping = `
{
"queryType":"builder",
"panelType":"graph",
"queries":[
{
"type":"builder_query",
"spec":{
"name":"B",
"signal":"metrics",
"stepInterval":null,
"disabled":false,
"groupBy":[
{"name":"namespace","fieldDataType":"","fieldContext":""}
],
"aggregations":[
{"metricName":"latency_p50","timeAggregation":"avg","spaceAggregation":"max"}
]
}
}
]
}
`
builderQueryNoGrouping = `
{
"queryType":"builder",
"panelType":"graph",
"queries":[
{
"type":"builder_query",
"spec":{
"name":"C",
"signal":"metrics",
"stepInterval":null,
"disabled":false,
"groupBy":[],
"aggregations":[
{"metricName":"disk_usage_total","timeAggregation":"sum","spaceAggregation":"sum"}
]
}
}
]
}
`
promQueryWithGrouping = `
{
"queries":[
{
"type":"promql",
"spec":{
"name":"P1",
"query":"sum by (pod,region) (rate(http_requests_total[5m]))",
"disabled":false,
"step":0,
"stats":false
}
}
],
"panelType":"graph",
"queryType":"promql"
}
`
promQuerySingleGrouping = `
{
"queries":[
{
"type":"promql",
"spec":{
"name":"P2",
"query":"sum by (env)(rate(cpu_usage_seconds_total{job=\"api\"}[5m]))",
"disabled":false,
"step":0,
"stats":false
}
}
],
"panelType":"graph",
"queryType":"promql"
}
`
promQueryNoGrouping = `
{
"queries":[
{
"type":"promql",
"spec":{
"name":"P3",
"query":"rate(node_cpu_seconds_total[1m])",
"disabled":false,
"step":0,
"stats":false
}
}
],
"panelType":"graph",
"queryType":"promql"
}
`
clickHouseQueryWithGrouping = `
{
"queryType":"clickhouse_sql",
"panelType":"graph",
"queries":[
{
"type":"clickhouse_sql",
"spec":{
"name":"CH1",
"query":"SELECT region as r, zone FROM metrics WHERE metric_name='cpu' GROUP BY region, zone",
"disabled":false
}
}
]
}
`
clickHouseQuerySingleGrouping = `
{
"queryType":"clickhouse_sql",
"panelType":"graph",
"queries":[
{
"type":"clickhouse_sql",
"spec":{
"name":"CH2",
"query":"SELECT region as r FROM metrics WHERE metric_name='cpu_usage' GROUP BY region",
"disabled":false
}
}
]
}
`
clickHouseQueryNoGrouping = `
{
"queryType":"clickhouse_sql",
"panelType":"graph",
"queries":[
{
"type":"clickhouse_sql",
"spec":{
"name":"CH3",
"query":"SELECT * FROM metrics WHERE metric_name = 'memory_usage'",
"disabled":false
}
}
]
}
`
)

View File

@@ -0,0 +1,466 @@
package queryparser
import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/instrumentation/instrumentationtest"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestValidateCompositeQuery(t *testing.T) {
ctx := context.Background()
queryParser := New(instrumentationtest.New().ToProviderSettings())
tests := []struct {
name string
compositeQuery *v3.CompositeQuery
wantErr bool
errContains string
}{
{
name: "nil composite query should return error",
compositeQuery: nil,
wantErr: true,
errContains: "composite query is required",
},
{
name: "empty queries array should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{},
},
wantErr: true,
errContains: "at least one query is required",
},
{
name: "valid metric builder query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "metric_query",
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
},
},
},
},
},
},
wantErr: false,
},
{
name: "valid log builder query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.LogAggregation]{
Name: "log_query",
Signal: telemetrytypes.SignalLogs,
Aggregations: []qbtypes.LogAggregation{
{
Expression: "count()",
},
},
},
},
},
},
wantErr: false,
},
{
name: "valid trace builder query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "trace_query",
Signal: telemetrytypes.SignalTraces,
Aggregations: []qbtypes.TraceAggregation{
{
Expression: "count()",
},
},
},
},
},
},
wantErr: false,
},
{
name: "valid PromQL query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "rate(http_requests_total[5m])",
},
},
},
},
wantErr: false,
},
{
name: "valid ClickHouse query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeClickHouseSQL,
Spec: qbtypes.ClickHouseQuery{
Name: "ch_query",
Query: "SELECT count(*) FROM metrics WHERE metric_name = 'cpu_usage'",
},
},
},
},
wantErr: false,
},
{
name: "valid formula query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeFormula,
Spec: qbtypes.QueryBuilderFormula{
Name: "formula_query",
Expression: "A + B",
},
},
},
},
wantErr: false,
},
{
name: "valid join query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeJoin,
Spec: qbtypes.QueryBuilderJoin{
Name: "join_query",
Left: qbtypes.QueryRef{Name: "A"},
Right: qbtypes.QueryRef{Name: "B"},
Type: qbtypes.JoinTypeInner,
On: "service_name",
},
},
},
},
wantErr: false,
},
{
name: "valid trace operator query should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "A",
Signal: telemetrytypes.SignalTraces,
Aggregations: []qbtypes.TraceAggregation{
{
Expression: "count()",
},
},
},
},
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "B",
Signal: telemetrytypes.SignalTraces,
Aggregations: []qbtypes.TraceAggregation{
{
Expression: "count()",
},
},
},
},
{
Type: qbtypes.QueryTypeTraceOperator,
Spec: qbtypes.QueryBuilderTraceOperator{
Name: "trace_operator",
Expression: "A && B",
},
},
},
},
wantErr: false,
},
{
name: "invalid metric builder query - missing aggregation should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "metric_query",
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{},
},
},
},
},
wantErr: true,
errContains: "invalid",
},
{
name: "invalid PromQL query - empty query should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "",
},
},
},
},
wantErr: true,
errContains: "query expression is required",
},
{
name: "invalid PromQL query - syntax error should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "rate(http_requests_total[5m",
},
},
},
},
wantErr: true,
errContains: "unclosed left parenthesis",
},
{
name: "invalid ClickHouse query - empty query should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeClickHouseSQL,
Spec: qbtypes.ClickHouseQuery{
Name: "ch_query",
Query: "",
},
},
},
},
wantErr: true,
errContains: "query expression is required",
},
{
name: "invalid ClickHouse query - syntax error should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeClickHouseSQL,
Spec: qbtypes.ClickHouseQuery{
Name: "ch_query",
Query: "SELECT * FROM metrics WHERE",
},
},
},
},
wantErr: true,
errContains: "query parse error",
},
{
name: "invalid formula query - empty expression should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeFormula,
Spec: qbtypes.QueryBuilderFormula{
Name: "formula_query",
Expression: "",
},
},
},
},
wantErr: true,
errContains: "formula expression cannot be blank",
},
{
name: "invalid trace operator query - empty expression should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeTraceOperator,
Spec: qbtypes.QueryBuilderTraceOperator{
Name: "trace_operator",
Expression: "",
},
},
},
},
wantErr: true,
errContains: "expression cannot be empty",
},
{
name: "all queries disabled should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "metric_query",
Disabled: true,
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
},
},
},
},
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "rate(http_requests_total[5m])",
Disabled: true,
},
},
},
},
wantErr: true,
errContains: "all queries are disabled",
},
{
name: "mixed disabled and enabled queries should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "metric_query",
Disabled: true,
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
},
},
},
},
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "rate(http_requests_total[5m])",
Disabled: false,
},
},
},
},
wantErr: false,
},
{
name: "multiple valid queries should pass",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "metric_query",
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
},
},
},
},
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "rate(http_requests_total[5m])",
},
},
{
Type: qbtypes.QueryTypeClickHouseSQL,
Spec: qbtypes.ClickHouseQuery{
Name: "ch_query",
Query: "SELECT count(*) FROM metrics WHERE metric_name = 'cpu_usage'",
},
},
},
},
wantErr: false,
},
{
name: "invalid query in multiple queries should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "metric_query",
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
},
},
},
},
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "invalid promql syntax [",
},
},
},
},
wantErr: true,
errContains: "query parse error",
},
{
name: "unknown query type should return error",
compositeQuery: &v3.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryType{String: valuer.NewString("invalid_query_type")},
Spec: qbtypes.PromQuery{
Name: "prom_query",
Query: "rate(http_requests_total[5m])",
},
},
},
},
wantErr: true,
errContains: "unknown query type",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := queryParser.ValidateCompositeQuery(ctx, tt.compositeQuery)
if tt.wantErr {
require.Error(t, err)
if tt.errContains != "" {
require.Contains(t, err.Error(), tt.errContains)
}
} else {
require.NoError(t, err)
}
})
}
}

View File

@@ -0,0 +1,123 @@
package queryparser
import (
"bytes"
"text/template"
"time"
clickhouse "github.com/AfterShip/clickhouse-sql-parser/parser"
"github.com/SigNoz/signoz/pkg/errors"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
querytemplate "github.com/SigNoz/signoz/pkg/query-service/utils/queryTemplate"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/prometheus/prometheus/promql/parser"
)
// validatePromQLQuery validates PromQL query syntax using the Prometheus parser
func validatePromQLQuery(query string) error {
_, err := parser.ParseExpr(query)
if err != nil {
if syntaxErrs, ok := err.(parser.ParseErrors); ok {
syntaxErr := syntaxErrs[0]
startPosition := int(syntaxErr.PositionRange.Start)
endPosition := int(syntaxErr.PositionRange.End)
return &QueryParseError{
StartPosition: &startPosition,
EndPosition: &endPosition,
ErrorMessage: syntaxErr.Error(),
Query: query,
}
}
}
return err
}
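Note: a standalone sketch of the Prometheus parser behaviour this helper wraps; the reported positions are offsets into the query string:

_, err := parser.ParseExpr("rate(http_requests_total[5m")
if syntaxErrs, ok := err.(parser.ParseErrors); ok {
	firstErr := syntaxErrs[0]
	// Start/End delimit the span of the unclosed-parenthesis error.
	fmt.Println(int(firstErr.PositionRange.Start), int(firstErr.PositionRange.End), firstErr.Error())
}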
// validateClickHouseQuery validates ClickHouse SQL query syntax using the ClickHouse parser
func validateClickHouseQuery(query string) error {
// Assign the default template variables with dummy values
variables := make(map[string]interface{})
start := time.Now().UnixMilli()
end := start + 1000
querytemplate.AssignReservedVars(variables, start, end)
// Apply the values for default template variables before parsing the query
tmpl := template.New("clickhouse-query")
tmpl, err := tmpl.Parse(query)
if err != nil {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"failed to parse clickhouse query: %s",
err.Error(),
)
}
var queryBuffer bytes.Buffer
err = tmpl.Execute(&queryBuffer, variables)
if err != nil {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"failed to execute clickhouse query template: %s",
err.Error(),
)
}
// Parse the ClickHouse query with the default template variables applied
p := clickhouse.NewParser(queryBuffer.String())
_, err = p.ParseStmts()
if err != nil {
// TODO: the error returned here is a plain errors.errorString; rather than using a regex to
// parse the error message, we should consider a library that parses the CH query more
// accurately, since the current CH parser performs only minimal checks.
// Sample error: "line 0:36 expected table name or subquery, got ;\nSELECT department, avg(salary) FROM ;\n ^\n"
return &QueryParseError{
ErrorMessage: err.Error(),
Query: query,
}
}
return nil
}
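Note: a minimal sketch of the template-then-parse flow above; the reserved variable name below is illustrative only (real names come from querytemplate.AssignReservedVars):

// "start_timestamp_ms" is a hypothetical variable name used for illustration.
vars := map[string]interface{}{"start_timestamp_ms": int64(1700000000000)}
tmpl, err := template.New("q").Parse("SELECT count() FROM samples WHERE ts >= {{.start_timestamp_ms}}")
if err != nil {
	log.Fatal(err)
}
var buf bytes.Buffer
if err := tmpl.Execute(&buf, vars); err != nil {
	log.Fatal(err)
}
// Only the rendered SQL reaches the ClickHouse parser.
if _, err := clickhouse.NewParser(buf.String()).ParseStmts(); err != nil {
	log.Fatal(err)
}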
// checkQueriesDisabled reports whether every query in the composite query is disabled.
func checkQueriesDisabled(compositeQuery *v3.CompositeQuery) bool {
for _, envelope := range compositeQuery.Queries {
switch envelope.Type {
case qbtypes.QueryTypeBuilder, qbtypes.QueryTypeSubQuery:
switch spec := envelope.Spec.(type) {
case qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]:
if !spec.Disabled {
return false
}
case qbtypes.QueryBuilderQuery[qbtypes.LogAggregation]:
if !spec.Disabled {
return false
}
case qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]:
if !spec.Disabled {
return false
}
}
case qbtypes.QueryTypeFormula:
if spec, ok := envelope.Spec.(qbtypes.QueryBuilderFormula); ok && !spec.Disabled {
return false
}
case qbtypes.QueryTypeTraceOperator:
if spec, ok := envelope.Spec.(qbtypes.QueryBuilderTraceOperator); ok && !spec.Disabled {
return false
}
case qbtypes.QueryTypeJoin:
if spec, ok := envelope.Spec.(qbtypes.QueryBuilderJoin); ok && !spec.Disabled {
return false
}
case qbtypes.QueryTypePromQL:
if spec, ok := envelope.Spec.(qbtypes.PromQuery); ok && !spec.Disabled {
return false
}
case qbtypes.QueryTypeClickHouseSQL:
if spec, ok := envelope.Spec.(qbtypes.ClickHouseQuery); ok && !spec.Disabled {
return false
}
}
}
// If we reach here, all queries are disabled
return true
}

View File

@@ -5,6 +5,8 @@ import (
"regexp"
"github.com/DATA-DOG/go-sqlmock"
"github.com/SigNoz/signoz/pkg/factory/factorytest"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler/rulestore/sqlrulestore"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/sqlstore/sqlstoretest"
@@ -21,7 +23,10 @@ type MockSQLRuleStore struct {
// NewMockSQLRuleStore creates a new MockSQLRuleStore with sqlmock
func NewMockSQLRuleStore() *MockSQLRuleStore {
sqlStore := sqlstoretest.New(sqlstore.Config{Provider: "sqlite"}, sqlmock.QueryMatcherRegexp)
ruleStore := sqlrulestore.NewRuleStore(sqlStore)
// For tests, construct a real query parser from test provider settings
providerSettings := factorytest.NewSettings()
ruleStore := sqlrulestore.NewRuleStore(sqlStore, queryparser.New(providerSettings), providerSettings)
return &MockSQLRuleStore{
ruleStore: ruleStore,
@@ -59,6 +64,11 @@ func (m *MockSQLRuleStore) GetStoredRules(ctx context.Context, orgID string) ([]
return m.ruleStore.GetStoredRules(ctx, orgID)
}
// GetStoredRulesByMetricName implements ruletypes.RuleStore - delegates to underlying ruleStore
func (m *MockSQLRuleStore) GetStoredRulesByMetricName(ctx context.Context, orgID string, metricName string) ([]ruletypes.RuleAlert, error) {
return m.ruleStore.GetStoredRulesByMetricName(ctx, orgID, metricName)
}
// ExpectCreateRule sets up SQL expectations for CreateRule operation
func (m *MockSQLRuleStore) ExpectCreateRule(rule *ruletypes.Rule) {
rows := sqlmock.NewRows([]string{"id", "created_at", "updated_at", "created_by", "updated_by", "deleted", "data", "org_id"}).
@@ -104,6 +114,17 @@ func (m *MockSQLRuleStore) ExpectGetStoredRules(orgID string, rules []*ruletypes
WillReturnRows(rows)
}
// ExpectGetStoredRulesByMetricName sets up SQL expectations for GetStoredRulesByMetricName operation
func (m *MockSQLRuleStore) ExpectGetStoredRulesByMetricName(orgID string, metricName string, rules []*ruletypes.Rule) {
rows := sqlmock.NewRows([]string{"id", "created_at", "updated_at", "created_by", "updated_by", "deleted", "data", "org_id"})
for _, rule := range rules {
rows.AddRow(rule.ID, rule.CreatedAt, rule.UpdatedAt, rule.CreatedBy, rule.UpdatedBy, rule.Deleted, rule.Data, rule.OrgID)
}
expectedPattern := `SELECT (.+) FROM "rule".+WHERE \(.+org_id.+'` + orgID + `'\)`
m.mock.ExpectQuery(expectedPattern).
WillReturnRows(rows)
}
// AssertExpectations asserts that all SQL expectations were met
func (m *MockSQLRuleStore) AssertExpectations() error {
return m.mock.ExpectationsWereMet()

View File

@@ -2,18 +2,31 @@ package sqlrulestore
import (
"context"
"encoding/json"
"log/slog"
"slices"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/sqlstore"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
ruletypes "github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type rule struct {
sqlstore sqlstore.SQLStore
sqlstore sqlstore.SQLStore
queryParser queryparser.QueryParser
logger *slog.Logger
}
func NewRuleStore(store sqlstore.SQLStore) ruletypes.RuleStore {
return &rule{sqlstore: store}
func NewRuleStore(store sqlstore.SQLStore, queryParser queryparser.QueryParser, providerSettings factory.ProviderSettings) ruletypes.RuleStore {
return &rule{
sqlstore: store,
queryParser: queryParser,
logger: providerSettings.Logger,
}
}
func (r *rule) CreateRule(ctx context.Context, storedRule *ruletypes.Rule, cb func(context.Context, valuer.UUID) error) (valuer.UUID, error) {
@@ -101,3 +114,92 @@ func (r *rule) GetStoredRule(ctx context.Context, id valuer.UUID) (*ruletypes.Ru
}
return rule, nil
}
func (r *rule) GetStoredRulesByMetricName(ctx context.Context, orgID string, metricName string) ([]ruletypes.RuleAlert, error) {
if metricName == "" {
return []ruletypes.RuleAlert{}, nil
}
// Get all stored rules for the organization
storedRules, err := r.GetStoredRules(ctx, orgID)
if err != nil {
return nil, err
}
alerts := make([]ruletypes.RuleAlert, 0)
seen := make(map[string]bool)
for _, storedRule := range storedRules {
var ruleData ruletypes.PostableRule
if err := json.Unmarshal([]byte(storedRule.Data), &ruleData); err != nil {
r.logger.WarnContext(ctx, "failed to unmarshal rule data", "rule_id", storedRule.ID.StringValue(), "error", err)
continue
}
// Check conditions: must be a metric-based alert with a valid composite query
if ruleData.AlertType != ruletypes.AlertTypeMetric ||
ruleData.RuleCondition == nil ||
ruleData.RuleCondition.CompositeQuery == nil {
continue
}
// Search for metricName in the Queries array (v5 format only)
// TODO: check whether we need to support v3 query format structs
found := false
for _, queryEnvelope := range ruleData.RuleCondition.CompositeQuery.Queries {
// Check based on query type
switch queryEnvelope.Type {
case qbtypes.QueryTypeBuilder:
// Cast to QueryBuilderQuery[MetricAggregation] for metrics
if spec, ok := queryEnvelope.Spec.(qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]); ok {
// Check if signal is metrics
if spec.Signal == telemetrytypes.SignalMetrics {
for _, agg := range spec.Aggregations {
if agg.MetricName == metricName {
found = true
break
}
}
}
}
case qbtypes.QueryTypePromQL:
if spec, ok := queryEnvelope.Spec.(qbtypes.PromQuery); ok {
result, err := r.queryParser.AnalyzeQueryFilter(ctx, qbtypes.QueryTypePromQL, spec.Query)
if err != nil {
r.logger.WarnContext(ctx, "failed to parse PromQL query", "query", spec.Query, "error", err)
continue
}
if slices.Contains(result.MetricNames, metricName) {
found = true
break
}
}
case qbtypes.QueryTypeClickHouseSQL:
if spec, ok := queryEnvelope.Spec.(qbtypes.ClickHouseQuery); ok {
result, err := r.queryParser.AnalyzeQueryFilter(ctx, qbtypes.QueryTypeClickHouseSQL, spec.Query)
if err != nil {
r.logger.WarnContext(ctx, "failed to parse ClickHouse query", "query", spec.Query, "error", err)
continue
}
if slices.Contains(result.MetricNames, metricName) {
found = true
break
}
}
}
if found {
break
}
}
if found && !seen[storedRule.ID.StringValue()] {
seen[storedRule.ID.StringValue()] = true
alerts = append(alerts, ruletypes.RuleAlert{
AlertName: ruleData.AlertName,
AlertID: storedRule.ID.StringValue(),
})
}
}
return alerts, nil
}
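Note: a hedged usage sketch; the org ID and metric name here are placeholders:

// ruleStore is the ruletypes.RuleStore returned by NewRuleStore above.
alerts, err := ruleStore.GetStoredRulesByMetricName(ctx, "org-id-placeholder", "cpu_usage")
if err != nil {
	return err
}
for _, alert := range alerts {
	// Each RuleAlert pairs the alert name with the owning rule's ID.
	fmt.Println(alert.AlertName, alert.AlertID)
}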

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler"
"github.com/SigNoz/signoz/pkg/ruler/rulestore/sqlrulestore"
"github.com/SigNoz/signoz/pkg/sqlstore"
@@ -22,7 +23,8 @@ func NewFactory(sqlstore sqlstore.SQLStore) factory.ProviderFactory[ruler.Ruler,
}
func New(ctx context.Context, settings factory.ProviderSettings, config ruler.Config, sqlstore sqlstore.SQLStore) (ruler.Ruler, error) {
return &provider{ruleStore: sqlrulestore.NewRuleStore(sqlstore)}, nil
queryParser := queryparser.New(settings)
return &provider{ruleStore: sqlrulestore.NewRuleStore(sqlstore, queryParser, settings)}, nil
}
func (provider *provider) Collect(ctx context.Context, orgID valuer.UUID) (map[string]any, error) {

View File

@@ -18,6 +18,7 @@ import (
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/instrumentation"
"github.com/SigNoz/signoz/pkg/modules/metricsexplorer"
"github.com/SigNoz/signoz/pkg/prometheus"
@@ -39,6 +40,9 @@ import (
// Config defines the entire input configuration of signoz.
type Config struct {
// Global config
Global global.Config `mapstructure:"global"`
// Version config
Version version.Config `mapstructure:"version"`
@@ -141,6 +145,7 @@ func (df *DeprecatedFlags) RegisterFlags(cmd *cobra.Command) {
func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.ResolverConfig, deprecatedFlags DeprecatedFlags) (Config, error) {
configFactories := []factory.ConfigFactory{
global.NewConfigFactory(),
version.NewConfigFactory(),
instrumentation.NewConfigFactory(),
analytics.NewConfigFactory(),

View File

@@ -2,6 +2,8 @@ package signoz
import (
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/global/signozglobal"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/apdex"
"github.com/SigNoz/signoz/pkg/modules/apdex/implapdex"
@@ -34,9 +36,10 @@ type Handlers struct {
SpanPercentile spanpercentile.Handler
Services services.Handler
MetricsExplorer metricsexplorer.Handler
Global global.Handler
}
func NewHandlers(modules Modules, providerSettings factory.ProviderSettings, querier querier.Querier, licensing licensing.Licensing) Handlers {
func NewHandlers(modules Modules, providerSettings factory.ProviderSettings, querier querier.Querier, licensing licensing.Licensing, global global.Global) Handlers {
return Handlers{
SavedView: implsavedview.NewHandler(modules.SavedView),
Apdex: implapdex.NewHandler(modules.Apdex),
@@ -47,5 +50,6 @@ func NewHandlers(modules Modules, providerSettings factory.ProviderSettings, que
Services: implservices.NewHandler(modules.Services),
MetricsExplorer: implmetricsexplorer.NewHandler(modules.MetricsExplorer),
SpanPercentile: implspanpercentile.NewHandler(modules.SpanPercentile),
Global: signozglobal.NewHandler(global),
}
}

View File

@@ -40,7 +40,7 @@ func TestNewHandlers(t *testing.T) {
require.NoError(t, err)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{})
handlers := NewHandlers(modules, providerSettings, nil, nil)
handlers := NewHandlers(modules, providerSettings, nil, nil, nil)
reflectVal := reflect.ValueOf(handlers)
for i := 0; i < reflectVal.NumField(); i++ {

View File

@@ -39,6 +39,7 @@ import (
"github.com/SigNoz/signoz/pkg/modules/user/impluser"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler/rulestore/sqlrulestore"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/tokenizer"
@@ -87,6 +88,7 @@ func NewModules(
orgSetter := implorganization.NewSetter(implorganization.NewStore(sqlstore), alertmanager, quickfilter)
user := impluser.NewModule(impluser.NewStore(sqlstore, providerSettings), tokenizer, emailing, providerSettings, orgSetter, analytics)
userGetter := impluser.NewGetter(impluser.NewStore(sqlstore, providerSettings))
ruleStore := sqlrulestore.NewRuleStore(sqlstore, queryParser, providerSettings)
dashboard := impldashboard.NewModule(sqlstore, providerSettings, analytics, orgGetter, implrole.NewModule(implrole.NewStore(sqlstore), authz, nil), queryParser)
return Modules{
@@ -105,6 +107,6 @@ func NewModules(
Session: implsession.NewModule(providerSettings, authNs, user, userGetter, implauthdomain.NewModule(implauthdomain.NewStore(sqlstore), authNs), tokenizer, orgGetter),
SpanPercentile: implspanpercentile.NewModule(querier, providerSettings),
Services: implservices.NewModule(querier, telemetryStore),
MetricsExplorer: implmetricsexplorer.NewModule(telemetryStore, telemetryMetadataStore, cache, dashboard, providerSettings, config.MetricsExplorer),
MetricsExplorer: implmetricsexplorer.NewModule(telemetryStore, telemetryMetadataStore, cache, ruleStore, dashboard, providerSettings, config.MetricsExplorer),
}
}

View File

@@ -8,6 +8,7 @@ import (
"github.com/SigNoz/signoz/pkg/apiserver"
"github.com/SigNoz/signoz/pkg/apiserver/signozapiserver"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/http/handler"
"github.com/SigNoz/signoz/pkg/instrumentation"
"github.com/SigNoz/signoz/pkg/modules/authdomain"
@@ -36,6 +37,7 @@ func NewOpenAPI(ctx context.Context, instrumentation instrumentation.Instrumenta
struct{ session.Handler }{},
struct{ authdomain.Handler }{},
struct{ preference.Handler }{},
struct{ global.Handler }{},
).New(ctx, instrumentation.ToProviderSettings(), apiserver.Config{})
if err != nil {
return nil, err

View File

@@ -18,6 +18,8 @@ import (
"github.com/SigNoz/signoz/pkg/emailing/noopemailing"
"github.com/SigNoz/signoz/pkg/emailing/smtpemailing"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/global/signozglobal"
"github.com/SigNoz/signoz/pkg/modules/authdomain/implauthdomain"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
@@ -221,7 +223,7 @@ func NewQuerierProviderFactories(telemetryStore telemetrystore.TelemetryStore, p
)
}
func NewAPIServerProviderFactories(orgGetter organization.Getter, authz authz.AuthZ, modules Modules, handlers Handlers) factory.NamedMap[factory.ProviderFactory[apiserver.APIServer, apiserver.Config]] {
func NewAPIServerProviderFactories(orgGetter organization.Getter, authz authz.AuthZ, global global.Global, modules Modules, handlers Handlers) factory.NamedMap[factory.ProviderFactory[apiserver.APIServer, apiserver.Config]] {
return factory.MustNewNamedMap(
signozapiserver.NewFactory(
orgGetter,
@@ -231,6 +233,7 @@ func NewAPIServerProviderFactories(orgGetter organization.Getter, authz authz.Au
implsession.NewHandler(modules.Session),
implauthdomain.NewHandler(modules.AuthDomain),
implpreference.NewHandler(modules.Preference),
signozglobal.NewHandler(global),
),
)
}
@@ -242,3 +245,9 @@ func NewTokenizerProviderFactories(cache cache.Cache, sqlstore sqlstore.SQLStore
jwttokenizer.NewFactory(cache, tokenStore),
)
}
func NewGlobalProviderFactories() factory.NamedMap[factory.ProviderFactory[global.Global, global.Config]] {
return factory.MustNewNamedMap(
signozglobal.NewFactory(),
)
}

View File

@@ -83,6 +83,7 @@ func TestNewProviderFactories(t *testing.T) {
NewAPIServerProviderFactories(
implorganization.NewGetter(implorganization.NewStore(sqlstoretest.New(sqlstore.Config{Provider: "sqlite"}, sqlmock.QueryMatcherEqual)), nil),
nil,
nil,
Modules{},
Handlers{},
)

View File

@@ -345,18 +345,29 @@ func New(
telemetrymetadata.AttributesMetadataLocalTableName,
)
global, err := factory.NewProviderFromNamedMap(
ctx,
providerSettings,
config.Global,
NewGlobalProviderFactories(),
"signoz",
)
if err != nil {
return nil, err
}
// Initialize all modules
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, analytics, querier, telemetrystore, telemetryMetadataStore, authNs, authz, cache, queryParser, config)
// Initialize all handlers for the modules
handlers := NewHandlers(modules, providerSettings, querier, licensing)
handlers := NewHandlers(modules, providerSettings, querier, licensing, global)
// Initialize the API server
apiserver, err := factory.NewProviderFromNamedMap(
ctx,
providerSettings,
config.APIServer,
NewAPIServerProviderFactories(orgGetter, authz, modules, handlers),
NewAPIServerProviderFactories(orgGetter, authz, global, modules, handlers),
"signoz",
)
if err != nil {

pkg/types/global.go
View File

@@ -0,0 +1,15 @@
package types
import "net/url"
type GettableGlobalConfig struct {
ExternalURL string `json:"external_url"`
IngestionURL string `json:"ingestion_url"`
}
func NewGettableGlobalConfig(externalURL, ingestionURL *url.URL) *GettableGlobalConfig {
return &GettableGlobalConfig{
ExternalURL: externalURL.String(),
IngestionURL: ingestionURL.String(),
}
}
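Note: example construction with placeholder URLs:

externalURL, _ := url.Parse("https://signoz.example.com")  // placeholder
ingestionURL, _ := url.Parse("https://ingest.example.com") // placeholder
cfg := NewGettableGlobalConfig(externalURL, ingestionURL)
// cfg marshals to {"external_url":"https://signoz.example.com","ingestion_url":"https://ingest.example.com"}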

View File

@@ -221,6 +221,17 @@ type TreemapResponse struct {
Samples []TreemapEntry `json:"samples"`
}
// MetricAlert represents an alert associated with a metric.
type MetricAlert struct {
AlertName string `json:"alertName"`
AlertID string `json:"alertId"`
}
// MetricAlertsResponse represents the response for metric alerts endpoint.
type MetricAlertsResponse struct {
Alerts []MetricAlert `json:"alerts"`
}
// MetricDashboard represents a dashboard/widget referencing a metric.
type MetricDashboard struct {
DashboardName string `json:"dashboardName"`

View File

@@ -539,6 +539,11 @@ func (f Function) Copy() Function {
return c
}
// Validate validates the Function by calling Validate on its Name
func (f Function) Validate() error {
return f.Name.Validate()
}
type LimitBy struct {
// keys to limit by
Keys []string `json:"keys"`

View File

@@ -73,6 +73,53 @@ func (f *QueryBuilderFormula) UnmarshalJSON(data []byte) error {
return nil
}
// Validate validates the QueryBuilderFormula
func (f QueryBuilderFormula) Validate() error {
// Validate name is not blank
if strings.TrimSpace(f.Name) == "" {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"formula name cannot be blank",
)
}
// Validate expression is not blank
if strings.TrimSpace(f.Expression) == "" {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"formula expression cannot be blank",
)
}
// If having is not null, validate that expression is not blank
if f.Having != nil {
if strings.TrimSpace(f.Having.Expression) == "" {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"having expression cannot be blank when having clause is present",
)
}
}
// Validate functions if present
for i, fn := range f.Functions {
if err := fn.Validate(); err != nil {
fnId := fmt.Sprintf("function #%d", i+1)
if f.Name != "" {
fnId = fmt.Sprintf("function #%d in formula '%s'", i+1, f.Name)
}
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid %s: %s",
fnId,
err.Error(),
)
}
}
return nil
}
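Note: a quick sketch of the trimming behaviour:

valid := QueryBuilderFormula{Name: "F1", Expression: "A + B"}
fmt.Println(valid.Validate()) // <nil>

blank := QueryBuilderFormula{Name: "F1", Expression: "   "}
// Whitespace is trimmed, so this fails with "formula expression cannot be blank".
fmt.Println(blank.Validate())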
// small container to store the query name and index or alias reference
// for a variable in the formula expression
// read below for more details on aggregation references

View File

@@ -5,6 +5,7 @@ import (
"slices"
"strconv"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -33,6 +34,37 @@ var (
FunctionNameFillZero = FunctionName{valuer.NewString("fillZero")}
)
// Validate validates that the FunctionName is one of the known types
func (fn FunctionName) Validate() error {
switch fn {
case FunctionNameCutOffMin,
FunctionNameCutOffMax,
FunctionNameClampMin,
FunctionNameClampMax,
FunctionNameAbsolute,
FunctionNameRunningDiff,
FunctionNameLog2,
FunctionNameLog10,
FunctionNameCumulativeSum,
FunctionNameEWMA3,
FunctionNameEWMA5,
FunctionNameEWMA7,
FunctionNameMedian3,
FunctionNameMedian5,
FunctionNameMedian7,
FunctionNameTimeShift,
FunctionNameAnomaly,
FunctionNameFillZero:
return nil
default:
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid function name: %s",
fn.StringValue(),
)
}
}
// ApplyFunction applies the given function to the result data
func ApplyFunction(fn Function, result *TimeSeries) *TimeSeries {
// Extract the function name and arguments

View File

@@ -1,6 +1,9 @@
package querybuildertypesv5
import (
"strings"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -16,6 +19,15 @@ var (
JoinTypeCross = JoinType{valuer.NewString("cross")}
)
func (j JoinType) Validate() error {
switch j {
case JoinTypeInner, JoinTypeLeft, JoinTypeRight, JoinTypeFull, JoinTypeCross:
return nil
default:
return errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid join type: %s", j.StringValue())
}
}
type QueryRef struct {
Name string `json:"name"`
}
@@ -53,6 +65,25 @@ type QueryBuilderJoin struct {
Functions []Function `json:"functions,omitempty"`
}
func (q *QueryBuilderJoin) Validate() error {
if strings.TrimSpace(q.Name) == "" {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "name is required")
}
if strings.TrimSpace(q.Left.Name) == "" {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "left name is required")
}
if strings.TrimSpace(q.Right.Name) == "" {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "right name is required")
}
if err := q.Type.Validate(); err != nil {
return err
}
if strings.TrimSpace(q.On) == "" {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "on is required")
}
return nil
}
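Note: a sketch of a join that passes validation (field values mirror the tests above):

join := QueryBuilderJoin{
	Name:  "J1",
	Left:  QueryRef{Name: "A"},
	Right: QueryRef{Name: "B"},
	Type:  JoinTypeInner,
	On:    "service_name",
}
fmt.Println(join.Validate()) // <nil>: all required fields set and the join type is known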
// Copy creates a deep copy of QueryBuilderJoin
func (q QueryBuilderJoin) Copy() QueryBuilderJoin {
c := q

View File

@@ -10,8 +10,8 @@ import (
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
)
// getQueryIdentifier returns a friendly identifier for a query based on its type and name/content
func getQueryIdentifier(envelope QueryEnvelope, index int) string {
// GetQueryIdentifier returns a friendly identifier for a query based on its type and name/content
func GetQueryIdentifier(envelope QueryEnvelope, index int) string {
switch envelope.Type {
case QueryTypeBuilder, QueryTypeSubQuery:
switch spec := envelope.Spec.(type) {
@@ -567,7 +567,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
switch spec := envelope.Spec.(type) {
case QueryBuilderQuery[TraceAggregation]:
if err := spec.Validate(r.RequestType); err != nil {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return wrapValidationError(err, queryId, "invalid %s: %s")
}
// Check name uniqueness for non-formula context
@@ -583,7 +583,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
}
case QueryBuilderQuery[LogAggregation]:
if err := spec.Validate(r.RequestType); err != nil {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return wrapValidationError(err, queryId, "invalid %s: %s")
}
// Check name uniqueness for non-formula context
@@ -599,7 +599,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
}
case QueryBuilderQuery[MetricAggregation]:
if err := spec.Validate(r.RequestType); err != nil {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return wrapValidationError(err, queryId, "invalid %s: %s")
}
// Check name uniqueness for non-formula context
@@ -614,7 +614,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
queryNames[spec.Name] = true
}
default:
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"unknown spec type for %s",
@@ -625,7 +625,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
// Formula validation is handled separately
spec, ok := envelope.Spec.(QueryBuilderFormula)
if !ok {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
@@ -633,7 +633,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
)
}
if spec.Expression == "" {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"expression is required for %s",
@@ -644,7 +644,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
// Join validation is handled separately
_, ok := envelope.Spec.(QueryBuilderJoin)
if !ok {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
@@ -654,7 +654,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
case QueryTypeTraceOperator:
spec, ok := envelope.Spec.(QueryBuilderTraceOperator)
if !ok {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
@@ -662,7 +662,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
)
}
if spec.Expression == "" {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"expression is required for %s",
@@ -673,7 +673,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
// PromQL validation is handled separately
spec, ok := envelope.Spec.(PromQuery)
if !ok {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
@@ -681,7 +681,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
)
}
if spec.Query == "" {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"query expression is required for %s",
@@ -692,7 +692,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
// ClickHouse SQL validation is handled separately
spec, ok := envelope.Spec.(ClickHouseQuery)
if !ok {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"invalid spec for %s",
@@ -700,7 +700,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
)
}
if spec.Query == "" {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"query expression is required for %s",
@@ -708,7 +708,7 @@ func (r *QueryRangeRequest) validateCompositeQuery() error {
)
}
default:
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"unknown query type '%s' for %s",
@@ -735,7 +735,7 @@ func (c *CompositeQuery) Validate(requestType RequestType) error {
// Validate each query
for i, envelope := range c.Queries {
if err := validateQueryEnvelope(envelope, requestType); err != nil {
queryId := getQueryIdentifier(envelope, i)
queryId := GetQueryIdentifier(envelope, i)
return wrapValidationError(err, queryId, "invalid %s: %s")
}
}

View File

@@ -8,6 +8,7 @@ import (
"strings"
"time"
signozError "github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
@@ -121,6 +122,111 @@ type RuleCondition struct {
Thresholds *RuleThresholdData `json:"thresholds,omitempty"`
}
func (rc *RuleCondition) UnmarshalJSON(data []byte) error {
type Alias RuleCondition
aux := (*Alias)(rc)
if err := json.Unmarshal(data, aux); err != nil {
return signozError.NewInvalidInputf(signozError.CodeInvalidInput, "failed to parse rule condition json: %v", err)
}
var errs []error
// Validate CompositeQuery - must be non-nil and pass validation
if rc.CompositeQuery == nil {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "composite query is required"))
} else {
if err := rc.CompositeQuery.Validate(); err != nil {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "composite query validation failed: %v", err))
}
}
// Validate AlertOnAbsent + AbsentFor - if AlertOnAbsent is true, AbsentFor must be > 0
if rc.AlertOnAbsent && rc.AbsentFor == 0 {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "absentFor must be greater than 0 when alertOnAbsent is true"))
}
// Validate Seasonality - must be one of the allowed values when provided
if !isValidSeasonality(rc.Seasonality) {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "invalid seasonality: %s", rc.Seasonality))
}
// Validate SelectedQueryName - must match one of the query names from CompositeQuery
if rc.SelectedQuery != "" && rc.CompositeQuery != nil {
queryNames := getAllQueryNames(rc.CompositeQuery)
if _, exists := queryNames[rc.SelectedQuery]; !exists {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "selected query name '%s' does not match any query in composite query", rc.SelectedQuery))
}
}
// Validate RequireMinPoints + RequiredNumPoints - if RequireMinPoints is true, RequiredNumPoints must be > 0
if rc.RequireMinPoints && rc.RequiredNumPoints <= 0 {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "requiredNumPoints must be greater than 0 when requireMinPoints is true"))
}
if len(errs) > 0 {
return signozError.Join(errs...)
}
return nil
}
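Note: a sketch of the joined validation performed during unmarshal; the JSON field names are assumptions inferred from the error messages above, and the minimal composite query may trip additional joined errors:

raw := []byte(`{
	"compositeQuery": {
		"queries": [{"type": "promql", "spec": {"name": "A", "query": "up"}}]
	},
	"alertOnAbsent": true
}`)
var rc RuleCondition
// absentFor defaults to 0 while alertOnAbsent is true, so unmarshalling fails.
err := json.Unmarshal(raw, &rc)
fmt.Println(err)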
// getAllQueryNames extracts all query names from CompositeQuery across all query types
// Returns a map of query names for quick lookup
func getAllQueryNames(compositeQuery *v3.CompositeQuery) map[string]struct{} {
queryNames := make(map[string]struct{})
// Extract names from Queries (v5 envelopes)
if compositeQuery != nil && compositeQuery.Queries != nil {
for _, query := range compositeQuery.Queries {
switch spec := query.Spec.(type) {
case qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]:
if spec.Name != "" {
queryNames[spec.Name] = struct{}{}
}
case qbtypes.QueryBuilderQuery[qbtypes.LogAggregation]:
if spec.Name != "" {
queryNames[spec.Name] = struct{}{}
}
case qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]:
if spec.Name != "" {
queryNames[spec.Name] = struct{}{}
}
case qbtypes.QueryBuilderFormula:
if spec.Name != "" {
queryNames[spec.Name] = struct{}{}
}
case qbtypes.QueryBuilderTraceOperator:
if spec.Name != "" {
queryNames[spec.Name] = struct{}{}
}
case qbtypes.PromQuery:
if spec.Name != "" {
queryNames[spec.Name] = struct{}{}
}
case qbtypes.ClickHouseQuery:
if spec.Name != "" {
queryNames[spec.Name] = struct{}{}
}
}
}
}
return queryNames
}
// isValidSeasonality validates that Seasonality is one of the allowed values
func isValidSeasonality(seasonality string) bool {
if seasonality == "" {
return true // empty seasonality is allowed (optional field)
}
switch seasonality {
case "hourly", "daily", "weekly":
return true
default:
return false
}
}
func (rc *RuleCondition) GetSelectedQueryName() string {
if rc != nil {
if rc.SelectedQuery != "" {

View File

@@ -70,6 +70,8 @@ type NotificationSettings struct {
GroupBy []string `json:"groupBy,omitempty"`
Renotify Renotify `json:"renotify,omitempty"`
UsePolicy bool `json:"usePolicy,omitempty"`
// NewGroupEvalDelay is the grace period during which new series are excluded from alert evaluation
NewGroupEvalDelay *Duration `json:"newGroupEvalDelay,omitempty"`
}
type Renotify struct {
@@ -302,6 +304,39 @@ func isValidLabelValue(v string) bool {
return utf8.ValidString(v)
}
// isValidAlertType validates that the AlertType is one of the allowed enum values
func isValidAlertType(alertType AlertType) bool {
switch alertType {
case AlertTypeMetric, AlertTypeTraces, AlertTypeLogs, AlertTypeExceptions:
return true
default:
return false
}
}
// isValidRuleType validates that the RuleType is one of the allowed enum values
func isValidRuleType(ruleType RuleType) bool {
switch ruleType {
case RuleTypeThreshold, RuleTypeProm, RuleTypeAnomaly:
return true
default:
return false
}
}
// isValidVersion validates that the version is one of the supported versions
func isValidVersion(version string) bool {
if version == "" {
return true // empty version is allowed (optional field)
}
switch version {
case "v3", "v4", "v5":
return true
default:
return false
}
}
func isAllQueriesDisabled(compositeQuery *v3.CompositeQuery) bool {
if compositeQuery == nil {
return false
@@ -357,6 +392,26 @@ func (r *PostableRule) validate() error {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "all queries are disabled in rule condition"))
}
// Validate AlertName - required field
if r.AlertName == "" {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "alert name is required"))
}
// Validate AlertType - must be one of the allowed enum values
if !isValidAlertType(r.AlertType) {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "invalid alert type: %s, must be one of: METRIC_BASED_ALERT, TRACES_BASED_ALERT, LOGS_BASED_ALERT, EXCEPTIONS_BASED_ALERT", r.AlertType))
}
// Validate RuleType - must be one of the allowed enum values
if !isValidRuleType(r.RuleType) {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "invalid rule type: %s, must be one of: threshold_rule, promql_rule, anomaly_rule", r.RuleType))
}
// Validate Version - must be one of the supported versions if provided
if !isValidVersion(r.Version) {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "invalid version: %s, must be one of: v3, v4, v5", r.Version))
}
for k, v := range r.Labels {
if !isValidLabelName(k) {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "invalid label name: %s", k))

View File

@@ -111,9 +111,12 @@ func TestParseIntoRule(t *testing.T) {
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"disabled": false,
"aggregateAttribute": {
"key": "test_metric"
@@ -149,12 +152,17 @@ func TestParseIntoRule(t *testing.T) {
initRule: PostableRule{},
content: []byte(`{
"alert": "DefaultsRule",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "threshold_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"disabled": false,
"aggregateAttribute": {
"key": "test_metric"
@@ -187,9 +195,11 @@ func TestParseIntoRule(t *testing.T) {
initRule: PostableRule{},
content: []byte(`{
"alert": "PromQLRule",
"alertType": "METRIC_BASED_ALERT",
"condition": {
"compositeQuery": {
"queryType": "promql",
"panelType": "graph",
"promQueries": {
"A": {
"query": "rate(http_requests_total[5m])",
@@ -255,12 +265,17 @@ func TestParseIntoRuleSchemaVersioning(t *testing.T) {
initRule: PostableRule{},
content: []byte(`{
"alert": "SeverityLabelTest",
"alertType": "METRIC_BASED_ALERT",
"schemaVersion": "v1",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"aggregateAttribute": {
"key": "cpu_usage"
}
@@ -343,12 +358,17 @@ func TestParseIntoRuleSchemaVersioning(t *testing.T) {
initRule: PostableRule{},
content: []byte(`{
"alert": "NoLabelsTest",
"alertType": "METRIC_BASED_ALERT",
"schemaVersion": "v1",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"aggregateAttribute": {
"key": "memory_usage"
}
@@ -383,12 +403,17 @@ func TestParseIntoRuleSchemaVersioning(t *testing.T) {
initRule: PostableRule{},
content: []byte(`{
"alert": "OverwriteTest",
"alertType": "METRIC_BASED_ALERT",
"schemaVersion": "v1",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"aggregateAttribute": {
"key": "cpu_usage"
}
@@ -473,12 +498,17 @@ func TestParseIntoRuleSchemaVersioning(t *testing.T) {
initRule: PostableRule{},
content: []byte(`{
"alert": "V2Test",
"alertType": "METRIC_BASED_ALERT",
"schemaVersion": "v2",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"aggregateAttribute": {
"key": "test_metric"
}
@@ -517,11 +547,16 @@ func TestParseIntoRuleSchemaVersioning(t *testing.T) {
initRule: PostableRule{},
content: []byte(`{
"alert": "DefaultSchemaTest",
"alertType": "METRIC_BASED_ALERT",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"aggregateAttribute": {
"key": "test_metric"
}
@@ -569,12 +604,16 @@ func TestParseIntoRuleSchemaVersioning(t *testing.T) {
func TestParseIntoRuleThresholdGeneration(t *testing.T) {
content := []byte(`{
"alert": "TestThresholds",
"alertType": "METRIC_BASED_ALERT",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"disabled": false,
"aggregateAttribute": {
"key": "response_time"
@@ -638,14 +677,18 @@ func TestParseIntoRuleMultipleThresholds(t *testing.T) {
content := []byte(`{
"schemaVersion": "v2",
"alert": "MultiThresholdAlert",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "threshold_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"unit": "%",
"builderQueries": {
"A": {
"queryName": "A",
"expression": "A",
"dataSource": "metrics",
"disabled": false,
"aggregateAttribute": {
"key": "cpu_usage"
@@ -731,10 +774,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "anomaly rule with ValueIsBelow - should alert",
ruleJSON: []byte(`{
"alert": "AnomalyBelowTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "anomaly_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -765,10 +810,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "anomaly rule with ValueIsBelow; should not alert",
ruleJSON: []byte(`{
"alert": "AnomalyBelowTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "anomaly_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -798,10 +845,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "anomaly rule with ValueIsAbove; should alert",
ruleJSON: []byte(`{
"alert": "AnomalyAboveTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "anomaly_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -832,10 +881,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "anomaly rule with ValueIsAbove; should not alert",
ruleJSON: []byte(`{
"alert": "AnomalyAboveTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "anomaly_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -865,10 +916,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "anomaly rule with ValueIsBelow and AllTheTimes; should alert",
ruleJSON: []byte(`{
"alert": "AnomalyBelowAllTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "anomaly_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -900,10 +953,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "anomaly rule with ValueIsBelow and AllTheTimes; should not alert",
ruleJSON: []byte(`{
"alert": "AnomalyBelowAllTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "anomaly_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -934,10 +989,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "anomaly rule with ValueOutsideBounds; should alert",
ruleJSON: []byte(`{
"alert": "AnomalyOutOfBoundsTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "anomaly_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -968,10 +1025,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "non-anomaly threshold rule with ValueIsBelow; should alert",
ruleJSON: []byte(`{
"alert": "ThresholdTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "threshold_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {
@@ -1002,10 +1061,12 @@ func TestAnomalyNegationEval(t *testing.T) {
name: "non-anomaly rule with ValueIsBelow - should not alert",
ruleJSON: []byte(`{
"alert": "ThresholdTest",
"alertType": "METRIC_BASED_ALERT",
"ruleType": "threshold_rule",
"condition": {
"compositeQuery": {
"queryType": "builder",
"panelType": "graph",
"queries": [{
"type": "builder_query",
"spec": {

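Taken together, these hunks add the previously omitted alertType and ruleType fields (plus dataSource on builder queries) to every test payload, in line with the new struct-based validation of PostableRule. A minimal sketch of such a payload, assuming the fields shown in these tests are the ones the validators check; this is not confirmed to be the complete required set:
// Hypothetical minimal payload mirroring the fields these tests now set;
// whether this exact field set satisfies every new validator is an
// assumption based on the hunks above.
content := []byte(`{
	"schemaVersion": "v2",
	"alert": "MinimalRule",
	"alertType": "METRIC_BASED_ALERT",
	"ruleType": "threshold_rule",
	"condition": {
		"compositeQuery": {
			"queryType": "builder",
			"panelType": "graph",
			"builderQueries": {
				"A": {
					"queryName": "A",
					"expression": "A",
					"dataSource": "metrics",
					"aggregateAttribute": {"key": "cpu_usage"}
				}
			}
		}
	}
}`)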
View File

@@ -47,10 +47,17 @@ func NewStatsFromRules(rules []*Rule) map[string]any {
return stats
}
// RuleAlert represents an alert associated with a rule, used when filtering by metric name
type RuleAlert struct {
AlertName string
AlertID string
}
type RuleStore interface {
CreateRule(context.Context, *Rule, func(context.Context, valuer.UUID) error) (valuer.UUID, error)
EditRule(context.Context, *Rule, func(context.Context) error) error
DeleteRule(context.Context, valuer.UUID, func(context.Context) error) error
GetStoredRules(context.Context, string) ([]*Rule, error)
GetStoredRule(context.Context, valuer.UUID) (*Rule, error)
GetStoredRulesByMetricName(context.Context, string, string) ([]RuleAlert, error)
}
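The new GetStoredRulesByMetricName method is the only consumer of RuleAlert. A sketch of a caller, assuming imports of context and fmt, and assuming the two string arguments are the org ID and the metric name (the diff does not name them):
// Hypothetical helper; store is any RuleStore implementation, and the
// argument meanings (orgID, metric) are assumed, not confirmed here.
func alertsForMetric(ctx context.Context, store RuleStore, orgID, metric string) error {
	alerts, err := store.GetStoredRulesByMetricName(ctx, orgID, metric)
	if err != nil {
		return err
	}
	for _, a := range alerts {
		fmt.Printf("alert %q (id=%s) references metric %s\n", a.AlertName, a.AlertID, metric)
	}
	return nil
}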

View File

@@ -33,6 +33,12 @@ def pytest_addoption(parser: pytest.Parser):
default=False,
help="Teardown environment. Run pytest --basetemp=./tmp/ -vv --teardown src/bootstrap/setup::test_teardown to teardown your local dev environment.",
)
parser.addoption(
"--with-web",
action="store_true",
default=False,
help="Build and run with web. Run pytest --basetemp=./tmp/ -vv --with-web src/bootstrap/setup::test_setup to setup your local dev environment with web.",
)
parser.addoption(
"--sqlstore-provider",
action="store",

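The signoz fixture later in this diff reads the new flag with pytestconfig.getoption; a standalone sketch of the same pattern, where the fixture name with_web is hypothetical and only the option itself comes from this diff:
# Sketch: exposing the new flag as a session fixture. The fixture name
# is hypothetical; only the --with-web option comes from this diff.
import pytest

@pytest.fixture(scope="session")
def with_web(pytestconfig: pytest.Config) -> bool:
    # False unless pytest is invoked with --with-web.
    return pytestconfig.getoption("--with-web")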
View File

@@ -1,6 +1,5 @@
from typing import Tuple
from http import HTTPStatus
from typing import Callable, List
from typing import Callable, List, Tuple
import pytest
import requests
@@ -134,9 +133,9 @@ def get_tokens(signoz: types.SigNoz) -> Callable[[str, str], Tuple[str, str]]:
)
assert response.status_code == HTTPStatus.OK
accessToken = response.json()["data"]["accessToken"]
refreshToken = response.json()["data"]["refreshToken"]
return accessToken, refreshToken
access_token = response.json()["data"]["accessToken"]
refresh_token = response.json()["data"]["refreshToken"]
return access_token, refresh_token
return _get_tokens

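Callers unpack the renamed snake_case tuple; a sketch of a test using the fixture, with placeholder credentials that do not come from this diff:
# Hypothetical usage of the get_tokens fixture; the credentials are
# placeholders, not values from this diff.
def test_authenticated_call(get_tokens) -> None:
    access_token, refresh_token = get_tokens("admin@integration.test", "password")
    assert access_token and refresh_token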
View File

@@ -1,4 +1,4 @@
from typing import Callable, Dict, Any
from typing import Any, Callable, Dict
from urllib.parse import urljoin
from xml.etree import ElementTree
@@ -71,7 +71,9 @@ def create_saml_client(
"saml_signature_canonicalization_method": "http://www.w3.org/2001/10/xml-exc-c14n#",
"saml.onetimeuse.condition": "false",
"saml.server.signature.keyinfo.xmlSigKeyInfoKeyNameTransformer": "NONE",
"saml_assertion_consumer_url_post": urljoin(f"{signoz.self.host_configs['8080'].base()}", callback_path)
"saml_assertion_consumer_url_post": urljoin(
f"{signoz.self.host_configs['8080'].base()}", callback_path
),
},
"authenticationFlowBindingOverrides": {},
"fullScopeAllowed": True,
@@ -124,9 +126,11 @@ def create_saml_client(
@pytest.fixture(name="update_saml_client_attributes", scope="function")
def update_saml_client_attributes(
idp: types.TestContainerIDP
idp: types.TestContainerIDP,
) -> Callable[[str, Dict[str, Any]], None]:
def _update_saml_client_attributes(client_id: str, attributes: Dict[str, Any]) -> None:
def _update_saml_client_attributes(
client_id: str, attributes: Dict[str, Any]
) -> None:
client = KeycloakAdmin(
server_url=idp.container.host_configs["6060"].base(),
username=IDP_ROOT_USERNAME,

View File

@@ -429,7 +429,11 @@ def ttl_legacy_logs_v2_table_setup(request, signoz: types.SigNoz):
).result_rows
assert result is not None
# Add cleanup to restore original table
request.addfinalizer(lambda: signoz.telemetrystore.conn.query("RENAME TABLE signoz_logs.logs_v2_backup TO signoz_logs.logs_v2;"))
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
"RENAME TABLE signoz_logs.logs_v2_backup TO signoz_logs.logs_v2;"
)
)
# Create new test tables
result = signoz.telemetrystore.conn.query(
@@ -445,10 +449,15 @@ def ttl_legacy_logs_v2_table_setup(request, signoz: types.SigNoz):
assert result is not None
# Add cleanup to drop test table
request.addfinalizer(lambda: signoz.telemetrystore.conn.query("DROP TABLE IF EXISTS signoz_logs.logs_v2;"))
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
"DROP TABLE IF EXISTS signoz_logs.logs_v2;"
)
)
yield # Test runs here
@pytest.fixture(name="ttl_legacy_logs_v2_resource_table_setup", scope="function")
def ttl_legacy_logs_v2_resource_table_setup(request, signoz: types.SigNoz):
"""
@@ -463,7 +472,11 @@ def ttl_legacy_logs_v2_resource_table_setup(request, signoz: types.SigNoz):
).result_rows
assert result is not None
# Add cleanup to restore original table
request.addfinalizer(lambda: signoz.telemetrystore.conn.query("RENAME TABLE signoz_logs.logs_v2_resource_backup TO signoz_logs.logs_v2_resource;"))
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
"RENAME TABLE signoz_logs.logs_v2_resource_backup TO signoz_logs.logs_v2_resource;"
)
)
# Create new test tables
result = signoz.telemetrystore.conn.query(
@@ -478,6 +491,10 @@ def ttl_legacy_logs_v2_resource_table_setup(request, signoz: types.SigNoz):
assert result is not None
# Add cleanup to drop test table
request.addfinalizer(lambda: signoz.telemetrystore.conn.query("DROP TABLE IF EXISTS signoz_logs.logs_v2_resource;"))
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
"DROP TABLE IF EXISTS signoz_logs.logs_v2_resource;"
)
)
yield # Test runs here
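Note that request.addfinalizer runs finalizers in LIFO order, so the DROP of the test table executes before the RENAME that restores the backup — the order this fixture needs. A minimal standalone sketch of that ordering, separate from the fixture above:
# Minimal sketch of addfinalizer LIFO ordering; not the fixture above.
# During teardown, "drop" prints before "restore".
import pytest

@pytest.fixture
def ordered_cleanup(request: pytest.FixtureRequest):
    request.addfinalizer(lambda: print("restore"))  # registered first, runs last
    request.addfinalizer(lambda: print("drop"))  # registered last, runs first
    yield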

View File

@@ -1,7 +1,7 @@
from os import path
import platform
import time
from http import HTTPStatus
from os import path
import docker
import docker.errors
@@ -34,14 +34,21 @@ def signoz( # pylint: disable=too-many-arguments,too-many-positional-arguments
# Run the migrations for clickhouse
request.getfixturevalue("migrator")
# Get the with-web flag
with_web = pytestconfig.getoption("--with-web")
arch = platform.machine()
if arch == "x86_64":
arch = "amd64"
# Build the image
dockerfile_path = "cmd/enterprise/Dockerfile.integration"
if with_web:
dockerfile_path = "cmd/enterprise/Dockerfile.with-web.integration"
self = DockerImage(
path="../../",
dockerfile_path="cmd/enterprise/Dockerfile.integration",
dockerfile_path=dockerfile_path,
tag="signoz:integration",
buildargs={
"TARGETARCH": arch,
@@ -53,7 +60,7 @@ def signoz( # pylint: disable=too-many-arguments,too-many-positional-arguments
env = (
{
"SIGNOZ_WEB_ENABLED": True,
"SIGNOZ_WEB_ENABLED": False,
"SIGNOZ_WEB_DIRECTORY": "/root/web",
"SIGNOZ_INSTRUMENTATION_LOGS_LEVEL": "debug",
"SIGNOZ_PROMETHEUS_ACTIVE__QUERY__TRACKER_ENABLED": False,
@@ -63,6 +70,9 @@ def signoz( # pylint: disable=too-many-arguments,too-many-positional-arguments
| clickhouse.env
)
if with_web:
env["SIGNOZ_WEB_ENABLED"] = True
container = DockerContainer("signoz:integration")
for k, v in env.items():
container.with_env(k, v)
@@ -71,7 +81,7 @@ def signoz( # pylint: disable=too-many-arguments,too-many-positional-arguments
provider = request.config.getoption("--sqlstore-provider")
if provider == "sqlite":
dir_path = path.dirname(sqlstore.env["SIGNOZ_SQLSTORE_SQLITE_PATH"])
container.with_volume_mapping(
dir_path,
dir_path,

View File

@@ -27,7 +27,6 @@ def sqlite(
with engine.connect() as conn:
result = conn.execute(sql.text("SELECT 1"))
assert result.fetchone()[0] == 1
return types.TestContainerSQL(
container=types.TestContainerDocker(
@@ -53,7 +52,6 @@ def sqlite(
result = conn.execute(sql.text("SELECT 1"))
assert result.fetchone()[0] == 1
return types.TestContainerSQL(
container=types.TestContainerDocker(
id="",

View File

@@ -1,5 +1,5 @@
from http import HTTPStatus
from typing import Callable, List, Dict, Any
from typing import Any, Callable, Dict, List
import requests
from selenium import webdriver
@@ -93,10 +93,11 @@ def test_create_auth_domain(
f"{signoz.self.host_configs['8080'].address}:{signoz.self.host_configs['8080'].port}",
{
"saml_idp_initiated_sso_url_name": "idp-initiated-saml-test",
"saml_idp_initiated_sso_relay_state": relay_state_url
}
"saml_idp_initiated_sso_relay_state": relay_state_url,
},
)
def test_saml_authn(
signoz: SigNoz,
idp: TestContainerIDP, # pylint: disable=unused-argument
@@ -163,7 +164,10 @@ def test_idp_initiated_saml_authn(
assert len(session_context["orgs"]) == 1
assert len(session_context["orgs"][0]["authNSupport"]["callback"]) == 1
idp_initiated_login_url = idp.container.host_configs["6060"].base() + "/realms/master/protocol/saml/clients/idp-initiated-saml-test"
idp_initiated_login_url = (
idp.container.host_configs["6060"].base()
+ "/realms/master/protocol/saml/clients/idp-initiated-saml-test"
)
driver.get(idp_initiated_login_url)
idp_login("viewer.idp.initiated@saml.integration.test", "password")

View File

@@ -146,16 +146,16 @@ def test_generate_connection_params(
data = response_data["data"]
# ingestion_key is created by the mocked gateway and should match
assert data["ingestion_key"] == "test-ingestion-key-123456", (
"ingestion_key should match the mocked ingestion key"
)
assert (
data["ingestion_key"] == "test-ingestion-key-123456"
), "ingestion_key should match the mocked ingestion key"
# ingestion_url should be https://ingest.test.signoz.cloud based on the mocked deployment DNS
assert data["ingestion_url"] == "https://ingest.test.signoz.cloud", (
"ingestion_url should be https://ingest.test.signoz.cloud"
)
assert (
data["ingestion_url"] == "https://ingest.test.signoz.cloud"
), "ingestion_url should be https://ingest.test.signoz.cloud"
# signoz_api_url should be https://test-deployment.test.signoz.cloud based on the mocked deployment name and DNS
assert data["signoz_api_url"] == "https://test-deployment.test.signoz.cloud", (
"signoz_api_url should be https://test-deployment.test.signoz.cloud"
)
assert (
data["signoz_api_url"] == "https://test-deployment.test.signoz.cloud"
), "signoz_api_url should be https://test-deployment.test.signoz.cloud"

Some files were not shown because too many files have changed in this diff.