Compare commits

...

27 Commits

Author SHA1 Message Date
Abhi Kumar
17dec71695 chore: updated styles for dashboard creation modal 2025-11-20 15:40:46 +05:30
Nikhil Mantri
c7c2d2a7ef fix: Make PromQL queries work with dynamic variable ALL section (#9607) 2025-11-19 14:20:59 +05:30
primus-bot[bot]
0cfb809605 chore(release): bump to v0.102.0 (#9613)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-11-19 12:22:05 +05:30
Abhi kumar
6a378ed7b4 fix: added fix for issue-3226, where the query was getting malformed (#9603)
* fix: added fix for issue-3226, where the query was getting malformed

* chore: added test + fixed previous tests
2025-11-19 11:57:53 +05:30
Shaheer Kochai
8e41847523 fix: minor improvements to trace explorer, exceptions, and trace funnels (#9578)
* chore: hide span selector in exceptions page

* refactor: remove unnecessary order by functionality and related components from TracesView

* chore: remove unnecessary icon from QB in trace funnels step

* chore: improve result table styles in trace funnels

* chore: fix formatting

* Revert "refactor: remove unnecessary order by functionality and related components from TracesView"

This reverts commit 724e9f67af.
2025-11-19 05:48:22 +00:00
Karan Balani
779df62093 feat: tokenizerstore package & role checks in JWT Tokenizer (#9594) 2025-11-19 09:11:02 +05:30
Shaheer Kochai
3763794531 refactor: external apis query range v5 migration (#9550)
* fix: fix the issue of aggregation incorrectly falling back to count

* refactor: add support for v5 queries in endPointDetailsDataQueries of EndPointDetails

* chore: add common utility functions

* chore: add convertFiltersWithUrlHandling helper

* fix: remove the aggregateOperator fallback logic changes

* refactor: migrate external APIs -> endpoint metrics query range request to v5  (#9494)

* refactor: migrate endpoint metrics api to v5

* fix: overall improvements

* fix: add url checks

* chore: remove unnecessary tests

* chore: remove old test

* chore: aggregateAttribute to aggregations

* refactor: migrate status bar charts to v5 (#9548)

* refactor: migrate status bar charts to v5

* chore: add tests

* chore: remove unnecessary tests

* chore: aggregateAttribute to aggregations

* fix: fix the failing test

* refactor: migrate external APIs -> domain metrics query range request to v5 (#9484)

* refactor: migrate domain metrics query_range to v5

* fix: overall bugfixes

* chore: fix the failing tests

* refactor: migrate dependent services to query range v5 (#9549)

* refactor: migrate dependent services to query_range v5

* chore: remove unnecessary tests

* chore: aggregateAttribute to aggregations

* refactor: migrate rate over time and latency charts query to v5 (#9544)

* fix: fix the issue of aggregation incorrectly falling back to count

* refactor: migrate rate over time and latency charts query to v5

* chore: write tests for rate over time and latency over time charts

* chore: overall improvements to the test

* fix: add url checks

* chore: remove the unnecessary tests

* chore: aggregateAttribute to aggregations

* fix: fix the failing tests

* chore: remove unnecessary test

* refactor: migrate "all endpoints" query range request to v5 (#9557)

* feat: add support for hiding columns in GridTableComponent

* refactor: migrate all endpoints section query payload to v5

* chore: aggregateAttribute to aggregations

* test: add V5 migration tests for all endpoints tab

* fix: add http.url exists or url.full exists to ensure we don't get null data

* fix: fallback to url.full while displaying endpoint value

* fix: update renderColumnCell type to accept variable arguments

* fix: remove type casting for renderColumnCell in getAllEndpointsWidgetData

* refactor: migrate external APIs -> domain dropdown query range request to v5 (#9495)

* refactor: migrate domain dropdown request to query_range v5

* fix: add utility to add http.url or url.full to the filter expression

* chore: aggregateAttribute to aggregations

* fix: add http.url exists or url.full exists to ensure we don't get null data

* fix: fallback to url.full if http.url doesn't exist

* fix: fix the failing test

* test: add V5 migration tests for endpoint dropdown query

* fix: fix the failing ts check

* fix: fix the failing tests

* fix: fix the failing tests
2025-11-18 16:50:47 +05:30
Vikrant Gupta
e9fa68e1f3 feat(authz): add stats reporting for public dashboards (#9605)
* feat(authz): add stats reporting for public dashboards

* feat(authz): add stats reporting for public dashboards

* feat(authz): add stats reporting for public dashboards
2025-11-18 15:52:46 +05:30
Vikrant Gupta
7bd3e1c453 feat(authz): publicly shareable dashboards (#9584)
* feat(authz): base setup for public shareable dashboards

* feat(authz): add support for public masking

* feat(authz): added public path for gettable public dashboard

* feat(authz): checkpoint-1 for widget query to query range conversion

* feat(authz): checkpoint-2 for widget query to query range conversion

* feat(authz): fix widget index issue

* feat(authz): better handling for dashboard json and query

* feat(authz): use the default time range if timerange is disabled

* feat(authz): use the default time range if timerange is disabled

* feat(authz): add authz changes

* feat(authz): integrate role with dashboard anonymous access

* feat(authz): integrate the new middleware

* feat(authz): integrate the new middleware

* feat(authz): add back licensing

* feat(authz): renaming selector callback

* feat(authz): self review

* feat(authz): self review

* feat(authz): change to promql
2025-11-18 00:21:46 +05:30
Amlan Kumar Nandy
a48455b2b3 chore: fix tmp related vulnerability (#9582) 2025-11-17 13:31:40 +00:00
Karan Balani
fbb66f14ba chore: improve otel demo app setup with docker based signoz (#9567)
## 📄 Summary

Minor improvements to the local setup guide doc.
2025-11-14 22:11:22 +05:30
Karan Balani
54b67d9cfd feat: add bounded cache for opaque tokenizer only for last observed at cache (#9581)
Move the cache for the `lastObservedAt` stat away from BigCache, which is unbounded, to Ristretto, a bounded in-memory cache (https://github.com/dgraph-io/ristretto).

This PR is the first step towards moving away from unbounded caches in the system; more PRs will follow.
2025-11-14 21:23:57 +05:30
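The benefit of a bounded cache can be illustrated with a minimal LRU sketch. This is an illustration of the idea only, not the PR's implementation: the PR uses Ristretto, whose admission/eviction policy and API differ from a plain LRU.

```go
package main

import (
	"container/list"
	"fmt"
)

// boundedCache is a tiny LRU: once it holds maxEntries items,
// inserting a new key evicts the least-recently-used one, so
// memory stays bounded (unlike an unbounded map or BigCache setup).
type boundedCache struct {
	maxEntries int
	ll         *list.List
	items      map[string]*list.Element
}

type entry struct {
	key   string
	value int64 // e.g. a lastObservedAt unix timestamp
}

func newBoundedCache(maxEntries int) *boundedCache {
	return &boundedCache{
		maxEntries: maxEntries,
		ll:         list.New(),
		items:      make(map[string]*list.Element),
	}
}

func (c *boundedCache) Set(key string, value int64) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*entry).value = value
		return
	}
	el := c.ll.PushFront(&entry{key: key, value: value})
	c.items[key] = el
	if c.ll.Len() > c.maxEntries {
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func (c *boundedCache) Get(key string) (int64, bool) {
	el, ok := c.items[key]
	if !ok {
		return 0, false
	}
	c.ll.MoveToFront(el)
	return el.Value.(*entry).value, true
}

func main() {
	c := newBoundedCache(2)
	c.Set("token-a", 100)
	c.Set("token-b", 200)
	c.Set("token-c", 300) // evicts token-a, the least recently used
	_, ok := c.Get("token-a")
	fmt.Println(ok) // false: evicted, memory stays at 2 entries
}
```

However large the token population grows, the cache never holds more than `maxEntries` items, which is the property the PR is after.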
Abhishek Kumar Singh
1a193015a7 refactor: PostableRule struct (#9448)
* refactor: PostableRule struct

- made validation part of `UnmarshalJSON`
- removed validation from `processRuleDefaults` and updated signature to remove error from return type

* refactor: updated error message for missing composite query

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-11-13 19:45:19 +00:00
Vikrant Gupta
245179cbf7 feat(authz): openfga sql migration (#9580)
* feat(authz): openfga sql migration

* feat(authz): formatting and naming

* feat(authz): formatting and naming

* feat(authz): extract function for store and model id

* feat(authz): reorder the provider
2025-11-14 00:43:02 +05:30
Yunus M
dbb6b333c8 feat: reset error boundary on pathname change (#9570) 2025-11-13 15:58:33 +05:30
Shaheer Kochai
56f8e53d88 refactor: migrate status code table to v5 (#9546)
* fix: fix the issue of aggregation incorrectly falling back to count

* refactor: add support for v5 queries in endPointDetailsDataQueries of EndPointDetails

* chore: add common utility functions

* refactor: migrate status code table to v5

* fix: status code table formatting

* chore: add tests for status code table v5 migration

* chore: add convertFiltersWithUrlHandling helper

* chore: remove unnecessary tests

* chore: aggregateAttribute to aggregations

* fix: remove the aggregateOperator fallback logic changes

* fix: fix the failing test

* fix: add response_status_code exists to the status code table query
2025-11-12 13:55:00 +00:00
Aditya Singh
2f4e371dac Fix: Preserve query on navigation b/w views | Logs Explorer code cleanup (#9496)
* feat: synchronise panel type state

* feat: refactor explorer queries

* feat: use explorer util queries

* feat: minor refactor

* feat: update test cases

* feat: remove code

* feat: minor refactor

* feat: minor refactor

* feat: update tests

* feat: update list query logic to only support first staged query

* feat: fix export query and saved views change

* feat: test fix

* feat: export link fix

---------

Co-authored-by: Nityananda Gohain <nityanandagohain@gmail.com>
2025-11-12 17:25:24 +05:30
Nikhil Mantri
db75ec56bc chore: update Services to use QBV5 (#9287) 2025-11-12 14:02:07 +05:30
primus-bot[bot]
02755a6527 chore(release): bump to v0.101.0 (#9566)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
Co-authored-by: Priyanshu Shrivastava <priyanshu@signoz.io>
2025-11-12 12:39:55 +05:30
Srikanth Chekuri
9f089e0784 fix(pagerduty): add severity for labels (#9538) 2025-11-12 05:51:26 +05:30
Srikanth Chekuri
fb9a7ad3cd chore: update integration dashboard json to v5 (#9534) 2025-11-12 00:09:15 +05:30
Aditya Singh
ad631d70b6 fix: add key to allow side bar nav on error thrown (#9560) 2025-11-11 17:21:06 +05:30
Vikrant Gupta
c44efeab33 fix(sessions): do not use axios base instance (#9556)
* fix(sessions): do not use axios base instance

* fix(sessions): fix test cases

* fix(sessions): add trailing slashes
2025-11-11 08:42:16 +00:00
Tushar Vats
e9743fa7ac feat: bump cloud agent version to 0.0.6 (#9298) 2025-11-11 13:58:34 +05:30
Amlan Kumar Nandy
b7ece08d3e fix: aggregation options for metric in alert condition do not get updated (#9485)
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-11-11 13:47:58 +07:00
Pranjul Kalsi
e5f4f5cc72 fix: preserve SMTPRequireTLS during default merge (#8478) (#9418)
The issue was with how Mergo treats zero values: Mergo only fills **zero-value** fields in the destination.
Since `false` is the zero value for `bool`, it always gets **replaced** by `true` from the source. Using pointers doesn't help: `Merge` dereferences them and still treats `false` as zero.
2025-11-11 01:28:16 +05:30
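The pitfall can be reproduced with a minimal fill-only-zero merge that mimics (but does not use) Mergo's default behavior; the `Config` struct here is a simplified stand-in for the real settings type.

```go
package main

import "fmt"

// Config mirrors the shape of the problem: a bool whose false
// value is indistinguishable from "unset" under zero-value merging.
type Config struct {
	SMTPRequireTLS bool
	SMTPHost       string
}

// fillZero copies src fields into dst only where dst holds the
// zero value, which is what mergo.Merge does by default.
func fillZero(dst, src *Config) {
	if !dst.SMTPRequireTLS { // false is the zero value for bool
		dst.SMTPRequireTLS = src.SMTPRequireTLS
	}
	if dst.SMTPHost == "" { // "" is the zero value for string
		dst.SMTPHost = src.SMTPHost
	}
}

func main() {
	// The user explicitly set SMTPRequireTLS=false...
	user := Config{SMTPRequireTLS: false}
	defaults := Config{SMTPRequireTLS: true, SMTPHost: "localhost"}

	fillZero(&user, &defaults)

	// ...but the merge cannot tell "explicitly false" from "unset",
	// so the default true wins. This is the bug the commit fixes by
	// preserving SMTPRequireTLS around the merge.
	fmt.Println(user.SMTPRequireTLS) // true: the user's false was lost
}
```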
Vikrant Gupta
4437630127 fix(tokenizer): do not retry 401 email_password session request (#9541) 2025-11-10 14:04:16 +00:00
174 changed files with 41631 additions and 45081 deletions

View File

@@ -42,7 +42,7 @@ services:
timeout: 5s
retries: 3
schema-migrator-sync:
image: signoz/signoz-schema-migrator:v0.129.8
image: signoz/signoz-schema-migrator:v0.129.11
container_name: schema-migrator-sync
command:
- sync
@@ -55,7 +55,7 @@ services:
condition: service_healthy
restart: on-failure
schema-migrator-async:
image: signoz/signoz-schema-migrator:v0.129.8
image: signoz/signoz-schema-migrator:v0.129.11
container_name: schema-migrator-async
command:
- async

View File

@@ -84,10 +84,9 @@ go-run-enterprise: ## Runs the enterprise go backend server
SIGNOZ_ALERTMANAGER_PROVIDER=signoz \
SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER=cluster \
go run -race \
$(GO_BUILD_CONTEXT_ENTERPRISE)/*.go \
--config ./conf/prometheus.yml \
--cluster cluster
$(GO_BUILD_CONTEXT_ENTERPRISE)/*.go
.PHONY: go-test
go-test: ## Runs go unit tests
@@ -102,10 +101,9 @@ go-run-community: ## Runs the community go backend server
SIGNOZ_ALERTMANAGER_PROVIDER=signoz \
SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER=cluster \
go run -race \
$(GO_BUILD_CONTEXT_COMMUNITY)/*.go server \
--config ./conf/prometheus.yml \
--cluster cluster
$(GO_BUILD_CONTEXT_COMMUNITY)/*.go server
.PHONY: go-build-community $(GO_BUILD_ARCHS_COMMUNITY)
go-build-community: ## Builds the go backend server for community
@@ -208,4 +206,4 @@ py-lint: ## Run lint for integration tests
.PHONY: py-test
py-test: ## Runs integration tests
@cd tests/integration && poetry run pytest --basetemp=./tmp/ -vv --capture=no src/
@cd tests/integration && poetry run pytest --basetemp=./tmp/ -vv --capture=no src/

View File

@@ -5,9 +5,12 @@ import (
"log/slog"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/ee/authz/openfgaauthz"
"github.com/SigNoz/signoz/ee/authz/openfgaschema"
"github.com/SigNoz/signoz/ee/sqlstore/postgressqlstore"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/licensing/nooplicensing"
@@ -76,6 +79,9 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error) {
return signoz.NewAuthNs(ctx, providerSettings, store, licensing)
},
func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)

View File

@@ -8,6 +8,8 @@ import (
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/ee/authn/callbackauthn/oidccallbackauthn"
"github.com/SigNoz/signoz/ee/authn/callbackauthn/samlcallbackauthn"
"github.com/SigNoz/signoz/ee/authz/openfgaauthz"
"github.com/SigNoz/signoz/ee/authz/openfgaschema"
enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
enterpriseapp "github.com/SigNoz/signoz/ee/query-service/app"
@@ -17,6 +19,7 @@ import (
"github.com/SigNoz/signoz/ee/zeus/httpzeus"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/organization"
@@ -105,6 +108,9 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
return authNs, nil
},
func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)

View File

@@ -176,7 +176,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.100.1
image: signoz/signoz:v0.102.0
command:
- --config=/root/config/prometheus.yml
ports:
@@ -209,7 +209,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:v0.129.8
image: signoz/signoz-otel-collector:v0.129.11
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
@@ -233,7 +233,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:v0.129.8
image: signoz/signoz-schema-migrator:v0.129.11
deploy:
restart_policy:
condition: on-failure

View File

@@ -117,7 +117,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.100.1
image: signoz/signoz:v0.102.0
command:
- --config=/root/config/prometheus.yml
ports:
@@ -150,7 +150,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:v0.129.8
image: signoz/signoz-otel-collector:v0.129.11
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
@@ -176,7 +176,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:v0.129.8
image: signoz/signoz-schema-migrator:v0.129.11
deploy:
restart_policy:
condition: on-failure

View File

@@ -179,7 +179,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.100.1}
image: signoz/signoz:${VERSION:-v0.102.0}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
@@ -213,7 +213,7 @@ services:
# TODO: support otel-collector multiple replicas. Nginx/Traefik for loadbalancing?
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.8}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.11}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -239,7 +239,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.8}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.11}
container_name: schema-migrator-sync
command:
- sync
@@ -250,7 +250,7 @@ services:
condition: service_healthy
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.8}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.11}
container_name: schema-migrator-async
command:
- async

View File

@@ -111,7 +111,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.100.1}
image: signoz/signoz:${VERSION:-v0.102.0}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
@@ -144,7 +144,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.8}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.11}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -166,7 +166,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.8}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.11}
container_name: schema-migrator-sync
command:
- sync
@@ -178,7 +178,7 @@ services:
restart: on-failure
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.8}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.11}
container_name: schema-migrator-async
command:
- async

View File

@@ -103,9 +103,19 @@ Remember to replace the region and ingestion key with proper values as obtained
Both SigNoz and the OTel demo app [the frontend-proxy service, to be accurate] share a common port allocation at 8080. To prevent port-allocation conflicts, modify the OTel demo application config to use port 8081 as the `ENVOY_PORT` value as shown below, then run the docker compose command.
Also, both SigNoz and the OTel Demo App have the same `PROMETHEUS_PORT` configured; by default both try to start at `9090`, which may cause either of them to fail depending on which one acquires the port first. To prevent this, we need to modify the value of `PROMETHEUS_PORT` too.
```sh
ENVOY_PORT=8081 docker compose up -d
ENVOY_PORT=8081 PROMETHEUS_PORT=9091 docker compose up -d
```
Alternatively, we can set these values in the `.env` file, which reduces the command to just:
```sh
docker compose up -d
```
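A sketch of the `.env` variant carrying the same two overrides (place it next to the demo's docker-compose file; docker compose picks it up automatically):

```sh
# .env
ENVOY_PORT=8081
PROMETHEUS_PORT=9091
```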
This spins up multiple microservices with OpenTelemetry instrumentation enabled. You can verify this by:
```sh
docker compose ps -a

View File

@@ -48,7 +48,26 @@ func (provider *provider) Check(ctx context.Context, tuple *openfgav1.TupleKey)
}
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, relation authtypes.Relation, _ authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeUser, claims.UserID, authtypes.Relation{})
subject, err := authtypes.NewSubject(authtypes.TypeableUser, claims.UserID, orgID, nil)
if err != nil {
return err
}
tuples, err := typeable.Tuples(subject, relation, selectors, orgID)
if err != nil {
return err
}
err = provider.BatchCheck(ctx, tuples)
if err != nil {
return err
}
return nil
}
func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Context, orgID valuer.UUID, relation authtypes.Relation, _ authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.String(), orgID, nil)
if err != nil {
return err
}

View File

@@ -15,18 +15,18 @@ type anonymous
type role
relations
define assignee: [user]
define assignee: [user, anonymous]
define read: [user, role#assignee]
define update: [user, role#assignee]
define delete: [user, role#assignee]
type resources
type metaresources
relations
define create: [user, role#assignee]
define list: [user, role#assignee]
type resource
type metaresource
relations
define read: [user, anonymous, role#assignee]
define update: [user, role#assignee]
@@ -35,6 +35,6 @@ type resource
define block: [user, role#assignee]
type telemetry
type telemetryresource
relations
define read: [user, anonymous, role#assignee]
define read: [user, role#assignee]

View File

@@ -20,6 +20,10 @@ import (
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
rules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/version"
"github.com/gorilla/mux"
)
@@ -99,6 +103,39 @@ func (ah *APIHandler) RegisterRoutes(router *mux.Router, am *middleware.AuthZ) {
router.HandleFunc("/api/v1/billing", am.AdminAccess(ah.getBilling)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/portal", am.AdminAccess(ah.LicensingAPI.Portal)).Methods(http.MethodPost)
// dashboards
router.HandleFunc("/api/v1/dashboards/{id}/public", am.AdminAccess(ah.Signoz.Handlers.Dashboard.CreatePublic)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/dashboards/{id}/public", am.AdminAccess(ah.Signoz.Handlers.Dashboard.GetPublic)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/dashboards/{id}/public", am.AdminAccess(ah.Signoz.Handlers.Dashboard.UpdatePublic)).Methods(http.MethodPut)
router.HandleFunc("/api/v1/dashboards/{id}/public", am.AdminAccess(ah.Signoz.Handlers.Dashboard.DeletePublic)).Methods(http.MethodDelete)
// public access for dashboards
router.HandleFunc("/api/v1/public/dashboards/{id}", am.CheckWithoutClaims(
ah.Signoz.Handlers.Dashboard.GetPublicData,
authtypes.RelationRead, authtypes.RelationRead,
dashboardtypes.TypeableMetaResourcePublicDashboard,
func(req *http.Request, orgs []*types.Organization) ([]authtypes.Selector, valuer.UUID, error) {
id, err := valuer.NewUUID(mux.Vars(req)["id"])
if err != nil {
return nil, valuer.UUID{}, err
}
return ah.Signoz.Modules.Dashboard.GetPublicDashboardOrgAndSelectors(req.Context(), id, orgs)
})).Methods(http.MethodGet)
router.HandleFunc("/api/v1/public/dashboards/{id}/widgets/{index}/query_range", am.CheckWithoutClaims(
ah.Signoz.Handlers.Dashboard.GetPublicWidgetQueryRange,
authtypes.RelationRead, authtypes.RelationRead,
dashboardtypes.TypeableMetaResourcePublicDashboard,
func(req *http.Request, orgs []*types.Organization) ([]authtypes.Selector, valuer.UUID, error) {
id, err := valuer.NewUUID(mux.Vars(req)["id"])
if err != nil {
return nil, valuer.UUID{}, err
}
return ah.Signoz.Modules.Dashboard.GetPublicDashboardOrgAndSelectors(req.Context(), id, orgs)
})).Methods(http.MethodGet)
// v3
router.HandleFunc("/api/v3/licenses", am.AdminAccess(ah.LicensingAPI.Activate)).Methods(http.MethodPost)
router.HandleFunc("/api/v3/licenses", am.AdminAccess(ah.LicensingAPI.Refresh)).Methods(http.MethodPut)

View File

@@ -192,7 +192,7 @@ func (s Server) HealthCheckStatus() chan healthcheck.Status {
func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*http.Server, error) {
r := baseapp.NewRouter()
am := middleware.NewAuthZ(s.signoz.Instrumentation.Logger())
am := middleware.NewAuthZ(s.signoz.Instrumentation.Logger(), s.signoz.Modules.OrgGetter, s.signoz.Authz)
r.Use(otelmux.Middleware(
"apiserver",

View File

@@ -280,6 +280,7 @@
"got": "11.8.5",
"form-data": "4.0.4",
"brace-expansion": "^2.0.2",
"on-headers": "^1.1.0"
"on-headers": "^1.1.0",
"tmp": "0.2.4"
}
}

Binary file not shown (image, 98 KiB).

View File

@@ -1,4 +1,4 @@
import { ApiBaseInstance as axios } from 'api';
import { LogEventAxiosInstance as axios } from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';

View File

@@ -1,13 +1,11 @@
/* eslint-disable sonarjs/no-duplicate-string */
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { getFieldKeys } from '../getFieldKeys';
// Mock the API instance
jest.mock('api', () => ({
ApiBaseInstance: {
get: jest.fn(),
},
get: jest.fn(),
}));
describe('getFieldKeys API', () => {
@@ -31,33 +29,33 @@ describe('getFieldKeys API', () => {
it('should call API with correct parameters when no args provided', async () => {
// Mock successful API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce(mockSuccessResponse);
(axios.get as jest.Mock).mockResolvedValueOnce(mockSuccessResponse);
// Call function with no parameters
await getFieldKeys();
// Verify API was called correctly with empty params object
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/keys', {
expect(axios.get).toHaveBeenCalledWith('/fields/keys', {
params: {},
});
});
it('should call API with signal parameter when provided', async () => {
// Mock successful API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce(mockSuccessResponse);
(axios.get as jest.Mock).mockResolvedValueOnce(mockSuccessResponse);
// Call function with signal parameter
await getFieldKeys('traces');
// Verify API was called with signal parameter
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/keys', {
expect(axios.get).toHaveBeenCalledWith('/fields/keys', {
params: { signal: 'traces' },
});
});
it('should call API with name parameter when provided', async () => {
// Mock successful API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce({
(axios.get as jest.Mock).mockResolvedValueOnce({
status: 200,
data: {
status: 'success',
@@ -72,14 +70,14 @@ describe('getFieldKeys API', () => {
await getFieldKeys(undefined, 'service');
// Verify API was called with name parameter
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/keys', {
expect(axios.get).toHaveBeenCalledWith('/fields/keys', {
params: { name: 'service' },
});
});
it('should call API with both signal and name when provided', async () => {
// Mock successful API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce({
(axios.get as jest.Mock).mockResolvedValueOnce({
status: 200,
data: {
status: 'success',
@@ -94,14 +92,14 @@ describe('getFieldKeys API', () => {
await getFieldKeys('logs', 'service');
// Verify API was called with both parameters
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/keys', {
expect(axios.get).toHaveBeenCalledWith('/fields/keys', {
params: { signal: 'logs', name: 'service' },
});
});
it('should return properly formatted response', async () => {
// Mock API to return our response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce(mockSuccessResponse);
(axios.get as jest.Mock).mockResolvedValueOnce(mockSuccessResponse);
// Call the function
const result = await getFieldKeys('traces');

View File

@@ -1,13 +1,11 @@
/* eslint-disable sonarjs/no-duplicate-string */
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { getFieldValues } from '../getFieldValues';
// Mock the API instance
jest.mock('api', () => ({
ApiBaseInstance: {
get: jest.fn(),
},
get: jest.fn(),
}));
describe('getFieldValues API', () => {
@@ -17,7 +15,7 @@ describe('getFieldValues API', () => {
it('should call the API with correct parameters (no options)', async () => {
// Mock API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce({
(axios.get as jest.Mock).mockResolvedValueOnce({
status: 200,
data: {
status: 'success',
@@ -34,14 +32,14 @@ describe('getFieldValues API', () => {
await getFieldValues();
// Verify API was called correctly with empty params
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/values', {
expect(axios.get).toHaveBeenCalledWith('/fields/values', {
params: {},
});
});
it('should call the API with signal parameter', async () => {
// Mock API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce({
(axios.get as jest.Mock).mockResolvedValueOnce({
status: 200,
data: {
status: 'success',
@@ -58,14 +56,14 @@ describe('getFieldValues API', () => {
await getFieldValues('traces');
// Verify API was called with signal parameter
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/values', {
expect(axios.get).toHaveBeenCalledWith('/fields/values', {
params: { signal: 'traces' },
});
});
it('should call the API with name parameter', async () => {
// Mock API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce({
(axios.get as jest.Mock).mockResolvedValueOnce({
status: 200,
data: {
status: 'success',
@@ -82,14 +80,14 @@ describe('getFieldValues API', () => {
await getFieldValues(undefined, 'service.name');
// Verify API was called with name parameter
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/values', {
expect(axios.get).toHaveBeenCalledWith('/fields/values', {
params: { name: 'service.name' },
});
});
it('should call the API with value parameter', async () => {
// Mock API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce({
(axios.get as jest.Mock).mockResolvedValueOnce({
status: 200,
data: {
status: 'success',
@@ -106,14 +104,14 @@ describe('getFieldValues API', () => {
await getFieldValues(undefined, 'service.name', 'front');
// Verify API was called with value parameter
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/values', {
expect(axios.get).toHaveBeenCalledWith('/fields/values', {
params: { name: 'service.name', searchText: 'front' },
});
});
it('should call the API with time range parameters', async () => {
// Mock API response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce({
(axios.get as jest.Mock).mockResolvedValueOnce({
status: 200,
data: {
status: 'success',
@@ -138,7 +136,7 @@ describe('getFieldValues API', () => {
);
// Verify API was called with time range parameters (converted to milliseconds)
expect(ApiBaseInstance.get).toHaveBeenCalledWith('/fields/values', {
expect(axios.get).toHaveBeenCalledWith('/fields/values', {
params: {
signal: 'logs',
name: 'service.name',
@@ -165,7 +163,7 @@ describe('getFieldValues API', () => {
},
};
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce(mockResponse);
(axios.get as jest.Mock).mockResolvedValueOnce(mockResponse);
// Call the function
const result = await getFieldValues('traces', 'mixed.values');
@@ -196,7 +194,7 @@ describe('getFieldValues API', () => {
};
// Mock API to return our response
(ApiBaseInstance.get as jest.Mock).mockResolvedValueOnce(mockApiResponse);
(axios.get as jest.Mock).mockResolvedValueOnce(mockApiResponse);
// Call the function
const result = await getFieldValues('traces', 'service.name');

View File

@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
@@ -24,7 +24,7 @@ export const getFieldKeys = async (
}
try {
const response = await ApiBaseInstance.get('/fields/keys', { params });
const response = await axios.get('/fields/keys', { params });
return {
httpStatusCode: response.status,

View File

@@ -1,5 +1,5 @@
/* eslint-disable sonarjs/cognitive-complexity */
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
@@ -47,7 +47,7 @@ export const getFieldValues = async (
}
try {
const response = await ApiBaseInstance.get('/fields/values', { params });
const response = await axios.get('/fields/values', { params });
// Normalize values from different types (stringValues, boolValues, etc.)
if (response.data?.data?.values) {
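The normalization the comment above refers to can be sketched as a small pure function. This is a hedged illustration, not the SigNoz implementation: the exact field names of the `/fields/values` payload beyond `stringValues`/`boolValues` are assumptions based on the "etc." in the comment.

```typescript
// Hypothetical sketch: merge type-split value arrays from the /fields/values
// response into one string list. Field names are partly assumed.
interface FieldValues {
  stringValues?: string[];
  boolValues?: boolean[];
  numberValues?: number[];
}

function normalizeFieldValues(values: FieldValues): string[] {
  return [
    ...(values.stringValues ?? []),
    ...(values.boolValues ?? []).map(String),
    ...(values.numberValues ?? []).map(String),
  ];
}
```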


@@ -86,8 +86,9 @@ const interceptorRejected = async (
if (
response.status === 401 &&
// if the session rotate call errors out with 401 or the delete sessions call returns 401 then we do not retry!
// if the session rotate call or the create session errors out with 401 or the delete sessions call returns 401 then we do not retry!
response.config.url !== '/sessions/rotate' &&
response.config.url !== '/sessions/email_password' &&
!(
response.config.url === '/sessions' && response.config.method === 'delete'
)
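The guard above can be read as a single no-retry rule. A minimal sketch of that rule as a pure predicate (`shouldSkipRetry` is a hypothetical helper, not part of the PR; only the three URLs visible in the diff are encoded):

```typescript
// Hypothetical helper mirroring the interceptor guard: a 401 is NOT retried
// when it came from the rotate call, the email/password create-session call,
// or a DELETE on /sessions.
interface ResponseMeta {
  status: number;
  url: string;
  method: string;
}

function shouldSkipRetry({ status, url, method }: ResponseMeta): boolean {
  if (status !== 401) return false; // non-401s never enter this retry path
  return (
    url === '/sessions/rotate' ||
    url === '/sessions/email_password' ||
    (url === '/sessions' && method === 'delete')
  );
}
```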
@@ -199,15 +200,15 @@ ApiV5Instance.interceptors.request.use(interceptorsRequestResponse);
//
// axios Base
export const ApiBaseInstance = axios.create({
export const LogEventAxiosInstance = axios.create({
baseURL: `${ENVIRONMENT.baseURL}${apiV1}`,
});
ApiBaseInstance.interceptors.response.use(
LogEventAxiosInstance.interceptors.response.use(
interceptorsResponse,
interceptorRejectedBase,
);
ApiBaseInstance.interceptors.request.use(interceptorsRequestResponse);
LogEventAxiosInstance.interceptors.request.use(interceptorsRequestResponse);
//
// gateway Api V1


@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError, AxiosResponse } from 'axios';
import { baseAutoCompleteIdKeysOrder } from 'constants/queryBuilder';
@@ -17,7 +17,7 @@ export const getHostAttributeKeys = async (
try {
const response: AxiosResponse<{
data: IQueryAutocompleteResponse;
}> = await ApiBaseInstance.get(
}> = await axios.get(
`/${entity}/attribute_keys?dataSource=metrics&searchText=${searchText}`,
{
params: {


@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { SOMETHING_WENT_WRONG } from 'constants/api';
@@ -20,7 +20,7 @@ const getOnboardingStatus = async (props: {
}): Promise<SuccessResponse<OnboardingStatusResponse> | ErrorResponse> => {
const { endpointService, ...rest } = props;
try {
const response = await ApiBaseInstance.post(
const response = await axios.post(
`/messaging-queues/kafka/onboarding/${endpointService || 'consumers'}`,
rest,
);


@@ -1,13 +1,20 @@
-import axios from 'api';
+import { ApiV2Instance } from 'api';
+import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
+import { AxiosError } from 'axios';
+import { ErrorV2Resp } from 'types/api';
import { PayloadProps, Props } from 'types/api/metrics/getService';
const getService = async (props: Props): Promise<PayloadProps> => {
-const response = await axios.post(`/services`, {
-start: `${props.start}`,
-end: `${props.end}`,
-tags: props.selectedTags,
-});
-return response.data;
+try {
+const response = await ApiV2Instance.post(`/services`, {
+start: `${props.start}`,
+end: `${props.end}`,
+tags: props.selectedTags,
+});
+return response.data.data;
+} catch (error) {
+ErrorResponseHandlerV2(error as AxiosError<ErrorV2Resp>);
+}
};
export default getService;


@@ -1,22 +1,27 @@
-import axios from 'api';
+import { ApiV2Instance } from 'api';
+import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
+import { AxiosError } from 'axios';
+import { ErrorV2Resp } from 'types/api';
import { PayloadProps, Props } from 'types/api/metrics/getTopOperations';
const getTopOperations = async (props: Props): Promise<PayloadProps> => {
-const endpoint = props.isEntryPoint
-? '/service/entry_point_operations'
-: '/service/top_operations';
+try {
+const endpoint = props.isEntryPoint
+? '/service/entry_point_operations'
+: '/service/top_operations';
-const response = await axios.post(endpoint, {
-start: `${props.start}`,
-end: `${props.end}`,
-service: props.service,
-tags: props.selectedTags,
-});
+const response = await ApiV2Instance.post(endpoint, {
+start: `${props.start}`,
+end: `${props.end}`,
+service: props.service,
+tags: props.selectedTags,
+limit: 5000,
+});
-if (props.isEntryPoint) {
+return response.data.data;
+} catch (error) {
+ErrorResponseHandlerV2(error as AxiosError<ErrorV2Resp>);
+}
-return response.data;
};
export default getTopOperations;
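The migrated function returns `response.data.data` because the v2 API wraps its payload in a `{ data: ... }` envelope. A sketch of the endpoint choice and the unwrap step (simplified types; `pickEndpoint` and `unwrap` are illustrative helpers, not names from the PR):

```typescript
// Endpoint choice mirrors the ternary in the diff above.
function pickEndpoint(isEntryPoint: boolean): string {
  return isEntryPoint
    ? '/service/entry_point_operations'
    : '/service/top_operations';
}

// The v2 envelope: response.data is { data: payload }, hence .data.data.
interface V2Envelope<T> {
  data: T;
}

function unwrap<T>(response: { data: V2Envelope<T> }): T {
  return response.data.data;
}
```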


@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandler } from 'api/ErrorResponseHandler';
import { AxiosError } from 'axios';
import { ErrorResponse, SuccessResponse } from 'types/api';
@@ -9,7 +9,7 @@ const getCustomFilters = async (
): Promise<SuccessResponse<PayloadProps> | ErrorResponse> => {
const { signal } = props;
try {
const response = await ApiBaseInstance.get(`orgs/me/filters/${signal}`);
const response = await axios.get(`/orgs/me/filters/${signal}`);
return {
statusCode: 200,


@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { AxiosError } from 'axios';
import { SuccessResponse } from 'types/api';
import { UpdateCustomFiltersProps } from 'types/api/quickFilters/updateCustomFilters';
@@ -6,7 +6,7 @@ import { UpdateCustomFiltersProps } from 'types/api/quickFilters/updateCustomFil
const updateCustomFiltersAPI = async (
props: UpdateCustomFiltersProps,
): Promise<SuccessResponse<void> | AxiosError> =>
ApiBaseInstance.put(`orgs/me/filters`, {
axios.put(`/orgs/me/filters`, {
...props.data,
});


@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
@@ -9,15 +9,12 @@ const listOverview = async (
): Promise<SuccessResponseV2<PayloadProps>> => {
const { start, end, show_ip: showIp, filter } = props;
try {
const response = await ApiBaseInstance.post(
`/third-party-apis/overview/list`,
{
start,
end,
show_ip: showIp,
filter,
},
);
const response = await axios.post(`/third-party-apis/overview/list`, {
start,
end,
show_ip: showIp,
filter,
});
return {
httpStatusCode: response.status,


@@ -1,4 +1,4 @@
import { ApiBaseInstance } from 'api';
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
@@ -11,7 +11,7 @@ const getSpanPercentiles = async (
props: GetSpanPercentilesProps,
): Promise<SuccessResponseV2<GetSpanPercentilesResponseDataProps>> => {
try {
const response = await ApiBaseInstance.post('/span_percentile', {
const response = await axios.post('/span_percentile', {
...props,
});


@@ -224,7 +224,7 @@ export const convertFiltersToExpressionWithExistingQuery = (
const visitedPairs: Set<string> = new Set(); // Set to track visited query pairs
// Map extracted query pairs to key-specific pair information for faster access
let queryPairsMap = getQueryPairsMap(existingQuery.trim());
let queryPairsMap = getQueryPairsMap(existingQuery);
filters?.items?.forEach((filter) => {
const { key, op, value } = filter;
@@ -309,7 +309,7 @@ export const convertFiltersToExpressionWithExistingQuery = (
)}${OPERATORS.IN} ${formattedValue} ${modifiedQuery.slice(
notInPair.position.valueEnd + 1,
)}`;
queryPairsMap = getQueryPairsMap(modifiedQuery.trim());
queryPairsMap = getQueryPairsMap(modifiedQuery);
}
shouldAddToNonExisting = false; // Don't add this to non-existing filters
} else if (


@@ -1,4 +1,5 @@
import { Select } from 'antd';
import { ENTITY_VERSION_V5 } from 'constants/app';
import { initialQueriesMap } from 'constants/queryBuilder';
import {
getAllEndpointsWidgetData,
@@ -264,6 +265,7 @@ function AllEndPoints({
customOnDragSelect={(): void => {}}
customTimeRange={timeRange}
customOnRowClick={onRowClick}
version={ENTITY_VERSION_V5}
/>
</div>
</div>


@@ -1,5 +1,6 @@
import { ENTITY_VERSION_V4 } from 'constants/app';
import { ENTITY_VERSION_V4, ENTITY_VERSION_V5 } from 'constants/app';
import { initialQueriesMap } from 'constants/queryBuilder';
import { REACT_QUERY_KEY } from 'constants/reactQueryKeys';
import { useApiMonitoringParams } from 'container/ApiMonitoring/queryParams';
import {
END_POINT_DETAILS_QUERY_KEYS_ARRAY,
@@ -178,18 +179,33 @@ function EndPointDetails({
[domainName, filters, minTime, maxTime],
);
const V5_QUERIES = [
REACT_QUERY_KEY.GET_ENDPOINT_STATUS_CODE_DATA,
REACT_QUERY_KEY.GET_ENDPOINT_STATUS_CODE_BAR_CHARTS_DATA,
REACT_QUERY_KEY.GET_ENDPOINT_STATUS_CODE_LATENCY_BAR_CHARTS_DATA,
REACT_QUERY_KEY.GET_ENDPOINT_METRICS_DATA,
REACT_QUERY_KEY.GET_ENDPOINT_DEPENDENT_SERVICES_DATA,
REACT_QUERY_KEY.GET_ENDPOINT_DROPDOWN_DATA,
] as const;
const endPointDetailsDataQueries = useQueries(
endPointDetailsQueryPayload.map((payload, index) => ({
queryKey: [
END_POINT_DETAILS_QUERY_KEYS_ARRAY[index],
payload,
filters?.items, // Include filters.items in queryKey for better caching
ENTITY_VERSION_V4,
],
queryFn: (): Promise<SuccessResponse<MetricRangePayloadProps>> =>
GetMetricQueryRange(payload, ENTITY_VERSION_V4),
enabled: !!payload,
})),
endPointDetailsQueryPayload.map((payload, index) => {
const queryKey = END_POINT_DETAILS_QUERY_KEYS_ARRAY[index];
const version = (V5_QUERIES as readonly string[]).includes(queryKey)
? ENTITY_VERSION_V5
: ENTITY_VERSION_V4;
return {
queryKey: [
END_POINT_DETAILS_QUERY_KEYS_ARRAY[index],
payload,
...(filters?.items?.length ? filters.items : []), // Include filters.items in queryKey for better caching
version,
],
queryFn: (): Promise<SuccessResponse<MetricRangePayloadProps>> =>
GetMetricQueryRange(payload, version),
enabled: !!payload,
};
}),
);
const [
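The per-query version pick above can be summarized as a small lookup: keys listed in `V5_QUERIES` use the v5 entity version, everything else stays on v4. A sketch under stated assumptions (the `'v4'`/`'v5'` constant values and the key names are placeholders, not the real `constants/app` values):

```typescript
// Placeholder constants standing in for ENTITY_VERSION_V4 / ENTITY_VERSION_V5.
const ENTITY_VERSION_V4 = 'v4';
const ENTITY_VERSION_V5 = 'v5';

// Illustrative subset of the migrated query keys.
const V5_QUERIES = [
  'GET_ENDPOINT_METRICS_DATA',
  'GET_ENDPOINT_DROPDOWN_DATA',
] as const;

function versionFor(queryKey: string): string {
  return (V5_QUERIES as readonly string[]).includes(queryKey)
    ? ENTITY_VERSION_V5
    : ENTITY_VERSION_V4;
}
```

Keeping the version inside the query key (as the diff does) ensures react-query treats v4 and v5 results as distinct cache entries.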


@@ -4,7 +4,7 @@ import { getQueryRangeV5 } from 'api/v5/queryRange/getQueryRange';
import { MetricRangePayloadV5, ScalarData } from 'api/v5/v5';
import { useNavigateToExplorer } from 'components/CeleryTask/useNavigateToExplorer';
import { withErrorBoundary } from 'components/ErrorBoundaryHOC';
import { ENTITY_VERSION_V4, ENTITY_VERSION_V5 } from 'constants/app';
import { ENTITY_VERSION_V5 } from 'constants/app';
import { REACT_QUERY_KEY } from 'constants/reactQueryKeys';
import {
END_POINT_DETAILS_QUERY_KEYS_ARRAY,
@@ -56,6 +56,10 @@ function TopErrors({
{
items: endPointName
? [
// Remove any existing http.url filters from initialFilters to avoid duplicates
...(initialFilters?.items?.filter(
(item) => item.key?.key !== SPAN_ATTRIBUTES.URL_PATH,
) || []),
{
id: '92b8a1c1',
key: {
@@ -66,7 +70,6 @@ function TopErrors({
op: '=',
value: endPointName,
},
...(initialFilters?.items || []),
]
: [...(initialFilters?.items || [])],
op: 'AND',
@@ -128,12 +131,12 @@ function TopErrors({
const endPointDropDownDataQueries = useQueries(
endPointDropDownQueryPayload.map((payload) => ({
queryKey: [
END_POINT_DETAILS_QUERY_KEYS_ARRAY[4],
END_POINT_DETAILS_QUERY_KEYS_ARRAY[2],
payload,
ENTITY_VERSION_V4,
ENTITY_VERSION_V5,
],
queryFn: (): Promise<SuccessResponse<MetricRangePayloadProps>> =>
GetMetricQueryRange(payload, ENTITY_VERSION_V4),
GetMetricQueryRange(payload, ENTITY_VERSION_V5),
enabled: !!payload,
staleTime: 60 * 1000,
})),
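The duplicate-filter fix above strips any existing `http.url` filters from `initialFilters` before appending the endpoint filter. A sketch of that merge (the filter shape is simplified; `withEndpointFilter` is an illustrative helper, not a name from the PR):

```typescript
// Simplified filter item; the real IBuilderQuery filter carries more fields.
const URL_PATH = 'http.url';

interface FilterItem {
  key?: { key: string };
  value: string;
}

function withEndpointFilter(
  initial: FilterItem[],
  endPointName: string,
): FilterItem[] {
  return [
    // Drop pre-existing http.url filters to avoid duplicates...
    ...initial.filter((item) => item.key?.key !== URL_PATH),
    // ...then add exactly one filter for the selected endpoint.
    { key: { key: URL_PATH }, value: endPointName },
  ];
}
```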


@@ -0,0 +1,337 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable react/jsx-props-no-spreading */
/* eslint-disable prefer-destructuring */
/* eslint-disable sonarjs/no-duplicate-string */
import { render, screen, waitFor } from '@testing-library/react';
import { TraceAggregation } from 'api/v5/v5';
import { ENTITY_VERSION_V5 } from 'constants/app';
import { GetMetricQueryRange } from 'lib/dashboard/getQueryResults';
import { QueryClient, QueryClientProvider } from 'react-query';
import { IBuilderQuery } from 'types/api/queryBuilder/queryBuilderData';
import DomainMetrics from './DomainMetrics';
// Mock the API call
jest.mock('lib/dashboard/getQueryResults', () => ({
GetMetricQueryRange: jest.fn(),
}));
// Mock ErrorState component
jest.mock('./ErrorState', () => ({
__esModule: true,
default: jest.fn(({ refetch }) => (
<div data-testid="error-state">
<button type="button" onClick={refetch} data-testid="retry-button">
Retry
</button>
</div>
)),
}));
describe('DomainMetrics - V5 Query Payload Tests', () => {
let queryClient: QueryClient;
const mockProps = {
domainName: '0.0.0.0',
timeRange: {
startTime: 1758259531000,
endTime: 1758261331000,
},
domainListFilters: {
items: [],
op: 'AND' as const,
} as IBuilderQuery['filters'],
};
const mockSuccessResponse = {
statusCode: 200,
error: null,
payload: {
data: {
result: [
{
table: {
rows: [
{
data: {
A: '150',
B: '125000000',
D: '2021-01-01T23:00:00Z',
F1: '5.5',
},
},
],
},
},
],
},
},
};
beforeEach(() => {
queryClient = new QueryClient({
defaultOptions: {
queries: {
retry: false,
cacheTime: 0,
},
},
});
jest.clearAllMocks();
});
afterEach(() => {
queryClient.clear();
});
const renderComponent = (props = mockProps): ReturnType<typeof render> =>
render(
<QueryClientProvider client={queryClient}>
<DomainMetrics {...props} />
</QueryClientProvider>,
);
describe('1. V5 Query Payload with Filters', () => {
it('sends correct V5 payload structure with domain name filters', async () => {
(GetMetricQueryRange as jest.Mock).mockResolvedValue(mockSuccessResponse);
renderComponent();
await waitFor(() => {
expect(GetMetricQueryRange).toHaveBeenCalledTimes(1);
});
const [payload, version] = (GetMetricQueryRange as jest.Mock).mock.calls[0];
// Verify it's using V5
expect(version).toBe(ENTITY_VERSION_V5);
// Verify time range
expect(payload.start).toBe(1758259531000);
expect(payload.end).toBe(1758261331000);
// Verify V3 payload structure (getDomainMetricsQueryPayload returns V3 format)
expect(payload.query).toBeDefined();
expect(payload.query.builder).toBeDefined();
expect(payload.query.builder.queryData).toBeDefined();
const queryData = payload.query.builder.queryData;
// Verify Query A - count with URL filter
const queryA = queryData.find((q: any) => q.queryName === 'A');
expect(queryA).toBeDefined();
expect(queryA.dataSource).toBe('traces');
expect(queryA.aggregations?.[0]).toBeDefined();
expect((queryA.aggregations?.[0] as TraceAggregation)?.expression).toBe(
'count()',
);
// Verify exact domain filter expression structure
expect(queryA.filter.expression).toContain(
"(net.peer.name = '0.0.0.0' OR server.address = '0.0.0.0')",
);
expect(queryA.filter.expression).toContain(
'url.full EXISTS OR http.url EXISTS',
);
// Verify Query B - p99 latency
const queryB = queryData.find((q: any) => q.queryName === 'B');
expect(queryB).toBeDefined();
expect(queryB.aggregateOperator).toBe('p99');
expect(queryB.aggregations?.[0]).toBeDefined();
expect((queryB.aggregations?.[0] as TraceAggregation)?.expression).toBe(
'p99(duration_nano)',
);
// Verify exact domain filter expression structure
expect(queryB.filter.expression).toContain(
"(net.peer.name = '0.0.0.0' OR server.address = '0.0.0.0')",
);
// Verify Query C - error count (disabled)
const queryC = queryData.find((q: any) => q.queryName === 'C');
expect(queryC).toBeDefined();
expect(queryC.disabled).toBe(true);
expect(queryC.filter.expression).toContain(
"(net.peer.name = '0.0.0.0' OR server.address = '0.0.0.0')",
);
expect(queryC.aggregations?.[0]).toBeDefined();
expect((queryC.aggregations?.[0] as TraceAggregation)?.expression).toBe(
'count()',
);
expect(queryC.filter.expression).toContain('has_error = true');
// Verify Query D - max timestamp
const queryD = queryData.find((q: any) => q.queryName === 'D');
expect(queryD).toBeDefined();
expect(queryD.aggregateOperator).toBe('max');
expect(queryD.aggregations?.[0]).toBeDefined();
expect((queryD.aggregations?.[0] as TraceAggregation)?.expression).toBe(
'max(timestamp)',
);
// Verify exact domain filter expression structure
expect(queryD.filter.expression).toContain(
"(net.peer.name = '0.0.0.0' OR server.address = '0.0.0.0')",
);
// Verify Formula F1 - error rate calculation
const formulas = payload.query.builder.queryFormulas;
expect(formulas).toBeDefined();
expect(formulas.length).toBeGreaterThan(0);
const formulaF1 = formulas.find((f: any) => f.queryName === 'F1');
expect(formulaF1).toBeDefined();
expect(formulaF1.expression).toBe('(C/A)*100');
});
it('includes custom filters in filter expressions', async () => {
(GetMetricQueryRange as jest.Mock).mockResolvedValue(mockSuccessResponse);
const customFilters: IBuilderQuery['filters'] = {
items: [
{
id: 'test-1',
key: {
key: 'service.name',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'my-service',
},
{
id: 'test-2',
key: {
key: 'deployment.environment',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'production',
},
],
op: 'AND' as const,
};
renderComponent({
...mockProps,
domainListFilters: customFilters,
});
await waitFor(() => {
expect(GetMetricQueryRange).toHaveBeenCalled();
});
const [payload] = (GetMetricQueryRange as jest.Mock).mock.calls[0];
const queryData = payload.query.builder.queryData;
// Verify all queries include the custom filters
queryData.forEach((query: any) => {
if (query.filter && query.filter.expression) {
expect(query.filter.expression).toContain('service.name');
expect(query.filter.expression).toContain('my-service');
expect(query.filter.expression).toContain('deployment.environment');
expect(query.filter.expression).toContain('production');
}
});
});
});
describe('2. Data Display State', () => {
it('displays metrics when data is successfully loaded', async () => {
(GetMetricQueryRange as jest.Mock).mockResolvedValue(mockSuccessResponse);
renderComponent();
// Wait for skeletons to disappear
await waitFor(() => {
const skeletons = document.querySelectorAll('.ant-skeleton-button');
expect(skeletons.length).toBe(0);
});
// Verify all metric labels are displayed
expect(screen.getByText('EXTERNAL API')).toBeInTheDocument();
expect(screen.getByText('AVERAGE LATENCY')).toBeInTheDocument();
expect(screen.getByText('ERROR %')).toBeInTheDocument();
expect(screen.getByText('LAST USED')).toBeInTheDocument();
// Verify metric values are displayed
expect(screen.getByText('150')).toBeInTheDocument();
expect(screen.getByText('0.125s')).toBeInTheDocument();
});
});
describe('3. Empty/Missing Data State', () => {
it('displays "-" for missing data values', async () => {
const emptyResponse = {
statusCode: 200,
error: null,
payload: {
data: {
result: [
{
table: {
rows: [],
},
},
],
},
},
};
(GetMetricQueryRange as jest.Mock).mockResolvedValue(emptyResponse);
renderComponent();
await waitFor(() => {
const skeletons = document.querySelectorAll('.ant-skeleton-button');
expect(skeletons.length).toBe(0);
});
// When no data, all values should show "-"
const dashValues = screen.getAllByText('-');
expect(dashValues.length).toBeGreaterThan(0);
});
});
describe('4. Error State', () => {
it('displays error state when API call fails', async () => {
(GetMetricQueryRange as jest.Mock).mockRejectedValue(new Error('API Error'));
renderComponent();
await waitFor(() => {
expect(screen.getByTestId('error-state')).toBeInTheDocument();
});
expect(screen.getByTestId('retry-button')).toBeInTheDocument();
});
it('retries API call when retry button is clicked', async () => {
let callCount = 0;
(GetMetricQueryRange as jest.Mock).mockImplementation(() => {
callCount += 1;
if (callCount === 1) {
return Promise.reject(new Error('API Error'));
}
return Promise.resolve(mockSuccessResponse);
});
renderComponent();
// Wait for error state
await waitFor(() => {
expect(screen.getByTestId('error-state')).toBeInTheDocument();
});
// Click retry
const retryButton = screen.getByTestId('retry-button');
retryButton.click();
// Wait for successful load
await waitFor(() => {
expect(screen.getByText('150')).toBeInTheDocument();
});
expect(callCount).toBe(2);
});
});
});


@@ -1,6 +1,6 @@
import { Color } from '@signozhq/design-tokens';
import { Progress, Skeleton, Tooltip, Typography } from 'antd';
import { ENTITY_VERSION_V4 } from 'constants/app';
import { ENTITY_VERSION_V5 } from 'constants/app';
import { REACT_QUERY_KEY } from 'constants/reactQueryKeys';
import {
DomainMetricsResponseRow,
@@ -44,10 +44,10 @@ function DomainMetrics({
queryKey: [
REACT_QUERY_KEY.GET_DOMAIN_METRICS_DATA,
payload,
ENTITY_VERSION_V4,
ENTITY_VERSION_V5,
],
queryFn: (): Promise<SuccessResponse<MetricRangePayloadProps>> =>
GetMetricQueryRange(payload, ENTITY_VERSION_V4),
GetMetricQueryRange(payload, ENTITY_VERSION_V5),
enabled: !!payload,
staleTime: 60 * 1000, // 1 minute stale time : optimize this part
})),
@@ -132,7 +132,9 @@ function DomainMetrics({
) : (
<Tooltip title={formattedDomainMetricsData.latency}>
<span className="round-metric-tag">
{(Number(formattedDomainMetricsData.latency) / 1000).toFixed(3)}s
{formattedDomainMetricsData.latency !== '-'
? `${(Number(formattedDomainMetricsData.latency) / 1000).toFixed(3)}s`
: '-'}
</span>
</Tooltip>
)}
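The latency guard added above only formats when a numeric value is present, passing the `'-'` placeholder through. A standalone sketch of the same conversion (nanosecond-scale input is pre-divided upstream; here the input is taken as milliseconds, matching the `/ 1000` to seconds):

```typescript
// Format a millisecond latency string as seconds, preserving the '-' placeholder.
function formatLatencySeconds(latency: string): string {
  return latency !== '-'
    ? `${(Number(latency) / 1000).toFixed(3)}s`
    : '-';
}
```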
@@ -143,23 +145,27 @@ function DomainMetrics({
<Skeleton.Button active size="small" />
) : (
<Tooltip title={formattedDomainMetricsData.errorRate}>
-<Progress
-status="active"
-percent={Number(
-Number(formattedDomainMetricsData.errorRate).toFixed(2),
-)}
-strokeLinecap="butt"
-size="small"
-strokeColor={((): string => {
-const errorRatePercent = Number(
-Number(formattedDomainMetricsData.errorRate).toFixed(2),
-);
-if (errorRatePercent >= 90) return Color.BG_SAKURA_500;
-if (errorRatePercent >= 60) return Color.BG_AMBER_500;
-return Color.BG_FOREST_500;
-})()}
-className="progress-bar"
-/>
+{formattedDomainMetricsData.errorRate !== '-' ? (
+<Progress
+status="active"
+percent={Number(
+Number(formattedDomainMetricsData.errorRate).toFixed(2),
+)}
+strokeLinecap="butt"
+size="small"
+strokeColor={((): string => {
+const errorRatePercent = Number(
+Number(formattedDomainMetricsData.errorRate).toFixed(2),
+);
+if (errorRatePercent >= 90) return Color.BG_SAKURA_500;
+if (errorRatePercent >= 60) return Color.BG_AMBER_500;
+return Color.BG_FOREST_500;
+})()}
+className="progress-bar"
+/>
+) : (
+'-'
+)}
</Tooltip>
)}
</Typography.Text>


@@ -0,0 +1,419 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable react/jsx-props-no-spreading */
/* eslint-disable prefer-destructuring */
/* eslint-disable sonarjs/no-duplicate-string */
import { render, screen, waitFor } from '@testing-library/react';
import { getEndPointDetailsQueryPayload } from 'container/ApiMonitoring/utils';
import { GetMetricQueryRange } from 'lib/dashboard/getQueryResults';
import { QueryClient, QueryClientProvider, UseQueryResult } from 'react-query';
import { SuccessResponse } from 'types/api';
import EndPointMetrics from './EndPointMetrics';
// Mock the API call
jest.mock('lib/dashboard/getQueryResults', () => ({
GetMetricQueryRange: jest.fn(),
}));
// Mock ErrorState component
jest.mock('./ErrorState', () => ({
__esModule: true,
default: jest.fn(({ refetch }) => (
<div data-testid="error-state">
<button type="button" onClick={refetch} data-testid="retry-button">
Retry
</button>
</div>
)),
}));
describe('EndPointMetrics - V5 Query Payload Tests', () => {
let queryClient: QueryClient;
const mockSuccessResponse = {
statusCode: 200,
error: null,
payload: {
data: {
result: [
{
table: {
rows: [
{
data: {
A: '85.5',
B: '245000000',
D: '2021-01-01T22:30:00Z',
F1: '3.2',
},
},
],
},
},
],
},
},
};
beforeEach(() => {
queryClient = new QueryClient({
defaultOptions: {
queries: {
retry: false,
cacheTime: 0,
},
},
});
jest.clearAllMocks();
});
afterEach(() => {
queryClient.clear();
});
// Helper to create mock query result
const createMockQueryResult = (
response: any,
overrides?: Partial<UseQueryResult<SuccessResponse<any>, unknown>>,
): UseQueryResult<SuccessResponse<any>, unknown> =>
({
data: response,
error: null,
isError: false,
isIdle: false,
isLoading: false,
isLoadingError: false,
isRefetchError: false,
isRefetching: false,
isStale: true,
isSuccess: true,
status: 'success' as const,
dataUpdatedAt: Date.now(),
errorUpdateCount: 0,
errorUpdatedAt: 0,
failureCount: 0,
isFetched: true,
isFetchedAfterMount: true,
isFetching: false,
isPlaceholderData: false,
isPreviousData: false,
refetch: jest.fn(),
remove: jest.fn(),
...overrides,
} as UseQueryResult<SuccessResponse<any>, unknown>);
const renderComponent = (
endPointMetricsDataQuery: UseQueryResult<SuccessResponse<any>, unknown>,
): ReturnType<typeof render> =>
render(
<QueryClientProvider client={queryClient}>
<EndPointMetrics endPointMetricsDataQuery={endPointMetricsDataQuery} />
</QueryClientProvider>,
);
// eslint-disable-next-line sonarjs/cognitive-complexity
describe('1. V5 Query Payload with Filters', () => {
// eslint-disable-next-line sonarjs/cognitive-complexity
it('sends correct V5 payload structure with domain and endpoint filters', async () => {
(GetMetricQueryRange as jest.Mock).mockResolvedValue(mockSuccessResponse);
const domainName = 'api.example.com';
const startTime = 1758259531000;
const endTime = 1758261331000;
const filters = {
items: [],
op: 'AND' as const,
};
// Get the actual payload that would be generated
const payloads = getEndPointDetailsQueryPayload(
domainName,
startTime,
endTime,
filters,
);
// First payload is for endpoint metrics
const metricsPayload = payloads[0];
// Verify it's using the correct structure (V3 format for V5 API)
expect(metricsPayload.query).toBeDefined();
expect(metricsPayload.query.builder).toBeDefined();
expect(metricsPayload.query.builder.queryData).toBeDefined();
const queryData = metricsPayload.query.builder.queryData;
// Verify Query A - rate with domain and client kind filters
const queryA = queryData.find((q: any) => q.queryName === 'A');
expect(queryA).toBeDefined();
if (queryA) {
expect(queryA.dataSource).toBe('traces');
expect(queryA.aggregateOperator).toBe('rate');
expect(queryA.timeAggregation).toBe('rate');
// Verify exact domain filter expression structure
if (queryA.filter) {
expect(queryA.filter.expression).toContain(
"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com')",
);
expect(queryA.filter.expression).toContain("kind_string = 'Client'");
}
}
// Verify Query B - p99 latency with duration_nano
const queryB = queryData.find((q: any) => q.queryName === 'B');
expect(queryB).toBeDefined();
if (queryB) {
expect(queryB.aggregateOperator).toBe('p99');
if (queryB.aggregateAttribute) {
expect(queryB.aggregateAttribute.key).toBe('duration_nano');
}
expect(queryB.timeAggregation).toBe('p99');
// Verify exact domain filter expression structure
if (queryB.filter) {
expect(queryB.filter.expression).toContain(
"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com')",
);
expect(queryB.filter.expression).toContain("kind_string = 'Client'");
}
}
// Verify Query C - error count (disabled)
const queryC = queryData.find((q: any) => q.queryName === 'C');
expect(queryC).toBeDefined();
if (queryC) {
expect(queryC.disabled).toBe(true);
expect(queryC.aggregateOperator).toBe('count');
if (queryC.filter) {
expect(queryC.filter.expression).toContain(
"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com')",
);
expect(queryC.filter.expression).toContain("kind_string = 'Client'");
expect(queryC.filter.expression).toContain('has_error = true');
}
}
// Verify Query D - max timestamp for last used
const queryD = queryData.find((q: any) => q.queryName === 'D');
expect(queryD).toBeDefined();
if (queryD) {
expect(queryD.aggregateOperator).toBe('max');
if (queryD.aggregateAttribute) {
expect(queryD.aggregateAttribute.key).toBe('timestamp');
}
expect(queryD.timeAggregation).toBe('max');
// Verify exact domain filter expression structure
if (queryD.filter) {
expect(queryD.filter.expression).toContain(
"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com')",
);
expect(queryD.filter.expression).toContain("kind_string = 'Client'");
}
}
// Verify Query E - total count (disabled)
const queryE = queryData.find((q: any) => q.queryName === 'E');
expect(queryE).toBeDefined();
if (queryE) {
expect(queryE.disabled).toBe(true);
expect(queryE.aggregateOperator).toBe('count');
if (queryE.aggregateAttribute) {
expect(queryE.aggregateAttribute.key).toBe('span_id');
}
if (queryE.filter) {
expect(queryE.filter.expression).toContain(
"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com')",
);
expect(queryE.filter.expression).toContain("kind_string = 'Client'");
}
}
// Verify Formula F1 - error rate calculation
const formulas = metricsPayload.query.builder.queryFormulas;
expect(formulas).toBeDefined();
expect(formulas.length).toBeGreaterThan(0);
const formulaF1 = formulas.find((f: any) => f.queryName === 'F1');
expect(formulaF1).toBeDefined();
if (formulaF1) {
expect(formulaF1.expression).toBe('(C/E)*100');
expect(formulaF1.disabled).toBe(false);
expect(formulaF1.legend).toBe('error percentage');
}
});
it('includes custom domainListFilters in all query expressions', async () => {
(GetMetricQueryRange as jest.Mock).mockResolvedValue(mockSuccessResponse);
const customFilters = {
items: [
{
id: 'test-1',
key: {
key: 'service.name',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'payment-service',
},
{
id: 'test-2',
key: {
key: 'deployment.environment',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'staging',
},
],
op: 'AND' as const,
};
const payloads = getEndPointDetailsQueryPayload(
'api.internal.com',
1758259531000,
1758261331000,
customFilters,
);
const queryData = payloads[0].query.builder.queryData;
// Verify ALL queries (A, B, C, D, E) include the custom filters
const allQueryNames = ['A', 'B', 'C', 'D', 'E'];
allQueryNames.forEach((queryName) => {
const query = queryData.find((q: any) => q.queryName === queryName);
expect(query).toBeDefined();
if (query && query.filter && query.filter.expression) {
// Check for exact filter inclusion
expect(query.filter.expression).toContain('service.name');
expect(query.filter.expression).toContain('payment-service');
expect(query.filter.expression).toContain('deployment.environment');
expect(query.filter.expression).toContain('staging');
// Also verify domain filter is still present
expect(query.filter.expression).toContain(
"(net.peer.name = 'api.internal.com' OR server.address = 'api.internal.com')",
);
// Verify client kind filter is present
expect(query.filter.expression).toContain("kind_string = 'Client'");
}
});
});
});
describe('2. Data Display State', () => {
it('displays metrics when data is successfully loaded', async () => {
const mockQuery = createMockQueryResult(mockSuccessResponse);
renderComponent(mockQuery);
// Wait for skeletons to disappear
await waitFor(() => {
const skeletons = document.querySelectorAll('.ant-skeleton-button');
expect(skeletons.length).toBe(0);
});
// Verify all metric labels are displayed
expect(screen.getByText('Rate')).toBeInTheDocument();
expect(screen.getByText('AVERAGE LATENCY')).toBeInTheDocument();
expect(screen.getByText('ERROR %')).toBeInTheDocument();
expect(screen.getByText('LAST USED')).toBeInTheDocument();
// Verify metric values are displayed
expect(screen.getByText('85.5 ops/sec')).toBeInTheDocument();
expect(screen.getByText('245ms')).toBeInTheDocument();
});
});
describe('3. Empty/Missing Data State', () => {
it("displays '-' for missing data values", async () => {
const emptyResponse = {
statusCode: 200,
error: null,
payload: {
data: {
result: [
{
table: {
rows: [],
},
},
],
},
},
};
const mockQuery = createMockQueryResult(emptyResponse);
renderComponent(mockQuery);
await waitFor(() => {
const skeletons = document.querySelectorAll('.ant-skeleton-button');
expect(skeletons.length).toBe(0);
});
// When no data, all values should show "-"
const dashValues = screen.getAllByText('-');
// Should have at least 2 dashes (rate and last used - latency shows "-", error % shows progress bar)
expect(dashValues.length).toBeGreaterThanOrEqual(2);
});
});
describe('4. Error State', () => {
it('displays error state when API call fails', async () => {
const mockQuery = createMockQueryResult(null, {
isError: true,
isSuccess: false,
status: 'error',
error: new Error('API Error'),
});
renderComponent(mockQuery);
await waitFor(() => {
expect(screen.getByTestId('error-state')).toBeInTheDocument();
});
expect(screen.getByTestId('retry-button')).toBeInTheDocument();
});
it('retries API call when retry button is clicked', async () => {
const refetch = jest.fn().mockResolvedValue(mockSuccessResponse);
// Start with error state
const mockQuery = createMockQueryResult(null, {
isError: true,
isSuccess: false,
status: 'error',
error: new Error('API Error'),
refetch,
});
const { rerender } = renderComponent(mockQuery);
// Wait for error state
await waitFor(() => {
expect(screen.getByTestId('error-state')).toBeInTheDocument();
});
// Click retry
const retryButton = screen.getByTestId('retry-button');
retryButton.click();
// Verify refetch was called
expect(refetch).toHaveBeenCalledTimes(1);
// Simulate successful refetch by rerendering with success state
const successQuery = createMockQueryResult(mockSuccessResponse);
rerender(
<QueryClientProvider client={queryClient}>
<EndPointMetrics endPointMetricsDataQuery={successQuery} />
</QueryClientProvider>,
);
// Wait for successful load
await waitFor(() => {
expect(screen.getByText('85.5 ops/sec')).toBeInTheDocument();
});
});
});
});


@@ -1,12 +1,16 @@
import { Color } from '@signozhq/design-tokens';
import { Progress, Skeleton, Tooltip, Typography } from 'antd';
import { getFormattedEndPointMetricsData } from 'container/ApiMonitoring/utils';
import {
getDisplayValue,
getFormattedEndPointMetricsData,
} from 'container/ApiMonitoring/utils';
import { useMemo } from 'react';
import { UseQueryResult } from 'react-query';
import { SuccessResponse } from 'types/api';
import ErrorState from './ErrorState';
// eslint-disable-next-line sonarjs/cognitive-complexity
function EndPointMetrics({
endPointMetricsDataQuery,
}: {
@@ -70,7 +74,9 @@ function EndPointMetrics({
<Skeleton.Button active size="small" />
) : (
<Tooltip title={metricsData?.rate}>
<span className="round-metric-tag">{metricsData?.rate} ops/sec</span>
<span className="round-metric-tag">
{metricsData?.rate !== '-' ? `${metricsData?.rate} ops/sec` : '-'}
</span>
</Tooltip>
)}
</Typography.Text>
@@ -79,7 +85,7 @@ function EndPointMetrics({
<Skeleton.Button active size="small" />
) : (
<Tooltip title={metricsData?.latency}>
<span className="round-metric-tag">{metricsData?.latency}ms</span>
{metricsData?.latency !== '-' ? `${metricsData?.latency}ms` : '-'}
</Tooltip>
)}
</Typography.Text>
@@ -88,21 +94,25 @@ function EndPointMetrics({
<Skeleton.Button active size="small" />
) : (
<Tooltip title={metricsData?.errorRate}>
<Progress
status="active"
percent={Number(Number(metricsData?.errorRate ?? 0).toFixed(2))}
strokeLinecap="butt"
size="small"
strokeColor={((): string => {
const errorRatePercent = Number(
Number(metricsData?.errorRate ?? 0).toFixed(2),
);
if (errorRatePercent >= 90) return Color.BG_SAKURA_500;
if (errorRatePercent >= 60) return Color.BG_AMBER_500;
return Color.BG_FOREST_500;
})()}
className="progress-bar"
/>
{metricsData?.errorRate !== '-' ? (
<Progress
status="active"
percent={Number(Number(metricsData?.errorRate ?? 0).toFixed(2))}
strokeLinecap="butt"
size="small"
strokeColor={((): string => {
const errorRatePercent = Number(
Number(metricsData?.errorRate ?? 0).toFixed(2),
);
if (errorRatePercent >= 90) return Color.BG_SAKURA_500;
if (errorRatePercent >= 60) return Color.BG_AMBER_500;
return Color.BG_FOREST_500;
})()}
className="progress-bar"
/>
) : (
'-'
)}
</Tooltip>
)}
</Typography.Text>
@@ -110,7 +120,9 @@ function EndPointMetrics({
{isLoading || isRefetching ? (
<Skeleton.Button active size="small" />
) : (
<Tooltip title={metricsData?.lastUsed}>{metricsData?.lastUsed}</Tooltip>
<Tooltip title={metricsData?.lastUsed}>
{getDisplayValue(metricsData?.lastUsed)}
</Tooltip>
)}
</Typography.Text>
</div>
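The `getDisplayValue` helper imported at the top of this hunk is applied only to the LAST USED cell; its implementation is not part of the diff. A minimal sketch of the behavior the component appears to rely on — the `'-'` sentinel for missing values is an assumption inferred from the surrounding tests:

```typescript
// Hypothetical sketch of getDisplayValue (the real helper lives in
// container/ApiMonitoring/utils and is not shown in this hunk).
// Assumption: '-' is the sentinel emitted for missing metric values.
function getDisplayValue(value?: string | number | null): string {
	if (value === undefined || value === null || value === '') {
		return '-';
	}
	return String(value);
}

// Usage mirroring the LAST USED cell above:
getDisplayValue('5 minutes ago'); // '5 minutes ago'
getDisplayValue(undefined); // '-'
```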


@@ -1,4 +1,5 @@
import { Card } from 'antd';
import { ENTITY_VERSION_V5 } from 'constants/app';
import GridCard from 'container/GridCardLayout/GridCard';
import { Widgets } from 'types/api/dashboard/getAll';
@@ -22,6 +23,7 @@ function MetricOverTimeGraph({
customOnDragSelect={(): void => {}}
customTimeRange={timeRange}
customTimeRangeWindowForCoRelation="5m"
version={ENTITY_VERSION_V5}
/>
</div>
</Card>


@@ -8,17 +8,11 @@ import {
endPointStatusCodeColumns,
extractPortAndEndpoint,
formatDataForTable,
getAllEndpointsWidgetData,
getCustomFiltersForBarChart,
getEndPointDetailsQueryPayload,
getFormattedDependentServicesData,
getFormattedEndPointDropDownData,
getFormattedEndPointMetricsData,
getFormattedEndPointStatusCodeChartData,
getFormattedEndPointStatusCodeData,
getGroupByFiltersFromGroupByValues,
getLatencyOverTimeWidgetData,
getRateOverTimeWidgetData,
getStatusCodeBarChartWidgetData,
getTopErrorsColumnsConfig,
getTopErrorsCoRelationQueryFilters,
@@ -49,119 +43,13 @@ jest.mock('../utils', () => {
});
describe('API Monitoring Utils', () => {
describe('getAllEndpointsWidgetData', () => {
it('should create a widget with correct configuration', () => {
// Arrange
const groupBy = [
{
dataType: DataTypes.String,
// eslint-disable-next-line sonarjs/no-duplicate-string
key: 'http.method',
type: '',
},
];
// eslint-disable-next-line sonarjs/no-duplicate-string
const domainName = 'test-domain';
const filters = {
items: [
{
// eslint-disable-next-line sonarjs/no-duplicate-string
id: 'test-filter',
key: {
dataType: DataTypes.String,
key: 'test-key',
type: '',
},
op: '=',
// eslint-disable-next-line sonarjs/no-duplicate-string
value: 'test-value',
},
],
op: 'AND',
};
// Act
const result = getAllEndpointsWidgetData(
groupBy as BaseAutocompleteData[],
domainName,
filters as IBuilderQuery['filters'],
);
// Assert
expect(result).toBeDefined();
expect(result.id).toBeDefined();
// Title is a React component, not a string
expect(result.title).toBeDefined();
expect(result.panelTypes).toBe(PANEL_TYPES.TABLE);
// Check that each query includes the domainName filter
result.query.builder.queryData.forEach((query) => {
const serverNameFilter = query.filters?.items?.find(
(item) => item.key && item.key.key === SPAN_ATTRIBUTES.SERVER_NAME,
);
expect(serverNameFilter).toBeDefined();
expect(serverNameFilter?.value).toBe(domainName);
// Check that the custom filters were included
const testFilter = query.filters?.items?.find(
(item) => item.id === 'test-filter',
);
expect(testFilter).toBeDefined();
});
// Verify groupBy was included in queries
if (result.query.builder.queryData[0].groupBy) {
const hasCustomGroupBy = result.query.builder.queryData[0].groupBy.some(
(item) => item && item.key === 'http.method',
);
expect(hasCustomGroupBy).toBe(true);
}
});
it('should handle empty groupBy correctly', () => {
// Arrange
const groupBy: any[] = [];
const domainName = 'test-domain';
const filters = { items: [], op: 'AND' };
// Act
const result = getAllEndpointsWidgetData(groupBy, domainName, filters);
// Assert
expect(result).toBeDefined();
// Should only include default groupBy
if (result.query.builder.queryData[0].groupBy) {
expect(result.query.builder.queryData[0].groupBy.length).toBeGreaterThan(0);
// Check that it doesn't have extra group by fields (only defaults)
const defaultGroupByLength =
result.query.builder.queryData[0].groupBy.length;
const resultWithCustomGroupBy = getAllEndpointsWidgetData(
[
{
dataType: DataTypes.String,
key: 'custom.field',
type: '',
},
] as BaseAutocompleteData[],
domainName,
filters,
);
// Custom groupBy should have more fields than default
if (resultWithCustomGroupBy.query.builder.queryData[0].groupBy) {
expect(
resultWithCustomGroupBy.query.builder.queryData[0].groupBy.length,
).toBeGreaterThan(defaultGroupByLength);
}
}
});
});
// New tests for formatDataForTable
describe('formatDataForTable', () => {
it('should format rows correctly with valid data', () => {
const columns = APIMonitoringColumnsMock;
const data = [
[
// eslint-disable-next-line sonarjs/no-duplicate-string
'test-domain', // domainName
'10', // endpoints
'25', // rps
@@ -219,6 +107,7 @@ describe('API Monitoring Utils', () => {
const groupBy = [
{
id: 'group-by-1',
// eslint-disable-next-line sonarjs/no-duplicate-string
key: 'http.method',
dataType: DataTypes.String,
type: '',
@@ -452,243 +341,6 @@ describe('API Monitoring Utils', () => {
});
});
describe('getEndPointDetailsQueryPayload', () => {
it('should generate proper query payload with all parameters', () => {
// Arrange
const domainName = 'test-domain';
const startTime = 1609459200000; // 2021-01-01
const endTime = 1609545600000; // 2021-01-02
const filters = {
items: [
{
id: 'test-filter',
key: {
dataType: 'string',
key: 'test.key',
type: '',
},
op: '=',
value: 'test-value',
},
],
op: 'AND',
};
// Act
const result = getEndPointDetailsQueryPayload(
domainName,
startTime,
endTime,
filters as IBuilderQuery['filters'],
);
// Assert
expect(result).toHaveLength(6); // Should return 6 queries
// Check that each query includes proper parameters
result.forEach((query) => {
expect(query).toHaveProperty('start', startTime);
expect(query).toHaveProperty('end', endTime);
// Should have query property with builder data
expect(query).toHaveProperty('query');
expect(query.query).toHaveProperty('builder');
// All queries should include the domain filter
const {
query: {
builder: { queryData },
},
} = query;
queryData.forEach((qd) => {
if (qd.filters && qd.filters.items) {
const serverNameFilter = qd.filters?.items?.find(
(item) => item.key && item.key.key === SPAN_ATTRIBUTES.SERVER_NAME,
);
expect(serverNameFilter).toBeDefined();
// Only check if the serverNameFilter exists, as the actual value might vary
// depending on implementation details or domain defaults
if (serverNameFilter) {
expect(typeof serverNameFilter.value).toBe('string');
}
}
// Should include our custom filter
const customFilter = qd.filters?.items?.find(
(item) => item.id === 'test-filter',
);
expect(customFilter).toBeDefined();
});
});
});
});
describe('getRateOverTimeWidgetData', () => {
it('should generate widget configuration for rate over time', () => {
// Arrange
const domainName = 'test-domain';
const endPointName = '/api/test';
const filters = { items: [], op: 'AND' };
// Act
const result = getRateOverTimeWidgetData(
domainName,
endPointName,
filters as IBuilderQuery['filters'],
);
// Assert
expect(result).toBeDefined();
expect(result).toHaveProperty('title', 'Rate Over Time');
// Check only title since description might vary
// Check query configuration
expect(result).toHaveProperty('query');
// eslint-disable-next-line sonarjs/no-duplicate-string
expect(result).toHaveProperty('query.builder.queryData');
const queryData = result.query.builder.queryData[0];
// Should have domain filter
const domainFilter = queryData.filters?.items?.find(
(item) => item.key && item.key.key === SPAN_ATTRIBUTES.SERVER_NAME,
);
expect(domainFilter).toBeDefined();
if (domainFilter) {
expect(typeof domainFilter.value).toBe('string');
}
// Should have 'rate' time aggregation
expect(queryData).toHaveProperty('timeAggregation', 'rate');
// Should have proper legend that includes endpoint info
expect(queryData).toHaveProperty('legend');
expect(
typeof queryData.legend === 'string' ? queryData.legend : '',
).toContain('/api/test');
});
it('should handle case without endpoint name', () => {
// Arrange
const domainName = 'test-domain';
const endPointName = '';
const filters = { items: [], op: 'AND' };
// Act
const result = getRateOverTimeWidgetData(
domainName,
endPointName,
filters as IBuilderQuery['filters'],
);
// Assert
expect(result).toBeDefined();
const queryData = result.query.builder.queryData[0];
// Legend should be domain name only
expect(queryData).toHaveProperty('legend', domainName);
});
});
describe('getLatencyOverTimeWidgetData', () => {
it('should generate widget configuration for latency over time', () => {
// Arrange
const domainName = 'test-domain';
const endPointName = '/api/test';
const filters = { items: [], op: 'AND' };
// Act
const result = getLatencyOverTimeWidgetData(
domainName,
endPointName,
filters as IBuilderQuery['filters'],
);
// Assert
expect(result).toBeDefined();
expect(result).toHaveProperty('title', 'Latency Over Time');
// Check only title since description might vary
// Check query configuration
expect(result).toHaveProperty('query');
expect(result).toHaveProperty('query.builder.queryData');
const queryData = result.query.builder.queryData[0];
// Should have domain filter
const domainFilter = queryData.filters?.items?.find(
(item) => item.key && item.key.key === SPAN_ATTRIBUTES.SERVER_NAME,
);
expect(domainFilter).toBeDefined();
if (domainFilter) {
expect(typeof domainFilter.value).toBe('string');
}
// Should use duration_nano as the aggregate attribute
expect(queryData.aggregateAttribute).toHaveProperty('key', 'duration_nano');
// Should have 'p99' time aggregation
expect(queryData).toHaveProperty('timeAggregation', 'p99');
});
it('should handle case without endpoint name', () => {
// Arrange
const domainName = 'test-domain';
const endPointName = '';
const filters = { items: [], op: 'AND' };
// Act
const result = getLatencyOverTimeWidgetData(
domainName,
endPointName,
filters as IBuilderQuery['filters'],
);
// Assert
expect(result).toBeDefined();
const queryData = result.query.builder.queryData[0];
// Legend should be domain name only
expect(queryData).toHaveProperty('legend', domainName);
});
// Changed approach to verify end-to-end behavior for URL with port
it('should format legends appropriately for complete URLs with ports', () => {
// Arrange
const domainName = 'test-domain';
const endPointName = 'http://example.com:8080/api/test';
const filters = { items: [], op: 'AND' };
// Extract what we expect the function to extract
const expectedParts = extractPortAndEndpoint(endPointName);
// Act
const result = getLatencyOverTimeWidgetData(
domainName,
endPointName,
filters as IBuilderQuery['filters'],
);
// Assert
const queryData = result.query.builder.queryData[0];
// Check that legend is present and is a string
expect(queryData).toHaveProperty('legend');
expect(typeof queryData.legend).toBe('string');
// If the URL has a port and endpoint, the legend should reflect that appropriately
// (Testing the integration rather than the exact formatting)
if (expectedParts.port !== '-') {
// Verify that both components are incorporated into the legend in some way
// This tests the behavior without relying on the exact implementation details
const legendStr = queryData.legend as string;
expect(legendStr).not.toBe(domainName); // Legend should be different when URL has port/endpoint
}
});
});
describe('getFormattedEndPointDropDownData', () => {
it('should format endpoint dropdown data correctly', () => {
// Arrange
@@ -698,6 +350,7 @@ describe('API Monitoring Utils', () => {
data: {
// eslint-disable-next-line sonarjs/no-duplicate-string
[URL_PATH_KEY]: '/api/users',
'url.full': 'http://example.com/api/users',
A: 150, // count or other metric
},
},
@@ -705,6 +358,7 @@ describe('API Monitoring Utils', () => {
data: {
// eslint-disable-next-line sonarjs/no-duplicate-string
[URL_PATH_KEY]: '/api/orders',
'url.full': 'http://example.com/api/orders',
A: 75,
},
},
@@ -788,87 +442,6 @@ describe('API Monitoring Utils', () => {
});
});
describe('getFormattedEndPointMetricsData', () => {
it('should format endpoint metrics data correctly', () => {
// Arrange
const mockData = [
{
data: {
A: '50', // rate
B: '15000000', // latency in nanoseconds
C: '5', // required by type
D: '1640995200000000', // timestamp in nanoseconds
F1: '5.5', // error rate
},
},
];
// Act
const result = getFormattedEndPointMetricsData(mockData as any);
// Assert
expect(result).toBeDefined();
expect(result.key).toBeDefined();
expect(result.rate).toBe('50');
expect(result.latency).toBe(15); // Should be converted from ns to ms
expect(result.errorRate).toBe(5.5);
expect(typeof result.lastUsed).toBe('string'); // Time formatting is tested elsewhere
});
// eslint-disable-next-line sonarjs/no-duplicate-string
it('should handle undefined values in data', () => {
// Arrange
const mockData = [
{
data: {
A: undefined,
B: 'n/a',
C: '', // required by type
D: undefined,
F1: 'n/a',
},
},
];
// Act
const result = getFormattedEndPointMetricsData(mockData as any);
// Assert
expect(result).toBeDefined();
expect(result.rate).toBe('-');
expect(result.latency).toBe('-');
expect(result.errorRate).toBe(0);
expect(result.lastUsed).toBe('-');
});
it('should handle empty input array', () => {
// Act
const result = getFormattedEndPointMetricsData([]);
// Assert
expect(result).toBeDefined();
expect(result.rate).toBe('-');
expect(result.latency).toBe('-');
expect(result.errorRate).toBe(0);
expect(result.lastUsed).toBe('-');
});
it('should handle undefined input', () => {
// Arrange
const undefinedInput = undefined as any;
// Act
const result = getFormattedEndPointMetricsData(undefinedInput);
// Assert
expect(result).toBeDefined();
expect(result.rate).toBe('-');
expect(result.latency).toBe('-');
expect(result.errorRate).toBe(0);
expect(result.lastUsed).toBe('-');
});
});
describe('getFormattedEndPointStatusCodeData', () => {
it('should format status code data correctly', () => {
// Arrange
@@ -1005,139 +578,6 @@ describe('API Monitoring Utils', () => {
});
});
describe('getFormattedDependentServicesData', () => {
it('should format dependent services data correctly', () => {
// Arrange
const mockData = [
{
data: {
// eslint-disable-next-line sonarjs/no-duplicate-string
'service.name': 'auth-service',
A: '500', // count
B: '120000000', // latency in nanoseconds
C: '15', // rate
F1: '2.5', // error percentage
},
},
{
data: {
'service.name': 'db-service',
A: '300',
B: '80000000',
C: '10',
F1: '1.2',
},
},
];
// Act
const result = getFormattedDependentServicesData(mockData as any);
// Assert
expect(result).toBeDefined();
expect(result.length).toBe(2);
// Check first service
expect(result[0].key).toBeDefined();
expect(result[0].serviceData.serviceName).toBe('auth-service');
expect(result[0].serviceData.count).toBe(500);
expect(typeof result[0].serviceData.percentage).toBe('number');
expect(result[0].latency).toBe(120); // Should be converted from ns to ms
expect(result[0].rate).toBe('15');
expect(result[0].errorPercentage).toBe('2.5');
// Check second service
expect(result[1].serviceData.serviceName).toBe('db-service');
expect(result[1].serviceData.count).toBe(300);
expect(result[1].latency).toBe(80);
expect(result[1].rate).toBe('10');
expect(result[1].errorPercentage).toBe('1.2');
// Verify percentage calculation
const totalCount = 500 + 300;
expect(result[0].serviceData.percentage).toBeCloseTo(
(500 / totalCount) * 100,
2,
);
expect(result[1].serviceData.percentage).toBeCloseTo(
(300 / totalCount) * 100,
2,
);
});
it('should handle undefined values in data', () => {
// Arrange
const mockData = [
{
data: {
'service.name': 'auth-service',
A: 'n/a',
B: undefined,
C: 'n/a',
F1: undefined,
},
},
];
// Act
const result = getFormattedDependentServicesData(mockData as any);
// Assert
expect(result).toBeDefined();
expect(result.length).toBe(1);
expect(result[0].serviceData.serviceName).toBe('auth-service');
expect(result[0].serviceData.count).toBe('-');
expect(result[0].serviceData.percentage).toBe(0);
expect(result[0].latency).toBe('-');
expect(result[0].rate).toBe('-');
expect(result[0].errorPercentage).toBe(0);
});
it('should handle empty input array', () => {
// Act
const result = getFormattedDependentServicesData([]);
// Assert
expect(result).toBeDefined();
expect(result).toEqual([]);
});
it('should handle undefined input', () => {
// Arrange
const undefinedInput = undefined as any;
// Act
const result = getFormattedDependentServicesData(undefinedInput);
// Assert
expect(result).toBeDefined();
expect(result).toEqual([]);
});
it('should handle missing service name', () => {
// Arrange
const mockData = [
{
data: {
// Missing service.name
A: '200',
B: '50000000',
C: '8',
F1: '0.5',
},
},
];
// Act
const result = getFormattedDependentServicesData(mockData as any);
// Assert
expect(result).toBeDefined();
expect(result.length).toBe(1);
expect(result[0].serviceData.serviceName).toBe('-');
});
});
describe('getFormattedEndPointStatusCodeChartData', () => {
afterEach(() => {
jest.resetAllMocks();

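Several assertions in the utils tests above (15000000 ns → 15, 120000000 ns → 120) depend on a nanosecond-to-millisecond conversion inside the formatters. A minimal sketch of that conversion, assuming `'n/a'`/`undefined` inputs map to the `'-'` sentinel seen in the tests:

```typescript
// Assumption: latency fields arrive as nanosecond strings; missing values
// come in as 'n/a' or undefined and map to the '-' sentinel.
function nanosToMillis(nanos?: string): number | '-' {
	if (nanos === undefined || nanos === 'n/a' || nanos === '') {
		return '-';
	}
	return Math.round(Number(nanos) / 1e6);
}

nanosToMillis('15000000'); // 15
nanosToMillis('120000000'); // 120
nanosToMillis('n/a'); // '-'
```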

@@ -0,0 +1,221 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable sonarjs/no-duplicate-string */
/**
* V5 Migration Tests for All Endpoints Widget (Endpoint Overview)
*
* These tests validate the migration from V4 to V5 format for getAllEndpointsWidgetData:
* - Filter format change: filters.items[] → filter.expression
* - Aggregation format: aggregateAttribute → aggregations[] array
* - Domain filter: (net.peer.name OR server.address)
* - Kind filter: kind_string = 'Client'
* - Four queries: A (count), B (p99 latency), C (max timestamp), D (error count - disabled)
* - GroupBy: Both http.url AND url.full with type 'attribute'
*/
import { getAllEndpointsWidgetData } from 'container/ApiMonitoring/utils';
import {
BaseAutocompleteData,
DataTypes,
} from 'types/api/queryBuilder/queryAutocompleteResponse';
import { IBuilderQuery } from 'types/api/queryBuilder/queryBuilderData';
describe('AllEndpointsWidget - V5 Migration Validation', () => {
const mockDomainName = 'api.example.com';
const emptyFilters: IBuilderQuery['filters'] = {
items: [],
op: 'AND',
};
const emptyGroupBy: BaseAutocompleteData[] = [];
describe('1. V5 Format Migration - All Four Queries', () => {
it('all queries use filter.expression format (not filters.items)', () => {
const widget = getAllEndpointsWidgetData(
emptyGroupBy,
mockDomainName,
emptyFilters,
);
const { queryData } = widget.query.builder;
// All 4 queries must use V5 filter.expression format
queryData.forEach((query) => {
expect(query.filter).toBeDefined();
expect(query.filter?.expression).toBeDefined();
expect(typeof query.filter?.expression).toBe('string');
// OLD V4 format should NOT exist
expect(query).not.toHaveProperty('filters');
});
// Verify we have exactly 4 queries
expect(queryData).toHaveLength(4);
});
it('all queries use aggregations array format (not aggregateAttribute)', () => {
const widget = getAllEndpointsWidgetData(
emptyGroupBy,
mockDomainName,
emptyFilters,
);
const [queryA, queryB, queryC, queryD] = widget.query.builder.queryData;
// Query A: count()
expect(queryA.aggregations).toBeDefined();
expect(Array.isArray(queryA.aggregations)).toBe(true);
expect(queryA.aggregations).toEqual([{ expression: 'count()' }]);
expect(queryA).not.toHaveProperty('aggregateAttribute');
// Query B: p99(duration_nano)
expect(queryB.aggregations).toBeDefined();
expect(Array.isArray(queryB.aggregations)).toBe(true);
expect(queryB.aggregations).toEqual([{ expression: 'p99(duration_nano)' }]);
expect(queryB).not.toHaveProperty('aggregateAttribute');
// Query C: max(timestamp)
expect(queryC.aggregations).toBeDefined();
expect(Array.isArray(queryC.aggregations)).toBe(true);
expect(queryC.aggregations).toEqual([{ expression: 'max(timestamp)' }]);
expect(queryC).not.toHaveProperty('aggregateAttribute');
// Query D: count() (disabled, for errors)
expect(queryD.aggregations).toBeDefined();
expect(Array.isArray(queryD.aggregations)).toBe(true);
expect(queryD.aggregations).toEqual([{ expression: 'count()' }]);
expect(queryD).not.toHaveProperty('aggregateAttribute');
});
it('all queries have correct base filter expressions', () => {
const widget = getAllEndpointsWidgetData(
emptyGroupBy,
mockDomainName,
emptyFilters,
);
const [queryA, queryB, queryC, queryD] = widget.query.builder.queryData;
const baseExpression = `(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}') AND kind_string = 'Client'`;
// Queries A, B, C have identical base filter
expect(queryA.filter?.expression).toBe(
`${baseExpression} AND (http.url EXISTS OR url.full EXISTS)`,
);
expect(queryB.filter?.expression).toBe(
`${baseExpression} AND (http.url EXISTS OR url.full EXISTS)`,
);
expect(queryC.filter?.expression).toBe(
`${baseExpression} AND (http.url EXISTS OR url.full EXISTS)`,
);
// Query D has additional has_error filter
expect(queryD.filter?.expression).toBe(
`${baseExpression} AND has_error = true AND (http.url EXISTS OR url.full EXISTS)`,
);
});
});
describe('2. GroupBy Structure', () => {
it('default groupBy includes both http.url and url.full with type attribute', () => {
const widget = getAllEndpointsWidgetData(
emptyGroupBy,
mockDomainName,
emptyFilters,
);
const { queryData } = widget.query.builder;
// All queries should have the same default groupBy
queryData.forEach((query) => {
expect(query.groupBy).toHaveLength(2);
// http.url
expect(query.groupBy).toContainEqual({
dataType: DataTypes.String,
isColumn: false,
isJSON: false,
key: 'http.url',
type: 'attribute',
});
// url.full
expect(query.groupBy).toContainEqual({
dataType: DataTypes.String,
isColumn: false,
isJSON: false,
key: 'url.full',
type: 'attribute',
});
});
});
it('custom groupBy is appended after defaults', () => {
const customGroupBy: BaseAutocompleteData[] = [
{
dataType: DataTypes.String,
key: 'service.name',
type: 'resource',
},
{
dataType: DataTypes.String,
key: 'deployment.environment',
type: 'resource',
},
];
const widget = getAllEndpointsWidgetData(
customGroupBy,
mockDomainName,
emptyFilters,
);
const { queryData } = widget.query.builder;
// All queries should have defaults + custom groupBy
queryData.forEach((query) => {
expect(query.groupBy).toHaveLength(4); // 2 defaults + 2 custom
// First two should be defaults (http.url, url.full)
expect(query.groupBy[0].key).toBe('http.url');
expect(query.groupBy[1].key).toBe('url.full');
// Last two should be custom (matching subset of properties)
expect(query.groupBy[2]).toMatchObject({
dataType: DataTypes.String,
key: 'service.name',
type: 'resource',
});
expect(query.groupBy[3]).toMatchObject({
dataType: DataTypes.String,
key: 'deployment.environment',
type: 'resource',
});
});
});
});
describe('3. Query-Specific Validations', () => {
it('query D has has_error filter and is disabled', () => {
const widget = getAllEndpointsWidgetData(
emptyGroupBy,
mockDomainName,
emptyFilters,
);
const [queryA, queryB, queryC, queryD] = widget.query.builder.queryData;
// Query D should be disabled
expect(queryD.disabled).toBe(true);
// Queries A, B, C should NOT be disabled
expect(queryA.disabled).toBe(false);
expect(queryB.disabled).toBe(false);
expect(queryC.disabled).toBe(false);
// Query D should have has_error in filter
expect(queryD.filter?.expression).toContain('has_error = true');
// Queries A, B, C should NOT have has_error
expect(queryA.filter?.expression).not.toContain('has_error');
expect(queryB.filter?.expression).not.toContain('has_error');
expect(queryC.filter?.expression).not.toContain('has_error');
});
});
});
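The V4→V5 shape change this test file validates can be summarized side by side. Field names come from the assertions above; the concrete values are illustrative only:

```typescript
// V4 (old): structured filter items plus a single aggregateAttribute.
const v4Query = {
	filters: {
		op: 'AND',
		items: [
			{ key: { key: 'net.peer.name' }, op: '=', value: 'api.example.com' },
		],
	},
	aggregateAttribute: { key: 'duration_nano' },
};

// V5 (new): one filter expression string plus an aggregations array.
const v5Query = {
	filter: {
		expression:
			"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com') AND kind_string = 'Client'",
	},
	aggregations: [{ expression: 'p99(duration_nano)' }],
};
```

The tests assert exactly this asymmetry: `filters`/`aggregateAttribute` must be absent from V5 queries, while `filter.expression` and the `aggregations` array must be present.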


@@ -1,211 +0,0 @@
import { render, screen } from '@testing-library/react';
import { getFormattedEndPointMetricsData } from 'container/ApiMonitoring/utils';
import { SuccessResponse } from 'types/api';
import EndPointMetrics from '../Explorer/Domains/DomainDetails/components/EndPointMetrics';
import ErrorState from '../Explorer/Domains/DomainDetails/components/ErrorState';
// Create a partial mock of the UseQueryResult interface for testing
interface MockQueryResult {
isLoading: boolean;
isRefetching: boolean;
isError: boolean;
data?: any;
refetch: () => void;
}
// Mock the utils function
jest.mock('container/ApiMonitoring/utils', () => ({
getFormattedEndPointMetricsData: jest.fn(),
}));
// Mock the ErrorState component
jest.mock('../Explorer/Domains/DomainDetails/components/ErrorState', () => ({
__esModule: true,
default: jest.fn().mockImplementation(({ refetch }) => (
<div data-testid="error-state-mock">
<button type="button" data-testid="refetch-button" onClick={refetch}>
Retry
</button>
</div>
)),
}));
// Mock antd components
jest.mock('antd', () => {
const originalModule = jest.requireActual('antd');
return {
...originalModule,
Progress: jest
.fn()
.mockImplementation(() => <div data-testid="progress-bar-mock" />),
Skeleton: {
Button: jest
.fn()
.mockImplementation(() => <div data-testid="skeleton-button-mock" />),
},
Tooltip: jest
.fn()
.mockImplementation(({ children }) => (
<div data-testid="tooltip-mock">{children}</div>
)),
Typography: {
Text: jest.fn().mockImplementation(({ children, className }) => (
<div data-testid={`typography-${className}`} className={className}>
{children}
</div>
)),
},
};
});
describe('EndPointMetrics', () => {
// Common metric data to use in tests
const mockMetricsData = {
key: 'test-key',
rate: '42',
latency: 99,
errorRate: 5.5,
lastUsed: '5 minutes ago',
};
// Basic props for tests
const refetchFn = jest.fn();
beforeEach(() => {
jest.clearAllMocks();
(getFormattedEndPointMetricsData as jest.Mock).mockReturnValue(
mockMetricsData,
);
});
it('renders loading state correctly', () => {
const mockQuery: MockQueryResult = {
isLoading: true,
isRefetching: false,
isError: false,
data: undefined,
refetch: refetchFn,
};
render(<EndPointMetrics endPointMetricsDataQuery={mockQuery as any} />);
// Verify skeleton loaders are visible
const skeletonElements = screen.getAllByTestId('skeleton-button-mock');
expect(skeletonElements.length).toBe(4);
// Verify labels are visible even during loading
expect(screen.getByText('Rate')).toBeInTheDocument();
expect(screen.getByText('AVERAGE LATENCY')).toBeInTheDocument();
expect(screen.getByText('ERROR %')).toBeInTheDocument();
expect(screen.getByText('LAST USED')).toBeInTheDocument();
});
it('renders error state correctly', () => {
const mockQuery: MockQueryResult = {
isLoading: false,
isRefetching: false,
isError: true,
data: undefined,
refetch: refetchFn,
};
render(<EndPointMetrics endPointMetricsDataQuery={mockQuery as any} />);
// Verify error state is shown
expect(screen.getByTestId('error-state-mock')).toBeInTheDocument();
expect(ErrorState).toHaveBeenCalledWith(
{ refetch: expect.any(Function) },
expect.anything(),
);
});
it('renders data correctly when loaded', () => {
const mockData = {
payload: {
data: {
result: [
{
table: {
rows: [
{ data: { A: '42', B: '99000000', D: '1609459200000000', F1: '5.5' } },
],
},
},
],
},
},
} as SuccessResponse<any>;
const mockQuery: MockQueryResult = {
isLoading: false,
isRefetching: false,
isError: false,
data: mockData,
refetch: refetchFn,
};
render(<EndPointMetrics endPointMetricsDataQuery={mockQuery as any} />);
// Verify the utils function was called with the data
expect(getFormattedEndPointMetricsData).toHaveBeenCalledWith(
mockData.payload.data.result[0].table.rows,
);
// Verify data is displayed
expect(
screen.getByText(`${mockMetricsData.rate} ops/sec`),
).toBeInTheDocument();
expect(screen.getByText(`${mockMetricsData.latency}ms`)).toBeInTheDocument();
expect(screen.getByText(mockMetricsData.lastUsed)).toBeInTheDocument();
expect(screen.getByTestId('progress-bar-mock')).toBeInTheDocument(); // For error rate
});
it('handles refetching state correctly', () => {
const mockQuery: MockQueryResult = {
isLoading: false,
isRefetching: true,
isError: false,
data: undefined,
refetch: refetchFn,
};
render(<EndPointMetrics endPointMetricsDataQuery={mockQuery as any} />);
// Verify skeleton loaders are visible during refetching
const skeletonElements = screen.getAllByTestId('skeleton-button-mock');
expect(skeletonElements.length).toBe(4);
});
it('handles null metrics data gracefully', () => {
// Mock the utils function to return null to simulate missing data
(getFormattedEndPointMetricsData as jest.Mock).mockReturnValue(null);
const mockData = {
payload: {
data: {
result: [
{
table: {
rows: [],
},
},
],
},
},
} as SuccessResponse<any>;
const mockQuery: MockQueryResult = {
isLoading: false,
isRefetching: false,
isError: false,
data: mockData,
refetch: refetchFn,
};
render(<EndPointMetrics endPointMetricsDataQuery={mockQuery as any} />);
// Even with null data, the component should render without crashing
expect(screen.getByText('Rate')).toBeInTheDocument();
});
});


@@ -0,0 +1,173 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable sonarjs/no-duplicate-string */
/**
* V5 Migration Tests for Endpoint Dropdown Query
*
* These tests validate the migration from V4 to V5 format for the third payload
* in getEndPointDetailsQueryPayload (endpoint dropdown data):
* - Filter format change: filters.items[] → filter.expression
* - Domain handling: (net.peer.name OR server.address)
* - Kind filter: kind_string = 'Client'
* - Existence check: (http.url EXISTS OR url.full EXISTS)
* - Aggregation: count() expression
* - GroupBy: Both http.url AND url.full with type 'attribute'
*/
import { getEndPointDetailsQueryPayload } from 'container/ApiMonitoring/utils';
import { IBuilderQuery } from 'types/api/queryBuilder/queryBuilderData';
describe('EndpointDropdown - V5 Migration Validation', () => {
const mockDomainName = 'api.example.com';
const mockStartTime = 1000;
const mockEndTime = 2000;
const emptyFilters: IBuilderQuery['filters'] = {
items: [],
op: 'AND',
};
describe('1. V5 Format Migration - Structure and Base Filters', () => {
it('migrates to V5 format with correct filter expression structure, aggregations, and groupBy', () => {
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
emptyFilters,
);
// Third payload is the endpoint dropdown query (index 2)
const dropdownQuery = payload[2];
const queryA = dropdownQuery.query.builder.queryData[0];
// CRITICAL V5 MIGRATION: filter.expression (not filters.items)
expect(queryA.filter).toBeDefined();
expect(queryA.filter?.expression).toBeDefined();
expect(typeof queryA.filter?.expression).toBe('string');
expect(queryA).not.toHaveProperty('filters');
// Base filter 1: Domain (net.peer.name OR server.address)
expect(queryA.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}')`,
);
// Base filter 2: Kind
expect(queryA.filter?.expression).toContain("kind_string = 'Client'");
// Base filter 3: Existence check
expect(queryA.filter?.expression).toContain(
'(http.url EXISTS OR url.full EXISTS)',
);
// V5 Aggregation format: aggregations array (not aggregateAttribute)
expect(queryA.aggregations).toBeDefined();
expect(Array.isArray(queryA.aggregations)).toBe(true);
expect(queryA.aggregations?.[0]).toEqual({
expression: 'count()',
});
expect(queryA).not.toHaveProperty('aggregateAttribute');
// GroupBy: Both http.url and url.full
expect(queryA.groupBy).toHaveLength(2);
expect(queryA.groupBy).toContainEqual({
key: 'http.url',
dataType: 'string',
type: 'attribute',
});
expect(queryA.groupBy).toContainEqual({
key: 'url.full',
dataType: 'string',
type: 'attribute',
});
});
});
describe('2. Custom Filters Integration', () => {
it('merges custom filters into filter expression with AND logic', () => {
const customFilters: IBuilderQuery['filters'] = {
items: [
{
id: 'test-1',
key: {
key: 'service.name',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'user-service',
},
{
id: 'test-2',
key: {
key: 'deployment.environment',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'production',
},
],
op: 'AND',
};
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
customFilters,
);
const dropdownQuery = payload[2];
const expression =
dropdownQuery.query.builder.queryData[0].filter?.expression;
// Exact filter expression with custom filters merged
expect(expression).toBe(
"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com') AND kind_string = 'Client' AND (http.url EXISTS OR url.full EXISTS) service.name = 'user-service' AND deployment.environment = 'production'",
);
});
});
describe('3. HTTP URL Filter Special Handling', () => {
it('converts http.url filter to (http.url OR url.full) expression', () => {
const filtersWithHttpUrl: IBuilderQuery['filters'] = {
items: [
{
id: 'http-url-filter',
key: {
key: 'http.url',
dataType: 'string' as any,
type: 'tag',
},
op: '=',
value: '/api/users',
},
{
id: 'service-filter',
key: {
key: 'service.name',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'user-service',
},
],
op: 'AND',
};
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
filtersWithHttpUrl,
);
const dropdownQuery = payload[2];
const expression =
dropdownQuery.query.builder.queryData[0].filter?.expression;
// CRITICAL: Exact filter expression with http.url converted to OR logic
expect(expression).toBe(
"(net.peer.name = 'api.example.com' OR server.address = 'api.example.com') AND kind_string = 'Client' AND (http.url EXISTS OR url.full EXISTS) service.name = 'user-service' AND (http.url = '/api/users' OR url.full = '/api/users')",
);
});
});
});
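The V4-to-V5 filter migration these tests exercise can be sketched as a small helper. This is a hedged illustration, not SigNoz's actual implementation: the name `buildV5Expression` and the `FilterItem` shape are assumptions, and the real code may order merged filters differently. Only the output format comes from the expected strings above: base filters joined with `AND`, custom filters appended after a space, and `http.url` expanded to an `(http.url OR url.full)` clause.

```typescript
// Hypothetical sketch of merging base and custom filters into one
// V5 filter expression string (names and shapes are assumptions).
interface FilterItem {
	key: { key: string };
	op: string;
	value: string;
}

function quote(value: string): string {
	return `'${value}'`;
}

function buildV5Expression(domain: string, items: FilterItem[]): string {
	const base = [
		`(net.peer.name = ${quote(domain)} OR server.address = ${quote(domain)})`,
		`kind_string = 'Client'`,
		`(http.url EXISTS OR url.full EXISTS)`,
	].join(' AND ');
	const custom = items
		.map((item) =>
			// http.url gets special handling so both attribute names match
			item.key.key === 'http.url'
				? `(http.url ${item.op} ${quote(item.value)} OR url.full ${item.op} ${quote(item.value)})`
				: `${item.key.key} ${item.op} ${quote(item.value)}`,
		)
		.join(' AND ');
	return custom ? `${base} ${custom}` : base;
}
```

With the two resource filters from the test above, this reproduces the exact expression the test asserts on, including the space (rather than `AND`) between the base block and the first custom filter.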


@@ -0,0 +1,173 @@
import {
getLatencyOverTimeWidgetData,
getRateOverTimeWidgetData,
} from 'container/ApiMonitoring/utils';
import { DataTypes } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { IBuilderQuery } from 'types/api/queryBuilder/queryBuilderData';
describe('MetricOverTime - V5 Migration Validation', () => {
const mockDomainName = 'api.example.com';
// eslint-disable-next-line sonarjs/no-duplicate-string
const mockEndpointName = '/api/users';
const emptyFilters: IBuilderQuery['filters'] = {
items: [],
op: 'AND',
};
describe('1. Rate Over Time - V5 Payload Structure', () => {
it('generates V5 filter expression format (not V3 filters.items)', () => {
const widget = getRateOverTimeWidgetData(
mockDomainName,
mockEndpointName,
emptyFilters,
);
const queryData = widget.query.builder.queryData[0];
// CRITICAL: Must use V5 format (filter.expression), not V3 format (filters.items)
expect(queryData.filter).toBeDefined();
expect(queryData?.filter?.expression).toBeDefined();
expect(typeof queryData?.filter?.expression).toBe('string');
// OLD V3 format should NOT exist
expect(queryData).not.toHaveProperty('filters.items');
});
it('uses new domain filter format: (net.peer.name OR server.address)', () => {
const widget = getRateOverTimeWidgetData(
mockDomainName,
mockEndpointName,
emptyFilters,
);
const queryData = widget.query.builder.queryData[0];
// Verify EXACT new filter format with OR operator
expect(queryData?.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}')`,
);
// Endpoint name is used in legend, not filter
expect(queryData.legend).toContain('/api/users');
});
it('merges custom filters into filter expression', () => {
const customFilters: IBuilderQuery['filters'] = {
items: [
{
id: 'test-1',
key: {
// eslint-disable-next-line sonarjs/no-duplicate-string
key: 'service.name',
dataType: DataTypes.String,
type: 'resource',
},
op: '=',
// eslint-disable-next-line sonarjs/no-duplicate-string
value: 'user-service',
},
{
id: 'test-2',
key: {
key: 'deployment.environment',
dataType: DataTypes.String,
type: 'resource',
},
op: '=',
value: 'production',
},
],
op: 'AND',
};
const widget = getRateOverTimeWidgetData(
mockDomainName,
mockEndpointName,
customFilters,
);
const queryData = widget.query.builder.queryData[0];
// Verify domain filter is present
expect(queryData?.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}')`,
);
// Verify custom filters are merged into the expression
expect(queryData?.filter?.expression).toContain('service.name');
expect(queryData?.filter?.expression).toContain('user-service');
expect(queryData?.filter?.expression).toContain('deployment.environment');
expect(queryData?.filter?.expression).toContain('production');
});
});
describe('2. Latency Over Time - V5 Payload Structure', () => {
it('generates V5 filter expression format (not V3 filters.items)', () => {
const widget = getLatencyOverTimeWidgetData(
mockDomainName,
mockEndpointName,
emptyFilters,
);
const queryData = widget.query.builder.queryData[0];
// CRITICAL: Must use V5 format (filter.expression), not V3 format (filters.items)
expect(queryData.filter).toBeDefined();
expect(queryData?.filter?.expression).toBeDefined();
expect(typeof queryData?.filter?.expression).toBe('string');
// OLD V3 format should NOT exist
expect(queryData).not.toHaveProperty('filters.items');
});
it('uses new domain filter format: (net.peer.name OR server.address)', () => {
const widget = getLatencyOverTimeWidgetData(
mockDomainName,
mockEndpointName,
emptyFilters,
);
const queryData = widget.query.builder.queryData[0];
// Verify EXACT new filter format with OR operator
expect(queryData.filter).toBeDefined();
expect(queryData?.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}')`,
);
// Endpoint name is used in legend, not filter
expect(queryData.legend).toContain('/api/users');
});
it('merges custom filters into filter expression', () => {
const customFilters: IBuilderQuery['filters'] = {
items: [
{
id: 'test-1',
key: {
key: 'service.name',
dataType: DataTypes.String,
type: 'resource',
},
op: '=',
value: 'user-service',
},
],
op: 'AND',
};
const widget = getLatencyOverTimeWidgetData(
mockDomainName,
mockEndpointName,
customFilters,
);
const queryData = widget.query.builder.queryData[0];
// Verify domain filter is present
expect(queryData?.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}') service.name = 'user-service'`,
);
});
});
});


@@ -0,0 +1,237 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable sonarjs/no-duplicate-string */
/**
* V5 Migration Tests for Status Code Bar Chart Queries
*
* These tests validate the migration to V5 format for the bar chart payloads
* in getEndPointDetailsQueryPayload (5th and 6th payloads):
* - Number of Calls Chart (count aggregation)
* - Latency Chart (p99 aggregation)
*
* V5 Changes:
* - Filter format change: filters.items[] → filter.expression
* - Domain filter: (net.peer.name OR server.address)
* - Kind filter: kind_string = 'Client'
* - stepInterval: 60 → null
* - Grouped by response_status_code
*/
import { TraceAggregation } from 'api/v5/v5';
import { getEndPointDetailsQueryPayload } from 'container/ApiMonitoring/utils';
import { IBuilderQuery } from 'types/api/queryBuilder/queryBuilderData';
describe('StatusCodeBarCharts - V5 Migration Validation', () => {
const mockDomainName = '0.0.0.0';
const mockStartTime = 1762573673000;
const mockEndTime = 1762832873000;
const emptyFilters: IBuilderQuery['filters'] = {
items: [],
op: 'AND',
};
describe('1. Number of Calls Chart - V5 Payload Structure', () => {
it('generates correct V5 payload for count aggregation grouped by status code', () => {
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
emptyFilters,
);
// 5th payload (index 4) is the number of calls bar chart
const callsChartQuery = payload[4];
const queryA = callsChartQuery.query.builder.queryData[0];
// V5 format: filter.expression (not filters.items)
expect(queryA.filter).toBeDefined();
expect(queryA.filter?.expression).toBeDefined();
expect(typeof queryA.filter?.expression).toBe('string');
expect(queryA).not.toHaveProperty('filters.items');
// Base filter 1: Domain (net.peer.name OR server.address)
expect(queryA.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}')`,
);
// Base filter 2: Kind
expect(queryA.filter?.expression).toContain("kind_string = 'Client'");
// Aggregation: count
expect(queryA.queryName).toBe('A');
expect(queryA.aggregateOperator).toBe('count');
expect(queryA.disabled).toBe(false);
// Grouped by response_status_code
expect(queryA.groupBy).toContainEqual(
expect.objectContaining({
key: 'response_status_code',
dataType: 'string',
type: 'span',
}),
);
// V5 critical: stepInterval should be null
expect(queryA.stepInterval).toBeNull();
// Time aggregation
expect(queryA.timeAggregation).toBe('rate');
});
});
describe('2. Latency Chart - V5 Payload Structure', () => {
it('generates correct V5 payload for p99 aggregation grouped by status code', () => {
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
emptyFilters,
);
// 6th payload (index 5) is the latency bar chart
const latencyChartQuery = payload[5];
const queryA = latencyChartQuery.query.builder.queryData[0];
// V5 format: filter.expression (not filters.items)
expect(queryA.filter).toBeDefined();
expect(queryA.filter?.expression).toBeDefined();
expect(typeof queryA.filter?.expression).toBe('string');
expect(queryA).not.toHaveProperty('filters.items');
// Base filter 1: Domain (net.peer.name OR server.address)
expect(queryA.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}')`,
);
// Base filter 2: Kind
expect(queryA.filter?.expression).toContain("kind_string = 'Client'");
// Aggregation: p99 on duration_nano
expect(queryA.queryName).toBe('A');
expect(queryA.aggregateOperator).toBe('p99');
expect(queryA.aggregations?.[0]).toBeDefined();
expect((queryA.aggregations?.[0] as TraceAggregation)?.expression).toBe(
'p99(duration_nano)',
);
expect(queryA.disabled).toBe(false);
// Grouped by response_status_code
expect(queryA.groupBy).toContainEqual(
expect.objectContaining({
key: 'response_status_code',
dataType: 'string',
type: 'span',
}),
);
// V5 critical: stepInterval should be null
expect(queryA.stepInterval).toBeNull();
// Time aggregation
expect(queryA.timeAggregation).toBe('p99');
});
});
describe('3. Custom Filters Integration', () => {
it('merges custom filters into filter expression for both charts', () => {
const customFilters: IBuilderQuery['filters'] = {
items: [
{
id: 'test-1',
key: {
key: 'service.name',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'user-service',
},
{
id: 'test-2',
key: {
key: 'deployment.environment',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'production',
},
],
op: 'AND',
};
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
customFilters,
);
const callsChartQuery = payload[4];
const latencyChartQuery = payload[5];
const callsExpression =
callsChartQuery.query.builder.queryData[0].filter?.expression;
const latencyExpression =
latencyChartQuery.query.builder.queryData[0].filter?.expression;
// Both charts should have the same filter expression
expect(callsExpression).toBe(latencyExpression);
// Verify base filters
expect(callsExpression).toContain('net.peer.name');
expect(callsExpression).toContain("kind_string = 'Client'");
// Verify custom filters are merged
expect(callsExpression).toContain('service.name');
expect(callsExpression).toContain('user-service');
expect(callsExpression).toContain('deployment.environment');
expect(callsExpression).toContain('production');
});
});
describe('4. HTTP URL Filter Handling', () => {
it('converts http.url filter to (http.url OR url.full) expression in both charts', () => {
const filtersWithHttpUrl: IBuilderQuery['filters'] = {
items: [
{
id: 'http-url-filter',
key: {
key: 'http.url',
dataType: 'string' as any,
type: 'tag',
},
op: '=',
value: '/api/metrics',
},
],
op: 'AND',
};
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
filtersWithHttpUrl,
);
const callsChartQuery = payload[4];
const latencyChartQuery = payload[5];
const callsExpression =
callsChartQuery.query.builder.queryData[0].filter?.expression;
const latencyExpression =
latencyChartQuery.query.builder.queryData[0].filter?.expression;
// CRITICAL: http.url converted to OR logic
expect(callsExpression).toContain(
"(http.url = '/api/metrics' OR url.full = '/api/metrics')",
);
expect(latencyExpression).toContain(
"(http.url = '/api/metrics' OR url.full = '/api/metrics')",
);
// Base filters still present
expect(callsExpression).toContain('net.peer.name');
expect(callsExpression).toContain("kind_string = 'Client'");
});
});
});
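The aggregation change these assertions check — an `aggregations` array of expression strings instead of the older `aggregateAttribute` object — can be sketched as follows. The helper names here are hypothetical; only the `count()` and `p99(duration_nano)` expression strings come from the tests above.

```typescript
// Hedged sketch: V5 trace aggregations are expression strings.
// Helper names are invented for illustration.
interface TraceAggregation {
	expression: string;
}

function countAggregation(): TraceAggregation[] {
	return [{ expression: 'count()' }];
}

function percentileAggregation(column: string, p: number): TraceAggregation[] {
	return [{ expression: `p${p}(${column})` }];
}
```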


@@ -0,0 +1,226 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable sonarjs/no-duplicate-string */
/**
* V5 Migration Tests for Status Code Table Query
*
* These tests validate the migration from V4 to V5 format for the second payload
* in getEndPointDetailsQueryPayload (status code table data):
* - Filter format change: filters.items[] → filter.expression
* - URL handling: Special logic for (http.url OR url.full)
* - Domain filter: (net.peer.name OR server.address)
* - Kind filter: kind_string = 'Client'
* - Existence check: response_status_code EXISTS
* - Three queries: A (count), B (p99 latency), C (rate)
* - All grouped by response_status_code
*/
import { TraceAggregation } from 'api/v5/v5';
import { getEndPointDetailsQueryPayload } from 'container/ApiMonitoring/utils';
import { IBuilderQuery } from 'types/api/queryBuilder/queryBuilderData';
describe('StatusCodeTable - V5 Migration Validation', () => {
const mockDomainName = 'api.example.com';
const mockStartTime = 1000;
const mockEndTime = 2000;
const emptyFilters: IBuilderQuery['filters'] = {
items: [],
op: 'AND',
};
describe('1. V5 Format Migration with Base Filters', () => {
it('migrates to V5 format with correct filter expression structure and base filters', () => {
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
emptyFilters,
);
// Second payload is the status code table query
const statusCodeQuery = payload[1];
const queryA = statusCodeQuery.query.builder.queryData[0];
// CRITICAL V5 MIGRATION: filter.expression (not filters.items)
expect(queryA.filter).toBeDefined();
expect(queryA.filter?.expression).toBeDefined();
expect(typeof queryA.filter?.expression).toBe('string');
expect(queryA).not.toHaveProperty('filters.items');
// Base filter 1: Domain (net.peer.name OR server.address)
expect(queryA.filter?.expression).toContain(
`(net.peer.name = '${mockDomainName}' OR server.address = '${mockDomainName}')`,
);
// Base filter 2: Kind
expect(queryA.filter?.expression).toContain("kind_string = 'Client'");
// Base filter 3: response_status_code EXISTS
expect(queryA.filter?.expression).toContain('response_status_code EXISTS');
});
});
describe('2. Three Queries Structure and Consistency', () => {
it('generates three queries (count, p99, rate) all grouped by response_status_code with identical filters', () => {
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
emptyFilters,
);
const statusCodeQuery = payload[1];
const [queryA, queryB, queryC] = statusCodeQuery.query.builder.queryData;
// Query A: Count
expect(queryA.queryName).toBe('A');
expect(queryA.aggregateOperator).toBe('count');
expect(queryA.aggregations?.[0]).toBeDefined();
expect((queryA.aggregations?.[0] as TraceAggregation)?.expression).toBe(
'count(span_id)',
);
expect(queryA.disabled).toBe(false);
// Query B: P99 Latency
expect(queryB.queryName).toBe('B');
expect(queryB.aggregateOperator).toBe('p99');
expect((queryB.aggregations?.[0] as TraceAggregation)?.expression).toBe(
'p99(duration_nano)',
);
expect(queryB.disabled).toBe(false);
// Query C: Rate
expect(queryC.queryName).toBe('C');
expect(queryC.aggregateOperator).toBe('rate');
expect(queryC.disabled).toBe(false);
// All group by response_status_code
[queryA, queryB, queryC].forEach((query) => {
expect(query.groupBy).toContainEqual(
expect.objectContaining({
key: 'response_status_code',
dataType: 'string',
type: 'span',
}),
);
});
// CRITICAL: All have identical filter expressions
expect(queryA.filter?.expression).toBe(queryB.filter?.expression);
expect(queryB.filter?.expression).toBe(queryC.filter?.expression);
});
});
describe('3. Custom Filters Integration', () => {
it('merges custom filters into filter expression with AND logic', () => {
const customFilters: IBuilderQuery['filters'] = {
items: [
{
id: 'test-1',
key: {
key: 'service.name',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'user-service',
},
{
id: 'test-2',
key: {
key: 'deployment.environment',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'production',
},
],
op: 'AND',
};
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
customFilters,
);
const statusCodeQuery = payload[1];
const expression =
statusCodeQuery.query.builder.queryData[0].filter?.expression;
// Base filters present
expect(expression).toContain('net.peer.name');
expect(expression).toContain("kind_string = 'Client'");
expect(expression).toContain('response_status_code EXISTS');
// Custom filters merged
expect(expression).toContain('service.name');
expect(expression).toContain('user-service');
expect(expression).toContain('deployment.environment');
expect(expression).toContain('production');
// All three queries have the same merged expression
const queries = statusCodeQuery.query.builder.queryData;
expect(queries[0].filter?.expression).toBe(queries[1].filter?.expression);
expect(queries[1].filter?.expression).toBe(queries[2].filter?.expression);
});
});
describe('4. HTTP URL Filter Handling', () => {
it('converts http.url filter to (http.url OR url.full) expression', () => {
const filtersWithHttpUrl: IBuilderQuery['filters'] = {
items: [
{
id: 'http-url-filter',
key: {
key: 'http.url',
dataType: 'string' as any,
type: 'tag',
},
op: '=',
value: '/api/users',
},
{
id: 'service-filter',
key: {
key: 'service.name',
dataType: 'string' as any,
type: 'resource',
},
op: '=',
value: 'user-service',
},
],
op: 'AND',
};
const payload = getEndPointDetailsQueryPayload(
mockDomainName,
mockStartTime,
mockEndTime,
filtersWithHttpUrl,
);
const statusCodeQuery = payload[1];
const expression =
statusCodeQuery.query.builder.queryData[0].filter?.expression;
// CRITICAL: http.url converted to OR logic
expect(expression).toContain(
"(http.url = '/api/users' OR url.full = '/api/users')",
);
// Other filters still present
expect(expression).toContain('service.name');
expect(expression).toContain('user-service');
// Base filters present
expect(expression).toContain('net.peer.name');
expect(expression).toContain("kind_string = 'Client'");
expect(expression).toContain('response_status_code EXISTS');
// All ANDed together (at least 2 ANDs: domain+kind, custom filter, url condition)
expect(expression?.match(/AND/g)?.length).toBeGreaterThanOrEqual(2);
});
});
});


@@ -1,9 +1,11 @@
import { BuilderQuery } from 'api/v5/v5';
import { useNavigateToExplorer } from 'components/CeleryTask/useNavigateToExplorer';
import { rest, server } from 'mocks-server/server';
import { fireEvent, render, screen, waitFor, within } from 'tests/test-utils';
import { DataSource } from 'types/common/queryBuilder';
import TopErrors from '../Explorer/Domains/DomainDetails/TopErrors';
import { getTopErrorsQueryPayload } from '../utils';
// Mock the EndPointsDropDown component to avoid issues
jest.mock(
@@ -36,6 +38,7 @@ describe('TopErrors', () => {
const V5_QUERY_RANGE_API_PATH = '*/api/v5/query_range';
const mockProps = {
// eslint-disable-next-line sonarjs/no-duplicate-string
domainName: 'test-domain',
timeRange: {
startTime: 1000000000,
@@ -305,45 +308,14 @@ describe('TopErrors', () => {
});
it('sends query_range v5 API call with required filters including has_error', async () => {
let capturedRequest: any;
// let capturedRequest: any;
// Override the v5 API mock to capture the request
server.use(
rest.post(V5_QUERY_RANGE_API_PATH, async (req, res, ctx) => {
capturedRequest = await req.json();
return res(
ctx.status(200),
ctx.json({
data: {
data: {
results: [
{
columns: [
{
name: 'http.url',
fieldDataType: 'string',
fieldContext: 'attribute',
},
{
name: 'response_status_code',
fieldDataType: 'string',
fieldContext: 'span',
},
{
name: 'status_message',
fieldDataType: 'string',
fieldContext: 'span',
},
{ name: 'count()', fieldDataType: 'int64', fieldContext: '' },
],
data: [['/api/test', '500', 'Internal Server Error', 10]],
},
],
},
},
}),
);
}),
const topErrorsPayload = getTopErrorsQueryPayload(
'test-domain',
mockProps.timeRange.startTime,
mockProps.timeRange.endTime,
{ items: [], op: 'AND' },
false,
);
// eslint-disable-next-line react/jsx-props-no-spreading
@@ -351,20 +323,18 @@ describe('TopErrors', () => {
// Wait for the API call to be made
await waitFor(() => {
expect(capturedRequest).toBeDefined();
expect(topErrorsPayload).toBeDefined();
});
// Extract the filter expression from the captured request
const filterExpression =
capturedRequest.compositeQuery.queries[0].spec.filter.expression;
// getTopErrorsQueryPayload returns a builder_query with TraceBuilderQuery spec
const builderQuery = topErrorsPayload.compositeQuery.queries[0]
.spec as BuilderQuery;
const filterExpression = builderQuery.filter?.expression;
// Verify all required filters are present
expect(filterExpression).toContain(`kind_string = 'Client'`);
expect(filterExpression).toContain(`(http.url EXISTS OR url.full EXISTS)`);
expect(filterExpression).toContain(
`(net.peer.name = 'test-domain' OR server.address = 'test-domain')`,
`kind_string = 'Client' AND (http.url EXISTS OR url.full EXISTS) AND (net.peer.name = 'test-domain' OR server.address = 'test-domain') AND has_error = true`,
);
expect(filterExpression).toContain(`has_error = true`);
expect(filterExpression).toContain(`status_message EXISTS`); // toggle is on by default
});
});

File diff suppressed because it is too large


@@ -112,6 +112,8 @@ function AppLayout(props: AppLayoutProps): JSX.Element {
setShowPaymentFailedWarning,
] = useState<boolean>(false);
const errorBoundaryRef = useRef<Sentry.ErrorBoundary>(null);
const [showSlowApiWarning, setShowSlowApiWarning] = useState(false);
const [slowApiWarningShown, setSlowApiWarningShown] = useState(false);
@@ -378,6 +380,13 @@ function AppLayout(props: AppLayoutProps): JSX.Element {
getChangelogByVersionResponse.isSuccess,
]);
// reset error boundary on route change
useEffect(() => {
if (errorBoundaryRef.current) {
errorBoundaryRef.current.resetErrorBoundary();
}
}, [pathname]);
const isToDisplayLayout = isLoggedIn;
const routeKey = useMemo(() => getRouteKey(pathname), [pathname]);
@@ -836,7 +845,10 @@ function AppLayout(props: AppLayoutProps): JSX.Element {
})}
data-overlayscrollbars-initialize
>
<Sentry.ErrorBoundary fallback={<ErrorBoundaryFallback />}>
<Sentry.ErrorBoundary
fallback={<ErrorBoundaryFallback />}
ref={errorBoundaryRef}
>
<LayoutContent data-overlayscrollbars-initialize>
<OverlayScrollbar>
<ChildrenContainer>


@@ -17,6 +17,7 @@ function ExplorerOptionWrapper({
isOneChartPerQuery,
splitedQueries,
signalSource,
handleChangeSelectedView,
}: ExplorerOptionsWrapperProps): JSX.Element {
const [isExplorerOptionHidden, setIsExplorerOptionHidden] = useState(false);
@@ -38,6 +39,7 @@ function ExplorerOptionWrapper({
setIsExplorerOptionHidden={setIsExplorerOptionHidden}
isOneChartPerQuery={isOneChartPerQuery}
splitedQueries={splitedQueries}
handleChangeSelectedView={handleChangeSelectedView}
/>
);
}


@@ -72,10 +72,11 @@ import { Query } from 'types/api/queryBuilder/queryBuilderData';
import { ViewProps } from 'types/api/saveViews/types';
import { DataSource, StringOperators } from 'types/common/queryBuilder';
import { USER_ROLES } from 'types/roles';
import { panelTypeToExplorerView } from 'utils/explorerUtils';
import { PreservedViewsTypes } from './constants';
import ExplorerOptionsHideArea from './ExplorerOptionsHideArea';
import { PreservedViewsInLocalStorage } from './types';
import { ChangeViewFunctionType, PreservedViewsInLocalStorage } from './types';
import {
DATASOURCE_VS_ROUTES,
generateRGBAFromHex,
@@ -98,6 +99,7 @@ function ExplorerOptions({
setIsExplorerOptionHidden,
isOneChartPerQuery = false,
splitedQueries = [],
handleChangeSelectedView,
}: ExplorerOptionsProps): JSX.Element {
const [isExport, setIsExport] = useState<boolean>(false);
const [isSaveModalOpen, setIsSaveModalOpen] = useState(false);
@@ -412,13 +414,22 @@ function ExplorerOptions({
if (!currentViewDetails) return;
const { query, name, id, panelType: currentPanelType } = currentViewDetails;
handleExplorerTabChange(currentPanelType, {
query,
name,
id,
});
if (handleChangeSelectedView) {
handleChangeSelectedView(panelTypeToExplorerView[currentPanelType], {
query,
name,
id,
});
} else {
// to remove this after traces cleanup
handleExplorerTabChange(currentPanelType, {
query,
name,
id,
});
}
},
[viewsData, handleExplorerTabChange],
[viewsData, handleExplorerTabChange, handleChangeSelectedView],
);
const updatePreservedViewInLocalStorage = (option: {
@@ -524,6 +535,10 @@ function ExplorerOptions({
return;
}
if (handleChangeSelectedView) {
handleChangeSelectedView(panelTypeToExplorerView[PANEL_TYPES.LIST]);
}
history.replace(DATASOURCE_VS_ROUTES[sourcepage]);
};
@@ -1020,6 +1035,7 @@ export interface ExplorerOptionsProps {
setIsExplorerOptionHidden?: Dispatch<SetStateAction<boolean>>;
isOneChartPerQuery?: boolean;
splitedQueries?: Query[];
handleChangeSelectedView?: ChangeViewFunctionType;
}
ExplorerOptions.defaultProps = {
@@ -1029,6 +1045,7 @@ ExplorerOptions.defaultProps = {
isOneChartPerQuery: false,
splitedQueries: [],
signalSource: '',
handleChangeSelectedView: undefined,
};
export default ExplorerOptions;


@@ -2,6 +2,8 @@ import { NotificationInstance } from 'antd/es/notification/interface';
import { AxiosResponse } from 'axios';
import { SaveViewWithNameProps } from 'components/ExplorerCard/types';
import { PANEL_TYPES } from 'constants/queryBuilder';
import { ICurrentQueryData } from 'hooks/useHandleExplorerTabChange';
import { ExplorerViews } from 'pages/LogsExplorer/utils';
import { Dispatch, SetStateAction } from 'react';
import { UseMutateAsyncFunction } from 'react-query';
import { ICompositeMetricQuery } from 'types/api/alerts/compositeQuery';
@@ -38,3 +40,8 @@ export type PreservedViewType =
export type PreservedViewsInLocalStorage = Partial<
Record<PreservedViewType, { key: string; value: string }>
>;
export type ChangeViewFunctionType = (
view: ExplorerViews,
querySearchParameters?: ICurrentQueryData,
) => void;
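The branching in `ExplorerOptions` above — call `handleChangeSelectedView` with a mapped view when the callback is provided, otherwise fall back to the legacy tab change — can be sketched in isolation. The view names and the `panelTypeToExplorerView` entries below are assumptions for illustration; the real mapping lives in `utils/explorerUtils`.

```typescript
// Hypothetical sketch of mapping a panel type to an explorer view and
// guarding the optional callback (mapping values are assumed).
type ExplorerView = 'list' | 'timeseries' | 'table';

const panelTypeToExplorerView: Record<string, ExplorerView> = {
	list: 'list',
	graph: 'timeseries',
	table: 'table',
};

function selectView(
	panelType: string,
	handleChangeSelectedView?: (view: ExplorerView) => void,
): ExplorerView | undefined {
	const view = panelTypeToExplorerView[panelType];
	if (view && handleChangeSelectedView) {
		handleChangeSelectedView(view);
	}
	return view;
}
```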


@@ -49,17 +49,29 @@ function GridTableComponent({
panelType,
queryRangeRequest,
decimalPrecision,
hiddenColumns = [],
...props
}: GridTableComponentProps): JSX.Element {
const { t } = useTranslation(['valueGraph']);
// create columns and dataSource in the ui friendly structure
// use the query from the widget here to extract the legend information
const { columns, dataSource: originalDataSource } = useMemo(
const { columns: allColumns, dataSource: originalDataSource } = useMemo(
() => createColumnsAndDataSource((data as unknown) as TableData, query),
[query, data],
);
// Filter out hidden columns from being displayed
const columns = useMemo(
() =>
allColumns.filter(
(column) =>
!('dataIndex' in column) ||
!hiddenColumns.includes(column.dataIndex as string),
),
[allColumns, hiddenColumns],
);
const createDataInCorrectFormat = useCallback(
(dataSource: RowData[]): RowData[] =>
dataSource.map((d) => {


@@ -30,6 +30,7 @@ export type GridTableComponentProps = {
contextLinks?: ContextLinksData;
panelType?: PANEL_TYPES;
queryRangeRequest?: QueryRangeRequestV5;
hiddenColumns?: string[];
} & Pick<LogsExplorerTableProps, 'data'> &
Omit<TableProps<RowData>, 'columns' | 'dataSource'>;
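The `hiddenColumns` filtering added to `GridTableComponent` above can be exercised standalone. This is a simplified sketch with a minimal `Column` shape (the real antd column type is richer); the predicate mirrors the diff: columns without a `dataIndex` are always kept, and columns whose `dataIndex` appears in `hiddenColumns` are dropped.

```typescript
// Simplified sketch of the hidden-column filter (Column shape is minimal).
interface Column {
	dataIndex?: string;
	title: string;
}

function visibleColumns(
	allColumns: Column[],
	hiddenColumns: string[],
): Column[] {
	return allColumns.filter(
		(column) =>
			!('dataIndex' in column) ||
			!hiddenColumns.includes(column.dataIndex as string),
	);
}
```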


@@ -68,7 +68,7 @@
.template-list-item {
display: flex;
gap: 8px;
padding: 4px 12px;
padding: 8px 12px;
align-items: center;
cursor: pointer;
height: 32px;
@@ -76,8 +76,10 @@
.template-icon {
display: flex;
height: 14px;
width: 14px;
height: 20px;
width: 20px;
border-radius: 2px;
padding: 4px;
align-items: center;
justify-content: center;
}
@@ -97,6 +99,17 @@
&.active {
border-radius: 3px;
background: rgba(171, 189, 255, 0.08);
position: relative;
&::before {
content: '';
position: absolute;
top: 0;
bottom: 0;
left: 0;
width: 2px;
background: var(--bg-robin-500);
}
}
}
}
@@ -159,18 +172,38 @@
display: flex;
justify-content: center;
align-items: center;
margin: 24px;
padding: 16px;
height: calc(100% - 144px);
position: relative;
img {
&-container {
position: relative;
width: 100%;
max-width: 100%;
padding: 24px;
padding: 48px 24px;
border-radius: 4px;
border: 1px solid var(--bg-ink-50);
background: var(--bg-ink-300);
background: linear-gradient(98.66deg, #7a97fa 4.42%, #f977ff 96.6%);
max-height: 100%;
&::before {
content: '';
position: absolute;
inset: 0;
background: url('/public/Images/grains.png');
background-size: contain;
background-repeat: repeat;
opacity: 0.1;
}
}
img {
position: relative;
width: 100%;
max-width: 100%;
max-height: 540px;
object-fit: contain;
border-radius: 4px;
}
}
}

@@ -4,6 +4,7 @@
/* eslint-disable @typescript-eslint/explicit-function-return-type */
import './DashboardTemplatesModal.styles.scss';
import { Color } from '@signozhq/design-tokens';
import { Button, Input, Modal, Typography } from 'antd';
import ApacheIcon from 'assets/CustomIcons/ApacheIcon';
import DockerIcon from 'assets/CustomIcons/DockerIcon';
@@ -16,7 +17,14 @@ import NginxIcon from 'assets/CustomIcons/NginxIcon';
import PostgreSQLIcon from 'assets/CustomIcons/PostgreSQLIcon';
import RedisIcon from 'assets/CustomIcons/RedisIcon';
import cx from 'classnames';
import { ConciergeBell, DraftingCompass, Drill, Plus, X } from 'lucide-react';
import {
ConciergeBell,
DraftingCompass,
Drill,
Plus,
Search,
X,
} from 'lucide-react';
import { ChangeEvent, useState } from 'react';
import { DashboardTemplate } from 'types/api/dashboard/getAll';
@@ -162,7 +170,9 @@ export default function DashboardTemplatesModal({
<div className="new-dashboard-templates-list">
<Input
className="new-dashboard-templates-search"
placeholder="🔍 Search..."
placeholder="Search..."
size="middle"
prefix={<Search size={12} color={Color.TEXT_VANILLA_400} />}
onChange={handleDashboardTemplateSearch}
/>
@@ -212,10 +222,12 @@ export default function DashboardTemplatesModal({
</div>
<div className="template-preview-image">
<img
src={selectedDashboardTemplate.previewImage}
alt={`${selectedDashboardTemplate.name}-preview`}
/>
<div className="template-preview-image-container">
<img
src={selectedDashboardTemplate.previewImage}
alt={`${selectedDashboardTemplate.name}-preview`}
/>
</div>
</div>
</div>
</div>

@@ -7,10 +7,9 @@ import './DashboardList.styles.scss';
import { Color } from '@signozhq/design-tokens';
import {
Button,
Dropdown,
Flex,
Input,
MenuProps,
// MenuProps,
Modal,
Popover,
Skeleton,
@@ -47,14 +46,14 @@ import {
Ellipsis,
EllipsisVertical,
Expand,
ExternalLink,
// ExternalLink,
FileJson,
Github,
// Github,
HdmiPort,
LayoutGrid,
// LayoutGrid,
Link2,
Plus,
Radius,
// Radius,
RotateCw,
Search,
SquareArrowOutUpRight,
@@ -71,7 +70,6 @@ import {
Key,
useCallback,
useEffect,
useMemo,
useRef,
useState,
} from 'react';
@@ -597,61 +595,61 @@ function DashboardsList(): JSX.Element {
},
];
const getCreateDashboardItems = useMemo(() => {
const menuItems: MenuProps['items'] = [
{
label: (
<div
className="create-dashboard-menu-item"
onClick={(): void => onModalHandler(false)}
>
<Radius size={14} /> Import JSON
</div>
),
key: '1',
},
{
label: (
<a
href="https://signoz.io/docs/dashboards/dashboard-templates/overview/"
target="_blank"
rel="noopener noreferrer"
>
<Flex
justify="space-between"
align="center"
style={{ width: '100%' }}
gap="small"
>
<div className="create-dashboard-menu-item">
<Github size={14} /> View templates
</div>
<ExternalLink size={14} />
</Flex>
</a>
),
key: '2',
},
];
// const getCreateDashboardItems = useMemo(() => {
// const menuItems: MenuProps['items'] = [
// {
// label: (
// <div
// className="create-dashboard-menu-item"
// onClick={(): void => onModalHandler(false)}
// >
// <Radius size={14} /> Import JSON
// </div>
// ),
// key: '1',
// },
// {
// label: (
// <a
// href="https://signoz.io/docs/dashboards/dashboard-templates/overview/"
// target="_blank"
// rel="noopener noreferrer"
// >
// <Flex
// justify="space-between"
// align="center"
// style={{ width: '100%' }}
// gap="small"
// >
// <div className="create-dashboard-menu-item">
// <Github size={14} /> View templates
// </div>
// <ExternalLink size={14} />
// </Flex>
// </a>
// ),
// key: '2',
// },
// ];
if (createNewDashboard) {
menuItems.unshift({
label: (
<div
className="create-dashboard-menu-item"
onClick={(): void => {
onNewDashboardHandler();
}}
>
<LayoutGrid size={14} /> Create dashboard
</div>
),
key: '0',
});
}
// if (createNewDashboard) {
// menuItems.unshift({
// label: (
// <div
// className="create-dashboard-menu-item"
// onClick={(): void => {
// onNewDashboardHandler();
// }}
// >
// <LayoutGrid size={14} /> Create dashboard
// </div>
// ),
// key: '0',
// });
// }
return menuItems;
}, [createNewDashboard, onNewDashboardHandler]);
// return menuItems;
// }, [createNewDashboard, onNewDashboardHandler]);
const showPaginationItem = (total: number, range: number[]): JSX.Element => (
<>
@@ -763,23 +761,16 @@ function DashboardsList(): JSX.Element {
{createNewDashboard && (
<section className="actions">
<Dropdown
overlayClassName="new-dashboard-menu"
menu={{ items: getCreateDashboardItems }}
placement="bottomRight"
trigger={['click']}
<Button
type="text"
className="new-dashboard"
icon={<Plus size={14} />}
onClick={(): void => {
logEvent('Dashboard List: New dashboard clicked', {});
}}
>
<Button
type="text"
className="new-dashboard"
icon={<Plus size={14} />}
onClick={(): void => {
logEvent('Dashboard List: New dashboard clicked', {});
}}
>
New Dashboard
</Button>
</Dropdown>
New Dashboard
</Button>
<Button
type="text"
className="learn-more"
@@ -807,23 +798,17 @@ function DashboardsList(): JSX.Element {
onChange={handleSearch}
/>
{createNewDashboard && (
<Dropdown
overlayClassName="new-dashboard-menu"
menu={{ items: getCreateDashboardItems }}
placement="bottomRight"
trigger={['click']}
<Button
type="primary"
className="periscope-btn primary btn"
icon={<Plus size={14} />}
onClick={(): void => {
logEvent('Dashboard List: New dashboard clicked', {});
setShowNewDashboardTemplatesModal(true);
}}
>
<Button
type="primary"
className="periscope-btn primary btn"
icon={<Plus size={14} />}
onClick={(): void => {
logEvent('Dashboard List: New dashboard clicked', {});
}}
>
New dashboard
</Button>
</Dropdown>
New dashboard
</Button>
)}
</div>

@@ -6,7 +6,6 @@ import { useGetExplorerQueryRange } from 'hooks/queryBuilder/useGetExplorerQuery
import { logsQueryRangeEmptyResponse } from 'mocks-server/__mockdata__/logs_query_range';
import { server } from 'mocks-server/server';
import { rest } from 'msw';
import { ExplorerViews } from 'pages/LogsExplorer/utils';
import { PreferenceContextProvider } from 'providers/preferences/context/PreferenceContextProvider';
import { QueryBuilderContext } from 'providers/QueryBuilder';
import { render, screen } from 'tests/test-utils';
@@ -122,12 +121,12 @@ describe('LogsExplorerList - empty states', () => {
<QueryBuilderContext.Provider value={mockTraceToLogsContextValue as any}>
<PreferenceContextProvider>
<LogsExplorerViews
selectedView={ExplorerViews.LIST}
setIsLoadingQueries={(): void => {}}
listQueryKeyRef={{ current: {} }}
chartQueryKeyRef={{ current: {} }}
setWarning={(): void => {}}
showLiveLogs={false}
handleChangeSelectedView={(): void => {}}
/>
</PreferenceContextProvider>
</QueryBuilderContext.Provider>,
@@ -187,12 +186,12 @@ describe('LogsExplorerList - empty states', () => {
<QueryBuilderContext.Provider value={mockTraceToLogsContextValue as any}>
<PreferenceContextProvider>
<LogsExplorerViews
selectedView={ExplorerViews.LIST}
setIsLoadingQueries={(): void => {}}
listQueryKeyRef={{ current: {} }}
chartQueryKeyRef={{ current: {} }}
setWarning={(): void => {}}
showLiveLogs={false}
handleChangeSelectedView={(): void => {}}
/>
</PreferenceContextProvider>
</QueryBuilderContext.Provider>,

@@ -0,0 +1,210 @@
import {
initialQueryBuilderFormValues,
OPERATORS,
PANEL_TYPES,
} from 'constants/queryBuilder';
import { getPaginationQueryDataV2 } from 'lib/newQueryBuilder/getPaginationQueryData';
import { DataTypes } from 'types/api/queryBuilder/queryAutocompleteResponse';
import {
IBuilderQuery,
Query,
TagFilter,
} from 'types/api/queryBuilder/queryBuilderData';
import { Filter } from 'types/api/v5/queryRange';
import { LogsAggregatorOperator } from 'types/common/queryBuilder';
import { v4 } from 'uuid';
export const getListQuery = (
stagedQuery: Query | null,
): IBuilderQuery | null => {
if (!stagedQuery || stagedQuery.builder.queryData.length < 1) return null;
return stagedQuery.builder.queryData[0] ?? null;
};
export const getFrequencyChartData = (
stagedQuery: Query | null,
activeLogId: string | null,
): Query | null => {
if (!stagedQuery) {
return null;
}
const baseFirstQuery = getListQuery(stagedQuery);
if (!baseFirstQuery) {
return null;
}
let updatedFilterExpression = baseFirstQuery.filter?.expression || '';
if (activeLogId) {
updatedFilterExpression = `${updatedFilterExpression} id <= '${activeLogId}'`.trim();
}
const modifiedQueryData: IBuilderQuery = {
...baseFirstQuery,
disabled: false,
aggregateOperator: LogsAggregatorOperator.COUNT,
filter: {
...baseFirstQuery.filter,
expression: updatedFilterExpression || '',
},
...(activeLogId && {
filters: {
...baseFirstQuery.filters,
items: [
...(baseFirstQuery?.filters?.items || []),
{
id: v4(),
key: {
key: 'id',
type: '',
dataType: DataTypes.String,
},
op: OPERATORS['<='],
value: activeLogId,
},
],
op: 'AND',
},
}),
groupBy: [
{
key: 'severity_text',
dataType: DataTypes.String,
type: '',
id: 'severity_text--string----true',
},
],
legend: '{{severity_text}}',
orderBy: [],
having: {
expression: '',
},
};
const modifiedQuery: Query = {
...stagedQuery,
builder: {
...stagedQuery.builder,
queryData: [modifiedQueryData], // single query data required for list chart
},
};
return modifiedQuery;
};
export const getQueryByPanelType = (
query: Query | null,
selectedPanelType: PANEL_TYPES,
params: {
page?: number;
pageSize?: number;
filters?: TagFilter;
filter?: Filter;
activeLogId?: string | null;
orderBy?: string;
},
): Query | null => {
if (!query) return null;
let queryData: IBuilderQuery[] = query.builder.queryData.map((item) => ({
...item,
}));
if (selectedPanelType === PANEL_TYPES.LIST) {
const { activeLogId = null, orderBy = 'timestamp:desc' } = params;
const paginateData = getPaginationQueryDataV2({
page: params.page ?? 1,
pageSize: params.pageSize ?? 10,
});
let updatedFilters = params.filters;
let updatedFilterExpression = params.filter?.expression || '';
if (activeLogId) {
updatedFilters = {
...params.filters,
items: [
...(params.filters?.items || []),
{
id: v4(),
key: {
key: 'id',
type: '',
dataType: DataTypes.String,
},
op: OPERATORS['<='],
value: activeLogId,
},
],
op: 'AND',
};
updatedFilterExpression = `${updatedFilterExpression} id <= '${activeLogId}'`.trim();
}
// Create orderBy array based on orderDirection
const [columnName, order] = orderBy.split(':');
const newOrderBy = [
{ columnName: columnName || 'timestamp', order: order || 'desc' },
{ columnName: 'id', order: order || 'desc' },
];
queryData = [
{
...(getListQuery(query) || initialQueryBuilderFormValues),
...paginateData,
...(updatedFilters ? { filters: updatedFilters } : {}),
filter: { expression: updatedFilterExpression || '' },
groupBy: [],
having: {
expression: '',
},
orderBy: newOrderBy,
disabled: false,
},
];
}
const data: Query = {
...query,
builder: {
...query.builder,
queryData,
},
};
return data;
};
export const getExportQueryData = (
query: Query | null,
panelType: PANEL_TYPES,
): Query | null => {
if (!query) return null;
if (panelType === PANEL_TYPES.LIST) {
const listQuery = getListQuery(query);
if (!listQuery) return null;
return {
...query,
builder: {
...query.builder,
queryData: [
{
...listQuery,
orderBy: [
{
columnName: 'timestamp',
order: 'desc',
},
],
limit: null,
},
],
},
};
}
return query;
};
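`getQueryByPanelType` above derives the list-view sort from an `orderBy` string like `'timestamp:desc'` and always appends `id` as a tiebreaker in the same direction, so rows with identical timestamps paginate deterministically. A hedged standalone sketch of just that parsing step (the function name is illustrative, not an exported helper):

```typescript
interface OrderBy {
	columnName: string;
	order: string;
}

// Parse 'column:direction' and append an `id` tiebreaker with the same
// direction, falling back to timestamp/desc when either part is missing.
function buildOrderBy(orderBy: string): OrderBy[] {
	const [columnName, order] = orderBy.split(':');
	return [
		{ columnName: columnName || 'timestamp', order: order || 'desc' },
		{ columnName: 'id', order: order || 'desc' },
	];
}

buildOrderBy('timestamp:desc');
// [{ columnName: 'timestamp', order: 'desc' }, { columnName: 'id', order: 'desc' }]
```

This matches the default asserted in the new tests: `timestamp` desc followed by `id` desc.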

View File

@@ -11,29 +11,29 @@ import { QueryParams } from 'constants/query';
import {
initialFilters,
initialQueriesMap,
initialQueryBuilderFormValues,
OPERATORS,
PANEL_TYPES,
} from 'constants/queryBuilder';
import { DEFAULT_PER_PAGE_VALUE } from 'container/Controls/config';
import ExplorerOptionWrapper from 'container/ExplorerOptions/ExplorerOptionWrapper';
import { ChangeViewFunctionType } from 'container/ExplorerOptions/types';
import GoToTop from 'container/GoToTop';
import {} from 'container/LiveLogs/constants';
import LogsExplorerChart from 'container/LogsExplorerChart';
import LogsExplorerList from 'container/LogsExplorerList';
import LogsExplorerTable from 'container/LogsExplorerTable';
import {
getExportQueryData,
getFrequencyChartData,
getListQuery,
getQueryByPanelType,
} from 'container/LogsExplorerViews/explorerUtils';
import TimeSeriesView from 'container/TimeSeriesView/TimeSeriesView';
import { useCopyLogLink } from 'hooks/logs/useCopyLogLink';
import { useGetExplorerQueryRange } from 'hooks/queryBuilder/useGetExplorerQueryRange';
import { useGetPanelTypesQueryParam } from 'hooks/queryBuilder/useGetPanelTypesQueryParam';
import { useQueryBuilder } from 'hooks/queryBuilder/useQueryBuilder';
import { useHandleExplorerTabChange } from 'hooks/useHandleExplorerTabChange';
import { useSafeNavigate } from 'hooks/useSafeNavigate';
import useUrlQueryData from 'hooks/useUrlQueryData';
import { getPaginationQueryDataV2 } from 'lib/newQueryBuilder/getPaginationQueryData';
import { cloneDeep, defaultTo, isEmpty, isUndefined, set } from 'lodash-es';
import { isEmpty, isUndefined } from 'lodash-es';
import LiveLogs from 'pages/LiveLogs';
import { ExplorerViews } from 'pages/LogsExplorer/utils';
import {
Dispatch,
memo,
@@ -52,15 +52,10 @@ import { Warning } from 'types/api';
import { Dashboard } from 'types/api/dashboard/getAll';
import APIError from 'types/api/error';
import { ILog } from 'types/api/logs/log';
import { DataTypes } from 'types/api/queryBuilder/queryAutocompleteResponse';
import {
IBuilderQuery,
Query,
TagFilter,
} from 'types/api/queryBuilder/queryBuilderData';
import { Query, TagFilter } from 'types/api/queryBuilder/queryBuilderData';
import { Filter } from 'types/api/v5/queryRange';
import { QueryDataV3 } from 'types/api/widgets/getQuery';
import { DataSource, LogsAggregatorOperator } from 'types/common/queryBuilder';
import { DataSource } from 'types/common/queryBuilder';
import { GlobalReducer } from 'types/reducer/globalTime';
import { generateExportToDashboardLink } from 'utils/dashboard/generateExportToDashboardLink';
import { v4 } from 'uuid';
@@ -68,14 +63,13 @@ import { v4 } from 'uuid';
import LogsActionsContainer from './LogsActionsContainer';
function LogsExplorerViewsContainer({
selectedView,
setIsLoadingQueries,
listQueryKeyRef,
chartQueryKeyRef,
setWarning,
showLiveLogs,
handleChangeSelectedView,
}: {
selectedView: ExplorerViews;
setIsLoadingQueries: React.Dispatch<React.SetStateAction<boolean>>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
listQueryKeyRef: MutableRefObject<any>;
@@ -83,19 +77,14 @@ function LogsExplorerViewsContainer({
chartQueryKeyRef: MutableRefObject<any>;
setWarning: Dispatch<SetStateAction<Warning | undefined>>;
showLiveLogs: boolean;
handleChangeSelectedView: ChangeViewFunctionType;
}): JSX.Element {
const { safeNavigate } = useSafeNavigate();
const dispatch = useDispatch();
const [showFrequencyChart, setShowFrequencyChart] = useState(false);
useEffect(() => {
const frequencyChart = getFromLocalstorage(LOCALSTORAGE.SHOW_FREQUENCY_CHART);
setShowFrequencyChart(frequencyChart === 'true');
}, []);
// this is to respect the panel type present in the URL rather than defaulting it to list always.
const panelTypes = useGetPanelTypesQueryParam(PANEL_TYPES.LIST);
const [showFrequencyChart, setShowFrequencyChart] = useState(
() => getFromLocalstorage(LOCALSTORAGE.SHOW_FREQUENCY_CHART) === 'true',
);
const { activeLogId } = useCopyLogLink();
@@ -117,14 +106,9 @@ function LogsExplorerViewsContainer({
stagedQuery,
panelType,
updateAllQueriesOperators,
handleSetConfig,
} = useQueryBuilder();
const [selectedPanelType, setSelectedPanelType] = useState<PANEL_TYPES>(
panelType || PANEL_TYPES.LIST,
);
const { handleExplorerTabChange } = useHandleExplorerTabChange();
const selectedPanelType = panelType || PANEL_TYPES.LIST;
// State
const [page, setPage] = useState<number>(1);
@@ -135,27 +119,9 @@ function LogsExplorerViewsContainer({
const [orderBy, setOrderBy] = useState<string>('timestamp:desc');
const listQuery = useMemo(() => {
if (!stagedQuery || stagedQuery.builder.queryData.length < 1) return null;
return stagedQuery.builder.queryData.find((item) => !item.disabled) || null;
}, [stagedQuery]);
const isMultipleQueries = useMemo(
() =>
currentQuery?.builder?.queryData?.length > 1 ||
currentQuery?.builder?.queryFormulas?.length > 0,
[currentQuery],
);
const isGroupByExist = useMemo(() => {
const groupByCount: number = currentQuery?.builder?.queryData?.reduce<number>(
(acc, query) => acc + query.groupBy.length,
0,
);
return groupByCount > 0;
}, [currentQuery]);
const listQuery = useMemo(() => getListQuery(stagedQuery) || null, [
stagedQuery,
]);
const isLimit: boolean = useMemo(() => {
if (!listQuery) return false;
@@ -165,66 +131,9 @@ function LogsExplorerViewsContainer({
}, [logs.length, listQuery]);
useEffect(() => {
if (!stagedQuery || !listQuery) {
setListChartQuery(null);
return;
}
let updatedFilterExpression = listQuery.filter?.expression || '';
if (activeLogId) {
updatedFilterExpression = `${updatedFilterExpression} id <= '${activeLogId}'`.trim();
}
const modifiedQueryData: IBuilderQuery = {
...listQuery,
aggregateOperator: LogsAggregatorOperator.COUNT,
groupBy: [
{
key: 'severity_text',
dataType: DataTypes.String,
type: '',
id: 'severity_text--string----true',
},
],
legend: '{{severity_text}}',
filter: {
...listQuery?.filter,
expression: updatedFilterExpression || '',
},
...(activeLogId && {
filters: {
...listQuery?.filters,
items: [
...(listQuery?.filters?.items || []),
{
id: v4(),
key: {
key: 'id',
type: '',
dataType: DataTypes.String,
},
op: OPERATORS['<='],
value: activeLogId,
},
],
op: 'AND',
},
}),
};
const modifiedQuery: Query = {
...stagedQuery,
builder: {
...stagedQuery.builder,
queryData: stagedQuery.builder.queryData.map((item) => ({
...item,
...modifiedQueryData,
})),
},
};
const modifiedQuery = getFrequencyChartData(stagedQuery, activeLogId);
setListChartQuery(modifiedQuery);
}, [stagedQuery, listQuery, activeLogId]);
}, [stagedQuery, activeLogId]);
const exportDefaultQuery = useMemo(
() =>
@@ -246,7 +155,9 @@ function LogsExplorerViewsContainer({
ENTITY_VERSION_V5,
{
enabled:
showFrequencyChart && !!listChartQuery && panelType === PANEL_TYPES.LIST,
showFrequencyChart &&
!!listChartQuery &&
selectedPanelType === PANEL_TYPES.LIST,
},
{},
undefined,
@@ -264,7 +175,7 @@ function LogsExplorerViewsContainer({
error,
} = useGetExplorerQueryRange(
requestData,
panelType,
selectedPanelType,
ENTITY_VERSION_V5,
{
keepPreviousData: true,
@@ -296,77 +207,13 @@ function LogsExplorerViewsContainer({
filters: TagFilter;
filter: Filter;
},
): Query | null => {
if (!query) return null;
const paginateData = getPaginationQueryDataV2({
page: params.page,
pageSize: params.pageSize,
});
// Add filter for activeLogId if present
let updatedFilters = params.filters;
let updatedFilterExpression = params.filter?.expression || '';
if (activeLogId) {
updatedFilters = {
...params.filters,
items: [
...(params.filters?.items || []),
{
id: v4(),
key: {
key: 'id',
type: '',
dataType: DataTypes.String,
},
op: OPERATORS['<='],
value: activeLogId,
},
],
op: 'AND',
};
updatedFilterExpression = `${updatedFilterExpression} id <= '${activeLogId}'`.trim();
}
// Create orderBy array based on orderDirection
const [columnName, order] = orderBy.split(':');
const newOrderBy = [
{ columnName: columnName || 'timestamp', order: order || 'desc' },
{ columnName: 'id', order: order || 'desc' },
];
const queryData: IBuilderQuery[] =
query.builder.queryData.length > 1
? query.builder.queryData.map((item) => ({
...item,
...(selectedView !== ExplorerViews.LIST ? { order: [] } : {}),
}))
: [
{
...(listQuery || initialQueryBuilderFormValues),
...paginateData,
...(updatedFilters ? { filters: updatedFilters } : {}),
filter: {
expression: updatedFilterExpression || '',
},
...(selectedView === ExplorerViews.LIST
? { order: newOrderBy, orderBy: newOrderBy }
: { order: [] }),
},
];
const data: Query = {
...query,
builder: {
...query.builder,
queryData,
},
};
return data;
},
[activeLogId, orderBy, listQuery, selectedView],
): Query | null =>
getQueryByPanelType(query, selectedPanelType, {
...params,
activeLogId,
orderBy,
}),
[activeLogId, orderBy, selectedPanelType],
);
useEffect(() => {
@@ -412,7 +259,7 @@ function LogsExplorerViewsContainer({
if (!logEventCalledRef.current && !isUndefined(data?.payload)) {
const currentData = data?.payload?.data?.newResult?.data?.result || [];
logEvent('Logs Explorer: Page visited', {
panelType,
panelType: selectedPanelType,
isEmpty: !currentData?.[0]?.list,
});
logEventCalledRef.current = true;
@@ -420,31 +267,24 @@ function LogsExplorerViewsContainer({
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [data?.payload]);
const getUpdatedQueryForExport = useCallback((): Query => {
const updatedQuery = cloneDeep(currentQuery);
set(updatedQuery, 'builder.queryData[0].pageSize', 10);
return updatedQuery;
}, [currentQuery]);
const handleExport = useCallback(
(dashboard: Dashboard | null, isNewDashboard?: boolean): void => {
if (!dashboard || !panelType) return;
if (!dashboard || !selectedPanelType) return;
const panelTypeParam = AVAILABLE_EXPORT_PANEL_TYPES.includes(panelType)
? panelType
const panelTypeParam = AVAILABLE_EXPORT_PANEL_TYPES.includes(
selectedPanelType,
)
? selectedPanelType
: PANEL_TYPES.TIME_SERIES;
const widgetId = v4();
const query =
panelType === PANEL_TYPES.LIST
? getUpdatedQueryForExport()
: exportDefaultQuery;
const query = getExportQueryData(requestData, selectedPanelType);
if (!query) return;
logEvent('Logs Explorer: Add to dashboard successful', {
panelType,
panelType: selectedPanelType,
isNewDashboard,
dashboardName: dashboard?.data?.title,
});
@@ -458,36 +298,9 @@ function LogsExplorerViewsContainer({
safeNavigate(dashboardEditView);
},
[getUpdatedQueryForExport, exportDefaultQuery, safeNavigate, panelType],
[safeNavigate, requestData, selectedPanelType],
);
useEffect(() => {
const shouldChangeView = isMultipleQueries || isGroupByExist;
if (selectedPanelType === PANEL_TYPES.LIST && shouldChangeView) {
handleExplorerTabChange(PANEL_TYPES.TIME_SERIES);
setSelectedPanelType(PANEL_TYPES.TIME_SERIES);
}
if (panelType) {
setSelectedPanelType(panelType);
}
}, [
isMultipleQueries,
isGroupByExist,
selectedPanelType,
selectedView,
handleExplorerTabChange,
panelType,
]);
useEffect(() => {
if (selectedView && selectedView === ExplorerViews.LIST && handleSetConfig) {
handleSetConfig(defaultTo(panelTypes, PANEL_TYPES.LIST), DataSource.LOGS);
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [handleSetConfig, panelTypes]);
useEffect(() => {
const currentData = data?.payload?.data?.newResult?.data?.result || [];
if (currentData.length > 0 && currentData[0].list) {
@@ -546,19 +359,17 @@ function LogsExplorerViewsContainer({
pageSize,
minTime,
activeLogId,
panelType,
selectedView,
selectedPanelType,
dispatch,
selectedTime,
maxTime,
orderBy,
selectedPanelType,
]);
const chartData = useMemo(() => {
if (!stagedQuery) return [];
if (panelType === PANEL_TYPES.LIST) {
if (selectedPanelType === PANEL_TYPES.LIST) {
if (listChartData && listChartData.payload.data?.result.length > 0) {
return listChartData.payload.data.result;
}
@@ -578,7 +389,7 @@ function LogsExplorerViewsContainer({
const firstPayloadQueryArray = firstPayloadQuery ? [firstPayloadQuery] : [];
return isGroupByExist ? data.payload.data.result : firstPayloadQueryArray;
}, [stagedQuery, panelType, data, listChartData, listQuery]);
}, [stagedQuery, selectedPanelType, data, listChartData, listQuery]);
useEffect(() => {
if (
@@ -639,7 +450,7 @@ function LogsExplorerViewsContainer({
className="logs-frequency-chart"
isLoading={isFetchingListChartData || isLoadingListChartData}
data={chartData}
isLogsExplorerViews={panelType === PANEL_TYPES.LIST}
isLogsExplorerViews={selectedPanelType === PANEL_TYPES.LIST}
/>
</div>
)}
@@ -695,6 +506,7 @@ function LogsExplorerViewsContainer({
query={exportDefaultQuery}
onExport={handleExport}
sourcepage={DataSource.LOGS}
handleChangeSelectedView={handleChangeSelectedView}
/>
</div>
);

@@ -5,12 +5,12 @@ import { useGetExplorerQueryRange } from 'hooks/queryBuilder/useGetExplorerQuery
import { logsQueryRangeSuccessResponse } from 'mocks-server/__mockdata__/logs_query_range';
import { server } from 'mocks-server/server';
import { rest } from 'msw';
import { ExplorerViews } from 'pages/LogsExplorer/utils';
import { PreferenceContextProvider } from 'providers/preferences/context/PreferenceContextProvider';
import { QueryBuilderContext } from 'providers/QueryBuilder';
import { VirtuosoMockContext } from 'react-virtuoso';
import { fireEvent, render, RenderResult, waitFor } from 'tests/test-utils';
import { TagFilterItem } from 'types/api/queryBuilder/queryBuilderData';
import { LogsAggregatorOperator } from 'types/common/queryBuilder';
import LogsExplorerViews from '..';
import {
@@ -152,12 +152,12 @@ const renderer = (): RenderResult =>
>
<PreferenceContextProvider>
<LogsExplorerViews
selectedView={ExplorerViews.LIST}
setIsLoadingQueries={(): void => {}}
listQueryKeyRef={{ current: {} }}
chartQueryKeyRef={{ current: {} }}
setWarning={(): void => {}}
showLiveLogs={false}
handleChangeSelectedView={(): void => {}}
/>
</PreferenceContextProvider>
</VirtuosoMockContext.Provider>,
@@ -218,12 +218,12 @@ describe('LogsExplorerViews -', () => {
<QueryBuilderContext.Provider value={mockQueryBuilderContextValue}>
<PreferenceContextProvider>
<LogsExplorerViews
selectedView={ExplorerViews.LIST}
setIsLoadingQueries={(): void => {}}
listQueryKeyRef={{ current: {} }}
chartQueryKeyRef={{ current: {} }}
setWarning={(): void => {}}
showLiveLogs={false}
handleChangeSelectedView={(): void => {}}
/>
</PreferenceContextProvider>
</QueryBuilderContext.Provider>,
@@ -295,12 +295,12 @@ describe('LogsExplorerViews -', () => {
<QueryBuilderContext.Provider value={customContext as any}>
<PreferenceContextProvider>
<LogsExplorerViews
selectedView={ExplorerViews.LIST}
setIsLoadingQueries={(): void => {}}
listQueryKeyRef={{ current: {} }}
chartQueryKeyRef={{ current: {} }}
setWarning={(): void => {}}
showLiveLogs={false}
handleChangeSelectedView={(): void => {}}
/>
</PreferenceContextProvider>
</QueryBuilderContext.Provider>,
@@ -323,4 +323,120 @@ describe('LogsExplorerViews -', () => {
}
});
});
describe('Queries by View', () => {
it('builds Frequency Chart query with COUNT and severity_text grouping and activeLogId bound', async () => {
// Enable frequency chart via localstorage and provide activeLogId
(useCopyLogLink as jest.Mock).mockReturnValue({
activeLogId: ACTIVE_LOG_ID,
});
// Ensure default mock return exists
(useGetExplorerQueryRange as jest.Mock).mockReturnValue({
data: { payload: logsQueryRangeSuccessNewFormatResponse },
});
// Render with LIST panel type so the frequency chart hook runs with TIME_SERIES
render(
<VirtuosoMockContext.Provider
value={{ viewportHeight: 300, itemHeight: 100 }}
>
<PreferenceContextProvider>
<QueryBuilderContext.Provider
value={
{ ...mockQueryBuilderContextValue, panelType: PANEL_TYPES.LIST } as any
}
>
<LogsExplorerViews
setIsLoadingQueries={(): void => {}}
listQueryKeyRef={{ current: {} }}
chartQueryKeyRef={{ current: {} }}
setWarning={(): void => {}}
showLiveLogs={false}
handleChangeSelectedView={(): void => {}}
/>
</QueryBuilderContext.Provider>
</PreferenceContextProvider>
</VirtuosoMockContext.Provider>,
);
await waitFor(() => {
const chartCall = (useGetExplorerQueryRange as jest.Mock).mock.calls.find(
(call) => call[1] === PANEL_TYPES.TIME_SERIES && call[0],
);
expect(chartCall).toBeDefined();
if (chartCall) {
const frequencyQuery = chartCall[0];
const first = frequencyQuery.builder.queryData[0];
// Panel type used for chart fetch
expect(chartCall[1]).toBe(PANEL_TYPES.TIME_SERIES);
// Transformations
expect(first.aggregateOperator).toBe(LogsAggregatorOperator.COUNT);
expect(first.groupBy?.[0]?.key).toBe('severity_text');
expect(first.legend).toBe('{{severity_text}}');
expect(Array.isArray(first.orderBy) && first.orderBy.length === 0).toBe(
true,
);
expect(first.having?.expression).toBe('');
// activeLogId constraints
expect(first.filter?.expression).toContain(`id <= '${ACTIVE_LOG_ID}'`);
expect(
first.filters?.items?.some(
(it: any) =>
it.key?.key === 'id' && it.op === '<=' && it.value === ACTIVE_LOG_ID,
),
).toBe(true);
}
});
});
it('builds List View query with orderBy and clears groupBy/having', async () => {
(useCopyLogLink as jest.Mock).mockReturnValue({ activeLogId: undefined });
(useGetExplorerQueryRange as jest.Mock).mockReturnValue({
data: { payload: logsQueryRangeSuccessNewFormatResponse },
});
render(
<VirtuosoMockContext.Provider
value={{ viewportHeight: 300, itemHeight: 100 }}
>
<PreferenceContextProvider>
<QueryBuilderContext.Provider
value={
{ ...mockQueryBuilderContextValue, panelType: PANEL_TYPES.LIST } as any
}
>
<LogsExplorerViews
setIsLoadingQueries={(): void => {}}
listQueryKeyRef={{ current: {} }}
chartQueryKeyRef={{ current: {} }}
setWarning={(): void => {}}
showLiveLogs={false}
handleChangeSelectedView={(): void => {}}
/>
</QueryBuilderContext.Provider>
</PreferenceContextProvider>
</VirtuosoMockContext.Provider>,
);
await waitFor(() => {
const listCall = (useGetExplorerQueryRange as jest.Mock).mock.calls.find(
(call) => call[1] === PANEL_TYPES.LIST && call[0],
);
expect(listCall).toBeDefined();
if (listCall) {
const listQueryArg = listCall[0];
const first = listQueryArg.builder.queryData[0];
expect(first.groupBy?.length ?? 0).toBe(0);
expect(first.having?.expression).toBe('');
// Default orderBy should be timestamp desc, then id desc
expect(first.orderBy).toEqual([
{ columnName: 'timestamp', order: 'desc' },
{ columnName: 'id', order: 'desc' },
]);
// Ensure the query is enabled for fetch
expect(first.disabled).toBe(false);
}
});
});
});
});

@@ -115,19 +115,25 @@ describe('TopOperation API Integration', () => {
server.use(
rest.post(
'http://localhost/api/v1/service/top_operations',
'http://localhost/api/v2/service/top_operations',
async (req, res, ctx) => {
const body = await req.json();
apiCalls.push({ endpoint: TOP_OPERATIONS_ENDPOINT, body });
return res(ctx.status(200), ctx.json(mockTopOperationsData));
return res(
ctx.status(200),
ctx.json({ status: 'success', data: mockTopOperationsData }),
);
},
),
rest.post(
'http://localhost/api/v1/service/entry_point_operations',
'http://localhost/api/v2/service/entry_point_operations',
async (req, res, ctx) => {
const body = await req.json();
apiCalls.push({ endpoint: ENTRY_POINT_OPERATIONS_ENDPOINT, body });
return res(ctx.status(200), ctx.json({ data: mockEntryPointData }));
return res(
ctx.status(200),
ctx.json({ status: 'success', data: mockEntryPointData }),
);
},
),
);
@@ -162,6 +168,7 @@ describe('TopOperation API Integration', () => {
end: `${defaultApiCallExpectation.end}`,
service: defaultApiCallExpectation.service,
tags: defaultApiCallExpectation.selectedTags,
limit: 5000,
});
});
@@ -195,6 +202,7 @@ describe('TopOperation API Integration', () => {
end: `${defaultApiCallExpectation.end}`,
service: defaultApiCallExpectation.service,
tags: defaultApiCallExpectation.selectedTags,
limit: 5000,
});
});
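The updated handlers above move the mocked endpoints from `/api/v1/...` to `/api/v2/...` and wrap payloads in a `{ status: 'success', data }` envelope instead of returning the bare body. A small sketch of unwrapping that envelope on the client side (the envelope shape is taken from the mocked responses; the helper name is hypothetical, not part of the SigNoz codebase):

```typescript
interface ApiV2Envelope<T> {
	status: string;
	data: T;
}

// Unwrap a v2 envelope, throwing when the backend reports a failure.
function unwrapV2<T>(resp: ApiV2Envelope<T>): T {
	if (resp.status !== 'success') {
		throw new Error(`API returned status: ${resp.status}`);
	}
	return resp.data;
}

const topOps = unwrapV2({
	status: 'success',
	data: [{ name: 'GET /health', p99: 12 }],
});
// topOps is the bare array the v1 endpoint used to return directly
```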

@@ -41,6 +41,7 @@ function TablePanelWrapper({
panelType={widget.panelTypes}
queryRangeRequest={queryRangeRequest}
decimalPrecision={widget.decimalPrecision}
hiddenColumns={widget.hiddenColumns}
// eslint-disable-next-line react/jsx-props-no-spreading
{...GRID_TABLE_CONFIG}
/>

@@ -17,7 +17,7 @@ export type QueryTableProps = Omit<
query: Query;
renderActionCell?: (record: RowData) => ReactNode;
modifyColumns?: (columns: ColumnsType<RowData>) => ColumnsType<RowData>;
renderColumnCell?: Record<string, (record: RowData) => ReactNode>;
renderColumnCell?: Record<string, (...args: any[]) => ReactNode>;
downloadOption?: DownloadOptions;
columns?: ColumnsType<RowData>;
dataSource?: RowData[];

@@ -57,7 +57,7 @@ function ResourceAttributesFilter(): JSX.Element | null {
query={query}
onChange={handleChangeTagFilters}
operatorConfigKey={OperatorConfigKeys.EXCEPTIONS}
hideSpanScopeSelector={false}
hideSpanScopeSelector
/>
</div>
);


@@ -245,5 +245,81 @@ describe('useQueryBuilderOperations - Empty Aggregate Attribute Type', () => {
}),
);
});
it('should reset operators when going from gauge -> empty -> gauge', () => {
// Start with a gauge metric
const gaugeQuery: IBuilderQuery = {
...defaultMockQuery,
aggregateAttribute: {
key: 'original_gauge',
dataType: DataTypes.Float64,
type: ATTRIBUTE_TYPES.GAUGE,
} as BaseAutocompleteData,
aggregations: [
{
timeAggregation: MetricAggregateOperator.COUNT_DISTINCT,
metricName: 'original_gauge',
temporality: '',
spaceAggregation: '',
},
],
};
const { result, rerender } = renderHook(
({ query }) =>
useQueryOperations({
query,
index: 0,
entityVersion: ENTITY_VERSION_V5,
}),
{
initialProps: { query: gaugeQuery },
},
);
// Re-render with empty attribute
const emptyAttribute: BaseAutocompleteData = {
key: '',
dataType: DataTypes.Float64,
type: '',
};
const emptyQuery: IBuilderQuery = {
...defaultMockQuery,
aggregateAttribute: emptyAttribute,
aggregations: [
{
timeAggregation: MetricAggregateOperator.COUNT,
metricName: '',
temporality: '',
spaceAggregation: MetricAggregateOperator.SUM,
},
],
};
rerender({ query: emptyQuery });
// Change to a new gauge metric
const newGaugeAttribute: BaseAutocompleteData = {
key: 'new_gauge',
dataType: DataTypes.Float64,
type: ATTRIBUTE_TYPES.GAUGE,
};
act(() => {
result.current.handleChangeAggregatorAttribute(newGaugeAttribute);
});
expect(mockHandleSetQueryData).toHaveBeenLastCalledWith(
0,
expect.objectContaining({
aggregateAttribute: newGaugeAttribute,
aggregations: [
{
timeAggregation: MetricAggregateOperator.AVG,
metricName: 'new_gauge',
temporality: '',
spaceAggregation: '',
},
],
}),
);
});
});
});


@@ -89,6 +89,8 @@ export const useQueryOperations: UseQueryOperations = ({
name: metricName,
type: metricType,
});
} else {
setPreviousMetricInfo(null);
}
}
}, [query]);
@@ -295,7 +297,6 @@ export const useQueryOperations: UseQueryOperations = ({
if (!isEditMode) {
// Get current metric info
const currentMetricName = newQuery.aggregateAttribute?.key || '';
const currentMetricType = newQuery.aggregateAttribute?.type || '';
const prevMetricType = previousMetricInfo?.type
@@ -378,14 +379,6 @@ export const useQueryOperations: UseQueryOperations = ({
];
}
}
// Update the tracked metric info for next comparison only if we have valid data
if (currentMetricName && currentMetricType) {
setPreviousMetricInfo({
name: currentMetricName,
type: currentMetricType,
});
}
}
}


@@ -9,6 +9,12 @@ import { DataSource } from 'types/common/queryBuilder';
import { useGetSearchQueryParam } from './queryBuilder/useGetSearchQueryParam';
import { useQueryBuilder } from './queryBuilder/useQueryBuilder';
export interface ICurrentQueryData {
name: string;
id: string;
query: Query;
}
export const useHandleExplorerTabChange = (): {
handleExplorerTabChange: (
type: string,
@@ -87,9 +93,3 @@ export const useHandleExplorerTabChange = (): {
return { handleExplorerTabChange };
};
interface ICurrentQueryData {
name: string;
id: string;
query: Query;
}


@@ -26,8 +26,11 @@ export const handlers = [
res(ctx.status(200), ctx.json(queryRangeSuccessResponse)),
),
rest.post('http://localhost/api/v1/services', (req, res, ctx) =>
res(ctx.status(200), ctx.json(serviceSuccessResponse)),
rest.post('http://localhost/api/v2/services', (req, res, ctx) =>
res(
ctx.status(200),
ctx.json({ status: 'success', data: serviceSuccessResponse }),
),
),
rest.post(

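A recurring pattern in these handler updates is that the v1 endpoints returned the payload bare, while the v2 endpoints wrap it in a `{ status: 'success', data }` envelope. A minimal sketch of that envelope (the `successEnvelope` helper is illustrative, not part of the codebase):

```typescript
// Sketch of the response envelope the updated v2 mock handlers return.
interface ApiEnvelope<T> {
	status: 'success';
	data: T;
}

function successEnvelope<T>(data: T): ApiEnvelope<T> {
	return { status: 'success', data };
}

// A v1 handler returned the payload bare; a v2 handler wraps it:
const payload = [{ serviceName: 'frontend', p99: 120 }];
const v2Body = successEnvelope(payload);
console.log(v2Body.status); // → 'success'
```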

@@ -10,8 +10,7 @@ import QuickFilters from 'components/QuickFilters/QuickFilters';
import { QuickFiltersSource, SignalType } from 'components/QuickFilters/types';
import WarningPopover from 'components/WarningPopover/WarningPopover';
import { LOCALSTORAGE } from 'constants/localStorage';
import { QueryParams } from 'constants/query';
import { initialQueriesMap, PANEL_TYPES } from 'constants/queryBuilder';
import { PANEL_TYPES } from 'constants/queryBuilder';
import LogExplorerQuerySection from 'container/LogExplorerQuerySection';
import LogsExplorerViewsContainer from 'container/LogsExplorerViews';
import {
@@ -25,34 +24,33 @@ import RightToolbarActions from 'container/QueryBuilder/components/ToolbarAction
import Toolbar from 'container/Toolbar/Toolbar';
import { useGetPanelTypesQueryParam } from 'hooks/queryBuilder/useGetPanelTypesQueryParam';
import { useQueryBuilder } from 'hooks/queryBuilder/useQueryBuilder';
import { useShareBuilderUrl } from 'hooks/queryBuilder/useShareBuilderUrl';
import { useHandleExplorerTabChange } from 'hooks/useHandleExplorerTabChange';
import {
ICurrentQueryData,
useHandleExplorerTabChange,
} from 'hooks/useHandleExplorerTabChange';
import useUrlQueryData from 'hooks/useUrlQueryData';
import { isEmpty, isEqual, isNull } from 'lodash-es';
import { defaultTo, isEmpty, isEqual, isNull } from 'lodash-es';
import ErrorBoundaryFallback from 'pages/ErrorBoundaryFallback/ErrorBoundaryFallback';
import { EventSourceProvider } from 'providers/EventSource';
import { usePreferenceContext } from 'providers/preferences/context/PreferenceContextProvider';
import { useCallback, useEffect, useMemo, useRef, useState } from 'react';
import { useSearchParams } from 'react-router-dom-v5-compat';
import { Warning } from 'types/api';
import { Query } from 'types/api/queryBuilder/queryBuilderData';
import { DataSource } from 'types/common/queryBuilder';
import {
getExplorerViewForPanelType,
getExplorerViewFromUrl,
explorerViewToPanelType,
panelTypeToExplorerView,
} from 'utils/explorerUtils';
import { ExplorerViews } from './utils';
function LogsExplorer(): JSX.Element {
const [searchParams] = useSearchParams();
const [showLiveLogs, setShowLiveLogs] = useState<boolean>(false);
// Get panel type from URL
const panelTypesFromUrl = useGetPanelTypesQueryParam(PANEL_TYPES.LIST);
const [selectedView, setSelectedView] = useState<ExplorerViews>(() =>
getExplorerViewFromUrl(searchParams, panelTypesFromUrl),
const [selectedView, setSelectedView] = useState<ExplorerViews>(
() => panelTypeToExplorerView[panelTypesFromUrl],
);
const { logs } = usePreferenceContext();
const { preferences } = logs;
@@ -67,30 +65,7 @@ function LogsExplorer(): JSX.Element {
return true;
});
// Update selected view when panel type from URL changes
useEffect(() => {
if (panelTypesFromUrl) {
const newView = getExplorerViewForPanelType(panelTypesFromUrl);
if (newView && newView !== selectedView) {
setSelectedView(newView);
}
}
}, [panelTypesFromUrl, selectedView]);
// Update URL when selectedView changes (without triggering re-renders)
useEffect(() => {
const url = new URL(window.location.href);
url.searchParams.set(QueryParams.selectedExplorerView, selectedView);
window.history.replaceState({}, '', url.toString());
}, [selectedView]);
const {
handleRunQuery,
handleSetConfig,
updateAllQueriesOperators,
currentQuery,
updateQueriesData,
} = useQueryBuilder();
const { handleRunQuery, handleSetConfig } = useQueryBuilder();
const { handleExplorerTabChange } = useHandleExplorerTabChange();
@@ -102,49 +77,12 @@ function LogsExplorer(): JSX.Element {
const [warning, setWarning] = useState<Warning | undefined>(undefined);
const [shouldReset, setShouldReset] = useState(false);
const [defaultQuery, setDefaultQuery] = useState<Query>(() =>
updateAllQueriesOperators(
initialQueriesMap.logs,
PANEL_TYPES.LIST,
DataSource.LOGS,
),
);
const handleChangeSelectedView = useCallback(
(view: ExplorerViews): void => {
if (selectedView === ExplorerViews.LIST) {
handleSetConfig(PANEL_TYPES.LIST, DataSource.LOGS);
}
if (view === ExplorerViews.LIST) {
if (
selectedView !== ExplorerViews.LIST &&
currentQuery?.builder?.queryData?.[0]
) {
const filterToRetain = currentQuery.builder.queryData[0].filter;
const newDefaultQuery = updateAllQueriesOperators(
initialQueriesMap.logs,
PANEL_TYPES.LIST,
DataSource.LOGS,
);
const newListQuery = updateQueriesData(
newDefaultQuery,
'queryData',
(item, index) => {
if (index === 0) {
return { ...item, filter: filterToRetain };
}
return item;
},
);
setDefaultQuery(newListQuery);
}
setShouldReset(true);
}
(view: ExplorerViews, querySearchParameters?: ICurrentQueryData): void => {
handleSetConfig(
defaultTo(explorerViewToPanelType[view], PANEL_TYPES.LIST),
DataSource.LOGS,
);
setSelectedView(view);
@@ -153,38 +91,13 @@ function LogsExplorer(): JSX.Element {
}
handleExplorerTabChange(
view === ExplorerViews.TIMESERIES ? PANEL_TYPES.TIME_SERIES : view,
explorerViewToPanelType[view],
querySearchParameters,
);
},
[
handleSetConfig,
handleExplorerTabChange,
selectedView,
currentQuery,
updateAllQueriesOperators,
updateQueriesData,
setSelectedView,
],
[handleSetConfig, handleExplorerTabChange, setSelectedView],
);
useShareBuilderUrl({
defaultValue: defaultQuery,
forceReset: shouldReset,
});
useEffect(() => {
if (shouldReset) {
setShouldReset(false);
setDefaultQuery(
updateAllQueriesOperators(
initialQueriesMap.logs,
PANEL_TYPES.LIST,
DataSource.LOGS,
),
);
}
}, [shouldReset, updateAllQueriesOperators]);
const handleFilterVisibilityChange = (): void => {
setLocalStorageApi(
LOCALSTORAGE.SHOW_LOGS_QUICK_FILTERS,
@@ -399,12 +312,12 @@ function LogsExplorer(): JSX.Element {
</div>
<div className="logs-explorer-views">
<LogsExplorerViewsContainer
selectedView={selectedView}
listQueryKeyRef={listQueryKeyRef}
chartQueryKeyRef={chartQueryKeyRef}
setIsLoadingQueries={setIsLoadingQueries}
setWarning={setWarning}
showLiveLogs={showLiveLogs}
handleChangeSelectedView={handleChangeSelectedView}
/>
</div>
</div>


@@ -19,8 +19,6 @@ function MetricsApplication(): JSX.Element {
servicename: string;
}>();
const servicename = decodeURIComponent(encodedServiceName);
const activeKey = useMetricsApplicationTabKey();
const urlQuery = useUrlQuery();
@@ -46,7 +44,7 @@ function MetricsApplication(): JSX.Element {
const onTabChange = (tab: string): void => {
urlQuery.set(QueryParams.tab, tab);
safeNavigate(`/services/${servicename}?${urlQuery.toString()}`);
safeNavigate(`/services/${encodedServiceName}?${urlQuery.toString()}`);
};
return (


@@ -15,7 +15,7 @@ import { FilterSelect } from 'components/CeleryOverview/CeleryOverviewConfigOpti
import { QueryParams } from 'constants/query';
import { initialQueriesMap } from 'constants/queryBuilder';
import QueryBuilderSearchV2 from 'container/QueryBuilder/filters/QueryBuilderSearchV2/QueryBuilderSearchV2';
import { ChevronDown, HardHat, PencilLine } from 'lucide-react';
import { ChevronDown, PencilLine } from 'lucide-react';
import { LatencyPointers } from 'pages/TracesFunnelDetails/constants';
import { useFunnelContext } from 'pages/TracesFunnels/FunnelContext';
import { useAppContext } from 'providers/App/App';
@@ -194,7 +194,6 @@ function FunnelStep({
}
hasPopupContainer={false}
placeholder="Search for filters..."
suffixIcon={<HardHat size={12} color="var(--bg-vanilla-400)" />}
rootClassName="traces-funnel-where-filter"
/>
</Form.Item>


@@ -1,12 +1,17 @@
:root {
--bg-vanilla-100-rgb: 255, 255, 255;
}
.funnel-table {
border-radius: 3px;
border: 1px solid var(--bg-slate-500);
background: linear-gradient(
0deg,
rgba(171, 189, 255, 0.01) 0%,
rgba(171, 189, 255, 0.01) 100%
),
#0b0c0e;
table {
background: linear-gradient(
0deg,
rgba(171, 189, 255, 0.01) 0%,
rgba(171, 189, 255, 0.01) 100%
),
#0b0c0e;
}
&__header {
padding: 12px 14px 12px;
@@ -97,7 +102,7 @@
}
.table-row-dark {
background: var(--bg-ink-300);
background: rgba(var(--bg-vanilla-100-rgb), 0.01);
}
.trace-id-cell {


@@ -136,6 +136,7 @@ export interface Widgets extends IBaseWidget {
query: Query;
renderColumnCell?: QueryTableProps['renderColumnCell'];
customColTitles?: Record<string, string>;
hiddenColumns?: string[];
}
export interface PromQLWidgets extends IBaseWidget {


@@ -26,7 +26,16 @@ describe('extractQueryPairs', () => {
valuesPosition: [],
hasNegation: true,
isMultiValue: false,
position: expect.any(Object),
position: {
keyStart: 0,
keyEnd: 5,
negationEnd: 9,
negationStart: 7,
operatorEnd: 16,
operatorStart: 11,
valueEnd: undefined,
valueStart: undefined,
},
isComplete: false,
},
{
@@ -37,7 +46,16 @@ describe('extractQueryPairs', () => {
valuesPosition: [],
hasNegation: true,
isMultiValue: false,
position: expect.any(Object),
position: {
keyEnd: 25,
keyStart: 22,
negationEnd: 29,
negationStart: 27,
operatorEnd: 34,
operatorStart: 31,
valueEnd: 42,
valueStart: 36,
},
isComplete: true,
},
]);
@@ -54,12 +72,11 @@ describe('extractQueryPairs', () => {
isComplete: true,
value: expect.stringMatching(/^\(.*\)$/),
valueList: ['1', '2', '3'],
valuesPosition: expect.arrayContaining([
expect.objectContaining({
start: expect.any(Number),
end: expect.any(Number),
}),
]),
valuesPosition: [
{ start: 7, end: 7 },
{ start: 10, end: 10 },
{ start: 13, end: 13 },
],
}),
]);
});
@@ -75,6 +92,31 @@ describe('extractQueryPairs', () => {
isComplete: true,
value: expect.stringMatching(/^\[.*\]$/),
valueList: ["'a'", "'b'", "'c'"],
valuesPosition: [
{ start: 11, end: 13 },
{ start: 18, end: 20 },
{ start: 25, end: 27 },
],
}),
]);
});
test('should extract correct query pairs when the query has space at the start of the value', () => {
const input = " label IN [ 'a' , 'b' , 'c' ]";
const result = extractQueryPairs(input);
expect(result).toEqual([
expect.objectContaining({
key: 'label',
operator: 'IN',
isMultiValue: true,
isComplete: true,
value: expect.stringMatching(/^\[.*\]$/),
valueList: ["'a'", "'b'", "'c'"],
valuesPosition: [
{ start: 13, end: 15 },
{ start: 20, end: 22 },
{ start: 27, end: 29 },
],
}),
]);
});
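These tests replace loose `expect.any(Number)` matchers with exact `{ start, end }` indices of each quoted value. A sketch of how such absolute positions arise when scanning a bracketed list (illustrative only, not the real `extractQueryPairs`):

```typescript
// Illustrative scanner: record the [start, end] index of each single-quoted
// value in the input, where start/end are the indices of the opening and
// closing quote. Not the real extractQueryPairs implementation.
interface ValuePosition {
	start: number; // index of the opening quote
	end: number; // index of the closing quote
}

function quotedValuePositions(input: string): ValuePosition[] {
	const positions: ValuePosition[] = [];
	let i = 0;
	while (i < input.length) {
		if (input[i] === "'") {
			const close = input.indexOf("'", i + 1);
			if (close === -1) break; // unterminated quote: stop scanning
			positions.push({ start: i, end: close });
			i = close + 1;
		} else {
			i += 1;
		}
	}
	return positions;
}

const positions = quotedValuePositions("label IN ['a', 'b', 'c']");
console.log(positions);
// → [{ start: 10, end: 12 }, { start: 15, end: 17 }, { start: 20, end: 22 }]
```

Leading whitespace shifts every index, which is what the "space at the start of the value" test above pins down.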


@@ -15,6 +15,13 @@ export const panelTypeToExplorerView: Record<PANEL_TYPES, ExplorerViews> = {
[PANEL_TYPES.EMPTY_WIDGET]: ExplorerViews.LIST,
};
export const explorerViewToPanelType = {
[ExplorerViews.LIST]: PANEL_TYPES.LIST,
[ExplorerViews.TIMESERIES]: PANEL_TYPES.TIME_SERIES,
[ExplorerViews.TRACE]: PANEL_TYPES.TRACE,
[ExplorerViews.TABLE]: PANEL_TYPES.TABLE,
} as Record<ExplorerViews, PANEL_TYPES>;
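The new `explorerViewToPanelType` map is, on these four views, the inverse of the existing `panelTypeToExplorerView`, which is what lets the LogsExplorer refactor drop its URL-sync effects. A sketch of the forward/inverse pattern with stand-in enums (the enum values here are assumptions, not the real constants):

```typescript
// Stand-in enums mirroring the shape of the real PANEL_TYPES and
// ExplorerViews; only the mapping pattern matters here.
enum PanelTypes {
	LIST = 'list',
	TIME_SERIES = 'graph',
	TRACE = 'trace',
	TABLE = 'table',
}

enum Views {
	LIST = 'list',
	TIMESERIES = 'timeseries',
	TRACE = 'trace',
	TABLE = 'table',
}

const viewToPanel: Record<Views, PanelTypes> = {
	[Views.LIST]: PanelTypes.LIST,
	[Views.TIMESERIES]: PanelTypes.TIME_SERIES,
	[Views.TRACE]: PanelTypes.TRACE,
	[Views.TABLE]: PanelTypes.TABLE,
};

const panelToView: Record<PanelTypes, Views> = {
	[PanelTypes.LIST]: Views.LIST,
	[PanelTypes.TIME_SERIES]: Views.TIMESERIES,
	[PanelTypes.TRACE]: Views.TRACE,
	[PanelTypes.TABLE]: Views.TABLE,
};

// Round-trip: every view maps to a panel type and back to itself.
for (const view of Object.values(Views)) {
	console.assert(panelToView[viewToPanel[view]] === view);
}
```

Keeping the two maps as mutual inverses means the selected view can be derived from the panel type in the URL and vice versa without a separate sync effect.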
/**
* Get the explorer view based on panel type from URL or saved view
* @param searchParams - URL search parameters

View File

@@ -13832,11 +13832,6 @@ ora@^5.4.1:
strip-ansi "^6.0.0"
wcwidth "^1.0.1"
os-tmpdir@~1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/os-tmpdir/-/os-tmpdir-1.0.2.tgz#bbe67406c79aa85c5cfec766fe5734555dfa1274"
integrity sha512-D2FR03Vir7FIu45XBY20mTb+/ZSWB00sjU9jdQXt83gDrI4Ztz5Fs7/yy74g2N5SVQY4xY1qDr4rNddwYRVX0g==
outvariant@^1.2.1, outvariant@^1.4.0:
version "1.4.0"
resolved "https://registry.yarnpkg.com/outvariant/-/outvariant-1.4.0.tgz#e742e4bda77692da3eca698ef5bfac62d9fba06e"
@@ -17325,12 +17320,10 @@ tinycolor2@1, tinycolor2@1.6.0, tinycolor2@^1.6.0:
resolved "https://registry.yarnpkg.com/tinycolor2/-/tinycolor2-1.6.0.tgz#f98007460169b0263b97072c5ae92484ce02d09e"
integrity sha512-XPaBkWQJdsf3pLKJV9p4qN/S+fm2Oj8AIPo1BTUhg5oxkvm9+SVEGFdhyOz7tTdUTfvxMiAs4sp6/eZO2Ew+pw==
tmp@^0.0.33:
version "0.0.33"
resolved "https://registry.yarnpkg.com/tmp/-/tmp-0.0.33.tgz#6d34335889768d21b2bcda0aa277ced3b1bfadf9"
integrity sha512-jRCJlojKnZ3addtTOjdIqoRuPEKBvNXcGYqzO6zWZX8KfKEpnGY5jfggJQ3EjKuu8D4bJRr0y+cYJFmYbImXGw==
dependencies:
os-tmpdir "~1.0.2"
tmp@0.2.4, tmp@^0.0.33:
version "0.2.4"
resolved "https://registry.yarnpkg.com/tmp/-/tmp-0.2.4.tgz#c6db987a2ccc97f812f17137b36af2b6521b0d13"
integrity sha512-UdiSoX6ypifLmrfQ/XfiawN6hkjSBpCjhKxxZcWlUUmoXLaCKQU0bx4HF/tdDK2uzRuchf1txGvrWBzYREssoQ==
tmpl@1.0.5:
version "1.0.5"

go.mod

@@ -9,11 +9,11 @@ require (
github.com/DATA-DOG/go-sqlmock v1.5.2
github.com/SigNoz/govaluate v0.0.0-20240203125216-988004ccc7fd
github.com/SigNoz/signoz-otel-collector v0.129.4
github.com/allegro/bigcache/v3 v3.1.0
github.com/antlr4-go/antlr/v4 v4.13.1
github.com/antonmedv/expr v1.15.3
github.com/cespare/xxhash/v2 v2.3.0
github.com/coreos/go-oidc/v3 v3.14.1
github.com/dgraph-io/ristretto/v2 v2.3.0
github.com/dustin/go-humanize v1.0.1
github.com/go-co-op/gocron v1.30.1
github.com/go-openapi/runtime v0.28.0

go.sum

@@ -118,8 +118,6 @@ github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRF
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b h1:mimo19zliBX/vSQ6PWWSL9lK8qwHozUj03+zLoEB8O0=
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b/go.mod h1:fvzegU4vN3H1qMT+8wDmzjAcDONcgo2/SZ/TyfdUOFs=
github.com/allegro/bigcache/v3 v3.1.0 h1:H2Vp8VOvxcrB91o86fUSVJFqeuz8kpyyB02eH3bSzwk=
github.com/allegro/bigcache/v3 v3.1.0/go.mod h1:aPyh7jEvrog9zAwx5N7+JUQX5dZTSGpxF1LAR4dr35I=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
@@ -211,6 +209,10 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dennwc/varint v1.0.0 h1:kGNFFSSw8ToIy3obO/kKr8U9GZYUAxQEVuix4zfDWzE=
github.com/dennwc/varint v1.0.0/go.mod h1:hnItb35rvZvJrbTALZtY/iQfDs48JKRG1RPpgziApxA=
github.com/dgraph-io/ristretto/v2 v2.3.0 h1:qTQ38m7oIyd4GAed/QkUZyPFNMnvVWyazGXRwvOt5zk=
github.com/dgraph-io/ristretto/v2 v2.3.0/go.mod h1:gpoRV3VzrEY1a9dWAYV6T1U7YzfgttXdd/ZzL1s9OZM=
github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da h1:aIftn67I1fkbMa512G+w+Pxci9hJPB8oMnkcP3iZF38=
github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/digitalocean/godo v1.144.0 h1:rDCsmpwcDe5egFQ3Ae45HTde685/GzX037mWRMPufW0=


@@ -18,6 +18,8 @@ type AuthZ interface {
// CheckWithTupleCreation takes upon the responsibility for generating the tuples alongside everything Check does.
CheckWithTupleCreation(context.Context, authtypes.Claims, valuer.UUID, authtypes.Relation, authtypes.Relation, authtypes.Typeable, []authtypes.Selector) error
CheckWithTupleCreationWithoutClaims(context.Context, valuer.UUID, authtypes.Relation, authtypes.Relation, authtypes.Typeable, []authtypes.Selector) error
// BatchCheck returns an error when the upstream authorization server is unavailable, or when the subject (s) does not have relation (r) on object (o) for any of the tuples.
BatchCheck(context.Context, []*openfgav1.TupleKey) error


@@ -2,6 +2,7 @@ package openfgaauthz
import (
"context"
"strconv"
"sync"
authz "github.com/SigNoz/signoz/pkg/authz"
@@ -94,6 +95,153 @@ func (provider *provider) Stop(ctx context.Context) error {
return nil
}
func (provider *provider) Check(ctx context.Context, tupleReq *openfgav1.TupleKey) error {
storeID, modelID := provider.getStoreIDandModelID()
checkResponse, err := provider.openfgaServer.Check(
ctx,
&openfgav1.CheckRequest{
StoreId: storeID,
AuthorizationModelId: modelID,
TupleKey: &openfgav1.CheckRequestTupleKey{
User: tupleReq.User,
Relation: tupleReq.Relation,
Object: tupleReq.Object,
},
})
if err != nil {
return errors.Newf(errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "authorization server is unavailable").WithAdditional(err.Error())
}
if !checkResponse.Allowed {
return errors.Newf(errors.TypeForbidden, authtypes.ErrCodeAuthZForbidden, "subject %s cannot %s object %s", tupleReq.User, tupleReq.Relation, tupleReq.Object)
}
return nil
}
func (provider *provider) BatchCheck(ctx context.Context, tupleReq []*openfgav1.TupleKey) error {
storeID, modelID := provider.getStoreIDandModelID()
batchCheckItems := make([]*openfgav1.BatchCheckItem, 0)
for idx, tuple := range tupleReq {
batchCheckItems = append(batchCheckItems, &openfgav1.BatchCheckItem{
TupleKey: &openfgav1.CheckRequestTupleKey{
User: tuple.User,
Relation: tuple.Relation,
Object: tuple.Object,
},
// the batch check response result is a map keyed by this correlation ID.
CorrelationId: strconv.Itoa(idx),
})
}
checkResponse, err := provider.openfgaServer.BatchCheck(
ctx,
&openfgav1.BatchCheckRequest{
StoreId: storeID,
AuthorizationModelId: modelID,
Checks: batchCheckItems,
})
if err != nil {
return errors.Newf(errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "authorization server is unavailable").WithAdditional(err.Error())
}
for _, checkResponse := range checkResponse.Result {
if checkResponse.GetAllowed() {
return nil
}
}
return errors.New(errors.TypeForbidden, authtypes.ErrCodeAuthZForbidden, "")
}
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, _ authtypes.Relation, translation authtypes.Relation, _ authtypes.Typeable, _ []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableUser, claims.UserID, orgID, nil)
if err != nil {
return err
}
tuples, err := authtypes.TypeableOrganization.Tuples(subject, translation, []authtypes.Selector{authtypes.MustNewSelector(authtypes.TypeOrganization, orgID.StringValue())}, orgID)
if err != nil {
return err
}
err = provider.BatchCheck(ctx, tuples)
if err != nil {
return err
}
return nil
}
func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Context, orgID valuer.UUID, _ authtypes.Relation, translation authtypes.Relation, _ authtypes.Typeable, _ []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.String(), orgID, nil)
if err != nil {
return err
}
tuples, err := authtypes.TypeableOrganization.Tuples(subject, translation, []authtypes.Selector{authtypes.MustNewSelector(authtypes.TypeOrganization, orgID.StringValue())}, orgID)
if err != nil {
return err
}
err = provider.BatchCheck(ctx, tuples)
if err != nil {
return err
}
return nil
}
func (provider *provider) Write(ctx context.Context, additions []*openfgav1.TupleKey, deletions []*openfgav1.TupleKey) error {
storeID, modelID := provider.getStoreIDandModelID()
deletionTuplesWithoutCondition := make([]*openfgav1.TupleKeyWithoutCondition, len(deletions))
for idx, tuple := range deletions {
deletionTuplesWithoutCondition[idx] = &openfgav1.TupleKeyWithoutCondition{User: tuple.User, Object: tuple.Object, Relation: tuple.Relation}
}
_, err := provider.openfgaServer.Write(ctx, &openfgav1.WriteRequest{
StoreId: storeID,
AuthorizationModelId: modelID,
Writes: func() *openfgav1.WriteRequestWrites {
if len(additions) == 0 {
return nil
}
return &openfgav1.WriteRequestWrites{
TupleKeys: additions,
OnDuplicate: "ignore",
}
}(),
Deletes: func() *openfgav1.WriteRequestDeletes {
if len(deletionTuplesWithoutCondition) == 0 {
return nil
}
return &openfgav1.WriteRequestDeletes{
TupleKeys: deletionTuplesWithoutCondition,
OnMissing: "ignore",
}
}(),
})
return err
}
func (provider *provider) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, typeable authtypes.Typeable) ([]*authtypes.Object, error) {
storeID, modelID := provider.getStoreIDandModelID()
response, err := provider.openfgaServer.ListObjects(ctx, &openfgav1.ListObjectsRequest{
StoreId: storeID,
AuthorizationModelId: modelID,
User: subject,
Relation: relation.StringValue(),
Type: typeable.Type().StringValue(),
})
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "cannot list objects for subject %s with relation %s for type %s", subject, relation.StringValue(), typeable.Type().StringValue())
}
return authtypes.MustNewObjectsFromStringSlice(response.Objects), nil
}
func (provider *provider) getOrCreateStore(ctx context.Context, name string) (string, error) {
stores, err := provider.openfgaServer.ListStores(ctx, &openfgav1.ListStoresRequest{})
if err != nil {
@@ -176,112 +324,12 @@ func (provider *provider) isModelEqual(expected *openfgav1.AuthorizationModel, a
}
func (provider *provider) Check(ctx context.Context, tupleReq *openfgav1.TupleKey) error {
checkResponse, err := provider.openfgaServer.Check(
ctx,
&openfgav1.CheckRequest{
StoreId: provider.storeID,
AuthorizationModelId: provider.modelID,
TupleKey: &openfgav1.CheckRequestTupleKey{
User: tupleReq.User,
Relation: tupleReq.Relation,
Object: tupleReq.Object,
},
})
if err != nil {
return errors.Newf(errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "authorization server is unavailable").WithAdditional(err.Error())
}
func (provider *provider) getStoreIDandModelID() (string, string) {
provider.mtx.RLock()
defer provider.mtx.RUnlock()
if !checkResponse.Allowed {
return errors.Newf(errors.TypeForbidden, authtypes.ErrCodeAuthZForbidden, "subject %s cannot %s object %s", tupleReq.User, tupleReq.Relation, tupleReq.Object)
}
storeID := provider.storeID
modelID := provider.modelID
return nil
}
func (provider *provider) BatchCheck(ctx context.Context, tupleReq []*openfgav1.TupleKey) error {
batchCheckItems := make([]*openfgav1.BatchCheckItem, 0)
for _, tuple := range tupleReq {
batchCheckItems = append(batchCheckItems, &openfgav1.BatchCheckItem{
TupleKey: &openfgav1.CheckRequestTupleKey{
User: tuple.User,
Relation: tuple.Relation,
Object: tuple.Object,
},
})
}
checkResponse, err := provider.openfgaServer.BatchCheck(
ctx,
&openfgav1.BatchCheckRequest{
StoreId: provider.storeID,
AuthorizationModelId: provider.modelID,
Checks: batchCheckItems,
})
if err != nil {
return errors.Newf(errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "authorization server is unavailable").WithAdditional(err.Error())
}
for _, checkResponse := range checkResponse.Result {
if checkResponse.GetAllowed() {
return nil
}
}
return errors.New(errors.TypeForbidden, authtypes.ErrCodeAuthZForbidden, "")
}
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, _ authtypes.Relation, translation authtypes.Relation, _ authtypes.Typeable, _ []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeUser, claims.UserID, authtypes.Relation{})
if err != nil {
return err
}
tuples, err := authtypes.TypeableOrganization.Tuples(subject, translation, []authtypes.Selector{authtypes.MustNewSelector(authtypes.TypeOrganization, orgID.StringValue())}, orgID)
if err != nil {
return err
}
err = provider.BatchCheck(ctx, tuples)
if err != nil {
return err
}
return nil
}
func (provider *provider) Write(ctx context.Context, additions []*openfgav1.TupleKey, deletions []*openfgav1.TupleKey) error {
deletionTuplesWithoutCondition := make([]*openfgav1.TupleKeyWithoutCondition, len(deletions))
for idx, tuple := range deletions {
deletionTuplesWithoutCondition[idx] = &openfgav1.TupleKeyWithoutCondition{User: tuple.User, Object: tuple.Object, Relation: tuple.Relation}
}
_, err := provider.openfgaServer.Write(ctx, &openfgav1.WriteRequest{
StoreId: provider.storeID,
AuthorizationModelId: provider.modelID,
Writes: &openfgav1.WriteRequestWrites{
TupleKeys: additions,
},
Deletes: &openfgav1.WriteRequestDeletes{
TupleKeys: deletionTuplesWithoutCondition,
},
})
return err
}
func (provider *provider) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, typeable authtypes.Typeable) ([]*authtypes.Object, error) {
response, err := provider.openfgaServer.ListObjects(ctx, &openfgav1.ListObjectsRequest{
StoreId: provider.storeID,
AuthorizationModelId: provider.modelID,
User: subject,
Relation: relation.StringValue(),
Type: typeable.Type().StringValue(),
})
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "cannot list objects for subject %s with relation %s for type %s", subject, relation.StringValue(), typeable.Type().StringValue())
}
return authtypes.MustNewObjectsFromStringSlice(response.Objects), nil
return storeID, modelID
}

pkg/cache/cache.go

@@ -14,8 +14,7 @@ type Cache interface {
Set(ctx context.Context, orgID valuer.UUID, cacheKey string, data cachetypes.Cacheable, ttl time.Duration) error
// Get gets the cacheble entity in the dest entity passed.
// TODO: Remove allowExpired from Get.
Get(ctx context.Context, orgID valuer.UUID, cacheKey string, dest cachetypes.Cacheable, allowExpired bool) error
Get(ctx context.Context, orgID valuer.UUID, cacheKey string, dest cachetypes.Cacheable) error
// Delete deletes the cacheable entity from cache
Delete(ctx context.Context, orgID valuer.UUID, cacheKey string)


@@ -67,7 +67,7 @@ func (provider *provider) Set(ctx context.Context, orgID valuer.UUID, cacheKey s
return nil
}
func (provider *provider) Get(ctx context.Context, orgID valuer.UUID, cacheKey string, dest cachetypes.Cacheable, allowExpired bool) error {
func (provider *provider) Get(ctx context.Context, orgID valuer.UUID, cacheKey string, dest cachetypes.Cacheable) error {
_, span := provider.settings.Tracer().Start(ctx, "memory.get", trace.WithAttributes(
attribute.String(semconv.AttributeDBSystem, "memory"),
attribute.String(semconv.AttributeDBStatement, "get "+strings.Join([]string{orgID.StringValue(), cacheKey}, "::")),


@@ -97,7 +97,7 @@ func TestCloneableSetGet(t *testing.T) {
assert.IsType(t, &CloneableA{}, insideCache)
cached := new(CloneableA)
assert.NoError(t, cache.Get(context.Background(), orgID, "key", cached, false))
assert.NoError(t, cache.Get(context.Background(), orgID, "key", cached))
assert.Equal(t, cloneable, cached)
// confirm that the cached cloneable is a different pointer
@@ -127,7 +127,7 @@ func TestCacheableSetGet(t *testing.T) {
assert.Equal(t, "{\"Key\":\"some-random-key\",\"Value\":1,\"Expiry\":1000}", string(insideCache.([]byte)))
cached := new(CacheableB)
assert.NoError(t, cache.Get(context.Background(), orgID, "key", cached, false))
assert.NoError(t, cache.Get(context.Background(), orgID, "key", cached))
assert.Equal(t, cacheable, cached)
assert.NotSame(t, cacheable, cached)
@@ -141,7 +141,7 @@ func TestGetWithNilPointer(t *testing.T) {
require.NoError(t, err)
var cloneable *CloneableA
assert.Error(t, cache.Get(context.Background(), valuer.GenerateUUID(), "key", cloneable, false))
assert.Error(t, cache.Get(context.Background(), valuer.GenerateUUID(), "key", cloneable))
}
func TestSetGetWithDifferentTypes(t *testing.T) {
@@ -161,7 +161,7 @@ func TestSetGetWithDifferentTypes(t *testing.T) {
assert.NoError(t, cache.Set(context.Background(), orgID, "key", cloneable, 10*time.Second))
cachedCacheable := new(CacheableB)
err = cache.Get(context.Background(), orgID, "key", cachedCacheable, false)
err = cache.Get(context.Background(), orgID, "key", cachedCacheable)
assert.Error(t, err)
}
@@ -197,7 +197,7 @@ func TestCloneableConcurrentSetGet(t *testing.T) {
for i := 0; i < numGoroutines; i++ {
go func(id int) {
cachedCloneable := new(CloneableA)
err := cache.Get(context.Background(), orgID, fmt.Sprintf("key-%d", id), cachedCloneable, false)
err := cache.Get(context.Background(), orgID, fmt.Sprintf("key-%d", id), cachedCloneable)
// Some keys might not exist due to concurrent access, which is expected
_ = err
done <- true
@@ -210,7 +210,7 @@ func TestCloneableConcurrentSetGet(t *testing.T) {
for i := 0; i < numGoroutines; i++ {
cachedCloneable := new(CloneableA)
assert.NoError(t, cache.Get(context.Background(), orgID, fmt.Sprintf("key-%d", i), cachedCloneable, false))
assert.NoError(t, cache.Get(context.Background(), orgID, fmt.Sprintf("key-%d", i), cachedCloneable))
assert.Equal(t, fmt.Sprintf("key-%d", i), cachedCloneable.Key)
assert.Equal(t, i, cachedCloneable.Value)
// confirm that the cached cacheable is a different pointer


@@ -52,7 +52,7 @@ func (c *provider) Set(ctx context.Context, orgID valuer.UUID, cacheKey string,
return c.client.Set(ctx, strings.Join([]string{orgID.StringValue(), cacheKey}, "::"), data, ttl).Err()
}
func (c *provider) Get(ctx context.Context, orgID valuer.UUID, cacheKey string, dest cachetypes.Cacheable, allowExpired bool) error {
func (c *provider) Get(ctx context.Context, orgID valuer.UUID, cacheKey string, dest cachetypes.Cacheable) error {
err := c.client.Get(ctx, strings.Join([]string{orgID.StringValue(), cacheKey}, "::")).Scan(dest)
if err != nil {
if errors.Is(err, redis.Nil) {


@@ -42,4 +42,4 @@ type URLShareableOptions struct {
SelectColumns []v3.AttributeKey `json:"selectColumns"`
}
-var PredefinedAlertLabels = []string{ruletypes.LabelThresholdName}
+var PredefinedAlertLabels = []string{ruletypes.LabelThresholdName, ruletypes.LabelSeverityName, ruletypes.LabelLastSeen}


@@ -6,6 +6,7 @@ import (
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
@@ -17,15 +18,16 @@ const (
type AuthZ struct {
logger *slog.Logger
orgGetter organization.Getter
authzService authz.AuthZ
}
-func NewAuthZ(logger *slog.Logger) *AuthZ {
+func NewAuthZ(logger *slog.Logger, orgGetter organization.Getter, authzService authz.AuthZ) *AuthZ {
if logger == nil {
panic("cannot build authz middleware, logger is empty")
}
-return &AuthZ{logger: logger}
+return &AuthZ{logger: logger, orgGetter: orgGetter, authzService: authzService}
}
func (middleware *AuthZ) ViewAccess(next http.HandlerFunc) http.HandlerFunc {
@@ -107,7 +109,7 @@ func (middleware *AuthZ) OpenAccess(next http.HandlerFunc) http.HandlerFunc {
})
}
-func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relation, translation authtypes.Relation, typeable authtypes.Typeable, cb authtypes.SelectorCallbackFn) http.HandlerFunc {
+func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relation, translation authtypes.Relation, typeable authtypes.Typeable, cb authtypes.SelectorCallbackWithClaimsFn) http.HandlerFunc {
return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
@@ -121,7 +123,7 @@ func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relatio
return
}
-selectors, err := cb(req.Context(), claims)
+selectors, err := cb(req, claims)
if err != nil {
render.Error(rw, err)
return
@@ -136,3 +138,28 @@ func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relatio
next(rw, req)
})
}
func (middleware *AuthZ) CheckWithoutClaims(next http.HandlerFunc, relation authtypes.Relation, translation authtypes.Relation, typeable authtypes.Typeable, cb authtypes.SelectorCallbackWithoutClaimsFn) http.HandlerFunc {
return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
ctx := req.Context()
orgs, err := middleware.orgGetter.ListByOwnedKeyRange(ctx)
if err != nil {
render.Error(rw, err)
return
}
selectors, orgID, err := cb(req, orgs)
if err != nil {
render.Error(rw, err)
return
}
err = middleware.authzService.CheckWithTupleCreationWithoutClaims(ctx, orgID, relation, translation, typeable, selectors)
if err != nil {
render.Error(rw, err)
return
}
next(rw, req)
})
}


@@ -7,11 +7,30 @@ import (
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/statsreporter"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type Module interface {
// enables public sharing for dashboard.
CreatePublic(context.Context, valuer.UUID, *dashboardtypes.PublicDashboard) error
// gets the config for public sharing by org_id and dashboard_id.
GetPublic(context.Context, valuer.UUID, valuer.UUID) (*dashboardtypes.PublicDashboard, error)
// gets the dashboard data by public dashboard id.
GetDashboardByPublicID(context.Context, valuer.UUID) (*dashboardtypes.Dashboard, error)
// gets the org for the given public dashboard
GetPublicDashboardOrgAndSelectors(ctx context.Context, id valuer.UUID, orgs []*types.Organization) ([]authtypes.Selector, valuer.UUID, error)
// updates the config for public sharing.
UpdatePublic(context.Context, *dashboardtypes.PublicDashboard) error
// disables the public sharing for the dashboard.
DeletePublic(context.Context, valuer.UUID, valuer.UUID) error
Create(ctx context.Context, orgID valuer.UUID, createdBy string, creator valuer.UUID, data dashboardtypes.PostableDashboard) (*dashboardtypes.Dashboard, error)
Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.Dashboard, error)
@@ -32,6 +51,18 @@ type Module interface {
}
type Handler interface {
CreatePublic(http.ResponseWriter, *http.Request)
GetPublic(http.ResponseWriter, *http.Request)
GetPublicData(http.ResponseWriter, *http.Request)
GetPublicWidgetQueryRange(http.ResponseWriter, *http.Request)
UpdatePublic(http.ResponseWriter, *http.Request)
DeletePublic(http.ResponseWriter, *http.Request)
Create(http.ResponseWriter, *http.Request)
Update(http.ResponseWriter, *http.Request)


@@ -4,12 +4,16 @@ import (
"context"
"encoding/json"
"net/http"
"strconv"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/http/binding"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/transition"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/ctxtypes"
@@ -21,10 +25,12 @@ import (
type handler struct {
module dashboard.Module
providerSettings factory.ProviderSettings
querier querier.Querier
licensing licensing.Licensing
}
-func NewHandler(module dashboard.Module, providerSettings factory.ProviderSettings) dashboard.Handler {
-return &handler{module: module, providerSettings: providerSettings}
+func NewHandler(module dashboard.Module, providerSettings factory.ProviderSettings, querier querier.Querier, licensing licensing.Licensing) dashboard.Handler {
+return &handler{module: module, providerSettings: providerSettings, querier: querier, licensing: licensing}
}
func (handler *handler) Create(rw http.ResponseWriter, r *http.Request) {
@@ -196,3 +202,278 @@ func (handler *handler) Delete(rw http.ResponseWriter, r *http.Request) {
render.Success(rw, http.StatusNoContent, nil)
}
func (handler *handler) CreatePublic(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.licensing.GetActive(ctx, valuer.MustNewUUID(claims.OrgID))
if err != nil {
render.Error(rw, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error()))
return
}
id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
req := new(dashboardtypes.PostablePublicDashboard)
if err := binding.JSON.BindBody(r.Body, req); err != nil {
render.Error(rw, err)
return
}
publicDashboard := dashboardtypes.NewPublicDashboard(req.TimeRangeEnabled, req.DefaultTimeRange, id)
err = handler.module.CreatePublic(ctx, valuer.MustNewUUID(claims.OrgID), publicDashboard)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusCreated, nil)
}
func (handler *handler) GetPublic(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.licensing.GetActive(ctx, valuer.MustNewUUID(claims.OrgID))
if err != nil {
render.Error(rw, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error()))
return
}
id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
publicDashboard, err := handler.module.GetPublic(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, dashboardtypes.NewGettablePublicDashboard(publicDashboard))
}
func (handler *handler) GetPublicData(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
dashboard, err := handler.module.GetDashboardByPublicID(ctx, id)
if err != nil {
render.Error(rw, err)
return
}
publicDashboard, err := handler.module.GetPublic(ctx, dashboard.OrgID, valuer.MustNewUUID(dashboard.ID))
if err != nil {
render.Error(rw, err)
return
}
gettablePublicDashboardData, err := dashboardtypes.NewPublicDashboardDataFromDashboard(dashboard, publicDashboard)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, gettablePublicDashboardData)
}
func (handler *handler) GetPublicWidgetQueryRange(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
widgetIndex, ok := mux.Vars(r)["index"]
if !ok {
render.Error(rw, errors.New(errors.TypeInvalidInput, dashboardtypes.ErrCodePublicDashboardInvalidInput, "widget index is missing from the path"))
return
}
dashboard, err := handler.module.GetDashboardByPublicID(ctx, id)
if err != nil {
render.Error(rw, err)
return
}
publicDashboard, err := handler.module.GetPublic(ctx, dashboard.OrgID, valuer.MustNewUUID(dashboard.ID))
if err != nil {
render.Error(rw, err)
return
}
widgetIdxInt, err := strconv.ParseInt(widgetIndex, 10, 64)
if err != nil {
render.Error(rw, errors.New(errors.TypeInvalidInput, dashboardtypes.ErrCodePublicDashboardInvalidInput, "invalid widget index"))
return
}
var startTime, endTime uint64
if publicDashboard.TimeRangeEnabled {
startTimeUint, err := strconv.ParseUint(r.URL.Query().Get("startTime"), 10, 64)
if err != nil {
render.Error(rw, errors.New(errors.TypeInvalidInput, dashboardtypes.ErrCodePublicDashboardInvalidInput, "invalid startTime"))
return
}
endTimeUint, err := strconv.ParseUint(r.URL.Query().Get("endTime"), 10, 64)
if err != nil {
render.Error(rw, errors.New(errors.TypeInvalidInput, dashboardtypes.ErrCodePublicDashboardInvalidInput, "invalid endTime"))
return
}
startTime = startTimeUint
endTime = endTimeUint
} else {
timeRange, err := time.ParseDuration(publicDashboard.DefaultTimeRange)
if err != nil {
// this shouldn't happen as we shouldn't let such values into the DB
panic(err)
}
startTime = uint64(time.Now().Add(-timeRange).UnixMilli())
endTime = uint64(time.Now().UnixMilli())
}
query, err := dashboard.GetWidgetQuery(startTime, endTime, widgetIdxInt, handler.providerSettings.Logger)
if err != nil {
render.Error(rw, err)
return
}
queryRangeResults, err := handler.querier.QueryRange(ctx, dashboard.OrgID, query)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, queryRangeResults)
}
func (handler *handler) UpdatePublic(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.licensing.GetActive(ctx, valuer.MustNewUUID(claims.OrgID))
if err != nil {
render.Error(rw, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error()))
return
}
id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
req := new(dashboardtypes.UpdatablePublicDashboard)
if err := binding.JSON.BindBody(r.Body, req); err != nil {
render.Error(rw, err)
return
}
publicDashboard, err := handler.module.GetPublic(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
publicDashboard.Update(req.TimeRangeEnabled, req.DefaultTimeRange)
err = handler.module.UpdatePublic(ctx, publicDashboard)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusNoContent, nil)
}
func (handler *handler) DeletePublic(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.licensing.GetActive(ctx, valuer.MustNewUUID(claims.OrgID))
if err != nil {
render.Error(rw, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error()))
return
}
id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
_, err = handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
err = handler.module.DeletePublic(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusNoContent, nil)
}


@@ -2,16 +2,20 @@ package impldashboard
import (
"context"
"maps"
"strings"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -19,14 +23,18 @@ type module struct {
store dashboardtypes.Store
settings factory.ScopedProviderSettings
analytics analytics.Analytics
orgGetter organization.Getter
role role.Module
}
-func NewModule(sqlstore sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics) dashboard.Module {
+func NewModule(sqlstore sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, role role.Module) dashboard.Module {
scopedProviderSettings := factory.NewScopedProviderSettings(settings, "github.com/SigNoz/signoz/pkg/modules/impldashboard")
return &module{
store: NewStore(sqlstore),
settings: scopedProviderSettings,
analytics: analytics,
orgGetter: orgGetter,
role: role,
}
}
@@ -50,17 +58,87 @@ func (module *module) Create(ctx context.Context, orgID valuer.UUID, createdBy s
return dashboard, nil
}
func (module *module) CreatePublic(ctx context.Context, orgID valuer.UUID, publicDashboard *dashboardtypes.PublicDashboard) error {
storablePublicDashboard, err := module.store.GetPublic(ctx, publicDashboard.DashboardID.StringValue())
if err != nil && !errors.Ast(err, errors.TypeNotFound) {
return err
}
if storablePublicDashboard != nil {
return errors.Newf(errors.TypeAlreadyExists, dashboardtypes.ErrCodePublicDashboardAlreadyExists, "dashboard with id %s is already public", storablePublicDashboard.DashboardID)
}
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
err = module.role.Assign(ctx, role.ID, orgID, authtypes.MustNewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.StringValue(), orgID, nil))
if err != nil {
return err
}
additionObject := authtypes.MustNewObject(
authtypes.Resource{
Name: dashboardtypes.TypeableMetaResourcePublicDashboard.Name(),
Type: authtypes.TypeMetaResource,
},
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, []*authtypes.Object{additionObject}, nil)
if err != nil {
return err
}
err = module.store.CreatePublic(ctx, dashboardtypes.NewStorablePublicDashboardFromPublicDashboard(publicDashboard))
if err != nil {
return err
}
return nil
}
func (module *module) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.Dashboard, error) {
storableDashboard, err := module.store.Get(ctx, orgID, id)
if err != nil {
return nil, err
}
-dashboard, err := dashboardtypes.NewDashboardFromStorableDashboard(storableDashboard)
+return dashboardtypes.NewDashboardFromStorableDashboard(storableDashboard), nil
}
func (module *module) GetPublic(ctx context.Context, orgID valuer.UUID, dashboardID valuer.UUID) (*dashboardtypes.PublicDashboard, error) {
storablePublicDashboard, err := module.store.GetPublic(ctx, dashboardID.StringValue())
if err != nil {
return nil, err
}
-return dashboard, nil
+return dashboardtypes.NewPublicDashboardFromStorablePublicDashboard(storablePublicDashboard), nil
}
func (module *module) GetDashboardByPublicID(ctx context.Context, id valuer.UUID) (*dashboardtypes.Dashboard, error) {
storableDashboard, err := module.store.GetDashboardByPublicID(ctx, id.StringValue())
if err != nil {
return nil, err
}
return dashboardtypes.NewDashboardFromStorableDashboard(storableDashboard), nil
}
func (module *module) GetPublicDashboardOrgAndSelectors(ctx context.Context, id valuer.UUID, orgs []*types.Organization) ([]authtypes.Selector, valuer.UUID, error) {
orgIDs := make([]string, len(orgs))
for idx, org := range orgs {
orgIDs[idx] = org.ID.StringValue()
}
storableDashboard, err := module.store.GetDashboardByOrgsAndPublicID(ctx, orgIDs, id.StringValue())
if err != nil {
return nil, valuer.UUID{}, err
}
return []authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeMetaResource, id.StringValue()),
}, storableDashboard.OrgID, nil
}
func (module *module) List(ctx context.Context, orgID valuer.UUID) ([]*dashboardtypes.Dashboard, error) {
@@ -69,12 +147,7 @@ func (module *module) List(ctx context.Context, orgID valuer.UUID) ([]*dashboard
return nil, err
}
-dashboards, err := dashboardtypes.NewDashboardsFromStorableDashboards(storableDashboards)
-if err != nil {
-return nil, err
-}
-return dashboards, nil
+return dashboardtypes.NewDashboardsFromStorableDashboards(storableDashboards), nil
}
func (module *module) Update(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, updatableDashboard dashboardtypes.UpdatableDashboard, diff int) (*dashboardtypes.Dashboard, error) {
@@ -101,6 +174,10 @@ func (module *module) Update(ctx context.Context, orgID valuer.UUID, id valuer.U
return dashboard, nil
}
func (module *module) UpdatePublic(ctx context.Context, publicDashboard *dashboardtypes.PublicDashboard) error {
return module.store.UpdatePublic(ctx, dashboardtypes.NewStorablePublicDashboardFromPublicDashboard(publicDashboard))
}
func (module *module) LockUnlock(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, role types.Role, lock bool) error {
dashboard, err := module.Get(ctx, orgID, id)
if err != nil {
@@ -134,7 +211,56 @@ func (module *module) Delete(ctx context.Context, orgID valuer.UUID, id valuer.U
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "dashboard is locked, please unlock the dashboard to delete it")
}
-return module.store.Delete(ctx, orgID, id)
+err = module.store.RunInTx(ctx, func(ctx context.Context) error {
+err := module.DeletePublic(ctx, orgID, id)
+if err != nil && !errors.Ast(err, errors.TypeNotFound) {
+return err
+}
+err = module.store.Delete(ctx, orgID, id)
+if err != nil {
+return err
+}
+return nil
+})
+if err != nil {
+return err
+}
+return nil
}
func (module *module) DeletePublic(ctx context.Context, orgID valuer.UUID, dashboardID valuer.UUID) error {
publicDashboard, err := module.GetPublic(ctx, orgID, dashboardID)
if err != nil {
return err
}
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
deletionObject := authtypes.MustNewObject(
authtypes.Resource{
Name: dashboardtypes.TypeableMetaResourcePublicDashboard.Name(),
Type: authtypes.TypeMetaResource,
},
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, nil, []*authtypes.Object{deletionObject})
if err != nil {
return err
}
err = module.store.DeletePublic(ctx, dashboardID.StringValue())
if err != nil {
return err
}
return nil
}
func (module *module) GetByMetricNames(ctx context.Context, orgID valuer.UUID, metricNames []string) (map[string][]map[string]string, error) {
@@ -221,9 +347,17 @@ func (module *module) Collect(ctx context.Context, orgID valuer.UUID) (map[strin
return nil, err
}
-return dashboardtypes.NewStatsFromStorableDashboards(dashboards), nil
+publicDashboards, err := module.store.ListPublic(ctx, orgID)
+if err != nil {
+return nil, err
+}
+stats := make(map[string]any)
+maps.Copy(stats, dashboardtypes.NewStatsFromStorableDashboards(dashboards))
+maps.Copy(stats, dashboardtypes.NewStatsFromStorablePublicDashboards(publicDashboards))
+return stats, nil
}
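`Collect` above folds the dashboard and public-dashboard stats into one payload with `maps.Copy` (stdlib since Go 1.21). A minimal sketch of that merge, with the caveat that later maps win on key collisions:

```go
package main

import (
	"fmt"
	"maps"
)

// mergeStats folds per-feature stat maps into one payload, as Collect
// above does with maps.Copy; later maps overwrite earlier keys.
func mergeStats(parts ...map[string]any) map[string]any {
	out := make(map[string]any)
	for _, p := range parts {
		maps.Copy(out, p)
	}
	return out
}

func main() {
	stats := mergeStats(
		map[string]any{"dashboard.count": 4},
		map[string]any{"public.dashboard.count": 1},
	)
	fmt.Println(len(stats)) // 2
}
```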
func (module *module) MustGetTypeables() []authtypes.Typeable {
-return []authtypes.Typeable{dashboardtypes.TypeableResourceDashboard, dashboardtypes.TypeableResourcesDashboards}
+return []authtypes.Typeable{dashboardtypes.TypeableMetaResourceDashboard, dashboardtypes.TypeableMetaResourcesDashboards}
}


@@ -7,6 +7,7 @@ import (
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
type store struct {
@@ -31,9 +32,22 @@ func (store *store) Create(ctx context.Context, storabledashboard *dashboardtype
return nil
}
func (store *store) CreatePublic(ctx context.Context, storable *dashboardtypes.StorablePublicDashboard) error {
_, err := store.
sqlstore.
BunDBCtx(ctx).
NewInsert().
Model(storable).
Exec(ctx)
if err != nil {
return store.sqlstore.WrapAlreadyExistsErrf(err, dashboardtypes.ErrCodePublicDashboardAlreadyExists, "dashboard with id %s is already public", storable.DashboardID)
}
return nil
}
func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.StorableDashboard, error) {
storableDashboard := new(dashboardtypes.StorableDashboard)
err := store.
sqlstore.
BunDB().
@@ -49,9 +63,61 @@ func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID)
return storableDashboard, nil
}
func (store *store) GetPublic(ctx context.Context, dashboardID string) (*dashboardtypes.StorablePublicDashboard, error) {
storable := new(dashboardtypes.StorablePublicDashboard)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(storable).
Where("dashboard_id = ?", dashboardID).
Scan(ctx)
if err != nil {
return nil, store.sqlstore.WrapNotFoundErrf(err, dashboardtypes.ErrCodePublicDashboardNotFound, "dashboard with id %s isn't public", dashboardID)
}
return storable, nil
}
func (store *store) GetDashboardByOrgsAndPublicID(ctx context.Context, orgIDs []string, id string) (*dashboardtypes.StorableDashboard, error) {
storable := new(dashboardtypes.StorableDashboard)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(storable).
Join("JOIN public_dashboard").
JoinOn("public_dashboard.dashboard_id = dashboard.id").
Where("public_dashboard.id = ?", id).
Where("org_id IN (?)", bun.In(orgIDs)).
Scan(ctx)
if err != nil {
return nil, store.sqlstore.WrapNotFoundErrf(err, dashboardtypes.ErrCodePublicDashboardNotFound, "couldn't find dashboard with id %s ", id)
}
return storable, nil
}
func (store *store) GetDashboardByPublicID(ctx context.Context, id string) (*dashboardtypes.StorableDashboard, error) {
storable := new(dashboardtypes.StorableDashboard)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(storable).
Join("JOIN public_dashboard").
JoinOn("public_dashboard.dashboard_id = dashboard.id").
Where("public_dashboard.id = ?", id).
Scan(ctx)
if err != nil {
return nil, store.sqlstore.WrapNotFoundErrf(err, dashboardtypes.ErrCodePublicDashboardNotFound, "couldn't find dashboard with id %s ", id)
}
return storable, nil
}
func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*dashboardtypes.StorableDashboard, error) {
storableDashboards := make([]*dashboardtypes.StorableDashboard, 0)
err := store.
sqlstore.
BunDB().
@@ -60,12 +126,30 @@ func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*dashboardty
Where("org_id = ?", orgID).
Scan(ctx)
if err != nil {
-return nil, store.sqlstore.WrapNotFoundErrf(err, errors.CodeNotFound, "no dashboards found in orgID %s", orgID)
+return nil, err
}
return storableDashboards, nil
}
func (store *store) ListPublic(ctx context.Context, orgID valuer.UUID) ([]*dashboardtypes.StorablePublicDashboard, error) {
storable := make([]*dashboardtypes.StorablePublicDashboard, 0)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(&storable).
Join("JOIN dashboard").
JoinOn("public_dashboard.dashboard_id = dashboard.id").
Where("dashboard.org_id = ?", orgID).
Scan(ctx)
if err != nil {
return nil, err
}
return storable, nil
}
func (store *store) Update(ctx context.Context, orgID valuer.UUID, storableDashboard *dashboardtypes.StorableDashboard) error {
_, err := store.
sqlstore.
@@ -76,7 +160,22 @@ func (store *store) Update(ctx context.Context, orgID valuer.UUID, storableDashb
Where("org_id = ?", orgID).
Exec(ctx)
if err != nil {
-return store.sqlstore.WrapNotFoundErrf(err, errors.CodeAlreadyExists, "dashboard with id %s doesn't exist", storableDashboard.ID)
+return store.sqlstore.WrapNotFoundErrf(err, errors.CodeNotFound, "dashboard with id %s doesn't exist", storableDashboard.ID)
}
return nil
}
func (store *store) UpdatePublic(ctx context.Context, storable *dashboardtypes.StorablePublicDashboard) error {
_, err := store.
sqlstore.
BunDB().
NewUpdate().
Model(storable).
WherePK().
Exec(ctx)
if err != nil {
return store.sqlstore.WrapNotFoundErrf(err, dashboardtypes.ErrCodePublicDashboardNotFound, "dashboard with id %s isn't public", storable.DashboardID)
}
return nil
@@ -97,3 +196,24 @@ func (store *store) Delete(ctx context.Context, orgID valuer.UUID, id valuer.UUI
return nil
}
func (store *store) DeletePublic(ctx context.Context, dashboardID string) error {
_, err := store.
sqlstore.
BunDB().
NewDelete().
Model(new(dashboardtypes.StorablePublicDashboard)).
Where("dashboard_id = ?", dashboardID).
Exec(ctx)
if err != nil {
return store.sqlstore.WrapNotFoundErrf(err, dashboardtypes.ErrCodePublicDashboardNotFound, "dashboard with id %s isn't public", dashboardID)
}
return nil
}
func (store *store) RunInTx(ctx context.Context, cb func(ctx context.Context) error) error {
return store.sqlstore.RunInTxCtx(ctx, nil, func(ctx context.Context) error {
return cb(ctx)
})
}


@@ -17,8 +17,8 @@ type handler struct {
module role.Module
}
-func NewHandler(module role.Module) (role.Handler, error) {
-return &handler{module: module}, nil
+func NewHandler(module role.Module) role.Handler {
+return &handler{module: module}
}
func (handler *handler) Create(rw http.ResponseWriter, r *http.Request) {
@@ -28,11 +28,6 @@ func (handler *handler) Create(rw http.ResponseWriter, r *http.Request) {
render.Error(rw, err)
return
}
-orgID, err := valuer.NewUUID(claims.OrgID)
-if err != nil {
-render.Error(rw, err)
-return
-}
req := new(roletypes.PostableRole)
if err := binding.JSON.BindBody(r.Body, req); err != nil {
@@ -40,13 +35,13 @@ func (handler *handler) Create(rw http.ResponseWriter, r *http.Request) {
return
}
-role, err := handler.module.Create(ctx, orgID, req.DisplayName, req.Description)
+err = handler.module.Create(ctx, roletypes.NewRole(req.Name, req.Description, roletypes.RoleTypeCustom.StringValue(), valuer.MustNewUUID(claims.OrgID)))
if err != nil {
render.Error(rw, err)
return
}
-render.Success(rw, http.StatusCreated, role.ID.StringValue())
+render.Success(rw, http.StatusCreated, nil)
}
func (handler *handler) Get(rw http.ResponseWriter, r *http.Request) {
@@ -56,11 +51,6 @@ func (handler *handler) Get(rw http.ResponseWriter, r *http.Request) {
render.Error(rw, err)
return
}
-orgID, err := valuer.NewUUID(claims.OrgID)
-if err != nil {
-render.Error(rw, err)
-return
-}
id, ok := mux.Vars(r)["id"]
if !ok {
@@ -73,7 +63,7 @@ func (handler *handler) Get(rw http.ResponseWriter, r *http.Request) {
return
}
-role, err := handler.module.Get(ctx, orgID, roleID)
+role, err := handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), roleID)
if err != nil {
render.Error(rw, err)
return
@@ -89,11 +79,6 @@ func (handler *handler) GetObjects(rw http.ResponseWriter, r *http.Request) {
render.Error(rw, err)
return
}
-orgID, err := valuer.NewUUID(claims.OrgID)
-if err != nil {
-render.Error(rw, err)
-return
-}
id, ok := mux.Vars(r)["id"]
if !ok {
@@ -117,7 +102,7 @@ func (handler *handler) GetObjects(rw http.ResponseWriter, r *http.Request) {
return
}
-objects, err := handler.module.GetObjects(ctx, orgID, roleID, relation)
+objects, err := handler.module.GetObjects(ctx, valuer.MustNewUUID(claims.OrgID), roleID, relation)
if err != nil {
render.Error(rw, err)
return
@@ -147,13 +132,8 @@ func (handler *handler) List(rw http.ResponseWriter, r *http.Request) {
render.Error(rw, err)
return
}
-orgID, err := valuer.NewUUID(claims.OrgID)
-if err != nil {
-render.Error(rw, err)
-return
-}
-roles, err := handler.module.List(ctx, orgID)
+roles, err := handler.module.List(ctx, valuer.MustNewUUID(claims.OrgID))
if err != nil {
render.Error(rw, err)
return
@@ -169,18 +149,8 @@ func (handler *handler) Patch(rw http.ResponseWriter, r *http.Request) {
render.Error(rw, err)
return
}
-orgID, err := valuer.NewUUID(claims.OrgID)
-if err != nil {
-render.Error(rw, err)
-return
-}
-id, ok := mux.Vars(r)["id"]
-if !ok {
-render.Error(rw, errors.New(errors.TypeInvalidInput, roletypes.ErrCodeRoleInvalidInput, "id is missing from the request"))
-return
-}
-roleID, err := valuer.NewUUID(id)
+id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
@@ -192,7 +162,14 @@ func (handler *handler) Patch(rw http.ResponseWriter, r *http.Request) {
return
}
-err = handler.module.Patch(ctx, orgID, roleID, req.DisplayName, req.Description)
+role, err := handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
+if err != nil {
+render.Error(rw, err)
+return
+}
+role.PatchMetadata(req.Name, req.Description)
+err = handler.module.Patch(ctx, valuer.MustNewUUID(claims.OrgID), role)
if err != nil {
render.Error(rw, err)
return
@@ -208,29 +185,14 @@ func (handler *handler) PatchObjects(rw http.ResponseWriter, r *http.Request) {
render.Error(rw, err)
return
}
-orgID, err := valuer.NewUUID(claims.OrgID)
+id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
-id, ok := mux.Vars(r)["id"]
-if !ok {
-render.Error(rw, errors.New(errors.TypeInvalidInput, roletypes.ErrCodeRoleInvalidInput, "id is missing from the request"))
-return
-}
-roleID, err := valuer.NewUUID(id)
-if err != nil {
-render.Error(rw, err)
-return
-}
-relationStr, ok := mux.Vars(r)["relation"]
-if !ok {
-render.Error(rw, errors.New(errors.TypeInvalidInput, roletypes.ErrCodeRoleInvalidInput, "relation is missing from the request"))
-return
-}
-relation, err := authtypes.NewRelation(relationStr)
+relation, err := authtypes.NewRelation(mux.Vars(r)["relation"])
if err != nil {
render.Error(rw, err)
return
@@ -248,7 +210,7 @@ func (handler *handler) PatchObjects(rw http.ResponseWriter, r *http.Request) {
return
}
-err = handler.module.PatchObjects(ctx, orgID, roleID, relation, patchableObjects.Additions, patchableObjects.Deletions)
+err = handler.module.PatchObjects(ctx, valuer.MustNewUUID(claims.OrgID), id, relation, patchableObjects.Additions, patchableObjects.Deletions)
if err != nil {
render.Error(rw, err)
return
@@ -264,24 +226,14 @@ func (handler *handler) Delete(rw http.ResponseWriter, r *http.Request) {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
id, err := valuer.NewUUID(mux.Vars(r)["id"])
if err != nil {
render.Error(rw, err)
return
}
id, ok := mux.Vars(r)["id"]
if !ok {
render.Error(rw, errors.New(errors.TypeInvalidInput, roletypes.ErrCodeRoleInvalidInput, "id is missing from the request"))
return
}
roleID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
err = handler.module.Delete(ctx, orgID, roleID)
err = handler.module.Delete(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return


@@ -5,6 +5,7 @@ import (
"slices"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/roletypes"
@@ -17,23 +18,31 @@ type module struct {
authz authz.AuthZ
}
func NewModule(ctx context.Context, store roletypes.Store, authz authz.AuthZ, registry []role.RegisterTypeable) (role.Module, error) {
func NewModule(store roletypes.Store, authz authz.AuthZ, registry []role.RegisterTypeable) role.Module {
return &module{
store: store,
authz: authz,
registry: registry,
}, nil
}
}
func (module *module) Create(ctx context.Context, orgID valuer.UUID, displayName, description string) (*roletypes.Role, error) {
role := roletypes.NewRole(displayName, description, orgID)
func (module *module) Create(ctx context.Context, role *roletypes.Role) error {
return module.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
}
storableRole, err := roletypes.NewStorableRoleFromRole(role)
func (module *module) GetOrCreate(ctx context.Context, role *roletypes.Role) (*roletypes.Role, error) {
existingRole, err := module.store.GetByNameAndOrgID(ctx, role.Name, role.OrgID)
if err != nil {
return nil, err
if !errors.Ast(err, errors.TypeNotFound) {
return nil, err
}
}
err = module.store.Create(ctx, storableRole)
if existingRole != nil {
return roletypes.NewRoleFromStorableRole(existingRole), nil
}
err = module.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
if err != nil {
return nil, err
}
@@ -63,12 +72,7 @@ func (module *module) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID
return nil, err
}
role, err := roletypes.NewRoleFromStorableRole(storableRole)
if err != nil {
return nil, err
}
return role, nil
return roletypes.NewRoleFromStorableRole(storableRole), nil
}
func (module *module) GetObjects(ctx context.Context, orgID valuer.UUID, id valuer.UUID, relation authtypes.Relation) ([]*authtypes.Object, error) {
@@ -84,7 +88,7 @@ func (module *module) GetObjects(ctx context.Context, orgID valuer.UUID, id valu
authz.
ListObjects(
ctx,
authtypes.MustNewSubject(authtypes.TypeRole, storableRole.ID.String(), authtypes.RelationAssignee),
authtypes.MustNewSubject(authtypes.TypeableRole, storableRole.ID.String(), orgID, &authtypes.RelationAssignee),
relation,
authtypes.MustNewTypeableFromType(resource.Type, resource.Name),
)
@@ -107,39 +111,14 @@ func (module *module) List(ctx context.Context, orgID valuer.UUID) ([]*roletypes
roles := make([]*roletypes.Role, len(storableRoles))
for idx, storableRole := range storableRoles {
role, err := roletypes.NewRoleFromStorableRole(storableRole)
if err != nil {
return nil, err
}
roles[idx] = role
roles[idx] = roletypes.NewRoleFromStorableRole(storableRole)
}
return roles, nil
}
func (module *module) Patch(ctx context.Context, orgID valuer.UUID, id valuer.UUID, displayName, description *string) error {
storableRole, err := module.store.Get(ctx, orgID, id)
if err != nil {
return err
}
role, err := roletypes.NewRoleFromStorableRole(storableRole)
if err != nil {
return err
}
role.PatchMetadata(displayName, description)
updatedRole, err := roletypes.NewStorableRoleFromRole(role)
if err != nil {
return err
}
err = module.store.Update(ctx, orgID, updatedRole)
if err != nil {
return err
}
return nil
func (module *module) Patch(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) error {
return module.store.Update(ctx, orgID, roletypes.NewStorableRoleFromRole(role))
}
func (module *module) PatchObjects(ctx context.Context, orgID valuer.UUID, id valuer.UUID, relation authtypes.Relation, additions, deletions []*authtypes.Object) error {
@@ -161,6 +140,21 @@ func (module *module) PatchObjects(ctx context.Context, orgID valuer.UUID, id va
return nil
}
func (module *module) Assign(ctx context.Context, id valuer.UUID, orgID valuer.UUID, subject string) error {
tuples, err := authtypes.TypeableRole.Tuples(
subject,
authtypes.RelationAssignee,
[]authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, id.StringValue()),
},
orgID,
)
if err != nil {
return err
}
return module.authz.Write(ctx, tuples, nil)
}
func (module *module) Delete(ctx context.Context, orgID valuer.UUID, id valuer.UUID) error {
return module.store.Delete(ctx, orgID, id)
}


@@ -13,8 +13,8 @@ type store struct {
sqlstore sqlstore.SQLStore
}
func NewStore(sqlstore sqlstore.SQLStore) (roletypes.Store, error) {
return &store{sqlstore: sqlstore}, nil
func NewStore(sqlstore sqlstore.SQLStore) roletypes.Store {
return &store{sqlstore: sqlstore}
}
func (store *store) Create(ctx context.Context, role *roletypes.StorableRole) error {
@@ -38,7 +38,7 @@ func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID)
BunDB().
NewSelect().
Model(role).
Where("orgID = ?", orgID).
Where("org_id = ?", orgID).
Where("id = ?", id).
Scan(ctx)
if err != nil {
@@ -48,6 +48,23 @@ func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID)
return role, nil
}
func (store *store) GetByNameAndOrgID(ctx context.Context, name string, orgID valuer.UUID) (*roletypes.StorableRole, error) {
role := new(roletypes.StorableRole)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(role).
Where("org_id = ?", orgID).
Where("name = ?", name).
Scan(ctx)
if err != nil {
return nil, store.sqlstore.WrapNotFoundErrf(err, roletypes.ErrCodeRoleNotFound, "role with name: %s doesn't exist", name)
}
return role, nil
}
func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*roletypes.StorableRole, error) {
roles := make([]*roletypes.StorableRole, 0)
err := store.
@@ -55,7 +72,7 @@ func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*roletypes.S
BunDB().
NewSelect().
Model(&roles).
Where("orgID = ?", orgID).
Where("org_id = ?", orgID).
Scan(ctx)
if err != nil {
return nil, store.sqlstore.WrapNotFoundErrf(err, roletypes.ErrCodeRoleNotFound, "no roles found in org_id: %s", orgID)


@@ -10,30 +10,36 @@ import (
)
type Module interface {
// Creates the role metadata
Create(context.Context, valuer.UUID, string, string) (*roletypes.Role, error)
// Creates the role.
Create(context.Context, *roletypes.Role) error
// Gets the role metadata
// Gets the role if it exists or creates one.
GetOrCreate(context.Context, *roletypes.Role) (*roletypes.Role, error)
// Gets the role
Get(context.Context, valuer.UUID, valuer.UUID) (*roletypes.Role, error)
// Gets the objects associated with the given role and relation
// Gets the objects associated with the given role and relation.
GetObjects(context.Context, valuer.UUID, valuer.UUID, authtypes.Relation) ([]*authtypes.Object, error)
// Lists all the roles metadata for the organization
// Lists all the roles for the organization.
List(context.Context, valuer.UUID) ([]*roletypes.Role, error)
// Gets all the typeable resources registered from role registry
// Gets all the typeable resources registered from role registry.
GetResources(context.Context) []*authtypes.Resource
// Patches the roles metadata
Patch(context.Context, valuer.UUID, valuer.UUID, *string, *string) error
// Patches the role.
Patch(context.Context, valuer.UUID, *roletypes.Role) error
// Patches the objects in authorization server associated with the given role and relation
PatchObjects(context.Context, valuer.UUID, valuer.UUID, authtypes.Relation, []*authtypes.Object, []*authtypes.Object) error
// Deletes the role metadata and tuples in authorization server
// Deletes the role and tuples in authorization server.
Delete(context.Context, valuer.UUID, valuer.UUID) error
// Assigns role to the given subject.
Assign(context.Context, valuer.UUID, valuer.UUID, string) error
RegisterTypeable
}


@@ -0,0 +1,100 @@
package implservices
import (
"net/http"
"github.com/SigNoz/signoz/pkg/http/binding"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/modules/services"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/servicetypes/servicetypesv1"
"github.com/SigNoz/signoz/pkg/valuer"
)
type handler struct {
Module services.Module
}
func NewHandler(m services.Module) services.Handler {
return &handler{
Module: m,
}
}
func (h *handler) Get(rw http.ResponseWriter, req *http.Request) {
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
render.Error(rw, err)
return
}
var in servicetypesv1.Request
if err := binding.JSON.BindBody(req.Body, &in); err != nil {
render.Error(rw, err)
return
}
orgUUID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
out, err := h.Module.Get(req.Context(), orgUUID, &in)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, out)
}
func (h *handler) GetTopOperations(rw http.ResponseWriter, req *http.Request) {
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
render.Error(rw, err)
return
}
var in servicetypesv1.OperationsRequest
if err := binding.JSON.BindBody(req.Body, &in); err != nil {
render.Error(rw, err)
return
}
orgUUID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
out, err := h.Module.GetTopOperations(req.Context(), orgUUID, &in)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, out)
}
func (h *handler) GetEntryPointOperations(rw http.ResponseWriter, req *http.Request) {
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
render.Error(rw, err)
return
}
var in servicetypesv1.OperationsRequest
if err := binding.JSON.BindBody(req.Body, &in); err != nil {
render.Error(rw, err)
return
}
orgUUID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
out, err := h.Module.GetEntryPointOperations(req.Context(), orgUUID, &in)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, out)
}


@@ -0,0 +1,132 @@
package implservices
import (
"fmt"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/servicetypes/servicetypesv1"
)
// validateTagFilterItems validates the tag filter items. This should be used before using
// buildFilterExpression or any other function that uses tag filter items.
func validateTagFilterItems(tags []servicetypesv1.TagFilterItem) error {
for _, t := range tags {
if t.Key == "" {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "key is required")
}
if strings.ToLower(t.Operator) != "in" && strings.ToLower(t.Operator) != "notin" {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "only in and notin operators are supported")
}
if len(t.StringValues) == 0 && len(t.BoolValues) == 0 && len(t.NumberValues) == 0 {
return errors.NewInvalidInputf(errors.CodeInvalidInput, "at least one of stringValues, boolValues, or numberValues must be populated")
}
}
return nil
}
// buildFilterExpression converts tag filters into a QBv5-compatible filter expression and a set of variable items.
// Callers should validate tags with validateTagFilterItems before invoking this function.
func buildFilterExpression(tags []servicetypesv1.TagFilterItem) (string, map[string]qbtypes.VariableItem) {
variables := make(map[string]qbtypes.VariableItem)
parts := make([]string, 0, len(tags))
valueItr := 1
for _, t := range tags {
valueIdentifier := fmt.Sprintf("%d", valueItr)
switch strings.ToLower(t.Operator) {
case "notin":
if vals, ok := pickInValuesFromTag(t); ok {
variables[valueIdentifier] = qbtypes.VariableItem{Type: qbtypes.DynamicVariableType, Value: vals}
} else {
continue
}
parts = append(parts, fmt.Sprintf("%s NOT IN $%s", t.Key, valueIdentifier))
case "in":
if vals, ok := pickInValuesFromTag(t); ok {
variables[valueIdentifier] = qbtypes.VariableItem{Type: qbtypes.DynamicVariableType, Value: vals}
} else {
continue
}
parts = append(parts, fmt.Sprintf("%s IN $%s", t.Key, valueIdentifier))
default:
continue
}
valueItr++
}
filterExpr := strings.Join(parts, " AND ")
return filterExpr, variables
}
// pickInValuesFromTag returns a []any for the IN operator, choosing values in the
// precedence order StringValues, BoolValues, NumberValues. It returns false if none are populated.
func pickInValuesFromTag(t servicetypesv1.TagFilterItem) ([]any, bool) {
if len(t.StringValues) > 0 {
vals := make([]any, 0, len(t.StringValues))
for _, v := range t.StringValues {
vals = append(vals, v)
}
return vals, true
}
if len(t.BoolValues) > 0 {
vals := make([]any, 0, len(t.BoolValues))
for _, v := range t.BoolValues {
vals = append(vals, v)
}
return vals, true
}
if len(t.NumberValues) > 0 {
vals := make([]any, 0, len(t.NumberValues))
for _, v := range t.NumberValues {
vals = append(vals, v)
}
return vals, true
}
return nil, false
}
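The two helpers above implement a numbered-placeholder scheme: each emitted clause references a `$N` variable, and the values are returned separately so the query layer can bind them rather than inlining literals. A minimal stdlib-only sketch of that pattern, assuming a hypothetical `tagFilter` type and `buildExpr` helper in place of the SigNoz types:

```go
package main

import (
	"fmt"
	"strings"
)

// tagFilter is a hypothetical stand-in for servicetypesv1.TagFilterItem.
type tagFilter struct {
	Key      string
	Operator string // "in" or "notin", case-insensitive
	Values   []any
}

// buildExpr mirrors the numbered-placeholder scheme: each clause references
// a $N variable, and the values are kept in a side map for safe binding.
func buildExpr(tags []tagFilter) (string, map[string][]any) {
	vars := make(map[string][]any)
	parts := make([]string, 0, len(tags))
	n := 1
	for _, t := range tags {
		if len(t.Values) == 0 {
			continue // nothing to bind; skip the clause entirely
		}
		id := fmt.Sprintf("%d", n)
		switch strings.ToLower(t.Operator) {
		case "in":
			parts = append(parts, fmt.Sprintf("%s IN $%s", t.Key, id))
		case "notin":
			parts = append(parts, fmt.Sprintf("%s NOT IN $%s", t.Key, id))
		default:
			continue // unsupported operator; validation should have caught it
		}
		vars[id] = t.Values
		n++
	}
	return strings.Join(parts, " AND "), vars
}

func main() {
	expr, vars := buildExpr([]tagFilter{
		{Key: "service.name", Operator: "In", Values: []any{"svc-a", "svc-b"}},
		{Key: "deployment.environment", Operator: "NotIn", Values: []any{"prod"}},
	})
	fmt.Println(expr) // service.name IN $1 AND deployment.environment NOT IN $2
	fmt.Println(len(vars))
}
```

Note that skipped clauses do not consume a variable number, which keeps the placeholders contiguous.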
// toFloat safely converts a cell value to float64, returning 0 on type mismatch.
func toFloat(row []any, idx int) float64 {
if idx < 0 || idx >= len(row) || row[idx] == nil {
return 0
}
v, ok := row[idx].(float64)
if !ok {
return 0
}
return v
}
// toUint64 safely converts a cell value to uint64.
func toUint64(row []any, idx int) uint64 {
if idx < 0 || idx >= len(row) || row[idx] == nil {
return 0
}
v, ok := row[idx].(uint64)
if !ok {
return 0
}
return v
}
// applyOpsToItems sets topLevelOps for matching service names.
// If opsMap is nil, it performs no changes.
func applyOpsToItems(items []*servicetypesv1.ResponseItem, opsMap map[string][]string) {
if len(items) == 0 {
return
}
if opsMap == nil {
return
}
for i := range items {
if items[i] == nil {
continue
}
if tops, ok := opsMap[items[i].ServiceName]; ok {
items[i].DataWarning.TopLevelOps = tops
}
}
}


@@ -0,0 +1,266 @@
package implservices
import (
"testing"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/servicetypes/servicetypesv1"
"github.com/stretchr/testify/assert"
)
func TestToFloat(t *testing.T) {
tests := []struct {
name string
row []any
idx int
want float64
}{
{name: "float64", row: []any{1.5}, idx: 0, want: 1.5},
{name: "nil", row: []any{nil}, idx: 0, want: 0},
{name: "oob", row: []any{1}, idx: 1, want: 0},
{name: "wrong type -> 0", row: []any{"not-number"}, idx: 0, want: 0},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := toFloat(tt.row, tt.idx)
assert.Equal(t, tt.want, got)
})
}
}
func TestToUint64(t *testing.T) {
tests := []struct {
name string
row []any
idx int
want uint64
}{
{name: "uint64", row: []any{uint64(5)}, idx: 0, want: 5},
{name: "nil -> 0", row: []any{nil}, idx: 0, want: 0},
{name: "oob -> 0", row: []any{1}, idx: 2, want: 0},
{name: "wrong type -> 0", row: []any{"not-number"}, idx: 0, want: 0},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := toUint64(tt.row, tt.idx)
assert.Equal(t, tt.want, got)
})
}
}
func TestApplyOpsToItems(t *testing.T) {
tests := []struct {
name string
items []*servicetypesv1.ResponseItem
ops map[string][]string
want [][]string
}{
{
name: "maps ops to matching services",
items: []*servicetypesv1.ResponseItem{
{ServiceName: "svc-a", DataWarning: servicetypesv1.DataWarning{TopLevelOps: []string{}}},
{ServiceName: "svc-b", DataWarning: servicetypesv1.DataWarning{TopLevelOps: []string{}}},
},
ops: map[string][]string{
"svc-a": {"op1", "op2"},
"svc-c": {"opx"},
},
want: [][]string{
{"op1", "op2"},
{},
},
},
{
name: "nil ops map is no-op",
items: []*servicetypesv1.ResponseItem{
{ServiceName: "svc-a", DataWarning: servicetypesv1.DataWarning{TopLevelOps: []string{}}},
{ServiceName: "svc-b", DataWarning: servicetypesv1.DataWarning{TopLevelOps: []string{}}},
},
ops: nil,
want: [][]string{{}, {}},
},
{
name: "empty items slice is no-op",
items: []*servicetypesv1.ResponseItem{},
ops: map[string][]string{"svc-a": {"op1"}},
want: [][]string{},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
applyOpsToItems(tt.items, tt.ops)
if len(tt.items) != len(tt.want) {
assert.Equal(t, len(tt.want), len(tt.items))
return
}
for i := range tt.items {
if tt.items[i] == nil {
continue
}
assert.Equal(t, tt.want[i], tt.items[i].DataWarning.TopLevelOps)
}
})
}
}
func TestBuildFilterExpression(t *testing.T) {
tests := []struct {
name string
tags []servicetypesv1.TagFilterItem
wantExpr string
assertV func(t *testing.T, vars map[string]qbtypes.VariableItem)
}{
{
name: "no tags -> empty expr",
tags: nil,
wantExpr: "",
assertV: func(t *testing.T, vars map[string]qbtypes.VariableItem) {
assert.Equal(t, 0, len(vars))
},
},
{
name: "not in multiple strings",
tags: []servicetypesv1.TagFilterItem{
{Key: "service.name", Operator: "NotIn", StringValues: []string{"svc-a", "svc-b"}},
},
wantExpr: "service.name NOT IN $1",
assertV: func(t *testing.T, vars map[string]qbtypes.VariableItem) {
arr, ok := vars["1"].Value.([]any)
assert.True(t, ok)
assert.ElementsMatch(t, []any{"svc-a", "svc-b"}, arr)
},
},
{
name: "in single string",
tags: []servicetypesv1.TagFilterItem{
{Key: "deployment.environment", Operator: "in", StringValues: []string{"staging"}},
},
wantExpr: "deployment.environment IN $1",
assertV: func(t *testing.T, vars map[string]qbtypes.VariableItem) {
arr, ok := vars["1"].Value.([]any)
assert.True(t, ok)
assert.Len(t, arr, 1)
assert.Equal(t, "staging", arr[0])
},
},
{
name: "in multiple strings",
tags: []servicetypesv1.TagFilterItem{
{Key: "service.name", Operator: "IN", StringValues: []string{"svc-a", "svc-b"}},
},
wantExpr: "service.name IN $1",
assertV: func(t *testing.T, vars map[string]qbtypes.VariableItem) {
arr, ok := vars["1"].Value.([]any)
assert.True(t, ok)
assert.ElementsMatch(t, []any{"svc-a", "svc-b"}, arr)
},
},
{
name: "in multiple numbers",
tags: []servicetypesv1.TagFilterItem{
{Key: "http.status_code", Operator: "in", NumberValues: []float64{200, 500}},
},
wantExpr: "http.status_code IN $1",
assertV: func(t *testing.T, vars map[string]qbtypes.VariableItem) {
arr, ok := vars["1"].Value.([]any)
assert.True(t, ok)
assert.ElementsMatch(t, []any{200.0, 500.0}, arr)
},
},
{
name: "in multiple bools",
tags: []servicetypesv1.TagFilterItem{
{Key: "feature.flag", Operator: "IN", BoolValues: []bool{true, false}},
},
wantExpr: "feature.flag IN $1",
assertV: func(t *testing.T, vars map[string]qbtypes.VariableItem) {
arr, ok := vars["1"].Value.([]any)
assert.True(t, ok)
assert.ElementsMatch(t, []any{true, false}, arr)
},
},
{
name: "in and not in both conditions",
tags: []servicetypesv1.TagFilterItem{
{Key: "service.name", Operator: "In", StringValues: []string{"svc-a", "svc-b"}},
{Key: "deployment.environment", Operator: "NotIn", StringValues: []string{"production", "staging"}},
},
wantExpr: "service.name IN $1 AND deployment.environment NOT IN $2",
assertV: func(t *testing.T, vars map[string]qbtypes.VariableItem) {
arr, ok := vars["1"].Value.([]any)
assert.True(t, ok)
assert.ElementsMatch(t, []any{"svc-a", "svc-b"}, arr)
arr, ok = vars["2"].Value.([]any)
assert.True(t, ok)
assert.ElementsMatch(t, []any{"production", "staging"}, arr)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
expr, vars := buildFilterExpression(tt.tags)
assert.Equal(t, tt.wantExpr, expr)
if tt.assertV != nil {
tt := tt
tt.assertV(t, vars)
}
})
}
}
func TestValidateTagFilterItems(t *testing.T) {
tests := []struct {
name string
tags []servicetypesv1.TagFilterItem
wantErr string
}{
{
name: "empty tags -> ok",
tags: nil,
wantErr: "",
},
{
name: "missing key -> error",
tags: []servicetypesv1.TagFilterItem{{Key: "", Operator: "in", StringValues: []string{"a"}}},
wantErr: "key is required",
},
{
name: "valid in and notin",
tags: []servicetypesv1.TagFilterItem{
{Key: "service.name", Operator: "in", StringValues: []string{"svc-a", "svc-b"}},
{Key: "deployment.environment", Operator: "notin", StringValues: []string{"prod"}},
},
wantErr: "",
},
{
name: "invalid operator -> error",
tags: []servicetypesv1.TagFilterItem{{Key: "service.name", Operator: "equals", StringValues: []string{"a"}}},
wantErr: "only in and notin operators are supported",
},
{
name: "in with no values -> error",
tags: []servicetypesv1.TagFilterItem{{Key: "env", Operator: "in"}},
wantErr: "at least one of stringValues, boolValues, or numberValues must be populated",
},
{
name: "notin with no values -> error",
tags: []servicetypesv1.TagFilterItem{{Key: "env", Operator: "notin"}},
wantErr: "at least one of stringValues, boolValues, or numberValues must be populated",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateTagFilterItems(tt.tags)
if tt.wantErr == "" {
assert.NoError(t, err)
} else {
if assert.Error(t, err) {
assert.Contains(t, err.Error(), tt.wantErr)
}
}
})
}
}


@@ -0,0 +1,519 @@
package implservices
import (
"context"
"fmt"
"strconv"
"time"
"github.com/ClickHouse/clickhouse-go/v2"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/modules/services"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/telemetrytraces"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/servicetypes/servicetypesv1"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type module struct {
Querier querier.Querier
TelemetryStore telemetrystore.TelemetryStore
}
// NewModule constructs the services module with the provided querier dependency.
func NewModule(q querier.Querier, ts telemetrystore.TelemetryStore) services.Module {
return &module{
Querier: q,
TelemetryStore: ts,
}
}
// FetchTopLevelOperations returns top-level operations per service using a direct ClickHouse query.
func (m *module) FetchTopLevelOperations(ctx context.Context, start time.Time, services []string) (map[string][]string, error) {
db := m.TelemetryStore.ClickhouseDB()
query := fmt.Sprintf("SELECT name, serviceName, max(time) as ts FROM %s.%s WHERE time >= @start", telemetrytraces.DBName, telemetrytraces.TopLevelOperationsTableName)
args := []any{clickhouse.Named("start", start)}
if len(services) > 0 {
query += " AND serviceName IN @services"
args = append(args, clickhouse.Named("services", services))
}
query += " GROUP BY name, serviceName ORDER BY ts DESC LIMIT 5000"
rows, err := db.Query(ctx, query, args...)
if err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "failed to fetch top level operations")
}
defer rows.Close()
ops := make(map[string][]string)
for rows.Next() {
var name, serviceName string
var ts time.Time
if err := rows.Scan(&name, &serviceName, &ts); err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "failed to scan top level operation")
}
if _, ok := ops[serviceName]; !ok {
ops[serviceName] = []string{"overflow_operation"}
}
ops[serviceName] = append(ops[serviceName], name)
}
if err := rows.Err(); err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "failed to fetch top level operations")
}
return ops, nil
}
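The grouping in FetchTopLevelOperations seeds each service's list with the `overflow_operation` sentinel before appending real operation names. A minimal sketch of that accumulation step, assuming a hypothetical `addOp` helper:

```go
package main

import "fmt"

// addOp reproduces the grouping pattern from FetchTopLevelOperations: the
// first time a service is seen, its list is seeded with the
// "overflow_operation" sentinel, then real operation names are appended.
func addOp(ops map[string][]string, service, name string) {
	if _, ok := ops[service]; !ok {
		ops[service] = []string{"overflow_operation"}
	}
	ops[service] = append(ops[service], name)
}

func main() {
	ops := make(map[string][]string)
	addOp(ops, "svc-a", "GET /users")
	addOp(ops, "svc-a", "POST /users")
	addOp(ops, "svc-b", "GET /health")
	fmt.Println(ops["svc-a"]) // [overflow_operation GET /users POST /users]
	fmt.Println(ops["svc-b"]) // [overflow_operation GET /health]
}
```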
// Get implements services.Module
// Builds a QBv5 traces aggregation grouped by service.name and maps results to ResponseItem.
func (m *module) Get(ctx context.Context, orgUUID valuer.UUID, req *servicetypesv1.Request) ([]*servicetypesv1.ResponseItem, error) {
if req == nil {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "request is nil")
}
// Prepare phase
queryRangeReq, startMs, endMs, err := m.buildQueryRangeRequest(req)
if err != nil {
return nil, err
}
// Fetch phase
resp, err := m.executeQuery(ctx, orgUUID, queryRangeReq)
if err != nil {
return nil, err
}
// Process phase
items, serviceNames := m.mapQueryRangeRespToServices(resp, startMs, endMs)
if len(items) == 0 {
return []*servicetypesv1.ResponseItem{}, nil
}
// attach top level ops to service items
if len(serviceNames) > 0 {
if err := m.attachTopLevelOps(ctx, serviceNames, startMs, items); err != nil {
return nil, err
}
}
return items, nil
}
// GetTopOperations implements services.Module for QBv5-based top operations
func (m *module) GetTopOperations(ctx context.Context, orgUUID valuer.UUID, req *servicetypesv1.OperationsRequest) ([]servicetypesv1.OperationItem, error) {
if req == nil {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "request is nil")
}
qr, err := m.buildTopOpsQueryRangeRequest(req)
if err != nil {
return nil, err
}
resp, err := m.executeQuery(ctx, orgUUID, qr)
if err != nil {
return nil, err
}
items := m.mapTopOpsQueryRangeResp(resp)
return items, nil
}
// GetEntryPointOperations implements services.Module for QBv5-based entry point operations
func (m *module) GetEntryPointOperations(ctx context.Context, orgUUID valuer.UUID, req *servicetypesv1.OperationsRequest) ([]servicetypesv1.OperationItem, error) {
if req == nil {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "request is nil")
}
qr, err := m.buildEntryPointOpsQueryRangeRequest(req)
if err != nil {
return nil, err
}
resp, err := m.executeQuery(ctx, orgUUID, qr)
if err != nil {
return nil, err
}
items := m.mapEntryPointOpsQueryRangeResp(resp)
return items, nil
}
// buildQueryRangeRequest constructs the QBv5 QueryRangeRequest and computes the time window.
func (m *module) buildQueryRangeRequest(req *servicetypesv1.Request) (*qbtypes.QueryRangeRequest, uint64, uint64, error) {
// Parse start/end (nanoseconds) from strings and convert to milliseconds for QBv5
startNs, err := strconv.ParseUint(req.Start, 10, 64)
if err != nil {
return nil, 0, 0, errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid start time: %v", err)
}
endNs, err := strconv.ParseUint(req.End, 10, 64)
if err != nil {
return nil, 0, 0, errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid end time: %v", err)
}
if startNs >= endNs {
return nil, 0, 0, errors.NewInvalidInputf(errors.CodeInvalidInput, "start must be before end")
}
if err := validateTagFilterItems(req.Tags); err != nil {
return nil, 0, 0, err
}
startMs := startNs / 1_000_000
endMs := endNs / 1_000_000
// tags filter
filterExpr, variables := buildFilterExpression(req.Tags)
// ensure we only consider root or entry-point spans
scopeExpr := "isRoot = true OR isEntryPoint = true"
if filterExpr != "" {
filterExpr = "(" + filterExpr + ") AND (" + scopeExpr + ")"
} else {
filterExpr = scopeExpr
}
reqV5 := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeScalar,
Variables: variables,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "A",
Signal: telemetrytypes.SignalTraces,
Filter: &qbtypes.Filter{
Expression: filterExpr,
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
}},
},
Aggregations: []qbtypes.TraceAggregation{
{Expression: "p99(duration_nano)", Alias: "p99"},
{Expression: "avg(duration_nano)", Alias: "avgDuration"},
{Expression: "count()", Alias: "numCalls"},
{Expression: "countIf(status_code = 2)", Alias: "numErrors"},
{Expression: "countIf(response_status_code >= 400 AND response_status_code < 500)", Alias: "num4XX"},
},
},
},
},
},
}
return &reqV5, startMs, endMs, nil
}
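The time-window handling above parses nanosecond timestamps from strings, rejects an inverted or empty window, and converts to milliseconds for QBv5. A self-contained sketch of just those steps, using a hypothetical `parseWindowNsToMs` helper with stdlib errors in place of the SigNoz errors package:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseWindowNsToMs follows the request-parsing steps: parse nanosecond
// timestamps from strings, reject start >= end, and convert to milliseconds.
func parseWindowNsToMs(start, end string) (uint64, uint64, error) {
	startNs, err := strconv.ParseUint(start, 10, 64)
	if err != nil {
		return 0, 0, fmt.Errorf("invalid start time: %w", err)
	}
	endNs, err := strconv.ParseUint(end, 10, 64)
	if err != nil {
		return 0, 0, fmt.Errorf("invalid end time: %w", err)
	}
	if startNs >= endNs {
		return 0, 0, fmt.Errorf("start must be before end")
	}
	return startNs / 1_000_000, endNs / 1_000_000, nil
}

func main() {
	s, e, err := parseWindowNsToMs("1700000000000000000", "1700000600000000000")
	fmt.Println(s, e, err) // 1700000000000 1700000600000 <nil>
}
```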
// executeQuery calls the underlying Querier with the provided request.
func (m *module) executeQuery(ctx context.Context, orgUUID valuer.UUID, qr *qbtypes.QueryRangeRequest) (*qbtypes.QueryRangeResponse, error) {
return m.Querier.QueryRange(ctx, orgUUID, qr)
}
// mapQueryRangeRespToServices converts the raw query response into service items and collected service names.
func (m *module) mapQueryRangeRespToServices(resp *qbtypes.QueryRangeResponse, startMs, endMs uint64) ([]*servicetypesv1.ResponseItem, []string) {
if resp == nil || len(resp.Data.Results) == 0 { // no rows
return []*servicetypesv1.ResponseItem{}, []string{}
}
sd, ok := resp.Data.Results[0].(*qbtypes.ScalarData) // empty rows
if !ok || sd == nil {
return []*servicetypesv1.ResponseItem{}, []string{}
}
// this stores the index at which service name is found in the response
serviceNameRespIndex := -1
aggIndexMappings := map[int]int{}
for i, c := range sd.Columns {
switch c.Type {
case qbtypes.ColumnTypeGroup:
if c.TelemetryFieldKey.Name == "service.name" {
serviceNameRespIndex = i
}
case qbtypes.ColumnTypeAggregation:
aggIndexMappings[int(c.AggregationIndex)] = i
}
}
if serviceNameRespIndex < 0 {
return []*servicetypesv1.ResponseItem{}, []string{}
}
periodSeconds := float64(endMs-startMs) / 1000.0
out := make([]*servicetypesv1.ResponseItem, 0, len(sd.Data))
serviceNames := make([]string, 0, len(sd.Data))
for _, row := range sd.Data {
svcName := fmt.Sprintf("%v", row[serviceNameRespIndex])
serviceNames = append(serviceNames, svcName)
p99 := toFloat(row, aggIndexMappings[0])
avgDuration := toFloat(row, aggIndexMappings[1])
numCalls := toUint64(row, aggIndexMappings[2])
numErrors := toUint64(row, aggIndexMappings[3])
num4xx := toUint64(row, aggIndexMappings[4])
callRate := 0.0
if numCalls > 0 && periodSeconds > 0 {
callRate = float64(numCalls) / periodSeconds
}
errorRate := 0.0
if numCalls > 0 {
errorRate = float64(numErrors) * 100 / float64(numCalls) // percentage
}
fourXXRate := 0.0
if numCalls > 0 {
fourXXRate = float64(num4xx) * 100 / float64(numCalls) // percentage
}
out = append(out, &servicetypesv1.ResponseItem{
ServiceName: svcName,
Percentile99: p99,
AvgDuration: avgDuration,
NumCalls: numCalls,
CallRate: callRate,
NumErrors: numErrors,
ErrorRate: errorRate,
Num4XX: num4xx,
FourXXRate: fourXXRate,
DataWarning: servicetypesv1.DataWarning{TopLevelOps: []string{}},
})
}
return out, serviceNames
}
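The post-aggregation arithmetic above computes call rate as calls per second over the window and error rate as a percentage of calls. A minimal sketch of that math, with a zero-window guard added for safety (the guard and the `rates` helper name are illustrative, not from the source):

```go
package main

import "fmt"

// rates mirrors the post-aggregation arithmetic: call rate is calls per
// second over the window, error rate is a percentage of total calls.
// The periodSeconds guard is an addition to avoid division by zero.
func rates(numCalls, numErrors uint64, periodSeconds float64) (callRate, errorRate float64) {
	if numCalls == 0 || periodSeconds <= 0 {
		return 0, 0
	}
	callRate = float64(numCalls) / periodSeconds
	errorRate = float64(numErrors) * 100 / float64(numCalls)
	return callRate, errorRate
}

func main() {
	cr, er := rates(600, 30, 300) // 600 calls, 30 errors over a 5-minute window
	fmt.Println(cr, er)           // 2 5
}
```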
// attachTopLevelOps fetches top-level ops from TelemetryStore and attaches them to items.
func (m *module) attachTopLevelOps(ctx context.Context, serviceNames []string, startMs uint64, items []*servicetypesv1.ResponseItem) error {
startTime := time.UnixMilli(int64(startMs)).UTC()
opsMap, err := m.FetchTopLevelOperations(ctx, startTime, serviceNames)
if err != nil {
return err
}
applyOpsToItems(items, opsMap)
return nil
}
func (m *module) buildTopOpsQueryRangeRequest(req *servicetypesv1.OperationsRequest) (*qbtypes.QueryRangeRequest, error) {
if req.Service == "" {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "service is required")
}
startNs, err := strconv.ParseUint(req.Start, 10, 64)
if err != nil {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid start time: %v", err)
}
endNs, err := strconv.ParseUint(req.End, 10, 64)
if err != nil {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid end time: %v", err)
}
if startNs >= endNs {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "start must be before end")
}
if req.Limit < 1 || req.Limit > 5000 {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "limit must be between 1 and 5000")
}
if err := validateTagFilterItems(req.Tags); err != nil {
return nil, err
}
startMs := startNs / 1_000_000
endMs := endNs / 1_000_000
serviceTag := servicetypesv1.TagFilterItem{
Key: "service.name",
Operator: "in",
StringValues: []string{req.Service},
}
tags := append([]servicetypesv1.TagFilterItem{serviceTag}, req.Tags...)
filterExpr, variables := buildFilterExpression(tags)
reqV5 := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeScalar,
Variables: variables,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "A",
Signal: telemetrytypes.SignalTraces,
Filter: &qbtypes.Filter{Expression: filterExpr},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{
Name: "name",
FieldContext: telemetrytypes.FieldContextSpan,
FieldDataType: telemetrytypes.FieldDataTypeString,
}},
},
Aggregations: []qbtypes.TraceAggregation{
{Expression: "p50(duration_nano)", Alias: "p50"},
{Expression: "p95(duration_nano)", Alias: "p95"},
{Expression: "p99(duration_nano)", Alias: "p99"},
{Expression: "count()", Alias: "numCalls"},
{Expression: "countIf(status_code = 2)", Alias: "errorCount"},
},
Order: []qbtypes.OrderBy{
{Key: qbtypes.OrderByKey{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "p99"}}, Direction: qbtypes.OrderDirectionDesc},
},
Limit: req.Limit,
},
},
},
},
}
return &reqV5, nil
}
// mapTopOpsQueryRangeResp maps scalar query response rows into operation items.
func (m *module) mapTopOpsQueryRangeResp(resp *qbtypes.QueryRangeResponse) []servicetypesv1.OperationItem {
if resp == nil || len(resp.Data.Results) == 0 {
return []servicetypesv1.OperationItem{}
}
sd, ok := resp.Data.Results[0].(*qbtypes.ScalarData)
if !ok || sd == nil {
return []servicetypesv1.OperationItem{}
}
nameIdx := -1
aggIdx := map[int]int{}
for i, c := range sd.Columns {
switch c.Type {
case qbtypes.ColumnTypeGroup:
if c.TelemetryFieldKey.Name == "name" {
nameIdx = i
}
case qbtypes.ColumnTypeAggregation:
aggIdx[int(c.AggregationIndex)] = i
}
}
if nameIdx == -1 {
// group column missing; avoid indexing row[-1] below
return []servicetypesv1.OperationItem{}
}
out := make([]servicetypesv1.OperationItem, 0, len(sd.Data))
for _, row := range sd.Data {
item := servicetypesv1.OperationItem{
Name: fmt.Sprintf("%v", row[nameIdx]),
P50: toFloat(row, aggIdx[0]),
P95: toFloat(row, aggIdx[1]),
P99: toFloat(row, aggIdx[2]),
NumCalls: toUint64(row, aggIdx[3]),
ErrorCount: toUint64(row, aggIdx[4]),
}
out = append(out, item)
}
return out
}
// buildEntryPointOpsQueryRangeRequest is like buildTopOpsQueryRangeRequest but
// additionally scopes results to root or entry-point spans.
func (m *module) buildEntryPointOpsQueryRangeRequest(req *servicetypesv1.OperationsRequest) (*qbtypes.QueryRangeRequest, error) {
if req.Service == "" {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "service is required")
}
startNs, err := strconv.ParseUint(req.Start, 10, 64)
if err != nil {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid start time: %v", err)
}
endNs, err := strconv.ParseUint(req.End, 10, 64)
if err != nil {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid end time: %v", err)
}
if startNs >= endNs {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "start must be before end")
}
if req.Limit < 1 || req.Limit > 5000 {
return nil, errors.NewInvalidInputf(errors.CodeInvalidInput, "limit must be between 1 and 5000")
}
if err := validateTagFilterItems(req.Tags); err != nil {
return nil, err
}
startMs := startNs / 1_000_000
endMs := endNs / 1_000_000
serviceTag := servicetypesv1.TagFilterItem{
Key: "service.name",
Operator: "in",
StringValues: []string{req.Service},
}
tags := append([]servicetypesv1.TagFilterItem{serviceTag}, req.Tags...)
filterExpr, variables := buildFilterExpression(tags)
scopeExpr := "isRoot = true OR isEntryPoint = true"
if filterExpr != "" {
filterExpr = "(" + filterExpr + ") AND (" + scopeExpr + ")"
} else {
filterExpr = scopeExpr
}
reqV5 := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeScalar,
Variables: variables,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "A",
Signal: telemetrytypes.SignalTraces,
Filter: &qbtypes.Filter{Expression: filterExpr},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{
Name: "name",
FieldContext: telemetrytypes.FieldContextSpan,
FieldDataType: telemetrytypes.FieldDataTypeString,
}},
},
Aggregations: []qbtypes.TraceAggregation{
{Expression: "p50(duration_nano)", Alias: "p50"},
{Expression: "p95(duration_nano)", Alias: "p95"},
{Expression: "p99(duration_nano)", Alias: "p99"},
{Expression: "count()", Alias: "numCalls"},
{Expression: "countIf(status_code = 2)", Alias: "errorCount"},
},
Order: []qbtypes.OrderBy{
{Key: qbtypes.OrderByKey{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "p99"}}, Direction: qbtypes.OrderDirectionDesc},
},
Limit: req.Limit,
},
},
},
},
}
return &reqV5, nil
}
// mapEntryPointOpsQueryRangeResp maps scalar query response rows into operation items.
func (m *module) mapEntryPointOpsQueryRangeResp(resp *qbtypes.QueryRangeResponse) []servicetypesv1.OperationItem {
if resp == nil || len(resp.Data.Results) == 0 {
return []servicetypesv1.OperationItem{}
}
sd, ok := resp.Data.Results[0].(*qbtypes.ScalarData)
if !ok || sd == nil {
return []servicetypesv1.OperationItem{}
}
nameIdx := -1
aggIdx := map[int]int{}
for i, c := range sd.Columns {
switch c.Type {
case qbtypes.ColumnTypeGroup:
if c.TelemetryFieldKey.Name == "name" {
nameIdx = i
}
case qbtypes.ColumnTypeAggregation:
aggIdx[int(c.AggregationIndex)] = i
}
}
if nameIdx == -1 {
// group column missing; avoid indexing row[-1] below
return []servicetypesv1.OperationItem{}
}
out := make([]servicetypesv1.OperationItem, 0, len(sd.Data))
for _, row := range sd.Data {
item := servicetypesv1.OperationItem{
Name: fmt.Sprintf("%v", row[nameIdx]),
P50: toFloat(row, aggIdx[0]),
P95: toFloat(row, aggIdx[1]),
P99: toFloat(row, aggIdx[2]),
NumCalls: toUint64(row, aggIdx[3]),
ErrorCount: toUint64(row, aggIdx[4]),
}
out = append(out, item)
}
return out
}


@@ -0,0 +1,773 @@
package implservices
import (
"testing"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/servicetypes/servicetypesv1"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/stretchr/testify/assert"
)
func TestBuildQueryRangeRequest(t *testing.T) {
m := &module{}
tests := []struct {
name string
req servicetypesv1.Request
wantErr string
assertOK func(t *testing.T, qr *qbtypes.QueryRangeRequest, startMs, endMs uint64)
}{
{
name: "valid with tags builds scope+filter and query",
req: servicetypesv1.Request{
Start: "1000000000", // 1s in ns -> 1000 ms
End: "2000000000", // 2s in ns -> 2000 ms
Tags: []servicetypesv1.TagFilterItem{
{Key: "service.name", Operator: "in", StringValues: []string{"frontend", "backend"}},
{Key: "env", Operator: "notin", StringValues: []string{"prod"}},
},
},
assertOK: func(t *testing.T, qr *qbtypes.QueryRangeRequest, startMs, endMs uint64) {
assert.Equal(t, uint64(1000), startMs)
assert.Equal(t, uint64(2000), endMs)
assert.Equal(t, qbtypes.RequestTypeScalar, qr.RequestType)
assert.Equal(t, 1, len(qr.CompositeQuery.Queries))
qe := qr.CompositeQuery.Queries[0]
assert.Equal(t, qbtypes.QueryTypeBuilder, qe.Type)
// Spec should be a traces builder query
spec, ok := qe.Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
if !ok {
t.Fatalf("unexpected spec type: %T", qe.Spec)
}
assert.Equal(t, telemetrytypes.SignalTraces, spec.Signal)
// Filter should include both user filter and the scope expression
assert.NotNil(t, spec.Filter)
expr := spec.Filter.Expression
assert.Contains(t, expr, "service.name IN $1")
assert.Contains(t, expr, "env NOT IN $2")
assert.Contains(t, expr, "isRoot = true OR isEntryPoint = true")
// GroupBy should include service.name
if assert.Equal(t, 1, len(spec.GroupBy)) {
assert.Equal(t, "service.name", spec.GroupBy[0].TelemetryFieldKey.Name)
}
// Aggregations should match expected expressions and aliases
if assert.Equal(t, 5, len(spec.Aggregations)) {
assert.Equal(t, "p99(duration_nano)", spec.Aggregations[0].Expression)
assert.Equal(t, "p99", spec.Aggregations[0].Alias)
assert.Equal(t, "avg(duration_nano)", spec.Aggregations[1].Expression)
assert.Equal(t, "avgDuration", spec.Aggregations[1].Alias)
assert.Equal(t, "count()", spec.Aggregations[2].Expression)
assert.Equal(t, "numCalls", spec.Aggregations[2].Alias)
assert.Equal(t, "countIf(status_code = 2)", spec.Aggregations[3].Expression)
assert.Equal(t, "numErrors", spec.Aggregations[3].Alias)
assert.Equal(t, "countIf(response_status_code >= 400 AND response_status_code < 500)", spec.Aggregations[4].Expression)
assert.Equal(t, "num4XX", spec.Aggregations[4].Alias)
}
},
},
{
name: "valid without tags uses only scope filter",
req: servicetypesv1.Request{
Start: "3000000000", // 3s ns -> 3000 ms
End: "5000000000", // 5s ns -> 5000 ms
},
assertOK: func(t *testing.T, qr *qbtypes.QueryRangeRequest, startMs, endMs uint64) {
assert.Equal(t, uint64(3000), startMs)
assert.Equal(t, uint64(5000), endMs)
qe := qr.CompositeQuery.Queries[0]
spec := qe.Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
if assert.NotNil(t, spec.Filter) {
assert.Equal(t, "isRoot = true OR isEntryPoint = true", spec.Filter.Expression)
}
},
},
{
name: "invalid start",
req: servicetypesv1.Request{Start: "abc", End: "100"},
wantErr: "invalid start time",
},
{
name: "invalid end",
req: servicetypesv1.Request{Start: "100", End: "abc"},
wantErr: "invalid end time",
},
{
name: "start not before end",
req: servicetypesv1.Request{Start: "2000", End: "2000"},
wantErr: "start must be before end",
},
{
name: "start greater than end",
req: servicetypesv1.Request{Start: "2001", End: "2000"},
wantErr: "start must be before end",
},
{
name: "invalid tag: missing key -> error",
req: servicetypesv1.Request{
Start: "1000000000",
End: "2000000000",
Tags: []servicetypesv1.TagFilterItem{{Key: "", Operator: "in", StringValues: []string{"x"}}},
},
wantErr: "key is required",
},
{
name: "invalid tag: unsupported operator -> error",
req: servicetypesv1.Request{
Start: "1000000000",
End: "2000000000",
Tags: []servicetypesv1.TagFilterItem{{Key: "env", Operator: "equals", StringValues: []string{"staging"}}},
},
wantErr: "only in and notin operators are supported",
},
{
name: "invalid tag: in but no values -> error",
req: servicetypesv1.Request{
Start: "1000000000",
End: "2000000000",
Tags: []servicetypesv1.TagFilterItem{{Key: "env", Operator: "in"}},
},
wantErr: "at least one of stringValues, boolValues, or numberValues must be populated",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
qr, startMs, endMs, err := m.buildQueryRangeRequest(&tt.req)
if tt.wantErr != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.wantErr)
return
}
assert.NoError(t, err)
if tt.assertOK != nil {
tt.assertOK(t, qr, startMs, endMs)
}
})
}
}
func TestMapQueryRangeRespToServices(t *testing.T) {
m := &module{}
groupCol := &qbtypes.ColumnDescriptor{
TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "service.name"},
Type: qbtypes.ColumnTypeGroup,
}
agg := func(idx int64) *qbtypes.ColumnDescriptor {
return &qbtypes.ColumnDescriptor{AggregationIndex: idx, Type: qbtypes.ColumnTypeAggregation}
}
tests := []struct {
name string
resp *qbtypes.QueryRangeResponse
startMs, endMs uint64
wantItems []*servicetypesv1.ResponseItem
wantServices []string
}{
{
name: "empty response -> no items",
resp: &qbtypes.QueryRangeResponse{
Type: qbtypes.RequestTypeScalar,
Data: qbtypes.QueryData{Results: []any{}},
},
startMs: 1000, endMs: 2000,
wantItems: []*servicetypesv1.ResponseItem{},
wantServices: []string{},
},
{
name: "no ScalarData -> no items",
resp: &qbtypes.QueryRangeResponse{
Type: qbtypes.RequestTypeScalar,
Data: qbtypes.QueryData{Results: []any{"not-scalar"}},
},
startMs: 1000, endMs: 2000,
wantItems: []*servicetypesv1.ResponseItem{},
wantServices: []string{},
},
{
name: "missing service.name column -> no items",
resp: &qbtypes.QueryRangeResponse{
Type: qbtypes.RequestTypeScalar,
Data: qbtypes.QueryData{
Results: []any{&qbtypes.ScalarData{
QueryName: "A",
Columns: []*qbtypes.ColumnDescriptor{agg(0)}, Data: [][]any{},
},
},
},
},
startMs: 1000, endMs: 2000,
wantItems: []*servicetypesv1.ResponseItem{},
wantServices: []string{},
},
{
name: "single row maps fields and rates",
resp: &qbtypes.QueryRangeResponse{
Type: qbtypes.RequestTypeScalar,
Data: qbtypes.QueryData{
Results: []any{
&qbtypes.ScalarData{
QueryName: "A",
Columns: []*qbtypes.ColumnDescriptor{groupCol, agg(0), agg(1), agg(2), agg(3), agg(4)},
Data: [][]any{{"svc-a", float64(123.0), float64(45.0), uint64(10), uint64(2), uint64(1)}},
},
},
},
},
startMs: 0, endMs: 10000, // 10s window -> callRate = 10/10=1, errorRate=20%, fourXXRate=10%
wantItems: []*servicetypesv1.ResponseItem{
{
ServiceName: "svc-a",
Percentile99: 123.0,
AvgDuration: 45.0,
NumCalls: 10,
CallRate: 1.0,
NumErrors: 2,
ErrorRate: 20.0, // in percentage
Num4XX: 1,
FourXXRate: 10.0, // in percentage
DataWarning: servicetypesv1.DataWarning{TopLevelOps: []string{}},
},
},
wantServices: []string{"svc-a"},
},
{
name: "group column in middle maps correctly",
resp: &qbtypes.QueryRangeResponse{
Type: qbtypes.RequestTypeScalar,
Data: qbtypes.QueryData{
Results: []any{&qbtypes.ScalarData{
QueryName: "A",
Columns: []*qbtypes.ColumnDescriptor{agg(0), groupCol, agg(1), agg(2), agg(3), agg(4)},
Data: [][]any{{float64(200.0), "svc-mid", float64(50.0), uint64(20), uint64(5), uint64(2)}},
},
},
},
},
startMs: 0, endMs: 10000, // 10s window -> callRate = 2, errorRate=25%, fourXXRate=10%
wantItems: []*servicetypesv1.ResponseItem{
{
ServiceName: "svc-mid",
Percentile99: 200.0,
AvgDuration: 50.0,
NumCalls: 20,
CallRate: 2.0,
NumErrors: 5,
ErrorRate: 25.0, // in percentage
Num4XX: 2,
FourXXRate: 10.0, // in percentage
DataWarning: servicetypesv1.DataWarning{TopLevelOps: []string{}},
},
},
wantServices: []string{"svc-mid"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotItems, gotServices := m.mapQueryRangeRespToServices(tt.resp, tt.startMs, tt.endMs)
assert.Equal(t, tt.wantServices, gotServices)
assert.Equal(t, len(tt.wantItems), len(gotItems))
if len(tt.wantItems) == 1 {
assert.InDelta(t, tt.wantItems[0].Percentile99, gotItems[0].Percentile99, 1e-9)
assert.InDelta(t, tt.wantItems[0].AvgDuration, gotItems[0].AvgDuration, 1e-9)
assert.Equal(t, tt.wantItems[0].NumCalls, gotItems[0].NumCalls)
assert.InDelta(t, tt.wantItems[0].CallRate, gotItems[0].CallRate, 1e-9)
assert.Equal(t, tt.wantItems[0].NumErrors, gotItems[0].NumErrors)
assert.InDelta(t, tt.wantItems[0].ErrorRate, gotItems[0].ErrorRate, 1e-9)
assert.Equal(t, tt.wantItems[0].Num4XX, gotItems[0].Num4XX)
assert.InDelta(t, tt.wantItems[0].FourXXRate, gotItems[0].FourXXRate, 1e-9)
assert.Equal(t, tt.wantItems[0].DataWarning.TopLevelOps, gotItems[0].DataWarning.TopLevelOps)
assert.Equal(t, tt.wantItems[0].ServiceName, gotItems[0].ServiceName)
}
})
}
}
func TestBuildTopOpsQueryRangeRequest(t *testing.T) {
m := &module{}
tests := []struct {
name string
req servicetypesv1.OperationsRequest
wantErr string
assertQ func(t *testing.T, qr *qbtypes.QueryRangeRequest)
}{
{
name: "with tag filters (In, NotIn) and no scope",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "frontend",
Tags: []servicetypesv1.TagFilterItem{
{Key: "deployment.environment", Operator: "NotIn", StringValues: []string{"prod", "staging"}},
{Key: "http.method", Operator: "in", StringValues: []string{"GET"}},
},
Limit: 10,
},
assertQ: func(t *testing.T, qr *qbtypes.QueryRangeRequest) {
if assert.Equal(t, 1, len(qr.CompositeQuery.Queries)) {
qe := qr.CompositeQuery.Queries[0]
spec, ok := qe.Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
if !ok {
t.Fatalf("unexpected spec type: %T", qe.Spec)
}
assert.NotNil(t, spec.Filter)
expr := spec.Filter.Expression
// service.name added first as $1, then user tags as $2, $3
assert.Contains(t, expr, "service.name IN $1")
assert.Contains(t, expr, "deployment.environment NOT IN $2")
assert.Contains(t, expr, "http.method IN $3")
assert.NotContains(t, expr, "isRoot = true OR isEntryPoint = true")
// variables populated correctly
if v, ok := qr.Variables["1"]; assert.True(t, ok) {
vals, _ := v.Value.([]any)
if assert.Equal(t, 1, len(vals)) {
assert.Equal(t, "frontend", vals[0])
}
}
if v, ok := qr.Variables["2"]; assert.True(t, ok) {
vals, _ := v.Value.([]any)
if assert.Equal(t, 2, len(vals)) {
assert.ElementsMatch(t, []any{"prod", "staging"}, vals)
}
}
if v, ok := qr.Variables["3"]; assert.True(t, ok) {
vals, _ := v.Value.([]any)
if assert.Equal(t, 1, len(vals)) {
assert.Equal(t, "GET", vals[0])
}
}
}
},
},
{
name: "valid minimal filters, no scope added",
req: servicetypesv1.OperationsRequest{
Start: "1000000000", // 1s ns -> 1000 ms
End: "4000000000", // 4s ns -> 4000 ms
Service: "cartservice",
Limit: 50,
},
assertQ: func(t *testing.T, qr *qbtypes.QueryRangeRequest) {
assert.Equal(t, qbtypes.RequestTypeScalar, qr.RequestType)
if assert.Equal(t, 1, len(qr.CompositeQuery.Queries)) {
qe := qr.CompositeQuery.Queries[0]
assert.Equal(t, qbtypes.QueryTypeBuilder, qe.Type)
spec, ok := qe.Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
if !ok {
t.Fatalf("unexpected spec type: %T", qe.Spec)
}
assert.Equal(t, telemetrytypes.SignalTraces, spec.Signal)
if assert.NotNil(t, spec.Filter) {
expr := spec.Filter.Expression
// should contain only tag filter for service.name and NOT the scope expression
assert.Contains(t, expr, "service.name IN $1")
assert.NotContains(t, expr, "isRoot = true OR isEntryPoint = true")
}
if assert.Equal(t, 1, len(spec.GroupBy)) {
assert.Equal(t, "name", spec.GroupBy[0].TelemetryFieldKey.Name)
assert.Equal(t, telemetrytypes.FieldContextSpan, spec.GroupBy[0].TelemetryFieldKey.FieldContext)
}
if assert.Equal(t, 5, len(spec.Aggregations)) {
assert.Equal(t, "p50(duration_nano)", spec.Aggregations[0].Expression)
assert.Equal(t, "p50", spec.Aggregations[0].Alias)
assert.Equal(t, "p95(duration_nano)", spec.Aggregations[1].Expression)
assert.Equal(t, "p95", spec.Aggregations[1].Alias)
assert.Equal(t, "p99(duration_nano)", spec.Aggregations[2].Expression)
assert.Equal(t, "p99", spec.Aggregations[2].Alias)
assert.Equal(t, "count()", spec.Aggregations[3].Expression)
assert.Equal(t, "numCalls", spec.Aggregations[3].Alias)
assert.Equal(t, "countIf(status_code = 2)", spec.Aggregations[4].Expression)
assert.Equal(t, "errorCount", spec.Aggregations[4].Alias)
}
if assert.Equal(t, 1, len(spec.Order)) {
assert.Equal(t, "p99", spec.Order[0].Key.TelemetryFieldKey.Name)
assert.Equal(t, qbtypes.OrderDirectionDesc, spec.Order[0].Direction)
}
assert.Equal(t, 50, spec.Limit)
}
},
},
{
name: "missing service -> error",
req: servicetypesv1.OperationsRequest{Start: "1", End: "2"},
wantErr: "service is required",
},
{
name: "invalid limit low",
req: servicetypesv1.OperationsRequest{Start: "1", End: "2", Service: "s", Limit: 0},
wantErr: "limit must be between 1 and 5000",
},
{
name: "invalid limit high",
req: servicetypesv1.OperationsRequest{Start: "1", End: "2", Service: "s", Limit: 5001},
wantErr: "limit must be between 1 and 5000",
},
{
name: "invalid start",
req: servicetypesv1.OperationsRequest{Start: "abc", End: "2", Service: "s"},
wantErr: "invalid start time",
},
{
name: "invalid end",
req: servicetypesv1.OperationsRequest{Start: "1", End: "abc", Service: "s"},
wantErr: "invalid end time",
},
{
name: "start not before end",
req: servicetypesv1.OperationsRequest{Start: "2", End: "2", Service: "s"},
wantErr: "start must be before end",
},
{
name: "invalid tag in top ops -> error",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "frontend",
Limit: 10,
Tags: []servicetypesv1.TagFilterItem{{Key: "", Operator: "in", StringValues: []string{"x"}}},
},
wantErr: "key is required",
},
{
name: "invalid tag: in but no values -> error (top ops)",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "frontend",
Tags: []servicetypesv1.TagFilterItem{{Key: "env", Operator: "in"}},
Limit: 10,
},
wantErr: "at least one of stringValues, boolValues, or numberValues must be populated",
},
{
name: "valid tag in top ops -> ok",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "frontend",
Tags: []servicetypesv1.TagFilterItem{{Key: "deployment.environment", Operator: "in", StringValues: []string{"prod"}}},
Limit: 5,
},
assertQ: func(t *testing.T, qr *qbtypes.QueryRangeRequest) {
qe := qr.CompositeQuery.Queries[0]
spec := qe.Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
assert.Contains(t, spec.Filter.Expression, "service.name IN $1")
assert.Contains(t, spec.Filter.Expression, "deployment.environment IN $2")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
qr, err := m.buildTopOpsQueryRangeRequest(&tt.req)
if tt.wantErr != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.wantErr)
return
}
assert.NoError(t, err)
if tt.assertQ != nil {
tt.assertQ(t, qr)
}
})
}
}
func TestMapTopOpsQueryRangeResp(t *testing.T) {
m := &module{}
nameGroup := &qbtypes.ColumnDescriptor{
TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "name"},
Type: qbtypes.ColumnTypeGroup,
}
agg := func(idx int64) *qbtypes.ColumnDescriptor {
return &qbtypes.ColumnDescriptor{AggregationIndex: idx, Type: qbtypes.ColumnTypeAggregation}
}
tests := []struct {
name string
resp *qbtypes.QueryRangeResponse
want []servicetypesv1.OperationItem
}{
{
name: "empty results -> empty slice",
resp: &qbtypes.QueryRangeResponse{Type: qbtypes.RequestTypeScalar, Data: qbtypes.QueryData{Results: []any{}}},
want: []servicetypesv1.OperationItem{},
},
{
name: "non-scalar result -> empty slice",
resp: &qbtypes.QueryRangeResponse{Type: qbtypes.RequestTypeScalar, Data: qbtypes.QueryData{Results: []any{"x"}}},
want: []servicetypesv1.OperationItem{},
},
{
name: "single row maps correctly",
resp: &qbtypes.QueryRangeResponse{
Type: qbtypes.RequestTypeScalar,
Data: qbtypes.QueryData{Results: []any{&qbtypes.ScalarData{
QueryName: "A",
Columns: []*qbtypes.ColumnDescriptor{nameGroup, agg(0), agg(1), agg(2), agg(3), agg(4)},
Data: [][]any{{"opA", float64(10), float64(20), float64(30), uint64(100), uint64(7)}},
}}},
},
want: []servicetypesv1.OperationItem{{
Name: "opA",
P50: 10,
P95: 20,
P99: 30,
NumCalls: 100,
ErrorCount: 7,
}},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := m.mapTopOpsQueryRangeResp(tt.resp)
assert.Equal(t, tt.want, got)
})
}
}
func TestBuildEntryPointOpsQueryRangeRequest(t *testing.T) {
m := &module{}
tests := []struct {
name string
req servicetypesv1.OperationsRequest
wantErr string
assertQ func(t *testing.T, qr *qbtypes.QueryRangeRequest)
}{
{
name: "service only -> scope present, no extra filters",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "cartservice",
// no tags
Limit: 10,
},
assertQ: func(t *testing.T, qr *qbtypes.QueryRangeRequest) {
if assert.Equal(t, 1, len(qr.CompositeQuery.Queries)) {
qe := qr.CompositeQuery.Queries[0]
spec, ok := qe.Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
if !ok {
t.Fatalf("unexpected spec type: %T", qe.Spec)
}
assert.NotNil(t, spec.Filter)
expr := spec.Filter.Expression
assert.Contains(t, expr, "service.name IN $1")
assert.Contains(t, expr, "isRoot = true OR isEntryPoint = true")
// only one variable should exist
if assert.Len(t, qr.Variables, 1) {
v := qr.Variables["1"]
vals, _ := v.Value.([]any)
if assert.Equal(t, 1, len(vals)) {
assert.Equal(t, "cartservice", vals[0])
}
}
// groupBy is name (span)
if assert.Equal(t, 1, len(spec.GroupBy)) {
assert.Equal(t, "name", spec.GroupBy[0].TelemetryFieldKey.Name)
assert.Equal(t, telemetrytypes.FieldContextSpan, spec.GroupBy[0].TelemetryFieldKey.FieldContext)
}
}
},
},
{
name: "with filters and scope present",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "3000000000",
Service: "frontend",
Tags: []servicetypesv1.TagFilterItem{
{Key: "deployment.environment", Operator: "NotIn", StringValues: []string{"prod", "staging"}},
{Key: "http.method", Operator: "in", StringValues: []string{"GET"}},
},
Limit: 25,
},
assertQ: func(t *testing.T, qr *qbtypes.QueryRangeRequest) {
if assert.Equal(t, 1, len(qr.CompositeQuery.Queries)) {
qe := qr.CompositeQuery.Queries[0]
spec, ok := qe.Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
if !ok {
t.Fatalf("unexpected spec type: %T", qe.Spec)
}
assert.NotNil(t, spec.Filter)
expr := spec.Filter.Expression
assert.Contains(t, expr, "service.name IN $1")
assert.Contains(t, expr, "deployment.environment NOT IN $2")
assert.Contains(t, expr, "http.method IN $3")
assert.Contains(t, expr, "isRoot = true OR isEntryPoint = true")
if assert.Equal(t, 1, len(spec.GroupBy)) {
assert.Equal(t, "name", spec.GroupBy[0].TelemetryFieldKey.Name)
assert.Equal(t, telemetrytypes.FieldContextSpan, spec.GroupBy[0].TelemetryFieldKey.FieldContext)
}
if assert.Equal(t, 5, len(spec.Aggregations)) {
assert.Equal(t, "p50(duration_nano)", spec.Aggregations[0].Expression)
assert.Equal(t, "p50", spec.Aggregations[0].Alias)
assert.Equal(t, "p95(duration_nano)", spec.Aggregations[1].Expression)
assert.Equal(t, "p95", spec.Aggregations[1].Alias)
assert.Equal(t, "p99(duration_nano)", spec.Aggregations[2].Expression)
assert.Equal(t, "p99", spec.Aggregations[2].Alias)
assert.Equal(t, "count()", spec.Aggregations[3].Expression)
assert.Equal(t, "numCalls", spec.Aggregations[3].Alias)
assert.Equal(t, "countIf(status_code = 2)", spec.Aggregations[4].Expression)
assert.Equal(t, "errorCount", spec.Aggregations[4].Alias)
}
if assert.Equal(t, 1, len(spec.Order)) {
assert.Equal(t, "p99", spec.Order[0].Key.TelemetryFieldKey.Name)
assert.Equal(t, qbtypes.OrderDirectionDesc, spec.Order[0].Direction)
}
assert.Equal(t, 25, spec.Limit)
}
},
},
{
name: "missing service -> error",
req: servicetypesv1.OperationsRequest{Start: "1", End: "2"},
wantErr: "service is required",
},
{
name: "invalid start",
req: servicetypesv1.OperationsRequest{Start: "abc", End: "2", Service: "s"},
wantErr: "invalid start time",
},
{
name: "invalid end",
req: servicetypesv1.OperationsRequest{Start: "1", End: "abc", Service: "s"},
wantErr: "invalid end time",
},
{
name: "start not before end",
req: servicetypesv1.OperationsRequest{Start: "2", End: "2", Service: "s"},
wantErr: "start must be before end",
},
{
name: "invalid limit low",
req: servicetypesv1.OperationsRequest{Start: "1", End: "2", Service: "s", Limit: 0},
wantErr: "limit must be between 1 and 5000",
},
{
name: "invalid limit high",
req: servicetypesv1.OperationsRequest{Start: "1", End: "2", Service: "s", Limit: 5001},
wantErr: "limit must be between 1 and 5000",
},
{
name: "invalid tag in entry point ops -> error",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "cartservice",
Limit: 10,
Tags: []servicetypesv1.TagFilterItem{{Key: "", Operator: "notin", StringValues: []string{"x"}}},
},
wantErr: "key is required",
},
{
name: "invalid tag: notin but no values -> error (entry ops)",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "cartservice",
Limit: 10,
Tags: []servicetypesv1.TagFilterItem{{Key: "env", Operator: "notin"}},
},
wantErr: "at least one of stringValues, boolValues, or numberValues must be populated",
},
{
name: "valid tag in entry point ops -> ok",
req: servicetypesv1.OperationsRequest{
Start: "1000000000",
End: "2000000000",
Service: "cartservice",
Tags: []servicetypesv1.TagFilterItem{{Key: "deployment.environment", Operator: "notin", StringValues: []string{"prod"}}},
Limit: 10,
},
assertQ: func(t *testing.T, qr *qbtypes.QueryRangeRequest) {
spec := qr.CompositeQuery.Queries[0].Spec.(qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation])
assert.Contains(t, spec.Filter.Expression, "service.name IN $1")
assert.Contains(t, spec.Filter.Expression, "deployment.environment NOT IN $2")
assert.Contains(t, spec.Filter.Expression, "isRoot = true OR isEntryPoint = true")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
qr, err := m.buildEntryPointOpsQueryRangeRequest(&tt.req)
if tt.wantErr != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.wantErr)
return
}
assert.NoError(t, err)
if tt.assertQ != nil {
tt.assertQ(t, qr)
}
})
}
}
func TestMapEntryPointOpsQueryRangeResp(t *testing.T) {
m := &module{}
nameGroup := &qbtypes.ColumnDescriptor{
TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{Name: "name"},
Type: qbtypes.ColumnTypeGroup,
}
agg := func(idx int64) *qbtypes.ColumnDescriptor {
return &qbtypes.ColumnDescriptor{AggregationIndex: idx, Type: qbtypes.ColumnTypeAggregation}
}
tests := []struct {
name string
resp *qbtypes.QueryRangeResponse
want []servicetypesv1.OperationItem
}{
{
name: "empty results -> empty slice",
resp: &qbtypes.QueryRangeResponse{Type: qbtypes.RequestTypeScalar, Data: qbtypes.QueryData{Results: []any{}}},
want: []servicetypesv1.OperationItem{},
},
{
name: "non-scalar result -> empty slice",
resp: &qbtypes.QueryRangeResponse{Type: qbtypes.RequestTypeScalar, Data: qbtypes.QueryData{Results: []any{"x"}}},
want: []servicetypesv1.OperationItem{},
},
{
name: "single row maps correctly",
resp: &qbtypes.QueryRangeResponse{
Type: qbtypes.RequestTypeScalar,
Data: qbtypes.QueryData{Results: []any{&qbtypes.ScalarData{
QueryName: "A",
Columns: []*qbtypes.ColumnDescriptor{nameGroup, agg(0), agg(1), agg(2), agg(3), agg(4)},
Data: [][]any{{"op-entry", float64(5), float64(15), float64(25), uint64(12), uint64(1)}},
}}},
},
want: []servicetypesv1.OperationItem{{
Name: "op-entry",
P50: 5,
P95: 15,
P99: 25,
NumCalls: 12,
ErrorCount: 1,
}},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := m.mapEntryPointOpsQueryRangeResp(tt.resp)
assert.Equal(t, tt.want, got)
})
}
}


@@ -0,0 +1,25 @@
package services
import (
"context"
"net/http"
"time"
"github.com/SigNoz/signoz/pkg/types/servicetypes/servicetypesv1"
"github.com/SigNoz/signoz/pkg/valuer"
)
// Handler exposes the HTTP handlers for the services_qbv5 module
type Handler interface {
Get(http.ResponseWriter, *http.Request)
GetTopOperations(http.ResponseWriter, *http.Request)
GetEntryPointOperations(http.ResponseWriter, *http.Request)
}
// Module represents the services QBv5 module interface
type Module interface {
Get(ctx context.Context, orgID valuer.UUID, req *servicetypesv1.Request) ([]*servicetypesv1.ResponseItem, error)
FetchTopLevelOperations(ctx context.Context, start time.Time, services []string) (map[string][]string, error)
GetTopOperations(ctx context.Context, orgID valuer.UUID, req *servicetypesv1.OperationsRequest) ([]servicetypesv1.OperationItem, error)
GetEntryPointOperations(ctx context.Context, orgID valuer.UUID, req *servicetypesv1.OperationsRequest) ([]servicetypesv1.OperationItem, error)
}
