Compare commits

...

23 Commits

Author SHA1 Message Date
primus-bot[bot]
aaf0b597dc chore(release): bump to v0.79.1 (#7655)
#### Summary
 - Release SigNoz v0.79.1

 Created by [Primus-Bot](https://github.com/apps/primus-bot)

Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-17 00:02:05 +05:30
Prashant Shahi
19372c8194 ci(build): use unique cache key for the internal/public builds (#7654)
### Summary

- unique cache keys for the internal/public builds

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-16 23:37:05 +05:30
Vibhu Pandey
eb74adad44 test(integration): set the base for integration tests (#7606)
* test(integration): set the base for integration tests

* ci: add ci pipeline for integration test
2025-04-16 18:54:05 +05:30
Srikanth Chekuri
d5c04e1342 chore: log original query failed to transform (#7641) 2025-04-16 14:40:54 +05:30
primus-bot[bot]
2b9632c8fd chore(release): bump to v0.79.0 (#7643)
#### Summary
 - Release SigNoz v0.79.0
 - Bump SigNoz OTel Collector to v0.111.39

 Created by [Primus-Bot](https://github.com/apps/primus-bot)

Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-16 13:36:31 +05:30
Prashant Shahi
24920ae903 chore(prereleaser): update cron schedule - 6:30AM UTC (#7640)
### Summary

- update prereleaser cron schedule to 6:30 AM UTC

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-16 07:20:47 +00:00
Prashant Shahi
6f096632a2 chore(build-staging): only include telemetry tunnel FE envs (#7637)
### Summary

- only include telemetry tunnel FE environment variables for the staging build

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-16 12:38:30 +05:30
Piyush Singariya
a42eacec4b chore: enhancing JSON Parser handling (#7591)
* feat: enhancing JSON Parser handling

* fix: updating collector version

* chore: updating go.mod reference for Collector

---------

Co-authored-by: Nityananda Gohain <nityanandagohain@gmail.com>
2025-04-16 11:24:59 +05:30
Nityananda Gohain
e723399f7f fix: add check for empty services (#7611) 2025-04-15 16:39:33 +00:00
Nityananda Gohain
48936bed9b chore: multitenancy in integrations (#7507)
* chore: multitenancy in integrations

* chore: multitenancy in cloud integration accounts

* chore: changes to cloudintegrationservice

* chore: rename migration

* chore: update scan function

* chore: fix migration

* chore: fix struct

* chore: remove unwanted code

* chore: update scan function

* chore: migrate user and pat for integrations

* fix: changes to the user for integrations

* fix: address comments

* fix: copy created_at

* fix: update non revoked token

* chore: don't allow deleting pat and user for integrations

* fix: address comments

* chore: address comments

* chore: add checks for fk in dialect

* fix: service migration

* fix: don't update user if user is already migrated

* fix: update correct service config

* fix: remove unwanted code

* fix: remove migration for multiple same services which is not required

* fix: fix migration and disable dashboard if metrics disabled

* fix: don't use ee types

---------

Co-authored-by: Vikrant Gupta <vikrant@signoz.io>
2025-04-15 15:35:36 +00:00
Srikanth Chekuri
ee70474cc7 fix: missing receivers in json payload for legacy postableAlert (#7603) 2025-04-14 13:20:39 +00:00
Srikanth Chekuri
c3fa7144ee chore: add tag type filter support in attribute keys (#7522) 2025-04-14 18:43:15 +05:30
Nityananda Gohain
5dd02a5b8e fix: remove unnecessary code for email domain check error (#7566)
* fix: proper check for emailComponents

* fix: correct error handling
2025-04-14 11:15:21 +05:30
Srikanth Chekuri
c0f01e4cb9 chore: add metadatastore implementation for logs and traces (#7559)
* chore: add metadatastore implementation for logs and traces

* chore: use telemetrystore mock
2025-04-11 19:41:02 +05:30
Srikanth Chekuri
fed84cb50a chore: add condition builder attributes metadata (#7558) 2025-04-11 16:20:27 +05:30
Srikanth Chekuri
80545c4d07 chore: add materialized field extractor from table schema (#7557) 2025-04-11 15:53:55 +05:30
Srikanth Chekuri
0b1faec092 chore: add condition builder for span index v3 (#7556) 2025-04-11 15:13:04 +05:30
Srikanth Chekuri
ba6f31b1c3 chore: add virtual fields table (#7586) 2025-04-11 07:36:31 +05:30
Srikanth Chekuri
eed92978a4 chore: add non-json condition builder for logs v2 (#7555) 2025-04-10 18:23:01 +00:00
Prashant Shahi
41cbd316b5 Feat/staging (#7585)
### Summary

- Non-production build workflow using Primus
- Staging CD: new staging app and dev staging deployments
- clean up unused docker resources in stagingapp/testingapp machines

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-10 17:46:13 +05:30
Vishal Sharma
8d7d33393d feat: include tenant_url in event attributes for logging (#7582) 2025-04-10 15:17:14 +05:30
sawhil
8d143b44b1 feat: removed ff for tp-api-monitoring from fe - 1 2025-04-09 15:44:42 +05:30
sawhil
423aebd6eb feat: removed ff for tp-api-monitoring from fe 2025-04-09 15:44:42 +05:30
96 changed files with 6976 additions and 835 deletions


@@ -1,6 +1,7 @@
.git
.github
.vscode
.devenv
README.md
deploy
sample-apps


@@ -2,10 +2,9 @@ name: build-community
on:
push:
branches:
- main
tags:
- v*
- 'v[0-9]+.[0-9]+.[0-9]+'
- 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'
defaults:
run:
@@ -19,7 +18,6 @@ jobs:
prepare:
runs-on: ubuntu-latest
outputs:
docker_providers: ${{ steps.set-docker-providers.outputs.providers }}
version: ${{ steps.build-info.outputs.version }}
hash: ${{ steps.build-info.outputs.hash }}
time: ${{ steps.build-info.outputs.time }}
@@ -38,7 +36,7 @@ jobs:
uses: actions/checkout@v4
with:
repository: signoz/primus
ref: ${{ inputs.PRIMUS_REF }}
ref: main
path: .primus
token: ${{ steps.token.outputs.token }}
- name: build-info
@@ -47,14 +45,6 @@ jobs:
echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT
- name: set-docker-providers
id: set-docker-providers
run: |
if [[ ${{ github.event.ref }} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+$ || ${{ github.event.ref }} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+-rc\.[0-9]+$ ]]; then
echo "providers=dockerhub gcp" >> $GITHUB_OUTPUT
else
echo "providers=gcp" >> $GITHUB_OUTPUT
fi
js-build:
uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
needs: prepare
@@ -88,4 +78,4 @@ jobs:
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./pkg/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: ${{ needs.prepare.outputs.docker_providers }}
DOCKER_PROVIDERS: dockerhub
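The step removed above selected Docker providers by matching the pushed ref against release-tag patterns. A standalone sketch of that check (example ref assumed), folding the two expressions into one optional group:

```bash
ref="refs/tags/v0.79.1"   # assumed example; CI reads github.event.ref
if [[ ${ref} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+(-rc\.[0-9]+)?$ ]]; then
  echo "providers=dockerhub gcp"   # stable or rc tag: publish to both
else
  echo "providers=gcp"             # anything else: internal registry only
fi
```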


@@ -2,8 +2,6 @@ name: build-enterprise
on:
push:
branches:
- main
tags:
- v*
@@ -38,7 +36,7 @@ jobs:
uses: actions/checkout@v4
with:
repository: signoz/primus
ref: ${{ inputs.PRIMUS_REF }}
ref: main
path: .primus
token: ${{ steps.token.outputs.token }}
- name: build-info
@@ -86,7 +84,7 @@ jobs:
JS_INPUT_ARTIFACT_CACHE_KEY: enterprise-dotenv-${{ github.sha }}
JS_INPUT_ARTIFACT_PATH: frontend/.env
JS_OUTPUT_ARTIFACT_CACHE_KEY: enterprise-jsbuild-${{ github.sha }}
JS_OUTPUT_ARTIFACT_PATH: frontend/build
DOCKER_BUILD: false
DOCKER_MANIFEST: false
go-build:

.github/workflows/build-staging.yaml (new file)

@@ -0,0 +1,122 @@
name: build-staging
on:
push:
branches:
- main
pull_request:
types: [labeled]
defaults:
run:
shell: bash
env:
PRIMUS_HOME: .primus
MAKE: make --no-print-directory --makefile=.primus/src/make/main.mk
jobs:
prepare:
runs-on: ubuntu-latest
if: ${{ contains(github.event.label.name, 'staging:') || github.event.ref == 'refs/heads/main' }}
outputs:
version: ${{ steps.build-info.outputs.version }}
hash: ${{ steps.build-info.outputs.hash }}
time: ${{ steps.build-info.outputs.time }}
branch: ${{ steps.build-info.outputs.branch }}
deployment: ${{ steps.build-info.outputs.deployment }}
steps:
- name: self-checkout
uses: actions/checkout@v4
- id: token
name: github-token-gen
uses: actions/create-github-app-token@v1
with:
app-id: ${{ secrets.PRIMUS_APP_ID }}
private-key: ${{ secrets.PRIMUS_PRIVATE_KEY }}
owner: ${{ github.repository_owner }}
- name: primus-checkout
uses: actions/checkout@v4
with:
repository: signoz/primus
ref: main
path: .primus
token: ${{ steps.token.outputs.token }}
- name: build-info
id: build-info
run: |
echo "version=$($MAKE info-version)" >> $GITHUB_OUTPUT
echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT
staging_label="${{ github.event.label.name }}"
if [[ "${staging_label}" == "staging:"* ]]; then
deployment=${staging_label#"staging:"}
elif [[ "${{ github.event.ref }}" == "refs/heads/main" ]]; then
deployment="staging"
else
echo "error: not able to determine deployment - please verify the PR label or the branch"
exit 1
fi
echo "deployment=${deployment}" >> $GITHUB_OUTPUT
- name: create-dotenv
run: |
mkdir -p frontend
echo 'CI=1' > frontend/.env
echo 'TUNNEL_URL=https://telemetry.staging.signoz.cloud/tunnel' >> frontend/.env
echo 'TUNNEL_DOMAIN=https://telemetry.staging.signoz.cloud' >> frontend/.env
- name: cache-dotenv
uses: actions/cache@v4
with:
path: frontend/.env
key: staging-dotenv-${{ github.sha }}
js-build:
uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
needs: prepare
secrets: inherit
with:
PRIMUS_REF: main
JS_SRC: frontend
JS_INPUT_ARTIFACT_CACHE_KEY: staging-dotenv-${{ github.sha }}
JS_INPUT_ARTIFACT_PATH: frontend/.env
JS_OUTPUT_ARTIFACT_CACHE_KEY: staging-jsbuild-${{ github.sha }}
JS_OUTPUT_ARTIFACT_PATH: frontend/build
DOCKER_BUILD: false
DOCKER_MANIFEST: false
go-build:
uses: signoz/primus.workflows/.github/workflows/go-build.yaml@main
needs: [prepare, js-build]
secrets: inherit
with:
PRIMUS_REF: main
GO_INPUT_ARTIFACT_CACHE_KEY: staging-jsbuild-${{ github.sha }}
GO_INPUT_ARTIFACT_PATH: frontend/build
GO_BUILD_CONTEXT: ./ee/query-service
GO_BUILD_FLAGS: >-
-tags timetzdata
-ldflags='-linkmode external -extldflags \"-static\" -s -w
-X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
-X github.com/SigNoz/signoz/pkg/version.variant=enterprise
-X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
-X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
-X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=https://api.staging.signoz.cloud
-X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.staging.signoz.cloud/api/v1'
GO_CGO_ENABLED: 1
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./ee/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: gcp
staging:
if: ${{ contains(github.event.label.name, 'staging:') || github.event.ref == 'refs/heads/main' }}
uses: signoz/primus.workflows/.github/workflows/github-trigger.yaml@main
secrets: inherit
needs: [prepare, go-build]
with:
PRIMUS_REF: main
GITHUB_ENVIRONMENT: staging
GITHUB_SILENT: true
GITHUB_REPOSITORY_NAME: charts-saas-v3-staging
GITHUB_EVENT_NAME: releaser
GITHUB_EVENT_PAYLOAD: "{\"deployment\": \"${{ needs.prepare.outputs.deployment }}\", \"signoz_version\": \"${{ needs.prepare.outputs.version }}\"}"
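The deployment name above comes from bash prefix stripping on the PR label. A minimal sketch of that selection, runnable on its own (label value assumed):

```bash
staging_label="staging:dev-example"        # assumed label; CI reads github.event.label.name
if [[ "${staging_label}" == "staging:"* ]]; then
  deployment=${staging_label#"staging:"}   # strip the prefix -> "dev-example"
elif [[ "${GITHUB_REF:-}" == "refs/heads/main" ]]; then
  deployment="staging"                     # pushes to main use the default environment
else
  echo "error: not able to determine deployment" >&2; exit 1
fi
echo "${deployment}"
```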

.github/workflows/integrationci.yaml (new file)

@@ -0,0 +1,55 @@
name: integrationci
on:
pull_request:
types:
- labeled
pull_request_target:
types:
- labeled
jobs:
test:
strategy:
fail-fast: false
matrix:
src:
- bootstrap
sqlstore-provider:
- postgres
- sqlite
clickhouse-version:
- 24.1.2-alpine
- 24.12-alpine
schema-migrator-version:
- v0.111.38
postgres-version:
- 15
if: |
((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-integrate')
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v4
- name: python
uses: actions/setup-python@v5
with:
python-version: 3.13
- name: poetry
run: |
python -m pip install poetry==2.1.2
python -m poetry config virtualenvs.in-project true
cd tests/integration && poetry install --no-root
- name: run
run: |
cd tests/integration && \
poetry run pytest -ra \
--basetemp=./tmp/ \
-vv \
--capture=no \
src/${{matrix.src}} \
--sqlstore-provider ${{matrix.sqlstore-provider}} \
--postgres-version ${{matrix.postgres-version}} \
--clickhouse-version ${{matrix.clickhouse-version}} \
--schema-migrator-version ${{matrix.schema-migrator-version}}
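The matrix fans out over sqlstore providers and ClickHouse versions; outside CI, the same entrypoint can be run directly. A sketch for one matrix cell, assuming the tests/integration layout introduced in this PR:

```bash
cd tests/integration
python -m pip install poetry==2.1.2
poetry config virtualenvs.in-project true
poetry install --no-root
poetry run pytest -ra --basetemp=./tmp/ -vv --capture=no \
  src/bootstrap \
  --sqlstore-provider sqlite \
  --postgres-version 15 \
  --clickhouse-version 24.12-alpine \
  --schema-migrator-version v0.111.38
```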


@@ -1,9 +1,9 @@
name: prereleaser
on:
# schedule every wednesday 9:30 AM UTC (3pm IST)
# schedule every wednesday 6:30 AM UTC (12:00 PM IST)
schedule:
- cron: '30 9 * * 3'
- cron: '30 6 * * 3'
# allow manual triggering of the workflow by a maintainer
workflow_dispatch:
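Reading the five cron fields (minute, hour, day-of-month, month, day-of-week), the new schedule is:

```bash
# 30 6 * * 3
# │  │ │ │ └─ day-of-week 3 = Wednesday
# │  │ │ └─── every month
# │  │ └───── every day-of-month
# │  └─────── hour 6 (UTC)
# └────────── minute 30  → Wednesdays at 06:30 UTC (12:00 PM IST)
```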


@@ -36,12 +36,17 @@ jobs:
echo "GITHUB_BRANCH: ${GITHUB_BRANCH}"
echo "GITHUB_SHA: ${GITHUB_SHA}"
export VERSION="${GITHUB_SHA:0:7}" # needed for child process to access it
export OTELCOL_TAG="main"
export PATH="/usr/local/go/bin/:$PATH" # needed for Golang to work
export KAFKA_SPAN_EVAL="true"
docker system prune --force
docker pull signoz/signoz-otel-collector:main
docker pull signoz/signoz-schema-migrator:main
docker system prune --force --all
OTELCOL_TAG=$(curl -s https://api.github.com/repos/SigNoz/signoz-otel-collector/releases/latest | jq -r '.tag_name // "not-found"')
if [[ "${OTELCOL_TAG}" == "not-found" ]]; then
echo "warning: unable to determine latest SigNoz OtelCollector release tag, skipping latest otelcol deployment"
else
export OTELCOL_TAG=${OTELCOL_TAG}
docker pull signoz/signoz-otel-collector:${OTELCOL_TAG}
docker pull signoz/signoz-schema-migrator:${OTELCOL_TAG}
fi
cd ~/signoz
git status
git add .
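The `// "not-found"` in the jq filter is jq's alternative operator: it yields the right-hand value when `.tag_name` is null or missing, which is what the GitHub API returns on error payloads. A quick check:

```bash
echo '{"tag_name":"v0.111.39"}' | jq -r '.tag_name // "not-found"'   # v0.111.39
echo '{"message":"Not Found"}'  | jq -r '.tag_name // "not-found"'   # not-found
```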


@@ -38,7 +38,7 @@ jobs:
export VERSION="${GITHUB_SHA:0:7}" # needed for child process to access it
export DEV_BUILD="1"
export PATH="/usr/local/go/bin/:$PATH" # needed for Golang to work
docker system prune --force
docker system prune --force --all
cd ~/signoz
git status
git add .

.gitignore

@@ -80,6 +80,153 @@ deploy/common/clickhouse/user_scripts/
queries.active
# tmp
**/tmp/**
# .devenv tmp files
.devenv/**/tmp/**
.qodo
### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
### Python Patch ###
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml
# ruff
.ruff_cache/
# LSP config files
pyrightconfig.json
# End of https://www.toptal.com/developers/gitignore/api/python


@@ -10,7 +10,7 @@ COMMIT_SHORT_SHA ?= $(shell git rev-parse --short HEAD)
BRANCH_NAME ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD))
VERSION ?= $(BRANCH_NAME)-$(COMMIT_SHORT_SHA)
TIMESTAMP ?= $(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
ARCHS = amd64 arm64
ARCHS ?= amd64 arm64
TARGET_DIR ?= $(shell pwd)/target
ZEUS_URL ?= https://api.signoz.cloud
@@ -23,6 +23,7 @@ GO_BUILD_ARCHS_COMMUNITY = $(addprefix go-build-community-,$(ARCHS))
GO_BUILD_CONTEXT_COMMUNITY = $(SRC)/pkg/query-service
GO_BUILD_LDFLAGS_COMMUNITY = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=community
GO_BUILD_ARCHS_ENTERPRISE = $(addprefix go-build-enterprise-,$(ARCHS))
GO_BUILD_ARCHS_ENTERPRISE_RACE = $(addprefix go-build-enterprise-race-,$(ARCHS))
GO_BUILD_CONTEXT_ENTERPRISE = $(SRC)/ee/query-service
GO_BUILD_LDFLAGS_ENTERPRISE = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=enterprise $(GO_BUILD_LDFLAG_ZEUS_URL) $(GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO)
@@ -119,6 +120,18 @@ $(GO_BUILD_ARCHS_ENTERPRISE): go-build-enterprise-%: $(TARGET_DIR)
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
fi
.PHONY: go-build-enterprise-race $(GO_BUILD_ARCHS_ENTERPRISE_RACE)
go-build-enterprise-race: ## Builds the go backend server for enterprise with race
go-build-enterprise-race: $(GO_BUILD_ARCHS_ENTERPRISE_RACE)
$(GO_BUILD_ARCHS_ENTERPRISE_RACE): go-build-enterprise-race-%: $(TARGET_DIR)
@mkdir -p $(TARGET_DIR)/$(OS)-$*
@echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)"
@if [ $* = "arm64" ]; then \
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
else \
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
fi
##############################################################
# js commands
##############################################################
@@ -167,3 +180,20 @@ docker-buildx-enterprise: go-build-enterprise js-build
--platform linux/arm64,linux/amd64 \
--push \
--tag $(DOCKER_REGISTRY_ENTERPRISE):$(VERSION) $(SRC)
##############################################################
# python commands
##############################################################
.PHONY: py-fmt
py-fmt: ## Run black for integration tests
@cd tests/integration && poetry run black .
.PHONY: py-lint
py-lint: ## Run lint for integration tests
@cd tests/integration && poetry run isort .
@cd tests/integration && poetry run autoflake .
@cd tests/integration && poetry run pylint .
.PHONY: py-test
py-test: ## Runs integration tests
@cd tests/integration && poetry run pytest --basetemp=./tmp/ -vv --capture=no src/
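With these targets, the integration-test tooling is driven through make; assuming the poetry environment under tests/integration is installed:

```bash
make py-fmt    # format with black
make py-lint   # isort, autoflake, then pylint
make py-test   # pytest over tests/integration/src/
```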


@@ -174,7 +174,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.78.1
image: signoz/signoz:v0.79.1
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
@@ -208,7 +208,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:v0.111.38
image: signoz/signoz-otel-collector:v0.111.39
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
@@ -232,7 +232,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:v0.111.38
image: signoz/signoz-schema-migrator:v0.111.39
deploy:
restart_policy:
condition: on-failure


@@ -110,7 +110,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.78.1
image: signoz/signoz:v0.79.1
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
@@ -143,7 +143,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:v0.111.38
image: signoz/signoz-otel-collector:v0.111.39
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
@@ -167,7 +167,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:v0.111.38
image: signoz/signoz-schema-migrator:v0.111.39
deploy:
restart_policy:
condition: on-failure


@@ -177,7 +177,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.78.1}
image: signoz/signoz:${VERSION:-v0.79.1}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
@@ -212,7 +212,7 @@ services:
# TODO: support otel-collector multiple replicas. Nginx/Traefik for loadbalancing?
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.39}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -238,7 +238,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-sync
command:
- sync
@@ -249,7 +249,7 @@ services:
condition: service_healthy
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-async
command:
- async


@@ -110,7 +110,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.78.1}
image: signoz/signoz:${VERSION:-v0.79.1}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
@@ -146,7 +146,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.39}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -168,7 +168,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-sync
command:
- sync
@@ -180,7 +180,7 @@ services:
restart: on-failure
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-async
command:
- async


@@ -110,7 +110,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.78.1}
image: signoz/signoz:${VERSION:-v0.79.1}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
@@ -144,7 +144,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.39}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -166,7 +166,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-sync
command:
- sync
@@ -178,7 +178,7 @@ services:
restart: on-failure
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.38}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-async
command:
- async


@@ -0,0 +1,36 @@
FROM golang:1.22-bullseye
ARG OS="linux"
ARG TARGETARCH
ARG ZEUSURL
# This path is important for stacktraces
WORKDIR $GOPATH/src/github.com/signoz/signoz
WORKDIR /root
RUN set -eux; \
apt-get update; \
apt-get install -y --no-install-recommends \
g++ \
gcc \
libc6-dev \
make \
pkg-config \
; \
rm -rf /var/lib/apt/lists/*
COPY go.mod go.sum ./
RUN go mod download
COPY ./ee/ ./ee/
COPY ./pkg/ ./pkg/
COPY ./templates/email /root/templates
COPY Makefile Makefile
RUN TARGET_DIR=/root ARCHS=${TARGETARCH} ZEUS_URL=${ZEUSURL} LICENSE_URL=${ZEUSURL}/api/v1 make go-build-enterprise-race
RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["/root/signoz"]
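This Dockerfile feeds TARGET_DIR, ARCHS, and ZEUS_URL into the new go-build-enterprise-race target. A hypothetical local build follows; the Dockerfile path and image tag are assumptions for illustration, not taken from this diff:

```bash
# Dockerfile path and tag are assumed, not from this compare view.
docker build \
  --build-arg TARGETARCH=amd64 \
  --build-arg ZEUSURL=https://api.staging.signoz.cloud \
  -f ee/query-service/Dockerfile.race \
  -t signoz-enterprise-race:dev .
```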


@@ -153,9 +153,11 @@ func (ah *APIHandler) getOrCreateCloudIntegrationPAT(ctx context.Context, orgId
func (ah *APIHandler) getOrCreateCloudIntegrationUser(
ctx context.Context, orgId string, cloudProvider string,
) (*types.User, *basemodel.ApiError) {
cloudIntegrationUserId := fmt.Sprintf("%s-integration", cloudProvider)
cloudIntegrationUser := fmt.Sprintf("%s-integration", cloudProvider)
email := fmt.Sprintf("%s@signoz.io", cloudIntegrationUser)
integrationUserResult, apiErr := ah.AppDao().GetUser(ctx, cloudIntegrationUserId)
// TODO(nitya): there should be orgId here
integrationUserResult, apiErr := ah.AppDao().GetUserByEmail(ctx, email)
if apiErr != nil {
return nil, basemodel.WrapApiError(apiErr, "couldn't look for integration user")
}
@@ -170,9 +172,9 @@ func (ah *APIHandler) getOrCreateCloudIntegrationUser(
)
newUser := &types.User{
ID: cloudIntegrationUserId,
Name: fmt.Sprintf("%s integration", cloudProvider),
Email: fmt.Sprintf("%s@signoz.io", cloudIntegrationUserId),
ID: uuid.New().String(),
Name: cloudIntegrationUser,
Email: email,
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
},


@@ -5,16 +5,18 @@ import (
"encoding/json"
"fmt"
"net/http"
"slices"
"time"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/ee/types"
eeTypes "github.com/SigNoz/signoz/ee/types"
"github.com/SigNoz/signoz/pkg/errors"
errorsV2 "github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/query-service/auth"
baseconstants "github.com/SigNoz/signoz/pkg/query-service/constants"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
"go.uber.org/zap"
@@ -58,7 +60,7 @@ func (ah *APIHandler) createPAT(w http.ResponseWriter, r *http.Request) {
ah.Respond(w, &pat)
}
func validatePATRequest(req types.GettablePAT) error {
func validatePATRequest(req eeTypes.GettablePAT) error {
if req.Role == "" || (req.Role != baseconstants.ViewerGroup && req.Role != baseconstants.EditorGroup && req.Role != baseconstants.AdminGroup) {
return fmt.Errorf("valid role is required")
}
@@ -74,12 +76,19 @@ func validatePATRequest(req types.GettablePAT) error {
func (ah *APIHandler) updatePAT(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
req := types.GettablePAT{}
req := eeTypes.GettablePAT{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
idStr := mux.Vars(r)["id"]
id, err := valuer.NewUUID(idStr)
if err != nil {
render.Error(w, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is not a valid uuid-v7"))
return
}
user, err := auth.GetUserFromReqContext(r.Context())
if err != nil {
RespondError(w, &model.ApiError{
@@ -89,6 +98,25 @@ func (ah *APIHandler) updatePAT(w http.ResponseWriter, r *http.Request) {
return
}
//get the pat
existingPAT, paterr := ah.AppDao().GetPATByID(ctx, user.OrgID, id)
if paterr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, paterr.Error()))
return
}
// get the user
createdByUser, usererr := ah.AppDao().GetUser(ctx, existingPAT.UserID)
if usererr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, usererr.Error()))
return
}
if slices.Contains(types.AllIntegrationUserEmails, types.IntegrationUserEmail(createdByUser.Email)) {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "integration user pat cannot be updated"))
return
}
err = validatePATRequest(req)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
@@ -96,12 +124,6 @@ func (ah *APIHandler) updatePAT(w http.ResponseWriter, r *http.Request) {
}
req.UpdatedByUserID = user.ID
idStr := mux.Vars(r)["id"]
id, err := valuer.NewUUID(idStr)
if err != nil {
render.Error(w, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is not a valid uuid-v7"))
return
}
req.UpdatedAt = time.Now()
zap.L().Info("Got Update PAT request", zap.Any("pat", req))
var apierr basemodel.BaseApiError
@@ -149,6 +171,25 @@ func (ah *APIHandler) revokePAT(w http.ResponseWriter, r *http.Request) {
return
}
//get the pat
existingPAT, paterr := ah.AppDao().GetPATByID(ctx, user.OrgID, id)
if paterr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, paterr.Error()))
return
}
// get the user
createdByUser, usererr := ah.AppDao().GetUser(ctx, existingPAT.UserID)
if usererr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, usererr.Error()))
return
}
if slices.Contains(types.AllIntegrationUserEmails, types.IntegrationUserEmail(createdByUser.Email)) {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "integration user pat cannot be updated"))
return
}
zap.L().Info("Revoke PAT with id", zap.String("id", id.StringValue()))
if apierr := ah.AppDao().RevokePAT(ctx, user.OrgID, id, user.ID); apierr != nil {
RespondError(w, apierr, nil)


@@ -8,7 +8,6 @@ import (
basedao "github.com/SigNoz/signoz/pkg/query-service/dao"
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
ossTypes "github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/google/uuid"
@@ -40,7 +39,6 @@ type ModelDao interface {
UpdatePAT(ctx context.Context, orgID string, p types.GettablePAT, id valuer.UUID) basemodel.BaseApiError
GetPAT(ctx context.Context, pat string) (*types.GettablePAT, basemodel.BaseApiError)
GetPATByID(ctx context.Context, orgID string, id valuer.UUID) (*types.GettablePAT, basemodel.BaseApiError)
GetUserByPAT(ctx context.Context, orgID string, token string) (*ossTypes.GettableUser, basemodel.BaseApiError)
ListPATs(ctx context.Context, orgID string) ([]types.GettablePAT, basemodel.BaseApiError)
RevokePAT(ctx context.Context, orgID string, id valuer.UUID, userID string) basemodel.BaseApiError
}


@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"net/url"
"strings"
"time"
"github.com/SigNoz/signoz/ee/query-service/constants"
@@ -44,7 +43,7 @@ func (m *modelDao) createUserForSAMLRequest(ctx context.Context, email string) (
}
user := &types.User{
ID: uuid.NewString(),
ID: uuid.New().String(),
Name: "",
Email: email,
Password: hash,
@@ -162,12 +161,7 @@ func (m *modelDao) PrecheckLogin(ctx context.Context, email, sourceUrl string) (
// find domain from email
orgDomain, apierr := m.GetDomainByEmail(ctx, email)
if apierr != nil {
var emailDomain string
emailComponents := strings.Split(email, "@")
if len(emailComponents) > 0 {
emailDomain = emailComponents[1]
}
zap.L().Error("failed to get org domain from email", zap.String("emailDomain", emailDomain), zap.Error(apierr.ToError()))
zap.L().Error("failed to get org domain from email", zap.String("email", email), zap.Error(apierr.ToError()))
return resp, apierr
}


@@ -196,27 +196,3 @@ func (m *modelDao) GetPATByID(ctx context.Context, orgID string, id valuer.UUID)
return &patWithUser, nil
}
// deprecated
func (m *modelDao) GetUserByPAT(ctx context.Context, orgID string, token string) (*ossTypes.GettableUser, basemodel.BaseApiError) {
users := []ossTypes.GettableUser{}
if err := m.DB().NewSelect().
Model(&users).
Column("u.id", "u.name", "u.email", "u.password", "u.created_at", "u.profile_picture_url", "u.org_id", "u.group_id").
Join("JOIN personal_access_tokens p ON u.id = p.user_id").
Where("p.token = ?", token).
Where("p.expires_at >= strftime('%s', 'now')").
Where("p.org_id = ?", orgID).
Scan(ctx); err != nil {
return nil, model.InternalError(fmt.Errorf("failed to fetch user from PAT, err: %v", err))
}
if len(users) != 1 {
return nil, &model.ApiError{
Typ: model.ErrorInternal,
Err: fmt.Errorf("found zero or multiple users with same PAT token"),
}
}
return &users[0], nil
}


@@ -17,13 +17,15 @@ var (
)
var (
Org = "org"
User = "user"
Org = "org"
User = "user"
CloudIntegration = "cloud_integration"
)
var (
OrgReference = `("org_id") REFERENCES "organizations" ("id")`
UserReference = `("user_id") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE`
OrgReference = `("org_id") REFERENCES "organizations" ("id")`
UserReference = `("user_id") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE`
CloudIntegrationReference = `("cloud_integration_id") REFERENCES "cloud_integration" ("id") ON DELETE CASCADE`
)
type dialect struct {
@@ -211,6 +213,8 @@ func (dialect *dialect) RenameTableAndModifyModel(ctx context.Context, bun bun.I
fkReferences = append(fkReferences, OrgReference)
} else if reference == User && !slices.Contains(fkReferences, UserReference) {
fkReferences = append(fkReferences, UserReference)
} else if reference == CloudIntegration && !slices.Contains(fkReferences, CloudIntegrationReference) {
fkReferences = append(fkReferences, CloudIntegrationReference)
}
}


@@ -11,9 +11,12 @@ const logEvent = async (
rateLimited?: boolean,
): Promise<SuccessResponse<EventSuccessPayloadProps> | ErrorResponse> => {
try {
// add tenant_url to attributes
const { hostname } = window.location;
const updatedAttributes = { ...attributes, tenant_url: hostname };
const response = await axios.post('/event', {
eventName,
attributes,
attributes: updatedAttributes,
eventType: eventType || 'track',
rateLimited: rateLimited || false, // TODO: Update this once we have a proper way to handle rate limiting
});


@@ -8,6 +8,5 @@ export enum FeatureKeys {
PREMIUM_SUPPORT = 'PREMIUM_SUPPORT',
ANOMALY_DETECTION = 'ANOMALY_DETECTION',
ONBOARDING_V3 = 'ONBOARDING_V3',
THIRD_PARTY_API = 'THIRD_PARTY_API',
TRACE_FUNNELS = 'TRACE_FUNNELS',
}


@@ -284,16 +284,6 @@ function SideNav(): JSX.Element {
manageLicenseMenuItem,
];
const isApiMonitoringEnabled = featureFlags?.find(
(flag) => flag.name === FeatureKeys.THIRD_PARTY_API,
)?.active;
if (!isApiMonitoringEnabled) {
updatedMenuItems = updatedMenuItems.filter(
(item) => item.key !== ROUTES.API_MONITORING,
);
}
if (isCloudUser || isEnterpriseSelfHostedUser) {
const isOnboardingEnabled =
featureFlags?.find((feature) => feature.name === FeatureKeys.ONBOARDING)

go.mod

@@ -10,7 +10,8 @@ require (
github.com/ClickHouse/clickhouse-go/v2 v2.30.0
github.com/DATA-DOG/go-sqlmock v1.5.2
github.com/SigNoz/govaluate v0.0.0-20240203125216-988004ccc7fd
github.com/SigNoz/signoz-otel-collector v0.111.16
github.com/SigNoz/signoz-otel-collector v0.111.39
github.com/antlr4-go/antlr/v4 v4.13.1
github.com/antonmedv/expr v1.15.3
github.com/cespare/xxhash/v2 v2.3.0
github.com/coreos/go-oidc/v3 v3.11.0
@@ -89,10 +90,9 @@ require (
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/ClickHouse/ch-go v0.61.5 // indirect
github.com/ClickHouse/ch-go v0.63.1 // indirect
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b // indirect
github.com/andybalholm/brotli v1.1.1 // indirect
github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/aws/aws-sdk-go v1.55.5 // indirect

go.sum

@@ -85,8 +85,8 @@ github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mx
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/ClickHouse/ch-go v0.61.5 h1:zwR8QbYI0tsMiEcze/uIMK+Tz1D3XZXLdNrlaOpeEI4=
github.com/ClickHouse/ch-go v0.61.5/go.mod h1:s1LJW/F/LcFs5HJnuogFMta50kKDO0lf9zzfrbl0RQg=
github.com/ClickHouse/ch-go v0.63.1 h1:s2JyZvWLTCSAGdtjMBBmAgQQHMco6pawLJMOXi0FODM=
github.com/ClickHouse/ch-go v0.63.1/go.mod h1:I1kJJCL3WJcBMGe1m+HVK0+nREaG+JOYYBWjrDrF3R0=
github.com/ClickHouse/clickhouse-go/v2 v2.30.0 h1:AG4D/hW39qa58+JHQIFOSnxyL46H6h2lrmGGk17dhFo=
github.com/ClickHouse/clickhouse-go/v2 v2.30.0/go.mod h1:i9ZQAojcayW3RsdCb3YR+n+wC2h65eJsZCscZ1Z1wyo=
github.com/Code-Hex/go-generics-cache v1.5.1 h1:6vhZGc5M7Y/YD8cIUcY8kcuQLB4cHR7U+0KMqAA0KcU=
@@ -100,8 +100,10 @@ github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/SigNoz/govaluate v0.0.0-20240203125216-988004ccc7fd h1:Bk43AsDYe0fhkbj57eGXx8H3ZJ4zhmQXBnrW523ktj8=
github.com/SigNoz/govaluate v0.0.0-20240203125216-988004ccc7fd/go.mod h1:nxRcH/OEdM8QxzH37xkGzomr1O0JpYBRS6pwjsWW6Pc=
github.com/SigNoz/signoz-otel-collector v0.111.16 h1:535uKH5Oux+35EsI+L3C6pnAP/Ye0PTCbVizXoL+VqE=
github.com/SigNoz/signoz-otel-collector v0.111.16/go.mod h1:HJ4m0LY1MPsuZmuRF7Ixb+bY8rxgRzI0VXzOedESsjg=
github.com/SigNoz/signoz-otel-collector v0.111.39-beta.1 h1:ZpSNrOZBOH2iCJIPeER5X0mfxOe64yP3JRX7FzBNfwY=
github.com/SigNoz/signoz-otel-collector v0.111.39-beta.1/go.mod h1:DCu/D+lqhsPNSGS4IMD+4gn7q06TGzOCKazSy+GURVc=
github.com/SigNoz/signoz-otel-collector v0.111.39 h1:Dl8QqZNAsj2atxP572OzsszPK0XPpd3LLPNPRAUJ5wo=
github.com/SigNoz/signoz-otel-collector v0.111.39/go.mod h1:DCu/D+lqhsPNSGS4IMD+4gn7q06TGzOCKazSy+GURVc=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
@@ -820,8 +822,8 @@ github.com/prometheus/prometheus v0.300.1/go.mod h1:gtTPY/XVyCdqqnjA3NzDMb0/nc5H
github.com/puzpuzpuz/xsync/v3 v3.5.0 h1:i+cMcpEDY1BkNm7lPDkCtE4oElsYLn+EKF8kAu2vXT4=
github.com/puzpuzpuz/xsync/v3 v3.5.0/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.6.1 h1:HHDteefn6ZkTtY5fGUE8tj8uy85AHk6zP7CpzIAM0y4=
github.com/redis/go-redis/v9 v9.6.1/go.mod h1:0C0c6ycQsdpVNQpxb1njEQIqkx5UcsM8FJCQLgE9+RA=
github.com/redis/go-redis/v9 v9.6.3 h1:8Dr5ygF1QFXRxIH/m3Xg9MMG1rS8YCtAgosrsewT6i0=
github.com/redis/go-redis/v9 v9.6.3/go.mod h1:0C0c6ycQsdpVNQpxb1njEQIqkx5UcsM8FJCQLgE9+RA=
github.com/rhnvrm/simples3 v0.6.1/go.mod h1:Y+3vYm2V7Y4VijFoJHHTrja6OgPrJ2cBti8dPGkC3sA=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=


@@ -25,6 +25,25 @@ type postableAlert struct {
Receivers []string `json:"receivers"`
}
func (pa *postableAlert) MarshalJSON() ([]byte, error) {
// Marshal the embedded PostableAlert to get its JSON representation.
alertJSON, err := json.Marshal(pa.PostableAlert)
if err != nil {
return nil, err
}
// Unmarshal that JSON into a map so we can add extra fields.
var m map[string]interface{}
if err := json.Unmarshal(alertJSON, &m); err != nil {
return nil, err
}
// Add the Receivers field.
m["receivers"] = pa.Receivers
return json.Marshal(m)
}
const (
alertsPath string = "/v1/alerts"
routesPath string = "/v1/routes"


@@ -0,0 +1,35 @@
package legacyalertmanager
import (
"encoding/json"
"testing"
"github.com/SigNoz/signoz/pkg/types/alertmanagertypes"
"github.com/prometheus/alertmanager/api/v2/models"
"github.com/stretchr/testify/assert"
)
func TestProvider_TestAlert(t *testing.T) {
pa := &postableAlert{
PostableAlert: &alertmanagertypes.PostableAlert{
Alert: models.Alert{
Labels: models.LabelSet{
"alertname": "test",
},
GeneratorURL: "http://localhost:9090/graph?g0.expr=up&g0.tab=1",
},
Annotations: models.LabelSet{
"summary": "test",
},
},
Receivers: []string{"receiver1", "receiver2"},
}
body, err := json.Marshal(pa)
if err != nil {
t.Fatalf("failed to marshal postable alert: %v", err)
}
assert.Contains(t, string(body), "receiver1")
assert.Contains(t, string(body), "receiver2")
}


@@ -3928,11 +3928,16 @@ func (r *ClickHouseReader) GetLogAttributeKeys(ctx context.Context, req *v3.Filt
var rows driver.Rows
var response v3.FilterAttributeKeyResponse
tagTypeFilter := `tag_type != 'logfield'`
if req.TagType != "" {
tagTypeFilter = fmt.Sprintf(`tag_type != 'logfield' and tag_type = '%s'`, req.TagType)
}
if len(req.SearchText) != 0 {
query = fmt.Sprintf("select distinct tag_key, tag_type, tag_data_type from %s.%s where tag_type != 'logfield' and tag_key ILIKE $1 limit $2", r.logsDB, r.logsTagAttributeTableV2)
query = fmt.Sprintf("select distinct tag_key, tag_type, tag_data_type from %s.%s where %s and tag_key ILIKE $1 limit $2", r.logsDB, r.logsTagAttributeTableV2, tagTypeFilter)
rows, err = r.db.Query(ctx, query, fmt.Sprintf("%%%s%%", req.SearchText), req.Limit)
} else {
query = fmt.Sprintf("select distinct tag_key, tag_type, tag_data_type from %s.%s where tag_type != 'logfield' limit $1", r.logsDB, r.logsTagAttributeTableV2)
query = fmt.Sprintf("select distinct tag_key, tag_type, tag_data_type from %s.%s where %s limit $1", r.logsDB, r.logsTagAttributeTableV2, tagTypeFilter)
rows, err = r.db.Query(ctx, query, req.Limit)
}
@@ -3967,13 +3972,16 @@ func (r *ClickHouseReader) GetLogAttributeKeys(ctx context.Context, req *v3.Filt
response.AttributeKeys = append(response.AttributeKeys, key)
}
// add other attributes
for _, f := range constants.StaticFieldsLogsV3 {
if (v3.AttributeKey{} == f) {
continue
}
if len(req.SearchText) == 0 || strings.Contains(f.Key, req.SearchText) {
response.AttributeKeys = append(response.AttributeKeys, f)
// add other attributes only when the tagType is not specified
// i.e retrieve all attributes
if req.TagType == "" {
for _, f := range constants.StaticFieldsLogsV3 {
if (v3.AttributeKey{} == f) {
continue
}
if len(req.SearchText) == 0 || strings.Contains(f.Key, req.SearchText) {
response.AttributeKeys = append(response.AttributeKeys, f)
}
}
}
@@ -4715,7 +4723,12 @@ func (r *ClickHouseReader) GetTraceAttributeKeys(ctx context.Context, req *v3.Fi
var rows driver.Rows
var response v3.FilterAttributeKeyResponse
query = fmt.Sprintf("SELECT DISTINCT(tag_key), tag_type, tag_data_type FROM %s.%s WHERE tag_key ILIKE $1 and tag_type != 'spanfield' LIMIT $2", r.TraceDB, r.spanAttributeTableV2)
tagTypeFilter := `tag_type != 'spanfield'`
if req.TagType != "" {
tagTypeFilter = fmt.Sprintf(`tag_type != 'spanfield' and tag_type = '%s'`, req.TagType)
}
query = fmt.Sprintf("SELECT DISTINCT(tag_key), tag_type, tag_data_type FROM %s.%s WHERE tag_key ILIKE $1 and %s LIMIT $2", r.TraceDB, r.spanAttributeTableV2, tagTypeFilter)
rows, err = r.db.Query(ctx, query, fmt.Sprintf("%%%s%%", req.SearchText), req.Limit)
@@ -4760,13 +4773,16 @@ func (r *ClickHouseReader) GetTraceAttributeKeys(ctx context.Context, req *v3.Fi
fields = constants.DeprecatedStaticFieldsTraces
}
// add the new static fields
for _, f := range fields {
if (v3.AttributeKey{} == f) {
continue
}
if len(req.SearchText) == 0 || strings.Contains(f.Key, req.SearchText) {
response.AttributeKeys = append(response.AttributeKeys, f)
// add the new static fields only when the tagType is not specified
// i.e retrieve all attributes
if req.TagType == "" {
for _, f := range fields {
if (v3.AttributeKey{} == f) {
continue
}
if len(req.SearchText) == 0 || strings.Contains(f.Key, req.SearchText) {
response.AttributeKeys = append(response.AttributeKeys, f)
}
}
}


@@ -8,68 +8,59 @@ import (
"time"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/google/uuid"
"github.com/jmoiron/sqlx"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
)
type cloudProviderAccountsRepository interface {
listConnected(ctx context.Context, cloudProvider string) ([]AccountRecord, *model.ApiError)
listConnected(ctx context.Context, orgId string, provider string) ([]types.CloudIntegration, *model.ApiError)
get(ctx context.Context, cloudProvider string, id string) (*AccountRecord, *model.ApiError)
get(ctx context.Context, orgId string, provider string, id string) (*types.CloudIntegration, *model.ApiError)
getConnectedCloudAccount(
ctx context.Context, cloudProvider string, cloudAccountId string,
) (*AccountRecord, *model.ApiError)
getConnectedCloudAccount(ctx context.Context, orgId string, provider string, accountID string) (*types.CloudIntegration, *model.ApiError)
// Insert an account or update it by (cloudProvider, id)
// for specified non-empty fields
upsert(
ctx context.Context,
cloudProvider string,
orgId string,
provider string,
id *string,
config *AccountConfig,
cloudAccountId *string,
agentReport *AgentReport,
config *types.AccountConfig,
accountId *string,
agentReport *types.AgentReport,
removedAt *time.Time,
) (*AccountRecord, *model.ApiError)
) (*types.CloudIntegration, *model.ApiError)
}
func newCloudProviderAccountsRepository(db *sqlx.DB) (
func newCloudProviderAccountsRepository(store sqlstore.SQLStore) (
*cloudProviderAccountsSQLRepository, error,
) {
return &cloudProviderAccountsSQLRepository{
db: db,
store: store,
}, nil
}
type cloudProviderAccountsSQLRepository struct {
db *sqlx.DB
store sqlstore.SQLStore
}
func (r *cloudProviderAccountsSQLRepository) listConnected(
ctx context.Context, cloudProvider string,
) ([]AccountRecord, *model.ApiError) {
accounts := []AccountRecord{}
ctx context.Context, orgId string, cloudProvider string,
) ([]types.CloudIntegration, *model.ApiError) {
accounts := []types.CloudIntegration{}
err := r.store.BunDB().NewSelect().
Model(&accounts).
Where("org_id = ?", orgId).
Where("provider = ?", cloudProvider).
Where("removed_at is NULL").
Where("account_id is not NULL").
Where("last_agent_report is not NULL").
Order("created_at").
Scan(ctx)
err := r.db.SelectContext(
ctx, &accounts, `
select
cloud_provider,
id,
config_json,
cloud_account_id,
last_agent_report_json,
created_at,
removed_at
from cloud_integrations_accounts
where
cloud_provider=$1
and removed_at is NULL
and cloud_account_id is not NULL
and last_agent_report_json is not NULL
order by created_at
`, cloudProvider,
)
if err != nil {
return nil, model.InternalError(fmt.Errorf(
"could not query connected cloud accounts: %w", err,
@@ -80,27 +71,16 @@ func (r *cloudProviderAccountsSQLRepository) listConnected(
}
func (r *cloudProviderAccountsSQLRepository) get(
ctx context.Context, cloudProvider string, id string,
) (*AccountRecord, *model.ApiError) {
var result AccountRecord
ctx context.Context, orgId string, provider string, id string,
) (*types.CloudIntegration, *model.ApiError) {
var result types.CloudIntegration
err := r.db.GetContext(
ctx, &result, `
select
cloud_provider,
id,
config_json,
cloud_account_id,
last_agent_report_json,
created_at,
removed_at
from cloud_integrations_accounts
where
cloud_provider=$1
and id=$2
`,
cloudProvider, id,
)
err := r.store.BunDB().NewSelect().
Model(&result).
Where("org_id = ?", orgId).
Where("provider = ?", provider).
Where("id = ?", id).
Scan(ctx)
if err == sql.ErrNoRows {
return nil, model.NotFoundError(fmt.Errorf(
@@ -116,33 +96,22 @@ func (r *cloudProviderAccountsSQLRepository) get(
}
func (r *cloudProviderAccountsSQLRepository) getConnectedCloudAccount(
ctx context.Context, cloudProvider string, cloudAccountId string,
) (*AccountRecord, *model.ApiError) {
var result AccountRecord
ctx context.Context, orgId string, provider string, accountId string,
) (*types.CloudIntegration, *model.ApiError) {
var result types.CloudIntegration
err := r.db.GetContext(
ctx, &result, `
select
cloud_provider,
id,
config_json,
cloud_account_id,
last_agent_report_json,
created_at,
removed_at
from cloud_integrations_accounts
where
cloud_provider=$1
and cloud_account_id=$2
and last_agent_report_json is not NULL
and removed_at is NULL
`,
cloudProvider, cloudAccountId,
)
err := r.store.BunDB().NewSelect().
Model(&result).
Where("org_id = ?", orgId).
Where("provider = ?", provider).
Where("account_id = ?", accountId).
Where("last_agent_report is not NULL").
Where("removed_at is NULL").
Scan(ctx)
if err == sql.ErrNoRows {
return nil, model.NotFoundError(fmt.Errorf(
"couldn't find connected cloud account %s", cloudAccountId,
"couldn't find connected cloud account %s", accountId,
))
} else if err != nil {
return nil, model.InternalError(fmt.Errorf(
@@ -155,17 +124,18 @@ func (r *cloudProviderAccountsSQLRepository) getConnectedCloudAccount(
func (r *cloudProviderAccountsSQLRepository) upsert(
ctx context.Context,
cloudProvider string,
orgId string,
provider string,
id *string,
config *AccountConfig,
cloudAccountId *string,
agentReport *AgentReport,
config *types.AccountConfig,
accountId *string,
agentReport *types.AgentReport,
removedAt *time.Time,
) (*AccountRecord, *model.ApiError) {
) (*types.CloudIntegration, *model.ApiError) {
// Insert
if id == nil {
newId := uuid.NewString()
id = &newId
temp := valuer.GenerateUUID().StringValue()
id = &temp
}
// Prepare clause for setting values in `on conflict do update`
@@ -176,19 +146,19 @@ func (r *cloudProviderAccountsSQLRepository) upsert(
if config != nil {
onConflictSetStmts = append(
onConflictSetStmts, setColStatement("config_json"),
onConflictSetStmts, setColStatement("config"),
)
}
if cloudAccountId != nil {
if accountId != nil {
onConflictSetStmts = append(
onConflictSetStmts, setColStatement("cloud_account_id"),
onConflictSetStmts, setColStatement("account_id"),
)
}
if agentReport != nil {
onConflictSetStmts = append(
onConflictSetStmts, setColStatement("last_agent_report_json"),
onConflictSetStmts, setColStatement("last_agent_report"),
)
}
@@ -198,37 +168,45 @@ func (r *cloudProviderAccountsSQLRepository) upsert(
)
}
// set updated_at to current timestamp if it's an upsert
onConflictSetStmts = append(
onConflictSetStmts, setColStatement("updated_at"),
)
onConflictClause := ""
if len(onConflictSetStmts) > 0 {
onConflictClause = fmt.Sprintf(
"on conflict(cloud_provider, id) do update SET\n%s",
"conflict(id, provider, org_id) do update SET\n%s",
strings.Join(onConflictSetStmts, ",\n"),
)
}
insertQuery := fmt.Sprintf(`
INSERT INTO cloud_integrations_accounts (
cloud_provider,
id,
config_json,
cloud_account_id,
last_agent_report_json,
removed_at
) values ($1, $2, $3, $4, $5, $6)
%s`, onConflictClause,
)
integration := types.CloudIntegration{
OrgID: orgId,
Provider: provider,
Identifiable: types.Identifiable{ID: valuer.MustNewUUID(*id)},
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
Config: config,
AccountID: accountId,
LastAgentReport: agentReport,
RemovedAt: removedAt,
}
_, dbErr := r.store.BunDB().NewInsert().
Model(&integration).
On(onConflictClause).
Exec(ctx)
_, dbErr := r.db.ExecContext(
ctx, insertQuery,
cloudProvider, id, config, cloudAccountId, agentReport, removedAt,
)
if dbErr != nil {
return nil, model.InternalError(fmt.Errorf(
"could not upsert cloud account record: %w", dbErr,
))
}
upsertedAccount, apiErr := r.get(ctx, cloudProvider, *id)
upsertedAccount, apiErr := r.get(ctx, orgId, provider, *id)
if apiErr != nil {
return nil, model.InternalError(fmt.Errorf(
"couldn't fetch upserted account by id: %w", apiErr.ToError(),


@@ -33,12 +33,12 @@ type Controller struct {
func NewController(sqlStore sqlstore.SQLStore) (
*Controller, error,
) {
accountsRepo, err := newCloudProviderAccountsRepository(sqlStore.SQLxDB())
accountsRepo, err := newCloudProviderAccountsRepository(sqlStore)
if err != nil {
return nil, fmt.Errorf("couldn't create cloud provider accounts repo: %w", err)
}
serviceConfigRepo, err := newServiceConfigRepository(sqlStore.SQLxDB())
serviceConfigRepo, err := newServiceConfigRepository(sqlStore)
if err != nil {
return nil, fmt.Errorf("couldn't create cloud provider service config repo: %w", err)
}
@@ -49,19 +49,12 @@ func NewController(sqlStore sqlstore.SQLStore) (
}, nil
}
type Account struct {
Id string `json:"id"`
CloudAccountId string `json:"cloud_account_id"`
Config AccountConfig `json:"config"`
Status AccountStatus `json:"status"`
}
type ConnectedAccountsListResponse struct {
Accounts []Account `json:"accounts"`
Accounts []types.Account `json:"accounts"`
}
func (c *Controller) ListConnectedAccounts(
ctx context.Context, cloudProvider string,
ctx context.Context, orgId string, cloudProvider string,
) (
*ConnectedAccountsListResponse, *model.ApiError,
) {
@@ -69,14 +62,14 @@ func (c *Controller) ListConnectedAccounts(
return nil, apiErr
}
accountRecords, apiErr := c.accountsRepo.listConnected(ctx, cloudProvider)
accountRecords, apiErr := c.accountsRepo.listConnected(ctx, orgId, cloudProvider)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't list cloud accounts")
}
connectedAccounts := []Account{}
connectedAccounts := []types.Account{}
for _, a := range accountRecords {
connectedAccounts = append(connectedAccounts, a.account())
connectedAccounts = append(connectedAccounts, a.Account())
}
return &ConnectedAccountsListResponse{
@@ -88,7 +81,7 @@ type GenerateConnectionUrlRequest struct {
// Optional. To be specified for updates.
AccountId *string `json:"account_id,omitempty"`
AccountConfig AccountConfig `json:"account_config"`
AccountConfig types.AccountConfig `json:"account_config"`
AgentConfig SigNozAgentConfig `json:"agent_config"`
}
@@ -109,7 +102,7 @@ type GenerateConnectionUrlResponse struct {
}
func (c *Controller) GenerateConnectionUrl(
ctx context.Context, cloudProvider string, req GenerateConnectionUrlRequest,
ctx context.Context, orgId string, cloudProvider string, req GenerateConnectionUrlRequest,
) (*GenerateConnectionUrlResponse, *model.ApiError) {
// Account connection with a simple connection URL may not be available for all providers.
if cloudProvider != "aws" {
@@ -117,7 +110,7 @@ func (c *Controller) GenerateConnectionUrl(
}
account, apiErr := c.accountsRepo.upsert(
ctx, cloudProvider, req.AccountId, &req.AccountConfig, nil, nil, nil,
ctx, orgId, cloudProvider, req.AccountId, &req.AccountConfig, nil, nil, nil,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't upsert cloud account")
@@ -135,7 +128,7 @@ func (c *Controller) GenerateConnectionUrl(
"param_SigNozIntegrationAgentVersion": agentVersion,
"param_SigNozApiUrl": req.AgentConfig.SigNozAPIUrl,
"param_SigNozApiKey": req.AgentConfig.SigNozAPIKey,
"param_SigNozAccountId": account.Id,
"param_SigNozAccountId": account.ID.StringValue(),
"param_IngestionUrl": req.AgentConfig.IngestionUrl,
"param_IngestionKey": req.AgentConfig.IngestionKey,
"stackName": "signoz-integration",
@@ -148,19 +141,19 @@ func (c *Controller) GenerateConnectionUrl(
}
return &GenerateConnectionUrlResponse{
AccountId: account.Id,
AccountId: account.ID.StringValue(),
ConnectionUrl: connectionUrl,
}, nil
}
type AccountStatusResponse struct {
Id string `json:"id"`
CloudAccountId *string `json:"cloud_account_id,omitempty"`
Status AccountStatus `json:"status"`
Id string `json:"id"`
CloudAccountId *string `json:"cloud_account_id,omitempty"`
Status types.AccountStatus `json:"status"`
}
func (c *Controller) GetAccountStatus(
ctx context.Context, cloudProvider string, accountId string,
ctx context.Context, orgId string, cloudProvider string, accountId string,
) (
*AccountStatusResponse, *model.ApiError,
) {
@@ -168,23 +161,23 @@ func (c *Controller) GetAccountStatus(
return nil, apiErr
}
account, apiErr := c.accountsRepo.get(ctx, cloudProvider, accountId)
account, apiErr := c.accountsRepo.get(ctx, orgId, cloudProvider, accountId)
if apiErr != nil {
return nil, apiErr
}
resp := AccountStatusResponse{
Id: account.Id,
CloudAccountId: account.CloudAccountId,
Status: account.status(),
Id: account.ID.StringValue(),
CloudAccountId: account.AccountID,
Status: account.Status(),
}
return &resp, nil
}
type AgentCheckInRequest struct {
AccountId string `json:"account_id"`
CloudAccountId string `json:"cloud_account_id"`
ID string `json:"account_id"`
AccountID string `json:"cloud_account_id"`
// Arbitrary cloud specific Agent data
Data map[string]any `json:"data,omitempty"`
}
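Note that the struct fields are renamed on the Go side only (`AccountId` becomes `ID`, `CloudAccountId` becomes `AccountID`) while the JSON tags stay `account_id` and `cloud_account_id`, so the agent's wire format is unchanged. A quick self-contained check:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Same tags as AgentCheckInRequest in this diff; only the Go names changed.
type AgentCheckInRequest struct {
	ID        string         `json:"account_id"`
	AccountID string         `json:"cloud_account_id"`
	Data      map[string]any `json:"data,omitempty"`
}

func main() {
	out, _ := json.Marshal(AgentCheckInRequest{ID: "abc", AccountID: "546311234"})
	fmt.Println(string(out)) // {"account_id":"abc","cloud_account_id":"546311234"}
}
```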
@@ -204,35 +197,35 @@ type IntegrationConfigForAgent struct {
}
func (c *Controller) CheckInAsAgent(
ctx context.Context, cloudProvider string, req AgentCheckInRequest,
ctx context.Context, orgId string, cloudProvider string, req AgentCheckInRequest,
) (*AgentCheckInResponse, *model.ApiError) {
if apiErr := validateCloudProviderName(cloudProvider); apiErr != nil {
return nil, apiErr
}
existingAccount, apiErr := c.accountsRepo.get(ctx, cloudProvider, req.AccountId)
if existingAccount != nil && existingAccount.CloudAccountId != nil && *existingAccount.CloudAccountId != req.CloudAccountId {
existingAccount, apiErr := c.accountsRepo.get(ctx, orgId, cloudProvider, req.ID)
if existingAccount != nil && existingAccount.AccountID != nil && *existingAccount.AccountID != req.AccountID {
return nil, model.BadRequest(fmt.Errorf(
"can't check in with new %s account id %s for account %s with existing %s id %s",
cloudProvider, req.CloudAccountId, existingAccount.Id, cloudProvider, *existingAccount.CloudAccountId,
cloudProvider, req.AccountID, existingAccount.ID.StringValue(), cloudProvider, *existingAccount.AccountID,
))
}
existingAccount, apiErr = c.accountsRepo.getConnectedCloudAccount(ctx, cloudProvider, req.CloudAccountId)
if existingAccount != nil && existingAccount.Id != req.AccountId {
existingAccount, apiErr = c.accountsRepo.getConnectedCloudAccount(ctx, orgId, cloudProvider, req.AccountID)
if existingAccount != nil && existingAccount.ID.StringValue() != req.ID {
return nil, model.BadRequest(fmt.Errorf(
"can't check in to %s account %s with id %s. already connected with id %s",
cloudProvider, req.CloudAccountId, req.AccountId, existingAccount.Id,
cloudProvider, req.AccountID, req.ID, existingAccount.ID.StringValue(),
))
}
agentReport := AgentReport{
agentReport := types.AgentReport{
TimestampMillis: time.Now().UnixMilli(),
Data: req.Data,
}
account, apiErr := c.accountsRepo.upsert(
ctx, cloudProvider, &req.AccountId, nil, &req.CloudAccountId, &agentReport, nil,
ctx, orgId, cloudProvider, &req.ID, nil, &req.AccountID, &agentReport, nil,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't upsert cloud account")
@@ -265,7 +258,7 @@ func (c *Controller) CheckInAsAgent(
}
svcConfigs, apiErr := c.serviceConfigRepo.getAllForAccount(
ctx, cloudProvider, *account.CloudAccountId,
ctx, orgId, account.ID.StringValue(),
)
if apiErr != nil {
return nil, model.WrapApiError(
@@ -298,54 +291,55 @@ func (c *Controller) CheckInAsAgent(
}
return &AgentCheckInResponse{
AccountId: account.Id,
CloudAccountId: *account.CloudAccountId,
AccountId: account.ID.StringValue(),
CloudAccountId: *account.AccountID,
RemovedAt: account.RemovedAt,
IntegrationConfig: agentConfig,
}, nil
}
type UpdateAccountConfigRequest struct {
Config AccountConfig `json:"config"`
Config types.AccountConfig `json:"config"`
}
func (c *Controller) UpdateAccountConfig(
ctx context.Context,
orgId string,
cloudProvider string,
accountId string,
req UpdateAccountConfigRequest,
) (*Account, *model.ApiError) {
) (*types.Account, *model.ApiError) {
if apiErr := validateCloudProviderName(cloudProvider); apiErr != nil {
return nil, apiErr
}
accountRecord, apiErr := c.accountsRepo.upsert(
ctx, cloudProvider, &accountId, &req.Config, nil, nil, nil,
ctx, orgId, cloudProvider, &accountId, &req.Config, nil, nil, nil,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't upsert cloud account")
}
account := accountRecord.account()
account := accountRecord.Account()
return &account, nil
}
func (c *Controller) DisconnectAccount(
ctx context.Context, cloudProvider string, accountId string,
) (*AccountRecord, *model.ApiError) {
ctx context.Context, orgId string, cloudProvider string, accountId string,
) (*types.CloudIntegration, *model.ApiError) {
if apiErr := validateCloudProviderName(cloudProvider); apiErr != nil {
return nil, apiErr
}
account, apiErr := c.accountsRepo.get(ctx, cloudProvider, accountId)
account, apiErr := c.accountsRepo.get(ctx, orgId, cloudProvider, accountId)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't disconnect account")
}
tsNow := time.Now()
account, apiErr = c.accountsRepo.upsert(
ctx, cloudProvider, &accountId, nil, nil, nil, &tsNow,
ctx, orgId, cloudProvider, &accountId, nil, nil, nil, &tsNow,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't disconnect account")
@@ -360,6 +354,7 @@ type ListServicesResponse struct {
func (c *Controller) ListServices(
ctx context.Context,
orgID string,
cloudProvider string,
cloudAccountId *string,
) (*ListServicesResponse, *model.ApiError) {
@@ -373,10 +368,16 @@ func (c *Controller) ListServices(
return nil, model.WrapApiError(apiErr, "couldn't list cloud services")
}
svcConfigs := map[string]*CloudServiceConfig{}
svcConfigs := map[string]*types.CloudServiceConfig{}
if cloudAccountId != nil {
activeAccount, apiErr := c.accountsRepo.getConnectedCloudAccount(
ctx, orgID, cloudProvider, *cloudAccountId,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't get active account")
}
svcConfigs, apiErr = c.serviceConfigRepo.getAllForAccount(
ctx, cloudProvider, *cloudAccountId,
ctx, orgID, activeAccount.ID.StringValue(),
)
if apiErr != nil {
return nil, model.WrapApiError(
@@ -400,6 +401,7 @@ func (c *Controller) ListServices(
func (c *Controller) GetServiceDetails(
ctx context.Context,
orgID string,
cloudProvider string,
serviceId string,
cloudAccountId *string,
@@ -415,8 +417,16 @@ func (c *Controller) GetServiceDetails(
}
if cloudAccountId != nil {
activeAccount, apiErr := c.accountsRepo.getConnectedCloudAccount(
ctx, orgID, cloudProvider, *cloudAccountId,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't get active account")
}
config, apiErr := c.serviceConfigRepo.get(
ctx, cloudProvider, *cloudAccountId, serviceId,
ctx, orgID, activeAccount.ID.StringValue(), serviceId,
)
if apiErr != nil && apiErr.Type() != model.ErrorNotFound {
return nil, model.WrapApiError(apiErr, "couldn't fetch service config")
@@ -425,15 +435,22 @@ func (c *Controller) GetServiceDetails(
if config != nil {
service.Config = config
enabled := false
if config.Metrics != nil && config.Metrics.Enabled {
// add links to service dashboards, making them clickable.
for i, d := range service.Assets.Dashboards {
dashboardUuid := c.dashboardUuid(
cloudProvider, serviceId, d.Id,
)
enabled = true
}
// add links to service dashboards, making them clickable.
for i, d := range service.Assets.Dashboards {
dashboardUuid := c.dashboardUuid(
cloudProvider, serviceId, d.Id,
)
if enabled {
service.Assets.Dashboards[i].Url = fmt.Sprintf(
"/dashboard/%s", dashboardUuid,
)
} else {
service.Assets.Dashboards[i].Url = ""
}
}
}
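The interleaved before/after lines in this hunk are hard to follow; reading only the added lines, the new logic appears to be (indentation inferred from the diff):

```go
if config != nil {
	service.Config = config

	enabled := false
	if config.Metrics != nil && config.Metrics.Enabled {
		enabled = true
	}

	// add links to service dashboards, making them clickable;
	// clear the link when metrics aren't enabled for the service.
	for i, d := range service.Assets.Dashboards {
		dashboardUuid := c.dashboardUuid(cloudProvider, serviceId, d.Id)
		if enabled {
			service.Assets.Dashboards[i].Url = fmt.Sprintf("/dashboard/%s", dashboardUuid)
		} else {
			service.Assets.Dashboards[i].Url = ""
		}
	}
}
```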
@@ -443,17 +460,18 @@ func (c *Controller) GetServiceDetails(
}
type UpdateServiceConfigRequest struct {
CloudAccountId string `json:"cloud_account_id"`
Config CloudServiceConfig `json:"config"`
CloudAccountId string `json:"cloud_account_id"`
Config types.CloudServiceConfig `json:"config"`
}
type UpdateServiceConfigResponse struct {
Id string `json:"id"`
Config CloudServiceConfig `json:"config"`
Id string `json:"id"`
Config types.CloudServiceConfig `json:"config"`
}
func (c *Controller) UpdateServiceConfig(
ctx context.Context,
orgID string,
cloudProvider string,
serviceId string,
req UpdateServiceConfigRequest,
@@ -465,7 +483,7 @@ func (c *Controller) UpdateServiceConfig(
// can only update config for a connected cloud account id
_, apiErr := c.accountsRepo.getConnectedCloudAccount(
ctx, cloudProvider, req.CloudAccountId,
ctx, orgID, cloudProvider, req.CloudAccountId,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't find connected cloud account")
@@ -478,7 +496,7 @@ func (c *Controller) UpdateServiceConfig(
}
updatedConfig, apiErr := c.serviceConfigRepo.upsert(
ctx, cloudProvider, req.CloudAccountId, serviceId, req.Config,
ctx, orgID, cloudProvider, req.CloudAccountId, serviceId, req.Config,
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't update service config")
@@ -492,13 +510,13 @@ func (c *Controller) UpdateServiceConfig(
// All dashboards that are available based on cloud integrations configuration
// across all cloud providers
func (c *Controller) AvailableDashboards(ctx context.Context) (
func (c *Controller) AvailableDashboards(ctx context.Context, orgId string) (
[]types.Dashboard, *model.ApiError,
) {
allDashboards := []types.Dashboard{}
for _, provider := range []string{"aws"} {
providerDashboards, apiErr := c.AvailableDashboardsForCloudProvider(ctx, provider)
providerDashboards, apiErr := c.AvailableDashboardsForCloudProvider(ctx, orgId, provider)
if apiErr != nil {
return nil, model.WrapApiError(
apiErr, fmt.Sprintf("couldn't get available dashboards for %s", provider),
@@ -512,10 +530,10 @@ func (c *Controller) AvailableDashboards(ctx context.Context) (
}
func (c *Controller) AvailableDashboardsForCloudProvider(
ctx context.Context, cloudProvider string,
ctx context.Context, orgID string, cloudProvider string,
) ([]types.Dashboard, *model.ApiError) {
accountRecords, apiErr := c.accountsRepo.listConnected(ctx, cloudProvider)
accountRecords, apiErr := c.accountsRepo.listConnected(ctx, orgID, cloudProvider)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, "couldn't list connected cloud accounts")
}
@@ -524,9 +542,9 @@ func (c *Controller) AvailableDashboardsForCloudProvider(
servicesWithAvailableMetrics := map[string]*time.Time{}
for _, ar := range accountRecords {
if ar.CloudAccountId != nil {
if ar.AccountID != nil {
configsBySvcId, apiErr := c.serviceConfigRepo.getAllForAccount(
ctx, cloudProvider, *ar.CloudAccountId,
ctx, orgID, ar.ID.StringValue(),
)
if apiErr != nil {
return nil, apiErr
@@ -574,6 +592,7 @@ func (c *Controller) AvailableDashboardsForCloudProvider(
}
func (c *Controller) GetDashboardById(
ctx context.Context,
orgId string,
dashboardUuid string,
) (*types.Dashboard, *model.ApiError) {
cloudProvider, _, _, apiErr := c.parseDashboardUuid(dashboardUuid)
@@ -581,7 +600,7 @@ func (c *Controller) GetDashboardById(
return nil, apiErr
}
allDashboards, apiErr := c.AvailableDashboardsForCloudProvider(ctx, cloudProvider)
allDashboards, apiErr := c.AvailableDashboardsForCloudProvider(ctx, orgId, cloudProvider)
if apiErr != nil {
return nil, model.WrapApiError(
apiErr, fmt.Sprintf("couldn't list available dashboards"),
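A recurring change in this controller: service-config lookups are no longer keyed by the provider's cloud account id but by the integration record's internal UUID, so callers first resolve the connected account. Condensed from the ListServices / GetServiceDetails hunks above, the pattern reads:

```go
// Resolve the connected account, then key config lookups by its internal UUID.
activeAccount, apiErr := c.accountsRepo.getConnectedCloudAccount(
	ctx, orgID, cloudProvider, *cloudAccountId,
)
if apiErr != nil {
	return nil, model.WrapApiError(apiErr, "couldn't get active account")
}
svcConfigs, apiErr := c.serviceConfigRepo.getAllForAccount(
	ctx, orgID, activeAccount.ID.StringValue(),
)
```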

View File

@@ -4,23 +4,30 @@ import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/query-service/auth"
"github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/query-service/dao"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/query-service/utils"
"github.com/SigNoz/signoz/pkg/types"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
)
func TestRegenerateConnectionUrlWithUpdatedConfig(t *testing.T) {
require := require.New(t)
sqlStore, _ := utils.NewTestSqliteDB(t)
sqlStore := utils.NewQueryServiceDBForTests(t)
controller, err := NewController(sqlStore)
require.NoError(err)
user, apiErr := createTestUser()
require.Nil(apiErr)
// should be able to generate connection url for
// same account id again with updated config
testAccountConfig1 := AccountConfig{EnabledRegions: []string{"us-east-1", "us-west-1"}}
testAccountConfig1 := types.AccountConfig{EnabledRegions: []string{"us-east-1", "us-west-1"}}
resp1, apiErr := controller.GenerateConnectionUrl(
context.TODO(), "aws", GenerateConnectionUrlRequest{
context.TODO(), user.OrgID, "aws", GenerateConnectionUrlRequest{
AccountConfig: testAccountConfig1,
AgentConfig: SigNozAgentConfig{Region: "us-east-2"},
},
@@ -31,14 +38,14 @@ func TestRegenerateConnectionUrlWithUpdatedConfig(t *testing.T) {
testAccountId := resp1.AccountId
account, apiErr := controller.accountsRepo.get(
context.TODO(), "aws", testAccountId,
context.TODO(), user.OrgID, "aws", testAccountId,
)
require.Nil(apiErr)
require.Equal(testAccountConfig1, *account.Config)
testAccountConfig2 := AccountConfig{EnabledRegions: []string{"us-east-2", "us-west-2"}}
testAccountConfig2 := types.AccountConfig{EnabledRegions: []string{"us-east-2", "us-west-2"}}
resp2, apiErr := controller.GenerateConnectionUrl(
context.TODO(), "aws", GenerateConnectionUrlRequest{
context.TODO(), user.OrgID, "aws", GenerateConnectionUrlRequest{
AccountId: &testAccountId,
AccountConfig: testAccountConfig2,
AgentConfig: SigNozAgentConfig{Region: "us-east-2"},
@@ -48,7 +55,7 @@ func TestRegenerateConnectionUrlWithUpdatedConfig(t *testing.T) {
require.Equal(testAccountId, resp2.AccountId)
account, apiErr = controller.accountsRepo.get(
context.TODO(), "aws", testAccountId,
context.TODO(), user.OrgID, "aws", testAccountId,
)
require.Nil(apiErr)
require.Equal(testAccountConfig2, *account.Config)
@@ -56,18 +63,21 @@ func TestRegenerateConnectionUrlWithUpdatedConfig(t *testing.T) {
func TestAgentCheckIns(t *testing.T) {
require := require.New(t)
sqlStore, _ := utils.NewTestSqliteDB(t)
sqlStore := utils.NewQueryServiceDBForTests(t)
controller, err := NewController(sqlStore)
require.NoError(err)
user, apiErr := createTestUser()
require.Nil(apiErr)
// An agent should be able to check in from a cloud account even
// if no connection url was requested (no account with agent's account id exists)
testAccountId1 := uuid.NewString()
testCloudAccountId1 := "546311234"
resp1, apiErr := controller.CheckInAsAgent(
context.TODO(), "aws", AgentCheckInRequest{
AccountId: testAccountId1,
CloudAccountId: testCloudAccountId1,
context.TODO(), user.OrgID, "aws", AgentCheckInRequest{
ID: testAccountId1,
AccountID: testCloudAccountId1,
},
)
require.Nil(apiErr)
@@ -78,9 +88,9 @@ func TestAgentCheckIns(t *testing.T) {
// cloud account id for the same account.
testCloudAccountId2 := "99999999"
_, apiErr = controller.CheckInAsAgent(
context.TODO(), "aws", AgentCheckInRequest{
AccountId: testAccountId1,
CloudAccountId: testCloudAccountId2,
context.TODO(), user.OrgID, "aws", AgentCheckInRequest{
ID: testAccountId1,
AccountID: testCloudAccountId2,
},
)
require.NotNil(apiErr)
@@ -90,18 +100,18 @@ func TestAgentCheckIns(t *testing.T) {
// i.e. there can't be 2 connected account records for the same cloud account id
// at any point in time.
existingConnected, apiErr := controller.accountsRepo.getConnectedCloudAccount(
context.TODO(), "aws", testCloudAccountId1,
context.TODO(), user.OrgID, "aws", testCloudAccountId1,
)
require.Nil(apiErr)
require.NotNil(existingConnected)
require.Equal(testCloudAccountId1, *existingConnected.CloudAccountId)
require.Equal(testCloudAccountId1, *existingConnected.AccountID)
require.Nil(existingConnected.RemovedAt)
testAccountId2 := uuid.NewString()
_, apiErr = controller.CheckInAsAgent(
context.TODO(), "aws", AgentCheckInRequest{
AccountId: testAccountId2,
CloudAccountId: testCloudAccountId1,
context.TODO(), user.OrgID, "aws", AgentCheckInRequest{
ID: testAccountId2,
AccountID: testCloudAccountId1,
},
)
require.NotNil(apiErr)
@@ -109,29 +119,29 @@ func TestAgentCheckIns(t *testing.T) {
// After disconnecting the existing account record, the agent should be able to
// connect again for a particular cloud account id
_, apiErr = controller.DisconnectAccount(
context.TODO(), "aws", testAccountId1,
context.TODO(), user.OrgID, "aws", testAccountId1,
)
existingConnected, apiErr = controller.accountsRepo.getConnectedCloudAccount(
context.TODO(), "aws", testCloudAccountId1,
context.TODO(), user.OrgID, "aws", testCloudAccountId1,
)
require.Nil(existingConnected)
require.NotNil(apiErr)
require.Equal(model.ErrorNotFound, apiErr.Type())
_, apiErr = controller.CheckInAsAgent(
context.TODO(), "aws", AgentCheckInRequest{
AccountId: testAccountId2,
CloudAccountId: testCloudAccountId1,
context.TODO(), user.OrgID, "aws", AgentCheckInRequest{
ID: testAccountId2,
AccountID: testCloudAccountId1,
},
)
require.Nil(apiErr)
// should be able to keep checking in
_, apiErr = controller.CheckInAsAgent(
context.TODO(), "aws", AgentCheckInRequest{
AccountId: testAccountId2,
CloudAccountId: testCloudAccountId1,
context.TODO(), user.OrgID, "aws", AgentCheckInRequest{
ID: testAccountId2,
AccountID: testCloudAccountId1,
},
)
require.Nil(apiErr)
@@ -139,13 +149,16 @@ func TestAgentCheckIns(t *testing.T) {
func TestCantDisconnectNonExistentAccount(t *testing.T) {
require := require.New(t)
sqlStore, _ := utils.NewTestSqliteDB(t)
sqlStore := utils.NewQueryServiceDBForTests(t)
controller, err := NewController(sqlStore)
require.NoError(err)
user, apiErr := createTestUser()
require.Nil(apiErr)
// Attempting to disconnect a non-existent account should return an error
account, apiErr := controller.DisconnectAccount(
context.TODO(), "aws", uuid.NewString(),
context.TODO(), user.OrgID, "aws", uuid.NewString(),
)
require.NotNil(apiErr)
require.Equal(model.ErrorNotFound, apiErr.Type())
@@ -154,15 +167,23 @@ func TestCantDisconnectNonExistentAccount(t *testing.T) {
func TestConfigureService(t *testing.T) {
require := require.New(t)
sqlStore, _ := utils.NewTestSqliteDB(t)
sqlStore := utils.NewQueryServiceDBForTests(t)
controller, err := NewController(sqlStore)
require.NoError(err)
user, apiErr := createTestUser()
require.Nil(apiErr)
// create a connected account
testCloudAccountId := "546311234"
testConnectedAccount := makeTestConnectedAccount(t, user.OrgID, controller, testCloudAccountId)
require.Nil(testConnectedAccount.RemovedAt)
require.NotEmpty(testConnectedAccount.AccountID)
require.Equal(testCloudAccountId, *testConnectedAccount.AccountID)
// should start out without any service config
svcListResp, apiErr := controller.ListServices(
context.TODO(), "aws", &testCloudAccountId,
context.TODO(), user.OrgID, "aws", &testCloudAccountId,
)
require.Nil(apiErr)
@@ -170,25 +191,20 @@ func TestConfigureService(t *testing.T) {
require.Nil(svcListResp.Services[0].Config)
svcDetails, apiErr := controller.GetServiceDetails(
context.TODO(), "aws", testSvcId, &testCloudAccountId,
context.TODO(), user.OrgID, "aws", testSvcId, &testCloudAccountId,
)
require.Nil(apiErr)
require.Equal(testSvcId, svcDetails.Id)
require.Nil(svcDetails.Config)
// should be able to configure a service for a connected account
testConnectedAccount := makeTestConnectedAccount(t, controller, testCloudAccountId)
require.Nil(testConnectedAccount.RemovedAt)
require.NotNil(testConnectedAccount.CloudAccountId)
require.Equal(testCloudAccountId, *testConnectedAccount.CloudAccountId)
testSvcConfig := CloudServiceConfig{
Metrics: &CloudServiceMetricsConfig{
testSvcConfig := types.CloudServiceConfig{
Metrics: &types.CloudServiceMetricsConfig{
Enabled: true,
},
}
updateSvcConfigResp, apiErr := controller.UpdateServiceConfig(
context.TODO(), "aws", testSvcId, UpdateServiceConfigRequest{
context.TODO(), user.OrgID, "aws", testSvcId, UpdateServiceConfigRequest{
CloudAccountId: testCloudAccountId,
Config: testSvcConfig,
},
@@ -198,14 +214,14 @@ func TestConfigureService(t *testing.T) {
require.Equal(testSvcConfig, updateSvcConfigResp.Config)
svcDetails, apiErr = controller.GetServiceDetails(
context.TODO(), "aws", testSvcId, &testCloudAccountId,
context.TODO(), user.OrgID, "aws", testSvcId, &testCloudAccountId,
)
require.Nil(apiErr)
require.Equal(testSvcId, svcDetails.Id)
require.Equal(testSvcConfig, *svcDetails.Config)
svcListResp, apiErr = controller.ListServices(
context.TODO(), "aws", &testCloudAccountId,
context.TODO(), user.OrgID, "aws", &testCloudAccountId,
)
require.Nil(apiErr)
for _, svc := range svcListResp.Services {
@@ -216,12 +232,12 @@ func TestConfigureService(t *testing.T) {
// should not be able to configure service after cloud account has been disconnected
_, apiErr = controller.DisconnectAccount(
context.TODO(), "aws", testConnectedAccount.Id,
context.TODO(), user.OrgID, "aws", testConnectedAccount.ID.StringValue(),
)
require.Nil(apiErr)
_, apiErr = controller.UpdateServiceConfig(
context.TODO(), "aws", testSvcId,
context.TODO(), user.OrgID, "aws", testSvcId,
UpdateServiceConfigRequest{
CloudAccountId: testCloudAccountId,
Config: testSvcConfig,
@@ -231,7 +247,7 @@ func TestConfigureService(t *testing.T) {
// should not be able to configure a service for a cloud account id that is not connected yet
_, apiErr = controller.UpdateServiceConfig(
context.TODO(), "aws", testSvcId,
context.TODO(), user.OrgID, "aws", testSvcId,
UpdateServiceConfigRequest{
CloudAccountId: "9999999999",
Config: testSvcConfig,
@@ -241,7 +257,7 @@ func TestConfigureService(t *testing.T) {
// should not be able to set config for an unsupported service
_, apiErr = controller.UpdateServiceConfig(
context.TODO(), "aws", "bad-service", UpdateServiceConfigRequest{
context.TODO(), user.OrgID, "aws", "bad-service", UpdateServiceConfigRequest{
CloudAccountId: testCloudAccountId,
Config: testSvcConfig,
},
@@ -250,22 +266,54 @@ func TestConfigureService(t *testing.T) {
}
func makeTestConnectedAccount(t *testing.T, controller *Controller, cloudAccountId string) *AccountRecord {
func makeTestConnectedAccount(t *testing.T, orgId string, controller *Controller, cloudAccountId string) *types.CloudIntegration {
require := require.New(t)
// a check in from SigNoz agent creates or updates a connected account.
testAccountId := uuid.NewString()
resp, apiErr := controller.CheckInAsAgent(
context.TODO(), "aws", AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: cloudAccountId,
context.TODO(), orgId, "aws", AgentCheckInRequest{
ID: testAccountId,
AccountID: cloudAccountId,
},
)
require.Nil(apiErr)
require.Equal(testAccountId, resp.AccountId)
require.Equal(cloudAccountId, resp.CloudAccountId)
acc, err := controller.accountsRepo.get(context.TODO(), "aws", resp.AccountId)
acc, err := controller.accountsRepo.get(context.TODO(), orgId, "aws", resp.AccountId)
require.Nil(err)
return acc
}
func createTestUser() (*types.User, *model.ApiError) {
// Create a test user for auth
ctx := context.Background()
org, apiErr := dao.DB().CreateOrg(ctx, &types.Organization{
Name: "test",
})
if apiErr != nil {
return nil, apiErr
}
group, apiErr := dao.DB().GetGroupByName(ctx, constants.AdminGroup)
if apiErr != nil {
return nil, apiErr
}
auth.InitAuthCache(ctx)
userId := uuid.NewString()
return dao.DB().CreateUser(
ctx,
&types.User{
ID: userId,
Name: "test",
Email: userId[:8] + "test@test.com",
Password: "test",
OrgID: org.ID,
GroupID: group.ID,
},
true,
)
}

View File

@@ -1,123 +1,11 @@
package cloudintegrations
import (
"database/sql/driver"
"encoding/json"
"fmt"
"time"
"github.com/SigNoz/signoz/pkg/types"
)
// Represents a cloud provider account for cloud integrations
type AccountRecord struct {
CloudProvider string `json:"cloud_provider" db:"cloud_provider"`
Id string `json:"id" db:"id"`
Config *AccountConfig `json:"config" db:"config_json"`
CloudAccountId *string `json:"cloud_account_id" db:"cloud_account_id"`
LastAgentReport *AgentReport `json:"last_agent_report" db:"last_agent_report_json"`
CreatedAt time.Time `json:"created_at" db:"created_at"`
RemovedAt *time.Time `json:"removed_at" db:"removed_at"`
}
type AccountConfig struct {
EnabledRegions []string `json:"regions"`
}
func DefaultAccountConfig() AccountConfig {
return AccountConfig{
EnabledRegions: []string{},
}
}
// For serializing from db
func (c *AccountConfig) Scan(src any) error {
data, ok := src.([]byte)
if !ok {
return fmt.Errorf("tried to scan from %T instead of bytes", src)
}
return json.Unmarshal(data, &c)
}
// For serializing to db
func (c *AccountConfig) Value() (driver.Value, error) {
if c == nil {
return nil, nil
}
serialized, err := json.Marshal(c)
if err != nil {
return nil, fmt.Errorf(
"couldn't serialize cloud account config to JSON: %w", err,
)
}
return serialized, nil
}
type AgentReport struct {
TimestampMillis int64 `json:"timestamp_millis"`
Data map[string]any `json:"data"`
}
// For serializing from db
func (r *AgentReport) Scan(src any) error {
data, ok := src.([]byte)
if !ok {
return fmt.Errorf("tried to scan from %T instead of bytes", src)
}
return json.Unmarshal(data, &r)
}
// For serializing to db
func (r *AgentReport) Value() (driver.Value, error) {
if r == nil {
return nil, nil
}
serialized, err := json.Marshal(r)
if err != nil {
return nil, fmt.Errorf(
"couldn't serialize agent report to JSON: %w", err,
)
}
return serialized, nil
}
type AccountStatus struct {
Integration AccountIntegrationStatus `json:"integration"`
}
type AccountIntegrationStatus struct {
LastHeartbeatTsMillis *int64 `json:"last_heartbeat_ts_ms"`
}
func (a *AccountRecord) status() AccountStatus {
status := AccountStatus{}
if a.LastAgentReport != nil {
lastHeartbeat := a.LastAgentReport.TimestampMillis
status.Integration.LastHeartbeatTsMillis = &lastHeartbeat
}
return status
}
func (a *AccountRecord) account() Account {
ca := Account{Id: a.Id, Status: a.status()}
if a.CloudAccountId != nil {
ca.CloudAccountId = *a.CloudAccountId
}
if a.Config != nil {
ca.Config = *a.Config
} else {
ca.Config = DefaultAccountConfig()
}
return ca
}
type CloudServiceSummary struct {
Id string `json:"id"`
Title string `json:"title"`
@@ -125,7 +13,7 @@ type CloudServiceSummary struct {
// Present only if the service has been configured in the
// context of a cloud provider account.
Config *CloudServiceConfig `json:"config,omitempty"`
Config *types.CloudServiceConfig `json:"config,omitempty"`
}
type CloudServiceDetails struct {
@@ -144,44 +32,6 @@ type CloudServiceDetails struct {
TelemetryCollectionStrategy *CloudTelemetryCollectionStrategy `json:"telemetry_collection_strategy"`
}
type CloudServiceConfig struct {
Logs *CloudServiceLogsConfig `json:"logs,omitempty"`
Metrics *CloudServiceMetricsConfig `json:"metrics,omitempty"`
}
// For serializing from db
func (c *CloudServiceConfig) Scan(src any) error {
data, ok := src.([]byte)
if !ok {
return fmt.Errorf("tried to scan from %T instead of bytes", src)
}
return json.Unmarshal(data, &c)
}
// For serializing to db
func (c *CloudServiceConfig) Value() (driver.Value, error) {
if c == nil {
return nil, nil
}
serialized, err := json.Marshal(c)
if err != nil {
return nil, fmt.Errorf(
"couldn't serialize cloud service config to JSON: %w", err,
)
}
return serialized, nil
}
type CloudServiceLogsConfig struct {
Enabled bool `json:"enabled"`
}
type CloudServiceMetricsConfig struct {
Enabled bool `json:"enabled"`
}
type CloudServiceAssets struct {
Dashboards []CloudServiceDashboard `json:"dashboards"`
}

View File

@@ -4,161 +4,161 @@ import (
"context"
"database/sql"
"fmt"
"time"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/jmoiron/sqlx"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
)
type serviceConfigRepository interface {
get(
ctx context.Context,
cloudProvider string,
orgID string,
cloudAccountId string,
serviceId string,
) (*CloudServiceConfig, *model.ApiError)
serviceType string,
) (*types.CloudServiceConfig, *model.ApiError)
upsert(
ctx context.Context,
orgID string,
cloudProvider string,
cloudAccountId string,
serviceId string,
config CloudServiceConfig,
) (*CloudServiceConfig, *model.ApiError)
config types.CloudServiceConfig,
) (*types.CloudServiceConfig, *model.ApiError)
getAllForAccount(
ctx context.Context,
cloudProvider string,
orgID string,
cloudAccountId string,
) (
configsBySvcId map[string]*CloudServiceConfig,
configsBySvcId map[string]*types.CloudServiceConfig,
apiErr *model.ApiError,
)
}
func newServiceConfigRepository(db *sqlx.DB) (
func newServiceConfigRepository(store sqlstore.SQLStore) (
*serviceConfigSQLRepository, error,
) {
return &serviceConfigSQLRepository{
db: db,
store: store,
}, nil
}
type serviceConfigSQLRepository struct {
db *sqlx.DB
store sqlstore.SQLStore
}
func (r *serviceConfigSQLRepository) get(
ctx context.Context,
cloudProvider string,
orgID string,
cloudAccountId string,
serviceId string,
) (*CloudServiceConfig, *model.ApiError) {
serviceType string,
) (*types.CloudServiceConfig, *model.ApiError) {
var result CloudServiceConfig
var result types.CloudIntegrationService
err := r.db.GetContext(
ctx, &result, `
select
config_json
from cloud_integrations_service_configs
where
cloud_provider=$1
and cloud_account_id=$2
and service_id=$3
`,
cloudProvider, cloudAccountId, serviceId,
)
err := r.store.BunDB().NewSelect().
Model(&result).
Join("JOIN cloud_integration ci ON ci.id = cis.cloud_integration_id").
Where("ci.org_id = ?", orgID).
Where("ci.id = ?", cloudAccountId).
Where("cis.type = ?", serviceType).
Scan(ctx)
if err == sql.ErrNoRows {
return nil, model.NotFoundError(fmt.Errorf(
"couldn't find %s %s config for %s",
cloudProvider, serviceId, cloudAccountId,
"couldn't find config for cloud account %s",
cloudAccountId,
))
} else if err != nil {
return nil, model.InternalError(fmt.Errorf(
"couldn't query cloud service config: %w", err,
))
}
return &result, nil
return &result.Config, nil
}
func (r *serviceConfigSQLRepository) upsert(
ctx context.Context,
orgID string,
cloudProvider string,
cloudAccountId string,
serviceId string,
config CloudServiceConfig,
) (*CloudServiceConfig, *model.ApiError) {
config types.CloudServiceConfig,
) (*types.CloudServiceConfig, *model.ApiError) {
query := `
INSERT INTO cloud_integrations_service_configs (
cloud_provider,
cloud_account_id,
service_id,
config_json
) values ($1, $2, $3, $4)
on conflict(cloud_provider, cloud_account_id, service_id)
do update set config_json=excluded.config_json
`
_, dbErr := r.db.ExecContext(
ctx, query,
cloudProvider, cloudAccountId, serviceId, &config,
)
if dbErr != nil {
// get cloud integration id from account id
// if the account is not connected, we don't need to upsert the config
var cloudIntegrationId string
err := r.store.BunDB().NewSelect().
Model((*types.CloudIntegration)(nil)).
Column("id").
Where("provider = ?", cloudProvider).
Where("account_id = ?", cloudAccountId).
Where("org_id = ?", orgID).
Where("removed_at is NULL").
Where("last_agent_report is not NULL").
Scan(ctx, &cloudIntegrationId)
if err != nil {
return nil, model.InternalError(fmt.Errorf(
"could not upsert cloud service config: %w", dbErr,
"couldn't query cloud integration id: %w", err,
))
}
upsertedConfig, apiErr := r.get(ctx, cloudProvider, cloudAccountId, serviceId)
if apiErr != nil {
serviceConfig := types.CloudIntegrationService{
Identifiable: types.Identifiable{ID: valuer.GenerateUUID()},
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
Config: config,
Type: serviceId,
CloudIntegrationID: cloudIntegrationId,
}
_, err = r.store.BunDB().NewInsert().
Model(&serviceConfig).
On("conflict(cloud_integration_id, type) do update set config=excluded.config, updated_at=excluded.updated_at").
Exec(ctx)
if err != nil {
return nil, model.InternalError(fmt.Errorf(
"couldn't fetch upserted service config: %w", apiErr.ToError(),
"could not upsert cloud service config: %w", err,
))
}
return upsertedConfig, nil
return &serviceConfig.Config, nil
}
func (r *serviceConfigSQLRepository) getAllForAccount(
ctx context.Context,
cloudProvider string,
orgID string,
cloudAccountId string,
) (map[string]*CloudServiceConfig, *model.ApiError) {
) (map[string]*types.CloudServiceConfig, *model.ApiError) {
type ScannedServiceConfigRecord struct {
ServiceId string `db:"service_id"`
Config CloudServiceConfig `db:"config_json"`
}
serviceConfigs := []types.CloudIntegrationService{}
records := []ScannedServiceConfigRecord{}
err := r.db.SelectContext(
ctx, &records, `
select
service_id,
config_json
from cloud_integrations_service_configs
where
cloud_provider=$1
and cloud_account_id=$2
`,
cloudProvider, cloudAccountId,
)
err := r.store.BunDB().NewSelect().
Model(&serviceConfigs).
Join("JOIN cloud_integration ci ON ci.id = cis.cloud_integration_id").
Where("ci.id = ?", cloudAccountId).
Where("ci.org_id = ?", orgID).
Scan(ctx)
if err != nil {
return nil, model.InternalError(fmt.Errorf(
"could not query service configs from db: %w", err,
))
}
result := map[string]*CloudServiceConfig{}
result := map[string]*types.CloudServiceConfig{}
for _, r := range records {
result[r.ServiceId] = &r.Config
for _, r := range serviceConfigs {
result[r.Type] = &r.Config
}
return result, nil
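The `cis` alias in the joins above comes from bun's model metadata rather than from the query itself. A sketch of what the `types.CloudIntegrationService` model presumably declares, written as it would appear inside `pkg/types` (the table name is an assumption; the alias and columns are inferred from the queries and the upsert in this file):

```go
// Hypothetical model declaration; only the alias "cis" and the columns used in
// this file (type, config, cloud_integration_id, created_at/updated_at) are
// grounded in the diff. The table name is a guess.
type CloudIntegrationService struct {
	bun.BaseModel `bun:"table:cloud_integration_service,alias:cis"`

	Identifiable
	TimeAuditable
	Type               string             `bun:"type"`
	Config             CloudServiceConfig `bun:"config"`
	CloudIntegrationID string             `bun:"cloud_integration_id"`
}
```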

View File

@@ -22,6 +22,7 @@ import (
errorsV2 "github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/modules/preference"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations"
"github.com/SigNoz/signoz/pkg/query-service/app/metricsexplorer"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/valuer"
@@ -37,7 +38,6 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/app/dashboards"
"github.com/SigNoz/signoz/pkg/query-service/app/explorer"
"github.com/SigNoz/signoz/pkg/query-service/app/inframetrics"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations"
queues2 "github.com/SigNoz/signoz/pkg/query-service/app/integrations/messagingQueues/queues"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations/thirdPartyApi"
"github.com/SigNoz/signoz/pkg/query-service/app/logs"
@@ -1082,14 +1082,14 @@ func (aH *APIHandler) getDashboards(w http.ResponseWriter, r *http.Request) {
}
ic := aH.IntegrationsController
installedIntegrationDashboards, err := ic.GetDashboardsForInstalledIntegrations(r.Context())
installedIntegrationDashboards, err := ic.GetDashboardsForInstalledIntegrations(r.Context(), claims.OrgID)
if err != nil {
zap.L().Error("failed to get dashboards for installed integrations", zap.Error(err))
} else {
allDashboards = append(allDashboards, installedIntegrationDashboards...)
}
cloudIntegrationDashboards, err := aH.CloudIntegrationsController.AvailableDashboards(r.Context())
cloudIntegrationDashboards, err := aH.CloudIntegrationsController.AvailableDashboards(r.Context(), claims.OrgID)
if err != nil {
zap.L().Error("failed to get cloud dashboards", zap.Error(err))
} else {
@@ -1267,7 +1267,7 @@ func (aH *APIHandler) getDashboard(w http.ResponseWriter, r *http.Request) {
if aH.CloudIntegrationsController.IsCloudIntegrationDashboardUuid(uuid) {
dashboard, apiError = aH.CloudIntegrationsController.GetDashboardById(
r.Context(), uuid,
r.Context(), claims.OrgID, uuid,
)
if apiError != nil {
RespondError(w, apiError, nil)
@@ -1276,7 +1276,7 @@ func (aH *APIHandler) getDashboard(w http.ResponseWriter, r *http.Request) {
} else {
dashboard, apiError = aH.IntegrationsController.GetInstalledIntegrationDashboardById(
r.Context(), uuid,
r.Context(), claims.OrgID, uuid,
)
if apiError != nil {
RespondError(w, apiError, nil)
@@ -2207,6 +2207,11 @@ func (aH *APIHandler) editUser(w http.ResponseWriter, r *http.Request) {
old.ProfilePictureURL = update.ProfilePictureURL
}
if slices.Contains(types.AllIntegrationUserEmails, types.IntegrationUserEmail(old.Email)) {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "integration user cannot be updated"))
return
}
_, apiErr = dao.DB().EditUser(ctx, &types.User{
ID: old.ID,
Name: old.Name,
@@ -2238,6 +2243,11 @@ func (aH *APIHandler) deleteUser(w http.ResponseWriter, r *http.Request) {
return
}
if slices.Contains(types.AllIntegrationUserEmails, types.IntegrationUserEmail(user.Email)) {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "integration user cannot be updated"))
return
}
if user == nil {
RespondError(w, &model.ApiError{
Typ: model.ErrorNotFound,
@@ -3497,9 +3507,14 @@ func (aH *APIHandler) ListIntegrations(
for k, values := range r.URL.Query() {
params[k] = values[0]
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
resp, apiErr := aH.IntegrationsController.ListIntegrations(
r.Context(), params,
r.Context(), claims.OrgID, params,
)
if apiErr != nil {
RespondError(w, apiErr, "Failed to fetch integrations")
@@ -3512,8 +3527,13 @@ func (aH *APIHandler) GetIntegration(
w http.ResponseWriter, r *http.Request,
) {
integrationId := mux.Vars(r)["integrationId"]
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
integration, apiErr := aH.IntegrationsController.GetIntegration(
r.Context(), integrationId,
r.Context(), claims.OrgID, integrationId,
)
if apiErr != nil {
RespondError(w, apiErr, "Failed to fetch integration details")
@@ -3527,8 +3547,13 @@ func (aH *APIHandler) GetIntegrationConnectionStatus(
w http.ResponseWriter, r *http.Request,
) {
integrationId := mux.Vars(r)["integrationId"]
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
isInstalled, apiErr := aH.IntegrationsController.IsIntegrationInstalled(
r.Context(), integrationId,
r.Context(), claims.OrgID, integrationId,
)
if apiErr != nil {
RespondError(w, apiErr, "failed to check if integration is installed")
@@ -3542,7 +3567,7 @@ func (aH *APIHandler) GetIntegrationConnectionStatus(
}
connectionTests, apiErr := aH.IntegrationsController.GetIntegrationConnectionTests(
r.Context(), integrationId,
r.Context(), claims.OrgID, integrationId,
)
if apiErr != nil {
RespondError(w, apiErr, "failed to fetch integration connection tests")
@@ -3741,8 +3766,14 @@ func (aH *APIHandler) InstallIntegration(
return
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
integration, apiErr := aH.IntegrationsController.Install(
r.Context(), &req,
r.Context(), claims.OrgID, &req,
)
if apiErr != nil {
RespondError(w, apiErr, nil)
@@ -3763,7 +3794,13 @@ func (aH *APIHandler) UninstallIntegration(
return
}
apiErr := aH.IntegrationsController.Uninstall(r.Context(), &req)
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
apiErr := aH.IntegrationsController.Uninstall(r.Context(), claims.OrgID, &req)
if apiErr != nil {
RespondError(w, apiErr, nil)
return
@@ -3819,8 +3856,14 @@ func (aH *APIHandler) CloudIntegrationsListConnectedAccounts(
) {
cloudProvider := mux.Vars(r)["cloudProvider"]
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
resp, apiErr := aH.CloudIntegrationsController.ListConnectedAccounts(
r.Context(), cloudProvider,
r.Context(), claims.OrgID, cloudProvider,
)
if apiErr != nil {
@@ -3841,8 +3884,14 @@ func (aH *APIHandler) CloudIntegrationsGenerateConnectionUrl(
return
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
result, apiErr := aH.CloudIntegrationsController.GenerateConnectionUrl(
r.Context(), cloudProvider, req,
r.Context(), claims.OrgID, cloudProvider, req,
)
if apiErr != nil {
@@ -3859,8 +3908,14 @@ func (aH *APIHandler) CloudIntegrationsGetAccountStatus(
cloudProvider := mux.Vars(r)["cloudProvider"]
accountId := mux.Vars(r)["accountId"]
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
resp, apiErr := aH.CloudIntegrationsController.GetAccountStatus(
r.Context(), cloudProvider, accountId,
r.Context(), claims.OrgID, cloudProvider, accountId,
)
if apiErr != nil {
@@ -3881,8 +3936,14 @@ func (aH *APIHandler) CloudIntegrationsAgentCheckIn(
return
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
result, apiErr := aH.CloudIntegrationsController.CheckInAsAgent(
r.Context(), cloudProvider, req,
r.Context(), claims.OrgID, cloudProvider, req,
)
if apiErr != nil {
@@ -3905,8 +3966,14 @@ func (aH *APIHandler) CloudIntegrationsUpdateAccountConfig(
return
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
result, apiErr := aH.CloudIntegrationsController.UpdateAccountConfig(
r.Context(), cloudProvider, accountId, req,
r.Context(), claims.OrgID, cloudProvider, accountId, req,
)
if apiErr != nil {
@@ -3923,8 +3990,14 @@ func (aH *APIHandler) CloudIntegrationsDisconnectAccount(
cloudProvider := mux.Vars(r)["cloudProvider"]
accountId := mux.Vars(r)["accountId"]
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
result, apiErr := aH.CloudIntegrationsController.DisconnectAccount(
r.Context(), cloudProvider, accountId,
r.Context(), claims.OrgID, cloudProvider, accountId,
)
if apiErr != nil {
@@ -3947,8 +4020,14 @@ func (aH *APIHandler) CloudIntegrationsListServices(
cloudAccountId = &cloudAccountIdQP
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
resp, apiErr := aH.CloudIntegrationsController.ListServices(
r.Context(), cloudProvider, cloudAccountId,
r.Context(), claims.OrgID, cloudProvider, cloudAccountId,
)
if apiErr != nil {
@@ -3971,8 +4050,14 @@ func (aH *APIHandler) CloudIntegrationsGetServiceDetails(
cloudAccountId = &cloudAccountIdQP
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
resp, apiErr := aH.CloudIntegrationsController.GetServiceDetails(
r.Context(), cloudProvider, serviceId, cloudAccountId,
r.Context(), claims.OrgID, cloudProvider, serviceId, cloudAccountId,
)
if apiErr != nil {
RespondError(w, apiErr, nil)
@@ -4211,8 +4296,14 @@ func (aH *APIHandler) CloudIntegrationsUpdateServiceConfig(
return
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
render.Error(w, errorsV2.Newf(errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated"))
return
}
result, apiErr := aH.CloudIntegrationsController.UpdateServiceConfig(
r.Context(), cloudProvider, serviceId, req,
r.Context(), claims.OrgID, cloudProvider, serviceId, req,
)
if apiErr != nil {
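Each handler above repeats the same four-line claims guard. If one wanted to reduce the repetition, it could be factored into a small helper along these lines (hypothetical; not part of this PR):

```go
// Hypothetical helper; the guard body is copied from the handlers above.
func orgIDFromRequest(w http.ResponseWriter, r *http.Request) (string, bool) {
	claims, ok := authtypes.ClaimsFromContext(r.Context())
	if !ok {
		render.Error(w, errorsV2.Newf(
			errorsV2.TypeUnauthenticated, errorsV2.CodeUnauthenticated, "unauthenticated",
		))
		return "", false
	}
	return claims.OrgID, true
}
```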

View File

@@ -18,7 +18,7 @@ type Controller struct {
func NewController(sqlStore sqlstore.SQLStore) (
*Controller, error,
) {
mgr, err := NewManager(sqlStore.SQLxDB())
mgr, err := NewManager(sqlStore)
if err != nil {
return nil, fmt.Errorf("couldn't create integrations manager: %w", err)
}
@@ -35,7 +35,7 @@ type IntegrationsListResponse struct {
}
func (c *Controller) ListIntegrations(
ctx context.Context, params map[string]string,
ctx context.Context, orgId string, params map[string]string,
) (
*IntegrationsListResponse, *model.ApiError,
) {
@@ -47,7 +47,7 @@ func (c *Controller) ListIntegrations(
}
}
integrations, apiErr := c.mgr.ListIntegrations(ctx, filters)
integrations, apiErr := c.mgr.ListIntegrations(ctx, orgId, filters)
if apiErr != nil {
return nil, apiErr
}
@@ -58,16 +58,15 @@ func (c *Controller) ListIntegrations(
}
func (c *Controller) GetIntegration(
ctx context.Context, integrationId string,
ctx context.Context, orgId string, integrationId string,
) (*Integration, *model.ApiError) {
return c.mgr.GetIntegration(ctx, integrationId)
return c.mgr.GetIntegration(ctx, orgId, integrationId)
}
func (c *Controller) IsIntegrationInstalled(
ctx context.Context,
integrationId string,
ctx context.Context, orgId string, integrationId string,
) (bool, *model.ApiError) {
installation, apiErr := c.mgr.getInstalledIntegration(ctx, integrationId)
installation, apiErr := c.mgr.getInstalledIntegration(ctx, orgId, integrationId)
if apiErr != nil {
return false, apiErr
}
@@ -76,9 +75,9 @@ func (c *Controller) IsIntegrationInstalled(
}
func (c *Controller) GetIntegrationConnectionTests(
ctx context.Context, integrationId string,
ctx context.Context, orgId string, integrationId string,
) (*IntegrationConnectionTests, *model.ApiError) {
return c.mgr.GetIntegrationConnectionTests(ctx, integrationId)
return c.mgr.GetIntegrationConnectionTests(ctx, orgId, integrationId)
}
type InstallIntegrationRequest struct {
@@ -87,10 +86,10 @@ type InstallIntegrationRequest struct {
}
func (c *Controller) Install(
ctx context.Context, req *InstallIntegrationRequest,
ctx context.Context, orgId string, req *InstallIntegrationRequest,
) (*IntegrationsListItem, *model.ApiError) {
res, apiErr := c.mgr.InstallIntegration(
ctx, req.IntegrationId, req.Config,
ctx, orgId, req.IntegrationId, req.Config,
)
if apiErr != nil {
return nil, apiErr
@@ -104,7 +103,7 @@ type UninstallIntegrationRequest struct {
}
func (c *Controller) Uninstall(
ctx context.Context, req *UninstallIntegrationRequest,
ctx context.Context, orgId string, req *UninstallIntegrationRequest,
) *model.ApiError {
if len(req.IntegrationId) < 1 {
return model.BadRequest(fmt.Errorf(
@@ -113,7 +112,7 @@ func (c *Controller) Uninstall(
}
apiErr := c.mgr.UninstallIntegration(
ctx, req.IntegrationId,
ctx, orgId, req.IntegrationId,
)
if apiErr != nil {
return apiErr
@@ -123,19 +122,19 @@ func (c *Controller) Uninstall(
}
func (c *Controller) GetPipelinesForInstalledIntegrations(
ctx context.Context,
ctx context.Context, orgId string,
) ([]pipelinetypes.GettablePipeline, *model.ApiError) {
return c.mgr.GetPipelinesForInstalledIntegrations(ctx)
return c.mgr.GetPipelinesForInstalledIntegrations(ctx, orgId)
}
func (c *Controller) GetDashboardsForInstalledIntegrations(
ctx context.Context,
ctx context.Context, orgId string,
) ([]types.Dashboard, *model.ApiError) {
return c.mgr.GetDashboardsForInstalledIntegrations(ctx)
return c.mgr.GetDashboardsForInstalledIntegrations(ctx, orgId)
}
func (c *Controller) GetInstalledIntegrationDashboardById(
ctx context.Context, dashboardUuid string,
ctx context.Context, orgId string, dashboardUuid string,
) (*types.Dashboard, *model.ApiError) {
return c.mgr.GetInstalledIntegrationDashboardById(ctx, dashboardUuid)
return c.mgr.GetInstalledIntegrationDashboardById(ctx, orgId, dashboardUuid)
}

View File

@@ -5,15 +5,14 @@ import (
"fmt"
"slices"
"strings"
"time"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/utils"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/pipelinetypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/jmoiron/sqlx"
)
type IntegrationAuthor struct {
@@ -105,16 +104,9 @@ type IntegrationsListItem struct {
IsInstalled bool `json:"is_installed"`
}
type InstalledIntegration struct {
IntegrationId string `json:"integration_id" db:"integration_id"`
Config InstalledIntegrationConfig `json:"config_json" db:"config_json"`
InstalledAt time.Time `json:"installed_at" db:"installed_at"`
}
type InstalledIntegrationConfig map[string]interface{}
type Integration struct {
IntegrationDetails
Installation *InstalledIntegration `json:"installation"`
Installation *types.InstalledIntegration `json:"installation"`
}
type Manager struct {
@@ -122,8 +114,8 @@ type Manager struct {
installedIntegrationsRepo InstalledIntegrationsRepo
}
func NewManager(db *sqlx.DB) (*Manager, error) {
iiRepo, err := NewInstalledIntegrationsSqliteRepo(db)
func NewManager(store sqlstore.SQLStore) (*Manager, error) {
iiRepo, err := NewInstalledIntegrationsSqliteRepo(store)
if err != nil {
return nil, fmt.Errorf(
"could not init sqlite DB for installed integrations: %w", err,
@@ -142,6 +134,7 @@ type IntegrationsFilter struct {
func (m *Manager) ListIntegrations(
ctx context.Context,
orgId string,
filter *IntegrationsFilter,
// Expected to have pagination over time.
) ([]IntegrationsListItem, *model.ApiError) {
@@ -152,22 +145,22 @@ func (m *Manager) ListIntegrations(
)
}
installed, apiErr := m.installedIntegrationsRepo.list(ctx)
installed, apiErr := m.installedIntegrationsRepo.list(ctx, orgId)
if apiErr != nil {
return nil, model.WrapApiError(
apiErr, "could not fetch installed integrations",
)
}
installedIds := []string{}
installedTypes := []string{}
for _, ii := range installed {
installedIds = append(installedIds, ii.IntegrationId)
installedTypes = append(installedTypes, ii.Type)
}
result := []IntegrationsListItem{}
for _, ai := range available {
result = append(result, IntegrationsListItem{
IntegrationSummary: ai.IntegrationSummary,
IsInstalled: slices.Contains(installedIds, ai.Id),
IsInstalled: slices.Contains(installedTypes, ai.Id),
})
}
@@ -188,6 +181,7 @@ func (m *Manager) ListIntegrations(
func (m *Manager) GetIntegration(
ctx context.Context,
orgId string,
integrationId string,
) (*Integration, *model.ApiError) {
integrationDetails, apiErr := m.getIntegrationDetails(
@@ -198,7 +192,7 @@ func (m *Manager) GetIntegration(
}
installation, apiErr := m.getInstalledIntegration(
ctx, integrationId,
ctx, orgId, integrationId,
)
if apiErr != nil {
return nil, apiErr
@@ -212,6 +206,7 @@ func (m *Manager) GetIntegration(
func (m *Manager) GetIntegrationConnectionTests(
ctx context.Context,
orgId string,
integrationId string,
) (*IntegrationConnectionTests, *model.ApiError) {
integrationDetails, apiErr := m.getIntegrationDetails(
@@ -225,8 +220,9 @@ func (m *Manager) GetIntegrationConnectionTests(
func (m *Manager) InstallIntegration(
ctx context.Context,
orgId string,
integrationId string,
config InstalledIntegrationConfig,
config types.InstalledIntegrationConfig,
) (*IntegrationsListItem, *model.ApiError) {
integrationDetails, apiErr := m.getIntegrationDetails(ctx, integrationId)
if apiErr != nil {
@@ -234,7 +230,7 @@ func (m *Manager) InstallIntegration(
}
_, apiErr = m.installedIntegrationsRepo.upsert(
ctx, integrationId, config,
ctx, orgId, integrationId, config,
)
if apiErr != nil {
return nil, model.WrapApiError(
@@ -250,15 +246,17 @@ func (m *Manager) InstallIntegration(
func (m *Manager) UninstallIntegration(
ctx context.Context,
orgId string,
integrationId string,
) *model.ApiError {
return m.installedIntegrationsRepo.delete(ctx, integrationId)
return m.installedIntegrationsRepo.delete(ctx, orgId, integrationId)
}
func (m *Manager) GetPipelinesForInstalledIntegrations(
ctx context.Context,
orgId string,
) ([]pipelinetypes.GettablePipeline, *model.ApiError) {
installedIntegrations, apiErr := m.getInstalledIntegrations(ctx)
installedIntegrations, apiErr := m.getInstalledIntegrations(ctx, orgId)
if apiErr != nil {
return nil, apiErr
}
@@ -308,6 +306,7 @@ func (m *Manager) parseDashboardUuid(dashboardUuid string) (
func (m *Manager) GetInstalledIntegrationDashboardById(
ctx context.Context,
orgId string,
dashboardUuid string,
) (*types.Dashboard, *model.ApiError) {
integrationId, dashboardId, apiErr := m.parseDashboardUuid(dashboardUuid)
@@ -315,7 +314,7 @@ func (m *Manager) GetInstalledIntegrationDashboardById(
return nil, apiErr
}
integration, apiErr := m.GetIntegration(ctx, integrationId)
integration, apiErr := m.GetIntegration(ctx, orgId, integrationId)
if apiErr != nil {
return nil, apiErr
}
@@ -355,8 +354,9 @@ func (m *Manager) GetInstalledIntegrationDashboardById(
func (m *Manager) GetDashboardsForInstalledIntegrations(
ctx context.Context,
orgId string,
) ([]types.Dashboard, *model.ApiError) {
installedIntegrations, apiErr := m.getInstalledIntegrations(ctx)
installedIntegrations, apiErr := m.getInstalledIntegrations(ctx, orgId)
if apiErr != nil {
return nil, apiErr
}
@@ -421,10 +421,11 @@ func (m *Manager) getIntegrationDetails(
func (m *Manager) getInstalledIntegration(
ctx context.Context,
orgId string,
integrationId string,
) (*InstalledIntegration, *model.ApiError) {
) (*types.InstalledIntegration, *model.ApiError) {
iis, apiErr := m.installedIntegrationsRepo.get(
ctx, []string{integrationId},
ctx, orgId, []string{integrationId},
)
if apiErr != nil {
return nil, model.WrapApiError(apiErr, fmt.Sprintf(
@@ -441,32 +442,33 @@ func (m *Manager) getInstalledIntegration(
func (m *Manager) getInstalledIntegrations(
ctx context.Context,
orgId string,
) (
map[string]Integration, *model.ApiError,
) {
installations, apiErr := m.installedIntegrationsRepo.list(ctx)
installations, apiErr := m.installedIntegrationsRepo.list(ctx, orgId)
if apiErr != nil {
return nil, apiErr
}
installedIds := utils.MapSlice(installations, func(i InstalledIntegration) string {
return i.IntegrationId
installedTypes := utils.MapSlice(installations, func(i types.InstalledIntegration) string {
return i.Type
})
integrationDetails, apiErr := m.availableIntegrationsRepo.get(ctx, installedIds)
integrationDetails, apiErr := m.availableIntegrationsRepo.get(ctx, installedTypes)
if apiErr != nil {
return nil, apiErr
}
result := map[string]Integration{}
for _, ii := range installations {
iDetails, exists := integrationDetails[ii.IntegrationId]
iDetails, exists := integrationDetails[ii.Type]
if !exists {
return nil, model.InternalError(fmt.Errorf(
"couldn't find integration details for %s", ii.IntegrationId,
"couldn't find integration details for %s", ii.Type,
))
}
result[ii.IntegrationId] = Integration{
result[ii.Type] = Integration{
Installation: &ii,
IntegrationDetails: iDetails,
}
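`utils.MapSlice` as used above is presumably the usual generic map-over-slice helper; its assumed shape, for readers unfamiliar with it:

```go
// Assumed signature, inferred from the call site above.
func MapSlice[S, T any](items []S, fn func(S) T) []T {
	out := make([]T, 0, len(items))
	for _, item := range items {
		out = append(out, fn(item))
	}
	return out
}
```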

View File

@@ -14,18 +14,23 @@ func TestIntegrationLifecycle(t *testing.T) {
mgr := NewTestIntegrationsManager(t)
ctx := context.Background()
user, apiErr := createTestUser()
if apiErr != nil {
t.Fatalf("could not create test user: %v", apiErr)
}
ii := true
installedIntegrationsFilter := &IntegrationsFilter{
IsInstalled: &ii,
}
installedIntegrations, apiErr := mgr.ListIntegrations(
ctx, installedIntegrationsFilter,
ctx, user.OrgID, installedIntegrationsFilter,
)
require.Nil(apiErr)
require.Equal([]IntegrationsListItem{}, installedIntegrations)
availableIntegrations, apiErr := mgr.ListIntegrations(ctx, nil)
availableIntegrations, apiErr := mgr.ListIntegrations(ctx, user.OrgID, nil)
require.Nil(apiErr)
require.Equal(2, len(availableIntegrations))
require.False(availableIntegrations[0].IsInstalled)
@@ -33,44 +38,44 @@ func TestIntegrationLifecycle(t *testing.T) {
testIntegrationConfig := map[string]interface{}{}
installed, apiErr := mgr.InstallIntegration(
ctx, availableIntegrations[1].Id, testIntegrationConfig,
ctx, user.OrgID, availableIntegrations[1].Id, testIntegrationConfig,
)
require.Nil(apiErr)
require.Equal(installed.Id, availableIntegrations[1].Id)
integration, apiErr := mgr.GetIntegration(ctx, availableIntegrations[1].Id)
integration, apiErr := mgr.GetIntegration(ctx, user.OrgID, availableIntegrations[1].Id)
require.Nil(apiErr)
require.Equal(integration.Id, availableIntegrations[1].Id)
require.NotNil(integration.Installation)
installedIntegrations, apiErr = mgr.ListIntegrations(
ctx, installedIntegrationsFilter,
ctx, user.OrgID, installedIntegrationsFilter,
)
require.Nil(apiErr)
require.Equal(1, len(installedIntegrations))
require.Equal(availableIntegrations[1].Id, installedIntegrations[0].Id)
availableIntegrations, apiErr = mgr.ListIntegrations(ctx, nil)
availableIntegrations, apiErr = mgr.ListIntegrations(ctx, user.OrgID, nil)
require.Nil(apiErr)
require.Equal(2, len(availableIntegrations))
require.False(availableIntegrations[0].IsInstalled)
require.True(availableIntegrations[1].IsInstalled)
apiErr = mgr.UninstallIntegration(ctx, installed.Id)
apiErr = mgr.UninstallIntegration(ctx, user.OrgID, installed.Id)
require.Nil(apiErr)
integration, apiErr = mgr.GetIntegration(ctx, availableIntegrations[1].Id)
integration, apiErr = mgr.GetIntegration(ctx, user.OrgID, availableIntegrations[1].Id)
require.Nil(apiErr)
require.Equal(integration.Id, availableIntegrations[1].Id)
require.Nil(integration.Installation)
installedIntegrations, apiErr = mgr.ListIntegrations(
ctx, installedIntegrationsFilter,
ctx, user.OrgID, installedIntegrationsFilter,
)
require.Nil(apiErr)
require.Equal(0, len(installedIntegrations))
availableIntegrations, apiErr = mgr.ListIntegrations(ctx, nil)
availableIntegrations, apiErr = mgr.ListIntegrations(ctx, user.OrgID, nil)
require.Nil(apiErr)
require.Equal(2, len(availableIntegrations))
require.False(availableIntegrations[0].IsInstalled)


@@ -2,51 +2,33 @@ package integrations
import (
"context"
"database/sql/driver"
"encoding/json"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/pkg/errors"
"github.com/SigNoz/signoz/pkg/types"
)
// For serializing from db
func (c *InstalledIntegrationConfig) Scan(src interface{}) error {
if data, ok := src.([]byte); ok {
return json.Unmarshal(data, &c)
}
return nil
}
// For serializing to db
func (c *InstalledIntegrationConfig) Value() (driver.Value, error) {
filterSetJson, err := json.Marshal(c)
if err != nil {
return nil, errors.Wrap(err, "could not serialize integration config to JSON")
}
return filterSetJson, nil
}
type InstalledIntegrationsRepo interface {
list(context.Context) ([]InstalledIntegration, *model.ApiError)
list(ctx context.Context, orgId string) ([]types.InstalledIntegration, *model.ApiError)
get(
ctx context.Context, integrationIds []string,
) (map[string]InstalledIntegration, *model.ApiError)
ctx context.Context, orgId string, integrationTypes []string,
) (map[string]types.InstalledIntegration, *model.ApiError)
upsert(
ctx context.Context,
integrationId string,
config InstalledIntegrationConfig,
) (*InstalledIntegration, *model.ApiError)
orgId string,
integrationType string,
config types.InstalledIntegrationConfig,
) (*types.InstalledIntegration, *model.ApiError)
delete(ctx context.Context, integrationId string) *model.ApiError
delete(ctx context.Context, orgId string, integrationType string) *model.ApiError
}
type AvailableIntegrationsRepo interface {
list(context.Context) ([]IntegrationDetails, *model.ApiError)
get(
ctx context.Context, integrationIds []string,
ctx context.Context, integrationTypes []string,
) (map[string]IntegrationDetails, *model.ApiError)
// AvailableIntegrationsRepo implementations are expected to cache


@@ -3,39 +3,37 @@ package integrations
import (
"context"
"fmt"
"strings"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/jmoiron/sqlx"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
type InstalledIntegrationsSqliteRepo struct {
db *sqlx.DB
store sqlstore.SQLStore
}
func NewInstalledIntegrationsSqliteRepo(db *sqlx.DB) (
func NewInstalledIntegrationsSqliteRepo(store sqlstore.SQLStore) (
*InstalledIntegrationsSqliteRepo, error,
) {
return &InstalledIntegrationsSqliteRepo{
db: db,
store: store,
}, nil
}
func (r *InstalledIntegrationsSqliteRepo) list(
ctx context.Context,
) ([]InstalledIntegration, *model.ApiError) {
integrations := []InstalledIntegration{}
orgId string,
) ([]types.InstalledIntegration, *model.ApiError) {
integrations := []types.InstalledIntegration{}
err := r.db.SelectContext(
ctx, &integrations, `
select
integration_id,
config_json,
installed_at
from integrations_installed
order by installed_at
`,
)
err := r.store.BunDB().NewSelect().
Model(&integrations).
Where("org_id = ?", orgId).
Order("installed_at").
Scan(ctx)
if err != nil {
return nil, model.InternalError(fmt.Errorf(
"could not query installed integrations: %w", err,
@@ -45,38 +43,28 @@ func (r *InstalledIntegrationsSqliteRepo) list(
}
func (r *InstalledIntegrationsSqliteRepo) get(
ctx context.Context, integrationIds []string,
) (map[string]InstalledIntegration, *model.ApiError) {
integrations := []InstalledIntegration{}
ctx context.Context, orgId string, integrationTypes []string,
) (map[string]types.InstalledIntegration, *model.ApiError) {
integrations := []types.InstalledIntegration{}
idPlaceholders := []string{}
idValues := []interface{}{}
for _, id := range integrationIds {
idPlaceholders = append(idPlaceholders, "?")
idValues = append(idValues, id)
typeValues := []interface{}{}
for _, integrationType := range integrationTypes {
typeValues = append(typeValues, integrationType)
}
err := r.db.SelectContext(
ctx, &integrations, fmt.Sprintf(`
select
integration_id,
config_json,
installed_at
from integrations_installed
where integration_id in (%s)`,
strings.Join(idPlaceholders, ", "),
),
idValues...,
)
err := r.store.BunDB().NewSelect().Model(&integrations).
Where("org_id = ?", orgId).
Where("type IN (?)", bun.In(typeValues)).
Scan(ctx)
if err != nil {
return nil, model.InternalError(fmt.Errorf(
"could not query installed integrations: %w", err,
))
}
result := map[string]InstalledIntegration{}
result := map[string]types.InstalledIntegration{}
for _, ii := range integrations {
result[ii.IntegrationId] = ii
result[ii.Type] = ii
}
return result, nil
@@ -84,55 +72,57 @@ func (r *InstalledIntegrationsSqliteRepo) get(
func (r *InstalledIntegrationsSqliteRepo) upsert(
ctx context.Context,
integrationId string,
config InstalledIntegrationConfig,
) (*InstalledIntegration, *model.ApiError) {
serializedConfig, err := config.Value()
if err != nil {
return nil, model.BadRequest(fmt.Errorf(
"could not serialize integration config: %w", err,
))
orgId string,
integrationType string,
config types.InstalledIntegrationConfig,
) (*types.InstalledIntegration, *model.ApiError) {
integration := types.InstalledIntegration{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
OrgID: orgId,
Type: integrationType,
Config: config,
}
_, dbErr := r.db.ExecContext(
ctx, `
INSERT INTO integrations_installed (
integration_id,
config_json
) values ($1, $2)
on conflict(integration_id) do update
set config_json=excluded.config_json
`, integrationId, serializedConfig,
)
_, dbErr := r.store.BunDB().NewInsert().
Model(&integration).
On("conflict (type, org_id) DO UPDATE").
Set("config = EXCLUDED.config").
Exec(ctx)
if dbErr != nil {
return nil, model.InternalError(fmt.Errorf(
"could not insert record for integration installation: %w", dbErr,
))
}
res, apiErr := r.get(ctx, []string{integrationId})
res, apiErr := r.get(ctx, orgId, []string{integrationType})
if apiErr != nil || len(res) < 1 {
return nil, model.WrapApiError(
apiErr, "could not fetch installed integration",
)
}
installed := res[integrationId]
installed := res[integrationType]
return &installed, nil
}
func (r *InstalledIntegrationsSqliteRepo) delete(
ctx context.Context, integrationId string,
ctx context.Context, orgId string, integrationType string,
) *model.ApiError {
_, dbErr := r.db.ExecContext(ctx, `
DELETE FROM integrations_installed where integration_id = ?
`, integrationId)
_, dbErr := r.store.BunDB().NewDelete().
Model(&types.InstalledIntegration{}).
Where("type = ?", integrationType).
Where("org_id = ?", orgId).
Exec(ctx)
if dbErr != nil {
return model.InternalError(fmt.Errorf(
"could not delete installed integration record for %s: %w",
integrationId, dbErr,
integrationType, dbErr,
))
}
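
The rewritten repo swaps hand-built SQL for bun's query builders; the upsert in particular leans on `ON CONFLICT ... DO UPDATE` against the `(type, org_id)` unique constraint. A minimal, self-contained sketch of the same pattern against in-memory SQLite, with an illustrative `demo_integration` table (not the repo's schema):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"

	"github.com/uptrace/bun"
	"github.com/uptrace/bun/dialect/sqlitedialect"
	"github.com/uptrace/bun/driver/sqliteshim"
)

type DemoIntegration struct {
	bun.BaseModel `bun:"table:demo_integration"`

	ID     string `bun:"id,pk"`
	OrgID  string `bun:"org_id"`
	Type   string `bun:"type"`
	Config string `bun:"config"`
}

func main() {
	ctx := context.Background()
	sqldb, err := sql.Open(sqliteshim.ShimName, "file::memory:?cache=shared")
	if err != nil {
		panic(err)
	}
	db := bun.NewDB(sqldb, sqlitedialect.New())

	if _, err := db.NewCreateTable().Model((*DemoIntegration)(nil)).Exec(ctx); err != nil {
		panic(err)
	}
	// The upsert target: one row per (type, org_id).
	if _, err := db.ExecContext(ctx,
		`CREATE UNIQUE INDEX org_id_type ON demo_integration (type, org_id)`); err != nil {
		panic(err)
	}

	row := DemoIntegration{ID: "id-1", OrgID: "org-1", Type: "nginx", Config: `{}`}
	// Insert, or update config in place if (type, org_id) already exists --
	// the same shape as the repo's upsert above.
	_, err = db.NewInsert().
		Model(&row).
		On("CONFLICT (type, org_id) DO UPDATE").
		Set("config = EXCLUDED.config").
		Exec(ctx)
	fmt.Println(err)
}
```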


@@ -5,18 +5,22 @@ import (
"slices"
"testing"
"github.com/SigNoz/signoz/pkg/query-service/auth"
"github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/query-service/dao"
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/utils"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/pipelinetypes"
"github.com/google/uuid"
)
func NewTestIntegrationsManager(t *testing.T) *Manager {
testDB := utils.NewQueryServiceDBForTests(t)
installedIntegrationsRepo, err := NewInstalledIntegrationsSqliteRepo(testDB.SQLxDB())
installedIntegrationsRepo, err := NewInstalledIntegrationsSqliteRepo(testDB)
if err != nil {
t.Fatalf("could not init sqlite DB for installed integrations: %v", err)
}
@@ -27,6 +31,38 @@ func NewTestIntegrationsManager(t *testing.T) *Manager {
}
}
func createTestUser() (*types.User, *model.ApiError) {
// Create a test user for auth
ctx := context.Background()
org, apiErr := dao.DB().CreateOrg(ctx, &types.Organization{
Name: "test",
})
if apiErr != nil {
return nil, apiErr
}
group, apiErr := dao.DB().GetGroupByName(ctx, constants.AdminGroup)
if apiErr != nil {
return nil, apiErr
}
auth.InitAuthCache(ctx)
userId := uuid.NewString()
return dao.DB().CreateUser(
ctx,
&types.User{
ID: userId,
Name: "test",
Email: userId[:8] + "test@test.com",
Password: "test",
OrgID: org.ID,
GroupID: group.ID,
},
true,
)
}
type TestAvailableIntegrationsRepo struct{}
func (t *TestAvailableIntegrationsRepo) list(


@@ -25,12 +25,12 @@ import (
type LogParsingPipelineController struct {
Repo
GetIntegrationPipelines func(context.Context) ([]pipelinetypes.GettablePipeline, *model.ApiError)
GetIntegrationPipelines func(context.Context, string) ([]pipelinetypes.GettablePipeline, *model.ApiError)
}
func NewLogParsingPipelinesController(
sqlStore sqlstore.SQLStore,
getIntegrationPipelines func(context.Context) ([]pipelinetypes.GettablePipeline, *model.ApiError),
getIntegrationPipelines func(context.Context, string) ([]pipelinetypes.GettablePipeline, *model.ApiError),
) (*LogParsingPipelineController, error) {
repo := NewRepo(sqlStore)
return &LogParsingPipelineController{
@@ -164,7 +164,7 @@ func (ic *LogParsingPipelineController) getEffectivePipelinesByVersion(
result = savedPipelines
}
integrationPipelines, apiErr := ic.GetIntegrationPipelines(ctx)
integrationPipelines, apiErr := ic.GetIntegrationPipelines(ctx, defaultOrgID)
if apiErr != nil {
return nil, model.WrapApiError(
apiErr, "could not get pipelines for installed integrations",


@@ -131,9 +131,11 @@ func getOperators(ops []pipelinetypes.PipelineOperator) ([]pipelinetypes.Pipelin
)
}
operator.If = fmt.Sprintf(
`%s && %s matches "^\\s*{.*}\\s*$"`, parseFromNotNilCheck, operator.ParseFrom,
`%s && (
(typeOf(%s) == "string" && %s matches "^\\s*{.*}\\s*$" ) ||
typeOf(%s) == "map[string]any"
)`, parseFromNotNilCheck, operator.ParseFrom, operator.ParseFrom, operator.ParseFrom,
)
} else if operator.Type == "add" {
if strings.HasPrefix(operator.Value, "EXPR(") && strings.HasSuffix(operator.Value, ")") {
expression := strings.TrimSuffix(strings.TrimPrefix(operator.Value, "EXPR("), ")")
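
The widened `If` expression lets the JSON parser fire both when the parse-from field is a string that looks like JSON and when it is already a parsed map. A tiny sketch that prints the generated condition for a hypothetical `body` field; `parseFromNotNilCheck` is stubbed here, while the real one is generated by the pipeline code:

```go
package main

import "fmt"

func main() {
	parseFrom := "body"
	parseFromNotNilCheck := `body != nil` // stand-in for the generated nil check
	cond := fmt.Sprintf(
		`%s && (
  (typeOf(%s) == "string" && %s matches "^\\s*{.*}\\s*$" ) ||
  typeOf(%s) == "map[string]any"
)`, parseFromNotNilCheck, parseFrom, parseFrom, parseFrom)
	fmt.Println(cond)
}
```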


@@ -646,7 +646,7 @@ func TestMembershipOpInProcessorFieldExpressions(t *testing.T) {
require := require.New(t)
testLogs := []model.SignozLog{
makeTestSignozLog("test log", map[string]interface{}{
makeTestSignozLog("test log", map[string]any{
"http.method": "GET",
"order.products": `{"ids": ["pid0", "pid1"]}`,
}),


@@ -719,6 +719,21 @@ func parseFilterAttributeKeyRequest(r *http.Request) (*v3.FilterAttributeKeyRequ
aggregateOperator := v3.AggregateOperator(r.URL.Query().Get("aggregateOperator"))
aggregateAttribute := r.URL.Query().Get("aggregateAttribute")
limit, err := strconv.Atoi(r.URL.Query().Get("limit"))
tagType := v3.TagType(r.URL.Query().Get("tagType"))
// empty string is a valid tagType
// i.e retrieve all attributes
if tagType != "" {
// What is happening here?
// If tagType is "undefined" (uh oh, javascript) or any other invalid value,
// we set it to the empty string instead of failing the request. Ideally we
// should fail the request, but we don't, to preserve backward compatibility.
if err := tagType.Validate(); err != nil {
// if the tagType is invalid, set it to empty string
tagType = ""
}
}
if err != nil {
limit = 50
}
@@ -739,6 +754,7 @@ func parseFilterAttributeKeyRequest(r *http.Request) (*v3.FilterAttributeKeyRequ
AggregateAttribute: aggregateAttribute,
Limit: limit,
SearchText: r.URL.Query().Get("searchText"),
TagType: tagType,
}
return &req, nil
}
@@ -861,7 +877,7 @@ func chTransformQuery(query string, variables map[string]interface{}) {
transformer := chVariables.NewQueryTransformer(query, varsForTransform)
transformedQuery, err := transformer.Transform()
if err != nil {
zap.L().Warn("failed to transform clickhouse query", zap.Error(err))
zap.L().Warn("failed to transform clickhouse query", zap.String("query", query), zap.Error(err))
}
zap.L().Info("transformed clickhouse query", zap.String("transformedQuery", transformedQuery), zap.String("originalQuery", query))
}
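
The `tagType` handling above deliberately degrades invalid values to the empty string (meaning "all attributes") rather than returning a 400. A self-contained sketch of that lenient-parse pattern, with `TagType` as a stand-in for the repo's `v3.TagType`:

```go
package main

import (
	"errors"
	"fmt"
)

// TagType is an illustrative stand-in for v3.TagType.
type TagType string

const (
	TagTypeTag      TagType = "tag"
	TagTypeResource TagType = "resource"
	TagTypeScope    TagType = "scope"
)

func (t TagType) Validate() error {
	switch t {
	case TagTypeTag, TagTypeResource, TagTypeScope:
		return nil
	}
	return errors.New("invalid tag type")
}

func parseTagType(raw string) TagType {
	tagType := TagType(raw)
	// Empty string is valid: it means "retrieve all attributes".
	if tagType != "" {
		// Invalid values (e.g. "undefined" from a JS client) fall back
		// to "" instead of failing the request, for backward compatibility.
		if err := tagType.Validate(); err != nil {
			tagType = ""
		}
	}
	return tagType
}

func main() {
	fmt.Println(parseTagType("resource")) // resource
	fmt.Println(parseTagType("invalid"))  // (empty)
}
```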


@@ -112,6 +112,7 @@ func TestParseFilterAttributeKeyRequest(t *testing.T) {
expectedSearchText string
expectErr bool
errMsg string
expectedTagType v3.TagType
}{
{
desc: "valid operator and data source",
@@ -168,6 +169,38 @@ func TestParseFilterAttributeKeyRequest(t *testing.T) {
expectedDataSource: v3.DataSourceTraces,
expectedLimit: 50,
},
{
desc: "invalid tag type",
queryString: "aggregateOperator=avg&dataSource=traces&tagType=invalid",
expectedOperator: v3.AggregateOperatorAvg,
expectedDataSource: v3.DataSourceTraces,
expectedTagType: "",
expectedLimit: 50,
},
{
desc: "valid tag type",
queryString: "aggregateOperator=avg&dataSource=traces&tagType=resource",
expectedOperator: v3.AggregateOperatorAvg,
expectedDataSource: v3.DataSourceTraces,
expectedTagType: v3.TagTypeResource,
expectedLimit: 50,
},
{
desc: "valid tag type",
queryString: "aggregateOperator=avg&dataSource=traces&tagType=scope",
expectedOperator: v3.AggregateOperatorAvg,
expectedDataSource: v3.DataSourceTraces,
expectedTagType: v3.TagTypeInstrumentationScope,
expectedLimit: 50,
},
{
desc: "valid tag type",
queryString: "aggregateOperator=avg&dataSource=traces&tagType=tag",
expectedOperator: v3.AggregateOperatorAvg,
expectedDataSource: v3.DataSourceTraces,
expectedTagType: v3.TagTypeTag,
expectedLimit: 50,
},
}
for _, reqCase := range reqCases {


@@ -439,7 +439,7 @@ func RegisterFirstUser(ctx context.Context, req *RegisterRequest) (*types.User,
}
user := &types.User{
ID: uuid.NewString(),
ID: uuid.New().String(),
Name: req.Name,
Email: req.Email,
Password: hash,
@@ -519,7 +519,7 @@ func RegisterInvitedUser(ctx context.Context, req *RegisterRequest, nopassword b
}
user := &types.User{
ID: uuid.NewString(),
ID: uuid.New().String(),
Name: req.Name,
Email: req.Email,
Password: hash,


@@ -3,6 +3,7 @@ package auth
import (
"context"
errorsV2 "github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/query-service/dao"
"github.com/SigNoz/signoz/pkg/types"
@@ -51,7 +52,7 @@ func InitAuthCache(ctx context.Context) error {
func GetUserFromReqContext(ctx context.Context) (*types.GettableUser, error) {
claims, ok := authtypes.ClaimsFromContext(ctx)
if !ok {
return nil, errors.New("no claims found in context")
return nil, errorsV2.New(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "no claims found in context")
}
user := &types.GettableUser{


@@ -248,6 +248,7 @@ func (q TagType) Validate() error {
type FilterAttributeKeyRequest struct {
DataSource DataSource `json:"dataSource"`
AggregateOperator AggregateOperator `json:"aggregateOperator"`
TagType TagType `json:"tagType"`
AggregateAttribute string `json:"aggregateAttribute"`
SearchText string `json:"searchText"`
Limit int `json:"limit"`


@@ -35,7 +35,7 @@ func TestAWSIntegrationAccountLifecycle(t *testing.T) {
)
// Should be able to generate a connection url from UI - initializing an integration account
testAccountConfig := cloudintegrations.AccountConfig{
testAccountConfig := types.AccountConfig{
EnabledRegions: []string{"us-east-1", "us-east-2"},
}
connectionUrlResp := testbed.GenerateConnectionUrlFromQS(
@@ -65,8 +65,8 @@ func TestAWSIntegrationAccountLifecycle(t *testing.T) {
testAWSAccountId := "4563215233"
agentCheckInResp := testbed.CheckInAsAgentWithQS(
"aws", cloudintegrations.AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: testAWSAccountId,
ID: testAccountId,
AccountID: testAWSAccountId,
},
)
require.Equal(testAccountId, agentCheckInResp.AccountId)
@@ -91,20 +91,20 @@ func TestAWSIntegrationAccountLifecycle(t *testing.T) {
require.Equal(testAWSAccountId, accountsListResp2.Accounts[0].CloudAccountId)
// Should be able to update account config from UI
testAccountConfig2 := cloudintegrations.AccountConfig{
testAccountConfig2 := types.AccountConfig{
EnabledRegions: []string{"us-east-2", "us-west-1"},
}
latestAccount := testbed.UpdateAccountConfigWithQS(
"aws", testAccountId, testAccountConfig2,
)
require.Equal(testAccountId, latestAccount.Id)
require.Equal(testAccountId, latestAccount.ID.StringValue())
require.Equal(testAccountConfig2, *latestAccount.Config)
// The agent should now receive latest account config.
agentCheckInResp1 := testbed.CheckInAsAgentWithQS(
"aws", cloudintegrations.AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: testAWSAccountId,
ID: testAccountId,
AccountID: testAWSAccountId,
},
)
require.Equal(testAccountId, agentCheckInResp1.AccountId)
@@ -114,14 +114,14 @@ func TestAWSIntegrationAccountLifecycle(t *testing.T) {
// Should be able to disconnect/remove account from UI.
tsBeforeDisconnect := time.Now()
latestAccount = testbed.DisconnectAccountWithQS("aws", testAccountId)
require.Equal(testAccountId, latestAccount.Id)
require.Equal(testAccountId, latestAccount.ID.StringValue())
require.LessOrEqual(tsBeforeDisconnect, *latestAccount.RemovedAt)
// The agent should receive the disconnected status in account config post disconnection
agentCheckInResp2 := testbed.CheckInAsAgentWithQS(
"aws", cloudintegrations.AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: testAWSAccountId,
ID: testAccountId,
AccountID: testAWSAccountId,
},
)
require.Equal(testAccountId, agentCheckInResp2.AccountId)
@@ -157,13 +157,13 @@ func TestAWSIntegrationServices(t *testing.T) {
testAWSAccountId := "389389489489"
testbed.CheckInAsAgentWithQS(
"aws", cloudintegrations.AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: testAWSAccountId,
ID: testAccountId,
AccountID: testAWSAccountId,
},
)
testSvcConfig := cloudintegrations.CloudServiceConfig{
Metrics: &cloudintegrations.CloudServiceMetricsConfig{
testSvcConfig := types.CloudServiceConfig{
Metrics: &types.CloudServiceMetricsConfig{
Enabled: true,
},
}
@@ -199,7 +199,7 @@ func TestConfigReturnedWhenAgentChecksIn(t *testing.T) {
testbed := NewCloudIntegrationsTestBed(t, nil)
// configure a connected account
testAccountConfig := cloudintegrations.AccountConfig{
testAccountConfig := types.AccountConfig{
EnabledRegions: []string{"us-east-1", "us-east-2"},
}
connectionUrlResp := testbed.GenerateConnectionUrlFromQS(
@@ -218,8 +218,8 @@ func TestConfigReturnedWhenAgentChecksIn(t *testing.T) {
testAWSAccountId := "389389489489"
checkinResp := testbed.CheckInAsAgentWithQS(
"aws", cloudintegrations.AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: testAWSAccountId,
ID: testAccountId,
AccountID: testAWSAccountId,
},
)
@@ -237,14 +237,14 @@ func TestConfigReturnedWhenAgentChecksIn(t *testing.T) {
// helper
setServiceConfig := func(svcId string, metricsEnabled bool, logsEnabled bool) {
testSvcConfig := cloudintegrations.CloudServiceConfig{}
testSvcConfig := types.CloudServiceConfig{}
if metricsEnabled {
testSvcConfig.Metrics = &cloudintegrations.CloudServiceMetricsConfig{
testSvcConfig.Metrics = &types.CloudServiceMetricsConfig{
Enabled: metricsEnabled,
}
}
if logsEnabled {
testSvcConfig.Logs = &cloudintegrations.CloudServiceLogsConfig{
testSvcConfig.Logs = &types.CloudServiceLogsConfig{
Enabled: logsEnabled,
}
}
@@ -262,8 +262,8 @@ func TestConfigReturnedWhenAgentChecksIn(t *testing.T) {
checkinResp = testbed.CheckInAsAgentWithQS(
"aws", cloudintegrations.AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: testAWSAccountId,
ID: testAccountId,
AccountID: testAWSAccountId,
},
)
@@ -292,13 +292,13 @@ func TestConfigReturnedWhenAgentChecksIn(t *testing.T) {
require.True(strings.HasPrefix(logGroupPrefixes[0], "/aws/rds"))
// change regions and update service configs and validate config changes for agent
testAccountConfig2 := cloudintegrations.AccountConfig{
testAccountConfig2 := types.AccountConfig{
EnabledRegions: []string{"us-east-2", "us-west-1"},
}
latestAccount := testbed.UpdateAccountConfigWithQS(
"aws", testAccountId, testAccountConfig2,
)
require.Equal(testAccountId, latestAccount.Id)
require.Equal(testAccountId, latestAccount.ID.StringValue())
require.Equal(testAccountConfig2, *latestAccount.Config)
// disable metrics for one and logs for the other.
@@ -308,8 +308,8 @@ func TestConfigReturnedWhenAgentChecksIn(t *testing.T) {
checkinResp = testbed.CheckInAsAgentWithQS(
"aws", cloudintegrations.AgentCheckInRequest{
AccountId: testAccountId,
CloudAccountId: testAWSAccountId,
ID: testAccountId,
AccountID: testAWSAccountId,
},
)
require.Equal(testAccountId, checkinResp.AccountId)
@@ -453,8 +453,8 @@ func (tb *CloudIntegrationsTestBed) CheckInAsAgentWithQS(
}
func (tb *CloudIntegrationsTestBed) UpdateAccountConfigWithQS(
cloudProvider string, accountId string, newConfig cloudintegrations.AccountConfig,
) *cloudintegrations.AccountRecord {
cloudProvider string, accountId string, newConfig types.AccountConfig,
) *types.CloudIntegration {
respDataJson := tb.RequestQS(
fmt.Sprintf(
"/api/v1/cloud-integrations/%s/accounts/%s/config",
@@ -464,7 +464,7 @@ func (tb *CloudIntegrationsTestBed) UpdateAccountConfigWithQS(
},
)
var resp cloudintegrations.AccountRecord
var resp types.CloudIntegration
err := json.Unmarshal(respDataJson, &resp)
if err != nil {
tb.t.Fatalf("could not unmarshal apiResponse.Data json into Account")
@@ -475,7 +475,7 @@ func (tb *CloudIntegrationsTestBed) UpdateAccountConfigWithQS(
func (tb *CloudIntegrationsTestBed) DisconnectAccountWithQS(
cloudProvider string, accountId string,
) *cloudintegrations.AccountRecord {
) *types.CloudIntegration {
respDataJson := tb.RequestQS(
fmt.Sprintf(
"/api/v1/cloud-integrations/%s/accounts/%s/disconnect",
@@ -483,7 +483,7 @@ func (tb *CloudIntegrationsTestBed) DisconnectAccountWithQS(
), map[string]any{},
)
var resp cloudintegrations.AccountRecord
var resp types.CloudIntegration
err := json.Unmarshal(respDataJson, &resp)
if err != nil {
tb.t.Fatalf("could not unmarshal apiResponse.Data json into Account")


@@ -166,6 +166,7 @@ func createTestUser() (*types.User, *model.ApiError) {
auth.InitAuthCache(ctx)
userId := uuid.NewString()
return dao.DB().CreateUser(
ctx,
&types.User{


@@ -48,9 +48,15 @@ func NewTestSqliteDB(t *testing.T) (sqlStore sqlstore.SQLStore, testDBFilePath s
sqlmigration.NewModifyDatetimeFactory(),
sqlmigration.NewModifyOrgDomainFactory(),
sqlmigration.NewUpdateOrganizationFactory(sqlStore),
sqlmigration.NewAddAlertmanagerFactory(sqlStore),
sqlmigration.NewUpdateDashboardAndSavedViewsFactory(sqlStore),
sqlmigration.NewUpdatePatAndOrgDomainsFactory(sqlStore),
sqlmigration.NewUpdatePipelines(sqlStore),
sqlmigration.NewDropLicensesSitesFactory(sqlStore),
sqlmigration.NewUpdateInvitesFactory(sqlStore),
sqlmigration.NewUpdatePatFactory(sqlStore),
sqlmigration.NewAddVirtualFieldsFactory(),
sqlmigration.NewUpdateIntegrationsFactory(sqlStore),
),
)
if err != nil {


@@ -69,6 +69,8 @@ func NewSQLMigrationProviderFactories(sqlstore sqlstore.SQLStore) factory.NamedM
sqlmigration.NewUpdatePreferencesFactory(sqlstore),
sqlmigration.NewUpdateApdexTtlFactory(sqlstore),
sqlmigration.NewUpdateResetPasswordFactory(sqlstore),
sqlmigration.NewAddVirtualFieldsFactory(),
sqlmigration.NewUpdateIntegrationsFactory(sqlstore),
)
}


@@ -0,0 +1,58 @@
package sqlmigration
import (
"context"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/uptrace/bun"
"github.com/uptrace/bun/migrate"
)
type addVirtualFields struct{}
func NewAddVirtualFieldsFactory() factory.ProviderFactory[SQLMigration, Config] {
return factory.NewProviderFactory(factory.MustNewName("add_virtual_fields"), newAddVirtualFields)
}
func newAddVirtualFields(_ context.Context, _ factory.ProviderSettings, _ Config) (SQLMigration, error) {
return &addVirtualFields{}, nil
}
func (migration *addVirtualFields) Register(migrations *migrate.Migrations) error {
if err := migrations.Register(migration.Up, migration.Down); err != nil {
return err
}
return nil
}
func (migration *addVirtualFields) Up(ctx context.Context, db *bun.DB) error {
// table:virtual_field op:create
if _, err := db.NewCreateTable().
Model(&struct {
bun.BaseModel `bun:"table:virtual_field"`
types.Identifiable
types.TimeAuditable
types.UserAuditable
Name string `bun:"name,type:text,notnull"`
Expression string `bun:"expression,type:text,notnull"`
Description string `bun:"description,type:text"`
Signal telemetrytypes.Signal `bun:"signal,type:text,notnull"`
OrgID string `bun:"org_id,type:text,notnull"`
}{}).
ForeignKey(`("org_id") REFERENCES "organizations" ("id") ON DELETE CASCADE`).
IfNotExists().
Exec(ctx); err != nil {
return err
}
return nil
}
func (migration *addVirtualFields) Down(ctx context.Context, db *bun.DB) error {
return nil
}


@@ -0,0 +1,441 @@
package sqlmigration
import (
"context"
"database/sql"
"time"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/google/uuid"
"github.com/uptrace/bun"
"github.com/uptrace/bun/migrate"
"go.uber.org/zap"
)
type updateIntegrations struct {
store sqlstore.SQLStore
}
func NewUpdateIntegrationsFactory(sqlstore sqlstore.SQLStore) factory.ProviderFactory[SQLMigration, Config] {
return factory.NewProviderFactory(factory.MustNewName("update_integrations"), func(ctx context.Context, ps factory.ProviderSettings, c Config) (SQLMigration, error) {
return newUpdateIntegrations(ctx, ps, c, sqlstore)
})
}
func newUpdateIntegrations(_ context.Context, _ factory.ProviderSettings, _ Config, store sqlstore.SQLStore) (SQLMigration, error) {
return &updateIntegrations{
store: store,
}, nil
}
func (migration *updateIntegrations) Register(migrations *migrate.Migrations) error {
if err := migrations.Register(migration.Up, migration.Down); err != nil {
return err
}
return nil
}
type existingInstalledIntegration struct {
bun.BaseModel `bun:"table:integrations_installed"`
IntegrationID string `bun:"integration_id,pk,type:text"`
ConfigJSON string `bun:"config_json,type:text"`
InstalledAt time.Time `bun:"installed_at,default:current_timestamp"`
}
type newInstalledIntegration struct {
bun.BaseModel `bun:"table:installed_integration"`
types.Identifiable
Type string `json:"type" bun:"type,type:text,unique:org_id_type"`
Config string `json:"config" bun:"config,type:text"`
InstalledAt time.Time `json:"installed_at" bun:"installed_at,default:current_timestamp"`
OrgID string `json:"org_id" bun:"org_id,type:text,unique:org_id_type"`
}
type existingCloudIntegration struct {
bun.BaseModel `bun:"table:cloud_integrations_accounts"`
CloudProvider string `bun:"cloud_provider,type:text,unique:cloud_provider_id"`
ID string `bun:"id,type:text,notnull,unique:cloud_provider_id"`
ConfigJSON string `bun:"config_json,type:text"`
CloudAccountID string `bun:"cloud_account_id,type:text"`
LastAgentReportJSON string `bun:"last_agent_report_json,type:text"`
CreatedAt time.Time `bun:"created_at,notnull,default:current_timestamp"`
RemovedAt *time.Time `bun:"removed_at,type:timestamp"`
}
type newCloudIntegration struct {
bun.BaseModel `bun:"table:cloud_integration"`
types.Identifiable
types.TimeAuditable
Provider string `json:"provider" bun:"provider,type:text"`
Config string `json:"config" bun:"config,type:text"`
AccountID string `json:"account_id" bun:"account_id,type:text"`
LastAgentReport string `json:"last_agent_report" bun:"last_agent_report,type:text"`
RemovedAt *time.Time `json:"removed_at" bun:"removed_at,type:timestamp"`
OrgID string `json:"org_id" bun:"org_id,type:text"`
}
type existingCloudIntegrationService struct {
bun.BaseModel `bun:"table:cloud_integrations_service_configs,alias:c1"`
CloudProvider string `bun:"cloud_provider,type:text,notnull,unique:service_cloud_provider_account"`
CloudAccountID string `bun:"cloud_account_id,type:text,notnull,unique:service_cloud_provider_account"`
ServiceID string `bun:"service_id,type:text,notnull,unique:service_cloud_provider_account"`
ConfigJSON string `bun:"config_json,type:text"`
CreatedAt time.Time `bun:"created_at,default:current_timestamp"`
}
type newCloudIntegrationService struct {
bun.BaseModel `bun:"table:cloud_integration_service,alias:cis"`
types.Identifiable
types.TimeAuditable
Type string `bun:"type,type:text,notnull,unique:cloud_integration_id_type"`
Config string `bun:"config,type:text"`
CloudIntegrationID string `bun:"cloud_integration_id,type:text,notnull,unique:cloud_integration_id_type"`
}
type StorablePersonalAccessToken struct {
bun.BaseModel `bun:"table:personal_access_token"`
types.Identifiable
types.TimeAuditable
OrgID string `json:"orgId" bun:"org_id,type:text,notnull"`
Role string `json:"role" bun:"role,type:text,notnull,default:'ADMIN'"`
UserID string `json:"userId" bun:"user_id,type:text,notnull"`
Token string `json:"token" bun:"token,type:text,notnull,unique"`
Name string `json:"name" bun:"name,type:text,notnull"`
ExpiresAt int64 `json:"expiresAt" bun:"expires_at,notnull,default:0"`
LastUsed int64 `json:"lastUsed" bun:"last_used,notnull,default:0"`
Revoked bool `json:"revoked" bun:"revoked,notnull,default:false"`
UpdatedByUserID string `json:"updatedByUserId" bun:"updated_by_user_id,type:text,notnull,default:''"`
}
func (migration *updateIntegrations) Up(ctx context.Context, db *bun.DB) error {
// begin transaction
tx, err := db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer tx.Rollback()
// don't run the migration if there are multiple org ids
orgIDs := make([]string, 0)
err = migration.store.BunDB().NewSelect().Model((*types.Organization)(nil)).Column("id").Scan(ctx, &orgIDs)
if err != nil {
return err
}
if len(orgIDs) > 1 {
return nil
}
// ---
// installed integrations
// ---
err = migration.
store.
Dialect().
RenameTableAndModifyModel(ctx, tx, new(existingInstalledIntegration), new(newInstalledIntegration), []string{OrgReference}, func(ctx context.Context) error {
existingIntegrations := make([]*existingInstalledIntegration, 0)
err = tx.
NewSelect().
Model(&existingIntegrations).
Scan(ctx)
if err != nil {
if err != sql.ErrNoRows {
return err
}
}
if err == nil && len(existingIntegrations) > 0 {
newIntegrations := migration.
CopyOldIntegrationsToNewIntegrations(tx, orgIDs[0], existingIntegrations)
_, err = tx.
NewInsert().
Model(&newIntegrations).
Exec(ctx)
if err != nil {
return err
}
}
return nil
})
if err != nil {
return err
}
// ---
// cloud integrations
// ---
err = migration.
store.
Dialect().
RenameTableAndModifyModel(ctx, tx, new(existingCloudIntegration), new(newCloudIntegration), []string{OrgReference}, func(ctx context.Context) error {
existingIntegrations := make([]*existingCloudIntegration, 0)
err = tx.
NewSelect().
Model(&existingIntegrations).
Where("removed_at IS NULL"). // we will only copy the accounts that are not removed
Scan(ctx)
if err != nil {
if err != sql.ErrNoRows {
return err
}
}
if err == nil && len(existingIntegrations) > 0 {
newIntegrations := migration.
CopyOldCloudIntegrationsToNewCloudIntegrations(tx, orgIDs[0], existingIntegrations)
_, err = tx.
NewInsert().
Model(&newIntegrations).
Exec(ctx)
if err != nil {
return err
}
}
return nil
})
if err != nil {
return err
}
// add unique constraint to cloud_integration table
_, err = tx.ExecContext(ctx, `CREATE UNIQUE INDEX IF NOT EXISTS unique_cloud_integration ON cloud_integration (id, provider, org_id)`)
if err != nil {
return err
}
// ---
// cloud integration service
// ---
err = migration.
store.
Dialect().
RenameTableAndModifyModel(ctx, tx, new(existingCloudIntegrationService), new(newCloudIntegrationService), []string{CloudIntegrationReference}, func(ctx context.Context) error {
existingServices := make([]*existingCloudIntegrationService, 0)
// There is only one service per provider, account id and type,
// so there won't be any duplicates; these are simply enabled as soon
// as the integration for the account is enabled.
err = tx.
NewSelect().
Model(&existingServices).
Scan(ctx)
if err != nil {
if err != sql.ErrNoRows {
return err
}
}
if err == nil && len(existingServices) > 0 {
newServices := migration.
CopyOldCloudIntegrationServicesToNewCloudIntegrationServices(tx, orgIDs[0], existingServices)
if len(newServices) > 0 {
_, err = tx.
NewInsert().
Model(&newServices).
Exec(ctx)
if err != nil {
return err
}
}
}
return nil
})
if err != nil {
return err
}
if len(orgIDs) == 0 {
err = tx.Commit()
if err != nil {
return err
}
return nil
}
// copy the old aws integration user to the new user
err = migration.copyOldAwsIntegrationUser(tx, orgIDs[0])
if err != nil {
return err
}
err = tx.Commit()
if err != nil {
return err
}
return nil
}
func (migration *updateIntegrations) Down(ctx context.Context, db *bun.DB) error {
return nil
}
func (migration *updateIntegrations) CopyOldIntegrationsToNewIntegrations(tx bun.IDB, orgID string, existingIntegrations []*existingInstalledIntegration) []*newInstalledIntegration {
newIntegrations := make([]*newInstalledIntegration, 0)
for _, integration := range existingIntegrations {
newIntegrations = append(newIntegrations, &newInstalledIntegration{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
Type: integration.IntegrationID,
Config: integration.ConfigJSON,
InstalledAt: integration.InstalledAt,
OrgID: orgID,
})
}
return newIntegrations
}
func (migration *updateIntegrations) CopyOldCloudIntegrationsToNewCloudIntegrations(tx bun.IDB, orgID string, existingIntegrations []*existingCloudIntegration) []*newCloudIntegration {
newIntegrations := make([]*newCloudIntegration, 0)
for _, integration := range existingIntegrations {
newIntegrations = append(newIntegrations, &newCloudIntegration{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
TimeAuditable: types.TimeAuditable{
CreatedAt: integration.CreatedAt,
UpdatedAt: integration.CreatedAt,
},
Provider: integration.CloudProvider,
AccountID: integration.CloudAccountID,
Config: integration.ConfigJSON,
LastAgentReport: integration.LastAgentReportJSON,
RemovedAt: integration.RemovedAt,
OrgID: orgID,
})
}
return newIntegrations
}
func (migration *updateIntegrations) CopyOldCloudIntegrationServicesToNewCloudIntegrationServices(tx bun.IDB, orgID string, existingServices []*existingCloudIntegrationService) []*newCloudIntegrationService {
newServices := make([]*newCloudIntegrationService, 0)
for _, service := range existingServices {
var cloudIntegrationID string
err := tx.NewSelect().
Model((*newCloudIntegration)(nil)).
Column("id").
Where("account_id = ?", service.CloudAccountID).
Where("provider = ?", service.CloudProvider).
Where("org_id = ?", orgID).
Scan(context.Background(), &cloudIntegrationID)
if err != nil {
if err == sql.ErrNoRows {
continue
}
zap.L().Error("failed to get cloud integration id", zap.Error(err))
return nil
}
newServices = append(newServices, &newCloudIntegrationService{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
TimeAuditable: types.TimeAuditable{
CreatedAt: service.CreatedAt,
UpdatedAt: service.CreatedAt,
},
Type: service.ServiceID,
Config: service.ConfigJSON,
CloudIntegrationID: cloudIntegrationID,
})
}
return newServices
}
func (migration *updateIntegrations) copyOldAwsIntegrationUser(tx bun.IDB, orgID string) error {
user := &types.User{}
err := tx.NewSelect().Model(user).Where("email = ?", "aws-integration@signoz.io").Scan(context.Background())
if err != nil {
if err == sql.ErrNoRows {
return nil
}
return err
}
// check if the id is already a UUID; if so, the user was already migrated
if _, err := uuid.Parse(user.ID); err == nil {
return nil
}
// new user
newUser := &types.User{
ID: uuid.New().String(),
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
OrgID: orgID,
Name: user.Name,
Email: user.Email,
GroupID: user.GroupID,
Password: user.Password,
}
// get the pat for old user
pat := &StorablePersonalAccessToken{}
err = tx.NewSelect().Model(pat).Where("user_id = ? and revoked = false", "aws-integration").Scan(context.Background())
if err != nil {
if err == sql.ErrNoRows {
// delete the old user
_, err = tx.ExecContext(context.Background(), `DELETE FROM users WHERE id = ?`, user.ID)
if err != nil {
return err
}
return nil
}
return err
}
// new pat
newPAT := &StorablePersonalAccessToken{
Identifiable: types.Identifiable{ID: valuer.GenerateUUID()},
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
OrgID: orgID,
UserID: newUser.ID,
Token: pat.Token,
Name: pat.Name,
ExpiresAt: pat.ExpiresAt,
LastUsed: pat.LastUsed,
Revoked: pat.Revoked,
Role: pat.Role,
}
// delete old user
_, err = tx.ExecContext(context.Background(), `DELETE FROM users WHERE id = ?`, user.ID)
if err != nil {
return err
}
// insert the new user
_, err = tx.NewInsert().Model(newUser).Exec(context.Background())
if err != nil {
return err
}
// insert the new pat
_, err = tx.NewInsert().Model(newPAT).Exec(context.Background())
if err != nil {
return err
}
return nil
}
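
The user-migration step guards against rerunning itself: legacy integration users had fixed string IDs, while migrated users carry UUIDs, so an ID that parses as a UUID means there is nothing to do. The guard in isolation, runnable as-is:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// alreadyMigrated mirrors the check in copyOldAwsIntegrationUser: a
// successful uuid.Parse means the user already has a new-style ID.
func alreadyMigrated(userID string) bool {
	_, err := uuid.Parse(userID)
	return err == nil
}

func main() {
	fmt.Println(alreadyMigrated("aws-integration"))                      // false
	fmt.Println(alreadyMigrated("1b4e28ba-2fa1-11d2-883f-0016d3cca427")) // true
}
```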


@@ -26,8 +26,9 @@ var (
)
var (
OrgReference = "org"
UserReference = "user"
OrgReference = "org"
UserReference = "user"
CloudIntegrationReference = "cloud_integration"
)
func New(


@@ -17,13 +17,15 @@ var (
)
var (
Org = "org"
User = "user"
Org = "org"
User = "user"
CloudIntegration = "cloud_integration"
)
var (
OrgReference = `("org_id") REFERENCES "organizations" ("id")`
UserReference = `("user_id") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE`
OrgReference = `("org_id") REFERENCES "organizations" ("id")`
UserReference = `("user_id") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE`
CloudIntegrationReference = `("cloud_integration_id") REFERENCES "cloud_integration" ("id") ON DELETE CASCADE`
)
type dialect struct {
@@ -202,6 +204,8 @@ func (dialect *dialect) RenameTableAndModifyModel(ctx context.Context, bun bun.I
fkReferences = append(fkReferences, OrgReference)
} else if reference == User && !slices.Contains(fkReferences, UserReference) {
fkReferences = append(fkReferences, UserReference)
} else if reference == CloudIntegration && !slices.Contains(fkReferences, CloudIntegrationReference) {
fkReferences = append(fkReferences, CloudIntegrationReference)
}
}
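
The dialect resolves symbolic reference names to concrete foreign-key clauses while skipping duplicates, so a model can declare several references without emitting a clause twice. A condensed, self-contained sketch of that mapping (clause constants copied from the diff; `resolveReferences` is an illustrative name):

```go
package main

import (
	"fmt"
	"slices"
)

var (
	OrgReference              = `("org_id") REFERENCES "organizations" ("id")`
	CloudIntegrationReference = `("cloud_integration_id") REFERENCES "cloud_integration" ("id") ON DELETE CASCADE`
)

// resolveReferences mirrors the dialect logic above: named references map
// to FK clauses, and already-added clauses are not appended again.
func resolveReferences(refs []string) []string {
	out := []string{}
	for _, r := range refs {
		switch {
		case r == "org" && !slices.Contains(out, OrgReference):
			out = append(out, OrgReference)
		case r == "cloud_integration" && !slices.Contains(out, CloudIntegrationReference):
			out = append(out, CloudIntegrationReference)
		}
	}
	return out
}

func main() {
	fmt.Println(resolveReferences([]string{"org", "org", "cloud_integration"}))
}
```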


@@ -0,0 +1,275 @@
package telemetrylogs
import (
"context"
"fmt"
schema "github.com/SigNoz/signoz-otel-collector/cmd/signozschemamigrator/schema_migrator"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
)
var (
logsV2Columns = map[string]*schema.Column{
"ts_bucket_start": {Name: "ts_bucket_start", Type: schema.ColumnTypeUInt64},
"resource_fingerprint": {Name: "resource_fingerprint", Type: schema.ColumnTypeString},
"timestamp": {Name: "timestamp", Type: schema.ColumnTypeUInt64},
"observed_timestamp": {Name: "observed_timestamp", Type: schema.ColumnTypeUInt64},
"id": {Name: "id", Type: schema.ColumnTypeString},
"trace_id": {Name: "trace_id", Type: schema.ColumnTypeString},
"span_id": {Name: "span_id", Type: schema.ColumnTypeString},
"trace_flags": {Name: "trace_flags", Type: schema.ColumnTypeUInt32},
"severity_text": {Name: "severity_text", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"severity_number": {Name: "severity_number", Type: schema.ColumnTypeUInt8},
"body": {Name: "body", Type: schema.ColumnTypeString},
"attributes_string": {Name: "attributes_string", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}},
"attributes_number": {Name: "attributes_number", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeFloat64,
}},
"attributes_bool": {Name: "attributes_bool", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeBool,
}},
"resources_string": {Name: "resources_string", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}},
"scope_name": {Name: "scope_name", Type: schema.ColumnTypeString},
"scope_version": {Name: "scope_version", Type: schema.ColumnTypeString},
"scope_string": {Name: "scope_string", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}},
}
)
var _ qbtypes.ConditionBuilder = &conditionBuilder{}
type conditionBuilder struct {
}
func NewConditionBuilder() qbtypes.ConditionBuilder {
return &conditionBuilder{}
}
func (c *conditionBuilder) GetColumn(ctx context.Context, key *telemetrytypes.TelemetryFieldKey) (*schema.Column, error) {
switch key.FieldContext {
case telemetrytypes.FieldContextResource:
return logsV2Columns["resources_string"], nil
case telemetrytypes.FieldContextScope:
switch key.Name {
case "name", "scope.name", "scope_name":
return logsV2Columns["scope_name"], nil
case "version", "scope.version", "scope_version":
return logsV2Columns["scope_version"], nil
}
return logsV2Columns["scope_string"], nil
case telemetrytypes.FieldContextAttribute:
switch key.FieldDataType {
case telemetrytypes.FieldDataTypeString:
return logsV2Columns["attributes_string"], nil
case telemetrytypes.FieldDataTypeInt64, telemetrytypes.FieldDataTypeFloat64, telemetrytypes.FieldDataTypeNumber:
return logsV2Columns["attributes_number"], nil
case telemetrytypes.FieldDataTypeBool:
return logsV2Columns["attributes_bool"], nil
}
case telemetrytypes.FieldContextLog:
col, ok := logsV2Columns[key.Name]
if !ok {
return nil, qbtypes.ErrColumnNotFound
}
return col, nil
}
return nil, qbtypes.ErrColumnNotFound
}
func (c *conditionBuilder) GetTableFieldName(ctx context.Context, key *telemetrytypes.TelemetryFieldKey) (string, error) {
column, err := c.GetColumn(ctx, key)
if err != nil {
return "", err
}
switch column.Type {
case schema.ColumnTypeString,
schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
schema.ColumnTypeUInt64,
schema.ColumnTypeUInt32,
schema.ColumnTypeUInt8:
return column.Name, nil
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}:
// the key could have been materialized; if so, return the materialized column name
if key.Materialized {
return telemetrytypes.FieldKeyToMaterializedColumnName(key), nil
}
return fmt.Sprintf("%s['%s']", column.Name, key.Name), nil
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeFloat64,
}:
// the key could have been materialized; if so, return the materialized column name
if key.Materialized {
return telemetrytypes.FieldKeyToMaterializedColumnName(key), nil
}
return fmt.Sprintf("%s['%s']", column.Name, key.Name), nil
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeBool,
}:
// the key could have been materialized; if so, return the materialized column name
if key.Materialized {
return telemetrytypes.FieldKeyToMaterializedColumnName(key), nil
}
return fmt.Sprintf("%s['%s']", column.Name, key.Name), nil
}
// should not reach here
return column.Name, nil
}
func (c *conditionBuilder) GetCondition(
ctx context.Context,
key *telemetrytypes.TelemetryFieldKey,
operator qbtypes.FilterOperator,
value any,
sb *sqlbuilder.SelectBuilder,
) (string, error) {
column, err := c.GetColumn(ctx, key)
if err != nil {
return "", err
}
tblFieldName, err := c.GetTableFieldName(ctx, key)
if err != nil {
return "", err
}
tblFieldName, value = telemetrytypes.DataTypeCollisionHandledFieldName(key, value, tblFieldName)
// regular operators
switch operator {
// regular operators
case qbtypes.FilterOperatorEqual:
return sb.E(tblFieldName, value), nil
case qbtypes.FilterOperatorNotEqual:
return sb.NE(tblFieldName, value), nil
case qbtypes.FilterOperatorGreaterThan:
return sb.G(tblFieldName, value), nil
case qbtypes.FilterOperatorGreaterThanOrEq:
return sb.GE(tblFieldName, value), nil
case qbtypes.FilterOperatorLessThan:
return sb.LT(tblFieldName, value), nil
case qbtypes.FilterOperatorLessThanOrEq:
return sb.LE(tblFieldName, value), nil
// like and not like
case qbtypes.FilterOperatorLike:
return sb.Like(tblFieldName, value), nil
case qbtypes.FilterOperatorNotLike:
return sb.NotLike(tblFieldName, value), nil
case qbtypes.FilterOperatorILike:
return sb.ILike(tblFieldName, value), nil
case qbtypes.FilterOperatorNotILike:
return sb.NotILike(tblFieldName, value), nil
case qbtypes.FilterOperatorContains:
return sb.ILike(tblFieldName, fmt.Sprintf("%%%s%%", value)), nil
case qbtypes.FilterOperatorNotContains:
return sb.NotILike(tblFieldName, fmt.Sprintf("%%%s%%", value)), nil
case qbtypes.FilterOperatorRegexp:
exp := fmt.Sprintf(`match(%s, %s)`, tblFieldName, sb.Var(value))
return sb.And(exp), nil
case qbtypes.FilterOperatorNotRegexp:
exp := fmt.Sprintf(`not match(%s, %s)`, tblFieldName, sb.Var(value))
return sb.And(exp), nil
// between and not between
case qbtypes.FilterOperatorBetween:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrBetweenValues
}
if len(values) != 2 {
return "", qbtypes.ErrBetweenValues
}
return sb.Between(tblFieldName, values[0], values[1]), nil
case qbtypes.FilterOperatorNotBetween:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrBetweenValues
}
if len(values) != 2 {
return "", qbtypes.ErrBetweenValues
}
return sb.NotBetween(tblFieldName, values[0], values[1]), nil
// in and not in
case qbtypes.FilterOperatorIn:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrInValues
}
return sb.In(tblFieldName, values...), nil
case qbtypes.FilterOperatorNotIn:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrInValues
}
return sb.NotIn(tblFieldName, values...), nil
// exists and not exists
// but how could you live and have no story to tell
// in the UI based query builder, `exists` and `not exists` are used for
// key membership checks, so depending on the column type, the condition changes
case qbtypes.FilterOperatorExists, qbtypes.FilterOperatorNotExists:
var value any
switch column.Type {
case schema.ColumnTypeString, schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}:
value = ""
if operator == qbtypes.FilterOperatorExists {
return sb.NE(tblFieldName, value), nil
} else {
return sb.E(tblFieldName, value), nil
}
case schema.ColumnTypeUInt64, schema.ColumnTypeUInt32, schema.ColumnTypeUInt8:
value = 0
if operator == qbtypes.FilterOperatorExists {
return sb.NE(tblFieldName, value), nil
} else {
return sb.E(tblFieldName, value), nil
}
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}, schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeBool,
}, schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeFloat64,
}:
leftOperand := fmt.Sprintf("mapContains(%s, '%s')", column.Name, key.Name)
if key.Materialized {
leftOperand = telemetrytypes.FieldKeyToMaterializedColumnNameForExists(key)
}
if operator == qbtypes.FilterOperatorExists {
return sb.E(leftOperand, true), nil
} else {
return sb.NE(leftOperand, true), nil
}
default:
return "", fmt.Errorf("exists operator is not supported for column type %s", column.Type)
}
}
return "", fmt.Errorf("unsupported operator: %v", operator)
}
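
Every `GetCondition` branch returns a SQL fragment with bound placeholders built by `huandu/go-sqlbuilder`, and callers attach those fragments with `Where`. A runnable sketch of that accumulation; the table name is illustrative, and the map-access field name follows the convention above:

```go
package main

import (
	"fmt"

	"github.com/huandu/go-sqlbuilder"
)

func main() {
	sb := sqlbuilder.NewSelectBuilder()
	sb.Select("timestamp", "body").From("logs_demo")
	// Each helper returns a fragment with a bound placeholder, exactly
	// like the GetCondition cases above.
	sb.Where(
		sb.E("severity_text", "error"),
		sb.ILike("attributes_string['user.id']", "%admin%"),
		sb.Between("timestamp", 1617979338000000000, 1617979348000000000),
	)
	sql, args := sb.Build()
	fmt.Println(sql)
	fmt.Println(args)
}
```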


@@ -0,0 +1,620 @@
package telemetrylogs
import (
"context"
"testing"
schema "github.com/SigNoz/signoz-otel-collector/cmd/signozschemamigrator/schema_migrator"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGetColumn(t *testing.T) {
ctx := context.Background()
conditionBuilder := NewConditionBuilder()
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
expectedCol *schema.Column
expectedError error
}{
{
name: "Resource field",
key: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
},
expectedCol: logsV2Columns["resources_string"],
expectedError: nil,
},
{
name: "Scope field - scope name",
key: telemetrytypes.TelemetryFieldKey{
Name: "name",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: logsV2Columns["scope_name"],
expectedError: nil,
},
{
name: "Scope field - scope.name",
key: telemetrytypes.TelemetryFieldKey{
Name: "scope.name",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: logsV2Columns["scope_name"],
expectedError: nil,
},
{
name: "Scope field - scope_name",
key: telemetrytypes.TelemetryFieldKey{
Name: "scope_name",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: logsV2Columns["scope_name"],
expectedError: nil,
},
{
name: "Scope field - version",
key: telemetrytypes.TelemetryFieldKey{
Name: "version",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: logsV2Columns["scope_version"],
expectedError: nil,
},
{
name: "Scope field - other scope field",
key: telemetrytypes.TelemetryFieldKey{
Name: "custom.scope.field",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: logsV2Columns["scope_string"],
expectedError: nil,
},
{
name: "Attribute field - string type",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
expectedCol: logsV2Columns["attributes_string"],
expectedError: nil,
},
{
name: "Attribute field - number type",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
expectedCol: logsV2Columns["attributes_number"],
expectedError: nil,
},
{
name: "Attribute field - int64 type",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.duration",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeInt64,
},
expectedCol: logsV2Columns["attributes_number"],
expectedError: nil,
},
{
name: "Attribute field - float64 type",
key: telemetrytypes.TelemetryFieldKey{
Name: "cpu.utilization",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeFloat64,
},
expectedCol: logsV2Columns["attributes_number"],
expectedError: nil,
},
{
name: "Attribute field - bool type",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.success",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeBool,
},
expectedCol: logsV2Columns["attributes_bool"],
expectedError: nil,
},
{
name: "Log field - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
expectedCol: logsV2Columns["timestamp"],
expectedError: nil,
},
{
name: "Log field - body",
key: telemetrytypes.TelemetryFieldKey{
Name: "body",
FieldContext: telemetrytypes.FieldContextLog,
},
expectedCol: logsV2Columns["body"],
expectedError: nil,
},
{
name: "Log field - nonexistent",
key: telemetrytypes.TelemetryFieldKey{
Name: "nonexistent_field",
FieldContext: telemetrytypes.FieldContextLog,
},
expectedCol: nil,
expectedError: qbtypes.ErrColumnNotFound,
},
{
name: "did_user_login",
key: telemetrytypes.TelemetryFieldKey{
Name: "did_user_login",
Signal: telemetrytypes.SignalLogs,
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeBool,
},
expectedCol: logsV2Columns["attributes_bool"],
expectedError: nil,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
col, err := conditionBuilder.GetColumn(ctx, &tc.key)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expectedCol, col)
}
})
}
}
func TestGetFieldKeyName(t *testing.T) {
ctx := context.Background()
conditionBuilder := &conditionBuilder{}
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
expectedResult string
expectedError error
}{
{
name: "Simple column type - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
expectedResult: "timestamp",
expectedError: nil,
},
{
name: "Map column type - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
expectedResult: "attributes_string['user.id']",
expectedError: nil,
},
{
name: "Map column type - number attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
expectedResult: "attributes_number['request.size']",
expectedError: nil,
},
{
name: "Map column type - bool attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.success",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeBool,
},
expectedResult: "attributes_bool['request.success']",
expectedError: nil,
},
{
name: "Map column type - resource attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
},
expectedResult: "resources_string['service.name']",
expectedError: nil,
},
{
name: "Non-existent column",
key: telemetrytypes.TelemetryFieldKey{
Name: "nonexistent_field",
FieldContext: telemetrytypes.FieldContextLog,
},
expectedResult: "",
expectedError: qbtypes.ErrColumnNotFound,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result, err := conditionBuilder.GetTableFieldName(ctx, &tc.key)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expectedResult, result)
}
})
}
}
func TestGetCondition(t *testing.T) {
ctx := context.Background()
conditionBuilder := NewConditionBuilder()
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
operator qbtypes.FilterOperator
value any
expectedSQL string
expectedError error
}{
{
name: "Equal operator - string",
key: telemetrytypes.TelemetryFieldKey{
Name: "body",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorEqual,
value: "error message",
expectedSQL: "body = ?",
expectedError: nil,
},
{
name: "Not Equal operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorNotEqual,
value: uint64(1617979338000000000),
expectedSQL: "timestamp <> ?",
expectedError: nil,
},
{
name: "Greater Than operator - number attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.duration",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
operator: qbtypes.FilterOperatorGreaterThan,
value: float64(100),
expectedSQL: "attributes_number['request.duration'] > ?",
expectedError: nil,
},
{
name: "Less Than operator - number attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
operator: qbtypes.FilterOperatorLessThan,
value: float64(1024),
expectedSQL: "attributes_number['request.size'] < ?",
expectedError: nil,
},
{
name: "Greater Than Or Equal operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorGreaterThanOrEq,
value: uint64(1617979338000000000),
expectedSQL: "timestamp >= ?",
expectedError: nil,
},
{
name: "Less Than Or Equal operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorLessThanOrEq,
value: uint64(1617979338000000000),
expectedSQL: "timestamp <= ?",
expectedError: nil,
},
{
name: "Like operator - body",
key: telemetrytypes.TelemetryFieldKey{
Name: "body",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorLike,
value: "%error%",
expectedSQL: "body LIKE ?",
expectedError: nil,
},
{
name: "Not Like operator - body",
key: telemetrytypes.TelemetryFieldKey{
Name: "body",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorNotLike,
value: "%error%",
expectedSQL: "body NOT LIKE ?",
expectedError: nil,
},
{
name: "ILike operator - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorILike,
value: "%admin%",
expectedSQL: "WHERE LOWER(attributes_string['user.id']) LIKE LOWER(?)",
expectedError: nil,
},
{
name: "Not ILike operator - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorNotILike,
value: "%admin%",
expectedSQL: "WHERE LOWER(attributes_string['user.id']) NOT LIKE LOWER(?)",
expectedError: nil,
},
{
name: "Contains operator - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorContains,
value: "admin",
expectedSQL: "WHERE LOWER(attributes_string['user.id']) LIKE LOWER(?)",
expectedError: nil,
},
{
name: "Between operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorBetween,
value: []any{uint64(1617979338000000000), uint64(1617979348000000000)},
expectedSQL: "timestamp BETWEEN ? AND ?",
expectedError: nil,
},
{
name: "Between operator - invalid value",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorBetween,
value: "invalid",
expectedSQL: "",
expectedError: qbtypes.ErrBetweenValues,
},
{
name: "Between operator - insufficient values",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorBetween,
value: []any{uint64(1617979338000000000)},
expectedSQL: "",
expectedError: qbtypes.ErrBetweenValues,
},
{
name: "Not Between operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorNotBetween,
value: []any{uint64(1617979338000000000), uint64(1617979348000000000)},
expectedSQL: "timestamp NOT BETWEEN ? AND ?",
expectedError: nil,
},
{
name: "In operator - severity_text",
key: telemetrytypes.TelemetryFieldKey{
Name: "severity_text",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorIn,
value: []any{"error", "fatal", "critical"},
expectedSQL: "severity_text IN (?, ?, ?)",
expectedError: nil,
},
{
name: "In operator - invalid value",
key: telemetrytypes.TelemetryFieldKey{
Name: "severity_text",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorIn,
value: "error",
expectedSQL: "",
expectedError: qbtypes.ErrInValues,
},
{
name: "Not In operator - severity_text",
key: telemetrytypes.TelemetryFieldKey{
Name: "severity_text",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorNotIn,
value: []any{"debug", "info", "trace"},
expectedSQL: "severity_text NOT IN (?, ?, ?)",
expectedError: nil,
},
{
name: "Exists operator - string field",
key: telemetrytypes.TelemetryFieldKey{
Name: "body",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorExists,
value: nil,
expectedSQL: "body <> ?",
expectedError: nil,
},
{
name: "Not Exists operator - string field",
key: telemetrytypes.TelemetryFieldKey{
Name: "body",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorNotExists,
value: nil,
expectedSQL: "body = ?",
expectedError: nil,
},
{
name: "Exists operator - number field",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorExists,
value: nil,
expectedSQL: "timestamp <> ?",
expectedError: nil,
},
{
name: "Exists operator - map field",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorExists,
value: nil,
expectedSQL: "mapContains(attributes_string, 'user.id') = ?",
expectedError: nil,
},
{
name: "Not Exists operator - map field",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorNotExists,
value: nil,
expectedSQL: "mapContains(attributes_string, 'user.id') <> ?",
expectedError: nil,
},
{
name: "Non-existent column",
key: telemetrytypes.TelemetryFieldKey{
Name: "nonexistent_field",
FieldContext: telemetrytypes.FieldContextLog,
},
operator: qbtypes.FilterOperatorEqual,
value: "value",
expectedSQL: "",
expectedError: qbtypes.ErrColumnNotFound,
},
}
for _, tc := range testCases {
sb := sqlbuilder.NewSelectBuilder()
t.Run(tc.name, func(t *testing.T) {
cond, err := conditionBuilder.GetCondition(ctx, &tc.key, tc.operator, tc.value, sb)
sb.Where(cond)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
sql, _ := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
assert.Contains(t, sql, tc.expectedSQL)
}
})
}
}
func TestGetConditionMultiple(t *testing.T) {
ctx := context.Background()
conditionBuilder := NewConditionBuilder()
testCases := []struct {
name string
keys []*telemetrytypes.TelemetryFieldKey
operator qbtypes.FilterOperator
value any
expectedSQL string
expectedError error
}{
{
name: "Equal operator - string",
keys: []*telemetrytypes.TelemetryFieldKey{
{
Name: "body",
FieldContext: telemetrytypes.FieldContextLog,
},
{
Name: "severity_text",
FieldContext: telemetrytypes.FieldContextLog,
},
},
operator: qbtypes.FilterOperatorEqual,
value: "error message",
expectedSQL: "body = ? AND severity_text = ?",
expectedError: nil,
},
}
for _, tc := range testCases {
sb := sqlbuilder.NewSelectBuilder()
t.Run(tc.name, func(t *testing.T) {
var err error
for _, key := range tc.keys {
// assign with = so the loop does not shadow the err checked after it
var cond string
cond, err = conditionBuilder.GetCondition(ctx, key, tc.operator, tc.value, sb)
sb.Where(cond)
if err != nil {
t.Fatalf("Error getting condition for key %s: %v", key.Name, err)
}
}
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
sql, _ := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
assert.Contains(t, sql, tc.expectedSQL)
}
})
}
}

View File

@@ -0,0 +1,9 @@
package telemetrylogs
const (
DBName = "signoz_logs"
LogsV2TableName = "distributed_logs_v2"
LogsV2LocalTableName = "logs_v2"
TagAttributesV2TableName = "distributed_tag_attributes_v2"
TagAttributesV2LocalTableName = "tag_attributes_v2"
)
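// Illustrative sketch, not part of the diff: one way the constants above and
// the condition builder exercised in the preceding tests might compose into a
// full ClickHouse query. buildLogsQuerySketch is a hypothetical helper; the
// imports (context, sqlbuilder, qbtypes, telemetrytypes) follow the test file.
func buildLogsQuerySketch(ctx context.Context) (string, []interface{}, error) {
cb := NewConditionBuilder()
sb := sqlbuilder.NewSelectBuilder()
sb.Select("timestamp", "body").From(DBName + "." + LogsV2TableName)
key := telemetrytypes.TelemetryFieldKey{
Name: "severity_text",
FieldContext: telemetrytypes.FieldContextLog,
}
cond, err := cb.GetCondition(ctx, &key, qbtypes.FilterOperatorEqual, "error", sb)
if err != nil {
return "", nil, err
}
sb.Where(cond)
// e.g. SELECT timestamp, body FROM signoz_logs.distributed_logs_v2 WHERE severity_text = ?
sql, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return sql, args, nil
}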

View File

@@ -0,0 +1,149 @@
package telemetrymetadata
import (
"context"
"fmt"
schema "github.com/SigNoz/signoz-otel-collector/cmd/signozschemamigrator/schema_migrator"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
)
var (
attributeMetadataColumns = map[string]*schema.Column{
"resource_attributes": {Name: "resource_attributes", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}},
"attributes": {Name: "attributes", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}},
}
)
type conditionBuilder struct {
}
func NewConditionBuilder() qbtypes.ConditionBuilder {
return &conditionBuilder{}
}
func (c *conditionBuilder) GetColumn(ctx context.Context, key *telemetrytypes.TelemetryFieldKey) (*schema.Column, error) {
switch key.FieldContext {
case telemetrytypes.FieldContextResource:
return attributeMetadataColumns["resource_attributes"], nil
case telemetrytypes.FieldContextAttribute:
return attributeMetadataColumns["attributes"], nil
}
return nil, qbtypes.ErrColumnNotFound
}
func (c *conditionBuilder) GetTableFieldName(ctx context.Context, key *telemetrytypes.TelemetryFieldKey) (string, error) {
column, err := c.GetColumn(ctx, key)
if err != nil {
return "", err
}
switch column.Type {
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}:
return fmt.Sprintf("%s['%s']", column.Name, key.Name), nil
}
return column.Name, nil
}
func (c *conditionBuilder) GetCondition(
ctx context.Context,
key *telemetrytypes.TelemetryFieldKey,
operator qbtypes.FilterOperator,
value any,
sb *sqlbuilder.SelectBuilder,
) (string, error) {
column, err := c.GetColumn(ctx, key)
if err != nil {
// if we don't have a column, we can't build a condition for related values
return "", nil
}
tblFieldName, err := c.GetTableFieldName(ctx, key)
if err != nil {
// if we don't have a table field name, we can't build a condition for related values
return "", nil
}
if key.FieldDataType != telemetrytypes.FieldDataTypeString {
// if the field data type is not string, we can't build a condition for related values
return "", nil
}
// the key must exist for the main filter to apply
containsExp := fmt.Sprintf("mapContains(%s, %s)", column.Name, sb.Var(key.Name))
switch operator {
// regular operators
case qbtypes.FilterOperatorEqual:
return sb.And(containsExp, sb.E(tblFieldName, value)), nil
case qbtypes.FilterOperatorNotEqual:
return sb.And(containsExp, sb.NE(tblFieldName, value)), nil
// like and not like
case qbtypes.FilterOperatorLike:
return sb.And(containsExp, sb.Like(tblFieldName, value)), nil
case qbtypes.FilterOperatorNotLike:
return sb.And(containsExp, sb.NotLike(tblFieldName, value)), nil
case qbtypes.FilterOperatorILike:
return sb.And(containsExp, sb.ILike(tblFieldName, value)), nil
case qbtypes.FilterOperatorNotILike:
return sb.And(containsExp, sb.NotILike(tblFieldName, value)), nil
case qbtypes.FilterOperatorContains:
return sb.And(containsExp, sb.ILike(tblFieldName, fmt.Sprintf("%%%s%%", value))), nil
case qbtypes.FilterOperatorNotContains:
return sb.And(containsExp, sb.NotILike(tblFieldName, fmt.Sprintf("%%%s%%", value))), nil
case qbtypes.FilterOperatorRegexp:
exp := fmt.Sprintf(`match(%s, %s)`, tblFieldName, sb.Var(value))
return sb.And(containsExp, exp), nil
case qbtypes.FilterOperatorNotRegexp:
exp := fmt.Sprintf(`not match(%s, %s)`, tblFieldName, sb.Var(value))
return sb.And(containsExp, exp), nil
// in and not in
case qbtypes.FilterOperatorIn:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrInValues
}
return sb.And(containsExp, sb.In(tblFieldName, values...)), nil
case qbtypes.FilterOperatorNotIn:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrInValues
}
return sb.And(containsExp, sb.NotIn(tblFieldName, values...)), nil
// exists and not exists
// in the query builder, `exists` and `not exists` are used for
// key membership checks, so depending on the column type, the condition changes
case qbtypes.FilterOperatorExists, qbtypes.FilterOperatorNotExists:
switch column.Type {
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}:
leftOperand := fmt.Sprintf("mapContains(%s, '%s')", column.Name, key.Name)
if operator == qbtypes.FilterOperatorExists {
return sb.E(leftOperand, true), nil
} else {
return sb.NE(leftOperand, true), nil
}
}
}
return "", nil
}
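// Illustrative sketch, not part of the diff: every related-values condition
// the builder above emits is guarded by a mapContains key-membership check.
// With the same inputs as the tests below:
//
// sb := sqlbuilder.NewSelectBuilder()
// cb := NewConditionBuilder()
// key := telemetrytypes.TelemetryFieldKey{
// Name: "user.id",
// FieldContext: telemetrytypes.FieldContextAttribute,
// FieldDataType: telemetrytypes.FieldDataTypeString,
// }
// cond, _ := cb.GetCondition(context.Background(), &key, qbtypes.FilterOperatorILike, "%admin%", sb)
// sb.Where(cond)
//
// building with the ClickHouse flavor yields:
//
// WHERE (mapContains(attributes, ?) AND LOWER(attributes['user.id']) LIKE LOWER(?))
//
// with "user.id" and "%admin%" bound as the two arguments.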

View File

@@ -0,0 +1,272 @@
package telemetrymetadata
import (
"context"
"testing"
schema "github.com/SigNoz/signoz-otel-collector/cmd/signozschemamigrator/schema_migrator"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGetColumn(t *testing.T) {
ctx := context.Background()
conditionBuilder := NewConditionBuilder()
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
expectedCol *schema.Column
expectedError error
}{
{
name: "Resource field",
key: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
},
expectedCol: attributeMetadataColumns["resource_attributes"],
expectedError: nil,
},
{
name: "Scope field - scope name",
key: telemetrytypes.TelemetryFieldKey{
Name: "name",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: nil,
expectedError: qbtypes.ErrColumnNotFound,
},
{
name: "Scope field - scope.name",
key: telemetrytypes.TelemetryFieldKey{
Name: "scope.name",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: nil,
expectedError: qbtypes.ErrColumnNotFound,
},
{
name: "Scope field - scope_name",
key: telemetrytypes.TelemetryFieldKey{
Name: "scope_name",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: nil,
expectedError: qbtypes.ErrColumnNotFound,
},
{
name: "Scope field - version",
key: telemetrytypes.TelemetryFieldKey{
Name: "version",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: nil,
expectedError: qbtypes.ErrColumnNotFound,
},
{
name: "Scope field - other scope field",
key: telemetrytypes.TelemetryFieldKey{
Name: "custom.scope.field",
FieldContext: telemetrytypes.FieldContextScope,
},
expectedCol: nil,
expectedError: qbtypes.ErrColumnNotFound,
},
{
name: "Attribute field - string type",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
expectedCol: attributeMetadataColumns["attributes"],
expectedError: nil,
},
{
name: "Attribute field - number type",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
expectedCol: attributeMetadataColumns["attributes"],
expectedError: nil,
},
{
name: "Attribute field - int64 type",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.duration",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeInt64,
},
expectedCol: attributeMetadataColumns["attributes"],
expectedError: nil,
},
{
name: "Attribute field - float64 type",
key: telemetrytypes.TelemetryFieldKey{
Name: "cpu.utilization",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeFloat64,
},
expectedCol: attributeMetadataColumns["attributes"],
expectedError: nil,
},
{
name: "Log field - nonexistent",
key: telemetrytypes.TelemetryFieldKey{
Name: "nonexistent_field",
FieldContext: telemetrytypes.FieldContextLog,
},
expectedCol: nil,
expectedError: qbtypes.ErrColumnNotFound,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
col, err := conditionBuilder.GetColumn(ctx, &tc.key)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expectedCol, col)
}
})
}
}
func TestGetFieldKeyName(t *testing.T) {
ctx := context.Background()
conditionBuilder := &conditionBuilder{}
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
expectedResult string
expectedError error
}{
{
name: "Map column type - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
expectedResult: "attributes['user.id']",
expectedError: nil,
},
{
name: "Map column type - number attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
expectedResult: "attributes['request.size']",
expectedError: nil,
},
{
name: "Map column type - bool attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.success",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeBool,
},
expectedResult: "attributes['request.success']",
expectedError: nil,
},
{
name: "Map column type - resource attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
},
expectedResult: "resource_attributes['service.name']",
expectedError: nil,
},
{
name: "Non-existent column",
key: telemetrytypes.TelemetryFieldKey{
Name: "nonexistent_field",
FieldContext: telemetrytypes.FieldContextLog,
},
expectedResult: "",
expectedError: qbtypes.ErrColumnNotFound,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result, err := conditionBuilder.GetTableFieldName(ctx, &tc.key)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expectedResult, result)
}
})
}
}
func TestGetCondition(t *testing.T) {
ctx := context.Background()
conditionBuilder := NewConditionBuilder()
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
operator qbtypes.FilterOperator
value any
expectedSQL string
expectedError error
}{
{
name: "ILike operator - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorILike,
value: "%admin%",
expectedSQL: "WHERE (mapContains(attributes, ?) AND LOWER(attributes['user.id']) LIKE LOWER(?))",
expectedError: nil,
},
{
name: "Not ILike operator - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorNotILike,
value: "%admin%",
expectedSQL: "WHERE (mapContains(attributes, ?) AND LOWER(attributes['user.id']) NOT LIKE LOWER(?))",
expectedError: nil,
},
}
for _, tc := range testCases {
sb := sqlbuilder.NewSelectBuilder()
t.Run(tc.name, func(t *testing.T) {
cond, err := conditionBuilder.GetCondition(ctx, &tc.key, tc.operator, tc.value, sb)
sb.Where(cond)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
sql, _ := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
assert.Contains(t, sql, tc.expectedSQL)
}
})
}
}

View File

@@ -0,0 +1,691 @@
package telemetrymetadata
import (
"context"
"fmt"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/telemetrystore"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
"go.uber.org/zap"
)
var (
ErrFailedToGetTracesKeys = errors.Newf(errors.TypeInternal, errors.CodeInternal, "failed to get traces keys")
ErrFailedToGetLogsKeys = errors.Newf(errors.TypeInternal, errors.CodeInternal, "failed to get logs keys")
ErrFailedToGetTblStatement = errors.Newf(errors.TypeInternal, errors.CodeInternal, "failed to get tbl statement")
ErrFailedToGetMetricsKeys = errors.Newf(errors.TypeInternal, errors.CodeInternal, "failed to get metrics keys")
ErrFailedToGetRelatedValues = errors.Newf(errors.TypeInternal, errors.CodeInternal, "failed to get related values")
)
type telemetryMetaStore struct {
telemetrystore telemetrystore.TelemetryStore
tracesDBName string
tracesFieldsTblName string
indexV3TblName string
metricsDBName string
metricsFieldsTblName string
timeseries1WTblName string
logsDBName string
logsFieldsTblName string
logsV2TblName string
relatedMetadataDBName string
relatedMetadataTblName string
conditionBuilder qbtypes.ConditionBuilder
}
func NewTelemetryMetaStore(
telemetrystore telemetrystore.TelemetryStore,
tracesDBName string,
tracesFieldsTblName string,
indexV3TblName string,
metricsDBName string,
metricsFieldsTblName string,
timeseries1WTblName string,
logsDBName string,
logsV2TblName string,
logsFieldsTblName string,
relatedMetadataDBName string,
relatedMetadataTblName string,
) (telemetrytypes.MetadataStore, error) {
return &telemetryMetaStore{
telemetrystore: telemetrystore,
tracesDBName: tracesDBName,
tracesFieldsTblName: tracesFieldsTblName,
indexV3TblName: indexV3TblName,
metricsDBName: metricsDBName,
metricsFieldsTblName: metricsFieldsTblName,
timeseries1WTblName: timeseries1WTblName,
logsDBName: logsDBName,
logsV2TblName: logsV2TblName,
logsFieldsTblName: logsFieldsTblName,
relatedMetadataDBName: relatedMetadataDBName,
relatedMetadataTblName: relatedMetadataTblName,
conditionBuilder: NewConditionBuilder(),
}, nil
}
// tracesTblStatementToFieldKeys returns materialised attribute/resource/scope keys from the traces table
func (t *telemetryMetaStore) tracesTblStatementToFieldKeys(ctx context.Context) ([]*telemetrytypes.TelemetryFieldKey, error) {
query := fmt.Sprintf("SHOW CREATE TABLE %s.%s", t.tracesDBName, t.indexV3TblName)
statements := []telemetrytypes.ShowCreateTableStatement{}
err := t.telemetrystore.ClickhouseDB().Select(ctx, &statements, query)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetTblStatement.Error())
}
return ExtractFieldKeysFromTblStatement(statements[0].Statement)
}
// getTracesKeys returns the keys from the spans that match the field selection criteria
func (t *telemetryMetaStore) getTracesKeys(ctx context.Context, fieldKeySelectors []*telemetrytypes.FieldKeySelector) ([]*telemetrytypes.TelemetryFieldKey, error) {
if len(fieldKeySelectors) == 0 {
return nil, nil
}
// pre-fetch the materialised keys from the traces table
matKeys, err := t.tracesTblStatementToFieldKeys(ctx)
if err != nil {
return nil, err
}
mapOfKeys := make(map[string]*telemetrytypes.TelemetryFieldKey)
for _, key := range matKeys {
mapOfKeys[key.Name+";"+key.FieldContext.StringValue()+";"+key.FieldDataType.StringValue()] = key
}
sb := sqlbuilder.Select("tag_key", "tag_type", "tag_data_type", `
CASE
WHEN tag_type = 'spanfield' THEN 1
WHEN tag_type = 'resource' THEN 2
WHEN tag_type = 'scope' THEN 3
WHEN tag_type = 'tag' THEN 4
ELSE 5
END as priority`).From(t.tracesDBName + "." + t.tracesFieldsTblName)
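// the outer query below orders by this priority, so intrinsic span fields
// surface ahead of resource, scope, and attribute keys with the same name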
var limit int
conds := []string{}
for _, fieldKeySelector := range fieldKeySelectors {
if fieldKeySelector.StartUnixMilli != 0 {
conds = append(conds, sb.GE("unix_milli", fieldKeySelector.StartUnixMilli))
}
if fieldKeySelector.EndUnixMilli != 0 {
conds = append(conds, sb.LE("unix_milli", fieldKeySelector.EndUnixMilli))
}
// key part of the selector
fieldKeyConds := []string{}
if fieldKeySelector.SelectorMatchType == telemetrytypes.FieldSelectorMatchTypeExact {
fieldKeyConds = append(fieldKeyConds, sb.E("tag_key", fieldKeySelector.Name))
} else {
fieldKeyConds = append(fieldKeyConds, sb.Like("tag_key", "%"+fieldKeySelector.Name+"%"))
}
// now look at the field context
if fieldKeySelector.FieldContext != telemetrytypes.FieldContextUnspecified {
fieldKeyConds = append(fieldKeyConds, sb.E("tag_type", fieldKeySelector.FieldContext.TagType()))
}
// now look at the field data type
if fieldKeySelector.FieldDataType != telemetrytypes.FieldDataTypeUnspecified {
fieldKeyConds = append(fieldKeyConds, sb.E("tag_data_type", fieldKeySelector.FieldDataType.TagDataType()))
}
conds = append(conds, sb.And(fieldKeyConds...))
limit += fieldKeySelector.Limit
}
sb.Where(sb.Or(conds...))
if limit == 0 {
limit = 1000
}
mainSb := sqlbuilder.Select("tag_key", "tag_type", "tag_data_type", "max(priority) as priority")
mainSb.From(mainSb.BuilderAs(sb, "sub_query"))
mainSb.GroupBy("tag_key", "tag_type", "tag_data_type")
mainSb.OrderBy("priority")
mainSb.Limit(limit)
query, args := mainSb.BuildWithFlavor(sqlbuilder.ClickHouse)
rows, err := t.telemetrystore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetTracesKeys.Error())
}
defer rows.Close()
keys := []*telemetrytypes.TelemetryFieldKey{}
for rows.Next() {
var name string
var fieldContext telemetrytypes.FieldContext
var fieldDataType telemetrytypes.FieldDataType
var priority uint8
err = rows.Scan(&name, &fieldContext, &fieldDataType, &priority)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetTracesKeys.Error())
}
key, ok := mapOfKeys[name+";"+fieldContext.StringValue()+";"+fieldDataType.StringValue()]
// if there is no materialised column, create a key with the field context and data type
if !ok {
key = &telemetrytypes.TelemetryFieldKey{
Name: name,
FieldContext: fieldContext,
FieldDataType: fieldDataType,
}
}
keys = append(keys, key)
}
if rows.Err() != nil {
return nil, errors.Wrapf(rows.Err(), errors.TypeInternal, errors.CodeInternal, ErrFailedToGetTracesKeys.Error())
}
return keys, nil
}
// logsTblStatementToFieldKeys returns materialised attribute/resource/scope keys from the logs table
func (t *telemetryMetaStore) logsTblStatementToFieldKeys(ctx context.Context) ([]*telemetrytypes.TelemetryFieldKey, error) {
query := fmt.Sprintf("SHOW CREATE TABLE %s.%s", t.logsDBName, t.logsV2TblName)
statements := []telemetrytypes.ShowCreateTableStatement{}
err := t.telemetrystore.ClickhouseDB().Select(ctx, &statements, query)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetTblStatement.Error())
}
return ExtractFieldKeysFromTblStatement(statements[0].Statement)
}
// getLogsKeys returns the keys from the logs that match the field selection criteria
func (t *telemetryMetaStore) getLogsKeys(ctx context.Context, fieldKeySelectors []*telemetrytypes.FieldKeySelector) ([]*telemetrytypes.TelemetryFieldKey, error) {
if len(fieldKeySelectors) == 0 {
return nil, nil
}
// pre-fetch the materialised keys from the logs table
matKeys, err := t.logsTblStatementToFieldKeys(ctx)
if err != nil {
return nil, err
}
mapOfKeys := make(map[string]*telemetrytypes.TelemetryFieldKey)
for _, key := range matKeys {
mapOfKeys[key.Name+";"+key.FieldContext.StringValue()+";"+key.FieldDataType.StringValue()] = key
}
sb := sqlbuilder.Select("tag_key", "tag_type", "tag_data_type", `
CASE
WHEN tag_type = 'logfield' THEN 1
WHEN tag_type = 'resource' THEN 2
WHEN tag_type = 'scope' THEN 3
WHEN tag_type = 'tag' THEN 4
ELSE 5
END as priority`).From(t.logsDBName + "." + t.logsFieldsTblName)
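// as with traces, the outer query orders by this priority so intrinsic log
// fields surface ahead of resource, scope, and attribute keys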
var limit int
conds := []string{}
for _, fieldKeySelector := range fieldKeySelectors {
if fieldKeySelector.StartUnixMilli != 0 {
conds = append(conds, sb.GE("unix_milli", fieldKeySelector.StartUnixMilli))
}
if fieldKeySelector.EndUnixMilli != 0 {
conds = append(conds, sb.LE("unix_milli", fieldKeySelector.EndUnixMilli))
}
// key part of the selector
fieldKeyConds := []string{}
if fieldKeySelector.SelectorMatchType == telemetrytypes.FieldSelectorMatchTypeExact {
fieldKeyConds = append(fieldKeyConds, sb.E("tag_key", fieldKeySelector.Name))
} else {
fieldKeyConds = append(fieldKeyConds, sb.Like("tag_key", "%"+fieldKeySelector.Name+"%"))
}
// now look at the field context
if fieldKeySelector.FieldContext != telemetrytypes.FieldContextUnspecified {
fieldKeyConds = append(fieldKeyConds, sb.E("tag_type", fieldKeySelector.FieldContext.TagType()))
}
// now look at the field data type
if fieldKeySelector.FieldDataType != telemetrytypes.FieldDataTypeUnspecified {
fieldKeyConds = append(fieldKeyConds, sb.E("tag_data_type", fieldKeySelector.FieldDataType.TagDataType()))
}
conds = append(conds, sb.And(fieldKeyConds...))
limit += fieldKeySelector.Limit
}
sb.Where(sb.Or(conds...))
if limit == 0 {
limit = 1000
}
mainSb := sqlbuilder.Select("tag_key", "tag_type", "tag_data_type", "max(priority) as priority")
mainSb.From(mainSb.BuilderAs(sb, "sub_query"))
mainSb.GroupBy("tag_key", "tag_type", "tag_data_type")
mainSb.OrderBy("priority")
mainSb.Limit(limit)
query, args := mainSb.BuildWithFlavor(sqlbuilder.ClickHouse)
rows, err := t.telemetrystore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetLogsKeys.Error())
}
defer rows.Close()
keys := []*telemetrytypes.TelemetryFieldKey{}
for rows.Next() {
var name string
var fieldContext telemetrytypes.FieldContext
var fieldDataType telemetrytypes.FieldDataType
var priority uint8
err = rows.Scan(&name, &fieldContext, &fieldDataType, &priority)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetLogsKeys.Error())
}
key, ok := mapOfKeys[name+";"+fieldContext.StringValue()+";"+fieldDataType.StringValue()]
// if there is no materialised column, create a key with the field context and data type
if !ok {
key = &telemetrytypes.TelemetryFieldKey{
Name: name,
FieldContext: fieldContext,
FieldDataType: fieldDataType,
}
}
keys = append(keys, key)
}
if rows.Err() != nil {
return nil, errors.Wrapf(rows.Err(), errors.TypeInternal, errors.CodeInternal, ErrFailedToGetLogsKeys.Error())
}
return keys, nil
}
// getMetricsKeys returns the keys from the metrics that match the field selection criteria
// TODO(srikanthccv): update the implementation after the dot metrics migration is done
func (t *telemetryMetaStore) getMetricsKeys(ctx context.Context, fieldKeySelectors []*telemetrytypes.FieldKeySelector) ([]*telemetrytypes.TelemetryFieldKey, error) {
if len(fieldKeySelectors) == 0 {
return nil, nil
}
var whereClause, innerWhereClause string
var limit int
args := []any{}
for _, fieldKeySelector := range fieldKeySelectors {
if fieldKeySelector.MetricContext != nil {
innerWhereClause += "metric_name IN ? AND"
args = append(args, fieldKeySelector.MetricContext.MetricName)
}
}
innerWhereClause += " __normalized = true"
for idx, fieldKeySelector := range fieldKeySelectors {
if fieldKeySelector.SelectorMatchType == telemetrytypes.FieldSelectorMatchTypeExact {
whereClause += "(distinctTagKey = ? AND distinctTagKey NOT LIKE '\\_\\_%%')"
args = append(args, fieldKeySelector.Name)
} else {
whereClause += "(distinctTagKey ILIKE ? AND distinctTagKey NOT LIKE '\\_\\_%%')"
args = append(args, fmt.Sprintf("%%%s%%", fieldKeySelector.Name))
}
if idx != len(fieldKeySelectors)-1 {
whereClause += " OR "
}
limit += fieldKeySelector.Limit
}
args = append(args, limit)
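// innerWhereClause and whereClause are spliced into the Sprintf format string
// below, so the %% escapes in the LIKE patterns become literal % in the SQL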
query := fmt.Sprintf(`
SELECT
arrayJoin(tagKeys) AS distinctTagKey
FROM (
SELECT JSONExtractKeys(labels) AS tagKeys
FROM %s.%s
WHERE `+innerWhereClause+`
GROUP BY tagKeys
)
WHERE `+whereClause+`
GROUP BY distinctTagKey
LIMIT ?
`, t.metricsDBName, t.timeseries1WTblName)
rows, err := t.telemetrystore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetMetricsKeys.Error())
}
defer rows.Close()
keys := []*telemetrytypes.TelemetryFieldKey{}
for rows.Next() {
var name string
err = rows.Scan(&name)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetMetricsKeys.Error())
}
key := &telemetrytypes.TelemetryFieldKey{
Name: name,
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
}
keys = append(keys, key)
}
if rows.Err() != nil {
return nil, errors.Wrapf(rows.Err(), errors.TypeInternal, errors.CodeInternal, ErrFailedToGetMetricsKeys.Error())
}
return keys, nil
}
func (t *telemetryMetaStore) GetKeys(ctx context.Context, fieldKeySelector *telemetrytypes.FieldKeySelector) (map[string][]*telemetrytypes.TelemetryFieldKey, error) {
var keys []*telemetrytypes.TelemetryFieldKey
var err error
switch fieldKeySelector.Signal {
case telemetrytypes.SignalTraces:
keys, err = t.getTracesKeys(ctx, []*telemetrytypes.FieldKeySelector{fieldKeySelector})
case telemetrytypes.SignalLogs:
keys, err = t.getLogsKeys(ctx, []*telemetrytypes.FieldKeySelector{fieldKeySelector})
case telemetrytypes.SignalMetrics:
keys, err = t.getMetricsKeys(ctx, []*telemetrytypes.FieldKeySelector{fieldKeySelector})
case telemetrytypes.SignalUnspecified:
// get traces keys
tracesKeys, err := t.getTracesKeys(ctx, []*telemetrytypes.FieldKeySelector{fieldKeySelector})
if err != nil {
return nil, err
}
keys = append(keys, tracesKeys...)
// get logs keys
logsKeys, err := t.getLogsKeys(ctx, []*telemetrytypes.FieldKeySelector{fieldKeySelector})
if err != nil {
return nil, err
}
keys = append(keys, logsKeys...)
// get metrics keys
metricsKeys, err := t.getMetricsKeys(ctx, []*telemetrytypes.FieldKeySelector{fieldKeySelector})
if err != nil {
return nil, err
}
keys = append(keys, metricsKeys...)
}
if err != nil {
return nil, err
}
mapOfKeys := make(map[string][]*telemetrytypes.TelemetryFieldKey)
for _, key := range keys {
mapOfKeys[key.Name] = append(mapOfKeys[key.Name], key)
}
return mapOfKeys, nil
}
func (t *telemetryMetaStore) GetKeysMulti(ctx context.Context, fieldKeySelectors []*telemetrytypes.FieldKeySelector) (map[string][]*telemetrytypes.TelemetryFieldKey, error) {
logsSelectors := []*telemetrytypes.FieldKeySelector{}
tracesSelectors := []*telemetrytypes.FieldKeySelector{}
metricsSelectors := []*telemetrytypes.FieldKeySelector{}
for _, fieldKeySelector := range fieldKeySelectors {
switch fieldKeySelector.Signal {
case telemetrytypes.SignalLogs:
logsSelectors = append(logsSelectors, fieldKeySelector)
case telemetrytypes.SignalTraces:
tracesSelectors = append(tracesSelectors, fieldKeySelector)
case telemetrytypes.SignalMetrics:
metricsSelectors = append(metricsSelectors, fieldKeySelector)
case telemetrytypes.SignalUnspecified:
logsSelectors = append(logsSelectors, fieldKeySelector)
tracesSelectors = append(tracesSelectors, fieldKeySelector)
metricsSelectors = append(metricsSelectors, fieldKeySelector)
}
}
logsKeys, err := t.getLogsKeys(ctx, logsSelectors)
if err != nil {
return nil, err
}
tracesKeys, err := t.getTracesKeys(ctx, tracesSelectors)
if err != nil {
return nil, err
}
metricsKeys, err := t.getMetricsKeys(ctx, metricsSelectors)
if err != nil {
return nil, err
}
mapOfKeys := make(map[string][]*telemetrytypes.TelemetryFieldKey)
for _, key := range logsKeys {
mapOfKeys[key.Name] = append(mapOfKeys[key.Name], key)
}
for _, key := range tracesKeys {
mapOfKeys[key.Name] = append(mapOfKeys[key.Name], key)
}
for _, key := range metricsKeys {
mapOfKeys[key.Name] = append(mapOfKeys[key.Name], key)
}
return mapOfKeys, nil
}
func (t *telemetryMetaStore) GetKey(ctx context.Context, fieldKeySelector *telemetrytypes.FieldKeySelector) ([]*telemetrytypes.TelemetryFieldKey, error) {
keys, err := t.GetKeys(ctx, fieldKeySelector)
if err != nil {
return nil, err
}
return keys[fieldKeySelector.Name], nil
}
func (t *telemetryMetaStore) getRelatedValues(ctx context.Context, fieldValueSelector *telemetrytypes.FieldValueSelector) ([]string, error) {
args := []any{}
var andConditions []string
andConditions = append(andConditions, `unix_milli >= ?`)
args = append(args, fieldValueSelector.StartUnixMilli)
andConditions = append(andConditions, `unix_milli <= ?`)
args = append(args, fieldValueSelector.EndUnixMilli)
if len(fieldValueSelector.ExistingQuery) != 0 {
// TODO(srikanthccv): add the existing query to the where clause
}
whereClause := strings.Join(andConditions, " AND ")
key := telemetrytypes.TelemetryFieldKey{
Name: fieldValueSelector.Name,
Signal: fieldValueSelector.Signal,
FieldContext: fieldValueSelector.FieldContext,
FieldDataType: fieldValueSelector.FieldDataType,
}
// TODO(srikanthccv): add the select column
selectColumn, _ := t.conditionBuilder.GetTableFieldName(ctx, &key)
args = append(args, fieldValueSelector.Limit)
filterSubQuery := fmt.Sprintf(
"SELECT DISTINCT %s FROM %s.%s WHERE %s LIMIT ?",
selectColumn,
t.relatedMetadataDBName,
t.relatedMetadataTblName,
whereClause,
)
zap.L().Debug("filterSubQuery for related values", zap.String("query", filterSubQuery), zap.Any("args", args))
rows, err := t.telemetrystore.ClickhouseDB().Query(ctx, filterSubQuery, args...)
if err != nil {
return nil, ErrFailedToGetRelatedValues
}
defer rows.Close()
var attributeValues []string
for rows.Next() {
var value string
if err := rows.Scan(&value); err != nil {
return nil, ErrFailedToGetRelatedValues
}
if value != "" {
attributeValues = append(attributeValues, value)
}
}
return attributeValues, nil
}
func (t *telemetryMetaStore) GetRelatedValues(ctx context.Context, fieldValueSelector *telemetrytypes.FieldValueSelector) ([]string, error) {
return t.getRelatedValues(ctx, fieldValueSelector)
}
func (t *telemetryMetaStore) getSpanFieldValues(ctx context.Context, fieldValueSelector *telemetrytypes.FieldValueSelector) (*telemetrytypes.TelemetryFieldValues, error) {
// build the query to get the values from the spans that match the field selection criteria
limit := fieldValueSelector.Limit
sb := sqlbuilder.Select("DISTINCT string_value, number_value").From(t.tracesDBName + "." + t.tracesFieldsTblName)
if fieldValueSelector.Name != "" {
sb.Where(sb.E("tag_key", fieldValueSelector.Name))
}
// now look at the field context
if fieldValueSelector.FieldContext != telemetrytypes.FieldContextUnspecified {
sb.Where(sb.E("tag_type", fieldValueSelector.FieldContext.TagType()))
}
// now look at the field data type
if fieldValueSelector.FieldDataType != telemetrytypes.FieldDataTypeUnspecified {
sb.Where(sb.E("tag_data_type", fieldValueSelector.FieldDataType.TagDataType()))
}
if fieldValueSelector.Value != "" {
if fieldValueSelector.FieldDataType == telemetrytypes.FieldDataTypeString {
sb.Where(sb.Like("string_value", "%"+fieldValueSelector.Value+"%"))
} else if fieldValueSelector.FieldDataType == telemetrytypes.FieldDataTypeNumber {
sb.Where(sb.IsNotNull("number_value"))
sb.Where(sb.Like("toString(number_value)", "%"+fieldValueSelector.Value+"%"))
}
}
if limit == 0 {
limit = 50
}
sb.Limit(limit)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
rows, err := t.telemetrystore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetLogsKeys.Error())
}
defer rows.Close()
values := &telemetrytypes.TelemetryFieldValues{}
seen := make(map[string]bool)
for rows.Next() {
var stringValue string
var numberValue float64
if err := rows.Scan(&stringValue, &numberValue); err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetLogsKeys.Error())
}
if _, ok := seen[stringValue]; !ok {
values.StringValues = append(values.StringValues, stringValue)
seen[stringValue] = true
}
if _, ok := seen[fmt.Sprintf("%f", numberValue)]; !ok && numberValue != 0 {
values.NumberValues = append(values.NumberValues, numberValue)
seen[fmt.Sprintf("%f", numberValue)] = true
}
}
return values, nil
}
func (t *telemetryMetaStore) getLogFieldValues(ctx context.Context, fieldValueSelector *telemetrytypes.FieldValueSelector) (*telemetrytypes.TelemetryFieldValues, error) {
// build the query to get the values from the logs that match the field selection criteria
limit := fieldValueSelector.Limit
sb := sqlbuilder.Select("DISTINCT string_value, number_value").From(t.logsDBName + "." + t.logsFieldsTblName)
if fieldValueSelector.Name != "" {
sb.Where(sb.E("tag_key", fieldValueSelector.Name))
}
if fieldValueSelector.FieldContext != telemetrytypes.FieldContextUnspecified {
sb.Where(sb.E("tag_type", fieldValueSelector.FieldContext.TagType()))
}
if fieldValueSelector.FieldDataType != telemetrytypes.FieldDataTypeUnspecified {
sb.Where(sb.E("tag_data_type", fieldValueSelector.FieldDataType.TagDataType()))
}
if fieldValueSelector.Value != "" {
if fieldValueSelector.FieldDataType == telemetrytypes.FieldDataTypeString {
sb.Where(sb.Like("string_value", "%"+fieldValueSelector.Value+"%"))
} else if fieldValueSelector.FieldDataType == telemetrytypes.FieldDataTypeNumber {
sb.Where(sb.IsNotNull("number_value"))
sb.Where(sb.Like("toString(number_value)", "%"+fieldValueSelector.Value+"%"))
}
}
if limit == 0 {
limit = 50
}
sb.Limit(limit)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
rows, err := t.telemetrystore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetLogsKeys.Error())
}
defer rows.Close()
values := &telemetrytypes.TelemetryFieldValues{}
seen := make(map[string]bool)
for rows.Next() {
var stringValue string
var numberValue float64
if err := rows.Scan(&stringValue, &numberValue); err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, ErrFailedToGetLogsKeys.Error())
}
if _, ok := seen[stringValue]; !ok {
values.StringValues = append(values.StringValues, stringValue)
seen[stringValue] = true
}
if _, ok := seen[fmt.Sprintf("%f", numberValue)]; !ok && numberValue != 0 {
values.NumberValues = append(values.NumberValues, numberValue)
seen[fmt.Sprintf("%f", numberValue)] = true
}
}
return values, nil
}
func (t *telemetryMetaStore) getMetricFieldValues(_ context.Context, fieldValueSelector *telemetrytypes.FieldValueSelector) (*telemetrytypes.TelemetryFieldValues, error) {
// TODO(srikanthccv): implement this. use new tables?
return nil, nil
}
func (t *telemetryMetaStore) GetAllValues(ctx context.Context, fieldValueSelector *telemetrytypes.FieldValueSelector) (*telemetrytypes.TelemetryFieldValues, error) {
var values *telemetrytypes.TelemetryFieldValues
var err error
switch fieldValueSelector.Signal {
case telemetrytypes.SignalTraces:
values, err = t.getSpanFieldValues(ctx, fieldValueSelector)
case telemetrytypes.SignalLogs:
values, err = t.getLogFieldValues(ctx, fieldValueSelector)
case telemetrytypes.SignalMetrics:
values, err = t.getMetricFieldValues(ctx, fieldValueSelector)
}
if err != nil {
return nil, err
}
return values, nil
}
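// Illustrative sketch, not part of the diff: wiring the metastore above and
// fanning one selector out across signals. store is a placeholder for a wired
// telemetrystore.TelemetryStore; the table names mirror the package constants
// used by the test below.
func exampleGetKeysMulti(ctx context.Context, store telemetrystore.TelemetryStore) error {
metaStore, err := NewTelemetryMetaStore(
store,
"signoz_traces", "distributed_tag_attributes_v2", "distributed_signoz_index_v3",
"signoz_metrics", "distributed_time_series_v4_1week", "distributed_time_series_v4_1week",
"signoz_logs", "distributed_logs_v2", "distributed_tag_attributes_v2",
DBName, AttributesMetadataLocalTableName,
)
if err != nil {
return err
}
keys, err := metaStore.GetKeysMulti(ctx, []*telemetrytypes.FieldKeySelector{
{Signal: telemetrytypes.SignalLogs, Name: "http.method", Limit: 10},
{Signal: telemetrytypes.SignalTraces, Name: "http.method", Limit: 10},
})
if err != nil {
return err
}
// keys["http.method"] holds one entry per matching (context, data type) pair across signals
_ = keys
return nil
}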

View File

@@ -0,0 +1,86 @@
package telemetrymetadata
import (
"context"
"fmt"
"regexp"
"testing"
"github.com/SigNoz/signoz/pkg/telemetrylogs"
"github.com/SigNoz/signoz/pkg/telemetrymetrics"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/telemetrystore/telemetrystoretest"
"github.com/SigNoz/signoz/pkg/telemetrytraces"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
cmock "github.com/srikanthccv/ClickHouse-go-mock"
)
type regexMatcher struct {
}
func (m *regexMatcher) Match(expectedSQL, actualSQL string) error {
re, err := regexp.Compile(expectedSQL)
if err != nil {
return err
}
if !re.MatchString(actualSQL) {
return fmt.Errorf("expected query to contain %s, got %s", expectedSQL, actualSQL)
}
return nil
}
func TestGetKeys(t *testing.T) {
mockTelemetryStore := telemetrystoretest.New(telemetrystore.Config{}, &regexMatcher{})
mock := mockTelemetryStore.Mock()
metadata, err := NewTelemetryMetaStore(
mockTelemetryStore,
telemetrytraces.DBName,
telemetrytraces.TagAttributesV2TableName,
telemetrytraces.SpanIndexV3TableName,
telemetrymetrics.DBName,
telemetrymetrics.TimeseriesV41weekTableName,
telemetrymetrics.TimeseriesV41weekTableName,
telemetrylogs.DBName,
telemetrylogs.LogsV2TableName,
telemetrylogs.TagAttributesV2TableName,
DBName,
AttributesMetadataLocalTableName,
)
if err != nil {
t.Fatalf("Failed to create telemetry metadata store: %v", err)
}
rows := cmock.NewRows([]cmock.ColumnType{
{Name: "statement", Type: "String"},
}, [][]any{{"CREATE TABLE signoz_traces.signoz_index_v3"}})
mock.
ExpectSelect("SHOW CREATE TABLE signoz_traces.distributed_signoz_index_v3").
WillReturnRows(rows)
query := `SELECT.*`
mock.ExpectQuery(query).
WithArgs("%http.method%", telemetrytypes.FieldContextSpan.TagType(), telemetrytypes.FieldDataTypeString.TagDataType(), 10).
WillReturnRows(cmock.NewRows([]cmock.ColumnType{
{Name: "tag_key", Type: "String"},
{Name: "tag_type", Type: "String"},
{Name: "tag_data_type", Type: "String"},
{Name: "priority", Type: "UInt8"},
}, [][]any{{"http.method", "tag", "String", 1}, {"http.method", "tag", "String", 1}}))
keys, err := metadata.GetKeys(context.Background(), &telemetrytypes.FieldKeySelector{
Signal: telemetrytypes.SignalTraces,
FieldContext: telemetrytypes.FieldContextSpan,
FieldDataType: telemetrytypes.FieldDataTypeString,
Name: "http.method",
Limit: 10,
})
if err != nil {
t.Fatalf("Failed to get keys: %v", err)
}
t.Logf("Keys: %v", keys)
}

View File

@@ -0,0 +1,132 @@
package telemetrymetadata
import (
"strings"
"github.com/AfterShip/clickhouse-sql-parser/parser"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
)
// TelemetryFieldVisitor is an AST visitor for extracting telemetry fields
type TelemetryFieldVisitor struct {
parser.DefaultASTVisitor
Fields []*telemetrytypes.TelemetryFieldKey
}
func NewTelemetryFieldVisitor() *TelemetryFieldVisitor {
return &TelemetryFieldVisitor{
Fields: make([]*telemetrytypes.TelemetryFieldKey, 0),
}
}
// VisitColumnDef is called when visiting a column definition
func (v *TelemetryFieldVisitor) VisitColumnDef(expr *parser.ColumnDef) error {
// Check if this is a materialized column with DEFAULT expression
if expr.DefaultExpr == nil {
return nil
}
// Parse column name to extract context and data type
columnName := expr.Name.String()
// Remove backticks if present
columnName = strings.TrimPrefix(columnName, "`")
columnName = strings.TrimSuffix(columnName, "`")
// Parse the column name to extract components
parts := strings.Split(columnName, "_")
if len(parts) < 2 {
return nil
}
context := parts[0]
dataType := parts[1]
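// e.g. "resource_string_k8s$$cluster$$name" splits to context "resource" and
// data type "string"; the attribute key itself is recovered from the DEFAULT
// expression below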
// Check if this is a valid telemetry column
var fieldContext telemetrytypes.FieldContext
switch context {
case "resource":
fieldContext = telemetrytypes.FieldContextResource
case "scope":
fieldContext = telemetrytypes.FieldContextScope
case "attribute":
fieldContext = telemetrytypes.FieldContextAttribute
default:
return nil // Not a telemetry column
}
// Check and convert data type
var fieldDataType telemetrytypes.FieldDataType
switch dataType {
case "string":
fieldDataType = telemetrytypes.FieldDataTypeString
case "bool":
fieldDataType = telemetrytypes.FieldDataTypeBool
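// integer, float, and generic number columns are all folded into float64,
// matching the Float64 value type of the attributes_number map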
case "int", "int64":
fieldDataType = telemetrytypes.FieldDataTypeFloat64
case "float", "float64":
fieldDataType = telemetrytypes.FieldDataTypeFloat64
case "number":
fieldDataType = telemetrytypes.FieldDataTypeFloat64
default:
return nil // Unknown data type
}
// Extract field name from the DEFAULT expression
// The DEFAULT expression should be something like: resources_string['k8s.cluster.name']
// We need to extract the key inside the square brackets
defaultExprStr := expr.DefaultExpr.String()
// Look for the pattern: map['key']
startIdx := strings.Index(defaultExprStr, "['")
endIdx := strings.Index(defaultExprStr, "']")
if startIdx == -1 || endIdx == -1 || startIdx+2 >= endIdx {
return nil // Invalid DEFAULT expression format
}
fieldName := defaultExprStr[startIdx+2 : endIdx]
// Create and store the TelemetryFieldKey
field := telemetrytypes.TelemetryFieldKey{
Name: fieldName,
FieldContext: fieldContext,
FieldDataType: fieldDataType,
Materialized: true,
}
v.Fields = append(v.Fields, &field)
return nil
}
func ExtractFieldKeysFromTblStatement(statement string) ([]*telemetrytypes.TelemetryFieldKey, error) {
// Parse the CREATE TABLE statement using the ClickHouse parser
p := parser.NewParser(statement)
stmts, err := p.ParseStmts()
if err != nil {
return nil, err
}
// Create a visitor to collect telemetry fields
visitor := NewTelemetryFieldVisitor()
// Visit each statement
for _, stmt := range stmts {
// We're looking for CreateTable statements
createTable, ok := stmt.(*parser.CreateTable)
if !ok {
continue
}
// Visit the table schema to extract column definitions
if createTable.TableSchema != nil {
for _, column := range createTable.TableSchema.Columns {
if err := column.Accept(visitor); err != nil {
return nil, err
}
}
}
}
return visitor.Fields, nil
}

View File

@@ -0,0 +1,148 @@
package telemetrymetadata
import (
"slices"
"testing"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
)
func TestExtractFieldKeysFromTblStatement(t *testing.T) {
var statement = `CREATE TABLE signoz_logs.logs_v2
(
` + "`ts_bucket_start`" + ` UInt64 CODEC(DoubleDelta, LZ4),
` + "`resource_fingerprint`" + ` String CODEC(ZSTD(1)),
` + "`timestamp`" + ` UInt64 CODEC(DoubleDelta, LZ4),
` + "`observed_timestamp`" + ` UInt64 CODEC(DoubleDelta, LZ4),
` + "`id`" + ` String CODEC(ZSTD(1)),
` + "`trace_id`" + ` String CODEC(ZSTD(1)),
` + "`span_id`" + ` String CODEC(ZSTD(1)),
` + "`trace_flags`" + ` UInt32,
` + "`severity_text`" + ` LowCardinality(String) CODEC(ZSTD(1)),
` + "`severity_number`" + ` UInt8,
` + "`body`" + ` String CODEC(ZSTD(2)),
` + "`attributes_string`" + ` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
` + "`attributes_number`" + ` Map(LowCardinality(String), Float64) CODEC(ZSTD(1)),
` + "`attributes_bool`" + ` Map(LowCardinality(String), Bool) CODEC(ZSTD(1)),
` + "`resources_string`" + ` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
` + "`scope_name`" + ` String CODEC(ZSTD(1)),
` + "`scope_version`" + ` String CODEC(ZSTD(1)),
` + "`scope_string`" + ` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
` + "`attribute_number_input_size`" + ` Int64 DEFAULT attributes_number['input_size'] CODEC(ZSTD(1)),
` + "`attribute_number_input_size_exists`" + ` Bool DEFAULT if(mapContains(attributes_number, 'input_size') != 0, true, false) CODEC(ZSTD(1)),
` + "`attribute_string_log$$iostream`" + ` String DEFAULT attributes_string['log.iostream'] CODEC(ZSTD(1)),
` + "`attribute_string_log$$iostream_exists`" + ` Bool DEFAULT if(mapContains(attributes_string, 'log.iostream') != 0, true, false) CODEC(ZSTD(1)),
` + "`attribute_string_log$$file$$path`" + ` String DEFAULT attributes_string['log.file.path'] CODEC(ZSTD(1)),
` + "`attribute_string_log$$file$$path_exists`" + ` Bool DEFAULT if(mapContains(attributes_string, 'log.file.path') != 0, true, false) CODEC(ZSTD(1)),
` + "`resource_string_k8s$$cluster$$name`" + ` String DEFAULT resources_string['k8s.cluster.name'] CODEC(ZSTD(1)),
` + "`resource_string_k8s$$cluster$$name_exists`" + ` Bool DEFAULT if(mapContains(resources_string, 'k8s.cluster.name') != 0, true, false) CODEC(ZSTD(1)),
` + "`resource_string_k8s$$namespace$$name`" + ` String DEFAULT resources_string['k8s.namespace.name'] CODEC(ZSTD(1)),
` + "`resource_string_k8s$$namespace$$name_exists`" + ` Bool DEFAULT if(mapContains(resources_string, 'k8s.namespace.name') != 0, true, false) CODEC(ZSTD(1)),
` + "`resource_string_k8s$$pod$$name`" + ` String DEFAULT resources_string['k8s.pod.name'] CODEC(ZSTD(1)),
` + "`resource_string_k8s$$pod$$name_exists`" + ` Bool DEFAULT if(mapContains(resources_string, 'k8s.pod.name') != 0, true, false) CODEC(ZSTD(1)),
` + "`resource_string_k8s$$node$$name`" + ` String DEFAULT resources_string['k8s.node.name'] CODEC(ZSTD(1)),
` + "`resource_string_k8s$$node$$name_exists`" + ` Bool DEFAULT if(mapContains(resources_string, 'k8s.node.name') != 0, true, false) CODEC(ZSTD(1)),
` + "`resource_string_k8s$$container$$name`" + ` String DEFAULT resources_string['k8s.container.name'] CODEC(ZSTD(1)),
` + "`resource_string_k8s$$container$$name_exists`" + ` Bool DEFAULT if(mapContains(resources_string, 'k8s.container.name') != 0, true, false) CODEC(ZSTD(1)),
` + "`resource_string_k8s$$deployment$$name`" + ` String DEFAULT resources_string['k8s.deployment.name'] CODEC(ZSTD(1)),
` + "`resource_string_k8s$$deployment$$name_exists`" + ` Bool DEFAULT if(mapContains(resources_string, 'k8s.deployment.name') != 0, true, false) CODEC(ZSTD(1)),
` + "`attribute_string_processor`" + ` String DEFAULT attributes_string['processor'] CODEC(ZSTD(1)),
` + "`attribute_string_processor_exists`" + ` Bool DEFAULT if(mapContains(attributes_string, 'processor') != 0, true, false) CODEC(ZSTD(1)),
INDEX body_idx lower(body) TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
INDEX id_minmax id TYPE minmax GRANULARITY 1,
INDEX severity_number_idx severity_number TYPE set(25) GRANULARITY 4,
INDEX severity_text_idx severity_text TYPE set(25) GRANULARITY 4,
INDEX trace_flags_idx trace_flags TYPE bloom_filter GRANULARITY 4,
INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4,
INDEX ` + "`resource_string_k8s$$cluster$$name_idx`" + ` ` + "`resource_string_k8s$$cluster$$name`" + ` TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX ` + "`resource_string_k8s$$namespace$$name_idx`" + ` ` + "`resource_string_k8s$$namespace$$name`" + ` TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX ` + "`resource_string_k8s$$pod$$name_idx`" + ` ` + "`resource_string_k8s$$pod$$name`" + ` TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX ` + "`resource_string_k8s$$node$$name_idx`" + ` ` + "`resource_string_k8s$$node$$name`" + ` TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX ` + "`resource_string_k8s$$container$$name_idx`" + ` ` + "`resource_string_k8s$$container$$name`" + ` TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX ` + "`resource_string_k8s$$deployment$$name_idx`" + ` ` + "`resource_string_k8s$$deployment$$name`" + ` TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX attribute_string_processor_idx attribute_string_processor TYPE bloom_filter(0.01) GRANULARITY 64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (ts_bucket_start, resource_fingerprint, severity_text, timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(2592000)
SETTINGS ttl_only_drop_parts = 1, index_granularity = 8192`
keys, err := ExtractFieldKeysFromTblStatement(statement)
if err != nil {
t.Fatalf("failed to extract field keys from tbl statement: %v", err)
}
// some expected keys
expectedKeys := []*telemetrytypes.TelemetryFieldKey{
{
Name: "k8s.pod.name",
FieldContext: telemetrytypes.FieldContextResource,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "k8s.cluster.name",
FieldContext: telemetrytypes.FieldContextResource,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "k8s.namespace.name",
FieldContext: telemetrytypes.FieldContextResource,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "k8s.deployment.name",
FieldContext: telemetrytypes.FieldContextResource,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "k8s.node.name",
FieldContext: telemetrytypes.FieldContextResource,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "k8s.container.name",
FieldContext: telemetrytypes.FieldContextResource,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "processor",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "input_size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeFloat64,
Materialized: true,
},
{
Name: "log.iostream",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
{
Name: "log.file.path",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
Materialized: true,
},
}
for _, key := range expectedKeys {
if !slices.ContainsFunc(keys, func(k *telemetrytypes.TelemetryFieldKey) bool {
return k.Name == key.Name && k.FieldContext == key.FieldContext && k.FieldDataType == key.FieldDataType && k.Materialized == key.Materialized
}) {
t.Errorf("expected key %v not found", key)
}
}
}

View File

@@ -0,0 +1,7 @@
package telemetrymetadata
const (
DBName = "signoz_metadata"
AttributesMetadataTableName = "distributed_attributes_metadata"
AttributesMetadataLocalTableName = "attributes_metadata"
)

View File

@@ -0,0 +1,21 @@
package telemetrymetrics
const (
DBName = "signoz_metrics"
SamplesV4TableName = "distributed_samples_v4"
SamplesV4LocalTableName = "samples_v4"
SamplesV4Agg5mTableName = "distributed_samples_v4_agg_5m"
SamplesV4Agg5mLocalTableName = "samples_v4_agg_5m"
SamplesV4Agg30mTableName = "distributed_samples_v4_agg_30m"
SamplesV4Agg30mLocalTableName = "samples_v4_agg_30m"
ExpHistogramTableName = "distributed_exp_hist"
ExpHistogramLocalTableName = "exp_hist"
TimeseriesV4TableName = "distributed_time_series_v4"
TimeseriesV4LocalTableName = "time_series_v4"
TimeseriesV46hrsTableName = "distributed_time_series_v4_6hrs"
TimeseriesV46hrsLocalTableName = "time_series_v4_6hrs"
TimeseriesV41dayTableName = "distributed_time_series_v4_1day"
TimeseriesV41dayLocalTableName = "time_series_v4_1day"
TimeseriesV41weekTableName = "distributed_time_series_v4_1week"
TimeseriesV41weekLocalTableName = "time_series_v4_1week"
)

View File

@@ -0,0 +1,352 @@
package telemetrytraces
import (
"context"
"fmt"
schema "github.com/SigNoz/signoz-otel-collector/cmd/signozschemamigrator/schema_migrator"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
)
var (
indexV3Columns = map[string]*schema.Column{
"ts_bucket_start": {Name: "ts_bucket_start", Type: schema.ColumnTypeUInt64},
"resource_fingerprint": {Name: "resource_fingerprint", Type: schema.ColumnTypeString},
// intrinsic columns
"timestamp": {Name: "timestamp", Type: schema.DateTime64ColumnType{Precision: 9, Timezone: "UTC"}},
"trace_id": {Name: "trace_id", Type: schema.FixedStringColumnType{Length: 32}},
"span_id": {Name: "span_id", Type: schema.ColumnTypeString},
"trace_state": {Name: "trace_state", Type: schema.ColumnTypeString},
"parent_span_id": {Name: "parent_span_id", Type: schema.ColumnTypeString},
"flags": {Name: "flags", Type: schema.ColumnTypeUInt32},
"name": {Name: "name", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"kind": {Name: "kind", Type: schema.ColumnTypeInt8},
"kind_string": {Name: "kind_string", Type: schema.ColumnTypeString},
"duration_nano": {Name: "duration_nano", Type: schema.ColumnTypeUInt64},
"status_code": {Name: "status_code", Type: schema.ColumnTypeInt16},
"status_message": {Name: "status_message", Type: schema.ColumnTypeString},
"status_code_string": {Name: "status_code_string", Type: schema.ColumnTypeString},
// attributes columns
"attributes_string": {Name: "attributes_string", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}},
"attributes_number": {Name: "attributes_number", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeFloat64,
}},
"attributes_bool": {Name: "attributes_bool", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeBool,
}},
"resources_string": {Name: "resources_string", Type: schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}},
"events": {Name: "events", Type: schema.ArrayColumnType{
ElementType: schema.ColumnTypeString,
}},
"links": {Name: "links", Type: schema.ColumnTypeString},
// derived columns
"response_status_code": {Name: "response_status_code", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"external_http_url": {Name: "external_http_url", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"http_url": {Name: "http_url", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"external_http_method": {Name: "external_http_method", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"http_method": {Name: "http_method", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"http_host": {Name: "http_host", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"db_name": {Name: "db_name", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"db_operation": {Name: "db_operation", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"has_error": {Name: "has_error", Type: schema.ColumnTypeBool},
"is_remote": {Name: "is_remote", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
// materialized columns
"resource_string_service$$name": {Name: "resource_string_service$$name", Type: schema.ColumnTypeString},
"attribute_string_http$$route": {Name: "attribute_string_http$$route", Type: schema.ColumnTypeString},
"attribute_string_messaging$$system": {Name: "attribute_string_messaging$$system", Type: schema.ColumnTypeString},
"attribute_string_messaging$$operation": {Name: "attribute_string_messaging$$operation", Type: schema.ColumnTypeString},
"attribute_string_db$$system": {Name: "attribute_string_db$$system", Type: schema.ColumnTypeString},
"attribute_string_rpc$$system": {Name: "attribute_string_rpc$$system", Type: schema.ColumnTypeString},
"attribute_string_rpc$$service": {Name: "attribute_string_rpc$$service", Type: schema.ColumnTypeString},
"attribute_string_rpc$$method": {Name: "attribute_string_rpc$$method", Type: schema.ColumnTypeString},
"attribute_string_peer$$service": {Name: "attribute_string_peer$$service", Type: schema.ColumnTypeString},
// deprecated intrinsic columns
"traceID": {Name: "traceID", Type: schema.FixedStringColumnType{Length: 32}},
"spanID": {Name: "spanID", Type: schema.ColumnTypeString},
"parentSpanID": {Name: "parentSpanID", Type: schema.ColumnTypeString},
"spanKind": {Name: "spanKind", Type: schema.ColumnTypeString},
"durationNano": {Name: "durationNano", Type: schema.ColumnTypeUInt64},
"statusCode": {Name: "statusCode", Type: schema.ColumnTypeInt16},
"statusMessage": {Name: "statusMessage", Type: schema.ColumnTypeString},
"statusCodeString": {Name: "statusCodeString", Type: schema.ColumnTypeString},
// deprecated derived columns
"references": {Name: "references", Type: schema.ColumnTypeString},
"responseStatusCode": {Name: "responseStatusCode", Type: schema.ColumnTypeString},
"externalHttpUrl": {Name: "externalHttpUrl", Type: schema.ColumnTypeString},
"httpUrl": {Name: "httpUrl", Type: schema.ColumnTypeString},
"externalHttpMethod": {Name: "externalHttpMethod", Type: schema.ColumnTypeString},
"httpMethod": {Name: "httpMethod", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"httpHost": {Name: "httpHost", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"dbName": {Name: "dbName", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"dbOperation": {Name: "dbOperation", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"hasError": {Name: "hasError", Type: schema.ColumnTypeBool},
"isRemote": {Name: "isRemote", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"serviceName": {Name: "serviceName", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"httpRoute": {Name: "httpRoute", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"msgSystem": {Name: "msgSystem", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"msgOperation": {Name: "msgOperation", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"dbSystem": {Name: "dbSystem", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"rpcSystem": {Name: "rpcSystem", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"rpcService": {Name: "rpcService", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"rpcMethod": {Name: "rpcMethod", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
"peerService": {Name: "peerService", Type: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString}},
// materialized exists columns
"resource_string_service$$name_exists": {Name: "resource_string_service$$name_exists", Type: schema.ColumnTypeBool},
"attribute_string_http$$route_exists": {Name: "attribute_string_http$$route_exists", Type: schema.ColumnTypeBool},
"attribute_string_messaging$$system_exists": {Name: "attribute_string_messaging$$system_exists", Type: schema.ColumnTypeBool},
"attribute_string_messaging$$operation_exists": {Name: "attribute_string_messaging$$operation_exists", Type: schema.ColumnTypeBool},
"attribute_string_db$$system_exists": {Name: "attribute_string_db$$system_exists", Type: schema.ColumnTypeBool},
"attribute_string_rpc$$system_exists": {Name: "attribute_string_rpc$$system_exists", Type: schema.ColumnTypeBool},
"attribute_string_rpc$$service_exists": {Name: "attribute_string_rpc$$service_exists", Type: schema.ColumnTypeBool},
"attribute_string_rpc$$method_exists": {Name: "attribute_string_rpc$$method_exists", Type: schema.ColumnTypeBool},
"attribute_string_peer$$service_exists": {Name: "attribute_string_peer$$service_exists", Type: schema.ColumnTypeBool},
}
)
// interface check
var _ qbtypes.ConditionBuilder = &conditionBuilder{}
type conditionBuilder struct {
}
func NewConditionBuilder() qbtypes.ConditionBuilder {
return &conditionBuilder{}
}
func (c *conditionBuilder) GetColumn(ctx context.Context, key *telemetrytypes.TelemetryFieldKey) (*schema.Column, error) {
switch key.FieldContext {
case telemetrytypes.FieldContextResource:
return indexV3Columns["resources_string"], nil
case telemetrytypes.FieldContextScope:
// we don't have scope data stored in the spans yet
return nil, qbtypes.ErrColumnNotFound
case telemetrytypes.FieldContextAttribute:
switch key.FieldDataType {
case telemetrytypes.FieldDataTypeString:
return indexV3Columns["attributes_string"], nil
case telemetrytypes.FieldDataTypeInt64, telemetrytypes.FieldDataTypeFloat64, telemetrytypes.FieldDataTypeNumber:
return indexV3Columns["attributes_number"], nil
case telemetrytypes.FieldDataTypeBool:
return indexV3Columns["attributes_bool"], nil
}
case telemetrytypes.FieldContextSpan:
col, ok := indexV3Columns[key.Name]
if !ok {
return nil, qbtypes.ErrColumnNotFound
}
return col, nil
}
return nil, qbtypes.ErrColumnNotFound
}
func (c *conditionBuilder) GetTableFieldName(ctx context.Context, key *telemetrytypes.TelemetryFieldKey) (string, error) {
column, err := c.GetColumn(ctx, key)
if err != nil {
return "", err
}
switch column.Type {
case schema.ColumnTypeString,
schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
schema.ColumnTypeUInt64,
schema.ColumnTypeUInt32,
schema.ColumnTypeInt8,
schema.ColumnTypeInt16,
schema.ColumnTypeBool,
schema.DateTime64ColumnType{Precision: 9, Timezone: "UTC"},
schema.FixedStringColumnType{Length: 32}:
return column.Name, nil
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}:
// a key could have been materialized, if so return the materialized column name
if key.Materialized {
return telemetrytypes.FieldKeyToMaterializedColumnName(key), nil
}
return fmt.Sprintf("%s['%s']", column.Name, key.Name), nil
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeFloat64,
}:
// a key could have been materialized, if so return the materialized column name
if key.Materialized {
return telemetrytypes.FieldKeyToMaterializedColumnName(key), nil
}
return fmt.Sprintf("%s['%s']", column.Name, key.Name), nil
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeBool,
}:
// a key could have been materialized, if so return the materialized column name
if key.Materialized {
return telemetrytypes.FieldKeyToMaterializedColumnName(key), nil
}
return fmt.Sprintf("%s['%s']", column.Name, key.Name), nil
}
// should not reach here
return column.Name, nil
}
func (c *conditionBuilder) GetCondition(
ctx context.Context,
key *telemetrytypes.TelemetryFieldKey,
operator qbtypes.FilterOperator,
value any,
sb *sqlbuilder.SelectBuilder,
) (string, error) {
column, err := c.GetColumn(ctx, key)
if err != nil {
return "", err
}
tblFieldName, err := c.GetTableFieldName(ctx, key)
if err != nil {
return "", err
}
tblFieldName, value = telemetrytypes.DataTypeCollisionHandledFieldName(key, value, tblFieldName)
// regular operators
switch operator {
// regular operators
case qbtypes.FilterOperatorEqual:
return sb.E(tblFieldName, value), nil
case qbtypes.FilterOperatorNotEqual:
return sb.NE(tblFieldName, value), nil
case qbtypes.FilterOperatorGreaterThan:
return sb.G(tblFieldName, value), nil
case qbtypes.FilterOperatorGreaterThanOrEq:
return sb.GE(tblFieldName, value), nil
case qbtypes.FilterOperatorLessThan:
return sb.LT(tblFieldName, value), nil
case qbtypes.FilterOperatorLessThanOrEq:
return sb.LE(tblFieldName, value), nil
// like and not like
case qbtypes.FilterOperatorLike:
return sb.Like(tblFieldName, value), nil
case qbtypes.FilterOperatorNotLike:
return sb.NotLike(tblFieldName, value), nil
case qbtypes.FilterOperatorILike:
return sb.ILike(tblFieldName, value), nil
case qbtypes.FilterOperatorNotILike:
return sb.NotILike(tblFieldName, value), nil
case qbtypes.FilterOperatorContains:
return sb.ILike(tblFieldName, fmt.Sprintf("%%%s%%", value)), nil
case qbtypes.FilterOperatorNotContains:
return sb.NotILike(tblFieldName, fmt.Sprintf("%%%s%%", value)), nil
case qbtypes.FilterOperatorRegexp:
exp := fmt.Sprintf(`match(%s, %s)`, tblFieldName, sb.Var(value))
return sb.And(exp), nil
case qbtypes.FilterOperatorNotRegexp:
exp := fmt.Sprintf(`not match(%s, %s)`, tblFieldName, sb.Var(value))
return sb.And(exp), nil
// between and not between
case qbtypes.FilterOperatorBetween:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrBetweenValues
}
if len(values) != 2 {
return "", qbtypes.ErrBetweenValues
}
return sb.Between(tblFieldName, values[0], values[1]), nil
case qbtypes.FilterOperatorNotBetween:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrBetweenValues
}
if len(values) != 2 {
return "", qbtypes.ErrBetweenValues
}
return sb.NotBetween(tblFieldName, values[0], values[1]), nil
// in and not in
case qbtypes.FilterOperatorIn:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrInValues
}
return sb.In(tblFieldName, values...), nil
case qbtypes.FilterOperatorNotIn:
values, ok := value.([]any)
if !ok {
return "", qbtypes.ErrInValues
}
return sb.NotIn(tblFieldName, values...), nil
// exists and not exists
// in the query builder, `exists` and `not exists` are used for
// key membership checks, so depending on the column type, the condition changes
case qbtypes.FilterOperatorExists, qbtypes.FilterOperatorNotExists:
var value any
switch column.Type {
case schema.ColumnTypeString,
schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
schema.FixedStringColumnType{Length: 32},
schema.DateTime64ColumnType{Precision: 9, Timezone: "UTC"}:
value = ""
if operator == qbtypes.FilterOperatorExists {
return sb.NE(tblFieldName, value), nil
} else {
return sb.E(tblFieldName, value), nil
}
case schema.ColumnTypeUInt64,
schema.ColumnTypeUInt32,
schema.ColumnTypeUInt8,
schema.ColumnTypeInt8,
schema.ColumnTypeInt16,
schema.ColumnTypeBool:
value = 0
if operator == qbtypes.FilterOperatorExists {
return sb.NE(tblFieldName, value), nil
} else {
return sb.E(tblFieldName, value), nil
}
case schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeString,
}, schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeBool,
}, schema.MapColumnType{
KeyType: schema.LowCardinalityColumnType{ElementType: schema.ColumnTypeString},
ValueType: schema.ColumnTypeFloat64,
}:
leftOperand := fmt.Sprintf("mapContains(%s, '%s')", column.Name, key.Name)
if key.Materialized {
leftOperand = telemetrytypes.FieldKeyToMaterializedColumnNameForExists(key)
}
if operator == qbtypes.FilterOperatorExists {
return sb.E(leftOperand, true), nil
} else {
return sb.NE(leftOperand, true), nil
}
default:
return "", fmt.Errorf("exists operator is not supported for column type %s", column.Type)
}
}
return "", nil
}
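A minimal usage sketch of the condition builder above (a hypothetical caller inside this package; the table name comes from the traces constants shown later in this change): map-typed attribute keys resolve to `attributes_string['...']` lookups, materialized keys to their `attribute_string_...` columns, and `exists` checks to `mapContains(...)`.
func exampleWhereClause(ctx context.Context) (string, []any, error) {
	cb := NewConditionBuilder()
	sb := sqlbuilder.NewSelectBuilder()
	sb.Select("trace_id").From("signoz_traces.distributed_signoz_index_v3")
	userID := &telemetrytypes.TelemetryFieldKey{
		Name:          "user.id",
		FieldContext:  telemetrytypes.FieldContextAttribute,
		FieldDataType: telemetrytypes.FieldDataTypeString,
	}
	// renders as "attributes_string['user.id'] = ?"; for a materialized key it
	// would instead reference the attribute_string_user$$id column
	cond, err := cb.GetCondition(ctx, userID, qbtypes.FilterOperatorEqual, "admin", sb)
	if err != nil {
		return "", nil, err
	}
	sb.Where(cond)
	sql, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
	return sql, args, nil
}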

View File

@@ -0,0 +1,298 @@
package telemetrytraces
import (
"context"
"testing"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGetFieldKeyName(t *testing.T) {
ctx := context.Background()
conditionBuilder := &conditionBuilder{}
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
expectedResult string
expectedError error
}{
{
name: "Simple column type - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
expectedResult: "timestamp",
expectedError: nil,
},
{
name: "Map column type - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
expectedResult: "attributes_string['user.id']",
expectedError: nil,
},
{
name: "Map column type - number attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
expectedResult: "attributes_number['request.size']",
expectedError: nil,
},
{
name: "Map column type - bool attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.success",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeBool,
},
expectedResult: "attributes_bool['request.success']",
expectedError: nil,
},
{
name: "Map column type - resource attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
},
expectedResult: "resources_string['service.name']",
expectedError: nil,
},
{
name: "Non-existent column",
key: telemetrytypes.TelemetryFieldKey{
Name: "nonexistent_field",
FieldContext: telemetrytypes.FieldContextSpan,
},
expectedResult: "",
expectedError: qbtypes.ErrColumnNotFound,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result, err := conditionBuilder.GetTableFieldName(ctx, &tc.key)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expectedResult, result)
}
})
}
}
func TestGetCondition(t *testing.T) {
ctx := context.Background()
conditionBuilder := NewConditionBuilder()
testCases := []struct {
name string
key telemetrytypes.TelemetryFieldKey
operator qbtypes.FilterOperator
value any
expectedSQL string
expectedError error
}{
{
name: "Not Equal operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorNotEqual,
value: uint64(1617979338000000000),
expectedSQL: "timestamp <> ?",
expectedError: nil,
},
{
name: "Greater Than operator - number attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.duration",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
operator: qbtypes.FilterOperatorGreaterThan,
value: float64(100),
expectedSQL: "attributes_number['request.duration'] > ?",
expectedError: nil,
},
{
name: "Less Than operator - number attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "request.size",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeNumber,
},
operator: qbtypes.FilterOperatorLessThan,
value: float64(1024),
expectedSQL: "attributes_number['request.size'] < ?",
expectedError: nil,
},
{
name: "Greater Than Or Equal operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorGreaterThanOrEq,
value: uint64(1617979338000000000),
expectedSQL: "timestamp >= ?",
expectedError: nil,
},
{
name: "Less Than Or Equal operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorLessThanOrEq,
value: uint64(1617979338000000000),
expectedSQL: "timestamp <= ?",
expectedError: nil,
},
{
name: "ILike operator - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorILike,
value: "%admin%",
expectedSQL: "WHERE LOWER(attributes_string['user.id']) LIKE LOWER(?)",
expectedError: nil,
},
{
name: "Not ILike operator - string attribute",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorNotILike,
value: "%admin%",
expectedSQL: "WHERE LOWER(attributes_string['user.id']) NOT LIKE LOWER(?)",
expectedError: nil,
},
{
name: "Between operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorBetween,
value: []any{uint64(1617979338000000000), uint64(1617979348000000000)},
expectedSQL: "timestamp BETWEEN ? AND ?",
expectedError: nil,
},
{
name: "Between operator - invalid value",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorBetween,
value: "invalid",
expectedSQL: "",
expectedError: qbtypes.ErrBetweenValues,
},
{
name: "Between operator - insufficient values",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorBetween,
value: []any{uint64(1617979338000000000)},
expectedSQL: "",
expectedError: qbtypes.ErrBetweenValues,
},
{
name: "Not Between operator - timestamp",
key: telemetrytypes.TelemetryFieldKey{
Name: "timestamp",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorNotBetween,
value: []any{uint64(1617979338000000000), uint64(1617979348000000000)},
expectedSQL: "timestamp NOT BETWEEN ? AND ?",
expectedError: nil,
},
{
name: "Exists operator - map field",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorExists,
value: nil,
expectedSQL: "mapContains(attributes_string, 'user.id') = ?",
expectedError: nil,
},
{
name: "Not Exists operator - map field",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorNotExists,
value: nil,
expectedSQL: "mapContains(attributes_string, 'user.id') <> ?",
expectedError: nil,
},
{
name: "Contains operator - map field",
key: telemetrytypes.TelemetryFieldKey{
Name: "user.id",
FieldContext: telemetrytypes.FieldContextAttribute,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
operator: qbtypes.FilterOperatorContains,
value: "admin",
expectedSQL: "WHERE LOWER(attributes_string['user.id']) LIKE LOWER(?)",
expectedError: nil,
},
{
name: "Non-existent column",
key: telemetrytypes.TelemetryFieldKey{
Name: "nonexistent_field",
FieldContext: telemetrytypes.FieldContextSpan,
},
operator: qbtypes.FilterOperatorEqual,
value: "value",
expectedSQL: "",
expectedError: qbtypes.ErrColumnNotFound,
},
}
for _, tc := range testCases {
sb := sqlbuilder.NewSelectBuilder()
t.Run(tc.name, func(t *testing.T) {
cond, err := conditionBuilder.GetCondition(ctx, &tc.key, tc.operator, tc.value, sb)
sb.Where(cond)
if tc.expectedError != nil {
assert.Equal(t, tc.expectedError, err)
} else {
require.NoError(t, err)
sql, _ := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
assert.Contains(t, sql, tc.expectedSQL)
}
})
}
}

View File

@@ -0,0 +1,10 @@
package telemetrytraces
const (
DBName = "signoz_traces"
SpanIndexV3TableName = "distributed_signoz_index_v3"
SpanIndexV3LocalTableName = "signoz_index_v3"
TagAttributesV2TableName = "distributed_tag_attributes_v2"
TagAttributesV2LocalTableName = "tag_attributes_v2"
TopLevelOperationsTableName = "distributed_top_level_operations"
)

View File

@@ -1,37 +1,246 @@
package types
import (
"database/sql/driver"
"encoding/json"
"fmt"
"time"
"github.com/pkg/errors"
"github.com/uptrace/bun"
)
type Integration struct {
bun.BaseModel `bun:"table:integrations_installed"`
IntegrationID string `bun:"integration_id,pk,type:text"`
ConfigJSON string `bun:"config_json,type:text"`
InstalledAt time.Time `bun:"installed_at,default:current_timestamp"`
}
type CloudIntegrationAccount struct {
bun.BaseModel `bun:"table:cloud_integrations_accounts"`
CloudProvider string `bun:"cloud_provider,type:text,unique:cloud_provider_id"`
ID string `bun:"id,type:text,notnull,unique:cloud_provider_id"`
ConfigJSON string `bun:"config_json,type:text"`
CloudAccountID string `bun:"cloud_account_id,type:text"`
LastAgentReportJSON string `bun:"last_agent_report_json,type:text"`
CreatedAt time.Time `bun:"created_at,notnull,default:current_timestamp"`
RemovedAt time.Time `bun:"removed_at,type:timestamp"`
}
type CloudIntegrationServiceConfig struct {
bun.BaseModel `bun:"table:cloud_integrations_service_configs"`
CloudProvider string `bun:"cloud_provider,type:text,notnull,unique:service_cloud_provider_account"`
CloudAccountID string `bun:"cloud_account_id,type:text,notnull,unique:service_cloud_provider_account"`
ServiceID string `bun:"service_id,type:text,notnull,unique:service_cloud_provider_account"`
ConfigJSON string `bun:"config_json,type:text"`
CreatedAt time.Time `bun:"created_at,default:current_timestamp"`
}
type IntegrationUserEmail string
const (
AWSIntegrationUserEmail IntegrationUserEmail = "aws-integration@signoz.io"
)
var AllIntegrationUserEmails = []IntegrationUserEmail{
AWSIntegrationUserEmail,
}
// --------------------------------------------------------------------------
// Normal integration uses just the installed_integration table
// --------------------------------------------------------------------------
type InstalledIntegration struct {
bun.BaseModel `bun:"table:installed_integration"`
Identifiable
Type string `json:"type" bun:"type,type:text,unique:org_id_type"`
Config InstalledIntegrationConfig `json:"config" bun:"config,type:text"`
InstalledAt time.Time `json:"installed_at" bun:"installed_at,default:current_timestamp"`
OrgID string `json:"org_id" bun:"org_id,type:text,unique:org_id_type,references:organizations(id),on_delete:cascade"`
}
type InstalledIntegrationConfig map[string]interface{}
// For serializing from db
func (c *InstalledIntegrationConfig) Scan(src interface{}) error {
var data []byte
switch v := src.(type) {
case []byte:
data = v
case string:
data = []byte(v)
default:
return fmt.Errorf("tried to scan from %T instead of string or bytes", src)
}
return json.Unmarshal(data, c)
}
// For serializing to db
func (c *InstalledIntegrationConfig) Value() (driver.Value, error) {
filterSetJson, err := json.Marshal(c)
if err != nil {
return nil, errors.Wrap(err, "could not serialize integration config to JSON")
}
return filterSetJson, nil
}
// --------------------------------------------------------------------------
// Cloud integration uses the cloud_integration table
// and cloud_integrations_service table
// --------------------------------------------------------------------------
type CloudIntegration struct {
bun.BaseModel `bun:"table:cloud_integration"`
Identifiable
TimeAuditable
Provider string `json:"provider" bun:"provider,type:text,unique:provider_id"`
Config *AccountConfig `json:"config" bun:"config,type:text"`
AccountID *string `json:"account_id" bun:"account_id,type:text"`
LastAgentReport *AgentReport `json:"last_agent_report" bun:"last_agent_report,type:text"`
RemovedAt *time.Time `json:"removed_at" bun:"removed_at,type:timestamp,nullzero"`
OrgID string `bun:"org_id,type:text,unique:provider_id"`
}
func (a *CloudIntegration) Status() AccountStatus {
status := AccountStatus{}
if a.LastAgentReport != nil {
lastHeartbeat := a.LastAgentReport.TimestampMillis
status.Integration.LastHeartbeatTsMillis = &lastHeartbeat
}
return status
}
func (a *CloudIntegration) Account() Account {
ca := Account{Id: a.ID.StringValue(), Status: a.Status()}
if a.AccountID != nil {
ca.CloudAccountId = *a.AccountID
}
if a.Config != nil {
ca.Config = *a.Config
} else {
ca.Config = DefaultAccountConfig()
}
return ca
}
type Account struct {
Id string `json:"id"`
CloudAccountId string `json:"cloud_account_id"`
Config AccountConfig `json:"config"`
Status AccountStatus `json:"status"`
}
type AccountStatus struct {
Integration AccountIntegrationStatus `json:"integration"`
}
type AccountIntegrationStatus struct {
LastHeartbeatTsMillis *int64 `json:"last_heartbeat_ts_ms"`
}
func DefaultAccountConfig() AccountConfig {
return AccountConfig{
EnabledRegions: []string{},
}
}
type AccountConfig struct {
EnabledRegions []string `json:"regions"`
}
// For serializing from db
func (c *AccountConfig) Scan(src any) error {
var data []byte
switch v := src.(type) {
case []byte:
data = v
case string:
data = []byte(v)
default:
return fmt.Errorf("tried to scan from %T instead of string or bytes", src)
}
return json.Unmarshal(data, c)
}
// For serializing to db
func (c *AccountConfig) Value() (driver.Value, error) {
if c == nil {
return nil, nil
}
serialized, err := json.Marshal(c)
if err != nil {
return nil, fmt.Errorf(
"couldn't serialize cloud account config to JSON: %w", err,
)
}
return serialized, nil
}
type AgentReport struct {
TimestampMillis int64 `json:"timestamp_millis"`
Data map[string]any `json:"data"`
}
// For serializing from db
func (r *AgentReport) Scan(src any) error {
var data []byte
switch v := src.(type) {
case []byte:
data = v
case string:
data = []byte(v)
default:
return fmt.Errorf("tried to scan from %T instead of string or bytes", src)
}
return json.Unmarshal(data, r)
}
// For serializing to db
func (r *AgentReport) Value() (driver.Value, error) {
if r == nil {
return nil, nil
}
serialized, err := json.Marshal(r)
if err != nil {
return nil, fmt.Errorf(
"couldn't serialize agent report to JSON: %w", err,
)
}
return serialized, nil
}
type CloudIntegrationService struct {
bun.BaseModel `bun:"table:cloud_integration_service,alias:cis"`
Identifiable
TimeAuditable
Type string `bun:"type,type:text,notnull,unique:cloud_integration_id_type"`
Config CloudServiceConfig `bun:"config,type:text"`
CloudIntegrationID string `bun:"cloud_integration_id,type:text,notnull,unique:cloud_integration_id_type,references:cloud_integrations(id),on_delete:cascade"`
}
type CloudServiceLogsConfig struct {
Enabled bool `json:"enabled"`
}
type CloudServiceMetricsConfig struct {
Enabled bool `json:"enabled"`
}
type CloudServiceConfig struct {
Logs *CloudServiceLogsConfig `json:"logs,omitempty"`
Metrics *CloudServiceMetricsConfig `json:"metrics,omitempty"`
}
// For serializing from db
func (c *CloudServiceConfig) Scan(src any) error {
var data []byte
switch src := src.(type) {
case []byte:
data = src
case string:
data = []byte(src)
default:
return fmt.Errorf("tried to scan from %T instead of string or bytes", src)
}
return json.Unmarshal(data, c)
}
// For serializing to db
func (c *CloudServiceConfig) Value() (driver.Value, error) {
if c == nil {
return nil, nil
}
serialized, err := json.Marshal(c)
if err != nil {
return nil, fmt.Errorf(
"couldn't serialize cloud service config to JSON: %w", err,
)
}
return serialized, nil
}
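The Scan/Value pairs above all follow the same database/sql Scanner/Valuer pattern: Value marshals the struct to JSON for storage in a text column, and Scan accepts either []byte or string coming back from the driver. A minimal round-trip sketch, using the types shown above:
// cfg.Value() hands JSON bytes to the db driver; Scan reverses it.
cfg := CloudServiceConfig{
	Logs:    &CloudServiceLogsConfig{Enabled: true},
	Metrics: &CloudServiceMetricsConfig{Enabled: false},
}
raw, err := cfg.Value()
if err != nil {
	panic(err)
}
var decoded CloudServiceConfig
if err := decoded.Scan(raw); err != nil { // accepts []byte or string
	panic(err)
}
// decoded.Logs.Enabled is now true again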

View File

@@ -4,10 +4,17 @@ import (
"context"
schema "github.com/SigNoz/signoz-otel-collector/cmd/signozschemamigrator/schema_migrator"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/huandu/go-sqlbuilder"
)
var (
ErrColumnNotFound = errors.Newf(errors.TypeNotFound, errors.CodeNotFound, "column not found")
ErrBetweenValues = errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "(not) between operator requires two values")
ErrInValues = errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "(not) in operator requires a list of values")
)
// FilterOperator is the operator for the filter.
type FilterOperator int

View File

@@ -86,11 +86,11 @@ func GetFieldKeyFromKeyText(key string) TelemetryFieldKey {
return fieldKeySelector
}
func FieldKeyToMaterializedColumnName(key TelemetryFieldKey) string {
func FieldKeyToMaterializedColumnName(key *TelemetryFieldKey) string {
return fmt.Sprintf("%s_%s_%s", key.FieldContext, key.FieldDataType.String, strings.ReplaceAll(key.Name, ".", "$$"))
}
func FieldKeyToMaterializedColumnNameForExists(key TelemetryFieldKey) string {
func FieldKeyToMaterializedColumnNameForExists(key *TelemetryFieldKey) string {
return fmt.Sprintf("%s_%s_%s_exists", key.FieldContext, key.FieldDataType.String, strings.ReplaceAll(key.Name, ".", "$$"))
}
@@ -123,3 +123,52 @@ type FieldValueSelector struct {
Value string `json:"value"`
Limit int `json:"limit"`
}
func DataTypeCollisionHandledFieldName(key *TelemetryFieldKey, value any, tblFieldName string) (string, any) {
// This block of code handles data type collisions.
// We don't want to fail a request when a key has more than one data type.
// Take `http.status_code` as an example, and suppose users sent both string and number values for it.
// When they search for `http.status_code=200`, we search across both the number and string columns
// and return results from both.
// While we expect users not to send mixed data types, it inevitably happens,
// so we handle the collisions here.
switch key.FieldDataType {
case FieldDataTypeString:
switch value.(type) {
case float64:
// the value is numeric, so cast the string column to a number for comparison
tblFieldName = fmt.Sprintf(`toFloat64OrNull(%s)`, tblFieldName)
case []any:
areFloats := true
for _, v := range value.([]any) {
if _, ok := v.(float64); !ok {
areFloats = false
break
}
}
if areFloats {
tblFieldName = fmt.Sprintf(`toFloat64OrNull(%s)`, tblFieldName)
}
case bool:
// we don't have a toBoolOrNull in ClickHouse, so we need to convert the bool to a string
value = fmt.Sprintf("%t", value)
case string:
// nothing to do
}
case FieldDataTypeFloat64, FieldDataTypeInt64, FieldDataTypeNumber:
switch value.(type) {
case string:
// the value is a string, so cast the number column to a string for comparison
tblFieldName = fmt.Sprintf(`toString(%s)`, tblFieldName)
case float64:
// nothing to do
}
case FieldDataTypeBool:
switch value.(type) {
case string:
// the value is a string, so cast the bool column to a string for comparison
tblFieldName = fmt.Sprintf(`toString(%s)`, tblFieldName)
}
}
return tblFieldName, value
}
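A worked example of the collision handling above, assuming an `http.status_code` attribute that was ingested as a string while the filter value is numeric: the string column gets wrapped so the comparison happens on parsed numbers.
key := &TelemetryFieldKey{
	Name:          "http.status_code",
	FieldContext:  FieldContextAttribute,
	FieldDataType: FieldDataTypeString,
}
// the column expression is wrapped; the value passes through unchanged
field, value := DataTypeCollisionHandledFieldName(key, float64(200), "attributes_string['http.status_code']")
// field == "toFloat64OrNull(attributes_string['http.status_code'])", value == 200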

View File

@@ -0,0 +1,21 @@
package telemetrytypes
import (
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
type VirtualField struct {
bun.BaseModel `bun:"table:virtual_field"`
types.Identifiable
types.TimeAuditable
types.UserAuditable
Name string `bun:"name,type:text,notnull" json:"name"`
Expression string `bun:"expression,type:text,notnull" json:"expression"`
Description string `bun:"description,type:text" json:"description"`
Signal Signal `bun:"signal,type:text,notnull" json:"signal"`
OrgID valuer.UUID `bun:"org_id,type:text,notnull" json:"orgId"`
}

View File

@@ -0,0 +1,48 @@
import pytest
pytest_plugins = [
"fixtures.auth",
"fixtures.clickhouse",
"fixtures.fs",
"fixtures.http",
"fixtures.migrator",
"fixtures.network",
"fixtures.postgres",
"fixtures.sql",
"fixtures.sqlite",
"fixtures.zookeeper",
"fixtures.signoz",
]
def pytest_addoption(parser: pytest.Parser):
parser.addoption(
"--sqlstore-provider",
action="store",
default="postgres",
help="sqlstore provider",
)
parser.addoption(
"--postgres-version",
action="store",
default="15",
help="postgres version",
)
parser.addoption(
"--clickhouse-version",
action="store",
default="24.1.2-alpine",
help="clickhouse version",
)
parser.addoption(
"--zookeeper-version",
action="store",
default="3.7.1",
help="zookeeper version",
)
parser.addoption(
"--schema-migrator-version",
action="store",
default="v0.111.38",
help="schema migrator version",
)
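These options let a local run pin every dependency version from the command line. An illustrative invocation (values are the defaults above except for the sqlstore provider; not prescribed):
poetry run pytest --sqlstore-provider=sqlite --clickhouse-version=24.1.2-alpine --schema-migrator-version=v0.111.38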

View File


@@ -0,0 +1,44 @@
from http import HTTPStatus
import pytest
import requests
from fixtures import types
@pytest.fixture(name="create_first_user", scope="function")
def create_first_user(signoz: types.SigNoz) -> None:
def _create_user(name: str, email: str, password: str) -> None:
response = requests.post(
signoz.self.host_config.get("/api/v1/register"),
json={
"name": name,
"orgId": "",
"orgName": "",
"email": email,
"password": password,
},
timeout=5,
)
assert response.status_code == HTTPStatus.OK
return _create_user
@pytest.fixture(name="get_jwt_token", scope="module")
def get_jwt_token(signoz: types.SigNoz) -> str:
def _get_jwt_token(email: str, password: str) -> str:
response = requests.post(
signoz.self.host_config.get("/api/v1/login"),
json={
"email": email,
"password": password,
},
timeout=5,
)
assert response.status_code == HTTPStatus.OK
return response.json()["accessJwt"]
return _get_jwt_token

View File

@@ -0,0 +1,111 @@
import os
from typing import Any, Generator
import clickhouse_driver
import pytest
from testcontainers.clickhouse import ClickHouseContainer
from testcontainers.core.container import Network
from fixtures import types
@pytest.fixture(name="clickhouse", scope="package")
def clickhouse(
tmpfs: Generator[types.LegacyPath, Any, None],
network: Network,
zookeeper: types.TestContainerDocker,
request: pytest.FixtureRequest,
) -> types.TestContainerClickhouse:
"""
Package-scoped fixture for Clickhouse TestContainer.
"""
version = request.config.getoption("--clickhouse-version")
container = ClickHouseContainer(
image=f"clickhouse/clickhouse-server:{version}",
port=9000,
username="signoz",
password="password",
)
cluster_config = f"""
<clickhouse>
<logger>
<level>information</level>
<formatting>
<type>json</type>
</formatting>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>1000M</size>
<count>3</count>
<console>1</console>
</logger>
<macros>
<shard>01</shard>
<replica>01</replica>
</macros>
<zookeeper>
<node>
<host>{zookeeper.container_config.address}</host>
<port>{zookeeper.container_config.port}</port>
</node>
</zookeeper>
<remote_servers>
<cluster>
<shard>
<replica>
<host>127.0.0.1</host>
<port>9000</port>
</replica>
</shard>
</cluster>
</remote_servers>
<distributed_ddl>
<path>/clickhouse/task_queue/ddl</path>
<profile>default</profile>
</distributed_ddl>
</clickhouse>
"""
tmp_dir = tmpfs("clickhouse")
cluster_config_file_path = os.path.join(tmp_dir, "cluster.xml")
with open(cluster_config_file_path, "w", encoding="utf-8") as f:
f.write(cluster_config)
container.with_volume_mapping(
cluster_config_file_path, "/etc/clickhouse-server/config.d/cluster.xml"
)
container.with_network(network)
container.start()
connection = clickhouse_driver.connect(
user=container.username,
password=container.password,
host=container.get_container_host_ip(),
port=container.get_exposed_port(9000),
)
def stop():
connection.close()
container.stop(delete_volume=True)
request.addfinalizer(stop)
return types.TestContainerClickhouse(
container=container,
host_config=types.TestContainerUrlConfig(
"tcp", container.get_container_host_ip(), container.get_exposed_port(9000)
),
container_config=types.TestContainerUrlConfig(
"tcp", container.get_wrapped_container().name, 9000
),
conn=connection,
env={
"SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN": f"tcp://{container.username}:{container.password}@{container.get_wrapped_container().name}:{9000}" # pylint: disable=line-too-long
},
)

View File

@@ -0,0 +1,15 @@
from typing import Any, Generator
import pytest
from fixtures import types
@pytest.fixture(scope="package")
def tmpfs(
tmp_path_factory: pytest.TempPathFactory,
) -> Generator[types.LegacyPath, Any, None]:
def _tmp(basename: str):
return tmp_path_factory.mktemp(basename)
yield _tmp

View File

@@ -0,0 +1,53 @@
from typing import List
import pytest
from testcontainers.core.container import Network
from wiremock.client import (
Mapping,
Mappings,
)
from wiremock.constants import Config
from wiremock.testing.testcontainer import WireMockContainer
from fixtures import types
@pytest.fixture(name="zeus", scope="package")
def zeus(
network: Network, request: pytest.FixtureRequest
) -> types.TestContainerWiremock:
"""
Package-scoped fixture for running zeus
"""
container = WireMockContainer(image="wiremock/wiremock:2.35.1-1", secure=False)
container.with_network(network)
container.start()
def stop():
container.stop(delete_volume=True)
request.addfinalizer(stop)
return types.TestContainerWiremock(
container=container,
host_config=types.TestContainerUrlConfig(
"http", container.get_container_host_ip(), container.get_exposed_port(8080)
),
container_config=types.TestContainerUrlConfig(
"http", container.get_wrapped_container().name, 8080
),
)
@pytest.fixture(name="make_http_mocks", scope="function")
def make_http_mocks():
def _make_http_mocks(container: WireMockContainer, mappings: List[Mapping]):
Config.base_url = container.get_url("__admin")
for mapping in mappings:
Mappings.create_mapping(mapping=mapping)
yield _make_http_mocks
Mappings.delete_all_mappings()

View File

@@ -0,0 +1,55 @@
import docker
import pytest
from testcontainers.core.container import Network
from fixtures import types
@pytest.fixture(name="migrator", scope="package")
def migrator(
network: Network,
clickhouse: types.TestContainerClickhouse,
request: pytest.FixtureRequest,
) -> None:
"""
Package-scoped fixture for running schema migrations.
"""
version = request.config.getoption("--schema-migrator-version")
client = docker.from_env()
container = client.containers.run(
image=f"signoz/signoz-schema-migrator:{version}",
command=f"sync --replication=true --cluster-name=cluster --up= --dsn={clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN"]}", # pylint: disable=line-too-long
detach=True,
auto_remove=False,
network=network.id,
)
result = container.wait()
if result["StatusCode"] != 0:
logs = container.logs().decode(encoding="utf-8")
container.remove()
print(logs)
raise RuntimeError("failed to run migrations on clickhouse")
container.remove()
container = client.containers.run(
image=f"signoz/signoz-schema-migrator:{version}",
command=f"async --replication=true --cluster-name=cluster --up= --dsn={clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN"]}", # pylint: disable=line-too-long
detach=True,
auto_remove=False,
network=network.id,
)
result = container.wait()
if result["StatusCode"] != 0:
logs = container.logs().decode(encoding="utf-8")
container.remove()
print(logs)
raise RuntimeError("failed to run migrations on clickhouse")
container.remove()

View File

@@ -0,0 +1,18 @@
import pytest
from testcontainers.core.container import Network
@pytest.fixture(name="network", scope="package")
def network(request: pytest.FixtureRequest) -> Network:
"""
Package-scoped fixture for creating a network
"""
nw = Network()
nw.create()
def stop():
nw.remove()
request.addfinalizer(stop)
return nw

View File

@@ -0,0 +1,58 @@
import psycopg2
import pytest
from testcontainers.core.container import Network
from testcontainers.postgres import PostgresContainer
from fixtures import types
@pytest.fixture(name="postgres", scope="package")
def postgres(
network: Network, request: pytest.FixtureRequest
) -> types.TestContainerSQL:
"""
Package-scoped fixture for PostgreSQL TestContainer.
"""
version = request.config.getoption("--postgres-version")
container = PostgresContainer(
image=f"postgres:{version}",
port=5432,
username="signoz",
password="password",
dbname="signoz",
driver="psycopg2",
network=network.id,
)
container.start()
connection = psycopg2.connect(
dbname=container.dbname,
user=container.username,
password=container.password,
host=container.get_container_host_ip(),
port=container.get_exposed_port(5432),
)
def stop():
connection.close()
container.stop(delete_volume=True)
request.addfinalizer(stop)
return types.TestContainerSQL(
container=container,
host_config=types.TestContainerUrlConfig(
"postgresql",
container.get_container_host_ip(),
container.get_exposed_port(5432),
),
container_config=types.TestContainerUrlConfig(
"postgresql", container.get_wrapped_container().name, 5432
),
conn=connection,
env={
"SIGNOZ_SQLSTORE_PROVIDER": "postgres",
"SIGNOZ_SQLSTORE_POSTGRES_DSN": f"postgresql://{container.username}:{container.password}@{container.get_wrapped_container().name}:{5432}/{container.dbname}", # pylint: disable=line-too-long
},
)

View File

@@ -0,0 +1,111 @@
import platform
import time
from http import HTTPStatus
import pytest
import requests
from testcontainers.core.container import DockerContainer, Network
from testcontainers.core.image import DockerImage
from fixtures import types
@pytest.fixture(name="signoz", scope="package")
def signoz(
network: Network,
zeus: types.TestContainerWiremock,
sqlstore: types.TestContainerSQL,
clickhouse: types.TestContainerClickhouse,
request: pytest.FixtureRequest,
) -> types.SigNoz:
"""
Package-scoped fixture for setting up SigNoz.
"""
# Run the migrations for clickhouse
request.getfixturevalue("migrator")
# Build the image
self = DockerImage(
path="../../",
dockerfile_path="ee/query-service/Dockerfile.integration",
tag="signoz:integration",
)
arch = platform.machine()
if arch == "x86_64":
arch = "amd64"
self.build(
buildargs={
"TARGETARCH": arch,
"ZEUSURL": zeus.container_config.base(),
}
)
env = (
{
"SIGNOZ_WEB_ENABLED": False,
"SIGNOZ_INSTRUMENTATION_LOGS_LEVEL": "debug",
"SIGNOZ_PROMETHEUS_ACTIVE__QUERY__TRACKER_ENABLED": False,
}
| sqlstore.env
| clickhouse.env
)
container = DockerContainer("signoz:integration")
for k, v in env.items():
container.with_env(k, v)
container.with_exposed_ports(8080)
container.with_network(network=network)
provider = request.config.getoption("--sqlstore-provider")
if provider == "sqlite":
container.with_volume_mapping(
sqlstore.env["SIGNOZ_SQLSTORE_SQLITE_PATH"],
sqlstore.env["SIGNOZ_SQLSTORE_SQLITE_PATH"],
"rw",
)
container.start()
def ready(container: DockerContainer) -> None:
for attempt in range(30):
try:
response = requests.get(
f"http://{container.get_container_host_ip()}:{container.get_exposed_port(8080)}/api/v1/health", # pylint: disable=line-too-long
timeout=2,
)
if response.status_code == HTTPStatus.OK:
return
except Exception: # pylint: disable=broad-exception-caught
pass
print(f"attempt {attempt} at health check failed")
time.sleep(2)
raise TimeoutError("timeout exceeded while waiting")
ready(container=container)
def stop():
logs = container.get_wrapped_container().logs(tail=100)
print(logs.decode(encoding="utf-8"))
container.stop(delete_volume=True)
request.addfinalizer(stop)
return types.SigNoz(
self=types.TestContainerDocker(
container=container,
host_config=types.TestContainerUrlConfig(
"http",
container.get_container_host_ip(),
container.get_exposed_port(8080),
),
container_config=types.TestContainerUrlConfig(
"http",
container.get_wrapped_container().name,
8080,
),
),
sqlstore=sqlstore,
telemetrystore=clickhouse,
zeus=zeus,
)

View File

@@ -0,0 +1,26 @@
import pytest
from fixtures import types
@pytest.fixture(name="sqlstore", scope="package")
def sqlstore(
request: pytest.FixtureRequest,
) -> types.TestContainerSQL:
"""
Package-scoped fixture for creating the sql store.
"""
provider = request.config.getoption("--sqlstore-provider")
if provider == "postgres":
store = request.getfixturevalue("postgres")
return store
if provider == "sqlite":
store = request.getfixturevalue("sqlite")
return store
raise pytest.FixtureLookupError(
argname=f"{provider}",
request=request,
msg=f"{provider} does not have a fixture",
)

View File

@@ -0,0 +1,37 @@
import sqlite3
from collections import namedtuple
from typing import Any, Generator
import pytest
from fixtures import types
ConnectionTuple = namedtuple("ConnectionTuple", "connection config")
@pytest.fixture(name="sqlite", scope="package")
def sqlite(
tmpfs: Generator[types.LegacyPath, Any, None], request: pytest.FixtureRequest
) -> types.TestContainerSQL:
"""
Package-scoped fixture for SQLite.
"""
tmpdir = tmpfs("sqlite")
path = tmpdir / "signoz.db"
connection = sqlite3.connect(path, check_same_thread=False)
def stop():
connection.close()
request.addfinalizer(stop)
return types.TestContainerSQL(
None,
host_config=None,
container_config=None,
conn=connection,
env={
"SIGNOZ_SQLSTORE_PROVIDER": "sqlite",
"SIGNOZ_SQLSTORE_SQLITE_PATH": str(path),
},
)

View File

@@ -0,0 +1,63 @@
from dataclasses import dataclass
from typing import Dict
from urllib.parse import urljoin
import py
from clickhouse_driver.dbapi import Connection
from testcontainers.core.container import DockerContainer
from wiremock.testing.testcontainer import WireMockContainer
LegacyPath = py.path.local
@dataclass
class TestContainerUrlConfig:
__test__ = False
scheme: str
address: str
port: int
def base(self) -> str:
return f"{self.scheme}://{self.address}:{self.port}"
def get(self, path: str) -> str:
return urljoin(self.base(), path)
@dataclass
class TestContainerDocker:
__test__ = False
container: DockerContainer
host_config: TestContainerUrlConfig
container_config: TestContainerUrlConfig
@dataclass
class TestContainerWiremock(TestContainerDocker):
__test__ = False
container: WireMockContainer
@dataclass
class TestContainerSQL(TestContainerDocker):
__test__ = False
container: DockerContainer
conn: any
env: Dict[str, str]
@dataclass
class TestContainerClickhouse(TestContainerDocker):
__test__ = False
container: DockerContainer
conn: Connection
env: Dict[str, str]
@dataclass
class SigNoz:
__test__ = False
self: TestContainerDocker
sqlstore: TestContainerSQL
telemetrystore: TestContainerClickhouse
zeus: TestContainerWiremock

View File

@@ -0,0 +1,40 @@
import pytest
from testcontainers.core.container import DockerContainer, Network
from fixtures import types
@pytest.fixture(name="zookeeper", scope="package")
def zookeeper(
network: Network, request: pytest.FixtureRequest
) -> types.TestContainerDocker:
"""
Package-scoped fixture for Zookeeper TestContainer.
"""
version = request.config.getoption("--zookeeper-version")
container = DockerContainer(image=f"bitnami/zookeeper:{version}")
container.with_env("ALLOW_ANONYMOUS_LOGIN", "yes")
container.with_exposed_ports(2181)
container.with_network(network=network)
container.start()
def stop():
container.stop(delete_volume=True)
request.addfinalizer(stop)
return types.TestContainerDocker(
container=container,
host_config=types.TestContainerUrlConfig(
"tcp",
container.get_container_host_ip(),
container.get_exposed_port(2181),
),
container_config=types.TestContainerUrlConfig(
"tcp",
container.get_wrapped_container().name,
2181,
),
)

tests/integration/poetry.lock (generated, 911 lines)
View File

@@ -0,0 +1,911 @@
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
[[package]]
name = "astroid"
version = "3.3.9"
description = "An abstract syntax tree for Python with inference support."
optional = false
python-versions = ">=3.9.0"
groups = ["dev"]
files = [
{file = "astroid-3.3.9-py3-none-any.whl", hash = "sha256:d05bfd0acba96a7bd43e222828b7d9bc1e138aaeb0649707908d3702a9831248"},
{file = "astroid-3.3.9.tar.gz", hash = "sha256:622cc8e3048684aa42c820d9d218978021c3c3d174fb03a9f0d615921744f550"},
]
[[package]]
name = "autoflake"
version = "2.3.1"
description = "Removes unused imports and unused variables"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "autoflake-2.3.1-py3-none-any.whl", hash = "sha256:3ae7495db9084b7b32818b4140e6dc4fc280b712fb414f5b8fe57b0a8e85a840"},
{file = "autoflake-2.3.1.tar.gz", hash = "sha256:c98b75dc5b0a86459c4f01a1d32ac7eb4338ec4317a4469515ff1e687ecd909e"},
]
[package.dependencies]
pyflakes = ">=3.0.0"
[[package]]
name = "black"
version = "25.1.0"
description = "The uncompromising code formatter."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "black-25.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:759e7ec1e050a15f89b770cefbf91ebee8917aac5c20483bc2d80a6c3a04df32"},
{file = "black-25.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e519ecf93120f34243e6b0054db49c00a35f84f195d5bce7e9f5cfc578fc2da"},
{file = "black-25.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:055e59b198df7ac0b7efca5ad7ff2516bca343276c466be72eb04a3bcc1f82d7"},
{file = "black-25.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:db8ea9917d6f8fc62abd90d944920d95e73c83a5ee3383493e35d271aca872e9"},
{file = "black-25.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a39337598244de4bae26475f77dda852ea00a93bd4c728e09eacd827ec929df0"},
{file = "black-25.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:96c1c7cd856bba8e20094e36e0f948718dc688dba4a9d78c3adde52b9e6c2299"},
{file = "black-25.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bce2e264d59c91e52d8000d507eb20a9aca4a778731a08cfff7e5ac4a4bb7096"},
{file = "black-25.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:172b1dbff09f86ce6f4eb8edf9dede08b1fce58ba194c87d7a4f1a5aa2f5b3c2"},
{file = "black-25.1.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4b60580e829091e6f9238c848ea6750efed72140b91b048770b64e74fe04908b"},
{file = "black-25.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1e2978f6df243b155ef5fa7e558a43037c3079093ed5d10fd84c43900f2d8ecc"},
{file = "black-25.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3b48735872ec535027d979e8dcb20bf4f70b5ac75a8ea99f127c106a7d7aba9f"},
{file = "black-25.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:ea0213189960bda9cf99be5b8c8ce66bb054af5e9e861249cd23471bd7b0b3ba"},
{file = "black-25.1.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8f0b18a02996a836cc9c9c78e5babec10930862827b1b724ddfe98ccf2f2fe4f"},
{file = "black-25.1.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:afebb7098bfbc70037a053b91ae8437c3857482d3a690fefc03e9ff7aa9a5fd3"},
{file = "black-25.1.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:030b9759066a4ee5e5aca28c3c77f9c64789cdd4de8ac1df642c40b708be6171"},
{file = "black-25.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:a22f402b410566e2d1c950708c77ebf5ebd5d0d88a6a2e87c86d9fb48afa0d18"},
{file = "black-25.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a1ee0a0c330f7b5130ce0caed9936a904793576ef4d2b98c40835d6a65afa6a0"},
{file = "black-25.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f3df5f1bf91d36002b0a75389ca8663510cf0531cca8aa5c1ef695b46d98655f"},
{file = "black-25.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d9e6827d563a2c820772b32ce8a42828dc6790f095f441beef18f96aa6f8294e"},
{file = "black-25.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:bacabb307dca5ebaf9c118d2d2f6903da0d62c9faa82bd21a33eecc319559355"},
{file = "black-25.1.0-py3-none-any.whl", hash = "sha256:95e8176dae143ba9097f351d174fdaf0ccd29efb414b362ae3fd72bf0f710717"},
{file = "black-25.1.0.tar.gz", hash = "sha256:33496d5cd1222ad73391352b4ae8da15253c5de89b93a80b3e2c8d9a19ec2666"},
]
[package.dependencies]
click = ">=8.0.0"
mypy-extensions = ">=0.4.3"
packaging = ">=22.0"
pathspec = ">=0.9.0"
platformdirs = ">=2"
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.10)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "certifi"
version = "2025.1.31"
description = "Python package for providing Mozilla's CA Bundle."
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "certifi-2025.1.31-py3-none-any.whl", hash = "sha256:ca78db4565a652026a4db2bcdf68f2fb589ea80d0be70e03929ed730746b84fe"},
{file = "certifi-2025.1.31.tar.gz", hash = "sha256:3d5da6925056f6f18f119200434a4780a94263f10d1c21d032a6f6b2baa20651"},
]
[[package]]
name = "charset-normalizer"
version = "3.4.1"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "charset_normalizer-3.4.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:91b36a978b5ae0ee86c394f5a54d6ef44db1de0815eb43de826d41d21e4af3de"},
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7461baadb4dc00fd9e0acbe254e3d7d2112e7f92ced2adc96e54ef6501c5f176"},
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e218488cd232553829be0664c2292d3af2eeeb94b32bea483cf79ac6a694e037"},
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:80ed5e856eb7f30115aaf94e4a08114ccc8813e6ed1b5efa74f9f82e8509858f"},
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b010a7a4fd316c3c484d482922d13044979e78d1861f0e0650423144c616a46a"},
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4532bff1b8421fd0a320463030c7520f56a79c9024a4e88f01c537316019005a"},
{file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:d973f03c0cb71c5ed99037b870f2be986c3c05e63622c017ea9816881d2dd247"},
{file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:3a3bd0dcd373514dcec91c411ddb9632c0d7d92aed7093b8c3bbb6d69ca74408"},
{file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:d9c3cdf5390dcd29aa8056d13e8e99526cda0305acc038b96b30352aff5ff2bb"},
{file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:2bdfe3ac2e1bbe5b59a1a63721eb3b95fc9b6817ae4a46debbb4e11f6232428d"},
{file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:eab677309cdb30d047996b36d34caeda1dc91149e4fdca0b1a039b3f79d9a807"},
{file = "charset_normalizer-3.4.1-cp310-cp310-win32.whl", hash = "sha256:c0429126cf75e16c4f0ad00ee0eae4242dc652290f940152ca8c75c3a4b6ee8f"},
{file = "charset_normalizer-3.4.1-cp310-cp310-win_amd64.whl", hash = "sha256:9f0b8b1c6d84c8034a44893aba5e767bf9c7a211e313a9605d9c617d7083829f"},
{file = "charset_normalizer-3.4.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:8bfa33f4f2672964266e940dd22a195989ba31669bd84629f05fab3ef4e2d125"},
{file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28bf57629c75e810b6ae989f03c0828d64d6b26a5e205535585f96093e405ed1"},
{file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f08ff5e948271dc7e18a35641d2f11a4cd8dfd5634f55228b691e62b37125eb3"},
{file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:234ac59ea147c59ee4da87a0c0f098e9c8d169f4dc2a159ef720f1a61bbe27cd"},
{file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd4ec41f914fa74ad1b8304bbc634b3de73d2a0889bd32076342a573e0779e00"},
{file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eea6ee1db730b3483adf394ea72f808b6e18cf3cb6454b4d86e04fa8c4327a12"},
{file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c96836c97b1238e9c9e3fe90844c947d5afbf4f4c92762679acfe19927d81d77"},
{file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:4d86f7aff21ee58f26dcf5ae81a9addbd914115cdebcbb2217e4f0ed8982e146"},
{file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:09b5e6733cbd160dcc09589227187e242a30a49ca5cefa5a7edd3f9d19ed53fd"},
{file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:5777ee0881f9499ed0f71cc82cf873d9a0ca8af166dfa0af8ec4e675b7df48e6"},
{file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:237bdbe6159cff53b4f24f397d43c6336c6b0b42affbe857970cefbb620911c8"},
{file = "charset_normalizer-3.4.1-cp311-cp311-win32.whl", hash = "sha256:8417cb1f36cc0bc7eaba8ccb0e04d55f0ee52df06df3ad55259b9a323555fc8b"},
{file = "charset_normalizer-3.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:d7f50a1f8c450f3925cb367d011448c39239bb3eb4117c36a6d354794de4ce76"},
{file = "charset_normalizer-3.4.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:73d94b58ec7fecbc7366247d3b0b10a21681004153238750bb67bd9012414545"},
{file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dad3e487649f498dd991eeb901125411559b22e8d7ab25d3aeb1af367df5efd7"},
{file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c30197aa96e8eed02200a83fba2657b4c3acd0f0aa4bdc9f6c1af8e8962e0757"},
{file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2369eea1ee4a7610a860d88f268eb39b95cb588acd7235e02fd5a5601773d4fa"},
{file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc2722592d8998c870fa4e290c2eec2c1569b87fe58618e67d38b4665dfa680d"},
{file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ffc9202a29ab3920fa812879e95a9e78b2465fd10be7fcbd042899695d75e616"},
{file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:804a4d582ba6e5b747c625bf1255e6b1507465494a40a2130978bda7b932c90b"},
{file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:0f55e69f030f7163dffe9fd0752b32f070566451afe180f99dbeeb81f511ad8d"},
{file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:c4c3e6da02df6fa1410a7680bd3f63d4f710232d3139089536310d027950696a"},
{file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:5df196eb874dae23dcfb968c83d4f8fdccb333330fe1fc278ac5ceeb101003a9"},
{file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e358e64305fe12299a08e08978f51fc21fac060dcfcddd95453eabe5b93ed0e1"},
{file = "charset_normalizer-3.4.1-cp312-cp312-win32.whl", hash = "sha256:9b23ca7ef998bc739bf6ffc077c2116917eabcc901f88da1b9856b210ef63f35"},
{file = "charset_normalizer-3.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:6ff8a4a60c227ad87030d76e99cd1698345d4491638dfa6673027c48b3cd395f"},
{file = "charset_normalizer-3.4.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:aabfa34badd18f1da5ec1bc2715cadc8dca465868a4e73a0173466b688f29dda"},
{file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22e14b5d70560b8dd51ec22863f370d1e595ac3d024cb8ad7d308b4cd95f8313"},
{file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8436c508b408b82d87dc5f62496973a1805cd46727c34440b0d29d8a2f50a6c9"},
{file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2d074908e1aecee37a7635990b2c6d504cd4766c7bc9fc86d63f9c09af3fa11b"},
{file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:955f8851919303c92343d2f66165294848d57e9bba6cf6e3625485a70a038d11"},
{file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:44ecbf16649486d4aebafeaa7ec4c9fed8b88101f4dd612dcaf65d5e815f837f"},
{file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:0924e81d3d5e70f8126529951dac65c1010cdf117bb75eb02dd12339b57749dd"},
{file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:2967f74ad52c3b98de4c3b32e1a44e32975e008a9cd2a8cc8966d6a5218c5cb2"},
{file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:c75cb2a3e389853835e84a2d8fb2b81a10645b503eca9bcb98df6b5a43eb8886"},
{file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:09b26ae6b1abf0d27570633b2b078a2a20419c99d66fb2823173d73f188ce601"},
{file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fa88b843d6e211393a37219e6a1c1df99d35e8fd90446f1118f4216e307e48cd"},
{file = "charset_normalizer-3.4.1-cp313-cp313-win32.whl", hash = "sha256:eb8178fe3dba6450a3e024e95ac49ed3400e506fd4e9e5c32d30adda88cbd407"},
{file = "charset_normalizer-3.4.1-cp313-cp313-win_amd64.whl", hash = "sha256:b1ac5992a838106edb89654e0aebfc24f5848ae2547d22c2c3f66454daa11971"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f30bf9fd9be89ecb2360c7d94a711f00c09b976258846efe40db3d05828e8089"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:97f68b8d6831127e4787ad15e6757232e14e12060bec17091b85eb1486b91d8d"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7974a0b5ecd505609e3b19742b60cee7aa2aa2fb3151bc917e6e2646d7667dcf"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc54db6c8593ef7d4b2a331b58653356cf04f67c960f584edb7c3d8c97e8f39e"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:311f30128d7d333eebd7896965bfcfbd0065f1716ec92bd5638d7748eb6f936a"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:7d053096f67cd1241601111b698f5cad775f97ab25d81567d3f59219b5f1adbd"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_i686.whl", hash = "sha256:807f52c1f798eef6cf26beb819eeb8819b1622ddfeef9d0977a8502d4db6d534"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_ppc64le.whl", hash = "sha256:dccbe65bd2f7f7ec22c4ff99ed56faa1e9f785482b9bbd7c717e26fd723a1d1e"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_s390x.whl", hash = "sha256:2fb9bd477fdea8684f78791a6de97a953c51831ee2981f8e4f583ff3b9d9687e"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:01732659ba9b5b873fc117534143e4feefecf3b2078b0a6a2e925271bb6f4cfa"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-win32.whl", hash = "sha256:7a4f97a081603d2050bfaffdefa5b02a9ec823f8348a572e39032caa8404a487"},
{file = "charset_normalizer-3.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7b1bef6280950ee6c177b326508f86cad7ad4dff12454483b51d8b7d673a2c5d"},
{file = "charset_normalizer-3.4.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:ecddf25bee22fe4fe3737a399d0d177d72bc22be6913acfab364b40bce1ba83c"},
{file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8c60ca7339acd497a55b0ea5d506b2a2612afb2826560416f6894e8b5770d4a9"},
{file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b7b2d86dd06bfc2ade3312a83a5c364c7ec2e3498f8734282c6c3d4b07b346b8"},
{file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd78cfcda14a1ef52584dbb008f7ac81c1328c0f58184bf9a84c49c605002da6"},
{file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6e27f48bcd0957c6d4cb9d6fa6b61d192d0b13d5ef563e5f2ae35feafc0d179c"},
{file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:01ad647cdd609225c5350561d084b42ddf732f4eeefe6e678765636791e78b9a"},
{file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:619a609aa74ae43d90ed2e89bdd784765de0a25ca761b93e196d938b8fd1dbbd"},
{file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:89149166622f4db9b4b6a449256291dc87a99ee53151c74cbd82a53c8c2f6ccd"},
{file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:7709f51f5f7c853f0fb938bcd3bc59cdfdc5203635ffd18bf354f6967ea0f824"},
{file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:345b0426edd4e18138d6528aed636de7a9ed169b4aaf9d61a8c19e39d26838ca"},
{file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:0907f11d019260cdc3f94fbdb23ff9125f6b5d1039b76003b5b0ac9d6a6c9d5b"},
{file = "charset_normalizer-3.4.1-cp38-cp38-win32.whl", hash = "sha256:ea0d8d539afa5eb2728aa1932a988a9a7af94f18582ffae4bc10b3fbdad0626e"},
{file = "charset_normalizer-3.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:329ce159e82018d646c7ac45b01a430369d526569ec08516081727a20e9e4af4"},
{file = "charset_normalizer-3.4.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b97e690a2118911e39b4042088092771b4ae3fc3aa86518f84b8cf6888dbdb41"},
{file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78baa6d91634dfb69ec52a463534bc0df05dbd546209b79a3880a34487f4b84f"},
{file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1a2bc9f351a75ef49d664206d51f8e5ede9da246602dc2d2726837620ea034b2"},
{file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:75832c08354f595c760a804588b9357d34ec00ba1c940c15e31e96d902093770"},
{file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0af291f4fe114be0280cdd29d533696a77b5b49cfde5467176ecab32353395c4"},
{file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0167ddc8ab6508fe81860a57dd472b2ef4060e8d378f0cc555707126830f2537"},
{file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2a75d49014d118e4198bcee5ee0a6f25856b29b12dbf7cd012791f8a6cc5c496"},
{file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:363e2f92b0f0174b2f8238240a1a30142e3db7b957a5dd5689b0e75fb717cc78"},
{file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:ab36c8eb7e454e34e60eb55ca5d241a5d18b2c6244f6827a30e451c42410b5f7"},
{file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:4c0907b1928a36d5a998d72d64d8eaa7244989f7aaaf947500d3a800c83a3fd6"},
{file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:04432ad9479fa40ec0f387795ddad4437a2b50417c69fa275e212933519ff294"},
{file = "charset_normalizer-3.4.1-cp39-cp39-win32.whl", hash = "sha256:3bed14e9c89dcb10e8f3a29f9ccac4955aebe93c71ae803af79265c9ca5644c5"},
{file = "charset_normalizer-3.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:49402233c892a461407c512a19435d1ce275543138294f7ef013f0b63d5d3765"},
{file = "charset_normalizer-3.4.1-py3-none-any.whl", hash = "sha256:d98b1668f06378c6dbefec3b92299716b931cd4e6061f3c875a71ced1780ab85"},
{file = "charset_normalizer-3.4.1.tar.gz", hash = "sha256:44251f18cd68a75b56585dd00dae26183e102cd5e0f9f1466e6df5da2ed64ea3"},
]

[[package]]
name = "click"
version = "8.1.8"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"},
{file = "click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a"},
]

[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}

[[package]]
name = "clickhouse-driver"
version = "0.2.9"
description = "Python driver with native interface for ClickHouse"
optional = false
python-versions = "<4,>=3.7"
groups = ["main"]
files = [
{file = "clickhouse-driver-0.2.9.tar.gz", hash = "sha256:050ea4870ead993910b39e7fae965dc1c347b2e8191dcd977cd4b385f9e19f87"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6ce04e9d0d0f39561f312d1ac1a8147bc9206e4267e1a23e20e0423ebac95534"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7ae5c8931bf290b9d85582e7955b9aad7f19ff9954e48caa4f9a180ea4d01078"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e51792f3bd12c32cb15a907f12de3c9d264843f0bb33dce400e3966c9f09a3f"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:42fc546c31e4a04c97b749769335a679c9044dc693fa7a93e38c97fd6727173d"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6a383a403d185185c64e49edd6a19b2ec973c5adcb8ebff7ed2fc539a2cc65a5"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f05321a97e816afc75b3e4f9eda989848fecf14ecf1a91d0f22c04258123d1f7"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be47e793846aac28442b6b1c6554e0731b848a5a7759a54aa2489997354efe4a"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:780e42a215d1ae2f6d695d74dd6f087781fb2fa51c508b58f79e68c24c5364e0"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:9e28f1fe850675e173db586e9f1ac790e8f7edd507a4227cd54cd7445f8e75b6"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:125aae7f1308d3083dadbb3c78f828ae492e060f13e4007a0cf53a8169ed7b39"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:2f3c4fbb61e75c62a1ab93a1070d362de4cb5682f82833b2c12deccb3bae888d"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0dc03196a84e32d23b88b665be69afae98f57426f5fdf203e16715b756757961"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-win32.whl", hash = "sha256:25695d78a1d7ad6e221e800612eac08559f6182bf6dee0a220d08de7b612d993"},
{file = "clickhouse_driver-0.2.9-cp310-cp310-win_amd64.whl", hash = "sha256:367acac95398d721a0a2a6cf87e93638c5588b79498a9848676ce7f182540a6c"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5a7353a7a08eee3aa0001d8a5d771cb1f37e2acae1b48178002431f23892121a"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6af1c6cbc3481205503ab72a34aa76d6519249c904aa3f7a84b31e7b435555be"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48033803abd1100bfff6b9a1769d831b672cd3cda5147e0323b956fd1416d38d"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1f202a58a540c85e47c31dabc8f84b6fe79dca5315c866450a538d58d6fa0571"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e4df50fd84bfa4aa1eb7b52d48136066bfb64fabb7ceb62d4c318b45a296200b"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:433a650571a0d7766eb6f402e8f5930222997686c2ee01ded22f1d8fd46af9d4"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:232ee260475611cbf7adb554b81db6b5790b36e634fe2164f4ffcd2ca3e63a71"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:09049f7e71f15c9c9a03f597f77fc1f7b61ababd155c06c0d9e64d1453d945d7"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:424153d1d5f5a807f596a48cc88119f9fb3213ca7e38f57b8d15dcc964dd91f7"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:4f078fd1cf19c4ca63b8d1e0803df665310c8d5b644c5b02bf2465e8d6ef8f55"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:f138d939e26e767537f891170b69a55a88038919f5c10d8865b67b8777fe4848"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9aafabc7e32942f85dcb46f007f447ab69024831575df97cae28c6ed127654d1"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-win32.whl", hash = "sha256:935e16ebf1a1998d8493979d858821a755503c9b8af572d9c450173d4b88868c"},
{file = "clickhouse_driver-0.2.9-cp311-cp311-win_amd64.whl", hash = "sha256:306b3102cba278b5dfec6f5f7dc8b78416c403901510475c74913345b56c9e42"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:fcb2fd00e58650ae206a6d5dbc83117240e622471aa5124733fbf2805eb8bda0"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b7a3e6b0a1eb218e3d870a94c76daaf65da46dca8f6888ea6542f94905c24d88"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a8d8e2888a857d8db3d98765a5ad23ab561241feaef68bbffc5a0bd9c142342"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:85d50c011467f5ff6772c4059345968b854b72e07a0219030b7c3f68419eb7f7"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:93b395c1370629ccce8fb3e14cd5be2646d227bd32018c21f753c543e9a7e96b"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dbcee870c60d9835e5dce1456ab6b9d807e6669246357f4b321ef747b90fa43"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fffa5a5f317b1ec92e406a30a008929054cf3164d2324a3c465d0a0330273bf8"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:476702740a279744badbd177ae1c4a2d089ec128bd676861219d1f92078e4530"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:5cd6d95fab5ff80e9dc9baedc9a926f62f74072d42d5804388d63b63bec0bb63"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:05027d32d7cf3e46cb8d04f8c984745ae01bd1bc7b3579f9dadf9b3cca735697"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:3d11831842250b4c1b26503a6e9c511fc03db096608b7c6af743818c421a3032"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:81b4b671b785ebb0b8aeabf2432e47072413d81db959eb8cfd8b6ab58c5799c6"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-win32.whl", hash = "sha256:e893bd4e014877174a59e032b0e99809c95ec61328a0e6bd9352c74a2f6111a8"},
{file = "clickhouse_driver-0.2.9-cp312-cp312-win_amd64.whl", hash = "sha256:de6624e28eeffd01668803d28ae89e3d4e359b1bff8b60e4933e1cb3c6f86f18"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:909205324089a9ee59bee7ecbfa94595435118cca310fd62efdf13f225aa2965"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03f31d6e47dc2b0f367f598f5629147ed056d7216c1788e25190fcfbfa02e749"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ed84179914b2b7bb434c2322a6e7fd83daa681c97a050450511b66d917a129bb"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:67d1bf63efb4ba14ae6c6da99622e4a549e68fc3ee14d859bf611d8e6a61b3fa"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eed23ea41dd582d76f7a2ec7e09cbe5e9fec008f11a4799fa35ce44a3ebd283"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a654291132766efa2703058317749d7c69b69f02d89bac75703eaf7f775e20da"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:1c26c5ef16d0ef3cabc5bc03e827e01b0a4afb5b4eaf8850b7cf740cee04a1d4"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:b57e83d7986d3cbda6096974a9510eb53cb33ad9072288c87c820ba5eee3370e"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:153cc03b36f22cbde55aa6a5bbe99072a025567a54c48b262eb0da15d8cd7c83"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:83a857d99192936091f495826ae97497cd1873af213b1e069d56369fb182ab8e"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:bb05a9bb22cbe9ad187ad268f86adf7e60df6083331fe59c01571b7b725212dd"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-win32.whl", hash = "sha256:3e282c5c25e32d96ed151e5460d2bf4ecb805ea64449197dd918e84e768016df"},
{file = "clickhouse_driver-0.2.9-cp37-cp37m-win_amd64.whl", hash = "sha256:c46dccfb04a9afd61a1b0e60bfefceff917f76da2c863f9b36b39248496d5c77"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:612ca9028c718f362c97f552e63d313cf1a70a616ef8532ddb0effdaf12ebef9"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:471b884d318e012f68d858476052742048918854f7dfe87d78e819f87a848ffb"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:58ee63c35e99da887eb035c8d6d9e64fd298a0efc1460395297dd5cc281a6912"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0819bb63d2c5025a1fb9589f57ef82602687cef11081d6dfa6f2ce44606a1772"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f6680ee18870bca1fbab1736c8203a965efaec119ab4c37821ad99add248ee08"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:713c498741b54debd3a10a5529e70b6ed85ca33c3e8629e24ae5cd8160b5a5f2"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:730837b8f63941065c9c955c44286aef0987fb084ffb3f55bf1e4fe07df62269"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:9f4e38b2ea09214c8e7848a19391009a18c56a3640e1ba1a606b9e57aeb63404"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:457f1d6639e0345b717ae603c79bd087a35361ce68c1c308d154b80b841e5e7d"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:49a55aeb8ea625a87965a96e361bbb1ad67d0931bfb2a575f899c1064e70c2da"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:9230058d8c9b1a04079afae4650fb67745f0f1c39db335728f64d48bd2c19246"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:8798258bd556542dd9c6b8ebe62f9c5110c9dcdf97c57fb077e7b8b6d6da0826"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-win32.whl", hash = "sha256:ce8e3f4be46bcc63555863f70ab0035202b082b37e6f16876ef50e7bc4b47056"},
{file = "clickhouse_driver-0.2.9-cp38-cp38-win_amd64.whl", hash = "sha256:2d982959ff628255808d895a67493f2dab0c3a9bfc65eeda0f00c8ae9962a1b3"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a46b227fab4420566ed24ee70d90076226d16fcf09c6ad4d428717efcf536446"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7eaa2ce5ea08cf5fddebb8c274c450e102f329f9e6966b6cd85aa671c48e5552"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f97f0083194d6e23b5ef6156ed0d5388c37847b298118199d7937ba26412a9e2"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a6cab5cdbb0f8ee51d879d977b78f07068b585225ac656f3c081896c362e8f83"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cdb1b011a53ee71539e9dc655f268b111bac484db300da92829ed59e910a8fd0"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bf51bb761b281d20910b4b689c699ef98027845467daa5bb5dfdb53bd6ee404"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8ea462e3cebb121ff55002e9c8a9a0a3fd9b5bbbf688b4960f0a83c0172fb31"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:70bee21c245226ad0d637bf470472e2d487b86911b6d673a862127b934336ff4"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:253a3c223b944d691bf0abbd599f592ea3b36f0a71d2526833b1718f37eca5c2"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:a6549b53fc5c403dc556cb39b2ae94d73f9b113daa00438a660bb1dd5380ae4d"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:1c685cd4abe61af1c26279ff04b9f567eb4d6c1ec7fb265af7481b1f153043aa"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7e25144219577491929d032a6c3ddd63c6cd7fa764af829a5637f798190d9b26"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-win32.whl", hash = "sha256:0b9925610d25405a8e6d83ff4f54fc2456a121adb0155999972f5edd6ba3efc8"},
{file = "clickhouse_driver-0.2.9-cp39-cp39-win_amd64.whl", hash = "sha256:b243de483cfa02716053b0148d73558f4694f3c27b97fc1eaa97d7079563a14d"},
{file = "clickhouse_driver-0.2.9-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:45a3d5b1d06750fd6a18c29b871494a2635670099ec7693e756a5885a4a70dbf"},
{file = "clickhouse_driver-0.2.9-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8415ffebd6ca9eef3024763abc450f8659f1716d015bd563c537d01c7fbc3569"},
{file = "clickhouse_driver-0.2.9-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ace48db993aa4bd31c42de0fa8d38c94ad47405916d6b61f7a7168a48fb52ac1"},
{file = "clickhouse_driver-0.2.9-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b07123334fe143bfe6fa4e3d4b732d647d5fd2cfb9ec7f2f76104b46fe9d20c6"},
{file = "clickhouse_driver-0.2.9-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:e2af3efa73d296420ce6362789f5b1febf75d4aa159a479393f01549115509d5"},
{file = "clickhouse_driver-0.2.9-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:baf57eede88d07a1eb04352d26fc58a4d97991ca3d8840f7c5d48691dec9f251"},
{file = "clickhouse_driver-0.2.9-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:275d0ccdab9c3571bdb3e9acfab4497930aa584ff2766b035bb2f854deaf8b82"},
{file = "clickhouse_driver-0.2.9-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:293da77bfcac3168fb35b27c242f97c1a05502435c0686ecbb8e2e4abcb3de26"},
{file = "clickhouse_driver-0.2.9-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8d6c2e5830705e4eeef33070ca4d5a24dfa221f28f2f540e5e6842c26e70b10b"},
{file = "clickhouse_driver-0.2.9-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:11934bd78d97dd7e1a23a6222b5edd1e1b4d34e1ead5c846dc2b5c56fdc35ff5"},
{file = "clickhouse_driver-0.2.9-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:b802b6f0fbdcc3ab81b87f09b694dde91ab049f44d1d2c08c3dc8ea9a5950cfa"},
{file = "clickhouse_driver-0.2.9-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7af871c5315eb829ecf4533c790461ea8f73b3bfd5f533b0467e479fdf6ddcfd"},
{file = "clickhouse_driver-0.2.9-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d577dd4867b9e26cf60590e1f500990c8701a6e3cfbb9e644f4d0c0fb607028"},
{file = "clickhouse_driver-0.2.9-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2ed3dea2d1eca85fef5b8564ddd76dedb15a610c77d55d555b49d9f7c896b64b"},
{file = "clickhouse_driver-0.2.9-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:91ec96f2c48e5bdeac9eea43a9bc9cc19acb2d2c59df0a13d5520dfc32457605"},
{file = "clickhouse_driver-0.2.9-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7667ab423452754f36ba8fb41e006a46baace9c94e2aca2a745689b9f2753dfb"},
{file = "clickhouse_driver-0.2.9-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:653583b1f3b088d106f180d6f02c90917ecd669ec956b62903a05df4a7f44863"},
{file = "clickhouse_driver-0.2.9-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ef3dd0cbdf2f0171caab90389af0ede068ec802bf46c6a77f14e6edc86671bc"},
{file = "clickhouse_driver-0.2.9-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11b1833ee8ff8d5df39a34a895e060b57bd81e05ea68822bc60476daff4ce1c8"},
{file = "clickhouse_driver-0.2.9-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:8a3195639e6393b9d4aafe736036881ff86b6be5855d4bf7d9f5c31637181ec3"},
]

[package.dependencies]
pytz = "*"
tzlocal = "*"

[package.extras]
lz4 = ["clickhouse-cityhash (>=1.0.2.1)", "lz4 (<=3.0.1) ; implementation_name == \"pypy\"", "lz4 ; implementation_name != \"pypy\""]
numpy = ["numpy (>=1.12.0)", "pandas (>=0.24.0)"]
zstd = ["clickhouse-cityhash (>=1.0.2.1)", "zstd"]

[[package]]
name = "colorama"
version = "0.4.6"
description = "Cross-platform colored terminal text."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["main", "dev"]
files = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
markers = {main = "sys_platform == \"win32\" or platform_system == \"Windows\"", dev = "sys_platform == \"win32\""}

[[package]]
name = "dill"
version = "0.3.9"
description = "serialize all of Python"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "dill-0.3.9-py3-none-any.whl", hash = "sha256:468dff3b89520b474c0397703366b7b95eebe6303f108adf9b19da1f702be87a"},
{file = "dill-0.3.9.tar.gz", hash = "sha256:81aa267dddf68cbfe8029c42ca9ec6a4ab3b22371d1c450abc54422577b4512c"},
]

[package.extras]
graph = ["objgraph (>=1.7.2)"]
profile = ["gprof2dot (>=2022.7.29)"]

[[package]]
name = "docker"
version = "7.1.0"
description = "A Python library for the Docker Engine API."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "docker-7.1.0-py3-none-any.whl", hash = "sha256:c96b93b7f0a746f9e77d325bcfb87422a3d8bd4f03136ae8a85b37f1898d5fc0"},
{file = "docker-7.1.0.tar.gz", hash = "sha256:ad8c70e6e3f8926cb8a92619b832b4ea5299e2831c14284663184e200546fa6c"},
]

[package.dependencies]
pywin32 = {version = ">=304", markers = "sys_platform == \"win32\""}
requests = ">=2.26.0"
urllib3 = ">=1.26.0"

[package.extras]
dev = ["coverage (==7.2.7)", "pytest (==7.4.2)", "pytest-cov (==4.1.0)", "pytest-timeout (==2.1.0)", "ruff (==0.1.8)"]
docs = ["myst-parser (==0.18.0)", "sphinx (==5.1.1)"]
ssh = ["paramiko (>=2.4.3)"]
websockets = ["websocket-client (>=1.3.0)"]

[[package]]
name = "idna"
version = "3.10"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
{file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
]

[package.extras]
all = ["flake8 (>=7.1.1)", "mypy (>=1.11.2)", "pytest (>=8.3.2)", "ruff (>=0.6.2)"]

[[package]]
name = "importlib-resources"
version = "5.13.0"
description = "Read resources from Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "importlib_resources-5.13.0-py3-none-any.whl", hash = "sha256:9f7bd0c97b79972a6cce36a366356d16d5e13b09679c11a58f1014bfdf8e64b2"},
{file = "importlib_resources-5.13.0.tar.gz", hash = "sha256:82d5c6cca930697dbbd86c93333bb2c2e72861d4789a11c2662b933e5ad2b528"},
]

[package.extras]
docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
testing = ["pytest (>=6)", "pytest-black (>=0.3.7) ; platform_python_implementation != \"PyPy\"", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1) ; platform_python_implementation != \"PyPy\"", "pytest-ruff"]

[[package]]
name = "iniconfig"
version = "2.1.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"},
{file = "iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7"},
]

[[package]]
name = "isort"
version = "6.0.1"
description = "A Python utility / library to sort Python imports."
optional = false
python-versions = ">=3.9.0"
groups = ["dev"]
files = [
{file = "isort-6.0.1-py3-none-any.whl", hash = "sha256:2dc5d7f65c9678d94c88dfc29161a320eec67328bc97aad576874cb4be1e9615"},
{file = "isort-6.0.1.tar.gz", hash = "sha256:1cb5df28dfbc742e490c5e41bad6da41b805b0a8be7bc93cd0fb2a8a890ac450"},
]

[package.extras]
colors = ["colorama"]
plugins = ["setuptools"]

[[package]]
name = "mccabe"
version = "0.7.0"
description = "McCabe checker, plugin for flake8"
optional = false
python-versions = ">=3.6"
groups = ["dev"]
files = [
{file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"},
{file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
]

[[package]]
name = "mypy-extensions"
version = "1.0.0"
description = "Type system extensions for programs checked with the mypy type checker."
optional = false
python-versions = ">=3.5"
groups = ["main"]
files = [
{file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
{file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
]

[[package]]
name = "packaging"
version = "24.2"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759"},
{file = "packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f"},
]

[[package]]
name = "pathspec"
version = "0.12.1"
description = "Utility library for gitignore style pattern matching of file paths."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pathspec-0.12.1-py3-none-any.whl", hash = "sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08"},
{file = "pathspec-0.12.1.tar.gz", hash = "sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712"},
]

[[package]]
name = "platformdirs"
version = "4.3.7"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`."
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "platformdirs-4.3.7-py3-none-any.whl", hash = "sha256:a03875334331946f13c549dbd8f4bac7a13a50a895a0eb1e8c6a8ace80d40a94"},
{file = "platformdirs-4.3.7.tar.gz", hash = "sha256:eb437d586b6a0986388f0d6f74aa0cde27b48d0e3d66843640bfb6bdcdb6e351"},
]

[package.extras]
docs = ["furo (>=2024.8.6)", "proselint (>=0.14)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=8.3.4)", "pytest-cov (>=6)", "pytest-mock (>=3.14)"]
type = ["mypy (>=1.14.1)"]

[[package]]
name = "pluggy"
version = "1.5.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"},
{file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"},
]

[package.extras]
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]

[[package]]
name = "psycopg2"
version = "2.9.10"
description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "psycopg2-2.9.10-cp310-cp310-win32.whl", hash = "sha256:5df2b672140f95adb453af93a7d669d7a7bf0a56bcd26f1502329166f4a61716"},
{file = "psycopg2-2.9.10-cp310-cp310-win_amd64.whl", hash = "sha256:c6f7b8561225f9e711a9c47087388a97fdc948211c10a4bccbf0ba68ab7b3b5a"},
{file = "psycopg2-2.9.10-cp311-cp311-win32.whl", hash = "sha256:47c4f9875125344f4c2b870e41b6aad585901318068acd01de93f3677a6522c2"},
{file = "psycopg2-2.9.10-cp311-cp311-win_amd64.whl", hash = "sha256:0435034157049f6846e95103bd8f5a668788dd913a7c30162ca9503fdf542cb4"},
{file = "psycopg2-2.9.10-cp312-cp312-win32.whl", hash = "sha256:65a63d7ab0e067e2cdb3cf266de39663203d38d6a8ed97f5ca0cb315c73fe067"},
{file = "psycopg2-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:4a579d6243da40a7b3182e0430493dbd55950c493d8c68f4eec0b302f6bbf20e"},
{file = "psycopg2-2.9.10-cp313-cp313-win_amd64.whl", hash = "sha256:91fd603a2155da8d0cfcdbf8ab24a2d54bca72795b90d2a3ed2b6da8d979dee2"},
{file = "psycopg2-2.9.10-cp39-cp39-win32.whl", hash = "sha256:9d5b3b94b79a844a986d029eee38998232451119ad653aea42bb9220a8c5066b"},
{file = "psycopg2-2.9.10-cp39-cp39-win_amd64.whl", hash = "sha256:88138c8dedcbfa96408023ea2b0c369eda40fe5d75002c0964c78f46f11fa442"},
{file = "psycopg2-2.9.10.tar.gz", hash = "sha256:12ec0b40b0273f95296233e8750441339298e6a572f7039da5b260e3c8b60e11"},
]

[[package]]
name = "pyflakes"
version = "3.3.2"
description = "passive checker of Python programs"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "pyflakes-3.3.2-py2.py3-none-any.whl", hash = "sha256:5039c8339cbb1944045f4ee5466908906180f13cc99cc9949348d10f82a5c32a"},
{file = "pyflakes-3.3.2.tar.gz", hash = "sha256:6dfd61d87b97fba5dcfaaf781171ac16be16453be6d816147989e7f6e6a9576b"},
]

[[package]]
name = "pylint"
version = "3.3.6"
description = "python code static checker"
optional = false
python-versions = ">=3.9.0"
groups = ["dev"]
files = [
{file = "pylint-3.3.6-py3-none-any.whl", hash = "sha256:8b7c2d3e86ae3f94fb27703d521dd0b9b6b378775991f504d7c3a6275aa0a6a6"},
{file = "pylint-3.3.6.tar.gz", hash = "sha256:b634a041aac33706d56a0d217e6587228c66427e20ec21a019bc4cdee48c040a"},
]

[package.dependencies]
astroid = ">=3.3.8,<=3.4.0.dev0"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
dill = {version = ">=0.3.7", markers = "python_version >= \"3.12\""}
isort = ">=4.2.5,<5.13 || >5.13,<7"
mccabe = ">=0.6,<0.8"
platformdirs = ">=2.2"
tomlkit = ">=0.10.1"

[package.extras]
spelling = ["pyenchant (>=3.2,<4.0)"]
testutils = ["gitpython (>3)"]

[[package]]
name = "pytest"
version = "8.3.5"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820"},
{file = "pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845"},
]

[package.dependencies]
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=1.5,<2"

[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]

[[package]]
name = "python-dotenv"
version = "1.1.0"
description = "Read key-value pairs from a .env file and set them as environment variables"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "python_dotenv-1.1.0-py3-none-any.whl", hash = "sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d"},
{file = "python_dotenv-1.1.0.tar.gz", hash = "sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5"},
]

[package.extras]
cli = ["click (>=5.0)"]

[[package]]
name = "pytz"
version = "2025.2"
description = "World timezone definitions, modern and historical"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00"},
{file = "pytz-2025.2.tar.gz", hash = "sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3"},
]

[[package]]
name = "pywin32"
version = "310"
description = "Python for Window Extensions"
optional = false
python-versions = "*"
groups = ["main"]
markers = "sys_platform == \"win32\""
files = [
{file = "pywin32-310-cp310-cp310-win32.whl", hash = "sha256:6dd97011efc8bf51d6793a82292419eba2c71cf8e7250cfac03bba284454abc1"},
{file = "pywin32-310-cp310-cp310-win_amd64.whl", hash = "sha256:c3e78706e4229b915a0821941a84e7ef420bf2b77e08c9dae3c76fd03fd2ae3d"},
{file = "pywin32-310-cp310-cp310-win_arm64.whl", hash = "sha256:33babed0cf0c92a6f94cc6cc13546ab24ee13e3e800e61ed87609ab91e4c8213"},
{file = "pywin32-310-cp311-cp311-win32.whl", hash = "sha256:1e765f9564e83011a63321bb9d27ec456a0ed90d3732c4b2e312b855365ed8bd"},
{file = "pywin32-310-cp311-cp311-win_amd64.whl", hash = "sha256:126298077a9d7c95c53823934f000599f66ec9296b09167810eb24875f32689c"},
{file = "pywin32-310-cp311-cp311-win_arm64.whl", hash = "sha256:19ec5fc9b1d51c4350be7bb00760ffce46e6c95eaf2f0b2f1150657b1a43c582"},
{file = "pywin32-310-cp312-cp312-win32.whl", hash = "sha256:8a75a5cc3893e83a108c05d82198880704c44bbaee4d06e442e471d3c9ea4f3d"},
{file = "pywin32-310-cp312-cp312-win_amd64.whl", hash = "sha256:bf5c397c9a9a19a6f62f3fb821fbf36cac08f03770056711f765ec1503972060"},
{file = "pywin32-310-cp312-cp312-win_arm64.whl", hash = "sha256:2349cc906eae872d0663d4d6290d13b90621eaf78964bb1578632ff20e152966"},
{file = "pywin32-310-cp313-cp313-win32.whl", hash = "sha256:5d241a659c496ada3253cd01cfaa779b048e90ce4b2b38cd44168ad555ce74ab"},
{file = "pywin32-310-cp313-cp313-win_amd64.whl", hash = "sha256:667827eb3a90208ddbdcc9e860c81bde63a135710e21e4cb3348968e4bd5249e"},
{file = "pywin32-310-cp313-cp313-win_arm64.whl", hash = "sha256:e308f831de771482b7cf692a1f308f8fca701b2d8f9dde6cc440c7da17e47b33"},
{file = "pywin32-310-cp38-cp38-win32.whl", hash = "sha256:0867beb8addefa2e3979d4084352e4ac6e991ca45373390775f7084cc0209b9c"},
{file = "pywin32-310-cp38-cp38-win_amd64.whl", hash = "sha256:30f0a9b3138fb5e07eb4973b7077e1883f558e40c578c6925acc7a94c34eaa36"},
{file = "pywin32-310-cp39-cp39-win32.whl", hash = "sha256:851c8d927af0d879221e616ae1f66145253537bbdd321a77e8ef701b443a9a1a"},
{file = "pywin32-310-cp39-cp39-win_amd64.whl", hash = "sha256:96867217335559ac619f00ad70e513c0fcf84b8a3af9fc2bba3b59b97da70475"},
]

[[package]]
name = "requests"
version = "2.32.3"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"},
{file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"},
]

[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<4"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<3"

[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]

[[package]]
name = "testcontainers"
version = "4.10.0"
description = "Python library for throwaway instances of anything that can run in a Docker container"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "testcontainers-4.10.0-py3-none-any.whl", hash = "sha256:31ed1a81238c7e131a2a29df6db8f23717d892b592fa5a1977fd0dcd0c23fc23"},
{file = "testcontainers-4.10.0.tar.gz", hash = "sha256:03f85c3e505d8b4edeb192c72a961cebbcba0dd94344ae778b4a159cb6dcf8d3"},
]

[package.dependencies]
docker = "*"
python-dotenv = "*"
typing-extensions = "*"
urllib3 = "*"
wrapt = "*"

[package.extras]
arangodb = ["python-arango (>=7.8,<8.0)"]
aws = ["boto3", "httpx"]
azurite = ["azure-storage-blob (>=12.19,<13.0)"]
chroma = ["chromadb-client"]
clickhouse = ["clickhouse-driver"]
cosmosdb = ["azure-cosmos"]
db2 = ["ibm_db_sa", "sqlalchemy"]
generic = ["httpx", "redis"]
google = ["google-cloud-datastore (>=2)", "google-cloud-pubsub (>=2)"]
influxdb = ["influxdb", "influxdb-client"]
k3s = ["kubernetes", "pyyaml"]
keycloak = ["python-keycloak"]
localstack = ["boto3"]
mailpit = ["cryptography"]
minio = ["minio"]
mongodb = ["pymongo"]
mssql = ["pymssql", "sqlalchemy"]
mysql = ["pymysql[rsa]", "sqlalchemy"]
nats = ["nats-py"]
neo4j = ["neo4j"]
opensearch = ["opensearch-py"]
oracle = ["oracledb", "sqlalchemy"]
oracle-free = ["oracledb", "sqlalchemy"]
qdrant = ["qdrant-client"]
rabbitmq = ["pika"]
redis = ["redis"]
registry = ["bcrypt"]
scylla = ["cassandra-driver (==3.29.1)"]
selenium = ["selenium"]
sftp = ["cryptography"]
test-module-import = ["httpx"]
trino = ["trino"]
weaviate = ["weaviate-client (>=4.5.4,<5.0.0)"]

[[package]]
name = "tomlkit"
version = "0.13.2"
description = "Style preserving TOML library"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "tomlkit-0.13.2-py3-none-any.whl", hash = "sha256:7a974427f6e119197f670fbbbeae7bef749a6c14e793db934baefc1b5f03efde"},
{file = "tomlkit-0.13.2.tar.gz", hash = "sha256:fff5fe59a87295b278abd31bec92c15d9bc4a06885ab12bcea52c71119392e79"},
]

[[package]]
name = "typing-extensions"
version = "4.13.2"
description = "Backported and Experimental Type Hints for Python 3.8+"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "typing_extensions-4.13.2-py3-none-any.whl", hash = "sha256:a439e7c04b49fec3e5d3e2beaa21755cadbbdc391694e28ccdd36ca4a1408f8c"},
{file = "typing_extensions-4.13.2.tar.gz", hash = "sha256:e6c81219bd689f51865d9e372991c540bda33a0379d5573cddb9a3a23f7caaef"},
]

[[package]]
name = "tzdata"
version = "2025.2"
description = "Provider of IANA time zone data"
optional = false
python-versions = ">=2"
groups = ["main"]
markers = "platform_system == \"Windows\""
files = [
{file = "tzdata-2025.2-py2.py3-none-any.whl", hash = "sha256:1a403fada01ff9221ca8044d701868fa132215d84beb92242d9acd2147f667a8"},
{file = "tzdata-2025.2.tar.gz", hash = "sha256:b60a638fcc0daffadf82fe0f57e53d06bdec2f36c4df66280ae79bce6bd6f2b9"},
]

[[package]]
name = "tzlocal"
version = "5.3.1"
description = "tzinfo object for the local timezone"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "tzlocal-5.3.1-py3-none-any.whl", hash = "sha256:eb1a66c3ef5847adf7a834f1be0800581b683b5608e74f86ecbcef8ab91bb85d"},
{file = "tzlocal-5.3.1.tar.gz", hash = "sha256:cceffc7edecefea1f595541dbd6e990cb1ea3d19bf01b2809f362a03dd7921fd"},
]

[package.dependencies]
tzdata = {version = "*", markers = "platform_system == \"Windows\""}

[package.extras]
devenv = ["check-manifest", "pytest (>=4.3)", "pytest-cov", "pytest-mock (>=3.3)", "zest.releaser"]

[[package]]
name = "urllib3"
version = "2.4.0"
description = "HTTP library with thread-safe connection pooling, file post, and more."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "urllib3-2.4.0-py3-none-any.whl", hash = "sha256:4e16665048960a0900c702d4a66415956a584919c03361cac9f1df5c5dd7e813"},
{file = "urllib3-2.4.0.tar.gz", hash = "sha256:414bc6535b787febd7567804cc015fee39daab8ad86268f1310a9250697de466"},
]

[package.extras]
brotli = ["brotli (>=1.0.9) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\""]
h2 = ["h2 (>=4,<5)"]
socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
zstd = ["zstandard (>=0.18.0)"]

[[package]]
name = "wiremock"
version = "2.6.1"
description = "Wiremock Admin API Client"
optional = false
python-versions = ">=3.7,<4.0"
groups = ["main"]
files = [
{file = "wiremock-2.6.1-py3-none-any.whl", hash = "sha256:417a803b0bba3ab6240410aedb4de15a32581fb29b1310b05289b4aa1a7c9ffd"},
{file = "wiremock-2.6.1.tar.gz", hash = "sha256:89b64d763a68a1808274aa4daf802f7ce3f9bff2a18ac6bf8923c997a21d67c1"},
]

[package.dependencies]
importlib-resources = ">=5.12.0,<6.0.0"
requests = ">=2.20.0,<3.0.0"

[package.extras]
testing = ["docker (>=6.1.0,<7.0.0)", "testcontainers (>=3.7.1,<4.0.0)"]

[[package]]
name = "wrapt"
version = "1.17.2"
description = "Module for decorators, wrappers and monkey patching."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "wrapt-1.17.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:3d57c572081fed831ad2d26fd430d565b76aa277ed1d30ff4d40670b1c0dd984"},
{file = "wrapt-1.17.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b5e251054542ae57ac7f3fba5d10bfff615b6c2fb09abeb37d2f1463f841ae22"},
{file = "wrapt-1.17.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:80dd7db6a7cb57ffbc279c4394246414ec99537ae81ffd702443335a61dbf3a7"},
{file = "wrapt-1.17.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a6e821770cf99cc586d33833b2ff32faebdbe886bd6322395606cf55153246c"},
{file = "wrapt-1.17.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b60fb58b90c6d63779cb0c0c54eeb38941bae3ecf7a73c764c52c88c2dcb9d72"},
{file = "wrapt-1.17.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b870b5df5b71d8c3359d21be8f0d6c485fa0ebdb6477dda51a1ea54a9b558061"},
{file = "wrapt-1.17.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4011d137b9955791f9084749cba9a367c68d50ab8d11d64c50ba1688c9b457f2"},
{file = "wrapt-1.17.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:1473400e5b2733e58b396a04eb7f35f541e1fb976d0c0724d0223dd607e0f74c"},
{file = "wrapt-1.17.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3cedbfa9c940fdad3e6e941db7138e26ce8aad38ab5fe9dcfadfed9db7a54e62"},
{file = "wrapt-1.17.2-cp310-cp310-win32.whl", hash = "sha256:582530701bff1dec6779efa00c516496968edd851fba224fbd86e46cc6b73563"},
{file = "wrapt-1.17.2-cp310-cp310-win_amd64.whl", hash = "sha256:58705da316756681ad3c9c73fd15499aa4d8c69f9fd38dc8a35e06c12468582f"},
{file = "wrapt-1.17.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ff04ef6eec3eee8a5efef2401495967a916feaa353643defcc03fc74fe213b58"},
{file = "wrapt-1.17.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4db983e7bca53819efdbd64590ee96c9213894272c776966ca6306b73e4affda"},
{file = "wrapt-1.17.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9abc77a4ce4c6f2a3168ff34b1da9b0f311a8f1cfd694ec96b0603dff1c79438"},
{file = "wrapt-1.17.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b929ac182f5ace000d459c59c2c9c33047e20e935f8e39371fa6e3b85d56f4a"},
{file = "wrapt-1.17.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f09b286faeff3c750a879d336fb6d8713206fc97af3adc14def0cdd349df6000"},
{file = "wrapt-1.17.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1a7ed2d9d039bd41e889f6fb9364554052ca21ce823580f6a07c4ec245c1f5d6"},
{file = "wrapt-1.17.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:129a150f5c445165ff941fc02ee27df65940fcb8a22a61828b1853c98763a64b"},
{file = "wrapt-1.17.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:1fb5699e4464afe5c7e65fa51d4f99e0b2eadcc176e4aa33600a3df7801d6662"},
{file = "wrapt-1.17.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9a2bce789a5ea90e51a02dfcc39e31b7f1e662bc3317979aa7e5538e3a034f72"},
{file = "wrapt-1.17.2-cp311-cp311-win32.whl", hash = "sha256:4afd5814270fdf6380616b321fd31435a462019d834f83c8611a0ce7484c7317"},
{file = "wrapt-1.17.2-cp311-cp311-win_amd64.whl", hash = "sha256:acc130bc0375999da18e3d19e5a86403667ac0c4042a094fefb7eec8ebac7cf3"},
{file = "wrapt-1.17.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:d5e2439eecc762cd85e7bd37161d4714aa03a33c5ba884e26c81559817ca0925"},
{file = "wrapt-1.17.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3fc7cb4c1c744f8c05cd5f9438a3caa6ab94ce8344e952d7c45a8ed59dd88392"},
{file = "wrapt-1.17.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8fdbdb757d5390f7c675e558fd3186d590973244fab0c5fe63d373ade3e99d40"},
{file = "wrapt-1.17.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5bb1d0dbf99411f3d871deb6faa9aabb9d4e744d67dcaaa05399af89d847a91d"},
{file = "wrapt-1.17.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d18a4865f46b8579d44e4fe1e2bcbc6472ad83d98e22a26c963d46e4c125ef0b"},
{file = "wrapt-1.17.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc570b5f14a79734437cb7b0500376b6b791153314986074486e0b0fa8d71d98"},
{file = "wrapt-1.17.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6d9187b01bebc3875bac9b087948a2bccefe464a7d8f627cf6e48b1bbae30f82"},
{file = "wrapt-1.17.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:9e8659775f1adf02eb1e6f109751268e493c73716ca5761f8acb695e52a756ae"},
{file = "wrapt-1.17.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e8b2816ebef96d83657b56306152a93909a83f23994f4b30ad4573b00bd11bb9"},
{file = "wrapt-1.17.2-cp312-cp312-win32.whl", hash = "sha256:468090021f391fe0056ad3e807e3d9034e0fd01adcd3bdfba977b6fdf4213ea9"},
{file = "wrapt-1.17.2-cp312-cp312-win_amd64.whl", hash = "sha256:ec89ed91f2fa8e3f52ae53cd3cf640d6feff92ba90d62236a81e4e563ac0e991"},
{file = "wrapt-1.17.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6ed6ffac43aecfe6d86ec5b74b06a5be33d5bb9243d055141e8cabb12aa08125"},
{file = "wrapt-1.17.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:35621ae4c00e056adb0009f8e86e28eb4a41a4bfa8f9bfa9fca7d343fe94f998"},
{file = "wrapt-1.17.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a604bf7a053f8362d27eb9fefd2097f82600b856d5abe996d623babd067b1ab5"},
{file = "wrapt-1.17.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cbabee4f083b6b4cd282f5b817a867cf0b1028c54d445b7ec7cfe6505057cf8"},
{file = "wrapt-1.17.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:49703ce2ddc220df165bd2962f8e03b84c89fee2d65e1c24a7defff6f988f4d6"},
{file = "wrapt-1.17.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8112e52c5822fc4253f3901b676c55ddf288614dc7011634e2719718eaa187dc"},
{file = "wrapt-1.17.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:9fee687dce376205d9a494e9c121e27183b2a3df18037f89d69bd7b35bcf59e2"},
{file = "wrapt-1.17.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:18983c537e04d11cf027fbb60a1e8dfd5190e2b60cc27bc0808e653e7b218d1b"},
{file = "wrapt-1.17.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:703919b1633412ab54bcf920ab388735832fdcb9f9a00ae49387f0fe67dad504"},
{file = "wrapt-1.17.2-cp313-cp313-win32.whl", hash = "sha256:abbb9e76177c35d4e8568e58650aa6926040d6a9f6f03435b7a522bf1c487f9a"},
{file = "wrapt-1.17.2-cp313-cp313-win_amd64.whl", hash = "sha256:69606d7bb691b50a4240ce6b22ebb319c1cfb164e5f6569835058196e0f3a845"},
{file = "wrapt-1.17.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:4a721d3c943dae44f8e243b380cb645a709ba5bd35d3ad27bc2ed947e9c68192"},
{file = "wrapt-1.17.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:766d8bbefcb9e00c3ac3b000d9acc51f1b399513f44d77dfe0eb026ad7c9a19b"},
{file = "wrapt-1.17.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e496a8ce2c256da1eb98bd15803a79bee00fc351f5dfb9ea82594a3f058309e0"},
{file = "wrapt-1.17.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40d615e4fe22f4ad3528448c193b218e077656ca9ccb22ce2cb20db730f8d306"},
{file = "wrapt-1.17.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a5aaeff38654462bc4b09023918b7f21790efb807f54c000a39d41d69cf552cb"},
{file = "wrapt-1.17.2-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a7d15bbd2bc99e92e39f49a04653062ee6085c0e18b3b7512a4f2fe91f2d681"},
{file = "wrapt-1.17.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:e3890b508a23299083e065f435a492b5435eba6e304a7114d2f919d400888cc6"},
{file = "wrapt-1.17.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:8c8b293cd65ad716d13d8dd3624e42e5a19cc2a2f1acc74b30c2c13f15cb61a6"},
{file = "wrapt-1.17.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4c82b8785d98cdd9fed4cac84d765d234ed3251bd6afe34cb7ac523cb93e8b4f"},
{file = "wrapt-1.17.2-cp313-cp313t-win32.whl", hash = "sha256:13e6afb7fe71fe7485a4550a8844cc9ffbe263c0f1a1eea569bc7091d4898555"},
{file = "wrapt-1.17.2-cp313-cp313t-win_amd64.whl", hash = "sha256:eaf675418ed6b3b31c7a989fd007fa7c3be66ce14e5c3b27336383604c9da85c"},
{file = "wrapt-1.17.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5c803c401ea1c1c18de70a06a6f79fcc9c5acfc79133e9869e730ad7f8ad8ef9"},
{file = "wrapt-1.17.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f917c1180fdb8623c2b75a99192f4025e412597c50b2ac870f156de8fb101119"},
{file = "wrapt-1.17.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:ecc840861360ba9d176d413a5489b9a0aff6d6303d7e733e2c4623cfa26904a6"},
{file = "wrapt-1.17.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb87745b2e6dc56361bfde481d5a378dc314b252a98d7dd19a651a3fa58f24a9"},
{file = "wrapt-1.17.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58455b79ec2661c3600e65c0a716955adc2410f7383755d537584b0de41b1d8a"},
{file = "wrapt-1.17.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b4e42a40a5e164cbfdb7b386c966a588b1047558a990981ace551ed7e12ca9c2"},
{file = "wrapt-1.17.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:91bd7d1773e64019f9288b7a5101f3ae50d3d8e6b1de7edee9c2ccc1d32f0c0a"},
{file = "wrapt-1.17.2-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:bb90fb8bda722a1b9d48ac1e6c38f923ea757b3baf8ebd0c82e09c5c1a0e7a04"},
{file = "wrapt-1.17.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:08e7ce672e35efa54c5024936e559469436f8b8096253404faeb54d2a878416f"},
{file = "wrapt-1.17.2-cp38-cp38-win32.whl", hash = "sha256:410a92fefd2e0e10d26210e1dfb4a876ddaf8439ef60d6434f21ef8d87efc5b7"},
{file = "wrapt-1.17.2-cp38-cp38-win_amd64.whl", hash = "sha256:95c658736ec15602da0ed73f312d410117723914a5c91a14ee4cdd72f1d790b3"},
{file = "wrapt-1.17.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99039fa9e6306880572915728d7f6c24a86ec57b0a83f6b2491e1d8ab0235b9a"},
{file = "wrapt-1.17.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2696993ee1eebd20b8e4ee4356483c4cb696066ddc24bd70bcbb80fa56ff9061"},
{file = "wrapt-1.17.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:612dff5db80beef9e649c6d803a8d50c409082f1fedc9dbcdfde2983b2025b82"},
{file = "wrapt-1.17.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62c2caa1585c82b3f7a7ab56afef7b3602021d6da34fbc1cf234ff139fed3cd9"},
{file = "wrapt-1.17.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c958bcfd59bacc2d0249dcfe575e71da54f9dcf4a8bdf89c4cb9a68a1170d73f"},
{file = "wrapt-1.17.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc78a84e2dfbc27afe4b2bd7c80c8db9bca75cc5b85df52bfe634596a1da846b"},
{file = "wrapt-1.17.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:ba0f0eb61ef00ea10e00eb53a9129501f52385c44853dbd6c4ad3f403603083f"},
{file = "wrapt-1.17.2-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1e1fe0e6ab7775fd842bc39e86f6dcfc4507ab0ffe206093e76d61cde37225c8"},
{file = "wrapt-1.17.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c86563182421896d73858e08e1db93afdd2b947a70064b813d515d66549e15f9"},
{file = "wrapt-1.17.2-cp39-cp39-win32.whl", hash = "sha256:f393cda562f79828f38a819f4788641ac7c4085f30f1ce1a68672baa686482bb"},
{file = "wrapt-1.17.2-cp39-cp39-win_amd64.whl", hash = "sha256:36ccae62f64235cf8ddb682073a60519426fdd4725524ae38874adf72b5f2aeb"},
{file = "wrapt-1.17.2-py3-none-any.whl", hash = "sha256:b18f2d1533a71f069c7f82d524a52599053d4c7166e9dd374ae2136b7f40f7c8"},
{file = "wrapt-1.17.2.tar.gz", hash = "sha256:41388e9d4d1522446fe79d3213196bd9e3b301a336965b9e27ca2788ebd122f3"},
]

[metadata]
lock-version = "2.1"
python-versions = "^3.13"
content-hash = "678b22f11117e1b73abfc5359fc144a7c0eeb8c67ed7fb8ef1a66d6587b47232"

@@ -0,0 +1,48 @@
[tool.poetry]
name = "integration"
version = "0.1.0"
description = ""
authors = ["grandwizard28 <vibhupandey28@gmail.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.13"
pytest = "^8.3.5"
psycopg2 = "^2.9.10"
testcontainers = "^4.10.0"
black = "^25.1.0"
clickhouse-driver = "^0.2.9"
requests = "^2.32.3"
wiremock = "^2.6.1"

[tool.poetry.group.dev.dependencies]
pylint = "^3.3.6"
isort = "^6.0.1"
autoflake = "^2.3.1"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

[tool.pytest.ini_options]
python_files = "src/**/**.py"

[tool.pylint.main]
ignore = [".venv"]

[tool.pylint.format]
max-line-length = "88"

[tool.pylint."messages control"]
disable = ["missing-module-docstring", "missing-function-docstring", "missing-class-docstring"]

[tool.isort]
profile = "black"

[tool.autoflake]
recursive = true
remove-all-unused-imports = true
remove-unused-variables = true
exclude = [".venv/**"]
in-place = true
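
A note for readers of the test files that follow: the `[tool.pytest.ini_options]` entry `python_files = "src/**/**.py"` tells pytest to collect every Python module under `src/` as a test module, so the tests below do not need `test_*.py` file names. A purely illustrative layout under that assumption (the actual directory names are not shown in this diff):

integration/
  pyproject.toml
  src/
    bootstrap/    # e.g. the database and registration tests below (paths assumed)
    license/      # e.g. the license-application test below (path assumed)
  fixtures/       # shared pytest fixtures such as types.SigNoz (assumed)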

@@ -0,0 +1,17 @@
from clickhouse_driver.dbapi.cursor import Cursor

from fixtures import types


def test_telemetry_databases(signoz: types.SigNoz) -> None:
    cursor = signoz.telemetrystore.conn.cursor()
    assert isinstance(cursor, Cursor)

    cursor.execute("SHOW DATABASES")
    records = cursor.fetchall()

    assert any("signoz_metrics" in record for record in records)
    assert any("signoz_logs" in record for record in records)
    assert any("signoz_traces" in record for record in records)
    assert any("signoz_metadata" in record for record in records)
    assert any("signoz_analytics" in record for record in records)
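
These tests depend on fixtures that are not part of this diff. A minimal sketch of the shapes they imply, with every name inferred from usage (signoz.telemetrystore.conn, signoz.self.host_config.get(...), signoz.zeus.container) rather than taken from the real fixtures/types.py:

from urllib.parse import urljoin


class HostConfig:
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url  # e.g. "http://localhost:8080" (assumed)

    def get(self, path: str) -> str:
        # Resolve a relative API path against the service's base URL.
        return urljoin(self.base_url, path)


class SigNoz:
    def __init__(self, server, telemetrystore, zeus) -> None:
        # The tests read an attribute literally named "self"
        # (signoz.self.host_config), hence this unusual assignment.
        self.self = server
        self.telemetrystore = telemetrystore  # exposes a DB-API connection as .conn
        self.zeus = zeus  # the mocked licensing backend (a WireMock container)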

@@ -0,0 +1,30 @@
from http import HTTPStatus

import requests

from fixtures import types


def test_register(signoz: types.SigNoz) -> None:
    response = requests.get(signoz.self.host_config.get("/api/v1/version"), timeout=2)

    assert response.status_code == HTTPStatus.OK
    assert response.json()["setupCompleted"] is False

    response = requests.post(
        signoz.self.host_config.get("/api/v1/register"),
        json={
            "name": "admin",
            "orgId": "",
            "orgName": "",
            "email": "admin@admin.com",
            "password": "password",
        },
        timeout=2,
    )

    assert response.status_code == HTTPStatus.OK

    response = requests.get(signoz.self.host_config.get("/api/v1/version"), timeout=2)
    assert response.status_code == HTTPStatus.OK
    assert response.json()["setupCompleted"] is True
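
The license test further down calls a get_jwt_token fixture that is also not shown in this diff. A hedged sketch of what it plausibly does, reusing the credentials registered above; the endpoint path and the response field name are assumptions, and the real fixture is likely a closure over the signoz fixture rather than taking it as an argument:

import requests


def get_jwt_token(signoz, email: str, password: str) -> str:
    response = requests.post(
        signoz.self.host_config.get("/api/v1/login"),  # assumed endpoint
        json={"email": email, "password": password},
        timeout=2,
    )
    response.raise_for_status()
    return response.json()["accessJwt"]  # field name is an assumption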

@@ -0,0 +1,71 @@
import http

import requests
from wiremock.client import (
    HttpMethods,
    Mapping,
    MappingRequest,
    MappingResponse,
    WireMockMatchers,
)

from fixtures.types import SigNoz


def test_apply_license(signoz: SigNoz, make_http_mocks, get_jwt_token) -> None:
    make_http_mocks(
        signoz.zeus.container,
        [
            Mapping(
                request=MappingRequest(
                    method=HttpMethods.GET,
                    url="/v2/licenses/me",
                    headers={
                        "X-Signoz-Cloud-Api-Key": {
                            WireMockMatchers.EQUAL_TO: "secret-key"
                        }
                    },
                ),
                response=MappingResponse(
                    status=200,
                    json_body={
                        "status": "success",
                        "data": {
                            "id": "0196360e-90cd-7a74-8313-1aa815ce2a67",
                            "key": "secret-key",
                            "valid_from": 1732146923,
                            "valid_until": -1,
                            "status": "VALID",
                            "state": "EVALUATING",
                            "plan": {
                                "name": "ENTERPRISE",
                            },
                            "platform": "CLOUD",
                            "features": [],
                            "event_queue": {},
                        },
                    },
                ),
                persistent=False,
            )
        ],
    )

    access_token = get_jwt_token("admin@admin.com", "password")

    response = requests.post(
        url=signoz.self.host_config.get("/api/v3/licenses"),
        json={"key": "secret-key"},
        headers={"Authorization": "Bearer " + access_token},
        timeout=5,
    )

    assert response.status_code == http.HTTPStatus.ACCEPTED

    response = requests.post(
        url=signoz.zeus.host_config.get("/__admin/requests/count"),
        json={"method": "GET", "url": "/v2/licenses/me"},
        timeout=5,
    )

    assert response.json()["count"] >= 1
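
The closing POST goes straight to WireMock's admin API: /__admin/requests/count returns {"count": N} for requests matching the given criteria, which verifies that SigNoz actually polled the mocked licensing backend rather than just accepting the key. The make_http_mocks fixture itself is not in this diff; a plausible sketch, assuming the WireMock container listens on its default port 8080 and that the SDK's module-level admin Config is used:

from wiremock.client import Mapping
from wiremock.constants import Config
from wiremock.resources.mappings.resource import Mappings


def http_mocks(container, mappings: list[Mapping]) -> None:
    # Point the SDK at the container's admin endpoint; host/port lookup uses
    # standard testcontainers accessors.
    host = container.get_container_host_ip()
    port = container.get_exposed_port(8080)  # WireMock's default port (assumed here)
    Config.base_url = f"http://{host}:{port}/__admin"
    for mapping in mappings:
        Mappings.create_mapping(mapping=mapping)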