Compare commits

...

234 Commits

Author SHA1 Message Date
vikrantgupta25
3fe6aa9fdf feat(cache): add deprecated flag 2025-05-01 12:39:17 +05:30
vikrantgupta25
caaab625cc feat(cache): cleanup the interface pt1 2025-05-01 01:25:59 +05:30
vikrantgupta25
8160e1a499 feat(cache): add query_cache tests 2025-05-01 00:08:20 +05:30
Vikrant Gupta
fcf633b397 Merge branch 'main' into feat/multi-tenant-cache 2025-04-30 23:39:24 +05:30
vikrantgupta25
ef36f1e84a feat(cache): remove the old cache implementation and usages 2025-04-30 23:38:31 +05:30
Vibhu Pandey
73f57d8bee chore(codeowners): add codeowners for sqlmigration (#7779)
### Summary

- add codeowners for sqlmigration
2025-04-30 08:41:31 +00:00
Prashant Shahi
ab17bf3558 ci(build): include USERPILOT_KEY FE envs (#7777)
### Summary

- include USERPILOT_KEY FE envs in the build workflows

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-30 13:32:19 +05:30
primus-bot[bot]
eb5a1b76b8 chore(release): bump to v0.81.0 (#7776)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-30 12:19:26 +05:30
Shivanshu Raj Shrivastava
130ff925bd feat: adds error toggle in top error page (#7773)
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-04-30 11:05:30 +05:30
Sahil Khan
75d86cea60 fix: api monitoring cosmetic changes (#7771)
fix: minor changes
2025-04-29 21:26:07 +00:00
CheetoDa
cf451d335c feat: added new datasources (#7769) 2025-04-29 22:05:34 +05:30
Yunus M
e47c7cc17b feat: initialize sentry only once (#7768) 2025-04-29 16:01:28 +00:00
Srikanth Chekuri
629c54d3f9 fix: nil pointer error on failed to create rule (#7767) 2025-04-29 15:01:31 +00:00
sawhil
ed3026eeb5 fix: removed unused file 2025-04-29 20:21:12 +05:30
Sahil Khan
ccf26883c4 chore: api monitoring tests (#7750)
* feat: added url sharing for main domain list page api monitoring

* feat: added Shivanshu's suggestions in qb payloads for spanid and kind string client filter

* fix: limited the endpoints table limit to 1000

* feat: date picker in domain details drawer

* feat: added top errors tab in domain details

* fix: removed console logs

* feat: new dep services top 10 errors localised date picker aggregate domain details etc

* feat: added domain level and endpoint level stats

* feat: added custom cell rendering in gridcard, added new table view in all endpoints

* feat: added port column in endpoints table

* feat: added custom title handling in gridtablecomponent

* fix: fixed the traces correlation query for status code bar charts

* feat: added zoom functionality on domain details charts

* chore: add constants for standardisation

Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>

* chore: add constants for standardisation in the API

Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>

* feat: add tooltip to Endpoint Overview

Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>

* feat: api monitoring feedback till 28th april

* feat: added top errors to traces correlation

* feat: added new rate col to status code table

* feat: custom color mapping for uplot tooltip implemented

* chore: added ApiMonitoringPage.test

* chore: added uts for all endpoints, top errors and their utils

* fix: minor fix

* chore: moved test files to proper folder

* chore: added endpoint details uts and its imported utils ut

* chore: added endpoint dropdown uts and its imported utils ut

* chore: added endpoint metrics uts and its imported utils ut

* chore: added dependent services uts and its imported utils ut

* chore: added status code bar chart uts and its imported utils ut

* chore: added status code table uts and its imported utils ut
2025-04-29 20:21:12 +05:30
sawhil
958924befe feat: custom color mapping for uplot tooltip implemented 2025-04-29 20:21:12 +05:30
sawhil
b70c570cdc feat: added new rate col to status code table 2025-04-29 20:21:12 +05:30
sawhil
42a026469b feat: added top errors to traces correlation 2025-04-29 20:21:12 +05:30
sawhil
6de0908a62 feat: api monitoring feedback till 28th april 2025-04-29 20:21:12 +05:30
Shivanshu Raj Shrivastava
fd21a4955e feat: add tooltip to Endpoint Overview
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-04-29 20:21:12 +05:30
Shivanshu Raj Shrivastava
3dce13d29f chore: add constants for standardisation in the API
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-04-29 20:21:12 +05:30
Shivanshu Raj Shrivastava
2ce4b60c55 chore: add constants for standardisation
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-04-29 20:21:12 +05:30
sawhil
c9888804cd feat: added zoom functionality on domain details charts 2025-04-29 20:21:12 +05:30
sawhil
413b0d9fae fix: fixed the traces correlation query for status code bar charts 2025-04-29 20:21:12 +05:30
sawhil
b24095236f feat: added custom title handling in gridtablecomponent 2025-04-29 20:21:12 +05:30
sawhil
21d239ce68 feat: added port column in endpoints table 2025-04-29 20:21:12 +05:30
sawhil
d6e4e3c5ed feat: added custom cell rendering in gridcard, added new table view in all endpoints 2025-04-29 20:21:12 +05:30
sawhil
552b103e8b feat: added domain level and endpoint level stats 2025-04-29 20:21:12 +05:30
sawhil
1123a9a93d feat: new dep services top 10 errors localised date picker aggregate domain details etc 2025-04-29 20:21:12 +05:30
sawhil
8b30e3cc5c fix: removed console logs 2025-04-29 20:21:12 +05:30
sawhil
b86e65d2ca feat: added top errors tab in domain details 2025-04-29 20:21:12 +05:30
sawhil
d5e2841083 feat: date picker in domain details drawer 2025-04-29 20:21:12 +05:30
sawhil
7dad5dcd17 fix: limited the endpoints table limit to 1000 2025-04-29 20:21:12 +05:30
sawhil
ac0b640146 feat: added Shivanshu's suggestions in qb payloads for spanid and kind string client filter 2025-04-29 20:21:12 +05:30
sawhil
e125d146b5 feat: added url sharing for main domain list page api monitoring 2025-04-29 20:21:12 +05:30
sawhil
a41ffceca4 fix: changed the error percentage calculation 2025-04-29 20:21:12 +05:30
sawhil
7edb047c0c fix: added support for group by sorting in endpoints table 2025-04-29 20:21:12 +05:30
sawhil
6504f2565b fix: fixed last seen sorting in endpoint table 2025-04-29 20:21:12 +05:30
sawhil
6b418a125b fix: changed error rate to error percentage 2025-04-29 20:21:12 +05:30
sawhil
36827a1667 fix: added fallback for undefined data and added support for sorting 2025-04-29 20:21:12 +05:30
sawhil
1118c56356 feat: new dep. services table added 2025-04-29 20:21:12 +05:30
sawhil
bd071e3e60 feat: added new queries to handle error rates of endpoints 2025-04-29 20:21:12 +05:30
sawhil
36f3a2e26d feat: added sorting and error rate to endpoints table 2025-04-29 20:21:12 +05:30
sawhil
fee7e96176 feat: added sorting in domains list page 2025-04-29 20:21:12 +05:30
Gabber235
ef4e3a30fb fix: black gap on cloud in sidebar (#7383) (#7427)
Co-authored-by: Vikrant Gupta <vikrant@signoz.io>
2025-04-28 20:32:32 +00:00
Vikrant Gupta
39532d5da0 fix(zeus): build pipelines LD flags (#7754) 2025-04-28 19:40:22 +00:00
Vishal Sharma
4d216bae4d feat: init userpilot (#7579) 2025-04-28 18:42:14 +00:00
Vikrant Gupta
21563914c7 fix(ruler): telemetry for rules (#7751)
### Summary

- fix the telemetry for rules and notification channels 
- not adding bun as this needs to go away soon
2025-04-28 23:07:28 +05:30
Vibhu Pandey
accb77f227 chore(use-*): remove use-new-traces-schema and use-new-logs-schema flags (#7741)
### Summary

remove use-new-traces-schema and use-new-logs-schema flags
2025-04-28 21:01:35 +05:30
Vibhu Pandey
e73e1bd078 feat(zeus): add zeus package (#7745)
* feat(zeus): add zeus package

* feat(signoz): add DI for zeus

* feat(zeus): integrate with the codebase

* ci(make): change makefile

* ci: change workflows to point to the new zeus url

* Update ee/query-service/usage/manager.go

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update ee/query-service/license/manager.go

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* fix: fix nil retriable

* fix: fix zeus DI

* fix: fix path of ldflag

* feat(zeus): added zeus integration tests

* feat(zeus): added zeus integration tests

* feat(zeus): format the pytest

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: vikrantgupta25 <vikrant.thomso@gmail.com>
Co-authored-by: Vikrant Gupta <vikrant@signoz.io>
2025-04-28 14:20:47 +00:00
Vikrant Gupta
940313d28b fix(organization): return display name instead of name for organization (#7747)
### Summary

- return display name instead of name for organization
2025-04-28 10:54:53 +00:00
Vibhu Pandey
9815ec7d81 chore: remove references to unused flags (#7739)
### Summary

remove references to unused flags
2025-04-28 09:27:26 +00:00
Vibhu Pandey
a7cad0f1a5 chore(conf): add clickhouse settings (#7743) 2025-04-27 14:26:36 +00:00
SagarRajput-7
a624b4758d chore: fix failing typecheck (#7742) 2025-04-27 19:51:05 +05:30
SagarRajput-7
ee5684b130 feat: added permission restriction for viewer for planned Maintenance (#7736)
* feat: added permission restriction for viewer for planned Maintenance

* feat: added test cases
2025-04-27 17:24:56 +05:30
SagarRajput-7
2f8da5957b feat: added custom single and multiselect components (#7497)
* feat: added new Select component for multi and single select

* feat: refactored code and added keyboard navigations in single select

* feat: different state handling in single select

* feat: updated the playground page

* feat: multi-select updates

* feat: fixed multiselect selection issues

* feat: multiselect cleanup

* feat: multiselect key navigation cleanup

* feat: added tokenization in multiselect

* feat: add on enter and handle duplicates

* feat: design update to the components

* feat: design update to the components

* feat: design update to the components

* feat: updated the playground page

* feat: edited playground data

* feat: edited styles

* feat: code cleanup

* feat: added shift + keys navigation and selection

* feat: improved styles and added darkmode styles

* feat: removed scroll bar hover style

* feat: added scroll bar on hover

* feat: added regex wrapper support

* feat: fixed right arrow navigation across chips

* feat: addressed all the single select feedbacks

* feat: addressed all the single select feedbacks

* feat: added only-all-toggle feat with ALL selection tag

* feat: remove clear, update footer info content and style and misc fixes

* feat: misc style fixes

* feat: added quotes exception to the multiselect tagging

* feat: removing demo page, and cleanup PR for reviews

* feat: resolved comments and refactoring

* feat: added test cases
2025-04-27 16:55:53 +05:30
dependabot[bot]
3f6f77d0e2 chore(deps): bump axios from 1.7.7 to 1.8.2 in /frontend (#7249)
Bumps [axios](https://github.com/axios/axios) from 1.7.7 to 1.8.2.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.7.7...v1.8.2)

---
updated-dependencies:
- dependency-name: axios
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-27 11:16:16 +00:00
Vibhu Pandey
5bceffbeaa fix: fix modules and handler (#7737)
* fix: fix modules and handler

* fix: fix sqlmigration package

* fix: fix other fmt issues

* fix: fix tests

* fix: fix tests
2025-04-27 16:38:34 +05:30
Vibhu Pandey
9e449e2858 feat(auth): drop group table (#7672)
### Summary

drop group table
2025-04-26 15:50:02 +05:30
Shivanshu Raj Shrivastava
b60588a749 chore: use count instead of count distinct (#7711)
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-04-25 21:42:05 +05:30
Vikrant Gupta
c322657666 feat(organization): schema changes for the organizations entity (#7684)
* feat(organization): add hname and alias for organization

* fix: boolean values are not shown in the list panel's column

* fix: moved logic to component level

* fix: added type

* fix: added test cases

* fix: added test cases

* chore: update copy webpack plugin

* Revert "fix: display same key with multiple data types in filter suggestions by enhancing the deduping logic (#7255)"

This reverts commit 1e85981a17.

* fix: use query search v2 for traces data source to handle multiple data types for the same key

* fix(QueryBuilderSearchV2): add user typed option if it doesn't exist in the payload

* fix(QueryBuilderSearchV2): increase the height of search dropdown for non-logs data sources

* fix: display span scope selector for trace data source

* chore: remove the span scope selector from qb search v1 and move the component to search v2

* fix: write test to ensure that we display span scope selector for traces data source

* fix: limit converting  ->   only to log data source

* fix: don't display empty suggestion if only spaces are typed

* chore: tests for span scope selector

* chore: qb search flow (key, operator, value) test cases

* refactor: fix the Maximum update depth reached issue while running tests

* chore: overall improvements to span scope selector tests

Resource attr filter: style fix and quick filter changes (#7691)

* chore: resource attr filter init

* chore: resource attr filter api integration

* chore: operator config updated

* chore: filter show hide logic and styles

* chore: add support for custom operator list to qb

* chore: minor refactor

* chore: minor code refactor

* test: quick filters test suite added

* test: quick filters test suite added

* test: all errors test suite added

* chore: style fix

* test: all errors mock fix

* chore: test case fix and mixpanel update

* chore: color update

* chore: minor refactor

* chore: style fix

* chore: set default query in exceptions tab

* chore: style fix

* chore: minor refactor

* chore: minor refactor

* chore: minor refactor

* chore: test update

* chore: fix filter header with no query name

* fix: scroll fix

* chore: add data source traces to quick filters

* chore: replace div with fragment

---------

Co-authored-by: Aditya Singh <adityasingh@Adityas-MacBook-Pro.local>

fix: handle rate operators for table panel (#7695)

* fix: handle rate operators for table panel

chore: fix error rate (#7701)

Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>

* feat(organization): minor cleanups

* feat(organization): better naming for api and usecase

* feat(organization): better packaging for modules

* feat(organization): change hname to displayName

* feat(organization): update the migration to use dialect

* feat(organization): update the migration to use dialect

* feat(organization): update the migration to use dialect

* feat(organization): revert back to impl

* feat(organization): remove DI from organization

* feat(organization): address review comments

* feat(organization): address review comments

* feat(organization): address review comments

---------

Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-04-25 19:38:15 +05:30
CheetoDa
a1846c008a chore: added new datasources (#7659)
* chore: added new datasources

* chore: added integrations to json

* chore: added quickstart

* feat: handle internal redirects

* chore: update onboarding-config-with-links.json

* Update onboarding-config-with-links.json

---------

Co-authored-by: Yunus M <myounis.ar@live.com>
2025-04-24 02:08:59 +05:30
Amlan Kumar Nandy
a6824db622 fix: metric details something went wrong message (#7686) 2025-04-23 10:42:09 +00:00
Amlan Kumar Nandy
e6f69aa74c fix: query builder in metrics explorer picking up wrong datasource (#7676)
* fix: query builder in metrics explorer picking up wrong datasource

* chore: add UTs

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-04-23 10:36:33 +00:00
Srikanth Chekuri
a9c09f33cb chore: always add reserved vars (#7689) 2025-04-23 09:14:10 +00:00
Srikanth Chekuri
9eb2196617 chore: use attributes table for metric keys and values (#7680) 2025-04-23 09:05:56 +00:00
primus-bot[bot]
131759ec96 chore(release): bump to v0.80.0 (#7703)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-23 12:20:51 +05:30
Shivanshu Raj Shrivastava
365a3e250f chore: fix error rate (#7701)
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-04-22 20:26:37 +00:00
Nityananda Gohain
f3a1f3cc20 fix: handle rate operators for table panel (#7695)
* fix: handle rate operators for table panel
2025-04-22 19:09:08 +00:00
Aditya Singh
ae509b4ae9 Resource attr filter: style fix and quick filter changes (#7691)
* chore: resource attr filter init

* chore: resource attr filter api integration

* chore: operator config updated

* chore: filter show hide logic and styles

* chore: add support for custom operator list to qb

* chore: minor refactor

* chore: minor code refactor

* test: quick filters test suite added

* test: quick filters test suite added

* test: all errors test suite added

* chore: style fix

* test: all errors mock fix

* chore: test case fix and mixpanel update

* chore: color update

* chore: minor refactor

* chore: style fix

* chore: set default query in exceptions tab

* chore: style fix

* chore: minor refactor

* chore: minor refactor

* chore: minor refactor

* chore: test update

* chore: fix filter header with no query name

* fix: scroll fix

* chore: add data source traces to quick filters

* chore: replace div with fragment

---------

Co-authored-by: Aditya Singh <adityasingh@Adityas-MacBook-Pro.local>
2025-04-22 15:45:49 +00:00
Shaheer Kochai
43e2be0333 feat: use search v2 component for traces (#7537)
* Revert "fix: display same key with multiple data types in filter suggestions by enhancing the deduping logic (#7255)"

This reverts commit 1e85981a17.

* fix: use query search v2 for traces data source to handle multiple data types for the same key

* fix(QueryBuilderSearchV2): add user typed option if it doesn't exist in the payload

* fix(QueryBuilderSearchV2): increase the height of search dropdown for non-logs data sources

* fix: display span scope selector for trace data source

* chore: remove the span scope selector from qb search v1 and move the component to search v2

* fix: write test to ensure that we display span scope selector for traces data source

* fix: limit converting  ->   only to log data source

* fix: don't display empty suggestion if only spaces are typed

* chore: tests for span scope selector

* chore: qb search flow (key, operator, value) test cases

* refactor: fix the Maximum update depth reached issue while running tests

* chore: overall improvements to span scope selector tests
2025-04-22 15:24:03 +00:00
Yunus M
20a40b33ce chore: update copy webpack plugin (#7687)
* chore: update copy webpack plugin
2025-04-22 20:36:57 +05:30
sawhil
a9b07c4b47 feat: added test case for checking copying functionality 2025-04-22 15:18:38 +05:30
sawhil
2a5c7cc0ab feat: added test cases for copy span link functionality 2025-04-22 15:18:38 +05:30
sawhil
afb18b8142 fix: pr comments - used useSafeNavigate hook 2025-04-22 15:18:38 +05:30
sawhil
9a580915e6 fix: removed not required prop 2025-04-22 15:18:38 +05:30
sawhil
0944af3d31 feat: added copy span link support alongside span click expand in waterfall graph 2025-04-22 15:18:38 +05:30
SagarRajput-7
9338efcefc fix: boolean values are not shown in the list panel's column (#7668)
* fix: boolean values are not shown in the list panel's column

* fix: moved logic to component level

* fix: added type

* fix: added test cases

* fix: added test cases
2025-04-22 12:03:28 +05:30
sawhil
6b9e0ce799 fix: minor comment update 2025-04-21 12:45:04 +05:30
sawhil
d4c3c24849 feat: added test cases 2025-04-21 12:45:04 +05:30
sawhil
30d935a768 fix: used existing constant 2025-04-21 12:45:04 +05:30
sawhil
073d42c416 fix: removed timestamp and id from being passed to query from log details drawer 2025-04-21 12:45:04 +05:30
Aditya Singh
f11b9644cf Introduce new Resource Attribute Filter in exceptions tab (#7589)
* chore: resource attr filter init

* chore: resource attr filter api integration

* chore: operator config updated

* chore: filter show hide logic and styles

* chore: add support for custom operator list to qb

* chore: minor refactor

* chore: minor code refactor

* test: quick filters test suite added

* test: quick filters test suite added

* test: all errors test suite added

* chore: style fix

* test: all errors mock fix

* chore: test case fix and mixpanel update

* chore: color update

* chore: minor refactor

* chore: style fix

* chore: set default query in exceptions tab

* chore: style fix

* chore: minor refactor

* chore: minor refactor

* chore: minor refactor

* chore: test update

* chore: fix filter header with no query name

---------

Co-authored-by: Aditya Singh <adityasingh@Adityas-MacBook-Pro.local>
2025-04-21 05:41:05 +00:00
Yunus M
87922e9577 feat: show notification for alert rule ID logic change (#7673)
* feat: show notification for alert rule ID logic change

* Update frontend/src/container/Home/Home.tsx

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>

---------

Co-authored-by: Vikrant Gupta <vikrant@signoz.io>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-04-19 14:25:07 +05:30
Prashant Shahi
8412727414 chore: deprecate stagingapp/testingapp (#7649)
### Summary

- deprecate stagingapp/testingapp workflows
- remove `docker-compose.testing.yaml`

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-18 15:31:52 +00:00
Srikanth Chekuri
f0a95503d9 chore: add API endpoints for fields keys and values (#7560) 2025-04-18 13:33:17 +00:00
Vikrant Gupta
16e0fa2eef feat(ruler): update the ruler and planned maintenance tables (#7535)
* feat(ruler): base setup for rules and planned maintenance tables

* feat(ruler): more changes for making ruler org aware

* feat(ruler): fix lint

* feat(ruler): update the edit planned maintenance function

* feat(ruler): local testing edits for planned maintenance

* feat(ruler): abstract store and types from rules pkg

* feat(ruler): abstract store and types from rules pkg

* feat(ruler): abstract out store and add migration

* feat(ruler): frontend changes and review comments

* feat(ruler): add back compareAndSelectConfig

* feat(ruler): changes for alertmanager matchers

* feat(ruler): addressed review comments

* feat(ruler): remove the cascade operations from rules table

* feat(ruler): update the template for alertmanager

* feat(ruler): implement the rule history changes

* feat(ruler): implement the rule history changes

* feat(ruler): implement the rule history changes

---------

Co-authored-by: Vibhu Pandey <vibhupandey28@gmail.com>
2025-04-18 00:04:25 +05:30
Yunus M
2fa944d254 feat: update callback url for billing (#7667) 2025-04-17 20:21:14 +05:30
Srikanth Chekuri
b0d19035a4 chore: add where clause visitor implementation for query expression (#7564) 2025-04-17 15:54:36 +05:30
Nityananda Gohain
054dea366e fix: proper formatting of floating point (#7653)
* fix: proper formatting of floating point

* fix: old trace

* fix: add tests

* fix: use strconv.FormatFloat instead
2025-04-17 06:02:41 +00:00
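The commit above only names the change in its bullets; as an illustrative Go sketch (not the actual SigNoz diff) of why strconv.FormatFloat is preferred over default %v formatting for floating-point values:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	v := 1234567.891

	// %v uses %g for floats, which switches to scientific notation
	// once the exponent gets large enough.
	fmt.Printf("%v\n", v) // 1.234567891e+06

	// FormatFloat with 'f' and precision -1 keeps plain decimal notation
	// using the shortest digits that still round-trip exactly.
	fmt.Println(strconv.FormatFloat(v, 'f', -1, 64)) // 1234567.891
}
```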
primus-bot[bot]
aaf0b597dc chore(release): bump to v0.79.1 (#7655)
#### Summary
 - Release SigNoz v0.79.1

 Created by [Primus-Bot](https://github.com/apps/primus-bot)

Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-17 00:02:05 +05:30
Prashant Shahi
19372c8194 ci(build): use unique cache key for the internal/public builds (#7654)
### Summary

- unique cache keys for the internal/public builds

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-16 23:37:05 +05:30
Vibhu Pandey
eb74adad44 test(integration): set the base for integration tests (#7606)
* test(integration): set the base for integration tests

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test

* ci: add ci pipeline for integration test
2025-04-16 18:54:05 +05:30
Srikanth Chekuri
d5c04e1342 chore: log original query failed to transform (#7641) 2025-04-16 14:40:54 +05:30
primus-bot[bot]
2b9632c8fd chore(release): bump to v0.79.0 (#7643)
#### Summary
 - Release SigNoz v0.79.0
 - Bump SigNoz OTel Collector to v0.111.39

 Created by [Primus-Bot](https://github.com/apps/primus-bot)

Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-16 13:36:31 +05:30
Prashant Shahi
24920ae903 chore(prereleaser): update cron schedule - 6:30AM UTC (#7640)
### Summary

- update prereleaser cron schedule to 6:30AM UTC

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-16 07:20:47 +00:00
Prashant Shahi
6f096632a2 chore(build-staging): only include telemetry tunnel FE envs (#7637)
### Summary

- only include telemetry tunnel FE environment variables for the staging build

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-16 12:38:30 +05:30
Piyush Singariya
a42eacec4b chore: enhancing JSON Parser handling (#7591)
* feat: enhancing JSON Parser handling

* fix: updating collector version

* chore: updating go.mod reference for Collector

---------

Co-authored-by: Nityananda Gohain <nityanandagohain@gmail.com>
2025-04-16 11:24:59 +05:30
Nityananda Gohain
e723399f7f fix: add check for empty services (#7611) 2025-04-15 16:39:33 +00:00
Nityananda Gohain
48936bed9b chore: multitenancy in integrations (#7507)
* chore: multitenancy in integrations

* chore: multitenancy in cloud integration accounts

* chore: changes to cloudintegrationservice

* chore: rename migration

* chore: update scan function

* chore: update scan function

* chore: fix migration

* chore: fix struct

* chore: remove unwanted code

* chore: update scan function

* chore: migrate user and pat for integrations

* fix: changes to the user for integrations

* fix: address comments

* fix: copy created_at

* fix: update non revoked token

* chore: don't allow deleting pat and user for integrations

* fix: address comments

* chore: address comments

* chore: add checks for fk in dialect

* fix: service migration

* fix: don't update user if user is already migrated

* fix: update correct service config

* fix: remove unwanted code

* fix: remove migration for multiple same services which is not required

* fix: fix migration and disable dashboard if metrics disabled

* fix: don't use ee types

---------

Co-authored-by: Vikrant Gupta <vikrant@signoz.io>
2025-04-15 15:35:36 +00:00
Srikanth Chekuri
ee70474cc7 fix: missing receivers in json payload for legacy postableAlert (#7603) 2025-04-14 13:20:39 +00:00
Srikanth Chekuri
c3fa7144ee chore: add tag type filter support in attribute keys (#7522) 2025-04-14 18:43:15 +05:30
Nityananda Gohain
5dd02a5b8e fix: remove unnecessary code for email domain check error (#7566)
* fix: proper check for emailComponents

* fix: correct error handling
2025-04-14 11:15:21 +05:30
Srikanth Chekuri
c0f01e4cb9 chore: add metadatastore implementation for logs and traces (#7559)
* chore: add metadatastore implementation for logs and traces

* chore: use telemetrystore mock
2025-04-11 19:41:02 +05:30
Srikanth Chekuri
fed84cb50a chore: add condition builder attributes metadata (#7558) 2025-04-11 16:20:27 +05:30
Srikanth Chekuri
80545c4d07 chore: add materialized field extractor from table schema (#7557) 2025-04-11 15:53:55 +05:30
Srikanth Chekuri
0b1faec092 chore: add condition builder for span index v3 (#7556) 2025-04-11 15:13:04 +05:30
Srikanth Chekuri
ba6f31b1c3 chore: add virtual fields table (#7586) 2025-04-11 07:36:31 +05:30
Srikanth Chekuri
eed92978a4 chore: add non-json condition builder for logs v2 (#7555) 2025-04-10 18:23:01 +00:00
Prashant Shahi
41cbd316b5 Feat/staging (#7585)
### Summary

- Non-production build workflow using Primus
- Staging CD: new staging app and dev staging deployments
- cleanup used docker resources in stagingapp/testingapp machines

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-10 17:46:13 +05:30
Vishal Sharma
8d7d33393d feat: include tenant_url in event attributes for logging (#7582) 2025-04-10 15:17:14 +05:30
sawhil
8d143b44b1 feat: removed ff for tp-api-monitoring from fe - 1 2025-04-09 15:44:42 +05:30
sawhil
423aebd6eb feat: removed ff for tp-api-monitoring from fe 2025-04-09 15:44:42 +05:30
primus-bot[bot]
8d630707af chore(release): bump to v0.78.1 (#7573)
#### Summary
 - Release SigNoz v0.78.1

 Created by [Primus-Bot](https://github.com/apps/primus-bot)

Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-09 15:26:04 +05:30
Prashant Shahi
a5b52431b7 chore(build): include missing LDFLAGS in the community/enterprise builds (#7571)
### Summary

- include missing LDFLAGS in the community/enterprise builds

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-09 15:10:49 +05:30
primus-bot[bot]
0138d757c8 chore(release): bump to v0.78.0 (#7568)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2025-04-09 12:28:18 +05:30
SagarRajput-7
844195b84f Revert "fix: reevaluated newdashboard bool to ensure that widget doesn't exis…" (#7567)
This reverts commit e53d3d1269.
2025-04-09 10:38:31 +05:30
Srikanth Chekuri
8ff05b2e8f chore: add field type definitions for qb v5 (#7552) 2025-04-08 22:34:58 +05:30
Srikanth Chekuri
c8c56c544e chore: add generated parser files for go (#7538) 2025-04-08 13:52:40 +00:00
Vikrant Gupta
1c43655336 fix(ff): alert creation disabled for non metric alerts (#7561) 2025-04-08 13:21:59 +00:00
Prashant Shahi
c269c8c6b8 feat: publish signoz to multiple registry using primus (#7504)
### Summary

- publish signoz images to multiple registry using primus
- deprecate old build workflow

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-04-08 18:06:22 +05:30
Vibhu Pandey
3142b6cc6d chore(ff): remove some more unused ffs (#7532)
### Summary

remove unused feature flags
- ENTERPRISE_PLAN = 'ENTERPRISE_PLAN',
- BASIC_PLAN = 'BASIC_PLAN',
- ALERT_CHANNEL_SLACK = 'ALERT_CHANNEL_SLACK',
- ALERT_CHANNEL_WEBHOOK = 'ALERT_CHANNEL_WEBHOOK',
- ALERT_CHANNEL_PAGERDUTY = 'ALERT_CHANNEL_PAGERDUTY',
- ALERT_CHANNEL_OPSGENIE = 'ALERT_CHANNEL_OPSGENIE',
- ALERT_CHANNEL_MSTEAMS = 'ALERT_CHANNEL_MSTEAMS',
- CUSTOM_METRICS_FUNCTION = 'CUSTOM_METRICS_FUNCTION',
- QUERY_BUILDER_PANELS = 'QUERY_BUILDER_PANELS',
- QUERY_BUILDER_ALERTS = 'QUERY_BUILDER_ALERTS',
- DISABLE_UPSELL = 'DISABLE_UPSELL',
- OSS = 'OSS',
- QUERY_BUILDER_SEARCH_V2 = 'QUERY_BUILDER_SEARCH_V2',
- AWS_INTEGRATION = 'AWS_INTEGRATION',

remove ProPlan concept
2025-04-07 22:58:46 +05:30
Vibhu Pandey
58e141685a revert(smart): add smart trace detail search back (#7547)
### Summary

- Added the trace search endpoint back
- Enabled the `smart` algorithm for all users
- Added `"algo":"smart"` for telemetry events in the old endpoint
2025-04-07 17:19:39 +00:00
Srikanth Chekuri
e17f63a50c chore: error log on query error (#7553)
* chore: error log or query error

* chore: no error on context cancel
2025-04-07 15:15:49 +00:00
Vibhu Pandey
838ef5dcc5 feat(valuer): add string valuer (#7554) 2025-04-07 20:36:04 +05:30
SagarRajput-7
e53d3d1269 fix: reevaluated newdashboard bool to ensure that widget doesn't exist, in case of a prior PUT call (#7525)
* fix: reevaluated newdashboard bool to ensure that widget doesn't exist, in case of a prior PUT call

* fix: resolved comment
2025-04-07 15:31:08 +05:30
Vibhu Pandey
2330420c0d fix(querier): remove ff (#7531) 2025-04-05 18:08:06 +00:00
Vibhu Pandey
65ac277074 fix(duration|timestamp): remove duration/timestamp sort (#7530) 2025-04-05 23:26:50 +05:30
Vibhu Pandey
b7982ca348 fix(ff): remove feature interface from ruler (#7529)
### Summary

remove feature interface from ruler
2025-04-05 12:52:26 +00:00
dependabot[bot]
2748b49a44 chore(deps): bump github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 (#7401)
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-04-05 01:43:43 +00:00
Vibhu Pandey
7345027762 fix(ff): remove prefer rpm (#7528) 2025-04-04 23:38:16 +05:30
Vibhu Pandey
68f874e433 chore(ff): remove unused SMART_TRACE_DETAIL feature flag (#7527) 2025-04-04 20:28:54 +05:30
Vibhu Pandey
54a82b1664 fix(dashboards): remove ff interface (#7526) 2025-04-04 19:12:31 +05:30
Yunus M
93dc585145 fix: disable sidenav items for cloud users whose license has expired (#7524) 2025-04-04 18:33:55 +05:30
Vikrant Gupta
6a143efd2c feat(sqlmigration): update the user related tables according to new schema (#7518)
* feat(sqlmigration): update the alertmanager tables

* feat(sqlmigration): update the alertmanager tables

* feat(sqlmigration): make the preference package multi tenant

* feat(preference): address nit pick comments

* feat(preference): added the cascade delete for preferences

* feat(sqlmigration): update apdex and TTL status tables  (#7481)

* feat(sqlmigration): update the apdex and ttl tables

* feat(sqlmigration): register the new migration and rename table

* feat(sqlmigration): fix the ttl queries

* feat(sqlmigration): update the TTL and apdex tables

* feat(sqlmigration): update the TTL and apdex tables

* feat(sqlmigration): fix the reset password and pat tables (#7482)

* feat(sqlmigration): fix the reset password and pat tables

* feat(sqlmigration): revert PAT changes

* feat(sqlmigration): register and rename the new migration

* feat(sqlmigration): handle updates for user tables

* feat(sqlmigration): remove unwanted changes
2025-04-04 01:46:28 +05:30
Vikrant Gupta
0116eb20ab feat(sqlmigration): update apdex and TTL status tables (#7517)
* feat(sqlmigration): update the alertmanager tables

* feat(sqlmigration): update the alertmanager tables

* feat(sqlmigration): make the preference package multi tenant

* feat(preference): address nit pick comments

* feat(preference): added the cascade delete for preferences

* feat(sqlmigration): update apdex and TTL status tables  (#7481)

* feat(sqlmigration): update the apdex and ttl tables

* feat(sqlmigration): register the new migration and rename table

* feat(sqlmigration): fix the ttl queries

* feat(sqlmigration): update the TTL and apdex tables

* feat(sqlmigration): update the TTL and apdex tables
2025-04-04 01:36:47 +05:30
Vikrant Gupta
79e9d1b357 feat(preference): multi tenant preference module (#7516)
* feat(sqlmigration): update the alertmanager tables

* feat(sqlmigration): update the alertmanager tables

* feat(sqlmigration): make the preference package multi tenant

* feat(preference): address nit pick comments

* feat(preference): added the cascade delete for preferences
2025-04-04 01:25:24 +05:30
Vikrant Gupta
b89ce82e25 feat(sqlmigration): update the alertmanager tables (#7513)
* feat(sqlmigration): update the alertmanager tables
2025-04-03 17:56:49 +00:00
dependabot[bot]
b43a198fd8 chore(deps): bump github.com/expr-lang/expr from 1.16.9 to 1.17.0 (#7342)
Bumps [github.com/expr-lang/expr](https://github.com/expr-lang/expr) from 1.16.9 to 1.17.0.
- [Release notes](https://github.com/expr-lang/expr/releases)
- [Commits](https://github.com/expr-lang/expr/compare/v1.16.9...v1.17.0)

---
updated-dependencies:
- dependency-name: github.com/expr-lang/expr
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-04-03 08:17:40 +00:00
Srikanth Chekuri
b40ca4baf3 fix: do not crash service on panic during rule eval (#7514) 2025-04-03 13:13:58 +05:30
SagarRajput-7
8df77c9221 fix: fixed trace funnel - header style overriding other pages (#7512)
* fix: fixed trace funnel - header style overriding other pages

* fix: fixed trace funnel - header style overriding other pages

* fix: handled nesting
2025-04-03 07:29:39 +00:00
Srikanth Chekuri
f67555576f chore: add info icon tool tips for webhook/routing/integration key (#7405) 2025-04-03 09:40:41 +05:30
Srikanth Chekuri
f0a4c37073 fix: handle maintenance windows that cross day boundaries (#7494) 2025-04-02 20:48:01 +00:00
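The fix above is only a one-line title; a generic sketch of the wrap-around check such a fix implies (illustrative only, assuming a window expressed as minutes since midnight, not the SigNoz implementation):

```go
// inWindow reports whether t falls inside a daily maintenance window
// [start, end), all expressed as minutes since midnight. A window such as
// 22:00-02:00 crosses the day boundary, so the comparison has to wrap.
func inWindow(t, start, end int) bool {
	if start <= end {
		return start <= t && t < end
	}
	// window crosses midnight: it covers [start, 24h) plus [0, end)
	return t >= start || t < end
}
```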
Vikrant Gupta
7972261237 Revert "fix: use search v2 component for traces data source & minor improveme…" (#7511)
This reverts commit d7a6607a25.
2025-04-03 01:15:20 +05:30
primus-bot[bot]
3b4a8e5e0f chore(release): bump to v0.77.0 (#7502)
#### Summary
 - Release SigNoz v0.77.0
 - Bump SigNoz OTel Collector to v0.111.37

 Created by [Primus-Bot](https://github.com/apps/primus-bot)
2025-04-02 12:19:03 +05:30
Yunus M
5ef3b8ee3f feat: support request data source and improve layout (#7485)
* feat: support request data source and improve layout

* feat: update config

* feat: update config with related keywords

* update config

---------

Co-authored-by: makeavish <makeavish786@gmail.com>
2025-04-02 11:59:53 +05:30
Yunus M
597752a4bc fix: licenses in community edition & improve messaging (#7456)
Enhance platform to handle cloud, self-hosted, community, and enterprise user types with tailored routing, error handling, and feature access.
2025-04-02 01:12:42 +05:30
Nityananda Gohain
07a244f569 chore: add migration for pat to add default values (#7492)
* chore: add migration for pat to add default values

* fix: minor changes

* fix: don't panic in GetClickhouseColumnName

* fix: use new function for pat

* fix: address minor comments

* fix: address comments

* fix: remove generatepat

* fix: minor changes

* fix: remove extra check

---------

Co-authored-by: Vibhu Pandey <vibhupandey28@gmail.com>
2025-04-01 23:49:37 +05:30
sawhil
eb9385840f fix: minor error handler message 2025-04-01 22:47:48 +05:30
sawhil
30b689037a fix: minor fix 2025-04-01 22:47:48 +05:30
sawhil
ba33c885d5 fix: added auth token refresh logic in event source provider error handler 2025-04-01 22:47:48 +05:30
Amlan Kumar Nandy
a4ed9e4d47 feat: base setup for inspect metrics feature (#7490) 2025-04-01 11:47:38 +00:00
Nityananda Gohain
df5767198c fix: don't panic in GetClickhouseColumnName (#7493) 2025-03-31 22:26:37 +05:30
Vibhu Pandey
81c7f3221a feat(prometheus): create a dedicated prometheus package (#7397) 2025-03-31 14:11:11 +00:00
Amlan Kumar Nandy
2cbd8733a1 chore: remove new banner from infra monitoring tab in side nav (#7483) 2025-03-31 13:22:23 +00:00
Nityananda Gohain
71d1dfe9bd chore: use new uuid in pipelines (#7487) 2025-03-31 16:45:00 +05:30
aniketio-ctrl
459712d25c fix(nil-pointer): wrong error passed | 2262 (#7463)
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-03-31 09:47:37 +00:00
Aditya Singh
61de2d414d fix: handle 404 redirection on root route (#7454)
* fix: handle 404 redirection on root route

* fix: add home component for root route

---------

Co-authored-by: Aditya Singh <adityasingh@Adityas-MacBook-Pro.local>
2025-03-31 14:35:58 +05:30
Sahil Khan
0b7cd4c1a7 feat: api monitoring feedback - 2 (#7432)
* feat: new dropdown styles

* fix: added new tag

* feat: added endpoint name and port in endpoint details

* feat: endpoint details feedback

* feat: analytics added

* fix: title fixed

* fix: domain list breaking for non available data

* feat: added third party api feature flag

* fix: console removed

* feat: added traces corelation in api monitoring charts

* feat: added customondragselect in grid card full view to handle breaking flow

* fix: minor failsafes added:

* fix: minor ux fix

* feat: incorporated pr comments - 0
2025-03-30 03:10:43 +05:30
Shaheer Kochai
62c033ccf8 chore: trace funnels feature flag changes (#7478)
* chore: trace funnels feature flag
2025-03-29 19:33:15 +05:30
Nageshbansal
e637487984 Fix the hyperlink for otel-demo-docs in contributing guide (#7462)
Co-authored-by: CheetoDa <31571545+Calm-Rock@users.noreply.github.com>
Co-authored-by: Vibhu Pandey <vibhupandey28@gmail.com>
2025-03-28 14:58:11 +00:00
Nityananda Gohain
8fc43a00f8 fix: collector connection to opamp without orgID (#7474) 2025-03-28 20:19:37 +05:30
Srikanth Chekuri
031d62ca44 Revert "fix: added default value to time,space aggregation to fix query_range…" (#7464)
This reverts commit 8c4c357351.
2025-03-28 12:43:31 +05:30
SagarRajput-7
8c4c357351 fix: added default value to time,space aggregation to fix query_range getting 500 for metric (#7414)
* fix: added default value to time,space aggregation to fix query_range getting 500 for metric

* fix: added all available operators as default when no attribute type is present

* fix: changed operator, time and space values to avg when empty attribute type
2025-03-28 06:36:09 +00:00
SagarRajput-7
d8d8191a32 feat: allow width customisation and persist it across users and view (#7273)
* feat: removed ellipsis prop

* feat: prevent unnecessary save calls

* feat: fix dashboard detail resize icon

* feat: adjusted resizable header - set minConstraint

* feat: fixed dashboard vanishing issue

* feat: removed dependency causing maximum callstack warning

* feat: corrected the list edit view render issue and resize handler fix

* feat: style fix

* feat: removed comments

* fix: updated test cases

* feat: updated the test cases

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-03-28 06:06:40 +00:00
SagarRajput-7
a876c0a744 chore: added doc title to messaging queues (#7460) 2025-03-28 11:25:41 +05:30
Vishal Sharma
c36f913a90 fix: telemetry version function call (#7453) 2025-03-27 20:07:09 +05:30
SagarRajput-7
ed597f00c0 fix: fixed copy to clipboard popup getting flooded on every click (#7448) 2025-03-27 05:54:34 +00:00
Vibhu Pandey
4957d3ae93 feat(sqlstore): move postgres to enterprise codebase (#7445) 2025-03-27 11:16:43 +05:30
Srikanth Chekuri
8835e3493d chore: skip logfield/spanfield type in the suggestions (#7433) 2025-03-27 10:36:27 +05:30
Vibhu Pandey
027a1631ef feat(httpclient): add an extensible http client (#7446) 2025-03-26 19:33:52 +00:00
Shaheer Kochai
d7a6607a25 fix: use search v2 component for traces data source & minor improvements to search v2 component (#7404) 2025-03-26 18:00:54 +00:00
Sahil Khan
7a58bc58c9 fix: stage and run query button same url navigation enabled (#7415) 2025-03-26 23:25:01 +05:30
Srikanth Chekuri
88be23c3e3 chore: pass through substitutions for CH query (#7389) 2025-03-26 12:58:55 +00:00
Srikanth Chekuri
8f095dfbc9 fix: handle expected value less than zero (#7410) 2025-03-26 12:50:46 +00:00
aniketio-ctrl
72207691a3 fix(metrics-explorer): added time filter in inner sub queries of list and samples (#7436) 2025-03-26 09:57:21 +00:00
Raj Kamal Singh
8998ca652e chore: aws integration: bump recommended agent version (#7434) 2025-03-26 09:14:05 +00:00
Piyush Singariya
f4ae5f19ff feat: AWS Managed Streaming Kafka service integration (#7350)
* feat: msk integration

* feat: logs not available in msk

* fix: minor suggestions made by ellipsis

* fix: changes based on review, added Variables, Units, Legends, SVG

* fix: update in global variables, and query operators

* fix: update in rx tx panel, region variable query update

---------

Co-authored-by: Raj Kamal Singh <1133322+raj-k-singh@users.noreply.github.com>
2025-03-26 12:57:39 +05:30
Vikrant Gupta
d1ea608671 feat(sqlmigration): migrate invites table from bigint to uuid (#7428)
* feat(sqlmigration): added migration for schema cleanup

* feat(sqlmigration): drop sites,licenses table and added uuid v7 for saved views

* feat(sqlmigration): commit the transaction

* feat(sqlmigration): address review comments

* feat(sqlmigration): address review comments

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): migrate invites table from bigint to uuid

* feat(sqlmigration): add support for idempotent dialect based migration

* feat(sqlmigration): add support for idempotent dialect based migration

* feat(sqlmigration): add foreign key constraints for all new tables
2025-03-25 22:02:34 +05:30
Shivanshu Raj Shrivastava
ac7ecac2c1 chore: check http.url exists (#7429)
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-03-25 18:25:12 +05:30
Vikrant Gupta
64071165c4 feat(sqlmigration): cleanup the licenses and sites table (#7422)
* feat(sqlmigration): added migration for schema cleanup

* feat(sqlmigration): drop sites,licenses table and added uuid v7 for saved views

* feat(sqlmigration): commit the transaction

* feat(sqlmigration): address review comments

* feat(sqlmigration): address review comments

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views

* feat(sqlmigration): frontend changes for saved views
2025-03-25 04:05:40 +05:30
SagarRajput-7
9c25a33cd9 fix: prevented infraMonitoring styles from overriding at other places (#7425) 2025-03-24 21:13:49 +05:30
Nityananda Gohain
3100d602c4 Revert "feat: adds trace funnels (#7315)" (#7423)
This reverts commit b36d2ec4c6.
2025-03-24 14:00:14 +00:00
Vibhu Pandey
d80908a1fc fix(logger): remove color encoding with json logs (#7421)
remove color encoding with json logs. Fixes the irritating: 
```
{"level":"\u001b[34mINFO\u001b[0m","timestamp":"2025-03-24T13:03:58.889Z",
```

in both enterprise and community
2025-03-24 13:22:44 +00:00
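The snippet in the commit message shows ANSI color escapes leaking into JSON log output. A minimal zap sketch of the kind of encoder change this implies (assumed configuration, not the actual SigNoz logger code):

```go
package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	encCfg := zap.NewProductionEncoderConfig()
	// zapcore.CapitalColorLevelEncoder emits ANSI escapes such as
	// "\u001b[34mINFO\u001b[0m", which is what ended up inside the JSON above.
	// A plain level encoder keeps the field machine-readable.
	encCfg.EncodeLevel = zapcore.CapitalLevelEncoder

	core := zapcore.NewCore(zapcore.NewJSONEncoder(encCfg), zapcore.Lock(os.Stdout), zapcore.InfoLevel)
	logger := zap.New(core)
	defer logger.Sync()

	logger.Info("server started") // {"level":"INFO","ts":...,"msg":"server started"}
}
```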
SagarRajput-7
bc17a10550 feat: added logarithmic scale option for panels (#7413) 2025-03-24 13:12:04 +00:00
SagarRajput-7
694c185373 feat: added logic to fill invalid value with null, so the chart doesn't have broken lines (#7412) 2025-03-24 18:28:41 +05:30
Sahil Khan
02f3dfefb9 feat: api monitoring domain details (#7308)
* feat: basic scaffolding for api monitoring & page level components added

* feat: added hardcoded attribute keys in querybuildersearchv2

* feat: added utils for formatting

* feat: api monitoring dashboard - 0

* feat: refactored the domain list

* feat: domain details drawer with all functionality added

* fix: minor eslint revert

* feat: adding ui styling to domain list table

* feat: adding pagination and minor styling to domain list table

* feat: added ui for domain details drawer all endpoints tab

* feat: added ui for domain details drawer all endpoints table with groupby

* feat: endpoint details tab ui revamped

* feat: endpoint details tab zero state styling

* fix: syntax error fixed

* feat: added conditional rendering of dep service and fixed graphs

* feat: added status code charts

* feat: added error states and loading states

* feat: added groupby persistence for endpoints

* feat: added domain navigation in the domain details drawer

* feat: added domain navigation in the domain details drawer - fix

* feat: isolated endpoint details zerostate

* feat: Implemented series aggregation with charts

* feat: ui for domain list table

* feat: react query keys added and basic pr comments resolved

* feat: fixed types

* feat: light mode fixed

* feat: empty states and light mode styling

* fix: bug with the endpoint filters

* feat: added port column and isolated endpoint in domain details

* feat: added port column and isolated endpoint in domain details - minor cleanup

* fix: minor type fix

* fix: pr comments incorporated - 0

* fix: pr comments incorporated - 1

---------

Co-authored-by: Sahil <sahil@Sahils-MacBook-Pro.local>
2025-03-24 18:01:39 +05:30
Elizabeth Mathew
3515686daf docs: for setting up otel demo app and sending data to signoz (#7358) 2025-03-24 11:40:51 +00:00
Prashant Shahi
c116bf05be ci: append amd64 suffix for staging/testing deployment versions (#7420)
### Summary

- update CI workflows to append amd64 suffix for staging/testing deployment versions

---------

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-03-24 11:26:37 +00:00
Nityananda Gohain
4842e3b912 fix: use sqlStore instead of bun.db (#7416)
* fix: use sqlStore instead of bun.db

* fix: get it to running state

* fix: name changes
2025-03-24 09:24:20 +00:00
Vibhu Pandey
9bdf194d70 feat: add the new version package (#7411) 2025-03-24 14:38:48 +05:30
Srikanth Chekuri
88aa29e94c chore: add event on span row click (#7406) 2025-03-24 14:21:51 +05:30
Nityananda Gohain
1dfebed93a fix: pipelines postgres support and multitenancy (#7371)
* fix: pipelines postgres support and multitenancy

* fix: minor fixes

* fix: address minor comments

* fix: rename package pipelinetypes
2025-03-24 10:17:12 +05:30
Shivanshu Raj Shrivastava
b36d2ec4c6 feat: adds trace funnels (#7315)
* feat: trace funnels

Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-03-23 22:35:30 +05:30
Vibhu Pandey
089f128020 chore: remove otlp zap encoder (#7408) 2025-03-23 11:49:37 +05:30
Nityananda Gohain
e30f95c340 fix: update pat last used (#7407) 2025-03-22 17:51:25 +05:30
Shaheer Kochai
2c87d96d75 feat: trace funnels list page (#7324)
* chore: add a new tab for traces funnels

* feat: funnels list page basic UI

* feat: learn more component

* feat: get funnels list data from mock API, and handle data, loading and empty states

* chore(SignozModal): add width prop and improve button styles

* feat: implement funnel rename

* refactor: overall improvements

* feat: implement sorting in traces funnels list page

* feat: add sort column key and order to url params

* chore: move useFunnels to hooks/TracesFunnels

* feat: implement traces funnels search and refactor search and sort by extracting to custom hooks

* chore: overall improvements to rename trace funnel modal

* chore: make the rename input auto-focusable

* feat: handle create funnel modal

* feat: delete funnel modal and functionality

* fix: fix the layout shift in funnel item caused by getContainer={false}

* chore: overall improvements and use live api in traces funnels

* feat: create traces funnels details basic page + funnel -> details redirection

* fix: funnels traces light mode UI

* fix: properly display created at in funnels list item + preventDefault

* refactor: extract FunnelItemPopover into a separate component

* chore: hide funnel tab from traces explorer

* chore: add check to display trace funnels tab only in dev environment

* chore: improve funnels modals light mode

* chore: overall improvements

* fix: properly pass funnel details link

* chore: address PR review changes
2025-03-22 09:13:18 +00:00
Nityananda Gohain
1f9b13dc35 fix: fix the pat middleware (#7402) 2025-03-22 12:50:43 +05:30
Vikrant Gupta
d831c1cb88 fix(portal): url limit reached for stripe customer portal (#7392) 2025-03-21 06:02:49 +00:00
Amlan Kumar Nandy
bc72a0a591 chore: correct isMonotonic field name in metric details (#7393) 2025-03-21 10:36:54 +05:30
Amlan Kumar Nandy
ad2b75e3f0 chore: metrics explorer fixes (#7377) 2025-03-20 18:58:15 +00:00
Shivanshu Raj Shrivastava
efd4e30edf fix: publish signoz as package (#7378)
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
2025-03-20 15:31:41 +00:00
aniketio-ctrl
f79a5a2db6 fix(metrics-explorer): added updated metadata in list summary (#7381)
* fix(metrics-explorer): added updated metadata in list summary

* fix(metrics-explorer): added skipSignozMetric in aggregate attribute

* fix(metrics-explorer): updated last received query
2025-03-20 18:46:05 +05:30
Nityananda Gohain
9d8e46e5b2 fix: move pat and org domains towards postgres multitenancy (#7337)
* fix: inital commit for pat

* fix: add migration file

* fix: add domain changes

* fix: minor fixes

* fix: update migration

* fix: update migration

* fix: update pat and old migration

* fix: move domain and sso type to ee
2025-03-20 13:59:52 +05:30
SagarRajput-7
0320285a25 feat: added context redirection from panels to explorer pages (#7141)
* feat: added context redirection from panels to explorer pages

* feat: added graph coordinate - context redirection

* feat: fixed tooltip overlapping the button

* feat: code fix

* feat: removed unnecessary comment

* feat: added logic to resolve variables

* feat: added better logic to handle specific and panel redirection using query

* feat: added multi query support by datasource to panels redirction

* feat: fixing createbutton display logic

* feat: added logic and ui for specific line redirection

* feat: added logic to compute query with groupby

* feat: code fix and added async await

* feat: added context redirection to fullview and edit view panels (#7252)

* feat: added context redirection to fullview and edit view panels

* feat: restricted redirection query to have only one query

* feat: added is buttonEnabled logic of graphs

* feat: code cleanup

* feat: for one query removed the queryname from onclick button

* feat: removed redirection option from action menu

* feat: redesign the format api flow to avoid delay in clickbutton appearance

* feat: updated the create filter logic for groupBys

* feat: handled the error on format api
2025-03-20 11:29:31 +05:30
Vibhu Pandey
e04e58d8b3 fix(apiserver): remove redundant logs by default (#7375) 2025-03-20 11:01:01 +05:30
Amlan Kumar Nandy
fc03303c29 chore: fix query builder datasource issue in metrics explorer (#7373) 2025-03-20 03:38:16 +00:00
Vibhu Pandey
95b94a9da7 fix(cache): rely on if-modified-since header to serve static files (#7372) 2025-03-20 01:09:52 +05:30
Amlan Kumar Nandy
3d3dd98549 chore: metrics explorer fixes (#7362) 2025-03-19 17:31:36 +05:30
Amlan Kumar Nandy
097e4ca948 chore: add in-table-search for metric name and type in metrics explorer (#7356) 2025-03-19 13:27:39 +05:30
Raj Kamal Singh
a64908e571 Feat: aws integration support for api gateway (#7232)
* chore: fix typo in RDS overview text

* feat: aws integration: get svc definition for API gateway started

* feat: aws integration: api gateway: add overview dashboard

* feat: aws integration: API gateway: add details of metrics collected

* chore: aws integration: api gateway: remove unnecessary promql query from panels
2025-03-19 07:16:29 +00:00
Raj Kamal Singh
0be9ca272b Chore: Spell fix in AWS ALB integration (#7282)
* chore: spell fix in AWS ALB integration

* chore: only ALB title

---------

Co-authored-by: Piyush Singariya <piyushsingariya@gmail.com>
Co-authored-by: piyushsignoz <piyush@signoz.io>
Co-authored-by: Raj Kamal Singh <1133322+raj-k-singh@users.noreply.github.com>
2025-03-19 07:03:33 +00:00
aniketio-ctrl
e434e3d142 chore: update metric metadata response payload (#7359)
* fix(metrics-explorer): added type in list summary api as filter
2025-03-19 06:46:38 +00:00
SagarRajput-7
4779300dea fix: fixed list panel getting no-data in full view and edit panel (#7320)
* fix: fixed list panel getting no-data in full view and edit panel

* fix: added cloneDeep for updatedQuery

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-03-19 11:58:20 +05:30
aniketio-ctrl
7a225e0a4f fix(metrics-explorer): added type in list summary api as filter (#7357) 2025-03-19 11:30:40 +05:30
SagarRajput-7
53d3de4909 feat: added label with value (result) in Pie charts (#7322)
* fix: fixed legend format not working for Pie Chart

* fix: added enhancement to legend fit and show, also made pie-chart responsive

* fix: made some css fixes

* fix: css fixes

* feat: added label with value (result) in Pie charts

* feat: added UI for inner radius to have total value in the center of PIE

* feat: added formatting and unit support to inner total values
2025-03-18 16:34:56 +00:00
SagarRajput-7
4ede88cc1f feat: made value panel responsive and background color to full (#7339)
* feat: made value panel responsive and background color to full

* feat: added css fix

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2025-03-18 21:58:41 +05:30
Nityananda Gohain
800ca9d329 fix: min step interval to 60 (#7354) 2025-03-18 14:18:12 +00:00
Amlan Kumar Nandy
b3fa1ad60e chore: metrics explorer fixes and improvements (#7334) 2025-03-18 10:57:14 +00:00
aniketio-ctrl
5b6b5bf359 feat(summary): added update metrics metadata api (#7235)
* feat(explorer): updated metadata metrics api| 7076

* feat(explorer): added inspect metrics with resource attribute| 7076

* fix(summary): fixed dashboard name in metric metadata api

* fix(summary): removed offset from second query

* fix(summary): removed offset from second query

* feat(summary): added update metrics metadata api

* feat(summary): resolved log messages

* feat(summary): added is_monotonic column and added temporality| 7077

* feat(summary): added histogram bucket and summary quantile check| 7077

* feat(summary): added temporality and is_monotonic in update queries| 7077

* feat(summary): resolved pr comments| 7077

* feat(inspect): normalized resource attributes

* feat(update-summary): resolved merge conflicts

* feat(update-summary): resolved merge conflicts

* feat(update-metrics): updated description check

* feat(update-metrics): added kv log comments

* fix: updated testcase with reader

* fix: updated testcase with reader

* fix: updated testcase with reader

* fix: updated normalized to true in metrics explorer api

* fix: removed inner join from list metrics query
2025-03-18 10:39:34 +00:00
SagarRajput-7
8a479d42ff fix: fixed legend format not working for Pie Chart (#7300)
* fix: fixed legend format not working for Pie Chart

* fix: added enhancement to legend fit and show, also made pie-chart responsive

* fix: made some css fixes

* fix: css fixes
2025-03-18 10:23:58 +00:00
primus-bot[bot]
031e78cb20 chore(release): bump to v0.76.2 (#7351)
#### Summary
 - Release SigNoz v0.76.2
2025-03-18 14:46:19 +05:30
Yunus M
f992ba9106 fix(license): OSS UI build failing due to restrictive active license check (#7345)
* fix: ui breaking due to licenses issue

* feat: handle navigations in case of oss in homepage (#7347)

* feat: handle navigations in case of oss in homepage

* fix: skip datasource and redirect to get-started from services table

---------

Co-authored-by: makeavish <makeavish786@gmail.com>

---------

Co-authored-by: makeavish <makeavish786@gmail.com>
2025-03-18 14:24:26 +05:30
Vibhu Pandey
7118829107 fix(test|lint): fix lint and test in go (#7346)
### Summary
 
- fix lint and test in go
2025-03-18 14:01:48 +05:30
Vikrant Gupta
5aea442939 fix(license): add check for current route for workspace access (#7341) 2025-03-18 04:40:31 +05:30
Prashant Shahi
bf385a9f95 fix: enable 8080 ports for testing standalone and swarm (#7336)
### Summary

- expose SigNoz port 8080 for testing standalone and swarm

Signed-off-by: Prashant Shahi <prashant@signoz.io>
2025-03-17 21:26:43 +05:30
primus-bot[bot]
c52994ff9e chore(release): bump SigNoz to v0.76.1, OTel Collector to v0.111.34 (#7335)
#### Summary
 - Release SigNoz v0.76.1
 - Bump SigNoz OTel Collector to v0.111.34

 Created by [Primus-Bot](https://github.com/apps/primus-bot)
2025-03-17 20:52:03 +05:30
Vikrant Gupta
a1eb52b034 chore(subscription): deprecate trial API (#7325) 2025-03-17 19:33:32 +05:30
Yunus M
c81760bdf7 feat: update ui based on license states (#7328)
* feat: update ui based on license states

* feat: update license based ui messaging (#7330)

* fix: billing test cases
2025-03-17 19:27:45 +05:30
Vikrant Gupta
c26277cd42 chore(subscription): update the checkout and portal endpoints to use zeus (#7310)
### Summary

- update the checkout and portal endpoints to use `zeus` instead of license server
2025-03-17 15:22:04 +05:30
Shivanshu Raj Shrivastava
458bd1171b fix: give correct span counts via waterfall api (#7287)
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
Co-authored-by: Nityananda Gohain <nityanandagohain@gmail.com>
2025-03-17 05:33:06 +00:00
Vibhu Pandey
a806ddf74f ci(codeowners): add codeowners for core pkg packages (#7317) 2025-03-17 09:01:58 +05:30
Yunus M
b039dc6fa7 feat: in product home page (#7270)
* feat: base setup for in product home page

* feat: base state

* feat: add empty states for alerts, traces, dashboards, saved views

* feat: add checklist component

* feat: integrate all panels

* feat: integrate preference api and clean up components

* feat: handle done and skip states of the checklist

* feat: update ui

* feat: update ui

* feat: code cleanup

* feat: add events

* feat: support time interval change in services

* feat: add service time change event and cleanup code

* feat: handle light mode

* feat: address review comments

* fix: routing issues

* fix: testcase snapshot, minor ui improvements

* fix: noopener typo in window.open
2025-03-16 19:41:08 +05:30
Vikrant Gupta
08309c380c fix(usage): set default tenant for exporting usage (#7314) 2025-03-16 15:00:31 +05:30
1008 changed files with 70799 additions and 14228 deletions

View File

@@ -1,5 +1,4 @@
services:
clickhouse:
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: clickhouse
@@ -24,7 +23,6 @@ services:
retries: 3
depends_on:
- zookeeper
zookeeper:
image: bitnami/zookeeper:3.7.1
container_name: zookeeper
@@ -41,9 +39,8 @@ services:
interval: 30s
timeout: 5s
retries: 3
schema-migrator-sync:
image: signoz/signoz-schema-migrator:0.111.29
image: signoz/signoz-schema-migrator:v0.111.40
container_name: schema-migrator-sync
command:
- sync
@@ -55,9 +52,8 @@ services:
clickhouse:
condition: service_healthy
restart: on-failure
schema-migrator-async:
image: signoz/signoz-schema-migrator:0.111.29
image: signoz/signoz-schema-migrator:v0.111.40
container_name: schema-migrator-async
command:
- async

View File

@@ -0,0 +1,27 @@
services:
postgres:
image: postgres:15
container_name: postgres
environment:
POSTGRES_DB: signoz
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
healthcheck:
test:
[
"CMD",
"pg_isready",
"-d",
"signoz",
"-U",
"postgres"
]
interval: 30s
timeout: 30s
retries: 3
restart: on-failure
ports:
- "127.0.0.1:5432:5432/tcp"
volumes:
- ${PWD}/fs/tmp/var/lib/postgresql/data/:/var/lib/postgresql/data/
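This new compose file adds a Postgres 15 devenv; judging by the `devenv-postgres` target added to the Makefile further down, it presumably lives under `.devenv/docker/postgres/`. A minimal sketch of bringing it up and checking readiness, assuming that path and the credentials above:

# Sketch: start the devenv Postgres (path assumed from the Makefile's devenv-postgres target)
cd .devenv/docker/postgres && docker compose -f compose.yaml up -d
# Once the healthcheck passes, confirm connectivity with the same credentials
docker exec postgres pg_isready -d signoz -U postgres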

View File

@@ -1,6 +1,7 @@
.git
.github
.vscode
.devenv
README.md
deploy
sample-apps

6
.github/CODEOWNERS vendored
View File

@@ -6,5 +6,9 @@
/frontend/src/container/MetricsApplication @srikanthccv
/frontend/src/container/NewWidget/RightContainer/types.ts @srikanthccv
/deploy/ @SigNoz/devops
/sample-apps/ @SigNoz/devops
.github @SigNoz/devops
/pkg/config/ @grandwizard28
/pkg/errors/ @grandwizard28
/pkg/factory/ @grandwizard28
/pkg/types/ @grandwizard28
/pkg/sqlmigration/ @vikrantgupta25

View File

@@ -1,42 +0,0 @@
# Github actions
## Testing the UI manually on each PR
First we need to make sure the UI is ready
* Check the `Start tunnel` step in `e2e-k8s/deploy-on-k3s-cluster` job and make sure you see `your url is: https://pull-<number>-signoz.loca.lt`
* This job will run until the PR is merged or closed to keep the local tunneling alive
- GitHub will cancel this job if the PR hasn't been merged after 6h
- if the job was cancelled, go to the action and press `Re-run all jobs`
Now you can open your browser at https://pull-<number>-signoz.loca.lt and check the UI.
## Environment Variables
To run the GitHub workflows, a few environment variables need to be added to GitHub secrets
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Example </th>
</tr>
<tr>
<td> REPONAME </td>
<td> Provide the DockerHub user/organisation name of the image. </td>
<td> signoz</td>
</tr>
<tr>
<td> DOCKERHUB_USERNAME </td>
<td> Docker hub username </td>
<td> signoz</td>
</tr>
<tr>
<td> DOCKERHUB_TOKEN </td>
<td> Docker hub password/token with push permission </td>
<td> **** </td>
</tr>
<tr>
<td> SONAR_TOKEN </td>
<td> <a href="https://sonarcloud.io">SonarCloud</a> token </td>
<td> **** </td>
</tr>

81
.github/workflows/build-community.yaml vendored Normal file
View File

@@ -0,0 +1,81 @@
name: build-community
on:
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'
- 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'
defaults:
run:
shell: bash
env:
PRIMUS_HOME: .primus
MAKE: make --no-print-directory --makefile=.primus/src/make/main.mk
jobs:
prepare:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.build-info.outputs.version }}
hash: ${{ steps.build-info.outputs.hash }}
time: ${{ steps.build-info.outputs.time }}
branch: ${{ steps.build-info.outputs.branch }}
steps:
- name: self-checkout
uses: actions/checkout@v4
- id: token
name: github-token-gen
uses: actions/create-github-app-token@v1
with:
app-id: ${{ secrets.PRIMUS_APP_ID }}
private-key: ${{ secrets.PRIMUS_PRIVATE_KEY }}
owner: ${{ github.repository_owner }}
- name: primus-checkout
uses: actions/checkout@v4
with:
repository: signoz/primus
ref: main
path: .primus
token: ${{ steps.token.outputs.token }}
- name: build-info
run: |
echo "version=$($MAKE info-version)" >> $GITHUB_OUTPUT
echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT
js-build:
uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
needs: prepare
secrets: inherit
with:
PRIMUS_REF: main
JS_SRC: frontend
JS_OUTPUT_ARTIFACT_CACHE_KEY: community-jsbuild-${{ github.sha }}
JS_OUTPUT_ARTIFACT_PATH: frontend/build
DOCKER_BUILD: false
DOCKER_MANIFEST: false
go-build:
uses: signoz/primus.workflows/.github/workflows/go-build.yaml@main
needs: [prepare, js-build]
secrets: inherit
with:
PRIMUS_REF: main
GO_NAME: signoz-community
GO_INPUT_ARTIFACT_CACHE_KEY: community-jsbuild-${{ github.sha }}
GO_INPUT_ARTIFACT_PATH: frontend/build
GO_BUILD_CONTEXT: ./pkg/query-service
GO_BUILD_FLAGS: >-
-tags timetzdata
-ldflags='-linkmode external -extldflags \"-static\" -s -w
-X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
-X github.com/SigNoz/signoz/pkg/version.variant=community
-X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
-X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}'
GO_CGO_ENABLED: 1
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./pkg/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: dockerhub

116
.github/workflows/build-enterprise.yaml vendored Normal file
View File

@@ -0,0 +1,116 @@
name: build-enterprise
on:
push:
tags:
- v*
defaults:
run:
shell: bash
env:
PRIMUS_HOME: .primus
MAKE: make --no-print-directory --makefile=.primus/src/make/main.mk
jobs:
prepare:
runs-on: ubuntu-latest
outputs:
docker_providers: ${{ steps.set-docker-providers.outputs.providers }}
version: ${{ steps.build-info.outputs.version }}
hash: ${{ steps.build-info.outputs.hash }}
time: ${{ steps.build-info.outputs.time }}
branch: ${{ steps.build-info.outputs.branch }}
steps:
- name: self-checkout
uses: actions/checkout@v4
- id: token
name: github-token-gen
uses: actions/create-github-app-token@v1
with:
app-id: ${{ secrets.PRIMUS_APP_ID }}
private-key: ${{ secrets.PRIMUS_PRIVATE_KEY }}
owner: ${{ github.repository_owner }}
- name: primus-checkout
uses: actions/checkout@v4
with:
repository: signoz/primus
ref: main
path: .primus
token: ${{ steps.token.outputs.token }}
- name: build-info
id: build-info
run: |
echo "version=$($MAKE info-version)" >> $GITHUB_OUTPUT
echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT
- name: set-docker-providers
id: set-docker-providers
run: |
if [[ ${{ github.event.ref }} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+$ || ${{ github.event.ref }} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+-rc\.[0-9]+$ ]]; then
echo "providers=dockerhub gcp" >> $GITHUB_OUTPUT
else
echo "providers=gcp" >> $GITHUB_OUTPUT
fi
- name: create-dotenv
run: |
mkdir -p frontend
echo 'CI=1' > frontend/.env
echo 'INTERCOM_APP_ID="${{ secrets.INTERCOM_APP_ID }}"' >> frontend/.env
echo 'SEGMENT_ID="${{ secrets.SEGMENT_ID }}"' >> frontend/.env
echo 'SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}"' >> frontend/.env
echo 'SENTRY_ORG="${{ secrets.SENTRY_ORG }}"' >> frontend/.env
echo 'SENTRY_PROJECT_ID="${{ secrets.SENTRY_PROJECT_ID }}"' >> frontend/.env
echo 'SENTRY_DSN="${{ secrets.SENTRY_DSN }}"' >> frontend/.env
echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> frontend/.env
echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> frontend/.env
echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> frontend/.env
echo 'CUSTOMERIO_ID="${{ secrets.CUSTOMERIO_ID }}"' >> frontend/.env
echo 'CUSTOMERIO_SITE_ID="${{ secrets.CUSTOMERIO_SITE_ID }}"' >> frontend/.env
echo 'USERPILOT_KEY="${{ secrets.USERPILOT_KEY }}"' >> frontend/.env
- name: cache-dotenv
uses: actions/cache@v4
with:
path: frontend/.env
key: enterprise-dotenv-${{ github.sha }}
js-build:
uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
needs: prepare
secrets: inherit
with:
PRIMUS_REF: main
JS_SRC: frontend
JS_INPUT_ARTIFACT_CACHE_KEY: enterprise-dotenv-${{ github.sha }}
JS_INPUT_ARTIFACT_PATH: frontend/.env
JS_OUTPUT_ARTIFACT_CACHE_KEY: enterprise-jsbuild-${{ github.sha }}
JS_OUTPUT_ARTIFACT_PATH: frontend/build
DOCKER_BUILD: false
DOCKER_MANIFEST: false
go-build:
uses: signoz/primus.workflows/.github/workflows/go-build.yaml@main
needs: [prepare, js-build]
secrets: inherit
with:
PRIMUS_REF: main
GO_INPUT_ARTIFACT_CACHE_KEY: enterprise-jsbuild-${{ github.sha }}
GO_INPUT_ARTIFACT_PATH: frontend/build
GO_BUILD_CONTEXT: ./ee/query-service
GO_BUILD_FLAGS: >-
-tags timetzdata
-ldflags='-linkmode external -extldflags \"-static\" -s -w
-X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
-X github.com/SigNoz/signoz/pkg/version.variant=enterprise
-X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
-X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
-X github.com/SigNoz/signoz/ee/zeus.url=https://api.signoz.cloud
-X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.signoz.io
-X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=https://api.signoz.cloud
-X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1'
GO_CGO_ENABLED: 1
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./ee/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: ${{ needs.prepare.outputs.docker_providers }}

125
.github/workflows/build-staging.yaml vendored Normal file
View File

@@ -0,0 +1,125 @@
name: build-staging
on:
push:
branches:
- main
pull_request:
types: [labeled]
defaults:
run:
shell: bash
env:
PRIMUS_HOME: .primus
MAKE: make --no-print-directory --makefile=.primus/src/make/main.mk
jobs:
prepare:
runs-on: ubuntu-latest
if: ${{ contains(github.event.label.name, 'staging:') || github.event.ref == 'refs/heads/main' }}
outputs:
version: ${{ steps.build-info.outputs.version }}
hash: ${{ steps.build-info.outputs.hash }}
time: ${{ steps.build-info.outputs.time }}
branch: ${{ steps.build-info.outputs.branch }}
deployment: ${{ steps.build-info.outputs.deployment }}
steps:
- name: self-checkout
uses: actions/checkout@v4
- id: token
name: github-token-gen
uses: actions/create-github-app-token@v1
with:
app-id: ${{ secrets.PRIMUS_APP_ID }}
private-key: ${{ secrets.PRIMUS_PRIVATE_KEY }}
owner: ${{ github.repository_owner }}
- name: primus-checkout
uses: actions/checkout@v4
with:
repository: signoz/primus
ref: main
path: .primus
token: ${{ steps.token.outputs.token }}
- name: build-info
id: build-info
run: |
echo "version=$($MAKE info-version)" >> $GITHUB_OUTPUT
echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT
staging_label="${{ github.event.label.name }}"
if [[ "${staging_label}" == "staging:"* ]]; then
deployment=${staging_label#"staging:"}
elif [[ "${{ github.event.ref }}" == "refs/heads/main" ]]; then
deployment="staging"
else
echo "error: not able to determine deployment - please verify the PR label or the branch"
exit 1
fi
echo "deployment=${deployment}" >> $GITHUB_OUTPUT
- name: create-dotenv
run: |
mkdir -p frontend
echo 'CI=1' > frontend/.env
echo 'TUNNEL_URL="${{ secrets.NP_TUNNEL_URL }}"' >> frontend/.env
echo 'TUNNEL_DOMAIN="${{ secrets.NP_TUNNEL_DOMAIN }}"' >> frontend/.env
echo 'USERPILOT_KEY="${{ secrets.NP_USERPILOT_KEY }}"' >> frontend/.env
- name: cache-dotenv
uses: actions/cache@v4
with:
path: frontend/.env
key: staging-dotenv-${{ github.sha }}
js-build:
uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
needs: prepare
secrets: inherit
with:
PRIMUS_REF: main
JS_SRC: frontend
JS_INPUT_ARTIFACT_CACHE_KEY: staging-dotenv-${{ github.sha }}
JS_INPUT_ARTIFACT_PATH: frontend/.env
JS_OUTPUT_ARTIFACT_CACHE_KEY: staging-jsbuild-${{ github.sha }}
JS_OUTPUT_ARTIFACT_PATH: frontend/build
DOCKER_BUILD: false
DOCKER_MANIFEST: false
go-build:
uses: signoz/primus.workflows/.github/workflows/go-build.yaml@main
needs: [prepare, js-build]
secrets: inherit
with:
PRIMUS_REF: main
GO_INPUT_ARTIFACT_CACHE_KEY: staging-jsbuild-${{ github.sha }}
GO_INPUT_ARTIFACT_PATH: frontend/build
GO_BUILD_CONTEXT: ./ee/query-service
GO_BUILD_FLAGS: >-
-tags timetzdata
-ldflags='-linkmode external -extldflags \"-static\" -s -w
-X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
-X github.com/SigNoz/signoz/pkg/version.variant=enterprise
-X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
-X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
-X github.com/SigNoz/signoz/ee/zeus.url=https://api.staging.signoz.cloud
-X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.staging.signoz.cloud
-X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=https://api.staging.signoz.cloud
-X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.staging.signoz.cloud/api/v1'
GO_CGO_ENABLED: 1
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./ee/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: gcp
staging:
if: ${{ contains(github.event.label.name, 'staging:') || github.event.ref == 'refs/heads/main' }}
uses: signoz/primus.workflows/.github/workflows/github-trigger.yaml@main
secrets: inherit
needs: [prepare, go-build]
with:
PRIMUS_REF: main
GITHUB_ENVIRONMENT: staging
GITHUB_SILENT: true
GITHUB_REPOSITORY_NAME: charts-saas-v3-staging
GITHUB_EVENT_NAME: releaser
GITHUB_EVENT_PAYLOAD: "{\"deployment\": \"${{ needs.prepare.outputs.deployment }}\", \"signoz_version\": \"${{ needs.prepare.outputs.version }}\"}"
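Per the `prepare` job above, staging builds are keyed off either a push to main or a `staging:<deployment>` PR label, with the prefix stripped to produce the deployment name. A hedged example of labelling a PR to kick off a build, assuming the GitHub CLI is available (the PR number and deployment name are illustrative placeholders):

# Sketch: apply a staging label to a PR; "7350" and "alpha" are placeholders, not real values
gh pr edit 7350 --add-label "staging:alpha"
# Pushes to main need no label and default to deployment="staging"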

View File

@@ -1,48 +0,0 @@
name: build-pipeline
on:
pull_request:
branches:
- main
- release/v*
jobs:
build-frontend:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Install dependencies
run: cd frontend && yarn install
- name: Build frontend static files
shell: bash
run: |
make build-frontend-static
build-signoz:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup golang
uses: actions/setup-go@v4
with:
go-version: "1.22"
- name: Build signoz image
shell: bash
run: |
make build-signoz-amd64
build-signoz-community:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup golang
uses: actions/setup-go@v4
with:
go-version: "1.22"
- name: Build signoz community image
shell: bash
run: |
make build-signoz-community-amd64

View File

@@ -25,15 +25,3 @@ jobs:
else
echo "No references to 'ee' packages found in 'pkg' directory"
fi
lint:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: lint
uses: wagoid/commitlint-github-action@v5

View File

@@ -34,3 +34,30 @@ jobs:
secrets: inherit
with:
PRIMUS_REF: main
build:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
runs-on: ubuntu-latest
steps:
- name: self-checkout
uses: actions/checkout@v4
- name: go-install
uses: actions/setup-go@v5
with:
go-version: "1.22"
- name: qemu-install
uses: docker/setup-qemu-action@v3
- name: aarch64-install
run: |
set -ex
sudo apt-get update
sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
- name: docker-community
shell: bash
run: |
make docker-build-community
- name: docker-enterprise
shell: bash
run: |
make docker-build-enterprise

View File

@@ -22,7 +22,7 @@ jobs:
run: |
echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
- name: build-frontend
run: make build-frontend-static
run: make js-build
- name: upload-frontend-artifact
uses: actions/upload-artifact@v4
with:

View File

@@ -35,8 +35,9 @@ jobs:
echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> .env
echo 'CUSTOMERIO_ID="${{ secrets.CUSTOMERIO_ID }}"' >> .env
echo 'CUSTOMERIO_SITE_ID="${{ secrets.CUSTOMERIO_SITE_ID }}"' >> .env
echo 'USERPILOT_KEY="${{ secrets.USERPILOT_KEY }}"' >> .env
- name: build-frontend
run: make build-frontend-static
run: make js-build
- name: upload-frontend-artifact
uses: actions/upload-artifact@v4
with:

53
.github/workflows/integrationci.yaml vendored Normal file
View File

@@ -0,0 +1,53 @@
name: integrationci
on:
pull_request:
types:
- labeled
pull_request_target:
types:
- labeled
jobs:
test:
strategy:
fail-fast: false
matrix:
src:
- bootstrap
sqlstore-provider:
- postgres
- sqlite
clickhouse-version:
- 24.1.2-alpine
- 24.12-alpine
schema-migrator-version:
- v0.111.38
postgres-version:
- 15
if: |
((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-integrate')
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v4
- name: python
uses: actions/setup-python@v5
with:
python-version: 3.13
- name: poetry
run: |
python -m pip install poetry==2.1.2
python -m poetry config virtualenvs.in-project true
cd tests/integration && poetry install --no-root
- name: run
run: |
cd tests/integration && \
poetry run pytest \
--basetemp=./tmp/ \
src/${{matrix.src}} \
--sqlstore-provider ${{matrix.sqlstore-provider}} \
--postgres-version ${{matrix.postgres-version}} \
--clickhouse-version ${{matrix.clickhouse-version}} \
--schema-migrator-version ${{matrix.schema-migrator-version}}
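The new integrationci workflow drives the Poetry-managed integration suite against a matrix of sqlstore providers and ClickHouse versions. The same run can, in principle, be reproduced locally; a minimal sketch for one matrix cell, assuming Python 3.13, Poetry, and a Docker daemon for the test containers:

# Sketch: replay one cell of the workflow matrix locally
python -m pip install poetry==2.1.2
python -m poetry config virtualenvs.in-project true
cd tests/integration && poetry install --no-root
poetry run pytest --basetemp=./tmp/ src/bootstrap \
--sqlstore-provider sqlite \
--postgres-version 15 \
--clickhouse-version 24.1.2-alpine \
--schema-migrator-version v0.111.38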

View File

@@ -1,9 +1,9 @@
name: prereleaser
on:
# schedule every wednesday 9:30 AM UTC (3pm IST)
# schedule every wednesday 6:30 AM UTC (12:00 PM IST)
schedule:
- cron: '30 9 * * 3'
- cron: '30 6 * * 3'
# allow manual triggering of the workflow by a maintainer
workflow_dispatch:

View File

@@ -1,134 +0,0 @@
name: push
on:
push:
branches:
- main
tags:
- v*
jobs:
image-build-and-push-signoz:
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v4
- name: setup
uses: actions/setup-go@v4
with:
go-version: "1.22"
- name: setup-qemu
uses: docker/setup-qemu-action@v3
- name: setup-buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
- name: docker-login
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: create-env-file
run: |
echo 'INTERCOM_APP_ID="${{ secrets.INTERCOM_APP_ID }}"' > frontend/.env
echo 'SEGMENT_ID="${{ secrets.SEGMENT_ID }}"' >> frontend/.env
echo 'SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}"' >> frontend/.env
echo 'SENTRY_ORG="${{ secrets.SENTRY_ORG }}"' >> frontend/.env
echo 'SENTRY_PROJECT_ID="${{ secrets.SENTRY_PROJECT_ID }}"' >> frontend/.env
echo 'SENTRY_DSN="${{ secrets.SENTRY_DSN }}"' >> frontend/.env
echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> frontend/.env
echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> frontend/.env
echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> frontend/.env
echo 'CUSTOMERIO_ID="${{ secrets.CUSTOMERIO_ID }}"' >> frontend/.env
echo 'CUSTOMERIO_SITE_ID="${{ secrets.CUSTOMERIO_SITE_ID }}"' >> frontend/.env
- uses: benjlevesque/short-sha@v2.2
id: short-sha
- name: branch-name
id: branch-name
uses: tj-actions/branch-names@v7.0.7
- name: docker-tag
run: |
if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
echo "DOCKER_TAG=${{ steps.branch-name.outputs.tag }}" >> $GITHUB_ENV
elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
echo "DOCKER_TAG=latest" >> $GITHUB_ENV
else
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
fi
- name: cross-compilation-tools
run: |
set -ex
sudo apt-get update
sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
- name: publish-signoz
run: make build-push-signoz
- name: qs-docker-tag
run: |
if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
tag="${{ steps.branch-name.outputs.tag }}"
tag="${tag:1}"
echo "DOCKER_TAG=${tag}" >> $GITHUB_ENV
elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
echo "DOCKER_TAG=latest" >> $GITHUB_ENV
else
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
fi
- name: publish-query-service
run: |
SIGNOZ_DOCKER_IMAGE=query-service make build-push-signoz
image-build-and-push-signoz-community:
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v4
- name: setup-go
uses: actions/setup-go@v4
with:
go-version: "1.22"
- name: setup-qemu
uses: docker/setup-qemu-action@v3
- name: setup-buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
- name: docker-login
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- uses: benjlevesque/short-sha@v2.2
id: short-sha
- name: branch-name
id: branch-name
uses: tj-actions/branch-names@v7.0.7
- name: docker-tag
run: |
if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
echo "DOCKER_TAG=${{ steps.branch-name.outputs.tag }}" >> $GITHUB_ENV
elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
echo "DOCKER_TAG=latest" >> $GITHUB_ENV
else
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
fi
- name: cross-compilation-tools
run: |
set -ex
sudo apt-get update
sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
- name: publish-signoz-community
run: make build-push-signoz-community
- name: qs-docker-tag
run: |
if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
tag="${{ steps.branch-name.outputs.tag }}"
tag="${tag:1}"
echo "DOCKER_TAG=${tag}-oss" >> $GITHUB_ENV
elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
echo "DOCKER_TAG=latest-oss" >> $GITHUB_ENV
else
echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}-oss" >> $GITHUB_ENV
fi
- name: publish-query-service-oss
run: |
SIGNOZ_COMMUNITY_DOCKER_IMAGE=query-service make build-push-signoz-community

View File

@@ -1,16 +0,0 @@
name: remove-label
on:
pull_request_target:
types: [synchronize]
jobs:
remove:
runs-on: ubuntu-latest
steps:
- name: Remove label testing-deploy from PR
uses: buildsville/add-remove-label@v2.0.0
with:
label: testing-deploy
type: remove
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,55 +0,0 @@
name: staging-deployment
# Trigger deployment only on push to main branch
on:
push:
branches:
- main
jobs:
deploy:
name: Deploy latest main branch to staging
runs-on: ubuntu-latest
environment: staging
permissions:
contents: 'read'
id-token: 'write'
steps:
- id: 'auth'
uses: 'google-github-actions/auth@v2'
with:
workload_identity_provider: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }}
service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
- name: 'sdk'
uses: 'google-github-actions/setup-gcloud@v2'
- name: 'ssh'
shell: bash
env:
GITHUB_BRANCH: ${{ github.head_ref || github.ref_name }}
GITHUB_SHA: ${{ github.sha }}
GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
GCP_ZONE: ${{ secrets.GCP_ZONE }}
GCP_INSTANCE: ${{ secrets.GCP_INSTANCE }}
CLOUDSDK_CORE_DISABLE_PROMPTS: 1
run: |
read -r -d '' COMMAND <<EOF || true
echo "GITHUB_BRANCH: ${GITHUB_BRANCH}"
echo "GITHUB_SHA: ${GITHUB_SHA}"
export DOCKER_TAG="${GITHUB_SHA:0:7}" # needed for child process to access it
export OTELCOL_TAG="main"
export PATH="/usr/local/go/bin/:$PATH" # needed for Golang to work
export KAFKA_SPAN_EVAL="true"
docker system prune --force
docker pull signoz/signoz-otel-collector:main
docker pull signoz/signoz-schema-migrator:main
cd ~/signoz
git status
git add .
git stash push -m "stashed on $(date --iso-8601=seconds)"
git fetch origin
git checkout ${GITHUB_BRANCH}
git pull
make build-signoz-amd64
make run-testing
EOF
gcloud beta compute ssh ${GCP_INSTANCE} --zone ${GCP_ZONE} --ssh-key-expire-after=15m --tunnel-through-iap --project ${GCP_PROJECT} --command "${COMMAND}"

View File

@@ -1,55 +0,0 @@
name: testing-deployment
# Trigger deployment only on testing-deploy label on pull request
on:
pull_request:
types: [labeled]
jobs:
deploy:
name: Deploy PR branch to testing
runs-on: ubuntu-latest
environment: testing
if: ${{ github.event.label.name == 'testing-deploy' }}
permissions:
contents: 'read'
id-token: 'write'
steps:
- id: 'auth'
uses: 'google-github-actions/auth@v2'
with:
workload_identity_provider: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }}
service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
- name: 'sdk'
uses: 'google-github-actions/setup-gcloud@v2'
- name: 'ssh'
shell: bash
env:
GITHUB_BRANCH: ${{ github.head_ref || github.ref_name }}
GITHUB_SHA: ${{ github.sha }}
GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
GCP_ZONE: ${{ secrets.GCP_ZONE }}
GCP_INSTANCE: ${{ secrets.GCP_INSTANCE }}
CLOUDSDK_CORE_DISABLE_PROMPTS: 1
run: |
read -r -d '' COMMAND <<EOF || true
echo "GITHUB_BRANCH: ${GITHUB_BRANCH}"
echo "GITHUB_SHA: ${GITHUB_SHA}"
export DOCKER_TAG="${GITHUB_SHA:0:7}" # needed for child process to access it
export DEV_BUILD="1"
export PATH="/usr/local/go/bin/:$PATH" # needed for Golang to work
docker system prune --force
cd ~/signoz
git status
git add .
git stash push -m "stashed on $(date --iso-8601=seconds)"
git fetch origin
git checkout main
git pull
# This is added to handle the scenario when a new commit in the PR is force-pushed
git branch -D ${GITHUB_BRANCH}
git checkout --track origin/${GITHUB_BRANCH}
make build-signoz-amd64
make run-testing
EOF
gcloud beta compute ssh ${GCP_INSTANCE} --zone ${GCP_ZONE} --ssh-key-expire-after=15m --tunnel-through-iap --project ${GCP_PROJECT} --command "${COMMAND}"

148
.gitignore vendored
View File

@@ -54,6 +54,7 @@ ee/query-service/tests/test-deploy/data/
bin/
.local/
*/query-service/queries.active
ee/query-service/db
# e2e
@@ -79,6 +80,153 @@ deploy/common/clickhouse/user_scripts/
queries.active
# tmp
**/tmp/**
# .devenv tmp files
.devenv/**/tmp/**
.qodo
### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
### Python Patch ###
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml
# ruff
.ruff_cache/
# LSP config files
pyrightconfig.json
# End of https://www.toptal.com/developers/gitignore/api/python

17
.versions/alpine Normal file
View File

@@ -0,0 +1,17 @@
#### Auto generated by make docker-version-alpine. DO NOT EDIT! ####
amd64=029a752048e32e843bd6defe3841186fb8d19a28dae8ec287f433bb9d6d1ad85
unknown=5fea95373b9ec85974843f31446fa6a9df4492dddae4e1cb056193c34a20a5be
arm=b4aef1a899e0271f06d948c9a8fa626ecdb2202d3a178bc14775dd559e23df8e
unknown=a4d1e27e63a9d6353046eb25a2f0ec02945012b217f4364cd83a73fe6dfb0b15
arm=4fdafe217d0922f3c3e2b4f64cf043f8403a4636685cd9c51fea2cbd1f419740
unknown=7f21ac2018d95b2c51a5779c1d5ca6c327504adc3b0fdc747a6725d30b3f13c2
arm64=ea3c5a9671f7b3f7eb47eab06f73bc6591df978b0d5955689a9e6f943aa368c0
unknown=a8ba68c1a9e6eea8041b4b8f996c235163440808b9654a865976fdcbede0f433
386=dea9f02e103e837849f984d5679305c758aba7fea1b95b7766218597f61a05ab
unknown=3c6629bec05c8273a927d46b77428bf4a378dad911a0ae284887becdc149b734
ppc64le=0880443bffa028dfbbc4094a32dd6b7ac25684e4c0a3d50da9e0acae355c5eaf
unknown=bb48308f976b266e3ab39bbf9af84521959bd9c295d3c763690cf41f8df2a626
riscv64=d76e6fbe348ff20c2931bb7f101e49379648e026de95dd37f96e00ce1909dcf7
unknown=dd807544365f6dc187cbe6de0806adce2ea9de3e7124717d1d8e8b7a18b77b64
s390x=b815fadf80495594eb6296a6af0bc647ae5f193e0044e07acec7e5b378c9ce2d
unknown=74681be74a280a88abb53ff1e048eb1fb624b30d0066730df6d8afd02ba82e01

View File

@@ -77,3 +77,4 @@ Need assistance? Join our Slack community:
## Where do I go from here?
- Set up your [development environment](docs/contributing/development.md)
- Deploy and observe [SigNoz in action with OpenTelemetry Demo Application](docs/otel-demo-docs.md)

317
Makefile
View File

@@ -1,49 +1,45 @@
#
# Reference Guide - https://www.gnu.org/software/make/manual/make.html
#
##############################################################
# variables
##############################################################
SHELL := /bin/bash
SRC ?= $(shell pwd)
NAME ?= signoz
OS ?= $(shell uname -s | tr '[A-Z]' '[a-z]')
ARCH ?= $(shell uname -m | sed 's/x86_64/amd64/g' | sed 's/aarch64/arm64/g')
COMMIT_SHORT_SHA ?= $(shell git rev-parse --short HEAD)
BRANCH_NAME ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD))
VERSION ?= $(BRANCH_NAME)-$(COMMIT_SHORT_SHA)
TIMESTAMP ?= $(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
ARCHS ?= amd64 arm64
TARGET_DIR ?= $(shell pwd)/target
# Build variables
BUILD_VERSION ?= $(shell git describe --always --tags)
BUILD_HASH ?= $(shell git rev-parse --short HEAD)
BUILD_TIME ?= $(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
BUILD_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
LICENSE_SIGNOZ_IO ?= https://license.signoz.io/api/v1
DEV_LICENSE_SIGNOZ_IO ?= https://staging-license.signoz.io/api/v1
ZEUS_URL ?= https://api.signoz.cloud
DEV_ZEUS_URL ?= https://api.staging.signoz.cloud
DEV_BUILD ?= "" # set to any non-empty value to enable dev build
ZEUS_URL ?= https://api.signoz.cloud
GO_BUILD_LDFLAG_ZEUS_URL = -X github.com/SigNoz/signoz/ee/zeus.url=$(ZEUS_URL)
LICENSE_URL ?= https://license.signoz.io
GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO = -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=$(LICENSE_URL)
# Internal variables or constants.
FRONTEND_DIRECTORY ?= frontend
QUERY_SERVICE_DIRECTORY ?= pkg/query-service
EE_QUERY_SERVICE_DIRECTORY ?= ee/query-service
STANDALONE_DIRECTORY ?= deploy/docker
SWARM_DIRECTORY ?= deploy/docker-swarm
CH_HISTOGRAM_QUANTILE_DIRECTORY ?= scripts/clickhouse/histogramquantile
GORELEASER_BIN ?= goreleaser
GO_BUILD_VERSION_LDFLAGS = -X github.com/SigNoz/signoz/pkg/version.version=$(VERSION) -X github.com/SigNoz/signoz/pkg/version.hash=$(COMMIT_SHORT_SHA) -X github.com/SigNoz/signoz/pkg/version.time=$(TIMESTAMP) -X github.com/SigNoz/signoz/pkg/version.branch=$(BRANCH_NAME)
GO_BUILD_ARCHS_COMMUNITY = $(addprefix go-build-community-,$(ARCHS))
GO_BUILD_CONTEXT_COMMUNITY = $(SRC)/pkg/query-service
GO_BUILD_LDFLAGS_COMMUNITY = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=community
GO_BUILD_ARCHS_ENTERPRISE = $(addprefix go-build-enterprise-,$(ARCHS))
GO_BUILD_ARCHS_ENTERPRISE_RACE = $(addprefix go-build-enterprise-race-,$(ARCHS))
GO_BUILD_CONTEXT_ENTERPRISE = $(SRC)/ee/query-service
GO_BUILD_LDFLAGS_ENTERPRISE = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=enterprise $(GO_BUILD_LDFLAG_ZEUS_URL) $(GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO)
GOOS ?= $(shell go env GOOS)
GOARCH ?= $(shell go env GOARCH)
GOPATH ?= $(shell go env GOPATH)
REPONAME ?= signoz
DOCKER_TAG ?= $(BUILD_VERSION)
SIGNOZ_DOCKER_IMAGE ?= signoz
SIGNOZ_COMMUNITY_DOCKER_IMAGE ?= signoz-community
# Build-time Go variables
PACKAGE?=go.signoz.io/signoz
buildVersion=${PACKAGE}/pkg/query-service/version.buildVersion
buildHash=${PACKAGE}/pkg/query-service/version.buildHash
buildTime=${PACKAGE}/pkg/query-service/version.buildTime
gitBranch=${PACKAGE}/pkg/query-service/version.gitBranch
licenseSignozIo=${PACKAGE}/ee/query-service/constants.LicenseSignozIo
zeusURL=${PACKAGE}/ee/query-service/constants.ZeusURL
LD_FLAGS=-X ${buildHash}=${BUILD_HASH} -X ${buildTime}=${BUILD_TIME} -X ${buildVersion}=${BUILD_VERSION} -X ${gitBranch}=${BUILD_BRANCH}
PROD_LD_FLAGS=-X ${zeusURL}=${ZEUS_URL} -X ${licenseSignozIo}=${LICENSE_SIGNOZ_IO}
DEV_LD_FLAGS=-X ${zeusURL}=${DEV_ZEUS_URL} -X ${licenseSignozIo}=${DEV_LICENSE_SIGNOZ_IO}
DOCKER_BUILD_ARCHS_COMMUNITY = $(addprefix docker-build-community-,$(ARCHS))
DOCKERFILE_COMMUNITY = $(SRC)/pkg/query-service/Dockerfile
DOCKER_REGISTRY_COMMUNITY ?= docker.io/signoz/signoz-community
DOCKER_BUILD_ARCHS_ENTERPRISE = $(addprefix docker-build-enterprise-,$(ARCHS))
DOCKERFILE_ENTERPRISE = $(SRC)/ee/query-service/Dockerfile
DOCKER_REGISTRY_ENTERPRISE ?= docker.io/signoz/signoz
JS_BUILD_CONTEXT = $(SRC)/frontend
##############################################################
# directories
##############################################################
$(TARGET_DIR):
mkdir -p $(TARGET_DIR)
##############################################################
# common commands
@@ -60,11 +56,16 @@ devenv-clickhouse: ## Run clickhouse in devenv
@cd .devenv/docker/clickhouse; \
docker compose -f compose.yaml up -d
.PHONY: devenv-postgres
devenv-postgres: ## Run postgres in devenv
@cd .devenv/docker/postgres; \
docker compose -f compose.yaml up -d
##############################################################
# run commands
# go commands
##############################################################
.PHONY: run-go
run-go: ## Runs the go backend server
.PHONY: go-run-enterprise
go-run-enterprise: ## Runs the enterprise go backend server
@SIGNOZ_INSTRUMENTATION_LOGS_LEVEL=debug \
SIGNOZ_SQLSTORE_SQLITE_PATH=signoz.db \
SIGNOZ_WEB_ENABLED=false \
@@ -73,147 +74,131 @@ run-go: ## Runs the go backend server
SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
go run -race \
./ee/query-service/main.go \
--config ./pkg/query-service/config/prometheus.yml \
$(GO_BUILD_CONTEXT_ENTERPRISE)/main.go \
--config ./conf/prometheus.yml \
--cluster cluster \
--use-logs-new-schema true \
--use-trace-new-schema true
all: build-push-frontend build-push-signoz
.PHONY: go-test
go-test: ## Runs go unit tests
@go test -race ./...
# Steps to build static files of frontend
build-frontend-static:
@echo "------------------"
@echo "--> Building frontend static files"
@echo "------------------"
@cd $(FRONTEND_DIRECTORY) && \
rm -rf build && \
CI=1 yarn install && \
yarn build && \
ls -l build
.PHONY: go-run-community
go-run-community: ## Runs the community go backend server
@SIGNOZ_INSTRUMENTATION_LOGS_LEVEL=debug \
SIGNOZ_SQLSTORE_SQLITE_PATH=signoz.db \
SIGNOZ_WEB_ENABLED=false \
SIGNOZ_JWT_SECRET=secret \
SIGNOZ_ALERTMANAGER_PROVIDER=signoz \
SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
go run -race \
$(GO_BUILD_CONTEXT_COMMUNITY)/main.go \
--config ./conf/prometheus.yml \
--cluster cluster \
--use-logs-new-schema true \
--use-trace-new-schema true
# Steps to build static binary of signoz
.PHONY: build-signoz-static
build-signoz-static:
@echo "------------------"
@echo "--> Building signoz static binary"
@echo "------------------"
@if [ $(DEV_BUILD) != "" ]; then \
cd $(EE_QUERY_SERVICE_DIRECTORY) && \
CGO_ENABLED=1 go build -tags timetzdata -a -o ./bin/signoz-${GOOS}-${GOARCH} \
-ldflags "-linkmode external -extldflags '-static' -s -w ${LD_FLAGS} ${DEV_LD_FLAGS}"; \
.PHONY: go-build-community $(GO_BUILD_ARCHS_COMMUNITY)
go-build-community: ## Builds the go backend server for community
go-build-community: $(GO_BUILD_ARCHS_COMMUNITY)
$(GO_BUILD_ARCHS_COMMUNITY): go-build-community-%: $(TARGET_DIR)
@mkdir -p $(TARGET_DIR)/$(OS)-$*
@echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)-community"
@if [ $* = "arm64" ]; then \
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
else \
cd $(EE_QUERY_SERVICE_DIRECTORY) && \
CGO_ENABLED=1 go build -tags timetzdata -a -o ./bin/signoz-${GOOS}-${GOARCH} \
-ldflags "-linkmode external -extldflags '-static' -s -w ${LD_FLAGS} ${PROD_LD_FLAGS}"; \
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
fi
.PHONY: build-signoz-static-amd64
build-signoz-static-amd64:
make GOARCH=amd64 build-signoz-static
.PHONY: build-signoz-static-arm64
build-signoz-static-arm64:
make CC=aarch64-linux-gnu-gcc GOARCH=arm64 build-signoz-static
# Steps to build static binary of signoz for all platforms
.PHONY: build-signoz-static-all
build-signoz-static-all: build-signoz-static-amd64 build-signoz-static-arm64 build-frontend-static
# Steps to build and push docker image of signoz
.PHONY: build-signoz-amd64 build-push-signoz
# Step to build docker image of signoz in amd64 (used in build pipeline)
build-signoz-amd64: build-signoz-static-amd64 build-frontend-static
@echo "------------------"
@echo "--> Building signoz docker image for amd64"
@echo "------------------"
@docker build --file $(EE_QUERY_SERVICE_DIRECTORY)/Dockerfile \
--tag $(REPONAME)/$(SIGNOZ_DOCKER_IMAGE):$(DOCKER_TAG) \
--build-arg TARGETPLATFORM="linux/amd64" .
# Step to build and push docker image of query in amd64 and arm64 (used in push pipeline)
build-push-signoz: build-signoz-static-all
@echo "------------------"
@echo "--> Building and pushing signoz docker image"
@echo "------------------"
@docker buildx build --file $(EE_QUERY_SERVICE_DIRECTORY)/Dockerfile --progress plain \
--push --platform linux/arm64,linux/amd64 \
--tag $(REPONAME)/$(SIGNOZ_DOCKER_IMAGE):$(DOCKER_TAG) .
# Step to build docker image of signoz community in amd64 (used in build pipeline)
build-signoz-community-amd64:
@echo "------------------"
@echo "--> Building signoz docker image for amd64"
@echo "------------------"
make EE_QUERY_SERVICE_DIRECTORY=${QUERY_SERVICE_DIRECTORY} SIGNOZ_DOCKER_IMAGE=${SIGNOZ_COMMUNITY_DOCKER_IMAGE} build-signoz-amd64
# Step to build and push docker image of signoz community in amd64 and arm64 (used in push pipeline)
build-push-signoz-community:
@echo "------------------"
@echo "--> Building and pushing signoz community docker image"
@echo "------------------"
make EE_QUERY_SERVICE_DIRECTORY=${QUERY_SERVICE_DIRECTORY} SIGNOZ_DOCKER_IMAGE=${SIGNOZ_COMMUNITY_DOCKER_IMAGE} build-push-signoz
pull-signoz:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml pull
run-signoz:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml up --build -d
run-testing:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.testing.yaml up --build -d
down-signoz:
@docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml down -v
check-no-ee-references:
@echo "Checking for 'ee' package references in 'pkg' directory..."
@if grep -R --include="*.go" '.*/ee/.*' pkg/; then \
echo "Error: Found references to 'ee' packages in 'pkg' directory"; \
exit 1; \
.PHONY: go-build-enterprise $(GO_BUILD_ARCHS_ENTERPRISE)
go-build-enterprise: ## Builds the go backend server for enterprise
go-build-enterprise: $(GO_BUILD_ARCHS_ENTERPRISE)
$(GO_BUILD_ARCHS_ENTERPRISE): go-build-enterprise-%: $(TARGET_DIR)
@mkdir -p $(TARGET_DIR)/$(OS)-$*
@echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)"
@if [ $* = "arm64" ]; then \
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
else \
echo "No references to 'ee' packages found in 'pkg' directory"; \
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
fi
test:
go test ./pkg/...
########################################################
# Goreleaser
########################################################
.PHONY: gor-snapshot gor-snapshot-histogram-quantile gor-snapshot-signoz gor-snapshot-signoz-community gor-split gor-split-histogram-quantile gor-split-signoz gor-split-signoz-community gor-merge
gor-snapshot:
@if [[ ${GORELEASER_WORKDIR} ]]; then \
${GORELEASER_BIN} release --config ${GORELEASER_WORKDIR}/.goreleaser.yaml --clean --snapshot; \
.PHONY: go-build-enterprise-race $(GO_BUILD_ARCHS_ENTERPRISE_RACE)
go-build-enterprise-race: ## Builds the go backend server for enterprise with race
go-build-enterprise-race: $(GO_BUILD_ARCHS_ENTERPRISE_RACE)
$(GO_BUILD_ARCHS_ENTERPRISE_RACE): go-build-enterprise-race-%: $(TARGET_DIR)
@mkdir -p $(TARGET_DIR)/$(OS)-$*
@echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)"
@if [ $* = "arm64" ]; then \
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
else \
${GORELEASER_BIN} release --clean --snapshot; \
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
fi
gor-snapshot-histogram-quantile:
make GORELEASER_WORKDIR=$(CH_HISTOGRAM_QUANTILE_DIRECTORY) goreleaser-snapshot
##############################################################
# js commands
##############################################################
.PHONY: js-build
js-build: ## Builds the js frontend
@echo ">> building js frontend"
@cd $(JS_BUILD_CONTEXT) && CI=1 yarn install && yarn build
gor-snapshot-signoz: build-frontend-static
make GORELEASER_WORKDIR=$(EE_QUERY_SERVICE_DIRECTORY) goreleaser-snapshot
##############################################################
# docker commands
##############################################################
.PHONY: docker-build-community $(DOCKER_BUILD_ARCHS_COMMUNITY)
docker-build-community: ## Builds the docker image for community
docker-build-community: $(DOCKER_BUILD_ARCHS_COMMUNITY)
$(DOCKER_BUILD_ARCHS_COMMUNITY): docker-build-community-%: go-build-community-% js-build
@echo ">> building docker image for $(NAME)-community"
@docker build -t "$(DOCKER_REGISTRY_COMMUNITY):$(VERSION)-$*" \
--build-arg TARGETARCH="$*" \
-f $(DOCKERFILE_COMMUNITY) $(SRC)
gor-snapshot-signoz-community: build-frontend-static
make GORELEASER_WORKDIR=$(QUERY_SERVICE_DIRECTORY) goreleaser-snapshot
.PHONY: docker-buildx-community
docker-buildx-community: ## Builds the docker image for community using buildx
docker-buildx-community: go-build-community js-build
@echo ">> building docker image for $(NAME)-community"
@docker buildx build --file $(DOCKERFILE_COMMUNITY) \
--progress plain \
--platform linux/arm64,linux/amd64 \
--push \
--tag $(DOCKER_REGISTRY_COMMUNITY):$(VERSION) $(SRC)
gor-split:
@if [[ ${GORELEASER_WORKDIR} ]]; then \
${GORELEASER_BIN} release --config ${GORELEASER_WORKDIR}/.goreleaser.yaml --clean --split; \
else \
${GORELEASER_BIN} release --clean --split; \
fi
.PHONY: docker-build-enterprise $(DOCKER_BUILD_ARCHS_ENTERPRISE)
docker-build-enterprise: ## Builds the docker image for enterprise
docker-build-enterprise: $(DOCKER_BUILD_ARCHS_ENTERPRISE)
$(DOCKER_BUILD_ARCHS_ENTERPRISE): docker-build-enterprise-%: go-build-enterprise-% js-build
@echo ">> building docker image for $(NAME)"
@docker build -t "$(DOCKER_REGISTRY_ENTERPRISE):$(VERSION)-$*" \
--build-arg TARGETARCH="$*" \
-f $(DOCKERFILE_ENTERPRISE) $(SRC)
gor-split-histogram-quantile:
make GORELEASER_WORKDIR=$(CH_HISTOGRAM_QUANTILE_DIRECTORY) goreleaser-split
.PHONY: docker-buildx-enterprise
docker-buildx-enterprise: ## Builds the docker image for enterprise using buildx
docker-buildx-enterprise: go-build-enterprise js-build
@echo ">> building docker image for $(NAME)"
@docker buildx build --file $(DOCKERFILE_ENTERPRISE) \
--progress plain \
--platform linux/arm64,linux/amd64 \
--push \
--tag $(DOCKER_REGISTRY_ENTERPRISE):$(VERSION) $(SRC)
gor-split-signoz: build-frontend-static
make GORELEASER_WORKDIR=$(EE_QUERY_SERVICE_DIRECTORY) goreleaser-split
##############################################################
# python commands
##############################################################
.PHONY: py-fmt
py-fmt: ## Run black for integration tests
@cd tests/integration && poetry run black .
gor-split-signoz-community: build-frontend-static
make GORELEASER_WORKDIR=$(QUERY_SERVICE_DIRECTORY) goreleaser-split
.PHONY: py-lint
py-lint: ## Run lint for integration tests
@cd tests/integration && poetry run isort .
@cd tests/integration && poetry run autoflake .
@cd tests/integration && poetry run pylint .
gor-merge:
${GORELEASER_BIN} continue --merge
.PHONY: py-test
py-test: ## Runs integration tests
@cd tests/integration && poetry run pytest --basetemp=./tmp/ -vv --capture=no src/
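Taken together, the rewritten Makefile swaps the old build-signoz-*/build-frontend-static targets for per-arch go-build-*, docker-build-*, js-build, and py-* targets. A rough sketch of common local invocations, assuming Docker, a recent Go toolchain, yarn, and (for arm64 cross-builds) aarch64-linux-gnu-gcc are installed:

# Sketch: typical invocations of the new targets defined above
make devenv-clickhouse          # local ClickHouse for the backend to talk to
make go-run-enterprise          # run the enterprise backend with race detection
make js-build                   # yarn install + build of the frontend bundle
make go-build-community         # static community binaries for amd64 and arm64 under target/
make docker-build-enterprise    # per-arch enterprise images (pulls in go-build + js-build)
make py-test                    # Poetry-driven integration tests in tests/integration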

View File

@@ -3,6 +3,12 @@
# Do not modify this file
#
##################### Version #####################
version:
banner:
# Whether to enable the version banner on startup.
enabled: true
##################### Instrumentation #####################
instrumentation:
logs:
@@ -66,7 +72,6 @@ sqlstore:
# The path to the SQLite database file.
path: /var/lib/signoz/signoz.db
##################### APIServer #####################
apiserver:
timeout:
@@ -82,21 +87,39 @@ apiserver:
# List of routes to exclude from request responselogging.
excluded_routes:
- /api/v1/health
- /api/v1/version
- /
##################### TelemetryStore #####################
telemetrystore:
# Specifies the telemetrystore provider to use.
provider: clickhouse
# Maximum number of idle connections in the connection pool.
max_idle_conns: 50
# Maximum number of open connections to the database.
max_open_conns: 100
# Maximum time to wait for a connection to be established.
dial_timeout: 5s
# Specifies the telemetrystore provider to use.
provider: clickhouse
clickhouse:
# The DSN to use for ClickHouse.
dsn: http://localhost:9000
# The DSN to use for clickhouse.
dsn: tcp://localhost:9000
# The query settings for clickhouse.
settings:
max_execution_time: 0
max_execution_time_leaf: 0
timeout_before_checking_execution_speed: 0
max_bytes_to_read: 0
max_result_rows_for_ch_query: 0
##################### Prometheus #####################
prometheus:
active_query_tracker:
# Whether to enable the active query tracker.
enabled: true
# The path to use for the active query tracker.
path: ""
# The maximum number of concurrent queries.
max_concurrent: 20
##################### Alertmanager #####################
alertmanager:
@@ -109,7 +132,7 @@ alertmanager:
# The poll interval for periodically syncing the alertmanager with the config in the store.
poll_interval: 1m
# The URL under which Alertmanager is externally reachable (for example, if Alertmanager is served via a reverse proxy). Used for generating relative and absolute links back to Alertmanager itself.
external_url: http://localhost:9093
external_url: http://localhost:8080
# The global configuration for the alertmanager. All the exhaustive fields can be found in the upstream: https://github.com/prometheus/alertmanager/blob/efa05feffd644ba4accb526e98a8c6545d26a783/config/config.go#L833
global:
# ResolveTimeout is the time after which an alert is declared resolved if it has not been updated.

View File

@@ -174,13 +174,13 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.76.0
image: signoz/signoz:v0.81.0
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "8080:8080" # signoz port
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
@@ -208,7 +208,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:0.111.30
image: signoz/signoz-otel-collector:v0.111.40
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
@@ -232,7 +232,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:0.111.30
image: signoz/signoz-schema-migrator:v0.111.40
deploy:
restart_policy:
condition: on-failure

View File

@@ -110,13 +110,13 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.76.0
image: signoz/signoz:v0.81.0
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "8080:8080" # signoz port
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
@@ -143,7 +143,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:0.111.30
image: signoz/signoz-otel-collector:v0.111.40
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
@@ -167,7 +167,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:0.111.30
image: signoz/signoz-schema-migrator:v0.111.40
deploy:
restart_policy:
condition: on-failure

View File

@@ -177,7 +177,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${DOCKER_TAG:-v0.76.0}
image: signoz/signoz:${VERSION:-v0.81.0}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
@@ -212,7 +212,7 @@ services:
# TODO: support otel-collector multiple replicas. Nginx/Traefik for loadbalancing?
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.30}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.40}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -238,7 +238,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.30}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.40}
container_name: schema-migrator-sync
command:
- sync
@@ -249,7 +249,7 @@ services:
condition: service_healthy
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.30}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.40}
container_name: schema-migrator-async
command:
- async

View File

@@ -1,199 +0,0 @@
version: "3"
x-common: &common
networks:
- signoz-net
restart: unless-stopped
logging:
options:
max-size: 50m
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
!!merge <<: *common
# adding non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
labels:
signoz.io/scrape: "true"
signoz.io/port: "9363"
signoz.io/path: "/metrics"
depends_on:
init-clickhouse:
condition: service_completed_successfully
zookeeper-1:
condition: service_healthy
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- 0.0.0.0:8123/ping
interval: 30s
timeout: 5s
retries: 3
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
x-zookeeper-defaults: &zookeeper-defaults
!!merge <<: *common
image: bitnami/zookeeper:3.7.1
user: root
labels:
signoz.io/scrape: "true"
signoz.io/port: "9141"
signoz.io/path: "/metrics"
healthcheck:
test:
- CMD-SHELL
- curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
interval: 30s
timeout: 5s
retries: 3
x-db-depend: &db-depend
!!merge <<: *common
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
services:
init-clickhouse:
!!merge <<: *common
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: signoz-init-clickhouse
command:
- bash
- -c
- |
version="v0.0.1"
node_os=$$(uname -s | tr '[:upper:]' '[:lower:]')
node_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
echo "Fetching histogram-binary for $${node_os}/$${node_arch}"
cd /tmp
wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
tar -xvzf histogram-quantile.tar.gz
mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
restart: on-failure
volumes:
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
zookeeper-1:
!!merge <<: *zookeeper-defaults
container_name: signoz-zookeeper-1
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
volumes:
- zookeeper-1:/bitnami/zookeeper
environment:
- ZOO_SERVER_ID=1
- ALLOW_ANONYMOUS_LOGIN=yes
- ZOO_AUTOPURGE_INTERVAL=1
- ZOO_ENABLE_PROMETHEUS_METRICS=yes
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
clickhouse:
!!merge <<: *clickhouse-defaults
container_name: signoz-clickhouse
ports:
- "9000:9000"
- "8123:8123"
- "9181:9181"
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
- clickhouse:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${DOCKER_TAG:-v0.76.0}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
- --gateway-url=https://api.staging.signoz.cloud
- --use-logs-new-schema=true
- --use-trace-new-schema=true
# ports:
# - "8080:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
- ../common/dashboards:/root/config/dashboards
- sqlite:/var/lib/signoz/
environment:
- SIGNOZ_ALERTMANAGER_PROVIDER=signoz
- SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000
- SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
- KAFKA_SPAN_EVAL=${KAFKA_SPAN_EVAL:-false}
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- localhost:8080/api/v1/health
interval: 30s
timeout: 5s
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.30}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
# - "1777:1777" # pprof extension
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
depends_on:
signoz:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.30}
container_name: schema-migrator-sync
command:
- sync
- --dsn=tcp://clickhouse:9000
- --up=
depends_on:
clickhouse:
condition: service_healthy
restart: on-failure
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.30}
container_name: schema-migrator-async
command:
- async
- --dsn=tcp://clickhouse:9000
- --up=
restart: on-failure
networks:
signoz-net:
name: signoz-net
volumes:
clickhouse:
name: signoz-clickhouse
sqlite:
name: signoz-sqlite
zookeeper-1:
name: signoz-zookeeper-1

View File

@@ -110,7 +110,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${DOCKER_TAG:-v0.76.0}
image: signoz/signoz:${VERSION:-v0.81.0}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
@@ -144,7 +144,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.30}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.40}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -166,7 +166,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.30}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.40}
container_name: schema-migrator-sync
command:
- sync
@@ -178,7 +178,7 @@ services:
restart: on-failure
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.30}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.40}
container_name: schema-migrator-async
command:
- async

View File

@@ -61,7 +61,7 @@ This command:
1. Run the backend server:
```bash
make run-go
make go-run-community
```
2. Verify it's working:

BIN: new binary image, 143 KiB (not shown)

BIN docs/img/otel-demo-helm.png: new binary image, 157 KiB (not shown)

BIN docs/img/otel-demo-pods.png: new binary image, 76 KiB (not shown)

BIN: new binary image, 306 KiB (not shown)

246
docs/otel-demo-docs.md Normal file
View File

@@ -0,0 +1,246 @@
# Configuring OpenTelemetry Demo App with SigNoz
[The OpenTelemetry Astronomy Shop](https://github.com/open-telemetry/opentelemetry-demo) is an e-commerce web application with **15 core microservices** in a **distributed system** that communicate over gRPC. Designed as a **polyglot** environment, it leverages a diverse set of programming languages, including Go, Python, .NET, Java, and others, showcasing cross-language instrumentation with OpenTelemetry. The intention is to provide a quickstart application that sends data so you can experience SigNoz firsthand.
This guide provides a step-by-step walkthrough for setting up the **OpenTelemetry Demo App** with **SigNoz** as the observability backend. It outlines steps to export telemetry data to **SigNoz self-hosted with Docker**, **SigNoz self-hosted with Kubernetes**, and **SigNoz cloud**.
<br/>
__Table of Contents__
- [Send data to SigNoz Self-hosted with Docker](#send-data-to-signoz-self-hosted-with-docker)
- [Prerequisites](#prerequisites)
- [Clone the OpenTelemetry Demo App Repository](#clone-the-opentelemetry-demo-app-repository)
- [Modify OpenTelemetry Collector Config](#modify-opentelemetry-collector-config)
- [Start the OpenTelemetry Demo App](#start-the-opentelemetry-demo-app)
- [Monitor with SigNoz (Docker)](#monitor-with-signoz-docker)
- [Send data to SigNoz Self-hosted with Kubernetes](#send-data-to-signoz-self-hosted-with-kubernetes)
- [Prerequisites](#prerequisites-1)
- [Install Helm Repo and Charts](#install-helm-repo-and-charts)
- [Start the OpenTelemetry Demo App](#start-the-opentelemetry-demo-app-1)
- [Monitor with SigNoz (Kubernetes)](#monitor-with-signoz-kubernetes)
- [What's next](#whats-next)
# Send data to SigNoz Self-hosted with Docker
In this guide you will install the OTel demo application using Docker and send telemetry data to SigNoz hosted with Docker, referred to as SigNoz [Docker] from now on.
## Prerequisites
- Docker and Docker Compose installed
- 6 GB of RAM for the application [as per OpenTelemetry documentation]
- Docker Desktop (nice to have, for easier monitoring)
## Clone the OpenTelemetry Demo App Repository
Clone the OTel demo app to any folder of your choice.
```sh
# Clone the OpenTelemetry Demo repository
git clone https://github.com/open-telemetry/opentelemetry-demo.git
cd opentelemetry-demo
```
## Modify OpenTelemetry Collector Config
By default, the collector in the demo application will merge the configuration from two files:
1. otelcol-config.yml &nbsp;&nbsp;[we don't touch this]
2. otelcol-config-extras.yml &nbsp;&nbsp; [we modify this]
To add SigNoz [Docker] as the backend, open the file `src/otel-collector/otelcol-config-extras.yml` and add the following,
```yaml
exporters:
otlp:
endpoint: "http://host.docker.internal:4317"
tls:
insecure: true
debug:
verbosity: detailed
service:
pipelines:
metrics:
exporters: [otlp]
traces:
exporters: [spanmetrics, otlp]
logs:
exporters: [otlp]
```
The SigNoz OTel collector [SigNoz's otel-collector service] listens on port 4317 on localhost. When the OTel demo app is running within a Docker container and needs to transmit telemetry data to SigNoz, it cannot directly reference 'localhost' as this would refer to the container's own internal network. Instead, Docker provides a special DNS name, `host.docker.internal`, which resolves to the host machine's IP address from within containers. By configuring the OpenTelemetry Demo application to send data to `host.docker.internal:4317`, we establish a network path that allows the containerized application to transmit telemetry data across the container boundary to the SigNoz OTel collector running on the host machine's port 4317.
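If you want to double-check that the collector port is reachable before starting the demo, a quick probe from the Docker host works (a minimal sketch, assuming the `nc` netcat utility is installed on the host),
```sh
# From the Docker host: verify the SigNoz collector is listening on the OTLP gRPC port.
nc -zv localhost 4317
```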
> Note: When merging extra configuration values with the existing collector config (`src/otel-collector/otelcol-config.yml`), objects are merged and arrays are replaced, so an overridden pipeline replaces the previous one entirely.
> If you override the traces pipeline, the spanmetrics exporter must be included in its array of exporters; omitting it will result in an error.
<br>
<u>To send data to SigNoz Cloud</u>
If you want to send data to SigNoz Cloud instead, open the file `src/otel-collector/otelcol-config-extras.yml` and add the following,
```yaml
exporters:
otlp:
endpoint: "https://ingest.{your-region}.signoz.cloud:443"
tls:
insecure: false
headers:
signoz-access-token: <SIGNOZ-KEY>
debug:
verbosity: detailed
service:
pipelines:
metrics:
exporters: [otlp]
traces:
exporters: [spanmetrics, otlp]
logs:
exporters: [otlp]
```
Remember to replace the region and ingestion key with proper values as obtained from your account.
## Start the OpenTelemetry Demo App
Both SigNoz and the OTel demo app [the frontend-proxy service, to be precise] default to port 8080. To prevent a port allocation conflict, set `ENVOY_PORT` to 8081 in the OTel demo application config as shown below, and run the docker compose command.
```sh
ENVOY_PORT=8081 docker compose up -d
```
This spins up multiple microservices with OpenTelemetry instrumentation enabled. You can verify this by running,
```sh
docker compose ps -a
```
The result should look similar to this,
![](/docs/img/otel-demo-docker-containers.png)
Navigate to `http://localhost:8081/`, where you can access the OTel demo app UI. Generate some traffic to send to SigNoz [Docker].
## Monitor with SigNoz [Docker]
SigNoz exposes its UI at `http://localhost:8080/`. You should be able to see multiple services listed, as shown in the snapshot below.
![](/docs/img/otel-demo-services.png)
This verifies that your OTel demo app is successfully sending telemetry data to SigNoz [Docker] as expected.
# Send data to SigNoz Self-hosted with Kubernetes
In this guide you will install the OTel demo application using Helm and send telemetry data to SigNoz hosted with Kubernetes, referred to as SigNoz [Kubernetes] from now on.
## Prerequisites
- Helm installed
- 6 GB of free RAM for the application [as per OpenTelemetry documentation]
- A Kubernetes cluster (EKS, GKE, Minikube)
- kubectl [CLI for Kubernetes]
>Note: We will be installing OTel demo app using Helm charts, since it is recommended by OpenTelemetry. If you wish to install using kubectl, follow [this](https://opentelemetry.io/docs/demo/kubernetes-deployment/#install-using-kubectl).
## Install Helm Repo and Charts
You'll need to **add the Helm repository** to start sending data to SigNoz [Kubernetes].
```sh
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```
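After adding the repo, it can help to refresh the local chart index so Helm picks up the latest demo chart (standard Helm usage),
```sh
# Refresh the local index of charts from all configured repositories.
helm repo update
```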
The OpenTelemetry Collector's configuration is exposed in the Helm chart. Any additions made will be merged into the default configuration. We use this capability to add SigNoz as an exporter and configure the pipelines as desired.
For this, we have to create a `values.yaml` which will override the existing configuration that comes with the Helm chart.
```yaml
default:
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: "metadata.labels['app.kubernetes.io/component']"
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: OTEL_RESOURCE_ATTRIBUTES
value: 'service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo'
- name: OTEL_COLLECTOR_NAME
value: signoz-otel-collector.<namespace>.svc.cluster.local
```
Replace `<namespace>` with the namespace in which SigNoz is installed. This file will replace the chart's existing settings with our new ones, ensuring telemetry data is sent to SigNoz [Kubernetes].
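If you are unsure of the exact service name or namespace of your SigNoz collector, you can look it up first (a sketch; the actual name depends on how the SigNoz chart was installed),
```sh
# List services across namespaces and filter for the SigNoz OTel collector.
kubectl get svc -A | grep otel-collector
```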
> Note: When merging YAML values with Helm, objects are merged and arrays are replaced. The spanmetrics exporter must be included in the array of exporters for the traces pipeline if overridden. Not including this exporter will result in an error.
<br>
<u>To send data to SigNoz cloud</u>
If you wish to send data to a cloud instance of SigNoz, we have to create a `values.yaml` which will override the existing configuration that comes with the Helm chart.
```yaml
opentelemetry-collector:
config:
exporters:
otlp:
endpoint: "https://ingest.{your-region}.signoz.cloud:443"
tls:
insecure: false
headers:
signoz-access-token: <SIGNOZ-KEY>
debug:
verbosity: detailed
service:
pipelines:
traces:
exporters: [spanmetrics, otlp]
metrics:
exporters: [otlp]
logs:
exporters: [otlp]
```
Make sure to replace the region and ingestion key with the values obtained from your account.
Now **install the Helm chart** with a release name and namespace of your choice. Let's take *my-otel-demo* as the release name and *otel-demo* as the namespace for the code snippet below,
```sh
# Create a new Kubernetes namespace called "otel-demo"
kubectl create namespace otel-demo
# Install the OpenTelemetry Demo Helm chart with the release name "my-otel-demo"
helm install my-otel-demo open-telemetry/opentelemetry-demo --namespace otel-demo -f values.yaml
```
You should see a similar output on your terminal,
![](/docs/img/otel-demo-helm.png)
To verify that all the pods are running,
```sh
kubectl get pods -n otel-demo
```
The output should look similar to this,
![](/docs/img/otel-demo-pods.png)
## Start the OpenTelemetry Demo App
To expose the OTel demo app UI [frontend-proxy service], use the following command (replace my-otel-demo with your Helm chart release name):
```sh
kubectl port-forward svc/my-otel-demo-frontend-proxy 8081:8080
```
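Optionally, confirm the forwarded port responds before opening the browser (assumes `curl` is available locally),
```sh
# Expect an HTTP response from the demo's frontend-proxy on the forwarded port.
curl -I http://localhost:8081/
```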
Navigate to `http://localhost:8081/`, where you can access the OTel demo app UI. Generate some traffic to send to SigNoz [Kubernetes].
## Monitor with SigNoz [Kubernetes]
SigNoz exposes its UI at `http://localhost:8080/`. You should be able to see multiple services listed, as shown in the snapshot below.
![](/docs/img/otel-demo-services.png)
This verifies that your OTel demo app is successfully sending telemetry data to SigNoz [Kubernetes] as expected.
# What's next?
Don't forget to check out our OpenTelemetry [track](https://signoz.io/resource-center/opentelemetry/), guaranteed to take you from newbie to sensei in no time!
Also, from one OTel fan to another: we at [SigNoz](https://signoz.io/) are building an open-source, OTel-native observability platform (one of a kind). So show us some love: star us on [GitHub](https://github.com/SigNoz/signoz), nitpick our [docs](https://signoz.io/docs/introduction/), or just tell your app we're the ones who'll catch its crashes mid-flight and finally shush all the 3 AM panic calls!

View File

@@ -2,22 +2,31 @@ package middleware
import (
"net/http"
"time"
"go.signoz.io/signoz/pkg/types/authtypes"
eeTypes "github.com/SigNoz/signoz/ee/types"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"go.uber.org/zap"
)
type Pat struct {
store sqlstore.SQLStore
uuid *authtypes.UUID
headers []string
}
func NewPat(headers []string) *Pat {
return &Pat{uuid: authtypes.NewUUID(), headers: headers}
func NewPat(store sqlstore.SQLStore, headers []string) *Pat {
return &Pat{store: store, uuid: authtypes.NewUUID(), headers: headers}
}
func (p *Pat) Wrap(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var values []string
var patToken string
var pat eeTypes.StorablePersonalAccessToken
for _, header := range p.headers {
values = append(values, r.Header.Get(header))
}
@@ -27,10 +36,56 @@ func (p *Pat) Wrap(next http.Handler) http.Handler {
next.ServeHTTP(w, r)
return
}
patToken, ok := authtypes.UUIDFromContext(ctx)
if !ok {
next.ServeHTTP(w, r)
return
}
err = p.store.BunDB().NewSelect().Model(&pat).Where("token = ?", patToken).Scan(r.Context())
if err != nil {
next.ServeHTTP(w, r)
return
}
if pat.ExpiresAt < time.Now().Unix() && pat.ExpiresAt != 0 {
next.ServeHTTP(w, r)
return
}
// get user from db
user := types.User{}
err = p.store.BunDB().NewSelect().Model(&user).Where("id = ?", pat.UserID).Scan(r.Context())
if err != nil {
next.ServeHTTP(w, r)
return
}
role, err := authtypes.NewRole(user.Role)
if err != nil {
next.ServeHTTP(w, r)
return
}
jwt := authtypes.Claims{
UserID: user.ID,
Role: role,
Email: user.Email,
OrgID: user.OrgID,
}
ctx = authtypes.NewContextWithClaims(ctx, jwt)
r = r.WithContext(ctx)
next.ServeHTTP(w, r)
pat.LastUsed = time.Now().Unix()
_, err = p.store.BunDB().NewUpdate().Model(&pat).Column("last_used").Where("token = ?", patToken).Where("revoked = false").Exec(r.Context())
if err != nil {
zap.L().Error("Failed to update PAT last used in db, err: %v", zap.Error(err))
}
})
}

View File

@@ -30,15 +30,15 @@ builds:
- v8.0
ldflags:
- -s -w
- -X github.com/SigNoz/signoz/pkg/query-service/version.version={{ .Version }}
- -X main.commit={{ .Commit }} -X main.date={{ .CommitDate }}
- -X main.builtBy=goreleaser
- -X go.signoz.io/signoz/pkg/query-service/version.buildVersion={{ .Version }}
- -X go.signoz.io/signoz/pkg/query-service/version.buildHash={{ .ShortCommit }}
- -X go.signoz.io/signoz/pkg/query-service/version.buildTime={{ .Date }}
- -X go.signoz.io/signoz/pkg/query-service/version.gitBranch={{ .Branch }}
- -X go.signoz.io/signoz/ee/query-service/constants.ZeusURL=https://api.signoz.cloud
- -X go.signoz.io/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1
- -X github.com/SigNoz/signoz/pkg/version.version=v{{ .Version }}
- -X github.com/SigNoz/signoz/pkg/version.variant=enterprise
- -X github.com/SigNoz/signoz/pkg/version.hash={{ .ShortCommit }}
- -X github.com/SigNoz/signoz/pkg/version.time={{ .CommitTimestamp }}
- -X github.com/SigNoz/signoz/pkg/version.branch={{ .Branch }}
- -X github.com/SigNoz/signoz/ee/zeus.url=https://api.signoz.cloud
- -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.signoz.io
- -X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=https://api.signoz.cloud
- -X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1
- >-
{{- if eq .Os "linux" }}-linkmode external -extldflags '-static'{{- end }}
mod_timestamp: "{{ .CommitTimestamp }}"

View File

@@ -1,34 +1,21 @@
# use a minimal alpine image
FROM alpine:3.20.3
# Add Maintainer Info
LABEL maintainer="signoz"
# define arguments that can be passed during build time
ARG TARGETOS TARGETARCH
# add ca-certificates in case you need them
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
# set working directory
WORKDIR /root
# copy the signoz binary
COPY ee/query-service/bin/signoz-${TARGETOS}-${TARGETARCH} /root/signoz
ARG OS="linux"
ARG TARGETARCH
# copy prometheus YAML config
COPY pkg/query-service/config/prometheus.yml /root/config/prometheus.yml
COPY pkg/query-service/templates /root/templates
RUN apk update && \
apk add ca-certificates && \
rm -rf /var/cache/apk/*
# Make signoz executable for non-root users
RUN chmod 755 /root /root/signoz
# Copy frontend
COPY ./target/${OS}-${TARGETARCH}/signoz /root/signoz
COPY ./conf/prometheus.yml /root/config/prometheus.yml
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/
# run the binary
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["./signoz"]
CMD ["-config", "/root/config/prometheus.yml"]
EXPOSE 8080
CMD ["-config", "/root/config/prometheus.yml"]

View File

@@ -0,0 +1,36 @@
FROM golang:1.22-bullseye
ARG OS="linux"
ARG TARGETARCH
ARG ZEUSURL
# This path is important for stacktraces
WORKDIR $GOPATH/src/github.com/signoz/signoz
WORKDIR /root
RUN set -eux; \
apt-get update; \
apt-get install -y --no-install-recommends \
g++ \
gcc \
libc6-dev \
make \
pkg-config \
; \
rm -rf /var/lib/apt/lists/*
COPY go.mod go.sum ./
RUN go mod download
COPY ./ee/ ./ee/
COPY ./pkg/ ./pkg/
COPY ./templates/email /root/templates
COPY Makefile Makefile
RUN TARGET_DIR=/root ARCHS=${TARGETARCH} ZEUS_URL=${ZEUSURL} LICENSE_URL=${ZEUSURL}/api/v1 make go-build-enterprise-race
RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["/root/signoz"]

View File

@@ -0,0 +1,22 @@
ARG ALPINE_SHA="pass-a-valid-docker-sha-otherwise-this-will-fail"
FROM alpine@sha256:${ALPINE_SHA}
LABEL maintainer="signoz"
WORKDIR /root
ARG OS="linux"
ARG ARCH
RUN apk update && \
apk add ca-certificates && \
rm -rf /var/cache/apk/*
COPY ./target/${OS}-${ARCH}/signoz /root/signoz
COPY ./conf/prometheus.yml /root/config/prometheus.yml
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["./signoz"]
CMD ["-config", "/root/config/prometheus.yml"]

View File

@@ -3,8 +3,8 @@ package anomaly
import (
"context"
querierV2 "go.signoz.io/signoz/pkg/query-service/app/querier/v2"
"go.signoz.io/signoz/pkg/query-service/app/queryBuilder"
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
)
type DailyProvider struct {
@@ -28,11 +28,10 @@ func NewDailyProvider(opts ...GenericProviderOption[*DailyProvider]) *DailyProvi
}
dp.querierV2 = querierV2.NewQuerier(querierV2.QuerierOptions{
Reader: dp.reader,
Cache: dp.cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
FluxInterval: dp.fluxInterval,
FeatureLookup: dp.ff,
Reader: dp.reader,
Cache: dp.cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
FluxInterval: dp.fluxInterval,
})
return dp

View File

@@ -3,8 +3,8 @@ package anomaly
import (
"context"
querierV2 "go.signoz.io/signoz/pkg/query-service/app/querier/v2"
"go.signoz.io/signoz/pkg/query-service/app/queryBuilder"
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
)
type HourlyProvider struct {
@@ -28,11 +28,10 @@ func NewHourlyProvider(opts ...GenericProviderOption[*HourlyProvider]) *HourlyPr
}
hp.querierV2 = querierV2.NewQuerier(querierV2.QuerierOptions{
Reader: hp.reader,
Cache: hp.cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
FluxInterval: hp.fluxInterval,
FeatureLookup: hp.ff,
Reader: hp.reader,
Cache: hp.cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
FluxInterval: hp.fluxInterval,
})
return hp

View File

@@ -4,8 +4,8 @@ import (
"math"
"time"
"go.signoz.io/signoz/pkg/query-service/common"
v3 "go.signoz.io/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/common"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
)
type Seasonality string

View File

@@ -5,11 +5,11 @@ import (
"math"
"time"
"go.signoz.io/signoz/pkg/query-service/cache"
"go.signoz.io/signoz/pkg/query-service/interfaces"
v3 "go.signoz.io/signoz/pkg/query-service/model/v3"
"go.signoz.io/signoz/pkg/query-service/postprocess"
"go.signoz.io/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/postprocess"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"go.uber.org/zap"
)
@@ -38,12 +38,6 @@ func WithKeyGenerator[T BaseProvider](keyGenerator cache.KeyGenerator) GenericPr
}
}
func WithFeatureLookup[T BaseProvider](ff interfaces.FeatureLookup) GenericProviderOption[T] {
return func(p T) {
p.GetBaseSeasonalProvider().ff = ff
}
}
func WithReader[T BaseProvider](reader interfaces.Reader) GenericProviderOption[T] {
return func(p T) {
p.GetBaseSeasonalProvider().reader = reader
@@ -56,7 +50,6 @@ type BaseSeasonalProvider struct {
fluxInterval time.Duration
cache cache.Cache
keyGenerator cache.KeyGenerator
ff interfaces.FeatureLookup
}
func (p *BaseSeasonalProvider) getQueryParams(req *GetAnomaliesRequest) *anomalyQueryParams {
@@ -313,6 +306,9 @@ func (p *BaseSeasonalProvider) getScore(
series, prevSeries, weekSeries, weekPrevSeries, past2SeasonSeries, past3SeasonSeries *v3.Series, value float64, idx int,
) float64 {
expectedValue := p.getExpectedValue(series, prevSeries, weekSeries, weekPrevSeries, past2SeasonSeries, past3SeasonSeries, idx)
if expectedValue < 0 {
expectedValue = p.getMovingAvg(prevSeries, movingAvgWindowSize, idx)
}
return (value - expectedValue) / p.getStdDev(weekSeries)
}

View File

@@ -3,8 +3,8 @@ package anomaly
import (
"context"
querierV2 "go.signoz.io/signoz/pkg/query-service/app/querier/v2"
"go.signoz.io/signoz/pkg/query-service/app/queryBuilder"
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
)
type WeeklyProvider struct {
@@ -27,11 +27,10 @@ func NewWeeklyProvider(opts ...GenericProviderOption[*WeeklyProvider]) *WeeklyPr
}
wp.querierV2 = querierV2.NewQuerier(querierV2.QuerierOptions{
Reader: wp.reader,
Cache: wp.cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
FluxInterval: wp.fluxInterval,
FeatureLookup: wp.ff,
Reader: wp.reader,
Cache: wp.cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
FluxInterval: wp.fluxInterval,
})
return wp

View File

@@ -5,29 +5,29 @@ import (
"net/http/httputil"
"time"
"github.com/SigNoz/signoz/ee/query-service/dao"
"github.com/SigNoz/signoz/ee/query-service/integrations/gateway"
"github.com/SigNoz/signoz/ee/query-service/interfaces"
"github.com/SigNoz/signoz/ee/query-service/license"
"github.com/SigNoz/signoz/ee/query-service/usage"
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/apis/fields"
"github.com/SigNoz/signoz/pkg/http/middleware"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/query-service/app/cloudintegrations"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations"
"github.com/SigNoz/signoz/pkg/query-service/app/logparsingpipeline"
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
rules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/version"
"github.com/gorilla/mux"
"go.signoz.io/signoz/ee/query-service/dao"
"go.signoz.io/signoz/ee/query-service/integrations/gateway"
"go.signoz.io/signoz/ee/query-service/interfaces"
"go.signoz.io/signoz/ee/query-service/license"
"go.signoz.io/signoz/ee/query-service/usage"
"go.signoz.io/signoz/pkg/alertmanager"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
"go.signoz.io/signoz/pkg/query-service/app/cloudintegrations"
"go.signoz.io/signoz/pkg/query-service/app/integrations"
"go.signoz.io/signoz/pkg/query-service/app/logparsingpipeline"
"go.signoz.io/signoz/pkg/query-service/cache"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
rules "go.signoz.io/signoz/pkg/query-service/rules"
"go.signoz.io/signoz/pkg/query-service/version"
"go.signoz.io/signoz/pkg/signoz"
"go.signoz.io/signoz/pkg/types/authtypes"
)
type APIHandlerOptions struct {
DataConnector interfaces.DataConnector
SkipConfig *basemodel.SkipConfig
PreferSpanMetrics bool
AppDao dao.ModelDao
RulesManager *rules.Manager
@@ -37,7 +37,6 @@ type APIHandlerOptions struct {
IntegrationsController *integrations.Controller
CloudIntegrationsController *cloudintegrations.Controller
LogsParsingPipelineController *logparsingpipeline.LogParsingPipelineController
Cache cache.Cache
Gateway *httputil.ReverseProxy
GatewayUrl string
// Querier Influx Interval
@@ -54,10 +53,8 @@ type APIHandler struct {
// NewAPIHandler returns an APIHandler
func NewAPIHandler(opts APIHandlerOptions, signoz *signoz.SigNoz) (*APIHandler, error) {
baseHandler, err := baseapp.NewAPIHandler(baseapp.APIHandlerOpts{
Reader: opts.DataConnector,
SkipConfig: opts.SkipConfig,
PreferSpanMetrics: opts.PreferSpanMetrics,
AppDao: opts.AppDao,
RuleManager: opts.RulesManager,
@@ -65,11 +62,9 @@ func NewAPIHandler(opts APIHandlerOptions, signoz *signoz.SigNoz) (*APIHandler,
IntegrationsController: opts.IntegrationsController,
CloudIntegrationsController: opts.CloudIntegrationsController,
LogsParsingPipelineController: opts.LogsParsingPipelineController,
Cache: opts.Cache,
FluxInterval: opts.FluxInterval,
UseLogsNewSchema: opts.UseLogsNewSchema,
UseTraceNewSchema: opts.UseTraceNewSchema,
AlertmanagerAPI: alertmanager.NewAPI(signoz.Alertmanager),
FieldsAPI: fields.NewAPI(signoz.TelemetryStore),
Signoz: signoz,
})
@@ -114,7 +109,7 @@ func (ah *APIHandler) CheckFeature(f string) bool {
}
// RegisterRoutes registers routes for this handler on the given router
func (ah *APIHandler) RegisterRoutes(router *mux.Router, am *baseapp.AuthMiddleware) {
func (ah *APIHandler) RegisterRoutes(router *mux.Router, am *middleware.AuthZ) {
// note: add ee override methods first
// routes available only in ee version
@@ -157,7 +152,6 @@ func (ah *APIHandler) RegisterRoutes(router *mux.Router, am *baseapp.AuthMiddlew
router.HandleFunc("/api/v1/invite/{token}", am.OpenAccess(ah.getInvite)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/register", am.OpenAccess(ah.registerUser)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/login", am.OpenAccess(ah.loginUser)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/traces/{traceId}", am.ViewAccess(ah.searchTraces)).Methods(http.MethodGet)
// PAT APIs
router.HandleFunc("/api/v1/pats", am.AdminAccess(ah.createPAT)).Methods(http.MethodPost)
@@ -188,7 +182,7 @@ func (ah *APIHandler) RegisterRoutes(router *mux.Router, am *baseapp.AuthMiddlew
}
func (ah *APIHandler) RegisterCloudIntegrationsRoutes(router *mux.Router, am *baseapp.AuthMiddleware) {
func (ah *APIHandler) RegisterCloudIntegrationsRoutes(router *mux.Router, am *middleware.AuthZ) {
ah.APIHandler.RegisterCloudIntegrationsRoutes(router, am)
@@ -200,9 +194,8 @@ func (ah *APIHandler) RegisterCloudIntegrationsRoutes(router *mux.Router, am *ba
}
func (ah *APIHandler) getVersion(w http.ResponseWriter, r *http.Request) {
version := version.GetVersion()
versionResponse := basemodel.GetVersionResponse{
Version: version,
Version: version.Info.Version(),
EE: "Y",
SetupCompleted: ah.SetupCompleted,
}

View File

@@ -12,10 +12,10 @@ import (
"github.com/gorilla/mux"
"go.uber.org/zap"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
baseauth "go.signoz.io/signoz/pkg/query-service/auth"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/ee/query-service/model"
baseauth "github.com/SigNoz/signoz/pkg/query-service/auth"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
)
func parseRequest(r *http.Request, req interface{}) error {
@@ -134,7 +134,7 @@ func (ah *APIHandler) registerUser(w http.ResponseWriter, r *http.Request) {
return
}
_, registerError := baseauth.Register(ctx, req, ah.Signoz.Alertmanager)
_, registerError := baseauth.Register(ctx, req, ah.Signoz.Alertmanager, ah.Signoz.Modules.Organization)
if !registerError.IsNil() {
RespondError(w, apierr, nil)
return
@@ -151,9 +151,8 @@ func (ah *APIHandler) registerUser(w http.ResponseWriter, r *http.Request) {
func (ah *APIHandler) getInvite(w http.ResponseWriter, r *http.Request) {
token := mux.Vars(r)["token"]
sourceUrl := r.URL.Query().Get("ref")
ctx := context.Background()
inviteObject, err := baseauth.GetInvite(context.Background(), token)
inviteObject, err := baseauth.GetInvite(r.Context(), token, ah.Signoz.Modules.Organization)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
@@ -163,7 +162,7 @@ func (ah *APIHandler) getInvite(w http.ResponseWriter, r *http.Request) {
InvitationResponseObject: inviteObject,
}
precheck, apierr := ah.AppDao().PrecheckLogin(ctx, inviteObject.Email, sourceUrl)
precheck, apierr := ah.AppDao().PrecheckLogin(r.Context(), inviteObject.Email, sourceUrl)
resp.Precheck = precheck
if apierr != nil {

View File

@@ -10,15 +10,15 @@ import (
"strings"
"time"
"github.com/SigNoz/signoz/ee/query-service/constants"
eeTypes "github.com/SigNoz/signoz/ee/types"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/query-service/auth"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/google/uuid"
"github.com/gorilla/mux"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
"go.signoz.io/signoz/pkg/query-service/auth"
baseconstants "go.signoz.io/signoz/pkg/query-service/constants"
"go.signoz.io/signoz/pkg/query-service/dao"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/types"
"go.uber.org/zap"
)
@@ -30,6 +30,12 @@ type CloudIntegrationConnectionParamsResponse struct {
}
func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseWriter, r *http.Request) {
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
render.Error(w, err)
return
}
cloudProvider := mux.Vars(r)["cloudProvider"]
if cloudProvider != "aws" {
RespondError(w, basemodel.BadRequest(fmt.Errorf(
@@ -38,15 +44,7 @@ func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseW
return
}
currentUser, err := auth.GetUserFromReqContext(r.Context())
if err != nil {
RespondError(w, basemodel.UnauthorizedError(fmt.Errorf(
"couldn't deduce current user: %w", err,
)), nil)
return
}
apiKey, apiErr := ah.getOrCreateCloudIntegrationPAT(r.Context(), currentUser.OrgID, cloudProvider)
apiKey, apiErr := ah.getOrCreateCloudIntegrationPAT(r.Context(), claims.OrgID, cloudProvider)
if apiErr != nil {
RespondError(w, basemodel.WrapApiError(
apiErr, "couldn't provision PAT for cloud integration:",
@@ -118,7 +116,7 @@ func (ah *APIHandler) getOrCreateCloudIntegrationPAT(ctx context.Context, orgId
return "", apiErr
}
allPats, err := ah.AppDao().ListPATs(ctx)
allPats, err := ah.AppDao().ListPATs(ctx, orgId)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't list PATs: %w", err,
@@ -135,16 +133,13 @@ func (ah *APIHandler) getOrCreateCloudIntegrationPAT(ctx context.Context, orgId
zap.String("cloudProvider", cloudProvider),
)
newPAT := model.PAT{
Token: generatePATToken(),
UserID: integrationUser.ID,
Name: integrationPATName,
Role: baseconstants.ViewerGroup,
ExpiresAt: 0,
CreatedAt: time.Now().Unix(),
UpdatedAt: time.Now().Unix(),
}
integrationPAT, err := ah.AppDao().CreatePAT(ctx, newPAT)
newPAT := eeTypes.NewGettablePAT(
integrationPATName,
authtypes.RoleViewer.String(),
integrationUser.ID,
0,
)
integrationPAT, err := ah.AppDao().CreatePAT(ctx, orgId, newPAT)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't create cloud integration PAT: %w", err,
@@ -156,9 +151,11 @@ func (ah *APIHandler) getOrCreateCloudIntegrationPAT(ctx context.Context, orgId
func (ah *APIHandler) getOrCreateCloudIntegrationUser(
ctx context.Context, orgId string, cloudProvider string,
) (*types.User, *basemodel.ApiError) {
cloudIntegrationUserId := fmt.Sprintf("%s-integration", cloudProvider)
cloudIntegrationUser := fmt.Sprintf("%s-integration", cloudProvider)
email := fmt.Sprintf("%s@signoz.io", cloudIntegrationUser)
integrationUserResult, apiErr := ah.AppDao().GetUser(ctx, cloudIntegrationUserId)
// TODO(nitya): there should be orgId here
integrationUserResult, apiErr := ah.AppDao().GetUserByEmail(ctx, email)
if apiErr != nil {
return nil, basemodel.WrapApiError(apiErr, "couldn't look for integration user")
}
@@ -173,20 +170,16 @@ func (ah *APIHandler) getOrCreateCloudIntegrationUser(
)
newUser := &types.User{
ID: cloudIntegrationUserId,
Name: fmt.Sprintf("%s integration", cloudProvider),
Email: fmt.Sprintf("%s@signoz.io", cloudIntegrationUserId),
ID: uuid.New().String(),
Name: cloudIntegrationUser,
Email: email,
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
},
OrgID: orgId,
}
viewerGroup, apiErr := dao.DB().GetGroupByName(ctx, baseconstants.ViewerGroup)
if apiErr != nil {
return nil, basemodel.WrapApiError(apiErr, "couldn't get viewer group for creating integration user")
}
newUser.GroupID = viewerGroup.ID
newUser.Role = authtypes.RoleViewer.String()
passwordHash, err := auth.PasswordHash(uuid.NewString())
if err != nil {

View File

@@ -4,12 +4,11 @@ import (
"net/http"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/query-service/app/dashboards"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/gorilla/mux"
"go.signoz.io/signoz/pkg/errors"
"go.signoz.io/signoz/pkg/http/render"
"go.signoz.io/signoz/pkg/query-service/app/dashboards"
"go.signoz.io/signoz/pkg/query-service/auth"
"go.signoz.io/signoz/pkg/types/authtypes"
)
func (ah *APIHandler) lockDashboard(w http.ResponseWriter, r *http.Request) {
@@ -36,18 +35,19 @@ func (ah *APIHandler) lockUnlockDashboard(w http.ResponseWriter, r *http.Request
return
}
claims, ok := authtypes.ClaimsFromContext(r.Context())
if !ok {
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
render.Error(w, errors.Newf(errors.TypeUnauthenticated, errors.CodeUnauthenticated, "unauthenticated"))
return
}
dashboard, err := dashboards.GetDashboard(r.Context(), claims.OrgID, uuid)
if err != nil {
render.Error(w, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to get dashboard"))
return
}
if !auth.IsAdminV2(claims) && (dashboard.CreatedBy != claims.Email) {
if err := claims.IsAdmin(); err != nil && (dashboard.CreatedBy != claims.Email) {
render.Error(w, errors.Newf(errors.TypeForbidden, errors.CodeForbidden, "You are not authorized to lock/unlock this dashboard"))
return
}

View File

@@ -6,9 +6,10 @@ import (
"fmt"
"net/http"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/ee/types"
"github.com/google/uuid"
"github.com/gorilla/mux"
"go.signoz.io/signoz/ee/query-service/model"
)
func (ah *APIHandler) listDomainsByOrg(w http.ResponseWriter, r *http.Request) {
@@ -24,7 +25,7 @@ func (ah *APIHandler) listDomainsByOrg(w http.ResponseWriter, r *http.Request) {
func (ah *APIHandler) postDomain(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
req := model.OrgDomain{}
req := types.GettableOrgDomain{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
@@ -54,12 +55,12 @@ func (ah *APIHandler) putDomain(w http.ResponseWriter, r *http.Request) {
return
}
req := model.OrgDomain{Id: domainId}
req := types.GettableOrgDomain{StorableOrgDomain: types.StorableOrgDomain{ID: domainId}}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
req.Id = domainId
req.ID = domainId
if err := req.Valid(nil); err != nil {
RespondError(w, model.BadRequest(err), nil)
}

View File

@@ -8,8 +8,8 @@ import (
"net/http"
"time"
"go.signoz.io/signoz/ee/query-service/constants"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/ee/query-service/constants"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"go.uber.org/zap"
)

View File

@@ -3,8 +3,8 @@ package api
import (
"testing"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/stretchr/testify/assert"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
func TestMergeFeatureSets(t *testing.T) {

View File

@@ -4,7 +4,7 @@ import (
"net/http"
"strings"
"go.signoz.io/signoz/ee/query-service/integrations/gateway"
"github.com/SigNoz/signoz/ee/query-service/integrations/gateway"
)
func (ah *APIHandler) ServeGatewayHTTP(rw http.ResponseWriter, req *http.Request) {

View File

@@ -3,13 +3,14 @@ package api
import (
"encoding/json"
"fmt"
"io"
"net/http"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
"go.signoz.io/signoz/pkg/http/render"
"go.uber.org/zap"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/ee/query-service/integrations/signozio"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/query-service/telemetry"
"github.com/SigNoz/signoz/pkg/types/authtypes"
)
type DayWiseBreakdown struct {
@@ -48,6 +49,10 @@ type details struct {
BillTotal float64 `json:"billTotal"`
}
type Redirect struct {
RedirectURL string `json:"redirectURL"`
}
type billingDetails struct {
Status string `json:"status"`
Data struct {
@@ -87,8 +92,13 @@ func (ah *APIHandler) getActiveLicenseV3(w http.ResponseWriter, r *http.Request)
// this function is called by zeus when inserting licenses in the query-service
func (ah *APIHandler) applyLicenseV3(w http.ResponseWriter, r *http.Request) {
var licenseKey ApplyLicenseRequest
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
render.Error(w, err)
return
}
var licenseKey ApplyLicenseRequest
if err := json.NewDecoder(r.Body).Decode(&licenseKey); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
@@ -99,9 +109,10 @@ func (ah *APIHandler) applyLicenseV3(w http.ResponseWriter, r *http.Request) {
return
}
_, apiError := ah.LM().ActivateV3(r.Context(), licenseKey.LicenseKey)
if apiError != nil {
RespondError(w, apiError, nil)
_, err = ah.LM().ActivateV3(r.Context(), licenseKey.LicenseKey)
if err != nil {
telemetry.GetInstance().SendEvent(telemetry.TELEMETRY_LICENSE_ACT_FAILED, map[string]interface{}{"err": err.Error()}, claims.Email, true, false)
render.Error(w, err)
return
}
@@ -109,46 +120,39 @@ func (ah *APIHandler) applyLicenseV3(w http.ResponseWriter, r *http.Request) {
}
func (ah *APIHandler) refreshLicensesV3(w http.ResponseWriter, r *http.Request) {
apiError := ah.LM().RefreshLicense(r.Context())
if apiError != nil {
RespondError(w, apiError, nil)
err := ah.LM().RefreshLicense(r.Context())
if err != nil {
render.Error(w, err)
return
}
render.Success(w, http.StatusNoContent, nil)
}
func getCheckoutPortalResponse(redirectURL string) *Redirect {
return &Redirect{RedirectURL: redirectURL}
}
func (ah *APIHandler) checkout(w http.ResponseWriter, r *http.Request) {
type checkoutResponse struct {
Status string `json:"status"`
Data struct {
RedirectURL string `json:"redirectURL"`
} `json:"data"`
checkoutRequest := &model.CheckoutRequest{}
if err := json.NewDecoder(r.Body).Decode(checkoutRequest); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
hClient := &http.Client{}
req, err := http.NewRequest("POST", constants.LicenseSignozIo+"/checkout", r.Body)
license := ah.LM().GetActiveLicense()
if license == nil {
RespondError(w, model.BadRequestStr("cannot proceed with checkout without license key"), nil)
return
}
redirectUrl, err := signozio.CheckoutSession(r.Context(), checkoutRequest, license.Key, ah.Signoz.Zeus)
if err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
req.Header.Add("X-SigNoz-SecretKey", constants.LicenseAPIKey)
licenseResp, err := hClient.Do(req)
if err != nil {
RespondError(w, model.InternalError(err), nil)
render.Error(w, err)
return
}
// decode response body
var resp checkoutResponse
if err := json.NewDecoder(licenseResp.Body).Decode(&resp); err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
ah.Respond(w, resp.Data)
ah.Respond(w, getCheckoutPortalResponse(redirectUrl))
}
func (ah *APIHandler) getBilling(w http.ResponseWriter, r *http.Request) {
@@ -228,102 +232,27 @@ func (ah *APIHandler) listLicensesV2(w http.ResponseWriter, r *http.Request) {
Licenses: licenses,
}
var currentActiveLicenseKey string
for _, license := range licenses {
if license.IsCurrent {
currentActiveLicenseKey = license.Key
}
}
// For the case when no license is applied i.e community edition
// There will be no trial details or license details
if currentActiveLicenseKey == "" {
ah.Respond(w, resp)
return
}
// Fetch trial details
hClient := &http.Client{}
url := fmt.Sprintf("%s/trial?licenseKey=%s", constants.LicenseSignozIo, currentActiveLicenseKey)
req, err := http.NewRequest("GET", url, nil)
if err != nil {
zap.L().Error("Error while creating request for trial details", zap.Error(err))
// If there is an error in fetching trial details, we will still return the license details
// to avoid blocking the UI
ah.Respond(w, resp)
return
}
req.Header.Add("X-SigNoz-SecretKey", constants.LicenseAPIKey)
trialResp, err := hClient.Do(req)
if err != nil {
zap.L().Error("Error while fetching trial details", zap.Error(err))
// If there is an error in fetching trial details, we will still return the license details
// to avoid incorrectly blocking the UI
ah.Respond(w, resp)
return
}
defer trialResp.Body.Close()
trialRespBody, err := io.ReadAll(trialResp.Body)
if err != nil || trialResp.StatusCode != http.StatusOK {
zap.L().Error("Error while fetching trial details", zap.Error(err))
// If there is an error in fetching trial details, we will still return the license details
// to avoid incorrectly blocking the UI
ah.Respond(w, resp)
return
}
// decode response body
var trialRespData model.SubscriptionServerResp
if err := json.Unmarshal(trialRespBody, &trialRespData); err != nil {
zap.L().Error("Error while decoding trial details", zap.Error(err))
// If there is an error in fetching trial details, we will still return the license details
// to avoid incorrectly blocking the UI
ah.Respond(w, resp)
return
}
resp.TrialStart = trialRespData.Data.TrialStart
resp.TrialEnd = trialRespData.Data.TrialEnd
resp.OnTrial = trialRespData.Data.OnTrial
resp.WorkSpaceBlock = trialRespData.Data.WorkSpaceBlock
resp.TrialConvertedToSubscription = trialRespData.Data.TrialConvertedToSubscription
resp.GracePeriodEnd = trialRespData.Data.GracePeriodEnd
ah.Respond(w, resp)
}
func (ah *APIHandler) portalSession(w http.ResponseWriter, r *http.Request) {
type checkoutResponse struct {
Status string `json:"status"`
Data struct {
RedirectURL string `json:"redirectURL"`
} `json:"data"`
portalRequest := &model.PortalRequest{}
if err := json.NewDecoder(r.Body).Decode(portalRequest); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
hClient := &http.Client{}
req, err := http.NewRequest("POST", constants.LicenseSignozIo+"/portal", r.Body)
license := ah.LM().GetActiveLicense()
if license == nil {
RespondError(w, model.BadRequestStr("cannot request the portal session without license key"), nil)
return
}
redirectUrl, err := signozio.PortalSession(r.Context(), portalRequest, license.Key, ah.Signoz.Zeus)
if err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
req.Header.Add("X-SigNoz-SecretKey", constants.LicenseAPIKey)
licenseResp, err := hClient.Do(req)
if err != nil {
RespondError(w, model.InternalError(err), nil)
render.Error(w, err)
return
}
// decode response body
var resp checkoutResponse
if err := json.NewDecoder(licenseResp.Body).Decode(&resp); err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
ah.Respond(w, resp.Data)
ah.Respond(w, getCheckoutPortalResponse(redirectUrl))
}

View File

@@ -1,73 +1,53 @@
package api
import (
"context"
"crypto/rand"
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"slices"
"time"
"github.com/SigNoz/signoz/ee/query-service/model"
eeTypes "github.com/SigNoz/signoz/ee/types"
"github.com/SigNoz/signoz/pkg/errors"
errorsV2 "github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
"go.signoz.io/signoz/ee/query-service/model"
"go.signoz.io/signoz/pkg/query-service/auth"
baseconstants "go.signoz.io/signoz/pkg/query-service/constants"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
func generatePATToken() string {
// Generate a 32-byte random token.
token := make([]byte, 32)
rand.Read(token)
// Encode the token in base64.
encodedToken := base64.StdEncoding.EncodeToString(token)
return encodedToken
}
func (ah *APIHandler) createPAT(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
render.Error(w, err)
return
}
req := model.CreatePATRequestBody{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
user, err := auth.GetUserFromReqContext(r.Context())
if err != nil {
RespondError(w, &model.ApiError{
Typ: model.ErrorUnauthorized,
Err: err,
}, nil)
return
}
pat := model.PAT{
Name: req.Name,
Role: req.Role,
ExpiresAt: req.ExpiresInDays,
}
pat := eeTypes.NewGettablePAT(
req.Name,
req.Role,
claims.UserID,
req.ExpiresInDays,
)
err = validatePATRequest(pat)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
// All the PATs are associated with the user creating the PAT.
pat.UserID = user.ID
pat.CreatedAt = time.Now().Unix()
pat.UpdatedAt = time.Now().Unix()
pat.LastUsed = 0
pat.Token = generatePATToken()
if pat.ExpiresAt != 0 {
// convert expiresAt to unix timestamp from days
pat.ExpiresAt = time.Now().Unix() + (pat.ExpiresAt * 24 * 60 * 60)
}
zap.L().Info("Got Create PAT request", zap.Any("pat", pat))
var apierr basemodel.BaseApiError
if pat, apierr = ah.AppDao().CreatePAT(ctx, pat); apierr != nil {
if pat, apierr = ah.AppDao().CreatePAT(r.Context(), claims.OrgID, pat); apierr != nil {
RespondError(w, apierr, nil)
return
}
@@ -75,34 +55,59 @@ func (ah *APIHandler) createPAT(w http.ResponseWriter, r *http.Request) {
ah.Respond(w, &pat)
}
func validatePATRequest(req model.PAT) error {
if req.Role == "" || (req.Role != baseconstants.ViewerGroup && req.Role != baseconstants.EditorGroup && req.Role != baseconstants.AdminGroup) {
return fmt.Errorf("valid role is required")
func validatePATRequest(req eeTypes.GettablePAT) error {
_, err := authtypes.NewRole(req.Role)
if err != nil {
return err
}
if req.ExpiresAt < 0 {
return fmt.Errorf("valid expiresAt is required")
}
if req.Name == "" {
return fmt.Errorf("valid name is required")
}
return nil
}
func (ah *APIHandler) updatePAT(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
render.Error(w, err)
return
}
req := model.PAT{}
req := eeTypes.GettablePAT{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
user, err := auth.GetUserFromReqContext(r.Context())
idStr := mux.Vars(r)["id"]
id, err := valuer.NewUUID(idStr)
if err != nil {
RespondError(w, &model.ApiError{
Typ: model.ErrorUnauthorized,
Err: err,
}, nil)
render.Error(w, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is not a valid uuid-v7"))
return
}
//get the pat
existingPAT, paterr := ah.AppDao().GetPATByID(r.Context(), claims.OrgID, id)
if paterr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, paterr.Error()))
return
}
// get the user
createdByUser, usererr := ah.AppDao().GetUser(r.Context(), existingPAT.UserID)
if usererr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, usererr.Error()))
return
}
if slices.Contains(types.AllIntegrationUserEmails, types.IntegrationUserEmail(createdByUser.Email)) {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "integration user pat cannot be updated"))
return
}
@@ -112,12 +117,11 @@ func (ah *APIHandler) updatePAT(w http.ResponseWriter, r *http.Request) {
return
}
req.UpdatedByUserID = user.ID
id := mux.Vars(r)["id"]
req.UpdatedAt = time.Now().Unix()
req.UpdatedByUserID = claims.UserID
req.UpdatedAt = time.Now()
zap.L().Info("Got Update PAT request", zap.Any("pat", req))
var apierr basemodel.BaseApiError
if apierr = ah.AppDao().UpdatePAT(ctx, req, id); apierr != nil {
if apierr = ah.AppDao().UpdatePAT(r.Context(), claims.OrgID, req, id); apierr != nil {
RespondError(w, apierr, nil)
return
}
@@ -126,38 +130,56 @@ func (ah *APIHandler) updatePAT(w http.ResponseWriter, r *http.Request) {
}
func (ah *APIHandler) getPATs(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
user, err := auth.GetUserFromReqContext(r.Context())
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
RespondError(w, &model.ApiError{
Typ: model.ErrorUnauthorized,
Err: err,
}, nil)
render.Error(w, err)
return
}
zap.L().Info("Get PATs for user", zap.String("user_id", user.ID))
pats, apierr := ah.AppDao().ListPATs(ctx)
pats, apierr := ah.AppDao().ListPATs(r.Context(), claims.OrgID)
if apierr != nil {
RespondError(w, apierr, nil)
return
}
ah.Respond(w, pats)
}
func (ah *APIHandler) revokePAT(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
id := mux.Vars(r)["id"]
user, err := auth.GetUserFromReqContext(r.Context())
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
RespondError(w, &model.ApiError{
Typ: model.ErrorUnauthorized,
Err: err,
}, nil)
render.Error(w, err)
return
}
zap.L().Info("Revoke PAT with id", zap.String("id", id))
if apierr := ah.AppDao().RevokePAT(ctx, id, user.ID); apierr != nil {
idStr := mux.Vars(r)["id"]
id, err := valuer.NewUUID(idStr)
if err != nil {
render.Error(w, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is not a valid uuid-v7"))
return
}
//get the pat
existingPAT, paterr := ah.AppDao().GetPATByID(r.Context(), claims.OrgID, id)
if paterr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, paterr.Error()))
return
}
// get the user
createdByUser, usererr := ah.AppDao().GetUser(r.Context(), existingPAT.UserID)
if usererr != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, usererr.Error()))
return
}
if slices.Contains(types.AllIntegrationUserEmails, types.IntegrationUserEmail(createdByUser.Email)) {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "integration user pat cannot be updated"))
return
}
zap.L().Info("Revoke PAT with id", zap.String("id", id.StringValue()))
if apierr := ah.AppDao().RevokePAT(r.Context(), claims.OrgID, id, claims.UserID); apierr != nil {
RespondError(w, apierr, nil)
return
}
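The revoke handler now mirrors the update handler: resolve claims from the request context, parse the path id as a UUID, refuse to touch tokens owned by integration users, and only then issue an org-scoped revoke. The following is a minimal sketch of that guard order in Go; Claims, PATStore, and the integration email are hypothetical stand-ins for the real SigNoz types, not the actual handler.

package patapi

import "net/http"

// Claims is a hypothetical stand-in for the JWT claims resolved from the request context.
type Claims struct {
	OrgID  string
	UserID string
}

// PATStore is a hypothetical stand-in for the DAO used by the handler.
type PATStore interface {
	OwnerEmail(orgID, id string) (string, error)
	Revoke(orgID, id, revokedBy string) error
}

// integrationEmails mimics the reserved integration-user addresses the handler must not touch.
var integrationEmails = map[string]bool{"integrations@signoz.io": true}

// revokePAT follows the same guard order as the handler above: validate the id,
// load the token's owner, refuse integration users, then revoke within the caller's org.
func revokePAT(w http.ResponseWriter, claims Claims, store PATStore, id string) {
	if id == "" {
		http.Error(w, "id is not a valid uuid-v7", http.StatusBadRequest)
		return
	}
	ownerEmail, err := store.OwnerEmail(claims.OrgID, id)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if integrationEmails[ownerEmail] {
		http.Error(w, "integration user pat cannot be revoked", http.StatusBadRequest)
		return
	}
	if err := store.Revoke(claims.OrgID, id, claims.UserID); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}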

View File

@@ -6,11 +6,11 @@ import (
"io"
"net/http"
"go.signoz.io/signoz/ee/query-service/anomaly"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
"go.signoz.io/signoz/pkg/query-service/app/queryBuilder"
"go.signoz.io/signoz/pkg/query-service/model"
v3 "go.signoz.io/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/ee/query-service/anomaly"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"go.uber.org/zap"
)
@@ -85,31 +85,27 @@ func (aH *APIHandler) queryRangeV4(w http.ResponseWriter, r *http.Request) {
switch seasonality {
case anomaly.SeasonalityWeekly:
provider = anomaly.NewWeeklyProvider(
anomaly.WithCache[*anomaly.WeeklyProvider](aH.opts.Cache),
anomaly.WithCache[*anomaly.WeeklyProvider](aH.Signoz.Cache),
anomaly.WithKeyGenerator[*anomaly.WeeklyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.WeeklyProvider](aH.opts.DataConnector),
anomaly.WithFeatureLookup[*anomaly.WeeklyProvider](aH.opts.FeatureFlags),
)
case anomaly.SeasonalityDaily:
provider = anomaly.NewDailyProvider(
anomaly.WithCache[*anomaly.DailyProvider](aH.opts.Cache),
anomaly.WithCache[*anomaly.DailyProvider](aH.Signoz.Cache),
anomaly.WithKeyGenerator[*anomaly.DailyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.DailyProvider](aH.opts.DataConnector),
anomaly.WithFeatureLookup[*anomaly.DailyProvider](aH.opts.FeatureFlags),
)
case anomaly.SeasonalityHourly:
provider = anomaly.NewHourlyProvider(
anomaly.WithCache[*anomaly.HourlyProvider](aH.opts.Cache),
anomaly.WithCache[*anomaly.HourlyProvider](aH.Signoz.Cache),
anomaly.WithKeyGenerator[*anomaly.HourlyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.HourlyProvider](aH.opts.DataConnector),
anomaly.WithFeatureLookup[*anomaly.HourlyProvider](aH.opts.FeatureFlags),
)
default:
provider = anomaly.NewDailyProvider(
anomaly.WithCache[*anomaly.DailyProvider](aH.opts.Cache),
anomaly.WithCache[*anomaly.DailyProvider](aH.Signoz.Cache),
anomaly.WithKeyGenerator[*anomaly.DailyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.DailyProvider](aH.opts.DataConnector),
anomaly.WithFeatureLookup[*anomaly.DailyProvider](aH.opts.FeatureFlags),
)
}
anomalies, err := provider.GetAnomalies(r.Context(), &anomaly.GetAnomaliesRequest{Params: queryRangeParams})
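The provider constructors above share one configuration style: generic functional options such as WithCache[*anomaly.WeeklyProvider](...) that can be applied to whichever concrete provider is being built. Below is a minimal sketch of that pattern under assumed names; it is not the actual anomaly package, only an illustration of the generic-options technique.

package options

// provider is the minimal surface each concrete provider exposes to options.
type provider interface {
	setCache(c string)
}

// Option is a generic functional option scoped to a concrete provider type.
type Option[P provider] func(P)

// WithCache returns an option that wires a cache into any provider type P.
func WithCache[P provider](c string) Option[P] {
	return func(p P) { p.setCache(c) }
}

// DailyProvider is one concrete provider; the others follow the same shape.
type DailyProvider struct{ cache string }

func (d *DailyProvider) setCache(c string) { d.cache = c }

// NewDailyProvider applies every option to a fresh provider.
func NewDailyProvider(opts ...Option[*DailyProvider]) *DailyProvider {
	p := &DailyProvider{}
	for _, opt := range opts {
		opt(p)
	}
	return p
}

A caller would then build a provider with, for example, NewDailyProvider(WithCache[*DailyProvider]("in-memory")), which is the same shape as the calls in the switch above.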

View File

@@ -3,8 +3,8 @@ package api
import (
"net/http"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
)
func RespondError(w http.ResponseWriter, apiErr basemodel.BaseApiError, data interface{}) {

View File

@@ -1,33 +0,0 @@
package api
import (
"net/http"
"go.signoz.io/signoz/ee/query-service/app/db"
"go.signoz.io/signoz/ee/query-service/model"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
func (ah *APIHandler) searchTraces(w http.ResponseWriter, r *http.Request) {
if !ah.CheckFeature(basemodel.SmartTraceDetail) {
zap.L().Info("SmartTraceDetail feature is not enabled in this plan")
ah.APIHandler.SearchTraces(w, r)
return
}
searchTracesParams, err := baseapp.ParseSearchTracesParams(r)
if err != nil {
RespondError(w, &model.ApiError{Typ: model.ErrorBadData, Err: err}, "Error reading params")
return
}
result, err := ah.opts.DataConnector.SearchTraces(r.Context(), searchTracesParams, db.SmartTraceAlgorithm)
if ah.HandleError(w, err, http.StatusBadRequest) {
return
}
ah.WriteJSON(w, r, result)
}

View File

@@ -5,38 +5,31 @@ import (
"github.com/ClickHouse/clickhouse-go/v2"
"github.com/jmoiron/sqlx"
"go.signoz.io/signoz/pkg/cache"
basechr "go.signoz.io/signoz/pkg/query-service/app/clickhouseReader"
"go.signoz.io/signoz/pkg/query-service/interfaces"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/prometheus"
basechr "github.com/SigNoz/signoz/pkg/query-service/app/clickhouseReader"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
)
type ClickhouseReader struct {
conn clickhouse.Conn
appdb *sqlx.DB
appdb sqlstore.SQLStore
*basechr.ClickHouseReader
}
func NewDataConnector(
localDB *sqlx.DB,
ch clickhouse.Conn,
promConfigPath string,
lm interfaces.FeatureLookup,
sqlDB sqlstore.SQLStore,
telemetryStore telemetrystore.TelemetryStore,
prometheus prometheus.Prometheus,
cluster string,
useLogsNewSchema bool,
useTraceNewSchema bool,
fluxIntervalForTraceDetail time.Duration,
cache cache.Cache,
) *ClickhouseReader {
chReader := basechr.NewReader(localDB, ch, promConfigPath, lm, cluster, useLogsNewSchema, useTraceNewSchema, fluxIntervalForTraceDetail, cache)
chReader := basechr.NewReader(sqlDB, telemetryStore, prometheus, cluster, fluxIntervalForTraceDetail, cache)
return &ClickhouseReader{
conn: ch,
appdb: localDB,
conn: telemetryStore.ClickhouseDB(),
appdb: sqlDB,
ClickHouseReader: chReader,
}
}
func (r *ClickhouseReader) Start(readerReady chan bool) {
r.ClickHouseReader.Start(readerReady)
}
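The EE reader wraps the base ClickHouse reader by embedding it, so when the dependency set moves from raw connections to the sqlstore/telemetrystore/prometheus providers, only the constructor and a couple of thin methods change. A reduced sketch of that composition follows, with placeholder types standing in for the real stores.

package reader

// SQLStore and TelemetryStore are placeholders for the real provider interfaces.
type SQLStore interface{}

type TelemetryStore interface {
	ClickhouseDB() any
}

// BaseReader stands in for the OSS ClickHouseReader that does the heavy lifting.
type BaseReader struct{}

func (b *BaseReader) Start(ready chan bool) { ready <- true }

// ClickhouseReader embeds the base reader and keeps only EE-specific handles.
type ClickhouseReader struct {
	conn  any
	appdb SQLStore
	*BaseReader
}

// NewDataConnector mirrors the new constructor shape: providers in, wrapper out.
func NewDataConnector(sqlDB SQLStore, ts TelemetryStore) *ClickhouseReader {
	return &ClickhouseReader{
		conn:       ts.ClickhouseDB(),
		appdb:      sqlDB,
		BaseReader: &BaseReader{},
	}
}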

View File

@@ -2,7 +2,6 @@ package app
import (
"context"
"errors"
"fmt"
"net"
"net/http"
@@ -12,70 +11,56 @@ import (
"github.com/gorilla/handlers"
"github.com/jmoiron/sqlx"
eemiddleware "github.com/SigNoz/signoz/ee/http/middleware"
"github.com/SigNoz/signoz/ee/query-service/app/api"
"github.com/SigNoz/signoz/ee/query-service/app/db"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/ee/query-service/dao"
"github.com/SigNoz/signoz/ee/query-service/integrations/gateway"
"github.com/SigNoz/signoz/ee/query-service/rules"
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/http/middleware"
"github.com/SigNoz/signoz/pkg/prometheus"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/web"
"github.com/rs/cors"
"github.com/soheilhy/cmux"
eemiddleware "go.signoz.io/signoz/ee/http/middleware"
"go.signoz.io/signoz/ee/query-service/app/api"
"go.signoz.io/signoz/ee/query-service/app/db"
"go.signoz.io/signoz/ee/query-service/auth"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/dao"
"go.signoz.io/signoz/ee/query-service/integrations/gateway"
"go.signoz.io/signoz/ee/query-service/interfaces"
"go.signoz.io/signoz/ee/query-service/rules"
"go.signoz.io/signoz/pkg/alertmanager"
"go.signoz.io/signoz/pkg/http/middleware"
"go.signoz.io/signoz/pkg/signoz"
"go.signoz.io/signoz/pkg/sqlstore"
"go.signoz.io/signoz/pkg/types"
"go.signoz.io/signoz/pkg/types/authtypes"
"go.signoz.io/signoz/pkg/web"
licensepkg "go.signoz.io/signoz/ee/query-service/license"
"go.signoz.io/signoz/ee/query-service/usage"
licensepkg "github.com/SigNoz/signoz/ee/query-service/license"
"github.com/SigNoz/signoz/ee/query-service/usage"
"go.signoz.io/signoz/pkg/query-service/agentConf"
baseapp "go.signoz.io/signoz/pkg/query-service/app"
"go.signoz.io/signoz/pkg/query-service/app/cloudintegrations"
"go.signoz.io/signoz/pkg/query-service/app/dashboards"
baseexplorer "go.signoz.io/signoz/pkg/query-service/app/explorer"
"go.signoz.io/signoz/pkg/query-service/app/integrations"
"go.signoz.io/signoz/pkg/query-service/app/logparsingpipeline"
"go.signoz.io/signoz/pkg/query-service/app/opamp"
opAmpModel "go.signoz.io/signoz/pkg/query-service/app/opamp/model"
"go.signoz.io/signoz/pkg/query-service/app/preferences"
"go.signoz.io/signoz/pkg/query-service/cache"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
"go.signoz.io/signoz/pkg/query-service/healthcheck"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
pqle "go.signoz.io/signoz/pkg/query-service/pqlEngine"
baserules "go.signoz.io/signoz/pkg/query-service/rules"
"go.signoz.io/signoz/pkg/query-service/telemetry"
"go.signoz.io/signoz/pkg/query-service/utils"
"github.com/SigNoz/signoz/pkg/query-service/agentConf"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/query-service/app/cloudintegrations"
"github.com/SigNoz/signoz/pkg/query-service/app/dashboards"
baseexplorer "github.com/SigNoz/signoz/pkg/query-service/app/explorer"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations"
"github.com/SigNoz/signoz/pkg/query-service/app/logparsingpipeline"
"github.com/SigNoz/signoz/pkg/query-service/app/opamp"
opAmpModel "github.com/SigNoz/signoz/pkg/query-service/app/opamp/model"
baseconst "github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/query-service/healthcheck"
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
baserules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/telemetry"
"github.com/SigNoz/signoz/pkg/query-service/utils"
"go.uber.org/zap"
)
const AppDbEngine = "sqlite"
type ServerOptions struct {
Config signoz.Config
SigNoz *signoz.SigNoz
PromConfigPath string
SkipTopLvlOpsPath string
HTTPHostPort string
PrivateHostPort string
// alert specific params
DisableRules bool
RuleRepoURL string
Config signoz.Config
SigNoz *signoz.SigNoz
HTTPHostPort string
PrivateHostPort string
PreferSpanMetrics bool
CacheConfigPath string
FluxInterval string
FluxIntervalForTraceDetail string
Cluster string
GatewayUrl string
UseLogsNewSchema bool
UseTraceNewSchema bool
Jwt *authtypes.JWT
}
@@ -112,15 +97,11 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
return nil, err
}
if err := baseexplorer.InitWithDSN(serverOptions.SigNoz.SQLStore.BunDB()); err != nil {
if err := baseexplorer.InitWithDSN(serverOptions.SigNoz.SQLStore); err != nil {
return nil, err
}
if err := preferences.InitDB(serverOptions.SigNoz.SQLStore.SQLxDB()); err != nil {
return nil, err
}
if err := dashboards.InitDB(serverOptions.SigNoz.SQLStore.BunDB()); err != nil {
if err := dashboards.InitDB(serverOptions.SigNoz.SQLStore); err != nil {
return nil, err
}
@@ -130,65 +111,36 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
}
// initiate license manager
lm, err := licensepkg.StartManager(serverOptions.SigNoz.SQLStore.SQLxDB(), serverOptions.SigNoz.SQLStore.BunDB())
lm, err := licensepkg.StartManager(serverOptions.SigNoz.SQLStore.SQLxDB(), serverOptions.SigNoz.SQLStore, serverOptions.SigNoz.Zeus)
if err != nil {
return nil, err
}
// set license manager as feature flag provider in dao
modelDao.SetFlagProvider(lm)
readerReady := make(chan bool)
fluxIntervalForTraceDetail, err := time.ParseDuration(serverOptions.FluxIntervalForTraceDetail)
if err != nil {
return nil, err
}
var reader interfaces.DataConnector
qb := db.NewDataConnector(
serverOptions.SigNoz.SQLStore.SQLxDB(),
serverOptions.SigNoz.TelemetryStore.ClickHouseDB(),
serverOptions.PromConfigPath,
lm,
reader := db.NewDataConnector(
serverOptions.SigNoz.SQLStore,
serverOptions.SigNoz.TelemetryStore,
serverOptions.SigNoz.Prometheus,
serverOptions.Cluster,
serverOptions.UseLogsNewSchema,
serverOptions.UseTraceNewSchema,
fluxIntervalForTraceDetail,
serverOptions.SigNoz.Cache,
)
go qb.Start(readerReady)
reader = qb
skipConfig := &basemodel.SkipConfig{}
if serverOptions.SkipTopLvlOpsPath != "" {
// read skip config
skipConfig, err = basemodel.ReadSkipConfig(serverOptions.SkipTopLvlOpsPath)
if err != nil {
return nil, err
}
}
var c cache.Cache
if serverOptions.CacheConfigPath != "" {
cacheOpts, err := cache.LoadFromYAMLCacheConfigFile(serverOptions.CacheConfigPath)
if err != nil {
return nil, err
}
c = cache.NewCache(cacheOpts)
}
<-readerReady
rm, err := makeRulesManager(
serverOptions.PromConfigPath,
serverOptions.RuleRepoURL,
serverOptions.SigNoz.SQLStore.SQLxDB(),
reader,
c,
serverOptions.DisableRules,
lm,
serverOptions.UseLogsNewSchema,
serverOptions.UseTraceNewSchema,
serverOptions.SigNoz.Cache,
serverOptions.SigNoz.Alertmanager,
serverOptions.SigNoz.SQLStore,
serverOptions.SigNoz.TelemetryStore,
serverOptions.SigNoz.Prometheus,
)
if err != nil {
@@ -217,7 +169,7 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
// ingestion pipelines manager
logParsingPipelineController, err := logparsingpipeline.NewLogParsingPipelinesController(
serverOptions.SigNoz.SQLStore.SQLxDB(), integrationsController.GetPipelinesForInstalledIntegrations,
serverOptions.SigNoz.SQLStore, integrationsController.GetPipelinesForInstalledIntegrations,
)
if err != nil {
return nil, err
@@ -233,7 +185,7 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
}
// start the usage manager
usageManager, err := usage.New(modelDao, lm.GetRepo(), serverOptions.SigNoz.TelemetryStore.ClickHouseDB(), serverOptions.Config.TelemetryStore.ClickHouse.DSN)
usageManager, err := usage.New(modelDao, lm.GetRepo(), serverOptions.SigNoz.TelemetryStore.ClickhouseDB(), serverOptions.SigNoz.Zeus)
if err != nil {
return nil, err
}
@@ -252,7 +204,6 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
apiOpts := api.APIHandlerOptions{
DataConnector: reader,
SkipConfig: skipConfig,
PreferSpanMetrics: serverOptions.PreferSpanMetrics,
AppDao: modelDao,
RulesManager: rm,
@@ -262,12 +213,9 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
IntegrationsController: integrationsController,
CloudIntegrationsController: cloudIntegrationsController,
LogsParsingPipelineController: logParsingPipelineController,
Cache: c,
FluxInterval: fluxInterval,
Gateway: gatewayProxy,
GatewayUrl: serverOptions.GatewayUrl,
UseLogsNewSchema: serverOptions.UseLogsNewSchema,
UseTraceNewSchema: serverOptions.UseTraceNewSchema,
JWT: serverOptions.Jwt,
}
@@ -277,8 +225,6 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
}
s := &Server{
// logger: logger,
// tracer: tracer,
ruleManager: rm,
serverOptions: serverOptions,
unavailableChannel: make(chan healthcheck.Status),
@@ -304,6 +250,11 @@ func NewServer(serverOptions *ServerOptions) (*Server, error) {
&opAmpModel.AllAgents, agentConfMgr,
)
errorList := reader.PreloadMetricsMetadata(context.Background())
for _, er := range errorList {
zap.L().Error("failed to preload metrics metadata", zap.Error(er))
}
return s, nil
}
@@ -312,7 +263,7 @@ func (s *Server) createPrivateServer(apiHandler *api.APIHandler) (*http.Server,
r := baseapp.NewRouter()
r.Use(middleware.NewAuth(zap.L(), s.serverOptions.Jwt, []string{"Authorization", "Sec-WebSocket-Protocol"}).Wrap)
r.Use(eemiddleware.NewPat([]string{"SIGNOZ-API-KEY"}).Wrap)
r.Use(eemiddleware.NewPat(s.serverOptions.SigNoz.SQLStore, []string{"SIGNOZ-API-KEY"}).Wrap)
r.Use(middleware.NewTimeout(zap.L(),
s.serverOptions.Config.APIServer.Timeout.ExcludedRoutes,
s.serverOptions.Config.APIServer.Timeout.Default,
@@ -340,27 +291,11 @@ func (s *Server) createPrivateServer(apiHandler *api.APIHandler) (*http.Server,
}
func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*http.Server, error) {
r := baseapp.NewRouter()
// add auth middleware
getUserFromRequest := func(ctx context.Context) (*types.GettableUser, error) {
user, err := auth.GetUserFromRequestContext(ctx, apiHandler)
if err != nil {
return nil, err
}
if user.User.OrgID == "" {
return nil, basemodel.UnauthorizedError(errors.New("orgId is missing in the claims"))
}
return user, nil
}
am := baseapp.NewAuthMiddleware(getUserFromRequest)
am := middleware.NewAuthZ(s.serverOptions.SigNoz.Instrumentation.Logger())
r.Use(middleware.NewAuth(zap.L(), s.serverOptions.Jwt, []string{"Authorization", "Sec-WebSocket-Protocol"}).Wrap)
r.Use(eemiddleware.NewPat([]string{"SIGNOZ-API-KEY"}).Wrap)
r.Use(eemiddleware.NewPat(s.serverOptions.SigNoz.SQLStore, []string{"SIGNOZ-API-KEY"}).Wrap)
r.Use(middleware.NewTimeout(zap.L(),
s.serverOptions.Config.APIServer.Timeout.ExcludedRoutes,
s.serverOptions.Config.APIServer.Timeout.Default,
@@ -373,6 +308,7 @@ func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*h
apiHandler.RegisterLogsRoutes(r, am)
apiHandler.RegisterIntegrationRoutes(r, am)
apiHandler.RegisterCloudIntegrationsRoutes(r, am)
apiHandler.RegisterFieldsRoutes(r, am)
apiHandler.RegisterQueryRangeV3Routes(r, am)
apiHandler.RegisterInfraMetricsRoutes(r, am)
apiHandler.RegisterQueryRangeV4Routes(r, am)
@@ -434,14 +370,8 @@ func (s *Server) initListeners() error {
}
// Start listening on http and private http port concurrently
func (s *Server) Start() error {
// initiate rule manager first
if !s.serverOptions.DisableRules {
s.ruleManager.Start()
} else {
zap.L().Info("msg: Rules disabled as rules.disable is set to TRUE")
}
func (s *Server) Start(ctx context.Context) error {
s.ruleManager.Start(ctx)
err := s.initListeners()
if err != nil {
@@ -522,7 +452,7 @@ func (s *Server) Stop() error {
s.opampServer.Stop()
if s.ruleManager != nil {
s.ruleManager.Stop()
s.ruleManager.Stop(context.Background())
}
// stop usage manager
@@ -532,39 +462,25 @@ func (s *Server) Stop() error {
}
func makeRulesManager(
promConfigPath,
ruleRepoURL string,
db *sqlx.DB,
ch baseint.Reader,
cache cache.Cache,
disableRules bool,
fm baseint.FeatureLookup,
useLogsNewSchema bool,
useTraceNewSchema bool,
alertmanager alertmanager.Alertmanager,
sqlstore sqlstore.SQLStore,
telemetryStore telemetrystore.TelemetryStore,
prometheus prometheus.Prometheus,
) (*baserules.Manager, error) {
// create engine
pqle, err := pqle.FromConfigPath(promConfigPath)
if err != nil {
return nil, fmt.Errorf("failed to create pql engine : %v", err)
}
// create manager opts
managerOpts := &baserules.ManagerOptions{
PqlEngine: pqle,
RepoURL: ruleRepoURL,
TelemetryStore: telemetryStore,
Prometheus: prometheus,
DBConn: db,
Context: context.Background(),
Logger: zap.L(),
DisableRules: disableRules,
FeatureFlags: fm,
Reader: ch,
Cache: cache,
EvalDelay: baseconst.GetEvalDelay(),
PrepareTaskFunc: rules.PrepareTaskFunc,
UseLogsNewSchema: useLogsNewSchema,
UseTraceNewSchema: useTraceNewSchema,
PrepareTestRuleFunc: rules.TestNotification,
Alertmanager: alertmanager,
SQLStore: sqlstore,

View File

@@ -1,56 +0,0 @@
package auth
import (
"context"
"fmt"
"time"
"go.signoz.io/signoz/ee/query-service/app/api"
baseauth "go.signoz.io/signoz/pkg/query-service/auth"
"go.signoz.io/signoz/pkg/query-service/telemetry"
"go.signoz.io/signoz/pkg/types"
"go.signoz.io/signoz/pkg/types/authtypes"
"go.uber.org/zap"
)
func GetUserFromRequestContext(ctx context.Context, apiHandler *api.APIHandler) (*types.GettableUser, error) {
patToken, ok := authtypes.UUIDFromContext(ctx)
if ok && patToken != "" {
zap.L().Debug("Received a non-zero length PAT token")
ctx := context.Background()
dao := apiHandler.AppDao()
pat, err := dao.GetPAT(ctx, patToken)
if err == nil && pat != nil {
zap.L().Debug("Found valid PAT: ", zap.Any("pat", pat))
if pat.ExpiresAt < time.Now().Unix() && pat.ExpiresAt != 0 {
zap.L().Info("PAT has expired: ", zap.Any("pat", pat))
return nil, fmt.Errorf("PAT has expired")
}
group, apiErr := dao.GetGroupByName(ctx, pat.Role)
if apiErr != nil {
zap.L().Error("Error while getting group for PAT: ", zap.Any("apiErr", apiErr))
return nil, apiErr
}
user, err := dao.GetUser(ctx, pat.UserID)
if err != nil {
zap.L().Error("Error while getting user for PAT: ", zap.Error(err))
return nil, err
}
telemetry.GetInstance().SetPatTokenUser()
dao.UpdatePATLastUsed(ctx, patToken, time.Now().Unix())
user.User.GroupID = group.ID
user.User.ID = pat.Id
return &types.GettableUser{
User: user.User,
Role: pat.Role,
}, nil
}
if err != nil {
zap.L().Error("Error while getting user for PAT: ", zap.Error(err))
return nil, err
}
}
return baseauth.GetUserFromReqContext(ctx)
}

View File

@@ -1,8 +1,8 @@
package dao
import (
"go.signoz.io/signoz/ee/query-service/dao/sqlite"
"go.signoz.io/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/ee/query-service/dao/sqlite"
"github.com/SigNoz/signoz/pkg/sqlstore"
)
func InitDao(sqlStore sqlstore.SQLStore) (ModelDao, error) {

View File

@@ -4,14 +4,14 @@ import (
"context"
"net/url"
"github.com/SigNoz/signoz/ee/types"
basedao "github.com/SigNoz/signoz/pkg/query-service/dao"
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/google/uuid"
"github.com/jmoiron/sqlx"
"go.signoz.io/signoz/ee/query-service/model"
basedao "go.signoz.io/signoz/pkg/query-service/dao"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/types"
"go.signoz.io/signoz/pkg/types/authtypes"
"github.com/uptrace/bun"
)
type ModelDao interface {
@@ -20,27 +20,25 @@ type ModelDao interface {
// SetFlagProvider sets the feature lookup provider
SetFlagProvider(flags baseint.FeatureLookup)
DB() *sqlx.DB
DB() *bun.DB
// auth methods
CanUsePassword(ctx context.Context, email string) (bool, basemodel.BaseApiError)
PrepareSsoRedirect(ctx context.Context, redirectUri, email string, jwt *authtypes.JWT) (redirectURL string, apierr basemodel.BaseApiError)
GetDomainFromSsoResponse(ctx context.Context, relayState *url.URL) (*model.OrgDomain, error)
GetDomainFromSsoResponse(ctx context.Context, relayState *url.URL) (*types.GettableOrgDomain, error)
// org domain (auth domains) CRUD ops
ListDomains(ctx context.Context, orgId string) ([]model.OrgDomain, basemodel.BaseApiError)
GetDomain(ctx context.Context, id uuid.UUID) (*model.OrgDomain, basemodel.BaseApiError)
CreateDomain(ctx context.Context, d *model.OrgDomain) basemodel.BaseApiError
UpdateDomain(ctx context.Context, domain *model.OrgDomain) basemodel.BaseApiError
ListDomains(ctx context.Context, orgId string) ([]types.GettableOrgDomain, basemodel.BaseApiError)
GetDomain(ctx context.Context, id uuid.UUID) (*types.GettableOrgDomain, basemodel.BaseApiError)
CreateDomain(ctx context.Context, d *types.GettableOrgDomain) basemodel.BaseApiError
UpdateDomain(ctx context.Context, domain *types.GettableOrgDomain) basemodel.BaseApiError
DeleteDomain(ctx context.Context, id uuid.UUID) basemodel.BaseApiError
GetDomainByEmail(ctx context.Context, email string) (*model.OrgDomain, basemodel.BaseApiError)
GetDomainByEmail(ctx context.Context, email string) (*types.GettableOrgDomain, basemodel.BaseApiError)
CreatePAT(ctx context.Context, p model.PAT) (model.PAT, basemodel.BaseApiError)
UpdatePAT(ctx context.Context, p model.PAT, id string) basemodel.BaseApiError
GetPAT(ctx context.Context, pat string) (*model.PAT, basemodel.BaseApiError)
UpdatePATLastUsed(ctx context.Context, pat string, lastUsed int64) basemodel.BaseApiError
GetPATByID(ctx context.Context, id string) (*model.PAT, basemodel.BaseApiError)
GetUserByPAT(ctx context.Context, token string) (*types.GettableUser, basemodel.BaseApiError)
ListPATs(ctx context.Context) ([]model.PAT, basemodel.BaseApiError)
RevokePAT(ctx context.Context, id string, userID string) basemodel.BaseApiError
CreatePAT(ctx context.Context, orgID string, p types.GettablePAT) (types.GettablePAT, basemodel.BaseApiError)
UpdatePAT(ctx context.Context, orgID string, p types.GettablePAT, id valuer.UUID) basemodel.BaseApiError
GetPAT(ctx context.Context, pat string) (*types.GettablePAT, basemodel.BaseApiError)
GetPATByID(ctx context.Context, orgID string, id valuer.UUID) (*types.GettablePAT, basemodel.BaseApiError)
ListPATs(ctx context.Context, orgID string) ([]types.GettablePAT, basemodel.BaseApiError)
RevokePAT(ctx context.Context, orgID string, id valuer.UUID, userID string) basemodel.BaseApiError
}

View File

@@ -4,18 +4,16 @@ import (
"context"
"fmt"
"net/url"
"strings"
"time"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/ee/query-service/model"
baseauth "github.com/SigNoz/signoz/pkg/query-service/auth"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/query-service/utils"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/google/uuid"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
baseauth "go.signoz.io/signoz/pkg/query-service/auth"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/query-service/utils"
"go.signoz.io/signoz/pkg/types"
"go.signoz.io/signoz/pkg/types/authtypes"
"go.uber.org/zap"
)
@@ -37,14 +35,8 @@ func (m *modelDao) createUserForSAMLRequest(ctx context.Context, email string) (
return nil, model.InternalErrorStr("failed to generate password hash")
}
group, apiErr := m.GetGroupByName(ctx, baseconst.ViewerGroup)
if apiErr != nil {
zap.L().Error("GetGroupByName failed", zap.Error(apiErr))
return nil, apiErr
}
user := &types.User{
ID: uuid.NewString(),
ID: uuid.New().String(),
Name: "",
Email: email,
Password: hash,
@@ -52,11 +44,11 @@ func (m *modelDao) createUserForSAMLRequest(ctx context.Context, email string) (
CreatedAt: time.Now(),
},
ProfilePictureURL: "", // Currently unused
GroupID: group.ID,
OrgID: domain.OrgId,
Role: authtypes.RoleViewer.String(),
OrgID: domain.OrgID,
}
user, apiErr = m.CreateUser(ctx, user, false)
user, apiErr := m.CreateUser(ctx, user, false)
if apiErr != nil {
zap.L().Error("CreateUser failed", zap.Error(apiErr))
return nil, apiErr
@@ -116,7 +108,7 @@ func (m *modelDao) CanUsePassword(ctx context.Context, email string) (bool, base
return false, baseapierr
}
if userPayload.Role != baseconst.AdminGroup {
if userPayload.Role != authtypes.RoleAdmin.String() {
return false, model.BadRequest(fmt.Errorf("auth method not supported"))
}
@@ -162,12 +154,7 @@ func (m *modelDao) PrecheckLogin(ctx context.Context, email, sourceUrl string) (
// find domain from email
orgDomain, apierr := m.GetDomainByEmail(ctx, email)
if apierr != nil {
var emailDomain string
emailComponents := strings.Split(email, "@")
if len(emailComponents) > 0 {
emailDomain = emailComponents[1]
}
zap.L().Error("failed to get org domain from email", zap.String("emailDomain", emailDomain), zap.Error(apierr.ToError()))
zap.L().Error("failed to get org domain from email", zap.String("email", email), zap.Error(apierr.ToError()))
return resp, apierr
}

View File

@@ -9,32 +9,23 @@ import (
"strings"
"time"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/ee/types"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
ossTypes "github.com/SigNoz/signoz/pkg/types"
"github.com/google/uuid"
"go.signoz.io/signoz/ee/query-service/model"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
// StoredDomain represents the stored database record for an org domain
type StoredDomain struct {
Id uuid.UUID `db:"id"`
Name string `db:"name"`
OrgId string `db:"org_id"`
Data string `db:"data"`
CreatedAt int64 `db:"created_at"`
UpdatedAt int64 `db:"updated_at"`
}
// GetDomainFromSsoResponse uses the relay state received from the IdP to fetch
// the user's org domain. The domain is then used to validate the response.
// When sending a login request to the IdP we pass the relay state as a URL (site URL)
// with domainId or domainName as a query parameter.
func (m *modelDao) GetDomainFromSsoResponse(ctx context.Context, relayState *url.URL) (*model.OrgDomain, error) {
func (m *modelDao) GetDomainFromSsoResponse(ctx context.Context, relayState *url.URL) (*types.GettableOrgDomain, error) {
// derive domain id from relay state now
var domainIdStr string
var domainNameStr string
var domain *model.OrgDomain
var domain *types.GettableOrgDomain
for k, v := range relayState.Query() {
if k == "domainId" && len(v) > 0 {
@@ -76,10 +67,14 @@ func (m *modelDao) GetDomainFromSsoResponse(ctx context.Context, relayState *url
}
// GetDomainByName returns org domain for a given domain name
func (m *modelDao) GetDomainByName(ctx context.Context, name string) (*model.OrgDomain, basemodel.BaseApiError) {
func (m *modelDao) GetDomainByName(ctx context.Context, name string) (*types.GettableOrgDomain, basemodel.BaseApiError) {
stored := StoredDomain{}
err := m.DB().Get(&stored, `SELECT * FROM org_domains WHERE name=$1 LIMIT 1`, name)
stored := types.StorableOrgDomain{}
err := m.DB().NewSelect().
Model(&stored).
Where("name = ?", name).
Limit(1).
Scan(ctx)
if err != nil {
if err == sql.ErrNoRows {
@@ -88,7 +83,7 @@ func (m *modelDao) GetDomainByName(ctx context.Context, name string) (*model.Org
return nil, model.InternalError(err)
}
domain := &model.OrgDomain{Id: stored.Id, Name: stored.Name, OrgId: stored.OrgId}
domain := &types.GettableOrgDomain{StorableOrgDomain: stored}
if err := domain.LoadConfig(stored.Data); err != nil {
return nil, model.InternalError(err)
}
@@ -96,10 +91,14 @@ func (m *modelDao) GetDomainByName(ctx context.Context, name string) (*model.Org
}
// GetDomain returns org domain for a given domain id
func (m *modelDao) GetDomain(ctx context.Context, id uuid.UUID) (*model.OrgDomain, basemodel.BaseApiError) {
func (m *modelDao) GetDomain(ctx context.Context, id uuid.UUID) (*types.GettableOrgDomain, basemodel.BaseApiError) {
stored := StoredDomain{}
err := m.DB().Get(&stored, `SELECT * FROM org_domains WHERE id=$1 LIMIT 1`, id)
stored := types.StorableOrgDomain{}
err := m.DB().NewSelect().
Model(&stored).
Where("id = ?", id).
Limit(1).
Scan(ctx)
if err != nil {
if err == sql.ErrNoRows {
@@ -108,7 +107,7 @@ func (m *modelDao) GetDomain(ctx context.Context, id uuid.UUID) (*model.OrgDomai
return nil, model.InternalError(err)
}
domain := &model.OrgDomain{Id: stored.Id, Name: stored.Name, OrgId: stored.OrgId}
domain := &types.GettableOrgDomain{StorableOrgDomain: stored}
if err := domain.LoadConfig(stored.Data); err != nil {
return nil, model.InternalError(err)
}
@@ -116,21 +115,24 @@ func (m *modelDao) GetDomain(ctx context.Context, id uuid.UUID) (*model.OrgDomai
}
// ListDomains gets the list of auth domains by org id
func (m *modelDao) ListDomains(ctx context.Context, orgId string) ([]model.OrgDomain, basemodel.BaseApiError) {
domains := []model.OrgDomain{}
func (m *modelDao) ListDomains(ctx context.Context, orgId string) ([]types.GettableOrgDomain, basemodel.BaseApiError) {
domains := []types.GettableOrgDomain{}
stored := []StoredDomain{}
err := m.DB().SelectContext(ctx, &stored, `SELECT * FROM org_domains WHERE org_id=$1`, orgId)
stored := []types.StorableOrgDomain{}
err := m.DB().NewSelect().
Model(&stored).
Where("org_id = ?", orgId).
Scan(ctx)
if err != nil {
if err == sql.ErrNoRows {
return []model.OrgDomain{}, nil
return domains, nil
}
return nil, model.InternalError(err)
}
for _, s := range stored {
domain := model.OrgDomain{Id: s.Id, Name: s.Name, OrgId: s.OrgId}
domain := types.GettableOrgDomain{StorableOrgDomain: s}
if err := domain.LoadConfig(s.Data); err != nil {
zap.L().Error("ListDomains() failed", zap.Error(err))
}
@@ -141,14 +143,14 @@ func (m *modelDao) ListDomains(ctx context.Context, orgId string) ([]model.OrgDo
}
// CreateDomain creates a new auth domain
func (m *modelDao) CreateDomain(ctx context.Context, domain *model.OrgDomain) basemodel.BaseApiError {
func (m *modelDao) CreateDomain(ctx context.Context, domain *types.GettableOrgDomain) basemodel.BaseApiError {
if domain.Id == uuid.Nil {
domain.Id = uuid.New()
if domain.ID == uuid.Nil {
domain.ID = uuid.New()
}
if domain.OrgId == "" || domain.Name == "" {
return model.BadRequest(fmt.Errorf("domain creation failed, missing fields: OrgId, Name "))
if domain.OrgID == "" || domain.Name == "" {
return model.BadRequest(fmt.Errorf("domain creation failed, missing fields: OrgID, Name "))
}
configJson, err := json.Marshal(domain)
@@ -157,14 +159,17 @@ func (m *modelDao) CreateDomain(ctx context.Context, domain *model.OrgDomain) ba
return model.InternalError(fmt.Errorf("domain creation failed"))
}
_, err = m.DB().ExecContext(ctx,
"INSERT INTO org_domains (id, name, org_id, data, created_at, updated_at) VALUES ($1, $2, $3, $4, $5, $6)",
domain.Id,
domain.Name,
domain.OrgId,
configJson,
time.Now().Unix(),
time.Now().Unix())
storableDomain := types.StorableOrgDomain{
ID: domain.ID,
Name: domain.Name,
OrgID: domain.OrgID,
Data: string(configJson),
TimeAuditable: ossTypes.TimeAuditable{CreatedAt: time.Now(), UpdatedAt: time.Now()},
}
_, err = m.DB().NewInsert().
Model(&storableDomain).
Exec(ctx)
if err != nil {
zap.L().Error("failed to insert domain in db", zap.Error(err))
@@ -175,9 +180,9 @@ func (m *modelDao) CreateDomain(ctx context.Context, domain *model.OrgDomain) ba
}
// UpdateDomain updates stored config params for a domain
func (m *modelDao) UpdateDomain(ctx context.Context, domain *model.OrgDomain) basemodel.BaseApiError {
func (m *modelDao) UpdateDomain(ctx context.Context, domain *types.GettableOrgDomain) basemodel.BaseApiError {
if domain.Id == uuid.Nil {
if domain.ID == uuid.Nil {
zap.L().Error("domain update failed", zap.Error(fmt.Errorf("OrgDomain.Id is null")))
return model.InternalError(fmt.Errorf("domain update failed"))
}
@@ -188,11 +193,19 @@ func (m *modelDao) UpdateDomain(ctx context.Context, domain *model.OrgDomain) ba
return model.InternalError(fmt.Errorf("domain update failed"))
}
_, err = m.DB().ExecContext(ctx,
"UPDATE org_domains SET data = $1, updated_at = $2 WHERE id = $3",
configJson,
time.Now().Unix(),
domain.Id)
storableDomain := &types.StorableOrgDomain{
ID: domain.ID,
Name: domain.Name,
OrgID: domain.OrgID,
Data: string(configJson),
TimeAuditable: ossTypes.TimeAuditable{UpdatedAt: time.Now()},
}
_, err = m.DB().NewUpdate().
Model(storableDomain).
Column("data", "updated_at").
WherePK().
Exec(ctx)
if err != nil {
zap.L().Error("domain update failed", zap.Error(err))
@@ -210,9 +223,11 @@ func (m *modelDao) DeleteDomain(ctx context.Context, id uuid.UUID) basemodel.Bas
return model.InternalError(fmt.Errorf("domain delete failed"))
}
_, err := m.DB().ExecContext(ctx,
"DELETE FROM org_domains WHERE id = $1",
id)
storableDomain := &types.StorableOrgDomain{ID: id}
_, err := m.DB().NewDelete().
Model(storableDomain).
WherePK().
Exec(ctx)
if err != nil {
zap.L().Error("domain delete failed", zap.Error(err))
@@ -222,7 +237,7 @@ func (m *modelDao) DeleteDomain(ctx context.Context, id uuid.UUID) basemodel.Bas
return nil
}
func (m *modelDao) GetDomainByEmail(ctx context.Context, email string) (*model.OrgDomain, basemodel.BaseApiError) {
func (m *modelDao) GetDomainByEmail(ctx context.Context, email string) (*types.GettableOrgDomain, basemodel.BaseApiError) {
if email == "" {
return nil, model.BadRequest(fmt.Errorf("could not find auth domain, missing fields: email "))
@@ -235,8 +250,12 @@ func (m *modelDao) GetDomainByEmail(ctx context.Context, email string) (*model.O
parsedDomain := components[1]
stored := StoredDomain{}
err := m.DB().Get(&stored, `SELECT * FROM org_domains WHERE name=$1 LIMIT 1`, parsedDomain)
stored := types.StorableOrgDomain{}
err := m.DB().NewSelect().
Model(&stored).
Where("name = ?", parsedDomain).
Limit(1).
Scan(ctx)
if err != nil {
if err == sql.ErrNoRows {
@@ -245,7 +264,7 @@ func (m *modelDao) GetDomainByEmail(ctx context.Context, email string) (*model.O
return nil, model.InternalError(err)
}
domain := &model.OrgDomain{Id: stored.Id, Name: stored.Name, OrgId: stored.OrgId}
domain := &types.GettableOrgDomain{StorableOrgDomain: stored}
if err := domain.LoadConfig(stored.Data); err != nil {
return nil, model.InternalError(err)
}
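Every lookup in this file follows the same translation: a raw SELECT ... WHERE ... LIMIT 1 through sqlx.Get becomes a bun NewSelect().Model(...).Where(...).Limit(1).Scan(ctx) against a storable struct. The sketch below shows that shape against a simplified domain model; the table and fields are trimmed for illustration and are not the full SigNoz schema.

package dao

import (
	"context"

	"github.com/uptrace/bun"
)

// OrgDomain is a simplified storable model; bun maps it onto the org_domains table.
type OrgDomain struct {
	bun.BaseModel `bun:"table:org_domains"`

	ID   string `bun:"id,pk"`
	Name string `bun:"name"`
	Data string `bun:"data"`
}

// getDomainByName is the bun equivalent of
// `SELECT * FROM org_domains WHERE name=$1 LIMIT 1` issued through sqlx.Get.
func getDomainByName(ctx context.Context, db *bun.DB, name string) (*OrgDomain, error) {
	stored := new(OrgDomain)
	err := db.NewSelect().
		Model(stored).
		Where("name = ?", name).
		Limit(1).
		Scan(ctx)
	if err != nil {
		return nil, err
	}
	return stored, nil
}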

View File

@@ -3,11 +3,11 @@ package sqlite
import (
"fmt"
"github.com/jmoiron/sqlx"
basedao "go.signoz.io/signoz/pkg/query-service/dao"
basedsql "go.signoz.io/signoz/pkg/query-service/dao/sqlite"
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
"go.signoz.io/signoz/pkg/sqlstore"
basedao "github.com/SigNoz/signoz/pkg/query-service/dao"
basedsql "github.com/SigNoz/signoz/pkg/query-service/dao/sqlite"
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/uptrace/bun"
)
type modelDao struct {
@@ -41,6 +41,6 @@ func InitDB(sqlStore sqlstore.SQLStore) (*modelDao, error) {
return m, nil
}
func (m *modelDao) DB() *sqlx.DB {
func (m *modelDao) DB() *bun.DB {
return m.ModelDaoSqlite.DB()
}

View File

@@ -3,65 +3,59 @@ package sqlite
import (
"context"
"fmt"
"strconv"
"time"
"go.signoz.io/signoz/ee/query-service/model"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/types"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/ee/types"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
ossTypes "github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
"go.uber.org/zap"
)
func (m *modelDao) CreatePAT(ctx context.Context, p model.PAT) (model.PAT, basemodel.BaseApiError) {
result, err := m.DB().ExecContext(ctx,
"INSERT INTO personal_access_tokens (user_id, token, role, name, created_at, expires_at, updated_at, updated_by_user_id, last_used, revoked) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)",
p.UserID,
p.Token,
p.Role,
p.Name,
p.CreatedAt,
p.ExpiresAt,
p.UpdatedAt,
p.UpdatedByUserID,
p.LastUsed,
p.Revoked,
)
func (m *modelDao) CreatePAT(ctx context.Context, orgID string, p types.GettablePAT) (types.GettablePAT, basemodel.BaseApiError) {
p.StorablePersonalAccessToken.OrgID = orgID
p.StorablePersonalAccessToken.ID = valuer.GenerateUUID()
_, err := m.DB().NewInsert().
Model(&p.StorablePersonalAccessToken).
Exec(ctx)
if err != nil {
zap.L().Error("Failed to insert PAT in db, err: %v", zap.Error(err))
return model.PAT{}, model.InternalError(fmt.Errorf("PAT insertion failed"))
return types.GettablePAT{}, model.InternalError(fmt.Errorf("PAT insertion failed"))
}
id, err := result.LastInsertId()
if err != nil {
zap.L().Error("Failed to get last inserted id, err: %v", zap.Error(err))
return model.PAT{}, model.InternalError(fmt.Errorf("PAT insertion failed"))
}
p.Id = strconv.Itoa(int(id))
createdByUser, _ := m.GetUser(ctx, p.UserID)
if createdByUser == nil {
p.CreatedByUser = model.User{
p.CreatedByUser = types.PatUser{
NotFound: true,
}
} else {
p.CreatedByUser = model.User{
Id: createdByUser.ID,
Name: createdByUser.Name,
Email: createdByUser.Email,
CreatedAt: createdByUser.CreatedAt.Unix(),
ProfilePictureURL: createdByUser.ProfilePictureURL,
NotFound: false,
p.CreatedByUser = types.PatUser{
User: ossTypes.User{
ID: createdByUser.ID,
Name: createdByUser.Name,
Email: createdByUser.Email,
TimeAuditable: ossTypes.TimeAuditable{
CreatedAt: createdByUser.CreatedAt,
UpdatedAt: createdByUser.UpdatedAt,
},
ProfilePictureURL: createdByUser.ProfilePictureURL,
},
NotFound: false,
}
}
return p, nil
}
func (m *modelDao) UpdatePAT(ctx context.Context, p model.PAT, id string) basemodel.BaseApiError {
_, err := m.DB().ExecContext(ctx,
"UPDATE personal_access_tokens SET role=$1, name=$2, updated_at=$3, updated_by_user_id=$4 WHERE id=$5 and revoked=false;",
p.Role,
p.Name,
p.UpdatedAt,
p.UpdatedByUserID,
id)
func (m *modelDao) UpdatePAT(ctx context.Context, orgID string, p types.GettablePAT, id valuer.UUID) basemodel.BaseApiError {
_, err := m.DB().NewUpdate().
Model(&p.StorablePersonalAccessToken).
Column("role", "name", "updated_at", "updated_by_user_id").
Where("id = ?", id.StringValue()).
Where("org_id = ?", orgID).
Where("revoked = false").
Exec(ctx)
if err != nil {
zap.L().Error("Failed to update PAT in db, err: %v", zap.Error(err))
return model.InternalError(fmt.Errorf("PAT update failed"))
@@ -69,66 +63,82 @@ func (m *modelDao) UpdatePAT(ctx context.Context, p model.PAT, id string) basemo
return nil
}
func (m *modelDao) UpdatePATLastUsed(ctx context.Context, token string, lastUsed int64) basemodel.BaseApiError {
_, err := m.DB().ExecContext(ctx,
"UPDATE personal_access_tokens SET last_used=$1 WHERE token=$2 and revoked=false;",
lastUsed,
token)
if err != nil {
zap.L().Error("Failed to update PAT last used in db, err: %v", zap.Error(err))
return model.InternalError(fmt.Errorf("PAT last used update failed"))
}
return nil
}
func (m *modelDao) ListPATs(ctx context.Context, orgID string) ([]types.GettablePAT, basemodel.BaseApiError) {
pats := []types.StorablePersonalAccessToken{}
func (m *modelDao) ListPATs(ctx context.Context) ([]model.PAT, basemodel.BaseApiError) {
pats := []model.PAT{}
if err := m.DB().Select(&pats, "SELECT * FROM personal_access_tokens WHERE revoked=false ORDER by updated_at DESC;"); err != nil {
if err := m.DB().NewSelect().
Model(&pats).
Where("revoked = false").
Where("org_id = ?", orgID).
Order("updated_at DESC").
Scan(ctx); err != nil {
zap.L().Error("Failed to fetch PATs err: %v", zap.Error(err))
return nil, model.InternalError(fmt.Errorf("failed to fetch PATs"))
}
patsWithUsers := []types.GettablePAT{}
for i := range pats {
patWithUser := types.GettablePAT{
StorablePersonalAccessToken: pats[i],
}
createdByUser, _ := m.GetUser(ctx, pats[i].UserID)
if createdByUser == nil {
pats[i].CreatedByUser = model.User{
patWithUser.CreatedByUser = types.PatUser{
NotFound: true,
}
} else {
pats[i].CreatedByUser = model.User{
Id: createdByUser.ID,
Name: createdByUser.Name,
Email: createdByUser.Email,
CreatedAt: createdByUser.CreatedAt.Unix(),
ProfilePictureURL: createdByUser.ProfilePictureURL,
NotFound: false,
patWithUser.CreatedByUser = types.PatUser{
User: ossTypes.User{
ID: createdByUser.ID,
Name: createdByUser.Name,
Email: createdByUser.Email,
TimeAuditable: ossTypes.TimeAuditable{
CreatedAt: createdByUser.CreatedAt,
UpdatedAt: createdByUser.UpdatedAt,
},
ProfilePictureURL: createdByUser.ProfilePictureURL,
},
NotFound: false,
}
}
updatedByUser, _ := m.GetUser(ctx, pats[i].UpdatedByUserID)
if updatedByUser == nil {
pats[i].UpdatedByUser = model.User{
patWithUser.UpdatedByUser = types.PatUser{
NotFound: true,
}
} else {
pats[i].UpdatedByUser = model.User{
Id: updatedByUser.ID,
Name: updatedByUser.Name,
Email: updatedByUser.Email,
CreatedAt: updatedByUser.CreatedAt.Unix(),
ProfilePictureURL: updatedByUser.ProfilePictureURL,
NotFound: false,
patWithUser.UpdatedByUser = types.PatUser{
User: ossTypes.User{
ID: updatedByUser.ID,
Name: updatedByUser.Name,
Email: updatedByUser.Email,
TimeAuditable: ossTypes.TimeAuditable{
CreatedAt: updatedByUser.CreatedAt,
UpdatedAt: updatedByUser.UpdatedAt,
},
ProfilePictureURL: updatedByUser.ProfilePictureURL,
},
NotFound: false,
}
}
patsWithUsers = append(patsWithUsers, patWithUser)
}
return pats, nil
return patsWithUsers, nil
}
func (m *modelDao) RevokePAT(ctx context.Context, id string, userID string) basemodel.BaseApiError {
func (m *modelDao) RevokePAT(ctx context.Context, orgID string, id valuer.UUID, userID string) basemodel.BaseApiError {
updatedAt := time.Now().Unix()
_, err := m.DB().ExecContext(ctx,
"UPDATE personal_access_tokens SET revoked=true, updated_by_user_id = $1, updated_at=$2 WHERE id=$3",
userID, updatedAt, id)
_, err := m.DB().NewUpdate().
Model(&types.StorablePersonalAccessToken{}).
Set("revoked = ?", true).
Set("updated_by_user_id = ?", userID).
Set("updated_at = ?", updatedAt).
Where("id = ?", id.StringValue()).
Where("org_id = ?", orgID).
Exec(ctx)
if err != nil {
zap.L().Error("Failed to revoke PAT in db, err: %v", zap.Error(err))
return model.InternalError(fmt.Errorf("PAT revoke failed"))
@@ -136,10 +146,14 @@ func (m *modelDao) RevokePAT(ctx context.Context, id string, userID string) base
return nil
}
func (m *modelDao) GetPAT(ctx context.Context, token string) (*model.PAT, basemodel.BaseApiError) {
pats := []model.PAT{}
func (m *modelDao) GetPAT(ctx context.Context, token string) (*types.GettablePAT, basemodel.BaseApiError) {
pats := []types.StorablePersonalAccessToken{}
if err := m.DB().Select(&pats, `SELECT * FROM personal_access_tokens WHERE token=? and revoked=false;`, token); err != nil {
if err := m.DB().NewSelect().
Model(&pats).
Where("token = ?", token).
Where("revoked = false").
Scan(ctx); err != nil {
return nil, model.InternalError(fmt.Errorf("failed to fetch PAT"))
}
@@ -150,13 +164,22 @@ func (m *modelDao) GetPAT(ctx context.Context, token string) (*model.PAT, basemo
}
}
return &pats[0], nil
patWithUser := types.GettablePAT{
StorablePersonalAccessToken: pats[0],
}
return &patWithUser, nil
}
func (m *modelDao) GetPATByID(ctx context.Context, id string) (*model.PAT, basemodel.BaseApiError) {
pats := []model.PAT{}
func (m *modelDao) GetPATByID(ctx context.Context, orgID string, id valuer.UUID) (*types.GettablePAT, basemodel.BaseApiError) {
pats := []types.StorablePersonalAccessToken{}
if err := m.DB().Select(&pats, `SELECT * FROM personal_access_tokens WHERE id=? and revoked=false;`, id); err != nil {
if err := m.DB().NewSelect().
Model(&pats).
Where("id = ?", id.StringValue()).
Where("org_id = ?", orgID).
Where("revoked = false").
Scan(ctx); err != nil {
return nil, model.InternalError(fmt.Errorf("failed to fetch PAT"))
}
@@ -167,34 +190,9 @@ func (m *modelDao) GetPATByID(ctx context.Context, id string) (*model.PAT, basem
}
}
return &pats[0], nil
}
// deprecated
func (m *modelDao) GetUserByPAT(ctx context.Context, token string) (*types.GettableUser, basemodel.BaseApiError) {
users := []types.GettableUser{}
query := `SELECT
u.id,
u.name,
u.email,
u.password,
u.created_at,
u.profile_picture_url,
u.org_id,
u.group_id
FROM users u, personal_access_tokens p
WHERE u.id = p.user_id and p.token=? and p.expires_at >= strftime('%s', 'now');`
if err := m.DB().Select(&users, query, token); err != nil {
return nil, model.InternalError(fmt.Errorf("failed to fetch user from PAT, err: %v", err))
patWithUser := types.GettablePAT{
StorablePersonalAccessToken: pats[0],
}
if len(users) != 1 {
return nil, &model.ApiError{
Typ: model.ErrorInternal,
Err: fmt.Errorf("found zero or multiple users with same PAT token"),
}
}
return &users[0], nil
return &patWithUser, nil
}
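The revoke path shows the other half of the migration: an in-place UPDATE ... SET ... WHERE statement becomes a bun NewUpdate with chained Set and Where calls, scoped to both the token id and the org. A trimmed sketch of that query shape against a simplified token model:

package dao

import (
	"context"
	"time"

	"github.com/uptrace/bun"
)

// AccessToken is a simplified storable PAT model used for illustration only.
type AccessToken struct {
	bun.BaseModel `bun:"table:personal_access_tokens"`

	ID      string `bun:"id,pk"`
	OrgID   string `bun:"org_id"`
	Revoked bool   `bun:"revoked"`
}

// revokeToken marks a token revoked for one org, mirroring the bun update in the diff.
func revokeToken(ctx context.Context, db *bun.DB, orgID, id, revokedBy string) error {
	_, err := db.NewUpdate().
		Model(&AccessToken{}).
		Set("revoked = ?", true).
		Set("updated_by_user_id = ?", revokedBy).
		Set("updated_at = ?", time.Now().Unix()).
		Where("id = ?", id).
		Where("org_id = ?", orgID).
		Exec(ctx)
	return err
}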

View File

@@ -1,8 +0,0 @@
package signozio
type status string
type ValidateLicenseResponse struct {
Status status `json:"status"`
Data map[string]interface{} `json:"data"`
}

View File

@@ -1,135 +1,67 @@
package signozio
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"time"
"github.com/pkg/errors"
"go.signoz.io/signoz/ee/query-service/constants"
"go.signoz.io/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/tidwall/gjson"
)
var C *Client
const (
POST = "POST"
APPLICATION_JSON = "application/json"
)
type Client struct {
Prefix string
GatewayUrl string
}
func New() *Client {
return &Client{
Prefix: constants.LicenseSignozIo,
GatewayUrl: constants.ZeusURL,
}
}
func init() {
C = New()
}
func ValidateLicenseV3(licenseKey string) (*model.LicenseV3, *model.ApiError) {
// Creating an HTTP client with a timeout for better control
client := &http.Client{
Timeout: 10 * time.Second,
}
req, err := http.NewRequest("GET", C.GatewayUrl+"/v2/licenses/me", nil)
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, "failed to create request"))
}
// Setting the custom header
req.Header.Set("X-Signoz-Cloud-Api-Key", licenseKey)
response, err := client.Do(req)
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, "failed to make post request"))
}
body, err := io.ReadAll(response.Body)
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, fmt.Sprintf("failed to read validation response from %v", C.GatewayUrl)))
}
defer response.Body.Close()
switch response.StatusCode {
case 200:
a := ValidateLicenseResponse{}
err = json.Unmarshal(body, &a)
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, "failed to marshal license validation response"))
}
license, err := model.NewLicenseV3(a.Data)
if err != nil {
return nil, model.BadRequest(errors.Wrap(err, "failed to generate new license v3"))
}
return license, nil
case 400:
return nil, model.BadRequest(errors.Wrap(fmt.Errorf(string(body)),
fmt.Sprintf("bad request error received from %v", C.GatewayUrl)))
case 401:
return nil, model.Unauthorized(errors.Wrap(fmt.Errorf(string(body)),
fmt.Sprintf("unauthorized request error received from %v", C.GatewayUrl)))
default:
return nil, model.InternalError(errors.Wrap(fmt.Errorf(string(body)),
fmt.Sprintf("internal request error received from %v", C.GatewayUrl)))
}
}
func NewPostRequestWithCtx(ctx context.Context, url string, contentType string, body io.Reader) (*http.Request, error) {
req, err := http.NewRequestWithContext(ctx, POST, url, body)
func ValidateLicenseV3(ctx context.Context, licenseKey string, zeus zeus.Zeus) (*model.LicenseV3, error) {
data, err := zeus.GetLicense(ctx, licenseKey)
if err != nil {
return nil, err
}
req.Header.Add("Content-Type", contentType)
return req, err
var m map[string]any
if err = json.Unmarshal(data, &m); err != nil {
return nil, err
}
license, err := model.NewLicenseV3(m)
if err != nil {
return nil, err
}
return license, nil
}
// SendUsage reports the usage of SigNoz to the license server
func SendUsage(ctx context.Context, usage model.UsagePayload) *model.ApiError {
reqString, _ := json.Marshal(usage)
req, err := NewPostRequestWithCtx(ctx, C.Prefix+"/usage", APPLICATION_JSON, bytes.NewBuffer(reqString))
func SendUsage(ctx context.Context, usage model.UsagePayload, zeus zeus.Zeus) error {
body, err := json.Marshal(usage)
if err != nil {
return model.BadRequest(errors.Wrap(err, "unable to create http request"))
return err
}
res, err := http.DefaultClient.Do(req)
if err != nil {
return model.BadRequest(errors.Wrap(err, "unable to connect with license.signoz.io, please check your network connection"))
}
body, err := io.ReadAll(res.Body)
if err != nil {
return model.BadRequest(errors.Wrap(err, "failed to read usage response from license.signoz.io"))
}
defer res.Body.Close()
switch res.StatusCode {
case 200, 201:
return nil
case 400, 401:
return model.BadRequest(errors.Wrap(fmt.Errorf(string(body)),
"bad request error received from license.signoz.io"))
default:
return model.InternalError(errors.Wrap(fmt.Errorf(string(body)),
"internal error received from license.signoz.io"))
}
return zeus.PutMeters(ctx, usage.LicenseKey.String(), body)
}
func CheckoutSession(ctx context.Context, checkoutRequest *model.CheckoutRequest, licenseKey string, zeus zeus.Zeus) (string, error) {
body, err := json.Marshal(checkoutRequest)
if err != nil {
return "", err
}
response, err := zeus.GetCheckoutURL(ctx, licenseKey, body)
if err != nil {
return "", err
}
return gjson.GetBytes(response, "url").String(), nil
}
func PortalSession(ctx context.Context, portalRequest *model.PortalRequest, licenseKey string, zeus zeus.Zeus) (string, error) {
body, err := json.Marshal(portalRequest)
if err != nil {
return "", err
}
response, err := zeus.GetPortalURL(ctx, licenseKey, body)
if err != nil {
return "", err
}
return gjson.GetBytes(response, "url").String(), nil
}
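The rewritten client delegates all HTTP plumbing to the injected zeus dependency and only unmarshals the returned payload. Here is a hedged sketch of that shape with a hypothetical LicenseFetcher interface standing in for the real zeus.Zeus, and a simplified License type in place of model.LicenseV3.

package licensing

import (
	"context"
	"encoding/json"
)

// LicenseFetcher is a hypothetical stand-in for the injected zeus client.
type LicenseFetcher interface {
	GetLicense(ctx context.Context, key string) ([]byte, error)
}

// License is a simplified stand-in for the validated license model.
type License struct {
	Fields map[string]any
}

// validateLicense fetches the raw license payload and decodes it, leaving all
// transport concerns (headers, retries, status codes) to the fetcher.
func validateLicense(ctx context.Context, key string, fetcher LicenseFetcher) (*License, error) {
	data, err := fetcher.GetLicense(ctx, key)
	if err != nil {
		return nil, err
	}
	var m map[string]any
	if err := json.Unmarshal(data, &m); err != nil {
		return nil, err
	}
	return &License{Fields: m}, nil
}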

View File

@@ -1,12 +1,11 @@
package interfaces
import (
baseint "go.signoz.io/signoz/pkg/query-service/interfaces"
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
)
// DataConnector defines methods for interacting
// with o11y data, for example ClickHouse.
type DataConnector interface {
Start(readerReady chan bool)
baseint.Reader
}

View File

@@ -9,25 +9,25 @@ import (
"github.com/jmoiron/sqlx"
"github.com/mattn/go-sqlite3"
"github.com/uptrace/bun"
"go.signoz.io/signoz/ee/query-service/model"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/types"
"github.com/SigNoz/signoz/ee/query-service/model"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"go.uber.org/zap"
)
// Repo is the license repo. It stores license keys in a secured DB.
type Repo struct {
db *sqlx.DB
bundb *bun.DB
store sqlstore.SQLStore
}
// NewLicenseRepo creates a new license repo
func NewLicenseRepo(db *sqlx.DB, bundb *bun.DB) Repo {
func NewLicenseRepo(db *sqlx.DB, store sqlstore.SQLStore) Repo {
return Repo{
db: db,
bundb: bundb,
store: store,
}
}
@@ -171,7 +171,7 @@ func (r *Repo) UpdateLicenseV3(ctx context.Context, l *model.LicenseV3) error {
func (r *Repo) CreateFeature(req *types.FeatureStatus) *basemodel.ApiError {
_, err := r.bundb.NewInsert().
_, err := r.store.BunDB().NewInsert().
Model(req).
Exec(context.Background())
if err != nil {
@@ -183,7 +183,7 @@ func (r *Repo) CreateFeature(req *types.FeatureStatus) *basemodel.ApiError {
func (r *Repo) GetFeature(featureName string) (types.FeatureStatus, error) {
var feature types.FeatureStatus
err := r.bundb.NewSelect().
err := r.store.BunDB().NewSelect().
Model(&feature).
Where("name = ?", featureName).
Scan(context.Background())
@@ -212,7 +212,7 @@ func (r *Repo) GetAllFeatures() ([]basemodel.Feature, error) {
func (r *Repo) UpdateFeature(req types.FeatureStatus) error {
_, err := r.bundb.NewUpdate().
_, err := r.store.BunDB().NewUpdate().
Model(&req).
Where("name = ?", req.Name).
Exec(context.Background())

View File

@@ -6,19 +6,18 @@ import (
"time"
"github.com/jmoiron/sqlx"
"github.com/pkg/errors"
"github.com/uptrace/bun"
"sync"
baseconstants "go.signoz.io/signoz/pkg/query-service/constants"
"go.signoz.io/signoz/pkg/types"
"go.signoz.io/signoz/pkg/types/authtypes"
baseconstants "github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/zeus"
validate "go.signoz.io/signoz/ee/query-service/integrations/signozio"
"go.signoz.io/signoz/ee/query-service/model"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
"go.signoz.io/signoz/pkg/query-service/telemetry"
validate "github.com/SigNoz/signoz/ee/query-service/integrations/signozio"
"github.com/SigNoz/signoz/ee/query-service/model"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/query-service/telemetry"
"go.uber.org/zap"
)
@@ -29,6 +28,7 @@ var validationFrequency = 24 * 60 * time.Minute
type Manager struct {
repo *Repo
zeus zeus.Zeus
mutex sync.Mutex
validatorRunning bool
// end the license validation, this is important to gracefully
@@ -45,14 +45,15 @@ type Manager struct {
activeFeatures basemodel.FeatureSet
}
func StartManager(db *sqlx.DB, bundb *bun.DB, features ...basemodel.Feature) (*Manager, error) {
func StartManager(db *sqlx.DB, store sqlstore.SQLStore, zeus zeus.Zeus, features ...basemodel.Feature) (*Manager, error) {
if LM != nil {
return LM, nil
}
repo := NewLicenseRepo(db, bundb)
repo := NewLicenseRepo(db, store)
m := &Manager{
repo: &repo,
zeus: zeus,
}
if err := m.start(features...); err != nil {
return m, err
@@ -155,7 +156,7 @@ func (lm *Manager) ValidatorV3(ctx context.Context) {
tick := time.NewTicker(validationFrequency)
defer tick.Stop()
lm.ValidateV3(ctx)
_ = lm.ValidateV3(ctx)
for {
select {
case <-lm.done:
@@ -165,24 +166,22 @@ func (lm *Manager) ValidatorV3(ctx context.Context) {
case <-lm.done:
return
case <-tick.C:
lm.ValidateV3(ctx)
_ = lm.ValidateV3(ctx)
}
}
}
}
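ValidatorV3 above is a standard ticker-plus-done-channel loop: validate once immediately, then re-validate on every tick until the manager is told to stop. A compact, self-contained sketch of that pattern (generic, not tied to the license manager):

package periodic

import "time"

// runPeriodically calls fn immediately and then on every interval tick,
// returning as soon as done is closed.
func runPeriodically(interval time.Duration, done <-chan struct{}, fn func()) {
	tick := time.NewTicker(interval)
	defer tick.Stop()

	fn()
	for {
		select {
		case <-done:
			return
		case <-tick.C:
			fn()
		}
	}
}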
func (lm *Manager) RefreshLicense(ctx context.Context) *model.ApiError {
license, apiError := validate.ValidateLicenseV3(lm.activeLicenseV3.Key)
if apiError != nil {
zap.L().Error("failed to validate license", zap.Error(apiError.Err))
return apiError
func (lm *Manager) RefreshLicense(ctx context.Context) error {
license, err := validate.ValidateLicenseV3(ctx, lm.activeLicenseV3.Key, lm.zeus)
if err != nil {
return err
}
err := lm.repo.UpdateLicenseV3(ctx, license)
err = lm.repo.UpdateLicenseV3(ctx, license)
if err != nil {
return model.BadRequest(errors.Wrap(err, "failed to update the new license"))
return err
}
lm.SetActiveV3(license)
@@ -190,7 +189,6 @@ func (lm *Manager) RefreshLicense(ctx context.Context) *model.ApiError {
}
func (lm *Manager) ValidateV3(ctx context.Context) (reterr error) {
zap.L().Info("License validation started")
if lm.activeLicenseV3 == nil {
return nil
}
@@ -236,28 +234,17 @@ func (lm *Manager) ValidateV3(ctx context.Context) (reterr error) {
return nil
}
func (lm *Manager) ActivateV3(ctx context.Context, licenseKey string) (licenseResponse *model.LicenseV3, errResponse *model.ApiError) {
defer func() {
if errResponse != nil {
claims, ok := authtypes.ClaimsFromContext(ctx)
if ok {
telemetry.GetInstance().SendEvent(telemetry.TELEMETRY_LICENSE_ACT_FAILED,
map[string]interface{}{"err": errResponse.Err.Error()}, claims.Email, true, false)
}
}
}()
license, apiError := validate.ValidateLicenseV3(licenseKey)
if apiError != nil {
zap.L().Error("failed to get the license", zap.Error(apiError.Err))
return nil, apiError
func (lm *Manager) ActivateV3(ctx context.Context, licenseKey string) (*model.LicenseV3, error) {
license, err := validate.ValidateLicenseV3(ctx, licenseKey, lm.zeus)
if err != nil {
return nil, err
}
// insert the new license into the sqlite db
err := lm.repo.InsertLicenseV3(ctx, license)
if err != nil {
zap.L().Error("failed to activate license", zap.Error(err))
return nil, err
modelErr := lm.repo.InsertLicenseV3(ctx, license)
if modelErr != nil {
zap.L().Error("failed to activate license", zap.Error(modelErr))
return nil, modelErr
}
// license is valid, activate it
@@ -265,6 +252,10 @@ func (lm *Manager) ActivateV3(ctx context.Context, licenseKey string) (licenseRe
return license, nil
}
func (lm *Manager) GetActiveLicense() *model.LicenseV3 {
return lm.activeLicenseV3
}
// CheckFeature will be internally used by backend routines
// for feature gating
func (lm *Manager) CheckFeature(featureKey string) error {

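A minimal sketch of how the reworked license manager might be driven after this change, assuming db, store, zeusClient and licenseKey are constructed elsewhere; only the signatures come from the diff above, the surrounding function is illustrative.

func activateLicense(ctx context.Context, db *sqlx.DB, store sqlstore.SQLStore, zeusClient zeus.Zeus, licenseKey string) error {
	// StartManager now takes a sqlstore.SQLStore and a zeus.Zeus instead of a *bun.DB
	lm, err := license.StartManager(db, store, zeusClient)
	if err != nil {
		return err
	}
	// ActivateV3 and RefreshLicense now return plain errors instead of *model.ApiError
	if _, err := lm.ActivateV3(ctx, licenseKey); err != nil {
		return err
	}
	return lm.RefreshLicense(ctx)
}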

@@ -3,89 +3,35 @@ package main
import (
"context"
"flag"
"log"
"os"
"os/signal"
"strconv"
"time"
"go.opentelemetry.io/otel/sdk/resource"
semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
"go.signoz.io/signoz/ee/query-service/app"
"go.signoz.io/signoz/pkg/config"
"go.signoz.io/signoz/pkg/config/envprovider"
"go.signoz.io/signoz/pkg/config/fileprovider"
"go.signoz.io/signoz/pkg/query-service/auth"
baseconst "go.signoz.io/signoz/pkg/query-service/constants"
"go.signoz.io/signoz/pkg/query-service/version"
"go.signoz.io/signoz/pkg/signoz"
"go.signoz.io/signoz/pkg/types/authtypes"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
prommodel "github.com/prometheus/common/model"
zapotlpencoder "github.com/SigNoz/zap_otlp/zap_otlp_encoder"
zapotlpsync "github.com/SigNoz/zap_otlp/zap_otlp_sync"
"github.com/SigNoz/signoz/ee/query-service/app"
"github.com/SigNoz/signoz/ee/sqlstore/postgressqlstore"
"github.com/SigNoz/signoz/ee/zeus"
"github.com/SigNoz/signoz/ee/zeus/httpzeus"
"github.com/SigNoz/signoz/pkg/config"
"github.com/SigNoz/signoz/pkg/config/envprovider"
"github.com/SigNoz/signoz/pkg/config/fileprovider"
baseconst "github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlstore/sqlstorehook"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/version"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
func initZapLog(enableQueryServiceLogOTLPExport bool) *zap.Logger {
// Deprecated: Please use the logger from pkg/instrumentation.
func initZapLog() *zap.Logger {
config := zap.NewProductionConfig()
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
defer stop()
config.EncoderConfig.EncodeDuration = zapcore.MillisDurationEncoder
config.EncoderConfig.EncodeLevel = zapcore.CapitalLevelEncoder
config.EncoderConfig.TimeKey = "timestamp"
config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
otlpEncoder := zapotlpencoder.NewOTLPEncoder(config.EncoderConfig)
consoleEncoder := zapcore.NewJSONEncoder(config.EncoderConfig)
defaultLogLevel := zapcore.InfoLevel
res := resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String("query-service"),
)
core := zapcore.NewTee(
zapcore.NewCore(consoleEncoder, os.Stdout, defaultLogLevel),
)
if enableQueryServiceLogOTLPExport {
ctx, cancel := context.WithTimeout(ctx, time.Second*30)
defer cancel()
conn, err := grpc.DialContext(ctx, baseconst.OTLPTarget, grpc.WithBlock(), grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
log.Fatalf("failed to establish connection: %v", err)
} else {
logExportBatchSizeInt, err := strconv.Atoi(baseconst.LogExportBatchSize)
if err != nil {
logExportBatchSizeInt = 512
}
ws := zapcore.AddSync(zapotlpsync.NewOtlpSyncer(conn, zapotlpsync.Options{
BatchSize: logExportBatchSizeInt,
ResourceSchema: semconv.SchemaURL,
Resource: res,
}))
core = zapcore.NewTee(
zapcore.NewCore(consoleEncoder, os.Stdout, defaultLogLevel),
zapcore.NewCore(otlpEncoder, zapcore.NewMultiWriteSyncer(ws), defaultLogLevel),
)
}
}
logger := zap.New(core, zap.AddCaller(), zap.AddStacktrace(zapcore.ErrorLevel))
logger, _ := config.Build()
return logger
}
func init() {
prommodel.NameValidationScheme = prommodel.UTF8Validation
}
func main() {
var promConfigPath, skipTopLvlOpsPath string
@@ -99,7 +45,6 @@ func main() {
var useLogsNewSchema bool
var useTraceNewSchema bool
var cacheConfigPath, fluxInterval, fluxIntervalForTraceDetail string
var enableQueryServiceLogOTLPExport bool
var preferSpanMetrics bool
var maxIdleConns int
@@ -108,32 +53,38 @@ func main() {
var gatewayUrl string
var useLicensesV3 bool
// Deprecated
flag.BoolVar(&useLogsNewSchema, "use-logs-new-schema", false, "use logs_v2 schema for logs")
// Deprecated
flag.BoolVar(&useTraceNewSchema, "use-trace-new-schema", false, "use new schema for traces")
// Deprecated
flag.StringVar(&promConfigPath, "config", "./config/prometheus.yml", "(prometheus config to read metrics)")
// Deprecated
flag.StringVar(&skipTopLvlOpsPath, "skip-top-level-ops", "", "(config file to skip top level operations)")
// Deprecated
flag.BoolVar(&disableRules, "rules.disable", false, "(disable rule evaluation)")
flag.BoolVar(&preferSpanMetrics, "prefer-span-metrics", false, "(prefer span metrics for service level metrics)")
// Deprecated
flag.IntVar(&maxIdleConns, "max-idle-conns", 50, "(number of connections to maintain in the pool.)")
// Deprecated
flag.IntVar(&maxOpenConns, "max-open-conns", 100, "(max connections for use at any time.)")
// Deprecated
flag.DurationVar(&dialTimeout, "dial-timeout", 5*time.Second, "(the maximum time to establish a connection.)")
// Deprecated
flag.StringVar(&ruleRepoURL, "rules.repo-url", baseconst.AlertHelpPage, "(host address used to build rule link in alert messages)")
flag.StringVar(&cacheConfigPath, "experimental.cache-config", "", "(cache config to use)")
flag.StringVar(&fluxInterval, "flux-interval", "5m", "(the interval to exclude data from being cached to avoid incorrect cache for data in motion)")
flag.StringVar(&fluxIntervalForTraceDetail, "flux-interval-trace-detail", "2m", "(the interval to exclude data from being cached to avoid incorrect cache for trace data in motion)")
flag.BoolVar(&enableQueryServiceLogOTLPExport, "enable.query.service.log.otlp.export", false, "(enable query service log otlp export)")
flag.StringVar(&cluster, "cluster", "cluster", "(cluster name - defaults to 'cluster')")
flag.StringVar(&gatewayUrl, "gateway-url", "", "(url to the gateway)")
// Deprecated
flag.BoolVar(&useLicensesV3, "use-licenses-v3", false, "use licenses_v3 schema for licenses")
flag.Parse()
loggerMgr := initZapLog(enableQueryServiceLogOTLPExport)
loggerMgr := initZapLog()
zap.ReplaceGlobals(loggerMgr)
defer loggerMgr.Sync() // flushes buffer, if any
version.PrintVersion()
config, err := signoz.NewConfig(context.Background(), config.ResolverConfig{
Uris: []string{"env:"},
ProviderFactories: []config.ProviderFactory{
@@ -144,21 +95,31 @@ func main() {
MaxIdleConns: maxIdleConns,
MaxOpenConns: maxOpenConns,
DialTimeout: dialTimeout,
Config: promConfigPath,
})
if err != nil {
zap.L().Fatal("Failed to create config", zap.Error(err))
}
version.Info.PrettyPrint(config.Version)
sqlStoreFactories := signoz.NewSQLStoreProviderFactories()
if err := sqlStoreFactories.Add(postgressqlstore.NewFactory(sqlstorehook.NewLoggingFactory())); err != nil {
zap.L().Fatal("Failed to add postgressqlstore factory", zap.Error(err))
}
signoz, err := signoz.New(
context.Background(),
config,
zeus.Config(),
httpzeus.NewProviderFactory(),
signoz.NewCacheProviderFactories(),
signoz.NewWebProviderFactories(),
signoz.NewSQLStoreProviderFactories(),
sqlStoreFactories,
signoz.NewTelemetryStoreProviderFactories(),
)
if err != nil {
zap.L().Fatal("Failed to create signoz struct", zap.Error(err))
zap.L().Fatal("Failed to create signoz", zap.Error(err))
}
jwtSecret := os.Getenv("SIGNOZ_JWT_SECRET")
@@ -175,19 +136,12 @@ func main() {
Config: config,
SigNoz: signoz,
HTTPHostPort: baseconst.HTTPHostPort,
PromConfigPath: promConfigPath,
SkipTopLvlOpsPath: skipTopLvlOpsPath,
PreferSpanMetrics: preferSpanMetrics,
PrivateHostPort: baseconst.PrivateHostPort,
DisableRules: disableRules,
RuleRepoURL: ruleRepoURL,
CacheConfigPath: cacheConfigPath,
FluxInterval: fluxInterval,
FluxIntervalForTraceDetail: fluxIntervalForTraceDetail,
Cluster: cluster,
GatewayUrl: gatewayUrl,
UseLogsNewSchema: useLogsNewSchema,
UseTraceNewSchema: useTraceNewSchema,
Jwt: jwt,
}
@@ -196,14 +150,10 @@ func main() {
zap.L().Fatal("Failed to create server", zap.Error(err))
}
if err := server.Start(); err != nil {
if err := server.Start(context.Background()); err != nil {
zap.L().Fatal("Could not start server", zap.Error(err))
}
if err := auth.InitAuthCache(context.Background()); err != nil {
zap.L().Fatal("Failed to initialize auth cache", zap.Error(err))
}
signoz.Start(context.Background())
if err := signoz.Wait(context.Background()); err != nil {


@@ -1,7 +1,7 @@
package model
import (
basemodel "go.signoz.io/signoz/pkg/query-service/model"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
)
// GettableInvitation overrides base object and adds precheck into


@@ -3,7 +3,7 @@ package model
import (
"fmt"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
)
type ApiError struct {


@@ -6,8 +6,8 @@ import (
"reflect"
"time"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/pkg/errors"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
)
type License struct {
@@ -157,8 +157,6 @@ func NewLicenseV3(data map[string]interface{}) (*LicenseV3, error) {
}
switch planName {
case PlanNameTeams:
features = append(features, ProPlan...)
case PlanNameEnterprise:
features = append(features, EnterprisePlan...)
case PlanNameBasic:
@@ -236,3 +234,11 @@ func ConvertLicenseV3ToLicenseV2(l *LicenseV3) *License {
}
}
type CheckoutRequest struct {
SuccessURL string `json:"url"`
}
type PortalRequest struct {
SuccessURL string `json:"url"`
}


@@ -4,10 +4,10 @@ import (
"encoding/json"
"testing"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.signoz.io/signoz/pkg/query-service/model"
)
func TestNewLicenseV3(t *testing.T) {
@@ -74,21 +74,21 @@ func TestNewLicenseV3(t *testing.T) {
},
{
name: "Parse the entire license properly",
data: []byte(`{"id":"does-not-matter","key":"does-not-matter-key","category":"FREE","status":"ACTIVE","plan":{"name":"TEAMS"},"valid_from": 1730899309,"valid_until": -1}`),
data: []byte(`{"id":"does-not-matter","key":"does-not-matter-key","category":"FREE","status":"ACTIVE","plan":{"name":"ENTERPRISE"},"valid_from": 1730899309,"valid_until": -1}`),
pass: true,
expected: &LicenseV3{
ID: "does-not-matter",
Key: "does-not-matter-key",
Data: map[string]interface{}{
"plan": map[string]interface{}{
"name": "TEAMS",
"name": "ENTERPRISE",
},
"category": "FREE",
"status": "ACTIVE",
"valid_from": float64(1730899309),
"valid_until": float64(-1),
},
PlanName: PlanNameTeams,
PlanName: PlanNameEnterprise,
ValidFrom: 1730899309,
ValidUntil: -1,
Status: "ACTIVE",
@@ -98,14 +98,14 @@ func TestNewLicenseV3(t *testing.T) {
},
{
name: "Fallback to basic plan if license status is invalid",
data: []byte(`{"id":"does-not-matter","key":"does-not-matter-key","category":"FREE","status":"INVALID","plan":{"name":"TEAMS"},"valid_from": 1730899309,"valid_until": -1}`),
data: []byte(`{"id":"does-not-matter","key":"does-not-matter-key","category":"FREE","status":"INVALID","plan":{"name":"ENTERPRISE"},"valid_from": 1730899309,"valid_until": -1}`),
pass: true,
expected: &LicenseV3{
ID: "does-not-matter",
Key: "does-not-matter-key",
Data: map[string]interface{}{
"plan": map[string]interface{}{
"name": "TEAMS",
"name": "ENTERPRISE",
},
"category": "FREE",
"status": "INVALID",
@@ -122,21 +122,21 @@ func TestNewLicenseV3(t *testing.T) {
},
{
name: "fallback states for validFrom and validUntil",
data: []byte(`{"id":"does-not-matter","key":"does-not-matter-key","category":"FREE","status":"ACTIVE","plan":{"name":"TEAMS"},"valid_from":1234.456,"valid_until":5678.567}`),
data: []byte(`{"id":"does-not-matter","key":"does-not-matter-key","category":"FREE","status":"ACTIVE","plan":{"name":"ENTERPRISE"},"valid_from":1234.456,"valid_until":5678.567}`),
pass: true,
expected: &LicenseV3{
ID: "does-not-matter",
Key: "does-not-matter-key",
Data: map[string]interface{}{
"plan": map[string]interface{}{
"name": "TEAMS",
"name": "ENTERPRISE",
},
"valid_from": 1234.456,
"valid_until": 5678.567,
"category": "FREE",
"status": "ACTIVE",
},
PlanName: PlanNameTeams,
PlanName: PlanNameEnterprise,
ValidFrom: 1234,
ValidUntil: 5678,
Status: "ACTIVE",


@@ -1,32 +1,7 @@
package model
type User struct {
Id string `json:"id" db:"id"`
Name string `json:"name" db:"name"`
Email string `json:"email" db:"email"`
CreatedAt int64 `json:"createdAt" db:"created_at"`
ProfilePictureURL string `json:"profilePictureURL" db:"profile_picture_url"`
NotFound bool `json:"notFound"`
}
type CreatePATRequestBody struct {
Name string `json:"name"`
Role string `json:"role"`
ExpiresInDays int64 `json:"expiresInDays"`
}
type PAT struct {
Id string `json:"id" db:"id"`
UserID string `json:"userId" db:"user_id"`
CreatedByUser User `json:"createdByUser"`
UpdatedByUser User `json:"updatedByUser"`
Token string `json:"token" db:"token"`
Role string `json:"role" db:"role"`
Name string `json:"name" db:"name"`
CreatedAt int64 `json:"createdAt" db:"created_at"`
ExpiresAt int64 `json:"expiresAt" db:"expires_at"`
UpdatedAt int64 `json:"updatedAt" db:"updated_at"`
LastUsed int64 `json:"lastUsed" db:"last_used"`
Revoked bool `json:"revoked" db:"revoked"`
UpdatedByUserID string `json:"updatedByUserId" db:"updated_by_user_id"`
}


@@ -1,30 +1,26 @@
package model
import (
"go.signoz.io/signoz/pkg/query-service/constants"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
)
const SSO = "SSO"
const Basic = "BASIC_PLAN"
const Pro = "PRO_PLAN"
const Enterprise = "ENTERPRISE_PLAN"
var (
PlanNameEnterprise = "ENTERPRISE"
PlanNameTeams = "TEAMS"
PlanNameBasic = "BASIC"
)
var (
MapOldPlanKeyToNewPlanName map[string]string = map[string]string{PlanNameBasic: Basic, PlanNameTeams: Pro, PlanNameEnterprise: Enterprise}
MapOldPlanKeyToNewPlanName map[string]string = map[string]string{PlanNameBasic: Basic, PlanNameEnterprise: Enterprise}
)
var (
LicenseStatusInvalid = "INVALID"
)
const DisableUpsell = "DISABLE_UPSELL"
const Onboarding = "ONBOARDING"
const ChatSupport = "CHAT_SUPPORT"
const Gateway = "GATEWAY"
@@ -38,90 +34,6 @@ var BasicPlan = basemodel.FeatureSet{
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.OSS,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: DisableUpsell,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.SmartTraceDetail,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.CustomMetricsFunction,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.QueryBuilderPanels,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.QueryBuilderAlerts,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelSlack,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelWebhook,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelPagerduty,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelOpsgenie,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelEmail,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelMsTeams,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.UseSpanMetrics,
Active: false,
@@ -151,134 +63,12 @@ var BasicPlan = basemodel.FeatureSet{
Route: "",
},
basemodel.Feature{
Name: basemodel.HostsInfraMonitoring,
Active: constants.EnableHostsInfraMonitoring(),
Usage: 0,
UsageLimit: -1,
Route: "",
},
}
var ProPlan = basemodel.FeatureSet{
basemodel.Feature{
Name: SSO,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.OSS,
Name: basemodel.TraceFunnels,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.SmartTraceDetail,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.CustomMetricsFunction,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.QueryBuilderPanels,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.QueryBuilderAlerts,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelSlack,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelWebhook,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelPagerduty,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelOpsgenie,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelEmail,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelMsTeams,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.UseSpanMetrics,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: Gateway,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: PremiumSupport,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AnomalyDetection,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.HostsInfraMonitoring,
Active: constants.EnableHostsInfraMonitoring(),
Usage: 0,
UsageLimit: -1,
Route: "",
},
}
var EnterprisePlan = basemodel.FeatureSet{
@@ -289,83 +79,6 @@ var EnterprisePlan = basemodel.FeatureSet{
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.OSS,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.SmartTraceDetail,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.CustomMetricsFunction,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.QueryBuilderPanels,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.QueryBuilderAlerts,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelSlack,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelWebhook,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelPagerduty,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelOpsgenie,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelEmail,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.AlertChannelMsTeams,
Active: true,
Usage: 0,
UsageLimit: -1,
Route: "",
},
basemodel.Feature{
Name: basemodel.UseSpanMetrics,
Active: false,
@@ -409,8 +122,8 @@ var EnterprisePlan = basemodel.FeatureSet{
Route: "",
},
basemodel.Feature{
Name: basemodel.HostsInfraMonitoring,
Active: constants.EnableHostsInfraMonitoring(),
Name: basemodel.TraceFunnels,
Active: false,
Usage: 0,
UsageLimit: -1,
Route: "",


@@ -11,22 +11,23 @@ import (
"go.uber.org/zap"
"go.signoz.io/signoz/ee/query-service/anomaly"
"go.signoz.io/signoz/pkg/query-service/cache"
"go.signoz.io/signoz/pkg/query-service/common"
"go.signoz.io/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/ee/query-service/anomaly"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/query-service/common"
"github.com/SigNoz/signoz/pkg/query-service/model"
ruletypes "github.com/SigNoz/signoz/pkg/types/ruletypes"
querierV2 "go.signoz.io/signoz/pkg/query-service/app/querier/v2"
"go.signoz.io/signoz/pkg/query-service/app/queryBuilder"
"go.signoz.io/signoz/pkg/query-service/interfaces"
v3 "go.signoz.io/signoz/pkg/query-service/model/v3"
"go.signoz.io/signoz/pkg/query-service/utils/labels"
"go.signoz.io/signoz/pkg/query-service/utils/times"
"go.signoz.io/signoz/pkg/query-service/utils/timestamp"
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/query-service/utils/times"
"github.com/SigNoz/signoz/pkg/query-service/utils/timestamp"
"go.signoz.io/signoz/pkg/query-service/formatter"
"github.com/SigNoz/signoz/pkg/query-service/formatter"
baserules "go.signoz.io/signoz/pkg/query-service/rules"
baserules "github.com/SigNoz/signoz/pkg/query-service/rules"
yaml "gopkg.in/yaml.v2"
)
@@ -52,8 +53,7 @@ type AnomalyRule struct {
func NewAnomalyRule(
id string,
p *baserules.PostableRule,
featureFlags interfaces.FeatureLookup,
p *ruletypes.PostableRule,
reader interfaces.Reader,
cache cache.Cache,
opts ...baserules.RuleOption,
@@ -61,7 +61,7 @@ func NewAnomalyRule(
zap.L().Info("creating new AnomalyRule", zap.String("id", id), zap.Any("opts", opts))
if p.RuleCondition.CompareOp == baserules.ValueIsBelow {
if p.RuleCondition.CompareOp == ruletypes.ValueIsBelow {
target := -1 * *p.RuleCondition.Target
p.RuleCondition.Target = &target
}
@@ -89,10 +89,9 @@ func NewAnomalyRule(
zap.L().Info("using seasonality", zap.String("seasonality", t.seasonality.String()))
querierOptsV2 := querierV2.QuerierOptions{
Reader: reader,
Cache: cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
FeatureLookup: featureFlags,
Reader: reader,
Cache: cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
}
t.querierV2 = querierV2.NewQuerier(querierOptsV2)
@@ -102,27 +101,24 @@ func NewAnomalyRule(
anomaly.WithCache[*anomaly.HourlyProvider](cache),
anomaly.WithKeyGenerator[*anomaly.HourlyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.HourlyProvider](reader),
anomaly.WithFeatureLookup[*anomaly.HourlyProvider](featureFlags),
)
} else if t.seasonality == anomaly.SeasonalityDaily {
t.provider = anomaly.NewDailyProvider(
anomaly.WithCache[*anomaly.DailyProvider](cache),
anomaly.WithKeyGenerator[*anomaly.DailyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.DailyProvider](reader),
anomaly.WithFeatureLookup[*anomaly.DailyProvider](featureFlags),
)
} else if t.seasonality == anomaly.SeasonalityWeekly {
t.provider = anomaly.NewWeeklyProvider(
anomaly.WithCache[*anomaly.WeeklyProvider](cache),
anomaly.WithKeyGenerator[*anomaly.WeeklyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.WeeklyProvider](reader),
anomaly.WithFeatureLookup[*anomaly.WeeklyProvider](featureFlags),
)
}
return &t, nil
}
func (r *AnomalyRule) Type() baserules.RuleType {
func (r *AnomalyRule) Type() ruletypes.RuleType {
return RuleTypeAnomaly
}
@@ -162,7 +158,7 @@ func (r *AnomalyRule) GetSelectedQuery() string {
return r.Condition().GetSelectedQueryName()
}
func (r *AnomalyRule) buildAndRunQuery(ctx context.Context, ts time.Time) (baserules.Vector, error) {
func (r *AnomalyRule) buildAndRunQuery(ctx context.Context, ts time.Time) (ruletypes.Vector, error) {
params, err := r.prepareQueryRange(ts)
if err != nil {
@@ -189,7 +185,7 @@ func (r *AnomalyRule) buildAndRunQuery(ctx context.Context, ts time.Time) (baser
}
}
var resultVector baserules.Vector
var resultVector ruletypes.Vector
scoresJSON, _ := json.Marshal(queryResult.AnomalyScores)
zap.L().Info("anomaly scores", zap.String("scores", string(scoresJSON)))
@@ -218,7 +214,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (interface{}, erro
defer r.mtx.Unlock()
resultFPs := map[uint64]struct{}{}
var alerts = make(map[uint64]*baserules.Alert, len(res))
var alerts = make(map[uint64]*ruletypes.Alert, len(res))
for _, smpl := range res {
l := make(map[string]string, len(smpl.Metric))
@@ -230,7 +226,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (interface{}, erro
threshold := valueFormatter.Format(r.TargetVal(), r.Unit())
zap.L().Debug("Alert template data for rule", zap.String("name", r.Name()), zap.String("formatter", valueFormatter.Name()), zap.String("value", value), zap.String("threshold", threshold))
tmplData := baserules.AlertTemplateData(l, value, threshold)
tmplData := ruletypes.AlertTemplateData(l, value, threshold)
// Inject some convenience variables that are easier to remember for users
// who are not used to Go's templating system.
defs := "{{$labels := .Labels}}{{$value := .Value}}{{$threshold := .Threshold}}"
@@ -238,7 +234,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (interface{}, erro
// utility function to apply go template on labels and annotations
expand := func(text string) string {
tmpl := baserules.NewTemplateExpander(
tmpl := ruletypes.NewTemplateExpander(
ctx,
defs+text,
"__alert_"+r.Name(),
@@ -283,7 +279,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (interface{}, erro
return nil, err
}
alerts[h] = &baserules.Alert{
alerts[h] = &ruletypes.Alert{
Labels: lbs,
QueryResultLables: resultLabels,
Annotations: annotations,
@@ -324,7 +320,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (interface{}, erro
if _, ok := resultFPs[fp]; !ok {
// If the alert was previously firing, keep it around for a given
// retention time so it is reported as resolved to the AlertManager.
if a.State == model.StatePending || (!a.ResolvedAt.IsZero() && ts.Sub(a.ResolvedAt) > baserules.ResolvedRetention) {
if a.State == model.StatePending || (!a.ResolvedAt.IsZero() && ts.Sub(a.ResolvedAt) > ruletypes.ResolvedRetention) {
delete(r.Active, fp)
}
if a.State != model.StateInactive {
@@ -380,10 +376,10 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (interface{}, erro
func (r *AnomalyRule) String() string {
ar := baserules.PostableRule{
ar := ruletypes.PostableRule{
AlertName: r.Name(),
RuleCondition: r.Condition(),
EvalWindow: baserules.Duration(r.EvalWindow()),
EvalWindow: ruletypes.Duration(r.EvalWindow()),
Labels: r.Labels().Map(),
Annotations: r.Annotations().Map(),
PreferredChannels: r.PreferredChannels(),


@@ -5,10 +5,11 @@ import (
"fmt"
"time"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
baserules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
ruletypes "github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/google/uuid"
basemodel "go.signoz.io/signoz/pkg/query-service/model"
baserules "go.signoz.io/signoz/pkg/query-service/rules"
"go.signoz.io/signoz/pkg/query-service/utils/labels"
"go.uber.org/zap"
)
@@ -18,15 +19,12 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
var task baserules.Task
ruleId := baserules.RuleIdFromTaskName(opts.TaskName)
if opts.Rule.RuleType == baserules.RuleTypeThreshold {
if opts.Rule.RuleType == ruletypes.RuleTypeThreshold {
// create a threshold rule
tr, err := baserules.NewThresholdRule(
ruleId,
opts.Rule,
opts.FF,
opts.Reader,
opts.UseLogsNewSchema,
opts.UseTraceNewSchema,
baserules.WithEvalDelay(opts.ManagerOpts.EvalDelay),
baserules.WithSQLStore(opts.SQLStore),
)
@@ -38,9 +36,9 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
rules = append(rules, tr)
// create ch rule task for evaluation
task = newTask(baserules.TaskTypeCh, opts.TaskName, time.Duration(opts.Rule.Frequency), rules, opts.ManagerOpts, opts.NotifyFunc, opts.RuleDB)
task = newTask(baserules.TaskTypeCh, opts.TaskName, time.Duration(opts.Rule.Frequency), rules, opts.ManagerOpts, opts.NotifyFunc, opts.MaintenanceStore, opts.OrgID)
} else if opts.Rule.RuleType == baserules.RuleTypeProm {
} else if opts.Rule.RuleType == ruletypes.RuleTypeProm {
// create promql rule
pr, err := baserules.NewPromRule(
@@ -48,7 +46,7 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
opts.Rule,
opts.Logger,
opts.Reader,
opts.ManagerOpts.PqlEngine,
opts.ManagerOpts.Prometheus,
baserules.WithSQLStore(opts.SQLStore),
)
@@ -59,14 +57,13 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
rules = append(rules, pr)
// create promql rule task for evaluation
task = newTask(baserules.TaskTypeProm, opts.TaskName, time.Duration(opts.Rule.Frequency), rules, opts.ManagerOpts, opts.NotifyFunc, opts.RuleDB)
task = newTask(baserules.TaskTypeProm, opts.TaskName, time.Duration(opts.Rule.Frequency), rules, opts.ManagerOpts, opts.NotifyFunc, opts.MaintenanceStore, opts.OrgID)
} else if opts.Rule.RuleType == baserules.RuleTypeAnomaly {
} else if opts.Rule.RuleType == ruletypes.RuleTypeAnomaly {
// create anomaly rule
ar, err := NewAnomalyRule(
ruleId,
opts.Rule,
opts.FF,
opts.Reader,
opts.Cache,
baserules.WithEvalDelay(opts.ManagerOpts.EvalDelay),
@@ -79,10 +76,10 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
rules = append(rules, ar)
// create anomaly rule task for evaluation
task = newTask(baserules.TaskTypeCh, opts.TaskName, time.Duration(opts.Rule.Frequency), rules, opts.ManagerOpts, opts.NotifyFunc, opts.RuleDB)
task = newTask(baserules.TaskTypeCh, opts.TaskName, time.Duration(opts.Rule.Frequency), rules, opts.ManagerOpts, opts.NotifyFunc, opts.MaintenanceStore, opts.OrgID)
} else {
return nil, fmt.Errorf("unsupported rule type %s. Supported types: %s, %s", opts.Rule.RuleType, baserules.RuleTypeProm, baserules.RuleTypeThreshold)
return nil, fmt.Errorf("unsupported rule type %s. Supported types: %s, %s", opts.Rule.RuleType, ruletypes.RuleTypeProm, ruletypes.RuleTypeThreshold)
}
return task, nil
@@ -107,12 +104,12 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
}
// append a suffix to the name to indicate this is a test alert
parsedRule.AlertName = fmt.Sprintf("%s%s", alertname, baserules.TestAlertPostFix)
parsedRule.AlertName = fmt.Sprintf("%s%s", alertname, ruletypes.TestAlertPostFix)
var rule baserules.Rule
var err error
if parsedRule.RuleType == baserules.RuleTypeThreshold {
if parsedRule.RuleType == ruletypes.RuleTypeThreshold {
// add special labels for test alerts
parsedRule.Annotations[labels.AlertSummaryLabel] = fmt.Sprintf("The rule threshold is set to %.4f, and the observed metric value is {{$value}}.", *parsedRule.RuleCondition.Target)
@@ -123,21 +120,18 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
rule, err = baserules.NewThresholdRule(
alertname,
parsedRule,
opts.FF,
opts.Reader,
opts.UseLogsNewSchema,
opts.UseTraceNewSchema,
baserules.WithSendAlways(),
baserules.WithSendUnmatched(),
baserules.WithSQLStore(opts.SQLStore),
)
if err != nil {
zap.L().Error("failed to prepare a new threshold rule for test", zap.String("name", rule.Name()), zap.Error(err))
zap.L().Error("failed to prepare a new threshold rule for test", zap.String("name", alertname), zap.Error(err))
return 0, basemodel.BadRequest(err)
}
} else if parsedRule.RuleType == baserules.RuleTypeProm {
} else if parsedRule.RuleType == ruletypes.RuleTypeProm {
// create promql rule
rule, err = baserules.NewPromRule(
@@ -145,22 +139,21 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
parsedRule,
opts.Logger,
opts.Reader,
opts.ManagerOpts.PqlEngine,
opts.ManagerOpts.Prometheus,
baserules.WithSendAlways(),
baserules.WithSendUnmatched(),
baserules.WithSQLStore(opts.SQLStore),
)
if err != nil {
zap.L().Error("failed to prepare a new promql rule for test", zap.String("name", rule.Name()), zap.Error(err))
zap.L().Error("failed to prepare a new promql rule for test", zap.String("name", alertname), zap.Error(err))
return 0, basemodel.BadRequest(err)
}
} else if parsedRule.RuleType == baserules.RuleTypeAnomaly {
} else if parsedRule.RuleType == ruletypes.RuleTypeAnomaly {
// create anomaly rule
rule, err = NewAnomalyRule(
alertname,
parsedRule,
opts.FF,
opts.Reader,
opts.Cache,
baserules.WithSendAlways(),
@@ -168,7 +161,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
baserules.WithSQLStore(opts.SQLStore),
)
if err != nil {
zap.L().Error("failed to prepare a new anomaly rule for test", zap.String("name", rule.Name()), zap.Error(err))
zap.L().Error("failed to prepare a new anomaly rule for test", zap.String("name", alertname), zap.Error(err))
return 0, basemodel.BadRequest(err)
}
} else {
@@ -194,9 +187,9 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
// newTask returns an appropriate group for
// rule type
func newTask(taskType baserules.TaskType, name string, frequency time.Duration, rules []baserules.Rule, opts *baserules.ManagerOptions, notify baserules.NotifyFunc, ruleDB baserules.RuleDB) baserules.Task {
func newTask(taskType baserules.TaskType, name string, frequency time.Duration, rules []baserules.Rule, opts *baserules.ManagerOptions, notify baserules.NotifyFunc, maintenanceStore ruletypes.MaintenanceStore, orgID string) baserules.Task {
if taskType == baserules.TaskTypeCh {
return baserules.NewRuleTask(name, "", frequency, rules, opts, notify, ruleDB)
return baserules.NewRuleTask(name, "", frequency, rules, opts, notify, maintenanceStore, orgID)
}
return baserules.NewPromRuleTask(name, "", frequency, rules, opts, notify, ruleDB)
return baserules.NewPromRuleTask(name, "", frequency, rules, opts, notify, maintenanceStore, orgID)
}


@@ -7,9 +7,9 @@ import (
"fmt"
"strings"
"github.com/SigNoz/signoz/pkg/query-service/constants"
saml2 "github.com/russellhaering/gosaml2"
dsig "github.com/russellhaering/goxmldsig"
"go.signoz.io/signoz/pkg/query-service/constants"
"go.uber.org/zap"
)


@@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
"regexp"
"strings"
"sync/atomic"
"time"
@@ -15,11 +14,11 @@ import (
"go.uber.org/zap"
"go.signoz.io/signoz/ee/query-service/dao"
licenseserver "go.signoz.io/signoz/ee/query-service/integrations/signozio"
"go.signoz.io/signoz/ee/query-service/license"
"go.signoz.io/signoz/ee/query-service/model"
"go.signoz.io/signoz/pkg/query-service/utils/encryption"
"github.com/SigNoz/signoz/ee/query-service/dao"
"github.com/SigNoz/signoz/ee/query-service/license"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/pkg/query-service/utils/encryption"
"github.com/SigNoz/signoz/pkg/zeus"
)
const (
@@ -42,26 +41,16 @@ type Manager struct {
modelDao dao.ModelDao
tenantID string
zeus zeus.Zeus
}
func New(modelDao dao.ModelDao, licenseRepo *license.Repo, clickhouseConn clickhouse.Conn, chUrl string) (*Manager, error) {
hostNameRegex := regexp.MustCompile(`tcp://(?P<hostname>.*):`)
hostNameRegexMatches := hostNameRegex.FindStringSubmatch(chUrl)
tenantID := ""
if len(hostNameRegexMatches) == 2 {
tenantID = hostNameRegexMatches[1]
tenantID = strings.TrimSuffix(tenantID, "-clickhouse")
}
func New(modelDao dao.ModelDao, licenseRepo *license.Repo, clickhouseConn clickhouse.Conn, zeus zeus.Zeus) (*Manager, error) {
m := &Manager{
// repository: repo,
clickhouseConn: clickhouseConn,
licenseRepo: licenseRepo,
scheduler: gocron.NewScheduler(time.UTC).Every(1).Day().At("00:00"), // send usage every day at 00:00 UTC
modelDao: modelDao,
tenantID: tenantID,
zeus: zeus,
}
return m, nil
}
@@ -138,15 +127,6 @@ func (lm *Manager) UploadUsage() {
zap.L().Info("uploading usage data")
orgName := ""
orgNames, orgError := lm.modelDao.GetOrgs(ctx)
if orgError != nil {
zap.L().Error("failed to get org data: %v", zap.Error(orgError))
}
if len(orgNames) == 1 {
orgName = orgNames[0].Name
}
usagesPayload := []model.Usage{}
for _, usage := range usages {
usageDataBytes, err := encryption.Decrypt([]byte(usage.ExporterID[:32]), []byte(usage.Data))
@@ -165,9 +145,9 @@ func (lm *Manager) UploadUsage() {
usageData.CollectorID = usage.CollectorID
usageData.ExporterID = usage.ExporterID
usageData.Type = usage.Type
usageData.Tenant = usage.Tenant
usageData.OrgName = orgName
usageData.TenantId = lm.tenantID
usageData.Tenant = "default"
usageData.OrgName = "default"
usageData.TenantId = "default"
usagesPayload = append(usagesPayload, usageData)
}
@@ -176,24 +156,18 @@ func (lm *Manager) UploadUsage() {
LicenseKey: key,
Usage: usagesPayload,
}
lm.UploadUsageWithExponentalBackOff(ctx, payload)
}
func (lm *Manager) UploadUsageWithExponentalBackOff(ctx context.Context, payload model.UsagePayload) {
for i := 1; i <= MaxRetries; i++ {
apiErr := licenseserver.SendUsage(ctx, payload)
if apiErr != nil && i == MaxRetries {
zap.L().Error("retries stopped : %v", zap.Error(apiErr))
// not returning error here since it is captured in the failed count
return
} else if apiErr != nil {
// sleeping for exponential backoff
sleepDuration := RetryInterval * time.Duration(i)
zap.L().Error("failed to upload snapshot retrying after %v secs : %v", zap.Duration("sleepDuration", sleepDuration), zap.Error(apiErr.Err))
time.Sleep(sleepDuration)
} else {
break
}
body, errv2 := json.Marshal(payload)
if errv2 != nil {
zap.L().Error("error while marshalling usage payload: %v", zap.Error(errv2))
return
}
errv2 = lm.zeus.PutMeters(ctx, payload.LicenseKey.String(), body)
if errv2 != nil {
zap.L().Error("failed to upload usage: %v", zap.Error(errv2))
// not returning error here since it is captured in the failed count
return
}
}


@@ -0,0 +1,444 @@
package postgressqlstore
import (
"context"
"fmt"
"reflect"
"slices"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/uptrace/bun"
)
var (
Identity = "id"
Integer = "bigint"
Text = "text"
)
var (
Org = "org"
User = "user"
CloudIntegration = "cloud_integration"
)
var (
OrgReference = `("org_id") REFERENCES "organizations" ("id")`
UserReference = `("user_id") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE`
CloudIntegrationReference = `("cloud_integration_id") REFERENCES "cloud_integration" ("id") ON DELETE CASCADE`
)
type dialect struct{}
func (dialect *dialect) IntToTimestamp(ctx context.Context, bun bun.IDB, table string, column string) error {
columnType, err := dialect.GetColumnType(ctx, bun, table, column)
if err != nil {
return err
}
// bigint for postgres and INTEGER for sqlite
if columnType != "bigint" {
return nil
}
// if the column is an integer, rename it so the new timestamp column can be added
if _, err := bun.
ExecContext(ctx, `ALTER TABLE `+table+` RENAME COLUMN `+column+` TO `+column+`_old`); err != nil {
return err
}
// add new timestamp column
if _, err := bun.
NewAddColumn().
Table(table).
ColumnExpr(column + " TIMESTAMP").
Exec(ctx); err != nil {
return err
}
if _, err := bun.
NewUpdate().
Table(table).
Set(column + " = to_timestamp(cast(" + column + "_old as INTEGER))").
Where("1=1").
Exec(ctx); err != nil {
return err
}
// drop old column
if _, err := bun.
NewDropColumn().
Table(table).
Column(column + "_old").
Exec(ctx); err != nil {
return err
}
return nil
}
func (dialect *dialect) IntToBoolean(ctx context.Context, bun bun.IDB, table string, column string) error {
columnExists, err := dialect.ColumnExists(ctx, bun, table, column)
if err != nil {
return err
}
if !columnExists {
return nil
}
columnType, err := dialect.GetColumnType(ctx, bun, table, column)
if err != nil {
return err
}
if columnType != "bigint" {
return nil
}
if _, err := bun.
ExecContext(ctx, `ALTER TABLE `+table+` RENAME COLUMN `+column+` TO `+column+`_old`); err != nil {
return err
}
// add new boolean column
if _, err := bun.
NewAddColumn().
Table(table).
ColumnExpr(column + " BOOLEAN").
Exec(ctx); err != nil {
return err
}
// copy data from old column to new column, converting from int to boolean
if _, err := bun.NewUpdate().
Table(table).
Set(column + " = CASE WHEN " + column + "_old = 1 THEN true ELSE false END").
Where("1=1").
Exec(ctx); err != nil {
return err
}
// drop old column
if _, err := bun.NewDropColumn().Table(table).Column(column + "_old").Exec(ctx); err != nil {
return err
}
return nil
}
func (dialect *dialect) GetColumnType(ctx context.Context, bun bun.IDB, table string, column string) (string, error) {
var columnType string
err := bun.NewSelect().
ColumnExpr("data_type").
TableExpr("information_schema.columns").
Where("table_name = ?", table).
Where("column_name = ?", column).
Scan(ctx, &columnType)
if err != nil {
return "", err
}
return columnType, nil
}
func (dialect *dialect) ColumnExists(ctx context.Context, bun bun.IDB, table string, column string) (bool, error) {
var count int
err := bun.NewSelect().
ColumnExpr("COUNT(*)").
TableExpr("information_schema.columns").
Where("table_name = ?", table).
Where("column_name = ?", column).
Scan(ctx, &count)
if err != nil {
return false, err
}
return count > 0, nil
}
func (dialect *dialect) AddColumn(ctx context.Context, bun bun.IDB, table string, column string, columnExpr string) error {
exists, err := dialect.ColumnExists(ctx, bun, table, column)
if err != nil {
return err
}
if !exists {
_, err = bun.
NewAddColumn().
Table(table).
ColumnExpr(column + " " + columnExpr).
Exec(ctx)
if err != nil {
return err
}
}
return nil
}
func (dialect *dialect) RenameColumn(ctx context.Context, bun bun.IDB, table string, oldColumnName string, newColumnName string) (bool, error) {
oldColumnExists, err := dialect.ColumnExists(ctx, bun, table, oldColumnName)
if err != nil {
return false, err
}
newColumnExists, err := dialect.ColumnExists(ctx, bun, table, newColumnName)
if err != nil {
return false, err
}
if newColumnExists {
return true, nil
}
if !oldColumnExists {
return false, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "old column: %s doesn't exist", oldColumnName)
}
_, err = bun.
ExecContext(ctx, "ALTER TABLE "+table+" RENAME COLUMN "+oldColumnName+" TO "+newColumnName)
if err != nil {
return false, err
}
return true, nil
}
func (dialect *dialect) DropColumn(ctx context.Context, bun bun.IDB, table string, column string) error {
exists, err := dialect.ColumnExists(ctx, bun, table, column)
if err != nil {
return err
}
if exists {
_, err = bun.
NewDropColumn().
Table(table).
Column(column).
Exec(ctx)
if err != nil {
return err
}
}
return nil
}
func (dialect *dialect) TableExists(ctx context.Context, bun bun.IDB, table interface{}) (bool, error) {
count := 0
err := bun.
NewSelect().
ColumnExpr("count(*)").
Table("pg_catalog.pg_tables").
Where("tablename = ?", bun.Dialect().Tables().Get(reflect.TypeOf(table)).Name).
Scan(ctx, &count)
if err != nil {
return false, err
}
if count == 0 {
return false, nil
}
return true, nil
}
func (dialect *dialect) RenameTableAndModifyModel(ctx context.Context, bun bun.IDB, oldModel interface{}, newModel interface{}, references []string, cb func(context.Context) error) error {
if len(references) == 0 {
return errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "cannot run migration without reference")
}
exists, err := dialect.TableExists(ctx, bun, newModel)
if err != nil {
return err
}
if exists {
return nil
}
var fkReferences []string
for _, reference := range references {
if reference == Org && !slices.Contains(fkReferences, OrgReference) {
fkReferences = append(fkReferences, OrgReference)
} else if reference == User && !slices.Contains(fkReferences, UserReference) {
fkReferences = append(fkReferences, UserReference)
} else if reference == CloudIntegration && !slices.Contains(fkReferences, CloudIntegrationReference) {
fkReferences = append(fkReferences, CloudIntegrationReference)
}
}
createTable := bun.
NewCreateTable().
IfNotExists().
Model(newModel)
for _, fk := range fkReferences {
createTable = createTable.ForeignKey(fk)
}
_, err = createTable.Exec(ctx)
if err != nil {
return err
}
err = cb(ctx)
if err != nil {
return err
}
_, err = bun.
NewDropTable().
IfExists().
Model(oldModel).
Exec(ctx)
if err != nil {
return err
}
return nil
}
func (dialect *dialect) AddNotNullDefaultToColumn(ctx context.Context, bun bun.IDB, table string, column, columnType, defaultValue string) error {
query := fmt.Sprintf("ALTER TABLE %s ALTER COLUMN %s SET DEFAULT %s, ALTER COLUMN %s SET NOT NULL", table, column, defaultValue, column)
if _, err := bun.ExecContext(ctx, query); err != nil {
return err
}
return nil
}
func (dialect *dialect) UpdatePrimaryKey(ctx context.Context, bun bun.IDB, oldModel interface{}, newModel interface{}, reference string, cb func(context.Context) error) error {
if reference == "" {
return errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "cannot run migration without reference")
}
oldTableName := bun.Dialect().Tables().Get(reflect.TypeOf(oldModel)).Name
newTableName := bun.Dialect().Tables().Get(reflect.TypeOf(newModel)).Name
columnType, err := dialect.GetColumnType(ctx, bun, oldTableName, Identity)
if err != nil {
return err
}
if columnType == Text {
return nil
}
fkReference := ""
if reference == Org {
fkReference = OrgReference
} else if reference == User {
fkReference = UserReference
}
_, err = bun.
NewCreateTable().
IfNotExists().
Model(newModel).
ForeignKey(fkReference).
Exec(ctx)
if err != nil {
return err
}
err = cb(ctx)
if err != nil {
return err
}
_, err = bun.
NewDropTable().
IfExists().
Model(oldModel).
Exec(ctx)
if err != nil {
return err
}
_, err = bun.
ExecContext(ctx, fmt.Sprintf("ALTER TABLE %s RENAME TO %s", newTableName, oldTableName))
if err != nil {
return err
}
return nil
}
func (dialect *dialect) AddPrimaryKey(ctx context.Context, bun bun.IDB, oldModel interface{}, newModel interface{}, reference string, cb func(context.Context) error) error {
if reference == "" {
return errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "cannot run migration without reference")
}
oldTableName := bun.Dialect().Tables().Get(reflect.TypeOf(oldModel)).Name
newTableName := bun.Dialect().Tables().Get(reflect.TypeOf(newModel)).Name
identityExists, err := dialect.ColumnExists(ctx, bun, oldTableName, Identity)
if err != nil {
return err
}
if identityExists {
return nil
}
fkReference := ""
if reference == Org {
fkReference = OrgReference
} else if reference == User {
fkReference = UserReference
}
_, err = bun.
NewCreateTable().
IfNotExists().
Model(newModel).
ForeignKey(fkReference).
Exec(ctx)
if err != nil {
return err
}
err = cb(ctx)
if err != nil {
return err
}
_, err = bun.
NewDropTable().
IfExists().
Model(oldModel).
Exec(ctx)
if err != nil {
return err
}
_, err = bun.
ExecContext(ctx, fmt.Sprintf("ALTER TABLE %s RENAME TO %s", newTableName, oldTableName))
if err != nil {
return err
}
return nil
}
func (dialect *dialect) DropColumnWithForeignKeyConstraint(ctx context.Context, bunIDB bun.IDB, model interface{}, column string) error {
existingTable := bunIDB.Dialect().Tables().Get(reflect.TypeOf(model))
columnExists, err := dialect.ColumnExists(ctx, bunIDB, existingTable.Name, column)
if err != nil {
return err
}
if !columnExists {
return nil
}
_, err = bunIDB.
NewDropColumn().
Model(model).
Column(column).
Exec(ctx)
if err != nil {
return err
}
return nil
}

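A sketch of how a migration might call the new Postgres dialect helpers, assuming migrations receive them through a dialect interface (the interface itself is not part of this diff); the column names here are illustrative.

func migratePAT(ctx context.Context, db bun.IDB, d sqlstore.SQLDialect) error {
	// RenameColumn returns early when the new column already exists
	if _, err := d.RenameColumn(ctx, db, "personal_access_token", "updated_by_user", "updated_by_user_id"); err != nil {
		return err
	}
	// convert a legacy integer flag into a proper boolean column
	return d.IntToBoolean(ctx, db, "personal_access_token", "revoked")
}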

@@ -4,13 +4,15 @@ import (
"context"
"database/sql"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/jackc/pgx/v5/pgconn"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/jackc/pgx/v5/stdlib"
"github.com/jmoiron/sqlx"
"github.com/uptrace/bun"
"github.com/uptrace/bun/dialect/pgdialect"
"go.signoz.io/signoz/pkg/factory"
"go.signoz.io/signoz/pkg/sqlstore"
)
type provider struct {
@@ -18,7 +20,7 @@ type provider struct {
sqldb *sql.DB
bundb *sqlstore.BunDB
sqlxdb *sqlx.DB
dialect *PGDialect
dialect *dialect
}
func NewFactory(hookFactories ...factory.ProviderFactory[sqlstore.SQLStoreHook, sqlstore.Config]) factory.ProviderFactory[sqlstore.SQLStore, sqlstore.Config] {
@@ -37,7 +39,7 @@ func NewFactory(hookFactories ...factory.ProviderFactory[sqlstore.SQLStoreHook,
}
func New(ctx context.Context, providerSettings factory.ProviderSettings, config sqlstore.Config, hooks ...sqlstore.SQLStoreHook) (sqlstore.SQLStore, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "go.signoz.io/signoz/pkg/sqlstore/postgressqlstore")
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/pkg/sqlstore/postgressqlstore")
pgConfig, err := pgxpool.ParseConfig(config.Postgres.DSN)
if err != nil {
@@ -60,7 +62,7 @@ func New(ctx context.Context, providerSettings factory.ProviderSettings, config
sqldb: sqldb,
bundb: sqlstore.NewBunDB(settings, sqldb, pgdialect.New(), hooks),
sqlxdb: sqlx.NewDb(sqldb, "postgres"),
dialect: &PGDialect{},
dialect: new(dialect),
}, nil
}
@@ -87,3 +89,20 @@ func (provider *provider) BunDBCtx(ctx context.Context) bun.IDB {
func (provider *provider) RunInTxCtx(ctx context.Context, opts *sql.TxOptions, cb func(ctx context.Context) error) error {
return provider.bundb.RunInTxCtx(ctx, opts, cb)
}
func (provider *provider) WrapNotFoundErrf(err error, code errors.Code, format string, args ...any) error {
if err == sql.ErrNoRows {
return errors.Wrapf(err, errors.TypeNotFound, code, format, args...)
}
return err
}
func (provider *provider) WrapAlreadyExistsErrf(err error, code errors.Code, format string, args ...any) error {
var pgErr *pgconn.PgError
if errors.As(err, &pgErr) && pgErr.Code == "23505" {
return errors.Wrapf(err, errors.TypeAlreadyExists, code, format, args...)
}
return err
}

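A sketch of how callers might use the new Wrap*Errf helpers, assuming the sqlstore.SQLStore interface exposes them together with BunDBCtx, and with the not-found errors.Code passed in rather than guessed.

func getPAT(ctx context.Context, store sqlstore.SQLStore, token string, notFoundCode errors.Code) (*types.StorablePersonalAccessToken, error) {
	pat := new(types.StorablePersonalAccessToken)
	err := store.BunDBCtx(ctx).NewSelect().Model(pat).Where("token = ?", token).Scan(ctx)
	if err != nil {
		// sql.ErrNoRows is wrapped as a typed not-found error; everything else passes through unchanged
		return nil, store.WrapNotFoundErrf(err, notFoundCode, "no PAT found for the given token")
	}
	return pat, nil
}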

@@ -1,4 +1,4 @@
package model
package types
import (
"encoding/json"
@@ -6,15 +6,26 @@ import (
"net/url"
"strings"
"github.com/SigNoz/signoz/ee/query-service/sso"
"github.com/SigNoz/signoz/ee/query-service/sso/saml"
"github.com/SigNoz/signoz/pkg/types"
"github.com/google/uuid"
"github.com/pkg/errors"
saml2 "github.com/russellhaering/gosaml2"
"go.signoz.io/signoz/ee/query-service/sso"
"go.signoz.io/signoz/ee/query-service/sso/saml"
"go.signoz.io/signoz/pkg/types"
"github.com/uptrace/bun"
"go.uber.org/zap"
)
type StorableOrgDomain struct {
bun.BaseModel `bun:"table:org_domains"`
types.TimeAuditable
ID uuid.UUID `json:"id" bun:"id,pk,type:text"`
OrgID string `json:"orgId" bun:"org_id,type:text,notnull"`
Name string `json:"name" bun:"name,type:varchar(50),notnull,unique"`
Data string `json:"-" bun:"data,type:text,notnull"`
}
type SSOType string
const (
@@ -22,13 +33,12 @@ const (
GoogleAuth SSOType = "GOOGLE_AUTH"
)
// OrgDomain identify org owned web domains for auth and other purposes
type OrgDomain struct {
Id uuid.UUID `json:"id"`
Name string `json:"name"`
OrgId string `json:"orgId"`
SsoEnabled bool `json:"ssoEnabled"`
SsoType SSOType `json:"ssoType"`
// GettableOrgDomain identifies org owned web domains for auth and other purposes
type GettableOrgDomain struct {
StorableOrgDomain
SsoEnabled bool `json:"ssoEnabled"`
SsoType SSOType `json:"ssoType"`
SamlConfig *SamlConfig `json:"samlConfig"`
GoogleAuthConfig *GoogleOAuthConfig `json:"googleAuthConfig"`
@@ -36,18 +46,18 @@ type OrgDomain struct {
Org *types.Organization
}
func (od *OrgDomain) String() string {
return fmt.Sprintf("[%s]%s-%s ", od.Name, od.Id.String(), od.SsoType)
func (od *GettableOrgDomain) String() string {
return fmt.Sprintf("[%s]%s-%s ", od.Name, od.ID.String(), od.SsoType)
}
// Valid is used as a pipeline function to check if the org domain
// loaded from db is valid
func (od *OrgDomain) Valid(err error) error {
func (od *GettableOrgDomain) Valid(err error) error {
if err != nil {
return err
}
if od.Id == uuid.Nil || od.OrgId == "" {
if od.ID == uuid.Nil || od.OrgID == "" {
return fmt.Errorf("both id and orgId are required")
}
@@ -55,9 +65,9 @@ func (od *OrgDomain) Valid(err error) error {
}
// ValidNew checks if the org domain is valid for insertion in the db
func (od *OrgDomain) ValidNew() error {
func (od *GettableOrgDomain) ValidNew() error {
if od.OrgId == "" {
if od.OrgID == "" {
return fmt.Errorf("orgId is required")
}
@@ -69,7 +79,7 @@ func (od *OrgDomain) ValidNew() error {
}
// LoadConfig loads config params from json text
func (od *OrgDomain) LoadConfig(jsondata string) error {
func (od *GettableOrgDomain) LoadConfig(jsondata string) error {
d := *od
err := json.Unmarshal([]byte(jsondata), &d)
if err != nil {
@@ -79,21 +89,21 @@ func (od *OrgDomain) LoadConfig(jsondata string) error {
return nil
}
func (od *OrgDomain) GetSAMLEntityID() string {
func (od *GettableOrgDomain) GetSAMLEntityID() string {
if od.SamlConfig != nil {
return od.SamlConfig.SamlEntity
}
return ""
}
func (od *OrgDomain) GetSAMLIdpURL() string {
func (od *GettableOrgDomain) GetSAMLIdpURL() string {
if od.SamlConfig != nil {
return od.SamlConfig.SamlIdp
}
return ""
}
func (od *OrgDomain) GetSAMLCert() string {
func (od *GettableOrgDomain) GetSAMLCert() string {
if od.SamlConfig != nil {
return od.SamlConfig.SamlCert
}
@@ -102,7 +112,7 @@ func (od *OrgDomain) GetSAMLCert() string {
// PrepareGoogleOAuthProvider creates a GoogleProvider that is used to
// request OAuth and to process the response from Google
func (od *OrgDomain) PrepareGoogleOAuthProvider(siteUrl *url.URL) (sso.OAuthCallbackProvider, error) {
func (od *GettableOrgDomain) PrepareGoogleOAuthProvider(siteUrl *url.URL) (sso.OAuthCallbackProvider, error) {
if od.GoogleAuthConfig == nil {
return nil, fmt.Errorf("GOOGLE OAUTH is not setup correctly for this domain")
}
@@ -111,7 +121,7 @@ func (od *OrgDomain) PrepareGoogleOAuthProvider(siteUrl *url.URL) (sso.OAuthCall
}
// PrepareSamlRequest creates a SAML request using gosaml2
func (od *OrgDomain) PrepareSamlRequest(siteUrl *url.URL) (*saml2.SAMLServiceProvider, error) {
func (od *GettableOrgDomain) PrepareSamlRequest(siteUrl *url.URL) (*saml2.SAMLServiceProvider, error) {
// this is the url Idp will call after login completion
acs := fmt.Sprintf("%s://%s/%s",
@@ -136,9 +146,9 @@ func (od *OrgDomain) PrepareSamlRequest(siteUrl *url.URL) (*saml2.SAMLServicePro
return saml.PrepareRequest(issuer, acs, sourceUrl, od.GetSAMLEntityID(), od.GetSAMLIdpURL(), od.GetSAMLCert())
}
func (od *OrgDomain) BuildSsoUrl(siteUrl *url.URL) (ssoUrl string, err error) {
func (od *GettableOrgDomain) BuildSsoUrl(siteUrl *url.URL) (ssoUrl string, err error) {
fmtDomainId := strings.Replace(od.Id.String(), "-", ":", -1)
fmtDomainId := strings.Replace(od.ID.String(), "-", ":", -1)
// build redirect url from window.location sent by frontend
redirectURL := fmt.Sprintf("%s://%s%s", siteUrl.Scheme, siteUrl.Host, siteUrl.Path)

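A sketch of the intended split between the stored row and the API object, assuming the data column still carries the SSO configuration as JSON (which is what LoadConfig consumes); the helper name is illustrative.

func toGettable(stored types.StorableOrgDomain) (*types.GettableOrgDomain, error) {
	domain := &types.GettableOrgDomain{StorableOrgDomain: stored}
	// hydrate ssoEnabled, ssoType, samlConfig and googleAuthConfig from the stored JSON blob
	if err := domain.LoadConfig(stored.Data); err != nil {
		return nil, err
	}
	// Valid checks that both the domain ID and the org ID are set
	if err := domain.Valid(nil); err != nil {
		return nil, err
	}
	return domain, nil
}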

@@ -0,0 +1,76 @@
package types
import (
"crypto/rand"
"encoding/base64"
"time"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
type GettablePAT struct {
CreatedByUser PatUser `json:"createdByUser"`
UpdatedByUser PatUser `json:"updatedByUser"`
StorablePersonalAccessToken
}
type PatUser struct {
types.User
NotFound bool `json:"notFound"`
}
func NewGettablePAT(name, role, userID string, expiresAt int64) GettablePAT {
return GettablePAT{
StorablePersonalAccessToken: NewStorablePersonalAccessToken(name, role, userID, expiresAt),
}
}
type StorablePersonalAccessToken struct {
bun.BaseModel `bun:"table:personal_access_token"`
types.Identifiable
types.TimeAuditable
OrgID string `json:"orgId" bun:"org_id,type:text,notnull"`
Role string `json:"role" bun:"role,type:text,notnull,default:'ADMIN'"`
UserID string `json:"userId" bun:"user_id,type:text,notnull"`
Token string `json:"token" bun:"token,type:text,notnull,unique"`
Name string `json:"name" bun:"name,type:text,notnull"`
ExpiresAt int64 `json:"expiresAt" bun:"expires_at,notnull,default:0"`
LastUsed int64 `json:"lastUsed" bun:"last_used,notnull,default:0"`
Revoked bool `json:"revoked" bun:"revoked,notnull,default:false"`
UpdatedByUserID string `json:"updatedByUserId" bun:"updated_by_user_id,type:text,notnull,default:''"`
}
func NewStorablePersonalAccessToken(name, role, userID string, expiresAt int64) StorablePersonalAccessToken {
now := time.Now()
if expiresAt != 0 {
// convert expiresAt from days to an absolute unix timestamp
expiresAt = now.Unix() + (expiresAt * 24 * 60 * 60)
}
// Generate a 32-byte random token.
token := make([]byte, 32)
rand.Read(token)
// Encode the token in base64.
encodedToken := base64.StdEncoding.EncodeToString(token)
return StorablePersonalAccessToken{
Token: encodedToken,
Name: name,
Role: role,
UserID: userID,
ExpiresAt: expiresAt,
LastUsed: 0,
Revoked: false,
UpdatedByUserID: "",
TimeAuditable: types.TimeAuditable{
CreatedAt: now,
UpdatedAt: now,
},
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
}
}

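A short usage sketch; the last argument is a number of days, which the constructor turns into an absolute unix timestamp, and the token is generated internally. The userID value is assumed to exist in the caller.

pat := types.NewGettablePAT("ci-token", "ADMIN", userID, 30)
// pat.Token is a random 32-byte value, base64 encoded; pat.ExpiresAt is now + 30 days (a value of 0 is kept as 0)
fmt.Println(pat.Name, pat.ExpiresAt)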

@@ -1,30 +1,28 @@
package model
package types
import (
"fmt"
"context"
"fmt"
"net/url"
"golang.org/x/oauth2"
"github.com/SigNoz/signoz/ee/query-service/sso"
"github.com/coreos/go-oidc/v3/oidc"
"go.signoz.io/signoz/ee/query-service/sso"
"golang.org/x/oauth2"
)
// SamlConfig contains SAML params to generate and respond to the requests
// from SAML provider
type SamlConfig struct {
SamlEntity string `json:"samlEntity"`
SamlIdp string `json:"samlIdp"`
SamlCert string `json:"samlCert"`
}
// GoogleOauthConfig contains a generic config to support oauth
// GoogleOauthConfig contains a generic config to support oauth
type GoogleOAuthConfig struct {
ClientID string `json:"clientId"`
ClientSecret string `json:"clientSecret"`
RedirectURI string `json:"redirectURI"`
}
const (
googleIssuerURL = "https://accounts.google.com"
)
@@ -40,7 +38,7 @@ func (g *GoogleOAuthConfig) GetProvider(domain string, siteUrl *url.URL) (sso.OA
}
// default to email and profile scope as we just use google auth
// to verify identity and start a session.
// to verify identity and start a session.
scopes := []string{"email"}
// this is the url google will call after login completion
@@ -61,8 +59,7 @@ func (g *GoogleOAuthConfig) GetProvider(domain string, siteUrl *url.URL) (sso.OA
Verifier: provider.Verifier(
&oidc.Config{ClientID: g.ClientID},
),
Cancel: cancel,
HostedDomain: domain,
Cancel: cancel,
HostedDomain: domain,
}, nil
}
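
For orientation only, a hedged sketch of how the relocated GoogleOAuthConfig would be populated; the import path and all values are placeholders, not taken from the diff.

package main

// Assumed import path for the new types package shown above.
import eetypes "github.com/SigNoz/signoz/ee/types"

func main() {
	// Placeholder credentials; GetProvider (above) requests only the "email"
	// scope, since Google auth is used purely to verify identity.
	cfg := eetypes.GoogleOAuthConfig{
		ClientID:     "example-client-id",
		ClientSecret: "example-client-secret",
		RedirectURI:  "https://signoz.example.com/api/v1/complete/google",
	}
	_ = cfg
}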

ee/zeus/config.go Normal file

@@ -0,0 +1,42 @@
package zeus

import (
	"fmt"
	neturl "net/url"
	"sync"

	"github.com/SigNoz/signoz/pkg/zeus"
)

// This will be set via ldflags at build time.
var (
	url           string = "<unset>"
	deprecatedURL string = "<unset>"
)

var (
	config zeus.Config
	once   sync.Once
)

// initializes the Zeus configuration
func Config() zeus.Config {
	once.Do(func() {
		parsedURL, err := neturl.Parse(url)
		if err != nil {
			panic(fmt.Errorf("invalid zeus URL: %w", err))
		}

		deprecatedParsedURL, err := neturl.Parse(deprecatedURL)
		if err != nil {
			panic(fmt.Errorf("invalid zeus deprecated URL: %w", err))
		}

		config = zeus.Config{URL: parsedURL, DeprecatedURL: deprecatedParsedURL}
		if err := config.Validate(); err != nil {
			panic(fmt.Errorf("invalid zeus config: %w", err))
		}
	})

	return config
}
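
The url and deprecatedURL package variables above are placeholders that are overwritten at link time. A sketch of how that wiring would look, with the -X import path inferred from the package location and the endpoint URLs purely illustrative:

// Build-time injection (illustrative; the exact values are not part of the diff):
//
//	go build -ldflags "-X github.com/SigNoz/signoz/ee/zeus.url=https://api.example-zeus.io \
//	  -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://legacy.example-zeus.io"
package main

import (
	"fmt"

	eezeus "github.com/SigNoz/signoz/ee/zeus"
)

func main() {
	// Config parses both URLs exactly once (sync.Once) and panics on an
	// invalid value, so a bad build-time injection fails fast at startup.
	cfg := eezeus.Config()
	fmt.Println(cfg.URL, cfg.DeprecatedURL)
}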


@@ -0,0 +1,189 @@
package httpzeus

import (
	"bytes"
	"context"
	"io"
	"net/http"
	"net/url"

	"github.com/SigNoz/signoz/pkg/errors"
	"github.com/SigNoz/signoz/pkg/factory"
	"github.com/SigNoz/signoz/pkg/http/client"
	"github.com/SigNoz/signoz/pkg/zeus"
	"github.com/tidwall/gjson"
)

type Provider struct {
	settings   factory.ScopedProviderSettings
	config     zeus.Config
	httpClient *client.Client
}

func NewProviderFactory() factory.ProviderFactory[zeus.Zeus, zeus.Config] {
	return factory.NewProviderFactory(factory.MustNewName("http"), func(ctx context.Context, providerSettings factory.ProviderSettings, config zeus.Config) (zeus.Zeus, error) {
		return New(ctx, providerSettings, config)
	})
}

func New(ctx context.Context, providerSettings factory.ProviderSettings, config zeus.Config) (zeus.Zeus, error) {
	settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/zeus/httpzeus")

	httpClient, err := client.New(
		settings.Logger(),
		providerSettings.TracerProvider,
		providerSettings.MeterProvider,
		client.WithRequestResponseLog(true),
		client.WithRetryCount(3),
	)
	if err != nil {
		return nil, err
	}

	return &Provider{
		settings:   settings,
		config:     config,
		httpClient: httpClient,
	}, nil
}

func (provider *Provider) GetLicense(ctx context.Context, key string) ([]byte, error) {
	response, err := provider.do(
		ctx,
		provider.config.URL.JoinPath("/v2/licenses/me"),
		http.MethodGet,
		key,
		nil,
	)
	if err != nil {
		return nil, err
	}

	return []byte(gjson.GetBytes(response, "data").String()), nil
}

func (provider *Provider) GetCheckoutURL(ctx context.Context, key string, body []byte) ([]byte, error) {
	response, err := provider.do(
		ctx,
		provider.config.URL.JoinPath("/v2/subscriptions/me/sessions/checkout"),
		http.MethodPost,
		key,
		body,
	)
	if err != nil {
		return nil, err
	}

	return []byte(gjson.GetBytes(response, "data").String()), nil
}

func (provider *Provider) GetPortalURL(ctx context.Context, key string, body []byte) ([]byte, error) {
	response, err := provider.do(
		ctx,
		provider.config.URL.JoinPath("/v2/subscriptions/me/sessions/portal"),
		http.MethodPost,
		key,
		body,
	)
	if err != nil {
		return nil, err
	}

	return []byte(gjson.GetBytes(response, "data").String()), nil
}

func (provider *Provider) GetDeployment(ctx context.Context, key string) ([]byte, error) {
	response, err := provider.do(
		ctx,
		provider.config.URL.JoinPath("/v2/deployments/me"),
		http.MethodGet,
		key,
		nil,
	)
	if err != nil {
		return nil, err
	}

	return []byte(gjson.GetBytes(response, "data").String()), nil
}

func (provider *Provider) PutMeters(ctx context.Context, key string, data []byte) error {
	_, err := provider.do(
		ctx,
		provider.config.DeprecatedURL.JoinPath("/api/v1/usage"),
		http.MethodPost,
		key,
		data,
	)

	return err
}

func (provider *Provider) PutProfile(ctx context.Context, key string, body []byte) error {
	_, err := provider.do(
		ctx,
		provider.config.URL.JoinPath("/v2/profiles/me"),
		http.MethodPut,
		key,
		body,
	)

	return err
}

func (provider *Provider) PutHost(ctx context.Context, key string, body []byte) error {
	_, err := provider.do(
		ctx,
		provider.config.URL.JoinPath("/v2/deployments/me/hosts"),
		http.MethodPut,
		key,
		body,
	)

	return err
}

func (provider *Provider) do(ctx context.Context, url *url.URL, method string, key string, requestBody []byte) ([]byte, error) {
	request, err := http.NewRequestWithContext(ctx, method, url.String(), bytes.NewBuffer(requestBody))
	if err != nil {
		return nil, err
	}

	request.Header.Set("X-Signoz-Cloud-Api-Key", key)
	request.Header.Set("Content-Type", "application/json")

	response, err := provider.httpClient.Do(request)
	if err != nil {
		return nil, err
	}
	defer func() {
		_ = response.Body.Close()
	}()

	body, err := io.ReadAll(response.Body)
	if err != nil {
		return nil, err
	}

	if response.StatusCode/100 == 2 {
		return body, nil
	}

	return nil, provider.errFromStatusCode(response.StatusCode)
}

// This can be taken down to the client package
func (provider *Provider) errFromStatusCode(statusCode int) error {
	switch statusCode {
	case http.StatusBadRequest:
		return errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "bad request")
	case http.StatusUnauthorized:
		return errors.Newf(errors.TypeUnauthenticated, errors.CodeUnauthenticated, "unauthenticated")
	case http.StatusForbidden:
		return errors.Newf(errors.TypeForbidden, errors.CodeForbidden, "forbidden")
	case http.StatusNotFound:
		return errors.Newf(errors.TypeNotFound, errors.CodeNotFound, "not found")
	}

	return errors.Newf(errors.TypeInternal, errors.CodeInternal, "internal")
}
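
A brief usage sketch for the provider above, not taken from the diff: it assumes the zeus.Zeus interface exposes the GetLicense method implemented here and that the caller already holds a client built through the factory.

package licensing

import (
	"context"

	"github.com/SigNoz/signoz/pkg/zeus"
)

// FetchLicense is illustrative only. Under the hood the provider issues
// GET <config.URL>/v2/licenses/me with the X-Signoz-Cloud-Api-Key header,
// returns the "data" field of a 2xx response, and maps 4xx statuses to the
// typed errors produced by errFromStatusCode.
func FetchLicense(ctx context.Context, client zeus.Zeus, licenseKey string) ([]byte, error) {
	return client.GetLicense(ctx, licenseKey)
}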

frontend/.gitignore vendored

@@ -1,3 +1,4 @@
# Sentry Config File
.env.sentry-build-plugin
.qodo

View File

@@ -55,7 +55,7 @@
"ansi-to-html": "0.7.2",
"antd": "5.11.0",
"antd-table-saveas-excel": "2.2.1",
"axios": "1.7.7",
"axios": "1.8.2",
"babel-eslint": "^10.1.0",
"babel-jest": "^29.6.4",
"babel-loader": "9.1.3",
@@ -90,8 +90,9 @@
"less": "^4.1.2",
"less-loader": "^10.2.0",
"lodash-es": "^4.17.21",
"lucide-react": "0.379.0",
"lucide-react": "0.427.0",
"mini-css-extract-plugin": "2.4.5",
"motion": "12.4.13",
"overlayscrollbars": "^2.8.1",
"overlayscrollbars-react": "^0.5.6",
"papaparse": "5.4.1",
@@ -131,6 +132,7 @@
"tsconfig-paths-webpack-plugin": "^3.5.1",
"typescript": "^4.0.5",
"uplot": "1.6.31",
"userpilot": "1.3.9",
"uuid": "^8.3.2",
"web-vitals": "^0.2.4",
"webpack": "5.94.0",
@@ -197,7 +199,7 @@
"autoprefixer": "10.4.19",
"babel-plugin-styled-components": "^1.12.0",
"compression-webpack-plugin": "9.0.0",
"copy-webpack-plugin": "^8.1.0",
"copy-webpack-plugin": "^11.0.0",
"critters-webpack-plugin": "^3.0.1",
"eslint": "^7.32.0",
"eslint-config-airbnb": "^19.0.4",
@@ -215,6 +217,7 @@
"eslint-plugin-simple-import-sort": "^7.0.0",
"eslint-plugin-sonarjs": "^0.12.0",
"husky": "^7.0.4",
"image-webpack-loader": "8.1.0",
"is-ci": "^3.0.1",
"jest-styled-components": "^7.0.8",
"lint-staged": "^12.5.0",
@@ -253,6 +256,7 @@
"body-parser": "1.20.3",
"http-proxy-middleware": "3.0.3",
"cross-spawn": "7.0.5",
"cookie": "^0.7.1"
"cookie": "^0.7.1",
"serialize-javascript": "6.0.2"
}
}


@@ -0,0 +1,34 @@
<svg width="33" height="32" viewBox="0 0 33 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M25.3952 7.11163L7.6041 7.08496C4.25702 7.08496 3.16577 8.44291 3.16577 11.2766V20.7C3.16577 23.0181 3.88586 24.8916 7.60633 24.8916L25.3974 24.9183C28.7379 24.9183 29.838 23.487 29.838 20.7267V11.3033C29.8358 8.18954 28.6534 7.11163 25.3952 7.11163Z" fill="#504F4F"/>
<path d="M25.8508 22.798H23.922C23.062 22.798 22.6221 22.1224 22.6221 21.2647C22.6221 20.4047 23.062 19.7314 23.922 19.7314H25.8508C26.7107 19.7314 27.1507 20.407 27.1507 21.2647C27.1463 22.0624 26.7063 22.798 25.8508 22.798Z" fill="black"/>
<path d="M25.7441 21.7801H24.0108C23.5575 21.7801 23.5686 21.5534 23.5686 21.2646C23.5686 20.9757 23.5575 20.749 24.0108 20.749H25.7441C26.1974 20.749 26.2063 20.9757 26.2063 21.2646C26.204 21.5312 26.1952 21.7801 25.7441 21.7801Z" fill="#8BC34A"/>
<path d="M9.08004 22.798H7.15125C6.2913 22.798 5.85132 22.1224 5.85132 21.2647C5.85132 20.4047 6.2913 19.7314 7.15125 19.7314H9.08004C9.93999 19.7314 10.38 20.407 10.38 21.2647C10.38 22.0624 9.93777 22.798 9.08004 22.798Z" fill="black"/>
<path d="M8.11556 22.1896C8.62609 22.1896 9.03995 21.7757 9.03995 21.2652C9.03995 20.7547 8.62609 20.3408 8.11556 20.3408C7.60503 20.3408 7.19116 20.7547 7.19116 21.2652C7.19116 21.7757 7.60503 22.1896 8.11556 22.1896Z" fill="#F44336"/>
<path d="M20.2592 22.798H18.3305C17.4705 22.798 17.0305 22.1224 17.0305 21.2647C17.0305 20.4047 17.4705 19.7314 18.3305 19.7314H20.2592C21.1192 19.7314 21.5592 20.407 21.5592 21.2647C21.5614 22.0624 21.1192 22.798 20.2592 22.798Z" fill="black"/>
<path d="M18.844 20.3408L20.375 21.2652L18.844 22.1896V20.3408Z" fill="#EEEEEE"/>
<path d="M12.7423 22.798H14.6711C15.5311 22.798 15.971 22.1224 15.971 21.2647C15.971 20.4047 15.5311 19.7314 14.6711 19.7314H12.7423C11.8824 19.7314 11.4424 20.407 11.4424 21.2647C11.4402 22.0624 11.8824 22.798 12.7423 22.798Z" fill="black"/>
<path d="M14.158 20.3408L12.627 21.2652L14.158 22.1896V20.3408Z" fill="#EEEEEE"/>
<path d="M26.6225 16.5808C26.6225 17.0519 26.2514 17.4252 25.778 17.4252H7.21345C6.74228 17.4252 6.3689 17.0541 6.3689 16.5808V11.0944C6.3689 10.6233 6.74005 10.25 7.21345 10.25H25.778C26.2492 10.25 26.6225 10.6211 26.6225 11.0944V16.5808Z" fill="#8DCC47"/>
<path opacity="0.5" d="M11.5296 13.2373L11.1674 13.5995C11.1119 13.655 11.0208 13.655 10.9652 13.5995L10.603 13.2373C10.5741 13.2084 10.5608 13.1728 10.5608 13.135V12.1262C10.5608 12.0884 10.5763 12.0529 10.603 12.024L10.9652 11.6618C11.0208 11.6062 11.1119 11.6062 11.1674 11.6618L11.5296 12.024C11.5585 12.0529 11.5718 12.0884 11.5718 12.1262V13.135C11.5718 13.175 11.5563 13.2106 11.5296 13.2373Z" fill="black"/>
<path opacity="0.5" d="M11.5296 15.6503L11.1674 16.0125C11.1119 16.0681 11.0208 16.0681 10.9652 16.0125L10.603 15.6503C10.5741 15.6215 10.5608 15.5859 10.5608 15.5481V14.5393C10.5608 14.5015 10.5763 14.466 10.603 14.4371L10.9652 14.0749C11.0208 14.0193 11.1119 14.0193 11.1674 14.0749L11.5296 14.4371C11.5585 14.466 11.5718 14.5015 11.5718 14.5393V15.5481C11.5718 15.5881 11.5563 15.6237 11.5296 15.6503Z" fill="black"/>
<path opacity="0.5" d="M13.3873 13.2373L13.0251 13.5995C12.9695 13.655 12.8784 13.655 12.8229 13.5995L12.4607 13.2373C12.4318 13.2084 12.4185 13.1728 12.4185 13.135V12.1262C12.4185 12.0884 12.434 12.0529 12.4607 12.024L12.8229 11.6618C12.8784 11.6062 12.9695 11.6062 13.0251 11.6618L13.3873 12.024C13.4162 12.0529 13.4295 12.0884 13.4295 12.1262V13.135C13.4295 13.175 13.414 13.2106 13.3873 13.2373Z" fill="black"/>
<path opacity="0.5" d="M16.0323 13.2373L15.6701 13.5995C15.6146 13.655 15.5235 13.655 15.4679 13.5995L15.1057 13.2373C15.0768 13.2084 15.0635 13.1728 15.0635 13.135V12.1262C15.0635 12.0884 15.079 12.0529 15.1057 12.024L15.4679 11.6618C15.5235 11.6062 15.6146 11.6062 15.6701 11.6618L16.0323 12.024C16.0612 12.0529 16.0745 12.0884 16.0745 12.1262V13.135C16.0745 13.175 16.059 13.2106 16.0323 13.2373Z" fill="black"/>
<path opacity="0.5" d="M16.0323 15.6503L15.6701 16.0125C15.6146 16.0681 15.5235 16.0681 15.4679 16.0125L15.1057 15.6503C15.0768 15.6215 15.0635 15.5859 15.0635 15.5481V14.5393C15.0635 14.5015 15.079 14.466 15.1057 14.4371L15.4679 14.0749C15.5235 14.0193 15.6146 14.0193 15.6701 14.0749L16.0323 14.4371C16.0612 14.466 16.0745 14.5015 16.0745 14.5393V15.5481C16.0745 15.5881 16.059 15.6237 16.0323 15.6503Z" fill="black"/>
<path opacity="0.5" d="M13.6409 11.8868L13.2787 11.5246C13.2232 11.4691 13.2232 11.3779 13.2787 11.3224L13.6409 10.9602C13.6698 10.9313 13.7054 10.918 13.7431 10.918H14.7542C14.792 10.918 14.8275 10.9335 14.8564 10.9602L15.2186 11.3224C15.2742 11.3779 15.2742 11.4691 15.2186 11.5246L14.8564 11.8868C14.8275 11.9157 14.792 11.929 14.7542 11.929H13.7431C13.7031 11.929 13.6676 11.9135 13.6409 11.8868Z" fill="black"/>
<path opacity="0.5" d="M13.6409 14.3018L13.2787 13.9396C13.2232 13.8841 13.2232 13.793 13.2787 13.7374L13.6409 13.3752C13.6698 13.3463 13.7054 13.333 13.7431 13.333H14.7542C14.792 13.333 14.8275 13.3486 14.8564 13.3752L15.2186 13.7374C15.2742 13.793 15.2742 13.8841 15.2186 13.9396L14.8564 14.3018C14.8275 14.3307 14.792 14.3441 14.7542 14.3441H13.7431C13.7031 14.3441 13.6676 14.3285 13.6409 14.3018Z" fill="black"/>
<path opacity="0.5" d="M13.6409 16.7149L13.2787 16.3527C13.2232 16.2972 13.2232 16.2061 13.2787 16.1505L13.6409 15.7883C13.6698 15.7594 13.7054 15.7461 13.7431 15.7461H14.7542C14.792 15.7461 14.8275 15.7616 14.8564 15.7883L15.2186 16.1505C15.2742 16.2061 15.2742 16.2972 15.2186 16.3527L14.8564 16.7149C14.8275 16.7438 14.792 16.7572 14.7542 16.7572H13.7431C13.7031 16.7572 13.6676 16.7416 13.6409 16.7149Z" fill="black"/>
<path opacity="0.5" d="M17.8902 13.2373L17.528 13.5995C17.4725 13.655 17.3814 13.655 17.3258 13.5995L16.9636 13.2373C16.9347 13.2084 16.9214 13.1728 16.9214 13.135V12.1262C16.9214 12.0884 16.9369 12.0529 16.9636 12.024L17.3258 11.6618C17.3814 11.6062 17.4725 11.6062 17.528 11.6618L17.8902 12.024C17.9191 12.0529 17.9324 12.0884 17.9324 12.1262V13.135C17.9324 13.175 17.9169 13.2106 17.8902 13.2373Z" fill="black"/>
<path opacity="0.5" d="M20.5374 13.2373L20.1752 13.5995C20.1197 13.655 20.0286 13.655 19.973 13.5995L19.6108 13.2373C19.5819 13.2084 19.5686 13.1728 19.5686 13.135V12.1262C19.5686 12.0884 19.5842 12.0529 19.6108 12.024L19.973 11.6618C20.0286 11.6062 20.1197 11.6062 20.1752 11.6618L20.5374 12.024C20.5663 12.0529 20.5797 12.0884 20.5797 12.1262V13.135C20.5774 13.175 20.5619 13.2106 20.5374 13.2373Z" fill="black"/>
<path opacity="0.5" d="M20.5374 15.6503L20.1752 16.0125C20.1197 16.0681 20.0286 16.0681 19.973 16.0125L19.6108 15.6503C19.5819 15.6215 19.5686 15.5859 19.5686 15.5481V14.5393C19.5686 14.5015 19.5842 14.466 19.6108 14.4371L19.973 14.0749C20.0286 14.0193 20.1197 14.0193 20.1752 14.0749L20.5374 14.4371C20.5663 14.466 20.5797 14.5015 20.5797 14.5393V15.5481C20.5774 15.5881 20.5619 15.6237 20.5374 15.6503Z" fill="black"/>
<path opacity="0.5" d="M18.1434 11.8868L17.7812 11.5246C17.7256 11.4691 17.7256 11.3779 17.7812 11.3224L18.1434 10.9602C18.1723 10.9313 18.2078 10.918 18.2456 10.918H19.2566C19.2944 10.918 19.33 10.9335 19.3589 10.9602L19.7211 11.3224C19.7766 11.3779 19.7766 11.4691 19.7211 11.5246L19.3589 11.8868C19.33 11.9157 19.2944 11.929 19.2566 11.929H18.2456C18.2056 11.929 18.17 11.9135 18.1434 11.8868Z" fill="black"/>
<path opacity="0.5" d="M18.1434 14.3018L17.7812 13.9396C17.7256 13.8841 17.7256 13.793 17.7812 13.7374L18.1434 13.3752C18.1723 13.3463 18.2078 13.333 18.2456 13.333H19.2566C19.2944 13.333 19.33 13.3486 19.3589 13.3752L19.7211 13.7374C19.7766 13.793 19.7766 13.8841 19.7211 13.9396L19.3589 14.3018C19.33 14.3307 19.2944 14.3441 19.2566 14.3441H18.2456C18.2056 14.3441 18.17 14.3285 18.1434 14.3018Z" fill="black"/>
<path opacity="0.5" d="M18.1434 16.7149L17.7812 16.3527C17.7256 16.2972 17.7256 16.2061 17.7812 16.1505L18.1434 15.7883C18.1723 15.7594 18.2078 15.7461 18.2456 15.7461H19.2566C19.2944 15.7461 19.33 15.7616 19.3589 15.7883L19.7211 16.1505C19.7766 16.2061 19.7766 16.2972 19.7211 16.3527L19.3589 16.7149C19.33 16.7438 19.2944 16.7572 19.2566 16.7572H18.2456C18.2056 16.7572 18.17 16.7416 18.1434 16.7149Z" fill="black"/>
<path opacity="0.5" d="M22.3951 13.2373L22.0329 13.5995C21.9774 13.655 21.8862 13.655 21.8307 13.5995L21.4685 13.2373C21.4396 13.2084 21.4263 13.1728 21.4263 13.135V12.1262C21.4263 12.0884 21.4418 12.0529 21.4685 12.024L21.8307 11.6618C21.8862 11.6062 21.9774 11.6062 22.0329 11.6618L22.3951 12.024C22.424 12.0529 22.4373 12.0884 22.4373 12.1262V13.135C22.4351 13.175 22.4195 13.2106 22.3951 13.2373Z" fill="black"/>
<path opacity="0.5" d="M25.0379 13.2373L24.6757 13.5995C24.6202 13.655 24.5291 13.655 24.4735 13.5995L24.1113 13.2373C24.0824 13.2084 24.0691 13.1728 24.0691 13.135V12.1262C24.0691 12.0884 24.0846 12.0529 24.1113 12.024L24.4735 11.6618C24.5291 11.6062 24.6202 11.6062 24.6757 11.6618L25.0379 12.024C25.0668 12.0529 25.0801 12.0884 25.0801 12.1262V13.135C25.0824 13.175 25.0668 13.2106 25.0379 13.2373Z" fill="black"/>
<path opacity="0.5" d="M22.3951 15.6503L22.0329 16.0125C21.9774 16.0681 21.8862 16.0681 21.8307 16.0125L21.4685 15.6503C21.4396 15.6215 21.4263 15.5859 21.4263 15.5481V14.5393C21.4263 14.5015 21.4418 14.466 21.4685 14.4371L21.8307 14.0749C21.8862 14.0193 21.9774 14.0193 22.0329 14.0749L22.3951 14.4371C22.424 14.466 22.4373 14.5015 22.4373 14.5393V15.5481C22.4351 15.5881 22.4195 15.6237 22.3951 15.6503Z" fill="black"/>
<path opacity="0.5" d="M25.0379 15.6503L24.6757 16.0125C24.6202 16.0681 24.5291 16.0681 24.4735 16.0125L24.1113 15.6503C24.0824 15.6215 24.0691 15.5859 24.0691 15.5481V14.5393C24.0691 14.5015 24.0846 14.466 24.1113 14.4371L24.4735 14.0749C24.5291 14.0193 24.6202 14.0193 24.6757 14.0749L25.0379 14.4371C25.0668 14.466 25.0801 14.5015 25.0801 14.5393V15.5481C25.0824 15.5881 25.0668 15.6237 25.0379 15.6503Z" fill="black"/>
<path opacity="0.5" d="M22.6487 11.8868L22.2865 11.5246C22.231 11.4691 22.231 11.3779 22.2865 11.3224L22.6487 10.9602C22.6776 10.9313 22.7132 10.918 22.751 10.918H23.762C23.7998 10.918 23.8353 10.9335 23.8642 10.9602L24.2264 11.3224C24.282 11.3779 24.282 11.4691 24.2264 11.5246L23.8642 11.8868C23.8353 11.9157 23.7998 11.929 23.762 11.929H22.751C22.711 11.929 22.6732 11.9135 22.6487 11.8868Z" fill="black"/>
<path opacity="0.5" d="M22.6487 14.3018L22.2865 13.9396C22.231 13.8841 22.231 13.793 22.2865 13.7374L22.6487 13.3752C22.6776 13.3463 22.7132 13.333 22.751 13.333H23.762C23.7998 13.333 23.8353 13.3486 23.8642 13.3752L24.2264 13.7374C24.282 13.793 24.282 13.8841 24.2264 13.9396L23.8642 14.3018C23.8353 14.3307 23.7998 14.3441 23.762 14.3441H22.751C22.711 14.3441 22.6732 14.3285 22.6487 14.3018Z" fill="black"/>
<path opacity="0.5" d="M22.6487 16.7149L22.2865 16.3527C22.231 16.2972 22.231 16.2061 22.2865 16.1505L22.6487 15.7883C22.6776 15.7594 22.7132 15.7461 22.751 15.7461H23.762C23.7998 15.7461 23.8353 15.7616 23.8642 15.7883L24.2264 16.1505C24.282 16.2061 24.282 16.2972 24.2264 16.3527L23.8642 16.7149C23.8353 16.7438 23.7998 16.7572 23.762 16.7572H22.751C22.711 16.7572 22.6732 16.7416 22.6487 16.7149Z" fill="black"/>
<path d="M25.7801 10.2509C26.2513 10.2509 26.6247 10.622 26.6247 11.0954V16.5805C26.6247 17.0517 26.2535 17.4251 25.7801 17.4251H7.21336C6.74219 17.4251 6.36881 17.0539 6.36881 16.5805V11.0954C6.36881 10.6242 6.73997 10.2509 7.21336 10.2509H25.7801ZM25.7801 9.09961H7.21336C6.111 9.09961 5.21533 9.99528 5.21533 11.0976V16.5828C5.21533 17.6851 6.111 18.5808 7.21336 18.5808H25.7779C26.8803 18.5808 27.7759 17.6851 27.7759 16.5828V11.0954C27.7759 9.99528 26.8803 9.09961 25.7801 9.09961Z" fill="black"/>
</svg>


@@ -0,0 +1,26 @@
<svg width="18" height="18" viewBox="0 0 18 18" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.9732 10.1476L7.56209 10.0176L6.95337 15.8623C6.95337 15.8623 7.88832 15.7973 9.12575 15.7973C10.3632 15.7973 11.4069 15.8848 11.4069 15.8848L10.9732 10.1476Z" fill="#8A2E08"/>
<path d="M4.11226 10.9648L3.27731 10.791C3.27731 10.791 3.07232 12.1147 2.88358 12.9347C2.69484 13.7546 2.3811 14.8633 2.3811 14.8633L4.366 15.8495C4.366 15.8495 6.16215 16.007 6.1934 15.9445C6.22465 15.882 7.54833 12.8084 7.54833 12.8084L7.86331 11.626L4.11226 10.9648Z" fill="#FF6110"/>
<path d="M10.3207 11.6103L14.4805 10.7441L14.6692 11.2479C14.6692 11.2479 14.9429 12.3928 15.0942 13.0128C15.3379 14.0077 15.6516 14.8989 15.6516 14.8989L13.7855 16.0539L11.8631 15.9914L10.3207 11.6103Z" fill="#FF6110"/>
<path d="M3.26727 10.2969C3.26727 10.3256 3.28727 10.5344 3.27727 10.6869C3.26727 10.8393 3.23853 11.0268 3.23853 11.0268C3.23853 11.0268 3.35227 11.0518 3.45726 11.1193C3.56226 11.1868 3.77225 11.4443 3.77225 11.4443C3.77225 11.4443 4.18223 11.2931 4.43096 11.3868C4.81344 11.5306 4.97593 11.8843 4.97593 11.8843C4.97593 11.8843 5.36716 11.5206 5.84589 11.5981C6.31336 11.6731 6.49585 12.1143 6.49585 12.1143C6.49585 12.1143 6.84709 11.8305 7.15457 11.8468C7.5183 11.8655 7.65204 12.133 7.65204 12.133L8.19701 11.0343L4.14598 10.2981H3.26727V10.2969Z" fill="#AF0D03"/>
<path d="M2.17737 15.3802L2.39736 14.7977C2.39736 14.7977 3.37231 15.2752 4.07852 15.4852C4.78473 15.6952 5.61719 15.7239 5.61719 15.7239C5.61719 15.7239 5.60719 15.3939 5.30095 15.1552C4.99472 14.9164 4.50975 14.5152 4.50975 14.5152L5.17221 14.294L5.54094 14.1465L6.69713 15.5227L6.32465 16.4501C6.32465 16.4501 5.51219 16.5551 4.21351 16.2214C2.91483 15.8876 2.17737 15.3802 2.17737 15.3802Z" fill="#C9C9C9"/>
<path d="M10.2419 10.8418C10.2607 10.918 10.2519 11.5205 10.2519 11.5205L10.6057 11.9217C10.6057 11.9217 10.7969 11.673 11.2369 11.693C11.6769 11.7117 11.8769 11.913 11.8769 11.913C11.8769 11.913 12.1443 11.588 12.5081 11.5218C12.8718 11.4543 13.2243 11.6455 13.2243 11.6455C13.2243 11.6455 13.473 11.1868 13.8355 11.0818C14.1992 10.9768 14.6692 11.2493 14.6692 11.2493L14.4755 10.0693L10.9119 10.3256L10.2419 10.8418Z" fill="#AF0D03"/>
<path d="M15.613 14.7792L15.8417 15.3617C15.8417 15.3617 15.1443 15.9067 14.2456 16.1554C13.3468 16.4041 12.0969 16.5041 12.0969 16.5041L11.7319 14.4555L12.9644 14.083L13.6406 14.3717C13.6406 14.3717 13.0419 14.8755 12.8781 15.1067C12.5669 15.5442 12.6781 15.8217 12.6781 15.8217C12.6781 15.8217 13.6331 15.7354 14.3693 15.4204C15.1068 15.103 15.613 14.7792 15.613 14.7792Z" fill="#C9C9C9"/>
<path d="M4.97729 14.5015C4.97729 14.5015 5.40102 14.6915 5.61726 14.8452C5.88475 15.0365 6.15223 15.3227 6.23848 15.7814C6.29848 16.0977 6.32472 16.4502 6.32472 16.4502C6.32472 16.4502 6.62096 16.6789 7.20343 15.7052C7.7859 14.7315 8.08213 13.3828 8.13088 12.4279C8.17838 11.4729 8.15963 11.2529 8.15963 11.2529L7.29967 12.6666L6.02849 13.8228L4.97729 14.5015Z" fill="#D92F0A"/>
<path d="M8.16709 11.2578C8.16709 11.2578 8.26708 12.1253 7.0984 13.2015C6.37218 13.8702 5.66347 14.3302 5.1785 14.5202C5.01226 14.5852 4.86351 14.5589 4.75852 14.5577C4.41604 14.5527 4.47978 14.4939 4.67102 14.3964C4.89601 14.2814 5.54973 14.0427 6.77341 12.9152C7.75211 12.0128 7.81961 11.1103 7.81961 11.1103L8.16584 10.9229V11.2578H8.16709Z" fill="#FFFEFF"/>
<path d="M12.1444 16.4997C12.1444 16.4997 11.7144 16.6522 11.0644 15.7735C10.4145 14.8948 10.2145 13.3374 10.1857 12.8399C10.157 12.3425 10.167 11.4062 10.167 11.4062L11.3619 12.9062C11.3619 12.9062 13.2156 14.3011 13.1868 14.3111C13.1581 14.3211 12.5818 14.6836 12.3656 15.0473C11.9244 15.7823 12.1444 16.4997 12.1444 16.4997Z" fill="#D92F0A"/>
<path d="M10.1007 10.9924C10.1007 10.9924 10.0832 11.3199 10.147 11.5686C10.2807 12.0936 10.7419 12.661 11.3419 13.1935C12.1156 13.881 13.0143 14.3597 13.0806 14.3785C13.148 14.3972 13.6255 14.4172 13.6343 14.3685C13.638 14.3522 13.6668 14.281 13.5105 14.1672C13.1893 13.936 12.4781 13.5285 11.8956 13.0685C11.4844 12.7435 10.9569 12.3398 10.4982 11.5074C10.4144 11.3561 10.4444 10.9736 10.4444 10.9736L10.1007 10.9924Z" fill="white"/>
<path d="M3.74602 8.15983C3.74602 8.15983 2.75107 8.28982 2.71982 8.43107C2.68857 8.57356 2.79857 10.451 2.79857 10.451C2.79857 10.451 3.2173 10.3735 3.42853 10.4197C3.63977 10.466 4.03725 10.7234 4.03725 10.7234L4.70347 10.3135L5.18344 10.9797C5.18344 10.9797 5.52217 10.8047 5.9909 10.8509C6.45962 10.8972 6.77461 11.2134 6.77461 11.2134L7.42957 10.6747L8.16078 11.2497C8.16078 11.2497 8.49452 10.8609 9.11448 10.8497C9.73445 10.8384 10.1094 11.2947 10.1094 11.2947L10.9281 10.581L11.7356 11.2009C11.7356 11.2009 12.0518 10.9434 12.4143 10.8622C12.7768 10.7797 13.0455 10.9209 13.0455 10.9209L13.5605 10.0085L14.1455 10.4297C14.1455 10.4297 14.333 10.2072 14.6017 10.1722C14.8704 10.1372 15.1979 10.126 15.1979 10.126C15.1979 10.126 15.1742 8.86229 15.1742 8.6748C15.1742 8.48731 15.1042 8.18358 15.0217 8.12483C14.9392 8.06608 14.203 7.90234 14.203 7.90234L3.74602 8.15983Z" fill="#D92F0A"/>
<path d="M5.06221 8.70605L4.08601 8.93229L4.03601 10.7247C4.03601 10.7247 4.33349 10.6385 4.63098 10.6697C4.92846 10.7009 5.1822 10.9822 5.1822 10.9822C5.1822 10.9822 5.17595 9.81725 5.18595 9.62101C5.19595 9.42477 5.2722 9.05229 5.31344 8.98104C5.35469 8.90854 5.06221 8.70605 5.06221 8.70605Z" fill="#E1E1E1"/>
<path d="M7.63203 9.08691L6.74083 9.41315C6.74083 9.41315 6.71708 10.2381 6.73833 10.5668C6.75833 10.8956 6.77333 11.2156 6.77333 11.2156C6.77333 11.2156 7.17956 11.0806 7.50954 11.1006C7.83827 11.1206 8.1595 11.2518 8.1595 11.2518L8.147 9.48689L7.63203 9.08691Z" fill="#E1E1E1"/>
<path d="M10.0795 9.45727C10.0795 9.45727 10.1107 10.126 10.1207 10.4647C10.1307 10.8034 10.1082 11.2984 10.1082 11.2984C10.1082 11.2984 10.5932 11.0509 10.9744 11.0197C11.3544 10.9884 11.7344 11.2047 11.7344 11.2047C11.7344 11.2047 11.7044 10.506 11.6631 9.99224C11.6344 9.63351 11.5294 9.32353 11.5294 9.32353L10.5532 8.7998L10.0795 9.45727Z" fill="#E1E1E1"/>
<path d="M13.0443 10.9232C13.0443 10.9232 13.3681 10.567 13.6255 10.4845C13.883 10.402 14.1443 10.432 14.1443 10.432C14.1443 10.432 14.1293 9.59078 14.088 9.20955C14.0468 8.82957 14.0093 8.61083 14.0093 8.61083L12.8143 8.09961L12.8481 9.03956C12.8481 9.03956 12.9893 9.35329 13.0306 9.89826C13.0706 10.4432 13.0443 10.9232 13.0443 10.9232Z" fill="#E1E1E1"/>
<path d="M9.05208 4.72754L7.60965 5.37001C7.60965 5.37001 6.29847 6.3537 5.26103 7.00992C4.22358 7.66613 3.62736 7.91612 3.50362 7.96737C3.30613 8.04986 2.98615 8.14736 2.8474 8.24235C2.61742 8.40234 2.71366 8.47734 2.75116 8.48984C2.78866 8.50234 3.9361 9.01606 5.96974 9.3173C8.00338 9.61853 10.7682 9.49604 12.1857 9.22105C13.6018 8.94607 15.0218 8.12611 15.0218 8.12611C15.0218 8.12611 14.903 7.92737 14.1168 7.58614C13.3294 7.24491 12.6606 6.87743 11.5194 6.12997C10.3783 5.3825 10.1033 5.01502 10.1033 5.01502L9.05208 4.72754Z" fill="#FF6110"/>
<path d="M7.83206 5.63281C7.83206 5.63281 6.24465 7.25898 4.28975 8.61016C4.07601 8.75765 4.08601 8.93264 4.08601 8.93264C4.08601 8.93264 4.2435 8.99264 4.63098 9.08263C5.02471 9.17388 5.25095 9.19637 5.25095 9.19637C5.25095 9.19637 5.42969 8.99138 5.81092 8.61016C6.49338 7.92769 6.91711 7.38647 7.45083 6.69526C7.80456 6.23653 8.22579 5.68656 8.22579 5.68656L7.83206 5.63281Z" fill="white"/>
<path d="M8.5145 5.84187C8.5145 5.84187 7.93703 7.1793 7.66205 7.75677C7.53205 8.02801 7.24207 8.53048 6.97958 8.95046C6.75084 9.31669 6.74084 9.41419 6.74084 9.41419C6.74084 9.41419 7.25457 9.48793 7.5033 9.48793C7.75204 9.48793 8.14577 9.48543 8.14577 9.48543C8.14577 9.48543 8.34326 8.89671 8.55325 7.71678C8.67574 7.03056 8.85448 5.82812 8.85448 5.82812L8.5145 5.84187Z" fill="white"/>
<path d="M9.31445 5.89503C9.31445 5.89503 9.44445 7.18371 9.58944 7.86242C9.78568 8.78113 10.0782 9.45609 10.0782 9.45609C10.0782 9.45609 10.5731 9.43609 10.8481 9.40984C11.1231 9.38359 11.5268 9.32235 11.5268 9.32235C11.5268 9.32235 11.0544 8.59489 10.5981 7.86242C10.2969 7.37745 9.73193 6.24876 9.64069 5.81628C9.56319 5.4438 9.31445 5.89503 9.31445 5.89503Z" fill="white"/>
<path d="M10.022 5.65878C10.022 5.69753 10.437 6.58248 11.4644 7.69242C12.2906 8.58488 12.8418 9.0436 12.8418 9.0436C12.8418 9.0436 13.313 8.89861 13.5243 8.82111C13.773 8.72987 14.0093 8.61113 14.0093 8.61113C14.0093 8.61113 12.8943 7.83742 12.2381 7.29994C11.5819 6.76247 10.6769 6.01501 10.4282 5.67378C10.1795 5.3313 10.022 5.65878 10.022 5.65878Z" fill="white"/>
<path d="M9.03832 3.83621C8.76209 3.85621 8.67084 4.32119 8.5271 4.46618C8.38336 4.60992 7.6084 5.37113 7.6084 5.37113C7.6084 5.37113 7.74589 6.08985 9.11082 6.0761C10.3595 6.06235 10.597 5.47113 10.597 5.47113C10.597 5.47113 9.79953 4.62367 9.61579 4.41368C9.43205 4.20244 9.4058 3.80997 9.03832 3.83621Z" fill="#D92F0A"/>
<path d="M9.08454 3.22829C9.08454 3.22829 9.32078 3.15329 9.65701 3.19204C9.99949 3.23204 10.227 3.34328 10.5832 3.33578C11.1282 3.32578 11.5669 3.06955 11.7244 2.92455C11.8819 2.77956 12.0831 2.54457 11.9781 2.46583C11.8731 2.38708 11.4257 2.50583 10.9257 2.30834C10.4845 2.1346 10.2632 1.68087 9.63701 1.60837C9.09829 1.54713 8.9408 1.76837 8.9408 1.76837L9.08454 3.22829Z" fill="#FF6110"/>
<path d="M8.87451 1.61133V4.06995L9.26824 3.96495L9.22949 1.61133H8.87451Z" fill="#D92F0A"/>
</svg>


@@ -0,0 +1,44 @@
<svg
width="13"
height="12"
viewBox="0 0 13 12"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M8.5 8H11.5"
stroke="white"
strokeLinecap="round"
strokeLinejoin="round"
/>
<path
d="M10 6.5V9.5"
stroke="white"
strokeLinecap="round"
strokeLinejoin="round"
/>
<path
d="M11 4.99995V3.99995C10.9998 3.82459 10.9535 3.65236 10.8658 3.50053C10.778 3.34871 10.6519 3.22263 10.5 3.13495L7 1.13495C6.84798 1.04718 6.67554 1.00098 6.5 1.00098C6.32446 1.00098 6.15202 1.04718 6 1.13495L2.5 3.13495C2.34813 3.22263 2.22199 3.34871 2.13423 3.50053C2.04647 3.65236 2.00018 3.82459 2 3.99995V7.99995C2.00018 8.17531 2.04647 8.34755 2.13423 8.49937C2.22199 8.65119 2.34813 8.77727 2.5 8.86495L6 10.865C6.15202 10.9527 6.32446 10.9989 6.5 10.9989C6.67554 10.9989 6.84798 10.9527 7 10.865L8 10.295"
stroke="white"
strokeLinecap="round"
strokeLinejoin="round"
/>
<path
d="M4.25 2.13477L8.75 4.70977"
stroke="white"
strokeLinecap="round"
strokeLinejoin="round"
/>
<path
d="M2.14502 3.5L6.50002 6L10.855 3.5"
stroke="white"
strokeLinecap="round"
strokeLinejoin="round"
/>
<path
d="M6.5 11V6"
stroke="white"
strokeLinecap="round"
strokeLinejoin="round"
/>
</svg>


@@ -0,0 +1,52 @@
<svg
width="16"
height="16"
viewBox="0 0 16 16"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M7.26714 3.8662C7.49161 3.74063 8.10058 3.44726 9.35185 4.41405C10.6031 5.38083 10.3231 6.16538 10.1819 6.36985C10.0164 6.61099 8.97291 7.45221 8.97291 7.45221L3.20108 14.5809C3.20108 14.5809 2.76991 14.5286 2.19206 14.1286C1.56865 13.6974 1.06748 12.7095 1.4453 11.9939C1.65755 11.5927 6.10588 5.38972 6.95488 4.2118C7.03156 4.10623 7.10601 3.95621 7.26714 3.8662Z"
fill="#FE180A"
/>
<path
d="M8.02849 7.3418L3.06787 14.5516C3.06787 14.5516 3.47348 14.6861 3.80018 14.6727C4.2558 14.6549 4.49805 14.4571 4.49805 14.4571L9.24642 7.60072L8.02849 7.3418Z"
fill="#CA1837"
/>
<path
d="M1.84641 11.3867L1.57087 11.7978C1.57087 11.7978 1.51976 12.3256 2.17861 12.9911C2.42416 13.2388 2.6797 13.4199 2.9008 13.5533C3.26856 13.7755 3.54299 13.8599 3.54299 13.8599L3.73853 13.5866C3.73853 13.5866 3.4841 13.4988 3.21078 13.3311C2.91857 13.1511 2.5997 12.8833 2.38971 12.6478C1.76641 11.9467 1.84641 11.3867 1.84641 11.3867Z"
fill="#FDB900"
/>
<path
d="M5.03675 13.6786L4.8112 14.0052C4.8112 14.0052 4.529 14.0408 4.12124 14.0019C3.87125 13.9785 3.54016 13.8597 3.54016 13.8597L3.73015 13.583C3.73015 13.583 3.95458 13.673 4.30568 13.7086C4.65677 13.7441 5.03675 13.6786 5.03675 13.6786Z"
fill="#DF7D15"
/>
<path
d="M7.47943 4.15898C7.19722 4.42564 7.35055 5.08227 8.25828 5.69446C9.19267 6.32554 9.8082 6.23554 9.98708 5.99111C10.1748 5.73335 9.74265 5.04227 9.14157 4.59674C8.51049 4.12899 7.78608 3.869 7.47943 4.15898Z"
fill="#A42615"
/>
<path
d="M6.10048 5.39899C6.10048 5.39899 5.99493 5.85008 6.72933 6.58449C7.09709 6.95224 7.35708 7.14001 7.5704 7.26556C7.78705 7.39222 7.94593 7.45889 7.94593 7.45889L8.59146 7.23667L8.62812 6.45783C8.62812 6.45783 7.90705 6.08785 7.42596 5.54565C6.80266 4.84235 6.91377 4.44793 6.94377 4.22461C6.83599 4.35571 6.10048 5.39899 6.10048 5.39899Z"
fill="#FDB700"
/>
<path
d="M9.98728 6.57724L9.18066 7.69941C9.18066 7.69941 9.044 7.79051 8.55958 7.67608C8.2707 7.60719 7.94739 7.45942 7.94739 7.45942L8.62957 6.45947C8.62957 6.45947 9.034 6.64502 9.35954 6.67279C9.67285 6.69835 9.98728 6.57724 9.98728 6.57724Z"
fill="#DE7F14"
/>
<path
d="M8.40293 4.97806C8.3796 5.1025 8.47404 5.24583 8.63514 5.34582C8.79513 5.44693 8.99735 5.3636 9.00957 5.27471C9.02179 5.18583 9.07067 4.23477 9.4551 4.1881C9.84174 4.14033 10.0195 4.92473 10.7684 5.12694C11.3061 5.27138 11.475 5.1325 11.475 5.1325L11.4039 4.70474C11.4039 4.70474 11.1795 4.84696 10.7328 4.59808C10.2639 4.33699 10.0495 3.61147 9.37288 3.67702C8.67181 3.74479 8.5707 4.52698 8.52293 4.66363C8.47404 4.79918 8.41071 4.93584 8.40293 4.97806Z"
fill="#D8A26D"
/>
<path
d="M9.93205 2.54571C9.98539 2.6535 11.2544 4.66376 11.2544 4.66376C11.2544 4.66376 11.2633 7.23741 11.2633 7.32743C11.2633 7.48856 11.3711 7.55079 11.4956 7.39855C11.5645 7.31409 12.5679 5.64721 12.5679 5.64721C12.5679 5.64721 14.2659 6.34397 14.3915 6.40731C14.5171 6.46954 14.5882 6.38953 14.5171 6.23729C14.4459 6.08505 13.328 4.08368 13.328 4.08368C13.328 4.08368 14.4182 2.52904 14.4804 2.40347C14.5426 2.2779 14.4537 2.19789 14.337 2.25123C14.2204 2.30457 12.8713 2.97466 12.8713 2.97466C12.8713 2.97466 12.379 1.55003 12.3435 1.46447C12.2812 1.31223 12.1823 1.28556 12.1379 1.47336C12.0934 1.66116 11.6467 3.2158 11.6467 3.2158C11.6467 3.2158 10.2165 2.46459 10.0921 2.41125C9.96761 2.35791 9.87871 2.43903 9.93205 2.54571Z"
fill="#FD8E02"
/>
<path
d="M10.9332 3.33275L11.7021 4.44936L11.6843 6.2726L12.3809 5.03044L13.6764 5.70041L12.7831 4.20826L13.3553 3.27942C13.3553 3.27942 12.6231 3.69051 12.6043 3.64607C12.5854 3.60163 12.282 2.68945 12.282 2.68945L11.9787 3.75273L10.9332 3.33275Z"
fill="#FFE268"
/>
<path
d="M11.2556 5.19045L11.2534 4.7427C11.2534 4.7427 11.5212 4.70381 11.7712 4.44272C11.9889 4.21384 12.1523 3.90719 12.3411 4.17495C12.4834 4.37716 12.1689 4.73159 11.9134 4.91714C11.5223 5.20379 11.2556 5.19045 11.2556 5.19045Z"
fill="#FCBA03"
/>
</svg>


@@ -0,0 +1,138 @@
<svg
width="16"
height="16"
viewBox="0 0 16 16"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M14.519 14.6685H1.48179C1.39956 14.6685 1.33289 14.6018 1.33289 14.5196V1.4824C1.33289 1.40017 1.39956 1.3335 1.48179 1.3335H14.519C14.6012 1.3335 14.6679 1.40017 14.6679 1.4824V14.5196C14.6679 14.6018 14.6012 14.6685 14.519 14.6685Z"
fill="#F2A600"
/>
<path
d="M12.7488 3.25146H3.25098V12.7493H12.7488V3.25146Z"
fill="white"
/>
<path
opacity="0.5"
d="M14.519 1.3335H1.48179C1.44067 1.3335 1.404 1.35016 1.37622 1.37683L3.2509 3.25117H12.7488L14.6234 1.37683C14.5968 1.35016 14.5601 1.3335 14.519 1.3335Z"
fill="#D1762C"
/>
<path
d="M3.25056 3.25163L1.37622 1.37695C1.34955 1.40473 1.33289 1.44141 1.33289 1.48252V14.5197C1.33289 14.5608 1.34955 14.5986 1.37622 14.6253L3.25056 12.7506V3.25163Z"
fill="#DE7340"
/>
<path
d="M13.6122 2.38892V13.6126H2.38855V2.38892H13.6122ZM14.2789 1.72217H1.7218V14.2793H14.2789V1.72217Z"
fill="#FFCD40"
/>
<path
d="M2.38843 13.6126V2.38892L1.7218 1.72217V14.2793L2.38843 13.6126Z"
fill="#A65F3E"
/>
<path
d="M13.6044 2.3888L14.2711 1.72217H1.7218L2.38855 2.3888H13.6044Z"
fill="#D1762C"
/>
<path
d="M13.8346 13.8346H2.16644V2.1665H13.8346V13.8346ZM2.38869 13.6124H13.6123V2.38875H2.38869V13.6124Z"
fill="#D1762C"
/>
<path
d="M2.38865 13.6124V2.38875L2.16644 2.1665V13.8346L2.38865 13.6124Z"
fill="#824A34"
/>
<path
d="M13.6123 2.38871L13.8346 2.1665H2.16644L2.38869 2.38871H13.6123Z"
fill="#A65F3E"
/>
<path
d="M12.0566 3.94482H3.94446V12.057H12.0566V3.94482Z"
fill="url(#paint0_linear_2447_4817)"
/>
<path
d="M12.0561 7.84766C11.8117 7.91765 11.5972 8.08431 11.4672 8.30319C11.4472 8.33763 11.4295 8.37652 11.3939 8.39763C11.2795 8.43985 11.1484 8.44874 11.0595 8.54873C11.0217 8.31319 10.7362 8.18208 10.5173 8.25541C10.3317 8.30208 10.2395 8.49762 10.2228 8.67539C10.0273 8.61762 9.81176 8.76761 9.79843 8.97093C9.79621 8.98982 9.79621 9.01093 9.78621 9.02648C9.73732 9.07648 9.63844 9.07093 9.57844 9.13092C9.45956 9.22425 9.404 9.38535 9.27734 9.46757C9.99286 9.06426 10.8584 9.12092 11.6383 8.92538C11.7928 8.8876 11.965 8.83983 12.0561 8.70428V7.84766Z"
fill="#1B5E20"
/>
<path
d="M12.0566 11.3998C12.0566 11.3998 10.213 11.4953 8.20057 10.8754C6.1881 10.2565 5.26465 9.62321 5.26465 9.62321C5.26465 9.62321 6.61037 9.88431 8.40726 9.59099C9.58408 9.39878 10.4564 8.73326 12.0544 8.70215L12.0566 11.3998Z"
fill="#689F38"
/>
<path
d="M7.27824 9.74526C7.2938 9.72415 7.31268 9.70415 7.31935 9.67859C7.32713 9.64748 7.31491 9.61526 7.30602 9.58526C7.28157 9.49749 7.28935 9.40083 7.32602 9.31639C7.3449 9.27195 7.37268 9.23195 7.38601 9.18529C7.40935 9.10529 7.38712 9.01974 7.38268 8.93641C7.37601 8.82308 7.40157 8.70864 7.4549 8.60865C7.46268 8.5942 7.47156 8.57976 7.48712 8.57198C7.52934 8.5531 7.56489 8.61198 7.56934 8.65753C7.58267 8.77308 7.55823 8.89197 7.58267 9.00641C7.61711 9.1664 7.74599 9.30528 7.73711 9.46971C7.73377 9.5286 7.71266 9.58749 7.72377 9.64415C7.73266 9.68748 7.76044 9.72526 7.78377 9.76303C7.84377 9.86081 7.88043 9.9708 7.89154 10.0852C7.90488 10.2219 7.86377 10.3852 7.73599 10.433C7.59711 10.4863 7.4549 10.373 7.34602 10.2986C7.24602 10.2297 7.19825 10.0875 7.20158 9.9708C7.2038 9.8908 7.22936 9.8097 7.27824 9.74526Z"
fill="#2E7D32"
/>
<path
d="M10.5363 7.21366C10.0952 7.24589 9.65401 7.27811 9.21173 7.31033C8.99281 7.32588 8.77056 7.34255 8.55498 7.30033C8.45274 7.28033 8.35273 7.247 8.24827 7.24144C8.05825 7.23144 7.87712 7.31255 7.69265 7.35921C7.4904 7.41032 7.27815 7.42032 7.07257 7.3881C6.92922 7.36588 6.78253 7.31922 6.67808 7.21811C6.62585 7.16811 6.58584 7.10478 6.53028 7.05701C6.4625 6.99701 6.37471 6.96368 6.29914 6.91479C5.9791 6.71036 5.90354 6.20927 6.14912 5.91929C6.18913 5.87262 6.2358 5.8304 6.27136 5.7793C6.34026 5.68152 6.36248 5.55931 6.4036 5.44709C6.53362 5.086 6.87255 4.81157 7.25148 4.74935C7.63042 4.68713 8.03491 4.83601 8.2905 5.12266C8.37385 5.21599 8.4483 5.32599 8.56387 5.37265C8.63165 5.39932 8.70722 5.40154 8.77945 5.41043C9.19173 5.46265 9.53732 5.7793 9.7129 6.15594C9.79069 6.32371 9.84514 6.50926 9.97516 6.63925C10.0285 6.69258 10.0929 6.7348 10.1574 6.7748C10.4463 6.95479 10.7486 7.11256 11.0608 7.247C10.9808 7.24144 10.9019 7.237 10.8219 7.23144"
fill="white"
/>
<path
d="M9.21178 7.31053C8.99286 7.32608 8.77061 7.34275 8.55503 7.30053C8.45279 7.28053 8.35278 7.2472 8.24832 7.24164C8.0583 7.23164 7.87717 7.31275 7.6927 7.35941C7.49045 7.41052 7.2782 7.42052 7.07262 7.3883C6.92927 7.36608 6.78259 7.31942 6.67813 7.21831C6.6259 7.16831 6.58589 7.10498 6.53033 7.05721C6.46255 6.99721 6.37476 6.96388 6.29919 6.91499C6.18362 6.84055 6.11806 6.73722 6.05805 6.61723C6.0436 6.58945 6.0436 6.5639 6.04138 6.53279C6.13806 6.68167 6.3992 6.84166 6.50366 6.725C6.53811 6.62279 6.41587 6.44502 6.37476 6.30391C6.29697 6.03726 6.28919 5.76061 6.38476 5.49951C6.39587 5.4684 6.42365 5.3673 6.46143 5.31396C6.49033 5.53062 6.56589 5.85949 6.72036 6.01393C6.87482 6.16836 7.13263 6.23392 7.31821 6.11837C7.43489 6.04504 7.50379 5.91727 7.58824 5.80838C7.83716 5.48951 8.27166 5.32619 8.66949 5.40285C8.46391 5.52284 8.26388 5.66839 8.14276 5.87282C8.02163 6.07726 7.99274 6.35169 8.12831 6.54835C8.23721 6.70611 8.43501 6.78722 8.62615 6.78389C8.73172 6.78167 8.85062 6.745 8.89396 6.64834C8.91841 6.59279 8.91285 6.52612 8.94063 6.47168C8.99286 6.36724 9.14621 6.35391 9.24956 6.40724C9.35402 6.46168 9.42514 6.56057 9.50515 6.64501C9.80519 6.96055 10.2486 7.08721 10.6686 7.20053C10.6564 7.1972 10.6308 7.2072 10.6175 7.20942C10.5975 7.21276 10.5775 7.21609 10.5575 7.21942C10.5264 7.22498 10.4953 7.23275 10.463 7.22831C10.3964 7.21831 10.3286 7.22942 10.2619 7.23386C10.1052 7.24498 9.94743 7.2572 9.79074 7.26831C9.59849 7.28164 9.40514 7.29608 9.21178 7.31053Z"
fill="#C9E3E6"
/>
<path
d="M12.0566 10.4487C8.15943 11.3565 6.30698 9.12659 3.94446 8.87549V12.0564H12.0566V10.4487Z"
fill="#8BC34A"
/>
<path
d="M12.0566 10.4487C8.15943 11.3565 6.30698 9.12659 3.94446 8.87549V12.0564H12.0566V10.4487Z"
fill="url(#paint1_radial_2447_4817)"
/>
<path
d="M4.92438 9.46168C4.80883 9.49613 4.71328 9.40724 4.61662 9.36058C4.35997 9.17059 4.35997 8.82394 4.37997 8.53395C4.37441 8.46507 4.4233 8.35174 4.50441 8.40063C4.47108 8.23397 4.46774 8.06176 4.49218 7.89399C4.50885 7.76399 4.5544 7.674 4.65995 7.60734C4.66662 7.594 4.66218 7.57845 4.65884 7.564C4.60996 7.3729 4.69884 7.17514 4.70773 6.97737C4.71551 6.82293 4.6744 6.66294 4.72773 6.5185C4.74995 6.45406 4.83106 6.41851 4.8455 6.49962C4.92327 6.80404 5.05771 7.10736 5.03216 7.42735C5.02327 7.47068 5.03993 7.50623 5.07549 7.53178C5.30214 7.734 5.20104 8.0862 5.2377 8.36063C5.34436 8.61284 5.34214 8.89505 5.21548 9.14059C5.15771 9.27614 5.06438 9.40169 4.92438 9.46168Z"
fill="#2E7D32"
/>
<path
d="M6.46127 9.63067C5.97019 9.5129 5.95019 9.64289 5.78798 9.60511C5.23023 9.43734 5.57354 8.89404 5.73465 8.76182C5.66687 8.50295 5.84353 8.3974 6.05241 8.14186C6.08574 7.61855 6.58793 7.77076 6.52349 8.22963C6.65682 8.40295 6.7057 8.59294 6.66237 8.8096C6.67571 8.93848 6.80126 9.02292 6.86014 9.13847C6.97458 9.36401 6.78237 9.70733 6.46127 9.63067Z"
fill="#689F38"
/>
<path
d="M6.27462 8.77985C6.21573 8.90206 6.04685 8.92762 5.91464 8.89429C5.85353 8.87873 5.78909 8.8454 5.7302 8.86762C5.59243 8.91873 5.5491 9.02984 5.54466 9.14094C5.54466 9.28982 5.69354 9.40648 5.84242 9.41315C5.9913 9.41982 6.13129 9.34204 6.24351 9.24649C6.27684 9.44648 6.58682 9.44093 6.72237 9.32649C6.85125 9.2176 6.7957 9.04317 6.7957 9.04317C6.95014 9.21538 6.94236 9.35093 6.92458 9.49426C6.90792 9.63203 6.78903 9.78535 6.86014 9.89201C6.61349 9.95534 6.49127 9.74313 6.29462 9.65536C6.2224 9.63536 6.14685 9.65647 6.07352 9.67091C5.77131 9.7498 5.43244 9.57203 5.43022 9.2376C5.42799 9.0765 5.51688 8.91762 5.65243 8.82985C5.66798 8.81985 5.68465 8.80985 5.69131 8.79207C5.69798 8.77318 5.69131 8.75318 5.68465 8.7343C5.54688 8.36543 5.90686 8.36876 6.06241 8.12988C6.12462 8.19877 6.22684 8.22765 6.31573 8.2021C6.24573 8.36431 6.04907 8.28654 5.92463 8.35209C5.69354 8.54208 6.07352 8.80874 6.27462 8.77985Z"
fill="#2E7D32"
/>
<path
d="M4.61111 8.03417C4.6911 8.17971 4.9022 8.29748 4.85443 8.48747C4.78221 8.4397 4.69443 8.37859 4.60666 8.38192C4.53333 8.4797 4.56222 8.6508 4.61999 8.75191C4.88887 9.11189 5.17885 8.56858 5.20107 8.31415C5.52772 8.69635 5.22885 9.7563 4.63777 9.38743C4.55889 9.3241 4.3289 9.06967 4.34223 8.64524C4.35001 8.38748 4.45334 8.37081 4.51333 8.37859C4.42556 8.24304 4.40889 8.05083 4.44334 7.91195C4.47334 7.79307 4.52555 7.69752 4.60555 7.61974C4.68666 7.53975 4.62777 7.46864 4.62999 7.34642C4.63222 7.16865 4.67888 7.06644 4.71554 6.96533C4.72888 7.15421 4.79887 7.34865 4.76888 7.53641C4.7511 7.61641 4.66221 7.6153 4.62222 7.67307C4.55666 7.78084 4.55111 7.92306 4.61111 8.03417Z"
fill="#1B5E20"
/>
<path
d="M5.00664 10.0212C5.04886 10.1578 4.54888 10.0578 4.61888 9.88229C4.66776 9.76007 4.68999 9.6323 4.71443 9.50675C4.73887 9.30787 4.67999 9.1301 4.58777 8.96011C4.60888 8.94122 4.63332 8.92456 4.65888 8.91234C4.85887 9.28343 4.86553 9.15232 5.02997 8.86234C5.06108 8.85901 5.09219 8.86123 5.12218 8.86901C4.92331 9.18455 4.87553 9.65674 5.00664 10.0212Z"
fill="url(#paint2_radial_2447_4817)"
/>
<defs>
<linearGradient
id="paint0_linear_2447_4817"
x1="7.35993"
y1="10.2856"
x2="8.28906"
y2="6.97175"
gradientUnits="userSpaceOnUse"
>
<stop offset="0.1167" stopColor="#AFE4FE" />
<stop offset="0.6082" stopColor="#84C9ED" />
<stop offset="1" stopColor="#5FB2DE" />
</linearGradient>
<radialGradient
id="paint1_radial_2447_4817"
cx="0"
cy="0"
r="1"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(9.71023 10.5339) rotate(8.61827) scale(3.99428 2.03076)"
>
<stop stopColor="#D4E157" />
<stop offset="1" stopColor="#D4E157" stopOpacity="0" />
</radialGradient>
<radialGradient
id="paint2_radial_2447_4817"
cx="0"
cy="0"
r="1"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(5.06834 9.42892) rotate(15.9186) scale(0.528617 0.65868)"
>
<stop offset="0.4413" stopColor="#A06841" />
<stop offset="0.9229" stopColor="#A06841" stopOpacity="0.138" />
<stop offset="1" stopColor="#A06841" stopOpacity="0" />
</radialGradient>
</defs>
</svg>


@@ -0,0 +1,79 @@
<svg width="33" height="32" viewBox="0 0 33 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M27.1695 29.336H5.8335C4.36665 29.336 3.1665 28.1359 3.1665 26.669V5.33302C3.1665 3.86617 4.36665 2.66602 5.8335 2.66602H27.1695C28.6364 2.66602 29.8365 3.86617 29.8365 5.33302V26.669C29.8365 28.1359 28.6364 29.336 27.1695 29.336Z" fill="#BDBDBD"/>
<g opacity="0.7">
<path opacity="0.7" d="M22.9916 26.2917C25.3012 26.2917 27.1736 24.4194 27.1736 22.1097C27.1736 19.8001 25.3012 17.9277 22.9916 17.9277C20.6819 17.9277 18.8096 19.8001 18.8096 22.1097C18.8096 24.4194 20.6819 26.2917 22.9916 26.2917Z" fill="#757575"/>
</g>
<path d="M22.9916 25.6434C24.9429 25.6434 26.5248 24.0616 26.5248 22.1103C26.5248 20.159 24.9429 18.5771 22.9916 18.5771C21.0403 18.5771 19.4585 20.159 19.4585 22.1103C19.4585 24.0616 21.0403 25.6434 22.9916 25.6434Z" fill="#37474F"/>
<path d="M25.8351 20.0134C25.7551 19.9068 25.6706 19.8045 25.5817 19.709L23.3508 21.3578C23.1397 21.5133 23.0952 21.8089 23.2508 22.02C23.4063 22.2311 23.7018 22.2755 23.9129 22.12L26.1239 20.4867C26.0417 20.3223 25.9462 20.1645 25.8351 20.0134Z" fill="white"/>
<g opacity="0.7">
<path opacity="0.7" d="M19.3027 26.1103C19.5187 26.1103 19.6938 25.9352 19.6938 25.7192C19.6938 25.5032 19.5187 25.3281 19.3027 25.3281C19.0867 25.3281 18.9116 25.5032 18.9116 25.7192C18.9116 25.9352 19.0867 26.1103 19.3027 26.1103Z" fill="#212121"/>
<path opacity="0.7" d="M18.0207 24.0048C18.2367 24.0048 18.4118 23.8297 18.4118 23.6137C18.4118 23.3978 18.2367 23.2227 18.0207 23.2227C17.8047 23.2227 17.6296 23.3978 17.6296 23.6137C17.6296 23.8297 17.8047 24.0048 18.0207 24.0048Z" fill="#212121"/>
<path opacity="0.7" d="M17.8362 21.624C18.0522 21.624 18.2273 21.4489 18.2273 21.2329C18.2273 21.0169 18.0522 20.8418 17.8362 20.8418C17.6202 20.8418 17.4451 21.0169 17.4451 21.2329C17.4451 21.4489 17.6202 21.624 17.8362 21.624Z" fill="#212121"/>
<path opacity="0.7" d="M18.7536 19.3798C18.9696 19.3798 19.1447 19.2047 19.1447 18.9887C19.1447 18.7728 18.9696 18.5977 18.7536 18.5977C18.5376 18.5977 18.3625 18.7728 18.3625 18.9887C18.3625 19.2047 18.5376 19.3798 18.7536 19.3798Z" fill="#212121"/>
<path opacity="0.7" d="M20.6565 17.7548C20.8725 17.7548 21.0476 17.5797 21.0476 17.3637C21.0476 17.1478 20.8725 16.9727 20.6565 16.9727C20.4405 16.9727 20.2654 17.1478 20.2654 17.3637C20.2654 17.5797 20.4405 17.7548 20.6565 17.7548Z" fill="#212121"/>
<path opacity="0.7" d="M22.9929 17.206C23.2089 17.206 23.384 17.0309 23.384 16.8149C23.384 16.5989 23.2089 16.4238 22.9929 16.4238C22.7769 16.4238 22.6018 16.5989 22.6018 16.8149C22.6018 17.0309 22.7769 17.206 22.9929 17.206Z" fill="#212121"/>
<path opacity="0.7" d="M25.3266 17.7548C25.5426 17.7548 25.7177 17.5797 25.7177 17.3637C25.7177 17.1478 25.5426 16.9727 25.3266 16.9727C25.1106 16.9727 24.9355 17.1478 24.9355 17.3637C24.9355 17.5797 25.1106 17.7548 25.3266 17.7548Z" fill="#212121"/>
<path opacity="0.7" d="M27.2307 19.3798C27.4467 19.3798 27.6218 19.2047 27.6218 18.9887C27.6218 18.7728 27.4467 18.5977 27.2307 18.5977C27.0147 18.5977 26.8396 18.7728 26.8396 18.9887C26.8396 19.2047 27.0147 19.3798 27.2307 19.3798Z" fill="#212121"/>
<path opacity="0.7" d="M28.1465 21.624C28.3625 21.624 28.5376 21.4489 28.5376 21.2329C28.5376 21.0169 28.3625 20.8418 28.1465 20.8418C27.9305 20.8418 27.7554 21.0169 27.7554 21.2329C27.7554 21.4489 27.9305 21.624 28.1465 21.624Z" fill="#212121"/>
<path opacity="0.7" d="M27.9643 24.0048C28.1803 24.0048 28.3554 23.8297 28.3554 23.6137C28.3554 23.3978 28.1803 23.2227 27.9643 23.2227C27.7483 23.2227 27.5732 23.3978 27.5732 23.6137C27.5732 23.8297 27.7483 24.0048 27.9643 24.0048Z" fill="#212121"/>
<path opacity="0.7" d="M26.6799 26.1123C26.8959 26.1123 27.071 25.9372 27.071 25.7212C27.071 25.5052 26.8959 25.3301 26.6799 25.3301C26.4639 25.3301 26.2888 25.5052 26.2888 25.7212C26.2888 25.9372 26.4639 26.1123 26.6799 26.1123Z" fill="#212121"/>
</g>
<path d="M20.2146 21.612C20.4479 20.6099 21.1901 19.7299 22.1434 19.341C22.2678 19.2899 22.4611 19.2966 22.5189 19.4521C22.5878 19.6388 22.5545 19.7655 22.3434 19.8854C21.719 20.2432 21.2079 21.0121 21.019 21.7965C20.9634 22.0253 20.7279 22.1609 20.5012 22.0942C20.2968 22.0298 20.1657 21.8209 20.2146 21.612Z" fill="#2F7889"/>
<g opacity="0.7">
<path opacity="0.7" d="M10.0121 26.2927C12.3217 26.2927 14.1941 24.4204 14.1941 22.1107C14.1941 19.8011 12.3217 17.9287 10.0121 17.9287C7.70242 17.9287 5.83008 19.8011 5.83008 22.1107C5.83008 24.4204 7.70242 26.2927 10.0121 26.2927Z" fill="#757575"/>
</g>
<path d="M10.0122 25.6444C11.9635 25.6444 13.5453 24.0626 13.5453 22.1113C13.5453 20.16 11.9635 18.5781 10.0122 18.5781C8.06085 18.5781 6.479 20.16 6.479 22.1113C6.479 24.0626 8.06085 25.6444 10.0122 25.6444Z" fill="#37474F"/>
<path d="M13.4795 21.4261C13.4528 21.295 13.4195 21.1684 13.3795 21.0439L10.6596 21.5839C10.4041 21.635 10.2374 21.8839 10.2885 22.1394C10.3396 22.395 10.5885 22.5616 10.844 22.5105L13.5417 21.975C13.5328 21.795 13.5172 21.6106 13.4795 21.4261Z" fill="white"/>
<g opacity="0.7">
<path opacity="0.7" d="M6.3259 26.1123C6.5419 26.1123 6.717 25.9372 6.717 25.7212C6.717 25.5052 6.5419 25.3301 6.3259 25.3301C6.10991 25.3301 5.93481 25.5052 5.93481 25.7212C5.93481 25.9372 6.10991 26.1123 6.3259 26.1123Z" fill="#212121"/>
<path opacity="0.7" d="M5.04148 24.0048C5.25747 24.0048 5.43257 23.8297 5.43257 23.6137C5.43257 23.3978 5.25747 23.2227 5.04148 23.2227C4.82549 23.2227 4.65039 23.3978 4.65039 23.6137C4.65039 23.8297 4.82549 24.0048 5.04148 24.0048Z" fill="#212121"/>
<path opacity="0.7" d="M4.85911 21.624C5.0751 21.624 5.2502 21.4489 5.2502 21.2329C5.2502 21.0169 5.0751 20.8418 4.85911 20.8418C4.64311 20.8418 4.46802 21.0169 4.46802 21.2329C4.46802 21.4489 4.64311 21.624 4.85911 21.624Z" fill="#212121"/>
<path opacity="0.7" d="M5.77464 19.3798C5.99063 19.3798 6.16573 19.2047 6.16573 18.9887C6.16573 18.7728 5.99063 18.5977 5.77464 18.5977C5.55864 18.5977 5.38354 18.7728 5.38354 18.9887C5.38354 19.2047 5.55864 19.3798 5.77464 19.3798Z" fill="#212121"/>
<path opacity="0.7" d="M7.67942 17.7548C7.89541 17.7548 8.07051 17.5797 8.07051 17.3637C8.07051 17.1478 7.89541 16.9727 7.67942 16.9727C7.46343 16.9727 7.28833 17.1478 7.28833 17.3637C7.28833 17.5797 7.46343 17.7548 7.67942 17.7548Z" fill="#212121"/>
<path opacity="0.7" d="M10.0134 17.206C10.2294 17.206 10.4045 17.0309 10.4045 16.8149C10.4045 16.5989 10.2294 16.4238 10.0134 16.4238C9.79741 16.4238 9.62231 16.5989 9.62231 16.8149C9.62231 17.0309 9.79741 17.206 10.0134 17.206Z" fill="#212121"/>
<path opacity="0.7" d="M12.3489 17.7548C12.5648 17.7548 12.7399 17.5797 12.7399 17.3637C12.7399 17.1478 12.5648 16.9727 12.3489 16.9727C12.1329 16.9727 11.9578 17.1478 11.9578 17.3637C11.9578 17.5797 12.1329 17.7548 12.3489 17.7548Z" fill="#212121"/>
<path opacity="0.7" d="M14.2517 19.3798C14.4677 19.3798 14.6428 19.2047 14.6428 18.9887C14.6428 18.7728 14.4677 18.5977 14.2517 18.5977C14.0357 18.5977 13.8606 18.7728 13.8606 18.9887C13.8606 19.2047 14.0357 19.3798 14.2517 19.3798Z" fill="#212121"/>
<path opacity="0.7" d="M15.1697 21.624C15.3856 21.624 15.5607 21.4489 15.5607 21.2329C15.5607 21.0169 15.3856 20.8418 15.1697 20.8418C14.9537 20.8418 14.7786 21.0169 14.7786 21.2329C14.7786 21.4489 14.9537 21.624 15.1697 21.624Z" fill="#212121"/>
<path opacity="0.7" d="M14.9851 24.0048C15.2011 24.0048 15.3762 23.8297 15.3762 23.6137C15.3762 23.3978 15.2011 23.2227 14.9851 23.2227C14.7691 23.2227 14.594 23.3978 14.594 23.6137C14.594 23.8297 14.7691 24.0048 14.9851 24.0048Z" fill="#212121"/>
<path opacity="0.7" d="M13.7026 26.1123C13.9186 26.1123 14.0937 25.9372 14.0937 25.7212C14.0937 25.5052 13.9186 25.3301 13.7026 25.3301C13.4866 25.3301 13.3115 25.5052 13.3115 25.7212C13.3115 25.9372 13.4866 26.1123 13.7026 26.1123Z" fill="#212121"/>
</g>
<path d="M7.26146 21.612C7.49478 20.6099 8.23697 19.7299 9.19025 19.341C9.31469 19.2899 9.50801 19.2966 9.56579 19.4521C9.63467 19.6388 9.60134 19.7655 9.39024 19.8854C8.76583 20.2432 8.25474 21.0121 8.06587 21.7965C8.01031 22.0253 7.77477 22.1609 7.54812 22.0942C7.34368 22.0298 7.21258 21.8209 7.26146 21.612Z" fill="#2F7889"/>
<g opacity="0.7">
<path opacity="0.7" d="M22.9916 14.2322C25.3012 14.2322 27.1736 12.3598 27.1736 10.0502C27.1736 7.74051 25.3012 5.86816 22.9916 5.86816C20.6819 5.86816 18.8096 7.74051 18.8096 10.0502C18.8096 12.3598 20.6819 14.2322 22.9916 14.2322Z" fill="#757575"/>
</g>
<path d="M22.9914 13.5839C24.9427 13.5839 26.5245 12.002 26.5245 10.0507C26.5245 8.09942 24.9427 6.51758 22.9914 6.51758C21.0401 6.51758 19.4583 8.09942 19.4583 10.0507C19.4583 12.002 21.0401 13.5839 22.9914 13.5839Z" fill="#37474F"/>
<path d="M20.1765 7.91402C20.0965 8.02068 20.0232 8.13179 19.9565 8.24289L22.1675 9.91836C22.3764 10.0761 22.672 10.0361 22.8297 9.82725C22.9875 9.61837 22.9475 9.32283 22.7386 9.16507L20.5476 7.50293C20.4165 7.62959 20.2899 7.76514 20.1765 7.91402Z" fill="white"/>
<g opacity="0.7">
<path opacity="0.7" d="M19.3027 14.0517C19.5187 14.0517 19.6938 13.8766 19.6938 13.6606C19.6938 13.4446 19.5187 13.2695 19.3027 13.2695C19.0867 13.2695 18.9116 13.4446 18.9116 13.6606C18.9116 13.8766 19.0867 14.0517 19.3027 14.0517Z" fill="#212121"/>
<path opacity="0.7" d="M18.0207 11.9462C18.2367 11.9462 18.4118 11.7711 18.4118 11.5552C18.4118 11.3392 18.2367 11.1641 18.0207 11.1641C17.8047 11.1641 17.6296 11.3392 17.6296 11.5552C17.6296 11.7711 17.8047 11.9462 18.0207 11.9462Z" fill="#212121"/>
<path opacity="0.7" d="M17.8362 9.56538C18.0522 9.56538 18.2273 9.39029 18.2273 9.17429C18.2273 8.9583 18.0522 8.7832 17.8362 8.7832C17.6202 8.7832 17.4451 8.9583 17.4451 9.17429C17.4451 9.39029 17.6202 9.56538 17.8362 9.56538Z" fill="#212121"/>
<path opacity="0.7" d="M18.7536 7.32124C18.9696 7.32124 19.1447 7.14615 19.1447 6.93015C19.1447 6.71416 18.9696 6.53906 18.7536 6.53906C18.5376 6.53906 18.3625 6.71416 18.3625 6.93015C18.3625 7.14615 18.5376 7.32124 18.7536 7.32124Z" fill="#212121"/>
<path opacity="0.7" d="M20.6565 5.69624C20.8725 5.69624 21.0476 5.52115 21.0476 5.30515C21.0476 5.08916 20.8725 4.91406 20.6565 4.91406C20.4405 4.91406 20.2654 5.08916 20.2654 5.30515C20.2654 5.52115 20.4405 5.69624 20.6565 5.69624Z" fill="#212121"/>
<path opacity="0.7" d="M22.9929 5.14742C23.2089 5.14742 23.384 4.97232 23.384 4.75632C23.384 4.54033 23.2089 4.36523 22.9929 4.36523C22.7769 4.36523 22.6018 4.54033 22.6018 4.75632C22.6018 4.97232 22.7769 5.14742 22.9929 5.14742Z" fill="#212121"/>
<path opacity="0.7" d="M25.3266 5.69624C25.5426 5.69624 25.7177 5.52115 25.7177 5.30515C25.7177 5.08916 25.5426 4.91406 25.3266 4.91406C25.1106 4.91406 24.9355 5.08916 24.9355 5.30515C24.9355 5.52115 25.1106 5.69624 25.3266 5.69624Z" fill="#212121"/>
<path opacity="0.7" d="M27.2307 7.32124C27.4467 7.32124 27.6218 7.14615 27.6218 6.93015C27.6218 6.71416 27.4467 6.53906 27.2307 6.53906C27.0147 6.53906 26.8396 6.71416 26.8396 6.93015C26.8396 7.14615 27.0147 7.32124 27.2307 7.32124Z" fill="#212121"/>
<path opacity="0.7" d="M28.1465 9.56538C28.3625 9.56538 28.5376 9.39029 28.5376 9.17429C28.5376 8.9583 28.3625 8.7832 28.1465 8.7832C27.9305 8.7832 27.7554 8.9583 27.7554 9.17429C27.7554 9.39029 27.9305 9.56538 28.1465 9.56538Z" fill="#212121"/>
<path opacity="0.7" d="M27.9641 11.9462C28.1801 11.9462 28.3552 11.7711 28.3552 11.5552C28.3552 11.3392 28.1801 11.1641 27.9641 11.1641C27.7481 11.1641 27.573 11.3392 27.573 11.5552C27.573 11.7711 27.7481 11.9462 27.9641 11.9462Z" fill="#212121"/>
<path opacity="0.7" d="M26.6799 14.0537C26.8959 14.0537 27.071 13.8786 27.071 13.6626C27.071 13.4466 26.8959 13.2715 26.6799 13.2715C26.4639 13.2715 26.2888 13.4466 26.2888 13.6626C26.2888 13.8786 26.4639 14.0537 26.6799 14.0537Z" fill="#212121"/>
</g>
<path d="M20.6813 11.8558C20.2324 11.1447 20.1613 10.6692 20.1902 9.97143C20.1991 9.78255 20.2857 9.46479 20.5657 9.51146C20.819 9.55368 20.7924 9.78033 20.8124 9.93588C20.8946 10.5358 21.0368 10.8625 21.379 11.4136C21.5034 11.6136 21.4323 11.8758 21.2257 11.9913C21.039 12.0913 20.7968 12.038 20.6813 11.8558Z" fill="#2F7889"/>
<g opacity="0.7">
<path opacity="0.7" d="M10.0121 14.2331C12.3217 14.2331 14.1941 12.3608 14.1941 10.0511C14.1941 7.74149 12.3217 5.86914 10.0121 5.86914C7.70242 5.86914 5.83008 7.74149 5.83008 10.0511C5.83008 12.3608 7.70242 14.2331 10.0121 14.2331Z" fill="#757575"/>
</g>
<path d="M10.0122 13.5849C11.9635 13.5849 13.5453 12.003 13.5453 10.0517C13.5453 8.1004 11.9635 6.51855 10.0122 6.51855C8.06085 6.51855 6.479 8.1004 6.479 10.0517C6.479 12.003 8.06085 13.5849 10.0122 13.5849Z" fill="#37474F"/>
<path d="M12.6169 7.66324C12.5258 7.56546 12.4303 7.47436 12.3325 7.3877L10.2904 9.26537C10.0971 9.44314 10.0859 9.74091 10.2615 9.93423C10.4393 10.1276 10.737 10.1387 10.9303 9.96312L12.9569 8.10321C12.8547 7.94989 12.7436 7.80101 12.6169 7.66324Z" fill="white"/>
<g opacity="0.7">
<path opacity="0.7" d="M6.3259 14.0537C6.5419 14.0537 6.717 13.8786 6.717 13.6626C6.717 13.4466 6.5419 13.2715 6.3259 13.2715C6.10991 13.2715 5.93481 13.4466 5.93481 13.6626C5.93481 13.8786 6.10991 14.0537 6.3259 14.0537Z" fill="#212121"/>
<path opacity="0.7" d="M5.04173 11.9462C5.25772 11.9462 5.43282 11.7711 5.43282 11.5552C5.43282 11.3392 5.25772 11.1641 5.04173 11.1641C4.82573 11.1641 4.65063 11.3392 4.65063 11.5552C4.65063 11.7711 4.82573 11.9462 5.04173 11.9462Z" fill="#212121"/>
<path opacity="0.7" d="M4.85935 9.56538C5.07535 9.56538 5.25044 9.39029 5.25044 9.17429C5.25044 8.9583 5.07535 8.7832 4.85935 8.7832C4.64336 8.7832 4.46826 8.9583 4.46826 9.17429C4.46826 9.39029 4.64336 9.56538 4.85935 9.56538Z" fill="#212121"/>
<path opacity="0.7" d="M5.77464 7.32124C5.99063 7.32124 6.16573 7.14615 6.16573 6.93015C6.16573 6.71416 5.99063 6.53906 5.77464 6.53906C5.55864 6.53906 5.38354 6.71416 5.38354 6.93015C5.38354 7.14615 5.55864 7.32124 5.77464 7.32124Z" fill="#212121"/>
<path opacity="0.7" d="M7.67966 5.69624C7.89566 5.69624 8.07075 5.52115 8.07075 5.30515C8.07075 5.08916 7.89566 4.91406 7.67966 4.91406C7.46367 4.91406 7.28857 5.08916 7.28857 5.30515C7.28857 5.52115 7.46367 5.69624 7.67966 5.69624Z" fill="#212121"/>
<path opacity="0.7" d="M10.0134 5.14742C10.2294 5.14742 10.4045 4.97232 10.4045 4.75632C10.4045 4.54033 10.2294 4.36523 10.0134 4.36523C9.79741 4.36523 9.62231 4.54033 9.62231 4.75632C9.62231 4.97232 9.79741 5.14742 10.0134 5.14742Z" fill="#212121"/>
<path opacity="0.7" d="M12.3489 5.69624C12.5648 5.69624 12.7399 5.52115 12.7399 5.30515C12.7399 5.08916 12.5648 4.91406 12.3489 4.91406C12.1329 4.91406 11.9578 5.08916 11.9578 5.30515C11.9578 5.52115 12.1329 5.69624 12.3489 5.69624Z" fill="#212121"/>
<path opacity="0.7" d="M14.2517 7.32124C14.4677 7.32124 14.6428 7.14615 14.6428 6.93015C14.6428 6.71416 14.4677 6.53906 14.2517 6.53906C14.0357 6.53906 13.8606 6.71416 13.8606 6.93015C13.8606 7.14615 14.0357 7.32124 14.2517 7.32124Z" fill="#212121"/>
<path opacity="0.7" d="M15.1697 9.56538C15.3856 9.56538 15.5607 9.39029 15.5607 9.17429C15.5607 8.9583 15.3856 8.7832 15.1697 8.7832C14.9537 8.7832 14.7786 8.9583 14.7786 9.17429C14.7786 9.39029 14.9537 9.56538 15.1697 9.56538Z" fill="#212121"/>
<path opacity="0.7" d="M14.9853 11.9462C15.2013 11.9462 15.3764 11.7711 15.3764 11.5552C15.3764 11.3392 15.2013 11.1641 14.9853 11.1641C14.7693 11.1641 14.5942 11.3392 14.5942 11.5552C14.5942 11.7711 14.7693 11.9462 14.9853 11.9462Z" fill="#212121"/>
<path opacity="0.7" d="M13.7026 14.0537C13.9186 14.0537 14.0937 13.8786 14.0937 13.6626C14.0937 13.4466 13.9186 13.2715 13.7026 13.2715C13.4866 13.2715 13.3115 13.4466 13.3115 13.6626C13.3115 13.8786 13.4866 14.0537 13.7026 14.0537Z" fill="#212121"/>
</g>
<path d="M7.26146 9.54854C7.49478 8.54637 8.23697 7.66642 9.19025 7.27755C9.31469 7.22644 9.50801 7.23311 9.56579 7.38866C9.63467 7.57531 9.60134 7.70197 9.39024 7.82197C8.76583 8.17973 8.25474 8.94858 8.06587 9.73298C8.01031 9.96186 7.77477 10.0974 7.54812 10.0307C7.34368 9.96852 7.21258 9.75742 7.26146 9.54854Z" fill="#2F7889"/>
</svg>

@@ -0,0 +1,8 @@
<svg width="18" height="18" viewBox="0 0 18 18" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M14.9494 13.9422C16.2318 12.4261 17.3268 9.70245 16.1956 6.57511C15.6443 5.05269 15.0219 4.20274 14.2969 3.66277C13.8557 3.33403 12.0933 2.50658 9.75965 2.86781C8.05349 3.13279 5.77487 4.20899 4.29369 5.96014C2.85752 7.6613 1.74883 9.00498 1.69758 10.3099C1.63133 11.9886 2.89627 13.431 3.05001 13.6647C3.32374 14.0797 5.19115 16.4521 8.69971 16.5733C11.797 16.6796 13.8144 15.2847 14.9494 13.9422Z" fill="#403D3E"/>
<path d="M4.55363 2.73722C2.93746 3.89091 1.12131 6.25079 1.44754 9.56061C1.60628 11.1718 2.00251 12.1492 2.57123 12.8504C2.91746 13.2779 4.41988 14.5491 6.77351 14.7366C9.14588 14.9253 10.9495 14.3941 12.8332 13.0842C16.6617 10.4206 16.098 6.39328 15.9343 5.92456C15.7705 5.45583 14.5444 2.69598 11.1733 1.71478C8.19844 0.849823 5.98355 1.71478 4.55363 2.73722Z" fill="#5E6367"/>
<path d="M7.39353 2.96109C5.61737 2.89734 3.91996 4.28852 3.75622 6.00593C3.59248 7.72209 4.65492 9.02952 6.30983 9.29576C7.96475 9.56074 9.87839 8.5558 10.2634 6.45091C10.6609 4.28227 9.08969 3.02234 7.39353 2.96109Z" fill="white"/>
<path d="M7.94217 5.90066C7.94217 5.90066 8.36965 5.81192 8.45465 5.18195C8.53839 4.56198 8.23091 4.03326 7.51345 3.84327C6.73349 3.63703 6.20477 4.06576 6.06727 4.51698C5.87603 5.14445 6.15852 5.44319 6.15852 5.44319C6.15852 5.44319 5.39356 5.62693 5.33231 6.52938C5.27481 7.38058 5.85603 7.83806 6.43975 7.97805C7.16096 8.15179 7.97842 7.9543 8.17841 7.03435C8.34465 6.27689 7.94217 5.90066 7.94217 5.90066Z" fill="#303030"/>
<path d="M6.73983 4.75411C6.67109 5.01284 6.80858 5.26283 7.07857 5.33157C7.3698 5.40532 7.63479 5.30908 7.70603 5.01159C7.76853 4.74786 7.64354 4.51537 7.33605 4.44037C7.08357 4.37788 6.81483 4.47162 6.73983 4.75411Z" fill="white"/>
<path d="M6.95978 6.03974C6.6323 5.93849 6.19982 6.06473 6.13107 6.50471C6.06233 6.94469 6.32606 7.16968 6.67104 7.23217C7.01603 7.29467 7.34226 7.11343 7.40601 6.76095C7.4685 6.40972 7.28601 6.13973 6.95978 6.03974Z" fill="white"/>
</svg>

@@ -0,0 +1,7 @@
<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M29.374 7.98958V27.4342C29.374 28.5055 28.505 29.3745 27.4337 29.3745H4.56866C3.49742 29.3745 2.62842 28.5055 2.62842 27.4342V4.56915C2.62842 3.4979 3.49742 2.62891 4.56866 2.62891H24.4912C25.2824 2.62891 26.0336 2.98228 26.5381 3.59347L28.7695 6.3027C29.1606 6.77832 29.374 7.37395 29.374 7.98958Z" fill="#616161"/>
<path d="M24.8782 29.3742H7.11157V17.6128C7.11157 17.2105 7.43828 16.8838 7.84055 16.8838H24.1515C24.5538 16.8838 24.8805 17.2105 24.8805 17.6128V29.3742H24.8782Z" fill="white"/>
<path d="M24.8782 26.6523H7.11157V29.3744H24.8782V26.6523Z" fill="#7190F9"/>
<path d="M5.50049 2.62891V13.2414C5.50049 13.637 5.84942 13.9459 6.3117 13.9459H23.3472C23.8094 13.9459 24.1673 13.6392 24.1673 13.2414V2.62891H5.50049Z" fill="#424242"/>
<path d="M9.5564 2.62891V12.6346C9.5564 13.008 9.81865 13.2969 10.1654 13.2969H22.9358C23.2826 13.2969 23.5515 13.008 23.5515 12.6346V2.62891H9.5564ZM20.8845 12.1168C20.8845 12.2768 20.7534 12.4079 20.5933 12.4079H16.7306C16.5706 12.4079 16.4395 12.2768 16.4395 12.1168V3.80905C16.4395 3.64903 16.5706 3.51791 16.7306 3.51791H20.5933C20.7534 3.51791 20.8845 3.64903 20.8845 3.80905V12.1168Z" fill="#E0E0E0"/>
</svg>

Some files were not shown because too many files have changed in this diff.