kami_apple_exchage/backend/scripts/start_arq_worker.py
danial 8ad2a5366a refactor(backend): replace Celery with Arq for coroutine-based task processing
This commit migrates the backend task queue from Celery to Arq to support coroutine-based task processing. The main changes include:
- Update documentation and configuration files to reflect the architecture change.
- Adjust health checks and service initialization to work with Arq.
- Remove Celery-related code and add Arq task definitions and a scheduler.
- Update the Dockerfile and related scripts so the Arq worker runs correctly.
- Rework task handling in the API and business services, removing the dependency on Celery (a sketch of such a call site follows below).

These changes aim to improve the system's asynchronous processing capability and overall performance.
2025-09-18 16:02:05 +08:00
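
For the last bullet, here is a minimal sketch of what an enqueue call site in an API service might look like after the migration, using arq's create_pool and enqueue_job in place of Celery's delay(). The task name "send_order_email", its keyword arguments, and the Redis host/port are illustrative assumptions, not names taken from this repository; in the project these values presumably come from app.core.config_arq.

import asyncio

from arq import create_pool
from arq.connections import RedisSettings


async def trigger_task():
    # Connect to the same Redis instance the worker listens on
    # (host/port are placeholder assumptions).
    redis = await create_pool(RedisSettings(host="localhost", port=6379))

    # Enqueue by function name; "send_order_email" and its kwargs are
    # hypothetical examples.
    job = await redis.enqueue_job("send_order_email", order_id=123)

    # Optionally wait for the result instead of fire-and-forget.
    result = await job.result(timeout=30)
    print(result)


if __name__ == "__main__":
    asyncio.run(trigger_task())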

60 lines · 1.7 KiB · Python

"""
Arq worker startup script
支持协程池和协程任务的Arq worker启动脚本
使用新版arq的推荐启动方式
"""
import asyncio
import sys

from arq.worker import Worker

from app.core.arq_worker import ArqWorkerSettings
from app.core.config_arq import get_arq_settings
from app.core.log import get_logger

settings = get_arq_settings()
logger = get_logger(__name__)


async def main_async():
    """Async main function to start the Arq worker"""
    logger.info("🚀 Starting Arq worker with coroutine pool support")

    # Create worker instance with all settings
    worker = Worker(
        functions=ArqWorkerSettings.functions,
        cron_jobs=ArqWorkerSettings.cron_jobs,
        redis_settings=ArqWorkerSettings.redis_settings,
        max_jobs=ArqWorkerSettings.max_jobs,
        job_timeout=ArqWorkerSettings.job_timeout,
        max_tries=ArqWorkerSettings.max_tries,
        on_startup=ArqWorkerSettings.on_startup,
        on_shutdown=ArqWorkerSettings.on_shutdown,
        health_check_interval=ArqWorkerSettings.health_check_interval,
    )
    logger.info("✅ Arq worker created successfully")

    # Use the worker's main method (the correct async approach)
    try:
        await worker.main()
    except KeyboardInterrupt:
        logger.info("👋 Received interrupt signal, shutting down gracefully")
    except Exception as e:
        logger.error(f"❌ Arq worker failed: {e}")
        import traceback

        logger.error(f"Traceback: {traceback.format_exc()}")
        sys.exit(1)

    logger.info("✅ Arq worker shutdown complete")


def main():
    """Main function wrapper for async execution"""
    asyncio.run(main_async())


if __name__ == "__main__":
    main()
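
The script above pulls everything from app.core.arq_worker.ArqWorkerSettings, which is not shown on this page. Below is a minimal sketch of what such a settings class could look like, assuming one example task and one cron job; every name and value here is illustrative, not taken from the repository.

from arq import cron
from arq.connections import RedisSettings


async def startup(ctx):
    # Typically creates shared resources (DB pools, HTTP clients) and
    # stores them on ctx so task coroutines can reuse them.
    ctx["started"] = True


async def shutdown(ctx):
    # Close whatever startup opened.
    ctx.clear()


async def send_order_email(ctx, order_id: int):
    # Placeholder coroutine task; real tasks would live elsewhere in app/.
    return f"emailed order {order_id}"


async def nightly_cleanup(ctx):
    # Placeholder periodic job.
    return "cleanup done"


class ArqWorkerSettings:
    functions = [send_order_email]
    cron_jobs = [cron(nightly_cleanup, hour=3, minute=0)]
    redis_settings = RedisSettings(host="localhost", port=6379)
    max_jobs = 10          # concurrent coroutine tasks per worker
    job_timeout = 300      # seconds
    max_tries = 3
    on_startup = startup
    on_shutdown = shutdown
    health_check_interval = 60

With a class like this, the same settings feed both this standalone script and arq's own CLI (arq app.core.arq_worker.ArqWorkerSettings), which is why the startup script only forwards its attributes to the Worker constructor.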