Running Apscheduler in multiple instances #962
3 comments · 57 replies
-
APScheduler uses data store atomicity and locking mechanisms to ensure that each job is acquired by exactly one scheduler. Can you show me your code, as this should not be happening? I also advise you to use code from …
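For illustration, here is a minimal sketch of that multi-instance pattern against the 4.0.0a5 API (the connection URLs, database name, task, and schedule ID below are placeholders, not taken from this thread, and the exact constructor arguments should be checked against the current docs): every pod runs the same worker, all instances share one MongoDB data store and one Redis event broker, and any given job is acquired by only one of the schedulers.

    # worker.py -- run the same script in every pod; the shared data store and
    # event broker are what let the schedulers coordinate job acquisition.
    from apscheduler import ConflictPolicy, Scheduler
    from apscheduler.datastores.mongodb import MongoDBDataStore
    from apscheduler.eventbrokers.redis import RedisEventBroker
    from apscheduler.triggers.interval import IntervalTrigger


    def tick() -> None:
        print("tick")


    data_store = MongoDBDataStore(client_or_uri="mongodb://localhost:27017",
                                  database="apscheduler")
    # Assumes the a5 broker accepts a URL directly; otherwise pass a Redis client.
    event_broker = RedisEventBroker("redis://localhost:6379")

    with Scheduler(data_store, event_broker) as scheduler:
        # A fixed schedule ID plus the do_nothing conflict policy makes this call
        # safe to repeat from every instance: only one copy of the schedule is
        # stored, and each resulting job is picked up by a single scheduler.
        scheduler.add_schedule(
            tick,
            IntervalTrigger(seconds=10),
            id="tick-every-10s",
            conflict_policy=ConflictPolicy.do_nothing,
        )
        scheduler.run_until_stopped()

Running two copies of this locally and watching the output is a quick way to confirm that each interval fires only once.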
-
    import pytz
    from flask import Flask, request
    from apscheduler.datastores.mongodb import MongoDBDataStore

    app = Flask(__name__)

    def run_task():
        ...  # task body not shown in the original snippet

    mongo_uri = 'mongodb://localhost:27017'
    database = 'apscheduler'  # placeholder; the actual database name was not shown
    data_store = MongoDBDataStore(client_or_uri=mongo_uri, database=database)

    @app.route('/trigger', methods=['POST'])
    def trigger():
        data = request.json  # schedule parameters sent by the caller
        ...  # scheduling logic not shown in the original snippet

    if __name__ == '__main__':
        app.run()
-
@agronholm can you accommodate DocumentDB as a data store, like you did with MongoDB and SQLAlchemy?
-
@agronholm I'm trying to deploy APScheduler in my Flask application, which schedules tasks via API calls. I have started the scheduler in the background. The application runs as a Kubernetes pod in a cluster. Everything was fine until the application was deployed as three instances, so that if one goes down another can take its place. This causes duplication of tasks.
Is there any way to resolve this, so that if one scheduler instance runs the schedules, the others do not interfere?
I'm using APScheduler 4.0.0a5 with MongoDB as the data store and Redis as the event broker.
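For reference, a rough sketch of how the per-replica wiring described above could look under the 4.0.0a5 API; the `run_at` and `schedule_id` request fields, URLs, and database name are illustrative rather than taken from the actual application. Every replica starts a background scheduler against the shared data store and event broker, and the endpoint stores each schedule under an explicit ID so a repeated or retried request does not add a second copy.

    # app.py -- one of several identical replicas (names and URLs are placeholders)
    from datetime import datetime

    from flask import Flask, request
    from apscheduler import ConflictPolicy, Scheduler
    from apscheduler.datastores.mongodb import MongoDBDataStore
    from apscheduler.eventbrokers.redis import RedisEventBroker
    from apscheduler.triggers.date import DateTrigger

    app = Flask(__name__)

    data_store = MongoDBDataStore(client_or_uri="mongodb://mongo:27017",
                                  database="apscheduler")
    # Assumes the a5 broker accepts a URL directly; otherwise pass a Redis client.
    event_broker = RedisEventBroker("redis://redis:6379")
    scheduler = Scheduler(data_store, event_broker)
    scheduler.start_in_background()  # every replica starts one; the shared store coordinates them


    def run_task() -> None:
        print("running the scheduled task")


    @app.route("/trigger", methods=["POST"])
    def trigger():
        data = request.json
        # Hypothetical request fields: an ISO timestamp (ideally with a UTC offset)
        # and a caller-chosen schedule ID. Reusing the ID makes the call idempotent.
        run_time = datetime.fromisoformat(data["run_at"])
        scheduler.add_schedule(
            run_task,
            DateTrigger(run_time),
            id=data["schedule_id"],
            conflict_policy=ConflictPolicy.do_nothing,
        )
        return {"status": "scheduled", "id": data["schedule_id"]}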