BullMQ is a powerful queueing system for Node.js with a focus on performance, reliability, and extensibility. Here are several best practices to follow when working with BullMQ:
Use sandboxed processors for CPU-intensive jobs: running job handlers in a separate process keeps the main event loop responsive. Note that queue.process() is the legacy Bull v3 API; in BullMQ you create a Worker and point it at a processor file:

const { Worker } = require('bullmq');
const worker = new Worker('my-queue', __dirname + '/jobProcessor.js');
Listen for the completed and failed events: not handling these can lead to unhandled promise rejections or silent failures. In BullMQ these events are emitted by the Worker (or, across processes, by QueueEvents), not by individual job instances:

worker.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed with result ${result}`);
});
worker.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed: ${err.message}`);
});
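When the producer and the worker live in different processes, the producer cannot attach listeners to the Worker directly; BullMQ's QueueEvents class exists for that case. A hedged sketch (requires the bullmq package and a running Redis instance; the queue name is illustrative):

```javascript
const { QueueEvents } = require('bullmq');

// QueueEvents subscribes to the queue's Redis event stream, so completion
// and failure can be observed from any process, not just the worker's.
const queueEvents = new QueueEvents('my-queue');

queueEvents.on('completed', ({ jobId, returnvalue }) => {
  console.log(`Job ${jobId} completed with result ${returnvalue}`);
});
queueEvents.on('failed', ({ jobId, failedReason }) => {
  console.error(`Job ${jobId} failed: ${failedReason}`);
});
```

Note that QueueEvents handlers receive plain event payloads (jobId, returnvalue, failedReason) rather than Job objects.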
Leverage job priorities: among prioritized jobs, a lower number means a higher priority (1 is the highest), so urgent work is picked up first:

await myQueue.add('urgent job', data, { priority: 1 });
await myQueue.add('normal job', data, { priority: 2 });
Re-use Redis connections when possible: each Queue or Worker instance opens its own Redis connection by default. To minimize connection overhead, pass a shared ioredis connection via the connection option where possible (workers still duplicate it internally for blocking commands).
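A minimal sketch of connection sharing, assuming the bullmq and ioredis packages and a running Redis instance (the queue names are illustrative):

```javascript
const { Queue, Worker } = require('bullmq');
const IORedis = require('ioredis');

// One shared ioredis connection; maxRetriesPerRequest: null is required
// by BullMQ so blocking commands are not interrupted by retry limits.
const connection = new IORedis({ maxRetriesPerRequest: null });

// Multiple queues can share the same connection instance.
const emailQueue = new Queue('email', { connection });
const reportQueue = new Queue('reports', { connection });

// A Worker accepts the same option, though it duplicates the connection
// internally because it needs a dedicated blocking connection.
const worker = new Worker('email', async (job) => job.data, { connection });
```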
Use rate limiting for throttling: if you want to limit how many jobs are processed per time unit, configure a rate limiter. In BullMQ the limiter option belongs on the Worker (in legacy Bull it was a queue option):

const worker = new Worker('my-queue', processor, {
  limiter: { max: 100, duration: 5000 }, // at most 100 jobs per 5 seconds
});
Handle stalled jobs: a job is marked as stalled when the worker processing it stops renewing its lock, for example because the worker crashed or its event loop was blocked. Make sure to handle this case, for example by tuning the stalledInterval and maxStalledCount options when creating your workers and listening for the stalled event.
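A hedged sketch of these worker options (requires the bullmq package and a running Redis instance; the interval values are illustrative, not recommendations):

```javascript
const { Worker } = require('bullmq');

const worker = new Worker('my-queue', async (job) => job.data, {
  stalledInterval: 30000, // how often (ms) to check for stalled jobs
  maxStalledCount: 2,     // stalls allowed before the job is failed
});

// Emitted when a job's lock expired and the job was returned to waiting.
worker.on('stalled', (jobId) => {
  console.warn(`Job ${jobId} stalled and will be retried`);
});
```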
Be cautious with repeatable jobs: a repeatable job definition is keyed by its name and repeat options, so adding the "same" job with slightly different options registers a separate schedule, and the same logical work can end up queued multiple times. Be aware of this and design your job handlers to be idempotent.
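A minimal sketch of registering a repeatable job (requires the bullmq package and a running Redis instance; the queue and job names are illustrative):

```javascript
const { Queue } = require('bullmq');

const queue = new Queue('reports');

async function scheduleNightlyReport() {
  // Runs every day at 03:15 (cron syntax). Re-adding this with identical
  // name and repeat options does not duplicate the schedule, but changing
  // any option (e.g. the pattern) registers a second, parallel schedule.
  await queue.add('nightly-report', { type: 'summary' }, {
    repeat: { pattern: '15 3 * * *' },
  });
}
```

Pairing this with an idempotent handler (e.g. one that upserts by date rather than inserting blindly) keeps duplicate schedules from corrupting your data.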
Remember to always test thoroughly and monitor your queues' performance and error rates closely.