Comments & Discussion
Thank you for the entire lesson! Very helpful. And it got me wondering: do you have a practical recommendation for how to calculate how many Horizon workers my server should run? I’d like to understand how to measure my server’s capacity and determine the ideal number of workers without overloading it.
This is not so easy to do. We've always used the "try it and see if it crashes" method...
The right number really depends on your server, the volume of jobs, and the time it takes to complete a job. For small jobs, you might get away with 10 workers, but there may be cases where you need 40 or more. It's really individual.
What I recommend is:
Start with something reasonable, let's say 10 workers. Watch your logs to see if you are happy with the results (queue processing time, server load). If you are happy, leave it as is until you become unhappy. If you are not, try increasing it to 15, then check again. There's a config sketch below showing where that number lives.
It's not recommended to load the server to the max at any point, especially if you run your queue workers in the same server that serves requests. Spike of any of these - and you have unstable system :)
Hope that helps
Thank you for this article, very useful! I was wondering if you had a course or tutorial on how to configure Horizon for, let's say, the GenerateReport use case, but with 100s of reports per day? I'm always tweaking my setup to handle Horizon with an external scraper service, but haven't found the perfect setup yet :D.
Could you clarify the question?
Thank you for the quick reply, and of course: I'm running a web app with Horizon and sometimes have to handle large batches of jobs, which can be slow. It runs on a single server with 8 cores and 32 GB of RAM. So I would be interested in real-life examples of Horizon/queue job configurations for handling hundreds to thousands of jobs per hour. My setup does work, but I would like to see setups for bigger applications and what the best practices are for them. Of course, only if that would be interesting for your audience.
Well, I'd suggest identifying the bottleneck first (is it memory? is it CPU?) and scaling according to that. It is very situational.
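One lever worth knowing about for bursty workloads is Horizon's auto-balancing, which scales workers up when a queue grows and back down when it is quiet, so the box isn't pinned all day. A sketch with placeholder values (the `reports` queue name and the caps are assumptions, not a tested recommendation for your server):

```php
// config/horizon.php — auto-balancing sketch (placeholder values)
'environments' => [
    'production' => [
        'supervisor-reports' => [
            'connection'   => 'redis',
            'queue'        => ['reports'],  // hypothetical queue for the slow, heavy jobs
            'balance'      => 'auto',       // scale worker count with queue load
            'minProcesses' => 2,            // idle baseline
            'maxProcesses' => 20,           // hard cap so the box is never saturated
            'memory'       => 256,          // restart any worker that exceeds 256 MB
            'timeout'      => 300,          // headroom for long-running jobs
            'tries'        => 2,
        ],
    ],
],
```

Pair that with watching `htop` and the Horizon dashboard to see whether RAM or CPU fills up first, and adjust the caps accordingly.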