Over the years we have put hundreds of hours into performance benchmarking so that our customers get a consistent Jenkins developer experience, regardless of their workloads. This begins with synthetic tests, but it ends with us comparing those synthetic tests against the many Jenkins controllers we manage and analysing the data across an aggregate of different real-world workloads. Real-world use is a better indicator of potential performance than synthetic workloads because we don’t do anything to control real-world use.
Our large controller is a workhorse for busy teams, but before I go further let me emphasise that our platform uses the controller/agent architecture: the Jenkins controller delegates work to agents, then collects and aggregates the results. The agents on our platform are created on demand using the many cloud plugins available in the Jenkins plugin ecosystem. Our Jenkins Build Service is based on two main architectures, Kubernetes and dedicated virtual machines, and our customers select whichever they prefer when they sign up.
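To make the on-demand agent model concrete, here is a minimal declarative pipeline sketch using the Jenkins Kubernetes plugin. The container image and stage names are illustrative assumptions, not our platform's actual configuration; the controller only schedules the pod and aggregates results, while the build itself runs in the ephemeral agent pod:

```groovy
// Sketch of a Jenkinsfile that requests an ephemeral Kubernetes agent.
// Image, labels, and build command are hypothetical examples.
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.9-eclipse-temurin-17
    command: ['sleep']
    args: ['infinity']
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                // Runs inside the pod's maven container, not on the controller.
                container('maven') {
                    sh 'mvn -B package'
                }
            }
        }
    }
}
```

When the pipeline starts, the plugin provisions the pod, connects it to the controller as an agent, and tears it down once the workload completes, which is the ramp-up/ramp-down pattern described later in this post.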
The controller/agent architecture improves the performance capability of the Jenkins controller, but the speed of a job can still be impacted by a controller that hasn’t been adequately sized.
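A common way to keep a controller focused on delegation rather than running builds itself is to give it zero executors, so every job must go to an agent. As a sketch (assuming the Configuration as Code plugin is installed; this is not necessarily our exact configuration):

```yaml
# JCasC sketch: zero executors on the controller
# forces all build workloads onto agents.
jenkins:
  numExecutors: 0
```

With this in place, controller resources are spent on scheduling, plugin work, and aggregating results, which is what controller sizing needs to account for.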
We currently guarantee that the large controller can manage 48 concurrent workloads, although it has been benchmarked to handle considerably more. Here is a quick preview of some performance stats for our large Jenkins controller under real-world workloads. In the example below, multiple pipelines each start approximately eight agents; as the workloads ramp up, the total number of agents connected to the Jenkins controller climbs to between 80 and 100 before the workloads begin completing and the agents shut down.
This controller is busy for over 12 hours a day and its performance is consistent. Although the CPU usage chart covers a smaller timeframe, the controller’s overall performance is comparable.