Autoscaling Octopus workers using Kubernetes
Blog post from Octopus Deploy
A new Kubernetes worker has been introduced to scale deployment infrastructure by executing deployment tasks inside Kubernetes clusters. The worker, an extension of the existing Octopus Kubernetes agent, can run any workload in the cluster, offering an efficient alternative to traditional physical or virtual worker machines. Each deployment task runs in its own Kubernetes Pod, which enables horizontal scaling, fine-grained resource allocation, and cost reduction.

Users can install the Kubernetes worker with a Helm chart, either from the command line or through a guided installation wizard in the Octopus web portal. The worker communicates with the Octopus Server over a polling protocol, and its behaviour can be customized for specific workload needs through Helm values.

The worker is currently available as an early access preview for cloud customers and will soon be released for self-hosted customers. Further details are available in the Octopus Deploy documentation and an upcoming webinar.
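Because each task runs in its own Pod, one natural use of the Helm values is sizing those Pods. A hypothetical values fragment is shown below; the key names are assumptions for illustration, and the chart's documented values should be consulted for the real ones:

```yaml
# Hypothetical Helm values fragment; key names are illustrative
# assumptions, not the chart's documented interface.
worker:
  podTemplate:
    resources:
      requests:
        cpu: "250m"      # CPU reserved for each task Pod
        memory: "256Mi"
      limits:
        cpu: "1"         # cap so a heavy task cannot starve the node
        memory: "1Gi"
```

Requests and limits like these are standard Kubernetes Pod settings, so the cluster scheduler, rather than a fixed worker machine, decides where each deployment task runs.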
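As a rough sketch of what the command-line route might look like: the wizard and the manual path both end in a Helm install driven by a values file. The chart reference, release name, and value keys below are illustrative assumptions for this post, not the chart's documented interface; the wizard on the Octopus web portal generates the exact command for your instance.

```shell
# Hypothetical sketch of a command-line install of the Kubernetes worker.
# The value keys and chart reference are assumptions for illustration only.

# Write a values file with the connection settings the worker would need.
cat > worker-values.yaml <<'EOF'
agent:
  name: my-k8s-worker                             # worker name shown in Octopus (assumed key)
  serverUrl: "https://your-instance.octopus.app"  # Octopus Server URL (assumed key)
EOF

# In a real cluster, the install itself would then look something like
# the following (commented out here; use the command the wizard generates):
#
# helm upgrade --install octopus-worker \
#   --namespace octopus-worker --create-namespace \
#   --values worker-values.yaml \
#   <worker-chart-reference>

grep -q 'serverUrl' worker-values.yaml && echo "values file written"
```

`helm upgrade --install` is idempotent, so re-running it with updated values reconfigures an existing worker rather than failing.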