The Mixture-of-Agents Alignment (MoAA) framework harnesses the collective intelligence of open-source large language models (LLMs) to improve post-training performance. MoAA generates high-quality supervised fine-tuning (SFT) data by combining responses from multiple open-source models, referred to as proposers in the Mixture-of-Agents (MoA) framework. This approach enables smaller models to achieve performance comparable to that of models up to 10x their size, while retaining the efficiency and cost advantages of small models. MoAA also enhances model alignment through direct preference optimization (DPO), yielding significant improvements over the baseline models across all benchmarks for both Llama-3.1-8B-Instruct and Gemma-2-9B-it. Together, these stages form a self-improving pipeline that continuously enhances model performance, providing strong evidence that collective intelligence from multiple models can advance LLMs without relying on supervision from more powerful LLMs.
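
To make the SFT data-generation step concrete, below is a minimal sketch of an MoA-style proposer/aggregator loop, assuming a generic inference backend. The model names, the `query_model` stub, and the aggregation prompt are illustrative assumptions, not the paper's implementation; replace them with whatever open-source proposers, aggregator, and serving stack you actually use.

```python
# Minimal sketch of MoA-style SFT data generation (illustrative, not the authors' code).
# `query_model` is a hypothetical stand-in for a real inference backend
# (e.g., a locally served open-source model or a hosted endpoint).

from typing import List

PROPOSERS = ["proposer-model-a", "proposer-model-b", "proposer-model-c"]  # hypothetical names
AGGREGATOR = "aggregator-model"                                           # hypothetical name

AGGREGATE_PROMPT = (
    "You are given several candidate responses to the same instruction. "
    "Synthesize them into a single response of higher quality.\n\n"
    "Instruction:\n{instruction}\n\nCandidate responses:\n{candidates}"
)


def query_model(model: str, prompt: str) -> str:
    """Placeholder: swap in a real call to your chosen inference backend."""
    return f"[{model}] response to: {prompt[:40]}..."


def generate_sft_example(instruction: str) -> dict:
    # 1. Each proposer answers the instruction independently.
    proposals: List[str] = [query_model(m, instruction) for m in PROPOSERS]
    # 2. The aggregator synthesizes the proposals into one target response.
    candidates = "\n\n".join(f"({i + 1}) {p}" for i, p in enumerate(proposals))
    target = query_model(
        AGGREGATOR,
        AGGREGATE_PROMPT.format(instruction=instruction, candidates=candidates),
    )
    # 3. The (instruction, target) pair becomes one supervised fine-tuning example.
    return {"instruction": instruction, "response": target}


if __name__ == "__main__":
    print(generate_sft_example("Explain why the sky is blue."))
```

In this sketch, the aggregated response serves as the SFT target for the smaller model being trained; a similar proposer/aggregator setup could be reused to construct preference pairs for the DPO stage.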