Since October 2024, compar:IA has provided a platform where users anonymously compare and vote on the responses of different AI models, contributing to a public dataset that feeds a participatory ranking system. This system, developed in collaboration with PEReN, the digital regulation expertise center, aims to enhance transparency and understanding of the generative AI ecosystem by ranking models on user preferences rather than technical benchmarks.

The ranking, updated weekly and published on platforms such as Hugging Face, does not claim to identify the best model; it reflects collective user preferences, highlighting the ecosystem's dynamics and encouraging model diversity, including open-source options. Its methodology, based on the Bradley-Terry model, emphasizes transparency and reproducibility, with all data and calculations publicly accessible (a minimal sketch of the approach appears at the end of this section).

Observations from the ranking reveal increased competition between proprietary and open-source models and a growing interest in energy-efficient models. They also suggest that perceived performance is not necessarily linked to model size, since user preferences may be driven by response style rather than factual accuracy.

While the ranking offers insight into user preferences, it is intended to complement other forms of evaluation, such as factual, technical, and thematic assessments, to provide a more comprehensive view of AI model performance. Future enhancements may include thematic sub-rankings, analysis by question complexity, and expanded support for European languages.
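To make the Bradley-Terry methodology concrete, here is a minimal sketch of how per-model strengths can be estimated from a log of pairwise votes. The vote format, function name, and MM-style update below are illustrative assumptions, not the official compar:IA pipeline, which also handles ties and publishes its exact computation alongside the dataset.

```python
"""Minimal Bradley-Terry fit from pairwise votes (illustrative sketch)."""
from collections import defaultdict


def bradley_terry(votes, iterations=200, tol=1e-8):
    """Estimate a strength score per model from pairwise votes.

    votes: iterable of (model_a, model_b, winner) tuples, where winner is
    either model_a or model_b. Returns a dict mapping model -> normalized
    strength (scores sum to 1).
    """
    wins = defaultdict(float)          # total wins per model
    pair_counts = defaultdict(float)   # comparisons per unordered pair
    models = set()
    for a, b, winner in votes:
        models.update((a, b))
        pair_counts[frozenset((a, b))] += 1.0
        wins[winner] += 1.0

    # Start from uniform strengths and apply the standard MM update:
    #   p_i <- W_i / sum_j n_ij / (p_i + p_j), then renormalize.
    p = {m: 1.0 for m in models}
    for _ in range(iterations):
        new_p = {}
        for i in models:
            denom = sum(
                pair_counts[frozenset((i, j))] / (p[i] + p[j])
                for j in models
                if j != i and frozenset((i, j)) in pair_counts
            )
            # Small epsilon keeps models with zero recorded wins from
            # collapsing to exactly 0 and breaking later divisions.
            new_p[i] = (wins[i] + 1e-9) / denom if denom > 0 else p[i]
        total = sum(new_p.values())
        new_p = {m: v / total for m, v in new_p.items()}
        if max(abs(new_p[m] - p[m]) for m in models) < tol:
            p = new_p
            break
        p = new_p
    return p


if __name__ == "__main__":
    # Toy vote log: model names are placeholders, not real leaderboard entries.
    sample_votes = [
        ("model-x", "model-y", "model-x"),
        ("model-x", "model-y", "model-x"),
        ("model-x", "model-z", "model-z"),
        ("model-y", "model-z", "model-y"),
        ("model-x", "model-y", "model-y"),
    ]
    for model, score in sorted(bradley_terry(sample_votes).items(),
                               key=lambda kv: -kv[1]):
        print(f"{model}: {score:.3f}")
```

Under this model, each score can be read as a model's estimated probability of being preferred in a random head-to-head comparison, which is why the resulting ranking captures collective preference rather than any absolute measure of quality.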