Google Summer of Code 2025
Mentoring, Community, and Open Source Collaboration
Philosophy
Drawing on the dynamics of complex adaptive systems, Mesa regards the open source community as the most powerful problem-solving network in history. By connecting passionate people from across the world to explore potential solutions through computationally stored knowledge (i.e., code), humanity can solve problems faster and more effectively than ever.
Values
- Be polite, we are all volunteers
- Everyone has something to learn
- Be curious, not judgmental
- Every contributor's contributions ebb and flow; that's fine, contribute as you can
What to Expect
First, if you are not selected, it does not mean your proposal was not awesome, and you can absolutely still contribute if you want to. We welcome everyone who wants to participate. If Mesa is selected, this will be our second year; in our first year we were incredibly humbled by the number of proposals and the passion of the people who submitted them. It truly hurts to say no to so many exceptional people.
Second, if you are selected, the typical rhythm is a weekly meeting that alternates between a discussion of what you are specifically working on for your project and the broader Mesa dev meeting (usually scheduled for 12:30 GMT on Tuesdays). We understand that this time might not work for everyone, so please don't worry; if you have scheduling conflicts, we are more than happy to work with you to find an alternative that fits your availability. For your project you will be assigned a mentor, with backups, who will be available for one-on-one meetings, and you can also connect with us via chat and GitHub.
Always remember, the primary goal is not to complete the expected outcomes (although we will be ecstatic if that happens). The primary goal, in line with GSoC, is to give you development experience and help you gain an understanding of open source coding and community.
Explore the projects below to see where your skills and interests might fit in. Please feel free to reach out via Mesa's Matrix chat or via email to [email protected] with any questions.
- Front End Upgrade - Enhance and stabilize Mesa's new Solara-based visualization system to improve robustness, performance, and user experience.
- Mesa-LLM - Create an extension for integrating large language models as decision-making agents in Mesa simulations.
- Mesa-Frames Upgrade - Stabilize and enhance Mesa-frames to provide production-ready support for large-scale agent-based modeling.
Summary
Mesa recently transitioned to a new Solara-based visualization system that enables interactive, browser-based model exploration. While the core functionality is in place, there are several opportunities to enhance its robustness, performance, and user experience. This project aims to stabilize and extend Mesa's visualization capabilities, making them more powerful and user-friendly.
Motivation
The visualization system is one of Mesa's most important features - it allows modelers to see complex emergent behaviors and share their models with others. The recent transition from a Tornado-based system to Solara (PR #2263) brought modern web technologies and improved interactivity, but also revealed areas needing refinement. A well-functioning visualization system is crucial for Mesa's adoption and usability.
Historical Context
Mesa's visualization evolved significantly:
- Initially used a Tornado-based server system
- In Mesa 2.x, added experimental Jupyter support using Solara
- Mesa 3.0 fully transitioned to Solara-based visualization
- Recent major improvements include unified plotting backends (PR #2430) and API refinements (PR #2299)
Overall Goal
Create a visualization system that is:
- Robust and performant
- Easy to use for basic cases
- Flexible for advanced customization
- Well-documented with clear examples
- Consistent across different spaces (grid, network, continuous)
Expected GSoC Outcomes
Core Improvements:
- Add support for rotating markers to visualize agent orientation/heading (#2342)
- Enable configurable visualization update intervals for performance (#2579)
- Create an AgentPortrayalStyle class to replace the current dictionary system (#2436)
- Allow direct model access and control from visualization (#2176)
- Update Mesa Examples to use the new visualization approach
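The proposed AgentPortrayalStyle class (#2436) could take roughly the following shape. This is an illustrative sketch, not Mesa's actual API: the class name comes from the issue title, but every field name here is an assumption.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of an AgentPortrayalStyle class (#2436).
# Field names are illustrative, not Mesa's final API.
@dataclass
class AgentPortrayalStyle:
    color: str = "tab:blue"
    marker: str = "o"
    size: float = 50.0
    zorder: int = 1

    def to_dict(self) -> dict:
        # Bridge back to the current dictionary-based portrayal system.
        return asdict(self)

def agent_portrayal(agent):
    # A typed style object instead of a hand-built dict: a typo in a
    # field name raises immediately instead of being silently ignored.
    style = AgentPortrayalStyle(size=30.0)
    if getattr(agent, "wealth", 0) > 0:
        style.color = "tab:green"
    return style
```

Compared with raw dicts, a dataclass gives discoverable defaults and editor autocompletion while `to_dict()` keeps backward compatibility with existing portrayal consumers.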
Visual Enhancements:
- Improve grid drawing aesthetics and styling options (#2438)
- Refactor Altair plotting backend to match Matplotlib's clean architecture (#2435)
- Add support for all space types and property layers
- Enable customizable color schemes and visual themes
Documentation:
- Extend and improve the visualization tutorial
- Document all visualization components and their customization options
- Provide example implementations for common visualization patterns
Testing:
- Add automated tests for visualization components
- Create benchmarks for visualization performance
- Set up CI testing for example visualizations (mesa-examples#137)
Skills Required
- Required:
- Python programming
- Experience with data visualization libraries (Matplotlib, Altair)
- Understanding of software design patterns
- Basic knowledge of frontend development
- Preferred:
- Familiarity with Solara or similar frameworks
- Experience with interactive visualizations
- Understanding of agent-based modeling concepts
- Level: Medium/Hard
Size: 350 hours
Mentors
- Primary: Tom
- Backup: Jackie, Ewout
Getting Started
- Review the Visualization Tutorial
- Study examples using the new visualization system
- Examine the visualization code in mesa/visualization/solara_viz.py
- Try implementing a small enhancement in one of the example models
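As a starting point, the current system expects a portrayal function that maps each agent to a dictionary of drawing options. The general shape is sketched below; the exact keys and component names vary by Mesa version, so verify them against the Visualization Tutorial before relying on them.

```python
# Sketch of the dictionary-based portrayal pattern the Solara
# visualization consumes. The keys shown are common ones; check
# mesa/visualization for the exact set your Mesa version supports.
def agent_portrayal(agent):
    portrayal = {"color": "tab:blue", "size": 25}
    if getattr(agent, "wealth", 0) == 0:
        portrayal["color"] = "tab:red"
    return portrayal

# Wiring it into a page (not run here; requires mesa and solara, and
# the component names follow the Mesa 3 tutorial -- verify locally):
# from mesa.visualization import SolaraViz, make_space_component
# page = SolaraViz(model, components=[make_space_component(agent_portrayal)])
```

Replacing this dict with a typed style object is exactly what issue #2436 above proposes.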
Summary
This project aims to integrate large language models (LLMs) as decision-making agents into the Mesa agent-based modeling (ABM) framework. It will enable more sophisticated, language-driven agent behaviors, allowing researchers to model scenarios involving communication, negotiation, and decision-making influenced by natural language.
Motivation
Current implementations of LLM-based agents often require significant manual coding effort and lack a streamlined interface for designing modular agent architectures. By providing an accessible and flexible API, this project will make it easier for researchers and practitioners to develop, test, and iterate on complex LLM-based agents for applications in areas such as collaborative problem-solving, simulation of human-like reasoning, and dynamic decision-making.
Overall Goal
To design and implement an extension for Mesa that allows users to create LLM-powered agents in a modular and user-friendly way, by assembling reusable components such as planning, memory, and reasoning modules. The extension will enable agents to interact using natural language, process textual data, and make decisions informed by LLM capabilities. The project will design and implement robust APIs, integration tools, and documentation to enable rapid prototyping of agents (e.g., Chain-of-Thought, ReWOO, Tree-of-Thought) using different paradigms (e.g., sequential, class-based, or functional approaches), facilitating research and experimentation in agent-based modeling and natural language reasoning.
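The modular assembly described above can be sketched in plain Python. Nothing here is an existing Mesa-LLM API; class names and the `llm` callable interface (any function mapping a prompt string to a response string) are assumptions for illustration.

```python
# Illustrative sketch of assembling an LLM agent from reusable
# modules. Not an existing Mesa-LLM API; all names are hypothetical.
class Memory:
    """Stores past events and summarizes recent ones for prompts."""
    def __init__(self):
        self.events = []

    def remember(self, event: str):
        self.events.append(event)

    def recall(self, n: int = 3) -> str:
        return "; ".join(self.events[-n:])

class ReasoningModule:
    """Turns an observation plus memory into an action via the LLM."""
    def __init__(self, llm):
        self.llm = llm  # any callable: prompt -> text

    def decide(self, observation: str, memory: Memory) -> str:
        prompt = f"Memory: {memory.recall()}\nObserved: {observation}\nAction:"
        return self.llm(prompt).strip()

class LLMAgent:
    """Composes pluggable modules; swapping modules changes behavior."""
    def __init__(self, llm):
        self.memory = Memory()
        self.reasoning = ReasoningModule(llm)

    def step(self, observation: str) -> str:
        action = self.reasoning.decide(observation, self.memory)
        self.memory.remember(f"{observation} -> {action}")
        return action

# A stub LLM keeps the sketch runnable without API keys.
stub_llm = lambda prompt: "move_north"
agent = LLMAgent(stub_llm)
```

Because the reasoning module only depends on the `llm` callable, the same agent skeleton could host a ReAct-style loop or a Chain-of-Thought prompt simply by swapping in a different module.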
Expected Outcomes
Core Features:
- Develop modular components for defining and configuring LLM-based agents (e.g., interaction modules, memory systems, decision-making units).
- Create built-in templates and presets for common use cases (e.g., ReAct agent).
- These components will seamlessly integrate with existing Mesa functionality, leveraging the established framework for agent behaviors and environment interactions.
- Users will be able to plug these modules into their existing simulations with minimal adjustments.
Enhancement & Improvements:
- Support for integrating various LLMs and frameworks (e.g., Hugging Face, Llama, OpenAI).
- Tools for visualizing and debugging agent behavior at the module level.
Documentation:
- Comprehensive user guides for building agents using the modular API.
- Tutorials demonstrating step-by-step construction of popular LLM-based agents.
- Developer documentation for extending and customizing the API.
Testing & Quality Assurance:
- Unit tests for individual modules and their integration.
- Benchmarking against standard agent-based tasks to ensure performance and usability.
- CI/CD pipeline to maintain high code quality and reliability.
Scientific Contribution
- This project is expected to produce at least one scientific publication, such as a submission to the Journal of Open Source Software (JOSS) or a relevant venue in computational social science or agent-based modeling (e.g., SIMULATION). The selected candidate will have the opportunity to contribute to the publication process, including helping to draft and refine the paper and being listed as an author, depending on the level of contribution.
Skills Required
- Required:
- Strong Python programming skills.
- Familiarity with agent-based modeling frameworks like Mesa.
- Experience working with large language models and their APIs.
- Preferred:
- Knowledge of advanced LLM techniques.
- Familiarity with modular library design principles.
- Experience in designing intuitive APIs for scientific computing.
- Knowledge areas:
- Agent-based modeling
- Modular system design
- Natural language reasoning and planning with LLMs
Project Size: 175/350 hours
Mentors
- Primary: Boyu
- Backup: Tom, Jackie
Recommended Bibliography
- Cheng, Y., Zhang, C., Zhang, Z., Meng, X., Hong, S., Li, W., ... & He, X. (2024). Exploring large language model based intelligent agents: Definitions, methods, and prospects. arXiv preprint arXiv:2401.03428. https://doi.org/10.48550/arXiv.2401.03428
- Gao, C., Lan, X., Li, N., Yuan, Y., Ding, J., Zhou, Z., ... & Li, Y. (2024). Large language models empowered agent-based modeling and simulation: A survey and perspectives. Humanities and Social Sciences Communications, 11(1), 1-24. https://doi.org/10.1057/s41599-024-03611-3
- Ghaffarzadegan, N., Majumdar, A., Williams, R., & Hosseinichimeh, N. (2024). Generative agent-based modeling: an introduction and tutorial. System Dynamics Review, 40(1), e1761. https://doi.org/10.1002/sdr.1761
- Guo, T., Chen, X., Wang, Y., Chang, R., Pei, S., Chawla, N. V., ... & Zhang, X. (2024). Large language model based multi-agents: A survey of progress and challenges. arXiv preprint arXiv:2402.01680. https://doi.org/10.48550/arXiv.2402.01680
- Lu, Y., Aleta, A., Du, C., Shi, L., & Moreno, Y. (2024). LLMs and generative agent-based models for complex systems research. Physics of Life Reviews. https://doi.org/10.1016/j.plrev.2024.10.013
- Ma, Q., Xue, X., Zhou, D., Yu, X., Liu, D., Zhang, X., ... & Ma, W. (2024). Computational experiments meet large language model based agents: A survey and perspective. arXiv preprint arXiv:2402.00262. https://doi.org/10.48550/arXiv.2402.00262
- Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., ... & Wen, J. (2024). A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6), 186345. https://doi.org/10.1007/s11704-024-40231-1
Summary
Mesa-frames has proven to be a powerful extension for Mesa, offering significant performance improvements through vectorized operations on dataframes. This project aims to stabilize Mesa-frames, improve its integration with Mesa's core functionality, and establish it as a production-ready solution for large-scale agent-based modeling.
Motivation
Mesa-frames has demonstrated impressive performance gains (up to 200x speedup) by leveraging pandas and polars for vectorized operations. While the initial implementation is promising, there are opportunities to improve stability, expand functionality, and better integrate with Mesa's core features. Making Mesa-frames production-ready would provide the Mesa community with a robust solution for scaling agent-based models to handle thousands or millions of agents efficiently.
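The speedup comes from storing agent state in columns and updating all agents in one operation instead of calling a method per agent. The sketch below illustrates the idea with NumPy standing in for the pandas/polars dataframes mesa-frames actually uses; the model is a Boltzmann-wealth-style exchange, chosen only as a familiar example.

```python
import numpy as np

# Columnar agent state: one array holds every agent's wealth, so a
# single vectorized operation replaces a Python loop over agents.
rng = np.random.default_rng(42)
n_agents = 100_000
wealth = np.ones(n_agents, dtype=np.int64)

def step(wealth):
    # Every agent with wealth gives 1 unit to a random other agent.
    givers = wealth > 0
    wealth = wealth - givers.astype(np.int64)
    receivers = rng.integers(0, len(wealth), size=int(givers.sum()))
    np.add.at(wealth, receivers, 1)  # scatter-add; handles duplicate indices
    return wealth

wealth = step(wealth)
```

Total wealth is conserved by construction, which is exactly the kind of invariant the proposed performance regression tests could assert while comparing Mesa and Mesa-frames implementations.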
Historical Context
Mesa-frames was developed in 2024 as a GSoC project to address Mesa's performance limitations with large numbers of agents. Key developments include:
- Initial proof-of-concept showing significant performance gains (Discussion #1939)
- Support for both pandas and polars backends
- Integration with Mesa's AgentSet API
- Basic implementation of core Mesa functionality
Overall Goal
Create a stable, well-tested, and fully-featured version of Mesa-frames that seamlessly integrates with Mesa while maintaining its performance advantages. This includes expanding documentation, improving test coverage, and implementing missing Mesa functionality.
Expected Outcomes
Core Features:
- Address outstanding issues in the mesa-frames repo
- Implement missing Mesa functionality (e.g., PropertyLayers, NetworkGrid support)
- Create a stable release cadence aligned with Mesa's releases
- Improve continuous integration and testing infrastructure
Enhancement & Improvements:
- Add support for more of Mesa's spaces (mesa-frames#6)
- Implement GPU support through cuDF (mesa-frames#10)
- Optimize performance for common agent-based modeling patterns
- Support for discrete event scheduling (mesa-frames#9)
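Discrete event scheduling (mesa-frames#9) means events fire in time order rather than in fixed ticks. A minimal priority-queue sketch of the idea, using only the standard library (class and method names are illustrative, not an existing mesa-frames API):

```python
import heapq

# Minimal discrete-event scheduler sketch (illustrative only):
# events are (time, fn) pairs popped in time order.
class EventScheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker: preserves insertion order at equal times
        self.time = 0.0

    def schedule(self, at: float, fn):
        heapq.heappush(self._queue, (at, self._counter, fn))
        self._counter += 1

    def run_until(self, end: float):
        # Fire every event scheduled at or before `end`, advancing the clock.
        while self._queue and self._queue[0][0] <= end:
            self.time, _, fn = heapq.heappop(self._queue)
            fn()
        self.time = end

fired = []
sched = EventScheduler()
sched.schedule(2.5, lambda: fired.append("b"))
sched.schedule(1.0, lambda: fired.append("a"))
sched.run_until(2.0)  # only the t=1.0 event fires
```

The challenge for mesa-frames is batching such events so that all events due at the same time can still be executed as one vectorized dataframe operation.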
Documentation:
- Expand tutorials with advanced usage examples
- Create migration guides from Mesa to Mesa-frames
- Add performance optimization guidelines
- Document integration patterns with other Mesa extensions
Testing & Quality Assurance:
- Implement comprehensive test suite covering all features
- Add performance regression tests
- Create benchmarks comparing Mesa and Mesa-frames implementations
- Set up continuous performance monitoring
Skills Required
- Required:
- Strong Python programming skills
- Experience with pandas and/or polars
- Understanding of vectorized operations
- Familiarity with agent-based modeling concepts
- Preferred:
- Experience with Mesa or similar ABM frameworks
- Knowledge of GPU computing (cuDF)
- Background in performance optimization
- Understanding of continuous integration practices
- Level: Medium/Hard
Size: 175 / 350 hours
Mentors
- Primary: Adam
- Backup: Tom, Jackie, Jan
Getting Started
- Review the Mesa-frames source code and documentation
- Study the introductory tutorial
- Examine open issues in the Mesa-frames repository
- Try implementing a simple model using both Mesa and Mesa-frames to understand the differences