Sep 1, 2025
Release Notes
Securing the Open Source Supply Chain: CAMEL-AI Joins GitHub’s Secure Open Source Fund
CAMEL joins GitHub’s SOS Fund to strengthen AI security and safeguard the open-source ecosystem.
When the Apache Log4j zero-day (“Log4Shell”) broke in December 2021, the software world learned a hard lesson: a single under-resourced open-source library can send shockwaves through the entire software supply chain. Modern applications often depend on hundreds of open-source components, many maintained by unpaid volunteers. Ensuring those components are secure is no longer optional – it’s critical for everyone’s security.
*GitHub’s Secure Open Source Fund sponsors maintainers of key open-source projects (71 across the first two cohorts) to bolster software supply chain security at scale.*
In response to the urgent need for open-source security, GitHub launched the Secure Open Source (SOS) Fund in late 2024. This program provides maintainers with funding and a focused three-week security sprint, offering hands-on education, expert mentorship, improved tooling, and a community of security-minded peers. The goal is clear – link financial support to real security outcomes – so that widely used projects get safer, reducing risks across the ecosystem.
The Secure Open Source Fund has already run two sessions (cohorts), and the results have been impressive. 125 maintainers across 71 important open source projects participated, including frameworks, libraries, and tools that millions rely on every day. With dedicated time and resources, these maintainers significantly improved their projects’ security posture.
Equally important, the program fostered a culture of security among maintainers. In cohort sessions, participants shared knowledge and even leveraged AI assistants like GitHub Copilot to streamline security tasks (from scanning code to developing fuzz tests). By the end of the sprint, maintainers had built up security backlogs and concrete plans for ongoing improvements. This momentum benefits the entire open-source ecosystem – fixes shipped by these 71 projects are already protecting millions of downstream builds each day, and many participants are sharing their new security playbooks and incident response plans publicly for others to reuse. In short, GitHub’s initiative has turned a one-off sprint into a “flywheel” for broader security impact.
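For a flavor of those fuzz-testing tasks, here is a minimal property-based fuzz test using the `hypothesis` library. The `parse_agent_message` function is a hypothetical stand-in for whatever parsing code a project exposes, not code from any cohort project:

```python
# Minimal fuzz-style property test with hypothesis: feed arbitrary text
# into a parser and assert it never crashes and always returns a dict.
from hypothesis import given, strategies as st


def parse_agent_message(raw: str) -> dict:
    # Hypothetical stand-in parser: extracts "key=value" pairs from a
    # semicolon-separated string.
    pairs = (part.split("=", 1) for part in raw.split(";") if "=" in part)
    return {k.strip(): v.strip() for k, v in pairs}


@given(st.text())  # hypothesis generates adversarial string inputs
def test_parse_never_crashes(payload: str) -> None:
    result = parse_agent_message(payload)
    assert isinstance(result, dict)  # parser must always return a dict
```

Run under pytest, hypothesis will try hundreds of generated inputs per test, which is exactly the kind of low-effort, high-coverage check maintainers reported adding during the sprint.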
One of the projects selected for this program is CAMEL-AI, an open-source framework for building multi-agent systems with large language models (LLMs). Being chosen as one of the “71 important and fast-growing projects” underscores the significance of CAMEL-AI in the modern AI ecosystem. In fact, CAMEL-AI was grouped with other cutting-edge AI/ML tools – projects like scikit-learn, OpenCV, AutoGPT’s GravitasML, and Ollama – in the cohort focusing on AI frameworks and edge-LLM tooling. These projects form the bedrock of today’s AI workflows, collectively tallying tens of millions of installations and clones each month. They’re integrated into countless Jupyter notebooks, research pipelines, and applications. This means that a security flaw in an AI framework (imagine a malicious prompt injection or a poisoned model weight file) could cascade into thousands of downstream apps overnight – often without developers even realizing which component was compromised. Hardening projects like CAMEL-AI is therefore essential to protect the broader AI stack: as GitHub’s experts put it, if an LLM agent’s dependency gets hijacked, an attacker gains “remote DevOps” powers over any systems using it. In other words, securing CAMEL-AI helps shield everyone up the chain who builds on top of it.
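To make the “poisoned model weight file” scenario concrete: one common defense is pinning and verifying a checksum before loading weights. The sketch below is purely illustrative (the file name and digest are made up, and this is not CAMEL-AI’s actual loading path):

```python
# Illustrative sketch: verify a downloaded weight file against a pinned
# SHA-256 digest before loading it, refusing tampered artifacts.
import hashlib
from pathlib import Path

# Example digest only (this happens to be the SHA-256 of an empty file).
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"


def verify_weights(path: Path, expected: str = PINNED_SHA256) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(
            f"Refusing to load {path}: digest {digest} does not match pin"
        )


# verify_weights(Path("model.safetensors"))  # call before loading weights
```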
CAMEL-AI’s inclusion in the SOS Fund cohort meant our team received not only funding, but also direct support from GitHub’s security engineers and industry mentors. Over a three-week “security sprint,” we worked alongside maintainers of many other high-impact projects (spanning beyond AI to areas like web frameworks, DevOps tools, and core libraries). For context, the cohorts included popular front-end frameworks like Next.js and Svelte, critical infrastructure like Node.js and Apache Log4j, and even developer tools such as Oh My Zsh and JUnit, among many others. Being in this diverse company of open-source projects allowed cross-sharing of best practices – what CAMEL-AI learned in AI security, we could exchange with maintainers in other domains, and vice versa.
During the program, the CAMEL-AI team undertook a comprehensive effort to bolster the security of the CAMEL-AI codebase and development workflow. With guidance from the GitHub Security Lab and community experts, we focused on several key areas of improvement. Among other changes, we published a `SECURITY.md` with instructions for responsible disclosure, and created an Incident Response Plan (IRP) tailored to CAMEL-AI. This IRP defines how the team will triage and handle any reported vulnerability: who to involve, how to communicate, and how to fix and release patches on a tight timeline. By rehearsing this plan during the sprint, we are now prepared to react quickly and effectively if an issue arises. Additionally, our maintainers updated project documentation to emphasize security best practices, so new contributors understand that security is “something we actively do” and the standards we expect.

Each of these improvements directly enhances CAMEL-AI and its ecosystem. For example, CodeQL scans have already helped catch and fix some subtle issues that might have been overlooked. Tightened workflows and secret scanning mean that the risk of a secret leak or supply-chain attack through CAMEL-AI is dramatically lower. And with a solid vulnerability disclosure and response process in place, our users can trust that CAMEL-AI will handle any future security issue responsibly and transparently.
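GitHub’s secret scanning and push protection run server-side, but the core idea is easy to sketch. Below is a rough, illustrative pre-commit-style check in Python; the regex patterns are simplified stand-ins, not GitHub’s actual detectors:

```python
# Illustrative client-side secret scan: grep tracked Python files for a
# few well-known credential patterns and fail if any match.
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"),
}


def scan(root: Path) -> int:
    hits = 0
    for path in root.rglob("*.py"):
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name}")
                hits += 1
    return hits


if __name__ == "__main__":
    # Non-zero exit blocks the commit in a pre-commit hook setup.
    sys.exit(1 if scan(Path(".")) else 0)
```

In practice we rely on GitHub’s built-in scanning rather than a script like this; the sketch just shows why even simple pattern checks catch the most common leaks early.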
Importantly, these changes don’t just benefit CAMEL-AI in isolation – they benefit everyone who relies on CAMEL-AI as a building block. By shipping security fixes and defense-in-depth features now, CAMEL-AI is helping protect downstream applications (potentially numbering in the thousands) that leverage multi-agent capabilities. Our work during the SOS Fund sprint contributes to a safer foundation for AI agents: any developer using CAMEL-AI can build their AI systems with greater confidence that the framework itself won’t become an attack vector.
When we first dove into the GitHub Secure Open Source Fund (SOSF), it was more than just a checklist exercise for us at CAMEL-AI. The program truly shaped our security mindset, and we got our hands dirty improving the CAMEL LLM agent framework. This is the story of that journey: what we learned, what we changed, and where we’re headed next.
Our Biggest Security Takeaways
The SOSF program was filled with “aha!” moments, but three things really stuck with us and changed how we view our project.
From Learning to Doing: Our First Security Fixes ✅
We didn’t want this to just be a theoretical exercise. We rolled up our sleeves and immediately put our learnings into practice.
One of our first concrete fixes was adding a `SECURITY.md` file to our repository. It might seem small, but it’s a huge step: it tells our community exactly how to report a security issue, making the whole process transparent and efficient for everyone.
We’re just getting started. Here’s a peek at what we’re focused on next to keep building a more secure framework.
Security is a Team Sport 🤝

We know we can’t do this alone. Security is a community effort, and we’re inviting everyone to get involved. We’ve already opened issues on GitHub to improve how our agents use tools securely, and we’d love for the community to jump in and contribute. Most importantly, if you see something, say something! We’re calling on our community to help us find vulnerabilities and to spark more conversations around LLM security.
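To give a concrete sense of one of those issues: a deny-by-default allowlist around tool dispatch is one way to constrain what an agent can invoke. In this hedged sketch, `ToolRegistry` and `run_tool` are hypothetical names for illustration, not CAMEL-AI’s real API:

```python
# Hypothetical sketch: only explicitly allowlisted tools may be invoked
# by an agent; everything else is denied by default.
from typing import Any, Callable


class ToolRegistry:
    def __init__(self, allowlist: set[str]) -> None:
        self._allowlist = allowlist
        self._tools: dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def run_tool(self, name: str, *args: Any, **kwargs: Any) -> Any:
        # Deny-by-default: registration alone is not permission to run.
        if name not in self._allowlist:
            raise PermissionError(f"Tool {name!r} is not allowlisted for this agent")
        return self._tools[name](*args, **kwargs)


registry = ToolRegistry(allowlist={"search"})
registry.register("search", lambda q: f"results for {q}")
registry.register("shell", lambda cmd: ...)  # registered but never allowlisted

print(registry.run_tool("search", "camel-ai"))  # ok
# registry.run_tool("shell", "rm -rf /")  # -> PermissionError
```

The design choice here is that capability (a tool being registered) and permission (a tool being allowlisted for a given agent) stay separate, so a prompt-injected agent cannot reach tools it was never granted.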
Looking back, the biggest change hasn’t been in our code, but in our heads. We now have a much deeper appreciation for the potential vulnerabilities in an AI agent framework. This has sparked a really exciting new idea: what if we could create security guidelines not just for developers, but for the AI agents themselves? Security is no longer just a human’s job. As agents become more autonomous, they need their own built-in principles for safe operation.

We plan to share everything we’ve learned with our community: our successes, our challenges, and our new security docs. By being open, we hope to empower everyone to build safer, more trustworthy AI.
Security isn’t a one-time sprint; it’s an ongoing commitment. At CAMEL-AI, we’ve taken big steps forward with the GitHub Secure Open Source Fund, but the real impact comes from our community: developers, researchers, and builders like you.
If you’re excited about multi-agent systems and want to help shape the security of the next wave of AI frameworks, get involved: explore the CAMEL-AI repository, pick up one of our open security issues, or start a conversation about LLM security.
Together, we can make secure, reliable AI agents the default, not the exception. 🚀