RPCS3, the popular PlayStation 3 emulator, is tightening its rules around AI-generated code submissions. The development team says the issue is not simply the use of AI tools, but undisclosed AI-generated pull requests from contributors who do not understand the code they are submitting and sometimes introduce serious regressions. The message from the project is blunt: people who blindly throw AI-generated slop at the team without disclosure can expect bans and blocks.
Open-source development teams have been dealing with a growing flood of AI-generated contributions since large language models became widely available. The problem has already hit other projects: earlier this year, the free and open-source game engine Godot was described as struggling under the weight of AI-generated pull requests, that is, code contributions submitted for inclusion in the main project. Now another well-known project, RPCS3, is facing the same kind of problem.
As spotted by GamingOnLinux, the RPCS3 developers posted a clear warning on X. “Please stop submitting AI slop code pull requests to RPCS3. We will start banning those who do without disclosing,” the team wrote. It added: “There are plenty of resources online to learn how to debug and code instead of generating slop that you don’t understand and that doesn’t work.”
The developers explained the situation further in the replies. RPCS3 has been receiving a growing number of AI-generated pull requests aimed especially at its macOS build, a platform most of the team does not use. Only one member handles most of the emulator’s testing on Apple systems, so poor-quality submissions in that area create a disproportionate burden. “We have had to revert a few slop PRs that caused big regressions a few times, enough is enough,” the team wrote.
The Issue Is Undisclosed And Irresponsible AI Use
The RPCS3 team said the problem is not the use of AI code in itself. The issue is when contributors fail to disclose it, do not understand what they are submitting, and effectively throw random AI output at the project to see what passes review. “We won’t ban if disclosed, except for abuse cases, e.g. throwing a lot of random slop at us to see what passes reviews. Hint: programmers that can understand the problem, the solution, and the implementation can write the same code without AI, and tend to use LLMs to automate repetitive code refactoring instead. It is not the case with the AI slop PRs we have seen.”
The result is a new set of firm rules on the RPCS3 GitHub repository. The use of AI tools for research and reverse engineering purposes is permitted, but contributors are expected to fully own and understand all code they submit. Any communication with the team, including code, code comments, and GitHub comments, must come from the human contributor, not from an AI agent acting autonomously.
The developers ended with a final message for the people arguing with them on social media. “As for all the AI bros seething on our socials, we’re simply blocking you. Learn how to debug, code, and leave behind something useful to humanity when you’re gone, instead of peddling slop.” The wording is harsh, but the point is clear: open-source projects are not rejecting careful, understood, and disclosed AI-assisted development. They are rejecting contributors who outsource work to a model, fail to test or understand the result, and then push the cleanup burden onto maintainers.
Source: PC Gamer