Can you stop your open source project from being used for evil?

Free and open-source software licenses remove your ability to control what others do with your code. That’s kind of the point. It’s also why they’re so popular: anyone can use, remix, and sell your code with few restrictions, opening the door to new technological possibilities! What could go wrong?

Ethical impulses aren’t new to software. The Free Software Foundation advocates for a “struggle against for-profit corporate control” and against restrictions on users’ freedom to inspect and modify the code in the products they buy. It was started after its founder, Richard Stallman, found himself unable to fix his broken printer because he could not edit its proprietary code. However, the open-source movement distanced itself from this political stance, instead making the case that open source was good for corporations on “pragmatic, business-case grounds.” But both free and open-source software allow anyone to use code for any purpose.

For anything? Yes: the Free Software Foundation argues that license terms must not prohibit even software’s use in torture, on the grounds that such a restriction would not be enforceable. And even if it were enforceable, there are so many possible ethical stands—for example, some might want to prohibit software’s use in meat production, others its use in war—that adhering to a thicket of license terms would be practically impossible and would push people toward proprietary alternatives. Palantir builds software that helps US immigration agents separate kids from their families, and it proudly uses and produces open-source software, arguing this is “the right thing to do.” And the Open Source Initiative acknowledges that open-source licenses “may not discriminate against persons or groups. Giving everyone freedom means giving evil people freedom, too.”

In my own research, I interviewed open-source developers building a tool that lets anyone create deepfakes: videos in which the face of one person is computationally stitched onto the body of someone else. Most deepfakes found online are nonconsensual pornography of women, causing harms including anxiety and job loss. One developer building this tool stated, “I cannot stop people [from] using my software for stuff which I don’t agree with [… open source’s] positive is also its negative.” Developers feel unable to prohibit pornographic uses of their tool given its permissive software license. Instead, they push back by refusing to support those who use it to create nonconsensual porn and by banning them from their chat rooms and forums—while acknowledging that these users can still access and use the software.

So what about developers who don’t want their work to be used to help separate kids from their families or create nonconsensual pornography?

Ethical source, not open source?

The Ethical Source Movement seeks to use software licenses and other tools to give developers “the freedom and agency to ensure that our work is being used for social good and in service of human rights.” This view emphasizes developers’ right to have a say in how the fruits of their labor are used over any user’s right to use the software for anything. There are myriad such licenses: some prohibit software from being used by companies that overwork developers in violation of labor laws, while others prohibit uses that violate human rights or help extract fossil fuels. Is this the thicket Stallman envisioned?

I asked Coraline Ada Ehmke, a leader in the Ethical Source Movement, whether projects using an ethical-source license might mean fewer people use that project. She explained that “with traditional open source, success is generally measured based on the number of adoptions, especially adoptions by large tech companies like Facebook, Google, Amazon.” This is echoed by academic literature studying open-source software, where frequently used projects are seen as successful and important. 

But ethical source, Ehmke says, is more concerned with the “real-world impact of the technologies we create”: the ethical (or unethical) nature of the downstream uses the software enables and how those uses affect real people, rather than simply the number of times it is used. This might not be the way to get famous or land a job offer on the strength of a wildly popular open-source project, but it might be a way to stop your software from being used for evil.

But will ethical source licenses actually stop people from using your software for evil? Will people who intend to commit evil acts with software care what a license says, or abide by its terms? Well, it depends. The anonymous users of the deepfake software I studied might well have created nonconsensual porn even if the license terms had prohibited it. But Ehmke suggests that corporate misuse is perhaps the more pressing concern: she points to campaigns to prevent software from being used by Palantir, and to a 2019 report by Amnesty International arguing that the business models of big-name technology companies may threaten human rights. Anonymous users on the internet might not care about licenses, but as Ehmke says, and as my own experience with lawyers in tech companies confirms, “These companies and their lawyers care very much about what a license says.” So while ethical source licenses might not stop all harmful uses, they might stop some.

So perhaps it makes sense to think about misuse in terms of probabilities rather than certainties. In software security, where no measure can prevent every exploit, professionals address the most harmful and most likely-to-be-exploited vulnerabilities first. I like to think of ethical-source licenses the same way: they may not stop our software from ever being used for harm, but they can make some harmful uses less likely, less convenient, or more costly.

Author’s Note: Please fill out this 10-minute survey to help us understand the ethics concerns that software developers encounter in their work!

– – –

David Gray Widder is a PhD Student in Software Engineering at Carnegie Mellon, and has studied challenges software engineers face related to trust and ethics in AI at NASA, Microsoft Research, and Intel Labs. You can follow his work or share what you thought about this article on Twitter at @davidthewid.
