A Holistic View of GitHub Copilot Security
“Copilot security” usually means generating secure code, but security also encompasses reliability, privacy, legal risks beyond copyright, developer behavior, complicated ethical questions, and impacts on human power structures such as the FOSS ecosystem and our jobs. If AI tools really have the power to transform our civilization, then we owe it to ourselves to think beyond how they work and what they can do. This talk asks how we can ensure that Copilot and similar tools serve our needs as people, which is what both “security” and “software engineering” really boil down to.
We’ll start with Copilot’s performance at secure coding tasks and at finding and fixing existing vulnerabilities, and how your interactions can affect the quality of its results. I’ll show how Copilot can aid learning, documentation, and testing, but also demonstrate the pitfalls of outsourcing critical thinking to the AI. That will lead into the difficulties posed by how Copilot fundamentally works: not just prompt injection, data quality, or data poisoning, but how OpenAI gathered the training data in the first place. How do we create a symbiotic relationship between FOSS developers who share code with each other and proprietary, expensive large language models that depend on ingesting massive amounts of code? If I do my job right, you’ll start talking these questions through with each other. It’s time to shape the world we want to live and code in.