Development, Adoption, and Integration of Artificial Intelligence: National Security Implications

1 Jan 2024

The advancement of Artificial Intelligence (AI) has already had, and will continue to have, a momentous impact on our lives, personally and politically. Engagement is therefore a necessity rather than a choice. Canada’s national security will depend on this engagement as the Canadian government and its security apparatus, as well as hostile actors, seek to leverage AI capabilities to further their own objectives. It is imperative that the Government of Canada develop the knowledge, strategies, and capacity to position itself to deal with emerging threats and policy issues, with AI at the forefront. A whole-of-government approach is critical and will require the adoption of a framework that leverages the knowledge of both industry and academia to build Canada’s capacity to address and safeguard AI. This pan-Canadian approach should allow for the delegation of responsibility, the elimination of duplicated effort, and the creation of a common front in overcoming shared challenges.

Context

Canada is a world leader in AI development. With its 2017 Pan-Canadian Artificial Intelligence Strategy, Canada invested in both the technology itself and in thought leadership on AI governance in domestic and international contexts. The Canadian government and its organizations recognize that outreach to academia and industry is key to developing conversations and knowledge around this complex and highly technical subject. The Department of National Defence (DND) and the Canadian Armed Forces (CAF) are actively working on interoperability with other organizations to exploit AI and integrate emerging technology for operational advantage and enhanced deterrence. DND and the CAF recently stood up the CAF operational AI laboratory to build coherence around AI, assess its adoption, and inform the development of a large defence AI centre and various domain initiatives. Five Eyes (FVEY) allies such as the United States and the United Kingdom have begun to develop AI capabilities and guidelines through the United States Cyber Command (USCYBERCOM) and the National Security Agency (NSA), and the Government Communications Headquarters (GCHQ), respectively. In September 2023, the NSA stood up the AI Security Centre to consolidate its AI security work, provide outreach, and promote best security practices in this area.

Considerations

- The protection and security of data is fundamental to successfully developing AI. Data governance regimes and vigilant cybersecurity practices must be an intrinsic part of Canada’s approach to AI, and this can and should be done cooperatively with FVEY partners.
- AI innovation is outpacing the government’s ability to establish guidelines, regulations, and best practices. A working group needs to be evaluating what will come next.
- Government, academia, and industry will need to establish knowledge and standards of conduct that align with democratic principles and norms. Unknown, but potentially large, sums of money will need to be put toward establishing and furthering this cooperation.
- Very little domestic or international legislation exists to regulate AI within national security communities.
- AI must be developed in a transparent manner, which may be challenging for the national security community to implement due to operational security considerations.
- Guardrails must not act in an anti-competitive manner in the global economy, and must not curb research and development from academic, private, and public sources.
These processes are essential to continually improving our understanding of AI and our capabilities in this field.

Implications for Canada

- Adversaries may use AI against partner and allied states and industry, as well as our armed forces.
- Training in AI should be seen as a national priority and a critical infrastructure development goal.
- Cooperation between government, academia, and industry is central to remaining competitive in several fields. Consultation and collaboration within this triumvirate must foster a learning environment. There is room to reimagine this partnership and create one in which government is not the primary driver, but rather a leading partner in mutual development. This partnership must be dynamic so as to continue testing AI and critically reviewing its national security applications.
- Despite the risks, policy must be progressive and descriptive, which might produce better short- and long-term results.
- Biases will be present in how we construct and develop AI. A deliberate effort to mitigate these potential biases is necessary, as greater diversity can lead to a more holistic view of operational challenges and opportunities.
- Canada’s allies are currently developing AI for conducting signals intelligence (SIGINT) as well as for streamlining organizational policy goals.
- The national security community tends to view AI in terms of offence/defence capability with little thought to how these capabilities will be achieved; much work remains before they can be readily assessed and fielded.
- The government must play a pivotal role in advancing the electorate’s understanding of the role AI will play in its lives and in its security.
- The openness of democratic societies presents a potential security liability. The rate of innovation makes it challenging to communicate where and how these systems are being used, as autonomous loitering munitions and weapons systems have shown.
- AI and its use by the CAF will likewise be bound by the laws of armed conflict.
- Concerns that AI could become a tool to constrain civil liberties, deny visas or security clearances, or infringe on privacy rights are plentiful.
- From some government perspectives, working tactically with FVEY and other partners provides clarity on who is testing what in terms of AI development.
- AI is influencing the democratic process; the recent Argentinian election, for example, was labelled the “first AI election” because of its use of AI. AI bots can similarly affect the democratic process and are capable of learning from social media users and their data footprints. Thought must be given to how threats are perceived in this space, e.g., bad data or deepfakes.
- Research and testing are often static while threats are by nature dynamic and evolving; we need AI that can work through different problems and computational issues, adapting and solving in real time.
- While it has been suggested that adopting AI in a manner consistent with Canadian values may cost us a tactical or strategic edge against adversaries, the same could be said of any other weapon system; we must act within the bounds of international law.