
AI-Enabled Cyber Operations Raise Legal Complexities, Warn Military Symposium Experts

AI’s integration into military cyber operations complicates legal compliance, experts warned at the Fourth Annual Symposium on Cyber and International Law.

The evolving role of cyber operations in modern conflict has prompted international discourse on the application of existing laws governing warfare. A recent symposium held by American University Washington College of Law and several international institutions examined the complex intersections between cyber tactics and international law, identifying ongoing gaps and potential areas of discord.

In September, the Fourth Annual Symposium on Cyber and International Law, titled “Navigating Gaps and Seams,” brought together experts from around the globe to explore the legal challenges posed by cyber operations. The three-day event opened with a keynote address by retired General Paul Nakasone and featured discussions on the implications of artificial intelligence (AI) in cyber warfare.

One significant panel focused on how AI-enabled cyber operations might complicate the application of the law of armed conflict (LOAC). As cyber capabilities rapidly integrate into military operations, AI’s potential to enhance those operations presents both tactical advantages and legal uncertainties. For instance, Israel’s deployment of the AI systems “Gospel” and “Lavender” for automated targeting has ignited legal debate over the obligation to distinguish between military and civilian targets.

The panel explored the responsibilities of states in developing and acquiring AI technologies for military use. A key legal reference is Article 36 of Additional Protocol I (AP I), which requires states to assess whether new weapons comply with international law. Although the U.S. is not a party to the protocol, its Department of Defense (DoD) conducts stringent legal reviews of weapon systems, including cyber capabilities.

Under DoD policy, the legal review process extends to the acquisition of cyber weapons, although not all cyber capabilities qualify as “weapons.” DoD Directive 5000.01 requires legal scrutiny of intended acquisitions to ensure compliance with international obligations, while the updated Directive 3000.09 requires reviews of autonomous weapon systems, emphasizing adherence to LOAC and obliging senior military officials to verify compliance with specific criteria.

Despite these frameworks, the panel acknowledged that the private sector—which typically develops AI technology—does not prioritize LOAC compliance. This raises critical questions about state accountability and the need for precise contracting language to ensure that legal obligations are met. A proactive approach in integrating LOAC considerations into procurement processes could incentivize private companies to account for legal risks from the outset of AI development.

As discussions progressed, the panel examined the implications of AI tools on LOAC, particularly regarding what constitutes an attack. The Tallinn Manual 2.0 asserts that cyber operations causing harm or damage qualify as attacks. The U.S. perspective aligns with this, maintaining that the classification of a cyber operation as an attack depends on its consequences rather than the means employed.

Relatedly, Article 51(5)(b) of AP I prohibits attacks that may be expected to cause civilian harm excessive in relation to the anticipated military advantage. This provision demands a good-faith effort to foresee collateral effects, putting increased pressure on states employing AI in cyber operations to ensure the accuracy and reliability of the systems they use.

The panel highlighted the increased burden on military leaders in this context, noting that while AI reduces human involvement in decision-making, commanders retain ultimate responsibility for ensuring legal compliance. The precedent set by cases such as In re Yamashita underscores commanders’ legal obligation to manage and oversee the actions of their subordinates. Commanders must remain vigilant about the functionality of AI systems, especially when those systems may themselves become targets, presenting risks such as data poisoning that could invalidate their legal assessments.

Interoperability also poses challenges in the context of international partnerships. The distinct legal standards governing the acquisition of AI tools could complicate cooperative efforts between allied states. The panel discussed the need for robust joint exercises to navigate these complexities and ensure that shared capabilities align with the legal obligations of each state.

Ultimately, the panel concluded that the integration of AI into cyber operations does not fundamentally change legal obligations but rather complicates them. The focus is not solely on identifying gaps in the law but on how states can implement processes that account for existing legal frameworks. The future of military strategy will hinge on the ability of states to thoughtfully navigate these legal complexities while harnessing the advantages offered by AI technologies in the cyber domain.

Major Emily E. Bobenrieth is an active duty Army judge advocate currently assigned as an Associate Professor of National Security Law at The Judge Advocate General’s Legal Center and School in Charlottesville, Virginia. The views expressed are those of the author and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

