AI Is a National-Security Danger

Artificial intelligence poses threats to U.S. national security, and the Biden administration is taking them seriously. On Oct. 30 the president signed a wide-ranging executive order on artificial intelligence. Among other things, it requires a significant portion of the nation’s AI industry to check its models for national-security vulnerabilities and potential misuses. That means assembling a “red team” of experts who try to make the models do dangerous things, then devising ways to protect against similar threats from outside.

This isn’t a mere bureaucratic exercise. It is a clarion call for a new era of responsibility. The executive order defines a dual-use foundation model as any model “that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”
