The rapid rollout of generative artificial intelligence tools like OpenAI's ChatGPT and Google's Bard has caused an uproar among leaders in government, industry, and academia, who are now calling for ways to govern the development and use of this emerging technology.

The AI landscape is constantly evolving, but that shouldn't stop policymakers from acting. They need to balance competing goals in their approach to regulation and oversight of the sector while remaining responsive to a rapidly changing geopolitical and technological landscape.

In the context of AI, three goals are essential for government policymakers: encouraging AI innovation, limiting the misuse of AI by authoritarian governments and other bad actors, and ensuring that consumers of AI-enabled products and devices are not harmed. Pursuing these goals simultaneously often produces conflict and tension.

For example, when we impose export controls on certain chips to slow an adversary's progress in AI, might we inadvertently undermine the ability of American companies to develop and produce the most advanced semiconductor technology? More fundamentally, do we even know whether the controls actually slowed the adversary in question? A similar dilemma played out in the satellite industry in the 1980s.

Similarly, does a single-minded pursuit of economic growth and innovation lead us to make unacceptable sacrifices in safety? The US has long emphasized innovation over careful deployment. We have seen too many instances where "moving fast and breaking things" in software development, aerospace, transportation, and other fields has caused real-world harm and even loss of life.

These tensions make it nearly impossible for a government to meet all of its goals in every situation. The onus therefore falls on policymakers to balance these tensions and chart a successful path for the nation.

Furthermore, any attempt to steer the trajectory of AI and other emerging technologies must include a system for monitoring whether a given measure is working as intended or needs adjustment. Without such feedback mechanisms, the US risks pursuing ill-suited policy responses and potentially ceding technological leadership to peer innovators.

Groups such as the Organization for Economic Co-operation and Development (OECD) have already encouraged leaders to adopt more adaptable, creative policy solutions when addressing the systemic problems our societies face. Although this framing can be applied to almost any policy area, it is particularly useful for issues in the technology and national security domains.

The decision of how to balance strategic technology and national security goals varies among countries and regions based on their economic realities, geopolitical situation, and other factors.

For example, in AI policy the US has invested heavily in developing the technology while relying largely on the private sector and the courts to build guardrails around it. The EU, by contrast, is leading the charge to develop technical standards for AI safety, and the bloc has taken a similar oversight role in other areas of technology policy, such as data privacy and antitrust. Each approach carries its own implications, underscoring the often precarious balance between innovation and safety, speed and security, efficiency and flexibility, and collaboration and competition.

Effective strategy requires policymakers to acknowledge these tensions and determine the balance that works best for their country.

Our leaders have a variety of tools they can use to strike that balance. Some relate to specific government functions and authorities, such as procurement and taxation. Others are more general-purpose, such as international cooperation and information-sharing. Many, if not all, of these levers can be used to shape the AI landscape. Immigration and workforce development policies affect the size and composition of the AI talent pool; economic controls can advance or hinder different countries' ability to compete in the AI marketplace; and investments in research and basic infrastructure can accelerate and secure the nation's technological leadership.

Today, much of the discussion about mitigating AI risks focuses on a single policy lever: regulation. But rules are just one tool in policymakers' arsenal, and we cannot rely on a single lever to address multidimensional problems. Government leaders have a variety of ways to deal with technology risk beyond regulation, such as funding AI safety research, promoting competition, and expanding the market for safe AI tools through government procurement.

These alternative policies, however, also come with tradeoffs. Safety research can divert funding from other areas. Competition policies can reduce research and development spending by large incumbent firms. Market-making procurement programs can disadvantage smaller companies that lack the resources to navigate the federal procurement process.

Policymakers should weigh these tradeoffs before embarking on any particular path.

It is impossible to know with certainty what the consequences and interactions of specific policies will be until they are implemented. When rolling out these interventions, it is essential that policymakers create mechanisms that allow them to monitor the effects of their actions in real time, so they can understand what is working well and what needs to be adjusted. Only through regularly updated monitoring systems will leaders be able to identify emerging trends and change course when policies are no longer achieving their intended goals.

This is especially necessary in a rapidly advancing technological field such as AI. Policymakers can lay the groundwork for these feedback systems in the near term by investing in incident tracking, third-party auditing, and monitoring of the data and models used in critical applications, and by supporting the creation of monitoring systems for each policy intervention through the federal budget process and bipartisan legislation.

In a world of nation-state-level innovators and fast-moving emerging technology sectors, adaptability is not optional; it is an absolute necessity. Our policy frameworks need to be fluid, flexible, and far-sighted. This commitment to constant monitoring and adaptation is key to thriving in a rapidly changing world. As we move toward a systems-oriented strategy for technology and national security policy, adaptability should be our guiding principle.

Our recent report from Georgetown University's Center for Security and Emerging Technology provides a detailed outline of how policymakers can begin to examine the interactions and tradeoffs between different policies and build more informed, effective, and adaptable long-term strategies.

The rapid proliferation of AI tools provides a unique opportunity to move toward a more integrated, systems-oriented approach to policymaking. By using our policy levers wisely, continually evaluating their impacts, and adjusting as necessary, we can meet the challenges and seize the opportunities that AI presents.

Jack Corrigan is a senior research analyst at Georgetown University's Center for Security and Emerging Technology, where Dewey Murdick is the executive director.


